Sample records for failure analysis approach

  1. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
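    To make the PFA idea above concrete, the sketch below propagates parameter uncertainty through a simple engineering failure model by Monte Carlo to obtain a failure probability, in the spirit of (but far simpler than) the PFA software documented in the report. The Basquin-type fatigue law, all distributions, and the mission life are hypothetical choices for illustration, and the statistical updating against test and flight experience that PFA prescribes is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N_MC = 100_000

    # Hypothetical Basquin-type fatigue life model: N_f = 10**log_A * S**(-b),
    # with uncertain model parameters and load amplitude (all values illustrative).
    log_A = rng.normal(12.0, 0.3, N_MC)           # fatigue strength coefficient, log10 scale
    b = rng.normal(3.0, 0.15, N_MC)               # fatigue exponent
    S = rng.lognormal(np.log(120.0), 0.08, N_MC)  # stress amplitude, MPa

    cycles_to_failure = 10.0**log_A * S**(-b)
    mission_cycles = 2.0e5                        # assumed required service life

    # Failure probability for this failure mode, before any test/flight updating
    p_fail = np.mean(cycles_to_failure < mission_cycles)
    print(f"Estimated failure probability: {p_fail:.3f}")
    ```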

  2. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  3. Space Shuttle Stiffener Ring Foam Failure Analysis, a Non-Conventional Approach

    NASA Technical Reports Server (NTRS)

    Howard, Philip M.

    2015-01-01

    The Space Shuttle Program made use of the excellent properties of rigid polyurethane foam for cryogenic tank insulation and as structural protection on the solid rocket boosters. When foam applications debonded, classical methods of failure analysis did not provide the root cause of the foam failure. Realizing that foam is the ideal medium to document and preserve its own mode of failure, thin sectioning was seen as a logical approach for foam failure analysis, used to observe the three-dimensional morphology of the foam cells. The cell foam morphology provided a much greater understanding of the failure modes than previously achieved.

  4. Predictive failure analysis: planning for the worst so that it never happens!

    PubMed

    Hipple, Jack

    2008-01-01

    This article reviews an alternative approach to failure analysis involving a deliberate "saboteur" approach, rather than a checklist approach, to disaster and emergency preparedness. This process takes the form of an algorithm that is easily applied to any planning situation.

  5. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 3: Structure and listing of programs

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  6. Risk assessment for enterprise resource planning (ERP) system implementations: a fault tree analysis approach

    NASA Astrophysics Data System (ADS)

    Zeng, Yajun; Skibniewski, Miroslaw J.

    2013-08-01

    Enterprise resource planning (ERP) system implementations are often characterised by large capital outlay, long implementation duration, and high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches that have been mostly focused on meeting project budget and schedule objectives, the proposed approach intends to address the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system implementation usage failure and quantify the impact of critical component failures or critical risk events in the implementation process.
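    As a rough illustration of the fault-tree mechanics this record describes, the sketch below evaluates a toy top-event probability from basic events through AND/OR gates. The events, probabilities, and tree shape are invented for illustration, not taken from the paper, and independence of basic events is assumed.

    ```python
    # Minimal fault-tree evaluation sketch (hypothetical events and probabilities).

    def p_or(*ps):
        """P(at least one event occurs), assuming independent events."""
        out = 1.0
        for p in ps:
            out *= (1.0 - p)
        return 1.0 - out

    def p_and(*ps):
        """P(all events occur), assuming independent events."""
        out = 1.0
        for p in ps:
            out *= p
        return out

    # Hypothetical basic-event probabilities for an ERP usage-failure tree
    p_bad_data_migration = 0.05
    p_insufficient_training = 0.10
    p_module_misconfig = 0.04
    p_vendor_support_lapse = 0.02

    # Intermediate gate: users cannot operate the system
    # (training AND vendor support both fail)
    p_user_capability_loss = p_and(p_insufficient_training, p_vendor_support_lapse)

    # Top event: ERP usage failure (any branch suffices -> OR gate)
    p_top = p_or(p_bad_data_migration, p_module_misconfig, p_user_capability_loss)
    print(f"P(ERP usage failure) = {p_top:.4f}")
    ```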

  7. Specifying design conservatism: Worst case versus probabilistic analysis

    NASA Technical Reports Server (NTRS)

    Miles, Ralph F., Jr.

    1993-01-01

    Design conservatism is the difference between specified and required performance, and is introduced when uncertainty is present. The classical approach of worst-case analysis for specifying design conservatism is presented, along with the modern approach of probabilistic analysis. The appropriate degree of design conservatism is a tradeoff between the required resources and the probability and consequences of a failure. A probabilistic analysis properly models this tradeoff, while a worst-case analysis reveals nothing about the probability of failure, and can significantly overstate the consequences of failure. Two aerospace examples will be presented that illustrate problems that can arise with a worst-case analysis.
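    The contrast this record draws can be demonstrated with a toy tolerance-stack problem: a worst-case analysis stacks every tolerance unfavorably at once, while a probabilistic analysis estimates how likely that stack actually is. All numbers below are assumed for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Three hypothetical toleranced contributions to a stack, each nominal ± tol.
    nominals = np.array([10.0, 5.0, 8.0])
    tols = np.array([0.5, 0.3, 0.4])
    limit = 24.0  # assumed requirement: the stack must stay below this

    # Worst-case analysis: every tolerance taken unfavorably at once.
    worst_case = nominals.sum() + tols.sum()
    print(f"Worst-case stack = {worst_case:.2f} vs limit {limit}")  # flags a violation

    # Probabilistic analysis: tolerances modeled as independent uniform errors.
    samples = nominals + rng.uniform(-tols, tols, size=(100_000, 3))
    p_exceed = np.mean(samples.sum(axis=1) > limit)
    print(f"P(stack exceeds limit) = {p_exceed:.2e}")  # typically well under 1%
    ```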

  8. A novel approach for evaluating the risk of health care failure modes.

    PubMed

    Chang, Dong Shang; Chung, Jenq Hann; Sun, Kuo Lung; Yang, Fu Chiang

    2012-12-01

    Failure mode and effects analysis (FMEA) can be employed to reduce medical errors by identifying the risk ranking of health care failure modes and taking priority action for safety improvement. The purpose of this paper is to propose a novel approach to data analysis: integrating FMEA with a mathematical tool, data envelopment analysis (DEA) with the slack-based measure (SBM). The risk indexes of FMEA (severity, occurrence, and detection) are viewed as multiple inputs of DEA. The practicality and usefulness of the proposed approach are illustrated with a health care case. As a systematic approach for improving the service quality of health care, it can offer quantitative corrective information on the risk indexes that thereafter reduces failure possibility. For safety improvement, these new targets for the risk indexes could be used for management by objectives. FMEA alone cannot provide quantitative corrective information on risk indexes; the novel approach overcomes this chief shortcoming of FMEA. By combining the DEA SBM model with FMEA, the two goals of increased patient safety and reduced medical cost can be achieved together.

  9. Micromechanics Based Failure Analysis of Heterogeneous Materials

    NASA Astrophysics Data System (ADS)

    Sertse, Hamsasew M.

    In recent decades, heterogeneous materials have been extensively used in various industries such as aerospace, defense, automotive and others due to their desirable specific properties and excellent capability of accumulating damage. Despite their wide use, there are numerous challenges associated with the application of these materials. One of the main challenges is the lack of accurate tools to predict the initiation, progression and final failure of these materials under various thermomechanical loading conditions. Although failure is usually treated at the macro- and meso-scale level, the initiation and growth of failure is a complex phenomenon across multiple scales. The objective of this work is to enable the mechanics of structure genome (MSG) and its companion code SwiftComp to analyze the initial failure (also called static failure), progressive failure, and fatigue failure of heterogeneous materials using a micromechanics approach. The initial failure is evaluated at each numerical integration point using pointwise and nonlocal approaches for each constituent of the heterogeneous material. The effects of imperfect interfaces among constituents of heterogeneous materials are also investigated using a linear traction-displacement model. Moreover, the progressive and fatigue damage analyses are conducted using a continuum damage mechanics (CDM) approach. Various failure criteria are also applied at a material point to analyze progressive damage in each constituent. The constitutive equation of a damaged material is formulated based on a consistent irreversible thermodynamics approach. The overall tangent modulus of uncoupled elastoplastic damage for negligible back stress effect is derived. The initiation of plasticity and damage in each constituent is evaluated at each numerical integration point using a nonlocal approach. The accumulated plastic strain and anisotropic damage evolution variables are iteratively solved using an incremental algorithm. The damage analyses are performed both for brittle failure/high cycle fatigue (HCF) with negligible plastic strain and for ductile failure/low cycle fatigue (LCF) with large plastic strain. The proposed approach is incorporated in SwiftComp and used to predict the initial failure envelope, the stress-strain curve for various loading conditions, and the fatigue life of heterogeneous materials. The combined effects of strain hardening and progressive fatigue damage on the effective properties of heterogeneous materials are also studied. The capability of the current approach is validated using several representative examples of heterogeneous materials including binary composites, continuous fiber-reinforced composites, particle-reinforced composites, discontinuous fiber-reinforced composites, and woven composites. The predictions of MSG are also compared with the predictions obtained using various micromechanics approaches such as the Generalized Method of Cells (GMC), Mori-Tanaka (MT), and Double Inclusion (DI) methods and Representative Volume Element (RVE) analysis (referred to as three-dimensional finite element analysis (3D FEA) in this document). This study demonstrates that a micromechanics-based failure analysis has great potential to rigorously and more accurately analyze the initiation and progression of damage in heterogeneous materials. However, this approach requires material properties specific to damage analysis, which need to be independently calibrated for each constituent.

  10. Introduction to Concurrent Engineering: Electronic Circuit Design and Production Applications

    DTIC Science & Technology

    1992-09-01

    STD-1629. Failure mode distribution data for many different types of parts may be found in RAC publication FMD-91. FMEA utilizes inductive logic in a ... contrasts with a Fault Tree Analysis (FTA), which utilizes deductive logic in a "top down" approach. In FTA, a system failure is assumed and traced down ... Analysis (FTA) is a graphical method of risk analysis used to identify critical failure modes within a system or equipment. Utilizing a pictorial approach

  11. Simulation Assisted Risk Assessment: Blast Overpressure Modeling

    NASA Technical Reports Server (NTRS)

    Lawrence, Scott L.; Gee, Ken; Mathias, Donovan; Olsen, Michael

    2006-01-01

    A probabilistic risk assessment (PRA) approach has been developed and applied to the risk analysis of capsule abort during ascent. The PRA is used to assist in the identification of modeling and simulation applications that can significantly impact the understanding of crew risk during this potentially dangerous maneuver. The PRA approach is also being used to identify the appropriate level of fidelity for the modeling of those critical failure modes. The Apollo launch escape system (LES) was chosen as a test problem for application of this approach. Failure modes that have been modeled and/or simulated to date include explosive overpressure-based failure, explosive fragment-based failure, land landing failures (range limits exceeded either near launch or Mode III trajectories ending on the African continent), capsule-booster re-contact during separation, and failure due to plume-induced instability. These failure modes have been investigated using analysis tools in a variety of technical disciplines at various levels of fidelity. The current paper focuses on the development and application of a blast overpressure model for the prediction of structural failure due to overpressure, including the application of high-fidelity analysis to predict near-field and headwind effects.

  12. A global analysis approach for investigating structural resilience in urban drainage systems.

    PubMed

    Mugume, Seith N; Gomez, Diego E; Fu, Guangtao; Farmani, Raziyeh; Butler, David

    2015-09-15

    Building resilience in urban drainage systems requires consideration of a wide range of threats that contribute to urban flooding. Existing hydraulic-reliability-based approaches have focused on quantifying functional failure caused by extreme rainfall or increases in dry weather flows that lead to hydraulic overloading of the system. Such approaches, however, do not explore the full system failure scenario space, because they exclude crucial threats such as equipment malfunction, pipe collapse and blockage that can also lead to urban flooding. In this research, a new analytical approach based on global resilience analysis is investigated and applied to systematically evaluate the performance of an urban drainage system subjected to a wide range of structural failure scenarios resulting from random cumulative link failure. Link failure envelopes, which represent the resulting loss of system functionality (impacts), are determined by computing the upper and lower limits of the simulation results for total flood volume (failure magnitude) and average flood duration (failure duration) at each link failure level. A new resilience index that combines the failure magnitude and duration into a single metric is applied to quantify system residual functionality at each considered link failure level. With this approach, resilience has been tested and characterised for an existing urban drainage system in Kampala city, Uganda. In addition, the effectiveness of potential adaptation strategies in enhancing its resilience to cumulative link failure has been tested.
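    A minimal sketch of a combined magnitude-duration resilience metric follows. The functional form Res = 1 - magnitude x duration and every number below are assumptions for illustration, not the index definition or data from the paper.

    ```python
    import numpy as np

    # Sketch of a resilience index combining failure magnitude and duration
    # (the functional form here is assumed, not the paper's exact definition).
    def resilience_index(flood_volume, total_inflow, flood_duration, horizon):
        """Res = 1 - (normalized failure magnitude) * (normalized failure duration)."""
        magnitude = flood_volume / total_inflow   # flooded fraction of inflow, 0..1
        duration = flood_duration / horizon       # fraction of simulation horizon, 0..1
        return 1.0 - magnitude * duration

    # Hypothetical results at increasing link-failure levels (10%..50% of links)
    levels = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
    flood_volumes = np.array([2e3, 8e3, 2.2e4, 4.5e4, 7.0e4])  # m^3, illustrative
    durations = np.array([0.4, 1.1, 2.5, 4.0, 5.5])            # hours, illustrative

    for lvl, v, d in zip(levels, flood_volumes, durations):
        res = resilience_index(v, total_inflow=2e5, flood_duration=d, horizon=12.0)
        print(f"link failure level {lvl:.0%}: Res = {res:.3f}")
    ```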

  13. Factors Influencing Progressive Failure Analysis Predictions for Laminated Composite Structure

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.

    2008-01-01

    Progressive failure material modeling methods used for structural analysis including failure initiation and material degradation are presented. Different failure initiation criteria and material degradation models are described that define progressive failure formulations. These progressive failure formulations are implemented in a user-defined material model for use with a nonlinear finite element analysis tool. The failure initiation criteria include the maximum stress criteria, maximum strain criteria, the Tsai-Wu failure polynomial, and the Hashin criteria. The material degradation model is based on the ply-discounting approach where the local material constitutive coefficients are degraded. Applications and extensions of the progressive failure analysis material model address two-dimensional plate and shell finite elements and three-dimensional solid finite elements. Implementation details are described in the present paper. Parametric studies for laminated composite structures are discussed to illustrate the features of the progressive failure modeling methods that have been implemented and to demonstrate their influence on progressive failure analysis predictions.
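    The ply-discounting idea described above can be sketched in a few lines: evaluate an initiation criterion ply by ply and knock down the local stiffness coefficients once a ply fails. The max-stress check, the strength values, and the knock-down factor below are illustrative assumptions, not the paper's user-defined material model.

    ```python
    # Minimal ply-discounting sketch (illustrative values only).
    FIBER_STRENGTH = 1500.0   # MPa, hypothetical longitudinal strength
    MATRIX_STRENGTH = 50.0    # MPa, hypothetical transverse strength
    DEGRADE = 1e-3            # stiffness knock-down factor applied on failure

    plies = [
        {"angle": 0,  "E1": 140e3, "E2": 10e3, "failed_fiber": False, "failed_matrix": False},
        {"angle": 90, "E1": 140e3, "E2": 10e3, "failed_fiber": False, "failed_matrix": False},
    ]

    def check_and_degrade(ply, sigma1, sigma2):
        """Max-stress initiation check; ply-discounting degradation on failure."""
        if not ply["failed_fiber"] and abs(sigma1) > FIBER_STRENGTH:
            ply["failed_fiber"] = True
            ply["E1"] *= DEGRADE          # discount fiber-direction stiffness
        if not ply["failed_matrix"] and abs(sigma2) > MATRIX_STRENGTH:
            ply["failed_matrix"] = True
            ply["E2"] *= DEGRADE          # discount transverse stiffness
        return ply["failed_fiber"] and ply["failed_matrix"]

    # Example ply stresses from some (hypothetical) laminate analysis step, MPa
    for ply, (s1, s2) in zip(plies, [(1600.0, 20.0), (300.0, 60.0)]):
        total = check_and_degrade(ply, s1, s2)
        status = "fully failed" if total else "partially intact"
        print(f"{ply['angle']:>2} deg ply: fiber={ply['failed_fiber']}, "
              f"matrix={ply['failed_matrix']} -> {status}")
    ```

    In a full progressive failure analysis this check-and-degrade step sits inside the nonlinear equilibrium loop, so stresses redistribute after each degradation until global failure.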

  14. Space Shuttle Stiffener Ring Foam Failure, a Non-Conventional Approach

    NASA Technical Reports Server (NTRS)

    Howard, Philip M.

    2007-01-01

    The Space Shuttle makes use of the excellent properties of rigid polyurethane foam for cryogenic tank insulation and as structural protection on the solid rocket boosters. When foam applications debond, classical methods of analysis do not always provide the root cause of the foam failure. Realizing that foam is the ideal medium to document and preserve its own mode of failure, thin sectioning was seen as a logical approach for foam failure analysis. Thin sectioning in two directions, both horizontal and vertical to the application, was chosen to observe the three-dimensional morphology of the foam cells. The cell foam morphology provided a much greater understanding of the failure modes than previously achieved.

  15. NASA Structural Analysis Report on the American Airlines Flight 587 Accident - Local Analysis of the Right Rear Lug

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.; Glaessgen, Edward H.; Mason, Brian H.; Krishnamurthy, Thiagarajan; Davila, Carlos G.

    2005-01-01

    A detailed finite element analysis of the right rear lug of the American Airlines Flight 587 - Airbus A300-600R was performed as part of the National Transportation Safety Board's failure investigation of the accident that occurred on November 12, 2001. The loads experienced by the right rear lug are evaluated using global models of the vertical tail, local models near the right rear lug, and a global-local analysis procedure. The right rear lug was analyzed using two modeling approaches. In the first approach, solid-shell type modeling is used, and in the second approach, layered-shell type modeling is used. The solid-shell and the layered-shell modeling approaches were used in progressive failure analyses (PFA) to determine the load, mode, and location of failure in the right rear lug under loading representative of an Airbus certification test conducted in 1985 (the 1985-certification test). Both analyses were in excellent agreement with each other on the predicted failure loads, failure mode, and location of failure. The solid-shell type modeling was then used to analyze both a subcomponent test conducted by Airbus in 2003 (the 2003-subcomponent test) and the accident condition. Excellent agreement was observed between the analyses and the observed failures in both cases. From the analyses conducted and presented in this paper, the following conclusions were drawn. The moment, Mx (moment about the fuselage longitudinal axis), has a significant effect on the failure load of the lugs. Higher absolute values of Mx give lower failure loads. The predicted load, mode, and location of the failure of the 1985-certification test, 2003-subcomponent test, and the accident condition are in very good agreement. This agreement suggests that the 1985-certification and 2003-subcomponent tests represent the accident condition accurately. The failure mode of the right rear lug for the 1985-certification test, 2003-subcomponent test, and the accident load case is identified as a cleavage-type failure. For the accident case, the predicted failure load for the right rear lug from the PFA is greater than 1.98 times the limit load of the lugs.

  16. Fractography of ceramic and metal failures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1984-01-01

    STP 827 is organized into the two broad areas of ceramics and metals. The ceramics section covers fracture analysis techniques, surface analysis techniques, and applied fractography. The metals section covers failure analysis techniques, the latest approaches to fractography, and applied fractography.

  17. A Big Data Analysis Approach for Rail Failure Risk Assessment.

    PubMed

    Jamshidi, Ali; Faghih-Roohi, Shahrzad; Hajizadeh, Siamak; Núñez, Alfredo; Babuska, Robert; Dollevoet, Rolf; Li, Zili; De Schutter, Bart

    2017-08-01

    Railway infrastructure monitoring is a vital task to ensure rail transportation safety. A rail failure could result in a considerable impact not only on train delays and maintenance costs, but also on the safety of passengers. In this article, the aim is to assess the risk of a rail failure by analyzing a type of rail surface defect called squats, which are detected automatically among the huge number of records from video cameras. We propose an image processing approach for automatic detection of squats, especially severe types that are prone to rail breaks. We measure the visual length of the squats and use these measurements to model the failure risk. For the assessment of the rail failure risk, we estimate the probability of rail failure based on the growth of squats. Moreover, we perform severity and crack growth analyses to consider the impact of rail traffic loads on defects in three different growth scenarios. The failure risk estimations are provided for several samples of squats with different crack growth lengths on a busy rail track of the Dutch railway network. The results illustrate the practicality and efficiency of the proposed approach.

  18. SCADA alarms processing for wind turbine component failure detection

    NASA Astrophysics Data System (ADS)

    Gonzalez, E.; Reder, M.; Melero, J. J.

    2016-09-01

    Wind turbine failure and downtime can often compromise the profitability of a wind farm due to their high impact on operation and maintenance (O&M) costs. Early detection of failures can facilitate the changeover from corrective maintenance towards a predictive approach. This paper presents a cost-effective methodology that combines various alarm analysis techniques, using data from the Supervisory Control and Data Acquisition (SCADA) system, in order to detect component failures. The approach categorises the alarms according to a reviewed taxonomy, turning overwhelming data into valuable information to assess component status. Different alarm analysis techniques are then applied for two purposes: evaluating the capability of the SCADA alarm system to detect failures, and investigating whether faults in some components are followed by failures in others. Various case studies are presented and discussed. The study highlights the relationship between faulty behaviour in different components, and between failures and adverse environmental conditions.

  19. Mod 1 wind turbine generator failure modes and effects analysis

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A failure modes and effects analysis (FMEA) was directed primarily at identifying those critical failure modes that would be hazardous to life or would result in major damage to the system. Each subsystem was approached from the top down and broken down to successive lower levels where the criticality of the failure mode appeared to warrant more detailed analysis. The results were reviewed by specialists from outside the Mod 1 program, and corrective action was taken wherever recommended.

  20. Improving Attachments of Non-Invasive (Type III) Electronic Data Loggers to Cetaceans

    DTIC Science & Technology

    2015-09-30

    animals in human care will be performed to test and validate this approach. The cadaver trials will enable controlled testing to failure or with both ... quantitative metrics and analysis tools to assess the impact of a tag on the animal. Here we will present: 1) the characterization of the mechanical ... fine scale motion analysis for swimming animals. 2 APPROACH Our approach is divided into four subtasks: Task 1: Forces and failure modes

  1. Analysis of Emergency Diesel Generators Failure Incidents in Nuclear Power Plants

    NASA Astrophysics Data System (ADS)

    Hunt, Ronderio LaDavis

    In the early years of operation, emergency diesel generators (EDGs) had a minimal rate of demand failures. EDGs are designed to operate as a backup when the main source of electricity has been disrupted. More recently, EDGs have been failing at nuclear power plants (NPPs) around the United States, causing either station blackouts or loss of onsite and offsite power. These failures were of a specific type called demand failures. This thesis evaluated a problem of growing concern in the nuclear industry: the rate rose from an average of one EDG demand failure per year in 1997 to an excessive event of four EDG demand failures in a single year in 2011. To determine the next occurrence of such an extreme event and its possible cause, two analyses were conducted: a statistical analysis and a root cause analysis. In the statistical analysis, an extreme event probability approach was applied to determine the next occurrence year of an excessive event as well as the probability of that excessive event occurring. In the root cause analysis, the potential causes of the excessive event were evaluated with respect to EDG manufacturers, aging, policy changes and maintenance practices, and failure components. The root cause analysis also investigated the correlation between demand failure data and historical data. Final results from the statistical analysis showed expectations of an excessive event occurring within a fixed range of probability, and a wider range of probability from the extreme event probability approach. The root cause analysis of the demand failure data followed historical statistics for the EDG manufacturers, aging, and policy changes and maintenance practices, but indicated a possible cause of the excessive event in the failure components. Conclusions showed that predicting the next excessive demand failure year, its probability, and the next occurrence year of such failures with an acceptable confidence level was difficult, but it was likely that this type of failure will not be a 100-year event. Notably, the majority of the EDG demand failures since 2005 occurred within the main components. The overall analysis of this study indicated that the excessive event was caused by the overall age (wear and tear) of the emergency diesel generators in nuclear power plants. Future work will be to better determine the return period of the excessive event, once the occurrence has happened a second time, by implementing the extreme event probability approach.
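    As a hedged illustration of how an extreme-event probability calculation of this kind can be framed, the sketch below assumes demand failures follow a Poisson process at roughly one per year and asks how rare a four-failure year is. The Poisson model and the rate are assumptions for illustration; the thesis's actual method and data are not reproduced here.

    ```python
    from math import exp, factorial

    # Assumed model: EDG demand failures arrive as a Poisson process.
    lam = 1.0  # assumed long-run average: ~1 demand failure per year

    def poisson_pmf(k, lam):
        """P(exactly k events in one year) under a Poisson(lam) model."""
        return lam**k * exp(-lam) / factorial(k)

    # Probability of an "excessive" year with 4 or more demand failures
    p_ge_4 = 1.0 - sum(poisson_pmf(k, lam) for k in range(4))
    return_period = 1.0 / p_ge_4
    print(f"P(>=4 failures in a year) = {p_ge_4:.4f}; "
          f"return period ~ {return_period:.0f} years")
    ```

    Under these assumed numbers the return period comes out at roughly 50 years, which is at least consistent in spirit with the thesis's conclusion that the excessive event is unlikely to be a 100-year event.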

  2. X-framework: Space system failure analysis framework

    NASA Astrophysics Data System (ADS)

    Newman, John Steven

    Space program and space system failures result in financial losses in the multi-hundred-million-dollar range every year. In addition to financial loss, space system failures may also represent the loss of opportunity, loss of critical scientific, commercial and/or national defense capabilities, as well as loss of public confidence. The need exists to improve learning and expand the scope of lessons documented and offered to the space industry project team. One of the barriers to incorporating lessons learned is the way in which space system failures are documented. Multiple classes of space system failure information are identified, ranging from "sound bite" summaries in space insurance compendia, to articles in journals, lengthy data-oriented (what happened) reports, and in some rare cases, reports that treat not only the what but also the why. In addition there are periodically published "corporate crisis" reports, typically issued after multiple or highly visible failures, that explore management roles in the failure, often within a politically oriented context. Given the general lack of consistency, it is clear that a good multi-level space system/program failure framework with analytical and predictive capability is needed. This research effort set out to develop such a model. The X-Framework (x-fw) is proposed as an innovative forensic failure analysis approach, providing a multi-level understanding of the space system failure event beginning with the proximate cause, extending to the directly related work or operational processes, and upward through successive management layers. The x-fw focus is on capability and control at the process level and examines: (1) management accountability and control, (2) resource and requirement allocation, and (3) planning, analysis, and risk management at each level of management. The x-fw model provides an innovative failure analysis approach for acquiring a multi-level perspective, identifying direct and indirect causation of failures, and generating better and more consistent reports. Through this approach failures can be more fully understood, existing programs can be evaluated, and future failures avoided. The x-fw development involved a review of the historical failure analysis and prevention literature, coupled with examination of numerous failure case studies. Analytical approaches included use of a relational failure "knowledge base" for classification and sorting of x-fw elements and attributes for each case. In addition, a novel "management mapping" technique was developed as a means of displaying an integrated snapshot of indirect causes within the management chain. Further research opportunities will extend the depth of knowledge available for many of the component-level cases. In addition, the x-fw has the potential to expand the scope of space sector lessons learned and contribute to knowledge management and organizational learning.

  3. Simulation Assisted Risk Assessment Applied to Launch Vehicle Conceptual Design

    NASA Technical Reports Server (NTRS)

    Mathias, Donovan L.; Go, Susie; Gee, Ken; Lawrence, Scott

    2008-01-01

    A simulation-based risk assessment approach is presented and is applied to the analysis of abort during the ascent phase of a space exploration mission. The approach utilizes groupings of launch vehicle failures, referred to as failure bins, which are mapped to corresponding failure environments. Physical models are used to characterize the failure environments in terms of the risk due to blast overpressure, resulting debris field, and the thermal radiation due to a fireball. The resulting risk to the crew is dynamically modeled by combining the likelihood of each failure, the severity of the failure environments as a function of initiator and time of the failure, the robustness of the crew module, and the warning time available due to early detection. The approach is shown to support the launch vehicle design process by characterizing the risk drivers and identifying regions where failure detection would significantly reduce the risk to the crew.

  4. Structural Analysis of the Right Rear Lug of American Airlines Flight 587

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.; Glaessgen, Edward H.; Mason, Brian H.; Krishnamurthy, Thiagarajan; Davila, Carlos G.

    2006-01-01

    A detailed finite element analysis of the right rear lug of the American Airlines Flight 587 - Airbus A300-600R was performed as part of the National Transportation Safety Board's failure investigation of the accident that occurred on November 12, 2001. The loads experienced by the right rear lug are evaluated using global models of the vertical tail, local models near the right rear lug, and a global-local analysis procedure. The right rear lug was analyzed using two modeling approaches. In the first approach, solid-shell type modeling is used, and in the second approach, layered-shell type modeling is used. The solid-shell and the layered-shell modeling approaches were used in progressive failure analyses (PFA) to determine the load, mode, and location of failure in the right rear lug under loading representative of an Airbus certification test conducted in 1985 (the 1985-certification test). Both analyses were in excellent agreement with each other on the predicted failure loads, failure mode, and location of failure. The solid-shell type modeling was then used to analyze both a subcomponent test conducted by Airbus in 2003 (the 2003-subcomponent test) and the accident condition. Excellent agreement was observed between the analyses and the observed failures in both cases. The moment, Mx (moment about the fuselage longitudinal axis), has a significant effect on the failure load of the lugs. Higher absolute values of Mx give lower failure loads. The predicted load, mode, and location of the failure of the 1985-certification test, 2003-subcomponent test, and the accident condition are in very good agreement. This agreement suggests that the 1985-certification and 2003-subcomponent tests represent the accident condition accurately. The failure mode of the right rear lug for the 1985-certification test, 2003-subcomponent test, and the accident load case is identified as a cleavage-type failure. For the accident case, the predicted failure load for the right rear lug from the PFA is greater than 1.98 times the limit load of the lugs.

  5. Advances on the Failure Analysis of the Dam-Foundation Interface of Concrete Dams.

    PubMed

    Altarejos-García, Luis; Escuder-Bueno, Ignacio; Morales-Torres, Adrián

    2015-12-02

    Failure analysis of the dam-foundation interface in concrete dams is characterized by complexity, uncertainties in models and parameters, and strongly non-linear softening behavior. In practice, these uncertainties are dealt with through a well-structured mixture of experience, best practices, and prudent, conservative design approaches based on the safety factor concept. Yet a sound, deep knowledge of some aspects of this failure mode remains elusive, as they have been offset in practical applications by the use of this conservative approach. In this paper we show a strategy to analyse this failure mode under a reliability-based approach. The proposed methodology of analysis integrates epistemic uncertainty on the spatial variability of strength parameters with data from dam monitoring. The purpose is to produce meaningful and useful information regarding the probability of occurrence of this failure mode that can be incorporated in risk-informed dam safety reviews. In addition, relationships between probability of failure and factors of safety are obtained. This research is supported by more than a decade of intensive professional practice on real-world cases, and its final purpose is to bring some clarity and guidance and to contribute to the improvement of current knowledge and best practices on such an important dam safety concern.
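    The reliability-based view described above can be illustrated with a toy sliding limit state for a dam-foundation interface, relating a factor of safety to a Monte Carlo failure probability. The limit-state form and every distribution and number below are assumptions for illustration, not the paper's model or monitoring data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 200_000

    # Hypothetical sliding limit state: resistance = c*A + N*tan(phi), demand = H.
    cohesion = rng.lognormal(np.log(0.3), 0.25, N)          # MPa, uncertain
    tan_phi = np.tan(np.radians(rng.normal(35.0, 3.0, N)))  # friction angle (deg), uncertain
    area = 1200.0            # m^2, treated as known
    normal_force = 350.0     # MN, treated as known
    horizontal_load = rng.normal(300.0, 40.0, N)            # MN, uncertain

    resistance = cohesion * area + normal_force * tan_phi   # MN (1 MPa * 1 m^2 = 1 MN)
    safety_factor = resistance / horizontal_load
    p_failure = np.mean(safety_factor < 1.0)
    print(f"Mean safety factor = {safety_factor.mean():.2f}, "
          f"P(sliding failure) ~ {p_failure:.1e}")
    ```

    The point of the exercise is the pairing in the last line: the same simulation yields both a mean factor of safety and an explicit failure probability, which is the kind of relationship the paper derives.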

  6. Advances on the Failure Analysis of the Dam—Foundation Interface of Concrete Dams

    PubMed Central

    Altarejos-García, Luis; Escuder-Bueno, Ignacio; Morales-Torres, Adrián

    2015-01-01

    Failure analysis of the dam-foundation interface in concrete dams is characterized by complexity, uncertainties in models and parameters, and strongly non-linear softening behavior. In practice, these uncertainties are dealt with through a well-structured mixture of experience, best practices, and prudent, conservative design approaches based on the safety factor concept. Yet a sound, deep knowledge of some aspects of this failure mode remains elusive, as they have been offset in practical applications by the use of this conservative approach. In this paper we show a strategy to analyse this failure mode under a reliability-based approach. The proposed methodology of analysis integrates epistemic uncertainty on the spatial variability of strength parameters with data from dam monitoring. The purpose is to produce meaningful and useful information regarding the probability of occurrence of this failure mode that can be incorporated in risk-informed dam safety reviews. In addition, relationships between probability of failure and factors of safety are obtained. This research is supported by more than a decade of intensive professional practice on real-world cases, and its final purpose is to bring some clarity and guidance and to contribute to the improvement of current knowledge and best practices on such an important dam safety concern. PMID:28793709

  7. Probabilistic Design Analysis (PDA) Approach to Determine the Probability of Cross-System Failures for a Space Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Shih, Ann T.; Lo, Yunnhon; Ward, Natalie C.

    2010-01-01

    Quantifying the probability of significant launch vehicle failure scenarios for a given design, while still in the design process, is critical to mission success and to the safety of the astronauts. Probabilistic risk assessment (PRA) is chosen from many system safety and reliability tools to verify the loss of mission (LOM) and loss of crew (LOC) requirements set by the NASA Program Office. To support the integrated vehicle PRA, probabilistic design analysis (PDA) models are developed by using vehicle design and operation data to better quantify failure probabilities and to better understand the characteristics of a failure and its outcome. This PDA approach uses a physics-based model to describe the system behavior and response for a given failure scenario. Each driving parameter in the model is treated as a random variable with a distribution function. Monte Carlo simulation is used to perform probabilistic calculations to statistically obtain the failure probability. Sensitivity analyses are performed to show how input parameters affect the predicted failure probability, providing insight for potential design improvements to mitigate the risk. The paper discusses the application of the PDA approach in determining the probability of failure for two scenarios from the NASA Ares I project.
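    A minimal PDA-style sketch follows: each driving parameter of a physics-style margin model is treated as a random variable, Monte Carlo gives the failure probability, and a simple correlation serves as the sensitivity measure. The margin model and all distributions are hypothetical stand-ins, not the Ares I models.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N = 50_000

    # Hypothetical margin model for one failure scenario:
    # margin = burst_pressure - operating_pressure * transient_factor
    burst_pressure = rng.normal(8.0, 0.4, N)           # MPa, uncertain capability
    operating_pressure = rng.normal(5.0, 0.25, N)      # MPa, uncertain demand
    transient_factor = rng.lognormal(np.log(1.2), 0.1, N)  # uncertain amplification

    margin = burst_pressure - operating_pressure * transient_factor
    print(f"P(failure) ~ {np.mean(margin < 0.0):.1e}")

    # Simple sensitivity measure: correlation of each input with the margin
    for name, x in [("burst_pressure", burst_pressure),
                    ("operating_pressure", operating_pressure),
                    ("transient_factor", transient_factor)]:
        rho = np.corrcoef(x, margin)[0, 1]
        print(f"{name:20s} correlation with margin = {rho:+.2f}")
    ```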

  8. Analysis of Discrete-Source Damage Progression in a Tensile Stiffened Composite Panel

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Lotts, Christine G.; Sleight, David W.

    1999-01-01

    This paper demonstrates the progressive failure analysis capability of NASA Langley's COMET-AR finite element analysis code on a large-scale built-up composite structure. A large-scale five-stringer composite panel with 7-in.-long discrete-source damage was analyzed from initial loading to final failure, including geometric and material nonlinearities. Predictions using different mesh sizes, different saw-cut modeling approaches, and different failure criteria were performed and assessed. All failure predictions correlated reasonably well with the test result.

  9. User-Defined Material Model for Progressive Failure Analysis

    NASA Technical Reports Server (NTRS)

    Knight, Norman F. Jr.; Reeder, James R. (Technical Monitor)

    2006-01-01

    An overview of different types of composite material system architectures and a brief review of progressive failure material modeling methods used for structural analysis including failure initiation and material degradation are presented. Different failure initiation criteria and material degradation models are described that define progressive failure formulations. These progressive failure formulations are implemented in a user-defined material model (or UMAT) for use with the ABAQUS/Standard1 nonlinear finite element analysis tool. The failure initiation criteria include the maximum stress criteria, maximum strain criteria, the Tsai-Wu failure polynomial, and the Hashin criteria. The material degradation model is based on the ply-discounting approach where the local material constitutive coefficients are degraded. Applications and extensions of the progressive failure analysis material model address two-dimensional plate and shell finite elements and three-dimensional solid finite elements. Implementation details and use of the UMAT subroutine are described in the present paper. Parametric studies for composite structures are discussed to illustrate the features of the progressive failure modeling methods that have been implemented.

  10. Reducing unscheduled plant maintenance delays -- Field test of a new method to predict electric motor failure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Homce, G.T.; Thalimer, J.R.

    1996-05-01

    Most electric motor predictive maintenance methods have drawbacks that limit their effectiveness in the mining environment. The US Bureau of Mines (USBM) is developing an alternative approach to detect winding insulation breakdown in advance of complete motor failure. In order to evaluate the analysis algorithms necessary for this approach, the USBM has designed and installed a system to monitor 120 electric motors in a coal preparation plant. The computer-based experimental system continuously gathers, stores, and analyzes electrical parameters for each motor. The results are then correlated with data from conventional motor-maintenance methods and in-service failures to determine if the analysis algorithms can detect signs of insulation deterioration and impending failure. This paper explains the on-line testing approach used in this research, and describes monitoring system design and implementation. At this writing, data analysis is underway, but conclusive results are not yet available.

  11. Top-down and bottom-up definitions of human failure events in human reliability analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald Laurids

    2014-10-01

    In the probabilistic risk assessments (PRAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question is crucial, however, as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PRAs tend to be top-down—defined as a subset of the PRA—whereas the HFEs used in petroleum quantitative risk assessments (QRAs) often tend to be bottom-up—derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.

  12. A case study in nonconformance and performance trend analysis

    NASA Technical Reports Server (NTRS)

    Maloy, Joseph E.; Newton, Coy P.

    1990-01-01

    As part of NASA's effort to develop an agency-wide approach to trend analysis, a pilot nonconformance and performance trending analysis study was conducted on the Space Shuttle auxiliary power unit (APU). The purpose of the study was to (1) demonstrate that nonconformance analysis can be used to identify repeating failures of a specific item (and the associated failure modes and causes) and (2) determine whether performance parameters could be analyzed and monitored to provide an indication of component or system degradation prior to failure. The nonconformance analysis of the APU did identify repeating component failures, which possibly could be reduced if key performance parameters were monitored and analyzed. The performance-trending analysis verified that the characteristics of hardware parameters can be effective in detecting degradation of hardware performance prior to failure.

  13. Review of the probabilistic failure analysis methodology and other probabilistic approaches for application in aerospace structural design

    NASA Technical Reports Server (NTRS)

    Townsend, J.; Meyers, C.; Ortega, R.; Peck, J.; Rheinfurth, M.; Weinstock, B.

    1993-01-01

    Probabilistic structural analyses and design methods are steadily gaining acceptance within the aerospace industry. The safety factor approach to design has long been the industry standard, and it is believed by many to be overly conservative and thus costly. A probabilistic approach to design may offer substantial cost savings. This report summarizes several probabilistic approaches: the probabilistic failure analysis (PFA) methodology developed by the Jet Propulsion Laboratory, fast probability integration (FPI) methods, the NESSUS finite element code, and response surface methods. Example problems are provided to help identify the advantages and disadvantages of each method.

  14. One Size Does Not Fit All: Human Failure Event Decomposition and Task Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald Laurids Boring, PhD

    2014-09-01

    In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered or exacerbated by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down—defined as a subset of the PSA—whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up—derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications. In this paper, I first review top-down and bottom-up approaches for defining HFEs and then present a seven-step guideline to ensure a task analysis completed as part of human error identification decomposes to a level suitable for use as HFEs. This guideline illustrates an effective way to bridge the bottom-up approach with top-down requirements.

  15. Practical, transparent prospective risk analysis for the clinical laboratory.

    PubMed

    Janssens, Pim Mw

    2014-11-01

    Prospective risk analysis (PRA) is an essential element of quality assurance for clinical laboratories. Practical approaches to conducting PRA in laboratories, however, are scarce. On the basis of the classical Failure Mode and Effect Analysis method, an approach to PRA was developed for application to key laboratory processes. First, the separate, major steps of the process under investigation are identified. Scores are then given for the Probability (P) and Consequence (C) of predefined types of failures and the chances of Detecting (D) these failures. Based on the P and C scores (on a 10-point scale), an overall Risk score (R) is calculated. The scores for each process were recorded in a matrix table. Based on predetermined criteria for R and D, it was determined whether a more detailed analysis was required for potential failures and, ultimately, where risk-reducing measures were necessary, if any. As an illustration, this paper presents the results of applying PRA to our pre-analytical and analytical activities. The highest R scores were obtained in the stat processes, the most common failure type across the collective process steps was 'delayed processing or analysis', the failure type with the highest mean R score was 'inappropriate analysis', and the failure type most frequently rated as suboptimal was 'identification error'. The PRA as designed is a useful semi-objective tool for identifying process steps with potential failures rated as risky. Its systematic design and convenient output in matrix tables make it easy to perform, practical, and transparent.
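    The scoring scheme lends itself to a short sketch: each process step gets P and C scores on a 10-point scale, R is computed from them, and predetermined criteria on R and D flag steps for detailed analysis. The product form R = P * C, both thresholds, and the example rows below are assumptions for illustration; the paper defines its own criteria and scores.

    ```python
    # Sketch of the P/C/D scoring described above (thresholds and rows assumed).
    steps = [
        # (process step, failure type, P, C, D) -- illustrative values on 1..10 scales,
        # with a higher D score assumed to mean poorer detectability
        ("phlebotomy",       "identification error",        2, 9, 6),
        ("stat chemistry",   "delayed processing/analysis", 5, 7, 4),
        ("sample reception", "inappropriate analysis",      3, 8, 3),
    ]

    R_THRESHOLD = 30   # assumed criterion on the Risk score for detailed analysis
    D_THRESHOLD = 5    # assumed criterion: poor detectability also triggers review

    for step, failure, p, c, d in steps:
        r = p * c                                  # overall Risk score from P and C
        review = (r >= R_THRESHOLD) or (d >= D_THRESHOLD)
        flag = "REVIEW" if review else "ok"
        print(f"{step:16s} {failure:28s} R={r:3d} D={d}  {flag}")
    ```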

  16. A systematic risk management approach employed on the CloudSat project

    NASA Technical Reports Server (NTRS)

    Basilio, R. R.; Plourde, K. S.; Lam, T.

    2000-01-01

    The CloudSat Project has developed a simplified approach for fault tree analysis and probabilistic risk assessment. A system-level fault tree has been constructed to identify credible fault scenarios and failure modes leading up to a potential failure to meet the nominal mission success criteria.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiangjiang; Li, Weixuan; Lin, Guang

    In decision-making for groundwater management and contamination remediation, it is important to accurately evaluate the probability of the occurrence of a failure event. For small failure probability analysis, a large number of model evaluations are needed in the Monte Carlo (MC) simulation, which is impractical for CPU-demanding models. One approach to alleviate the computational cost caused by the model evaluations is to construct a computationally inexpensive surrogate model instead. However, using a surrogate approximation can cause an extra error in the failure probability analysis. Moreover, constructing accurate surrogates is challenging for high-dimensional models, i.e., models containing many uncertain input parameters. To address these issues, we propose an efficient two-stage MC approach for small failure probability analysis in high-dimensional groundwater contaminant transport modeling. In the first stage, a low-dimensional representation of the original high-dimensional model is sought with Karhunen–Loève expansion and sliced inverse regression jointly, which allows for the easy construction of a surrogate with polynomial chaos expansion. Then a surrogate-based MC simulation is implemented. In the second stage, the small number of samples that are close to the failure boundary are re-evaluated with the original model, which corrects the bias introduced by the surrogate approximation. The proposed approach is tested with a numerical case study and is shown to be 100 times faster than the traditional MC approach in achieving the same level of estimation accuracy.
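    The two-stage idea can be sketched compactly: fit a cheap surrogate, run the Monte Carlo on it, then re-evaluate with the original model only the samples the surrogate places near the failure boundary. The toy simulator, the quadratic least-squares surrogate (standing in for polynomial chaos), the threshold, and the band width below are assumptions for illustration; the dimension-reduction stage (Karhunen–Loève expansion plus sliced inverse regression) is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def true_model(x):
        """Stand-in for a CPU-demanding simulator (cheap here for illustration)."""
        return np.sin(x[:, 0]) + 0.5 * x[:, 1]**2 + 0.1 * x[:, 0] * x[:, 1]

    def features(x):
        """Quadratic basis: 1, x1, x2, x1^2, x2^2, x1*x2."""
        return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                                x[:, 0]**2, x[:, 1]**2, x[:, 0] * x[:, 1]])

    # Stage 1a: fit an inexpensive polynomial surrogate on a small training design
    x_train = rng.uniform(-3, 3, size=(200, 2))
    coef, *_ = np.linalg.lstsq(features(x_train), true_model(x_train), rcond=None)

    # Stage 1b: cheap surrogate-based MC over many samples
    THRESHOLD = 4.0                       # assumed failure threshold on the output
    x_mc = rng.uniform(-3, 3, size=(200_000, 2))
    y_surr = features(x_mc) @ coef

    # Stage 2: re-evaluate with the original model only the samples near the
    # failure boundary (band width must cover the surrogate's approximation error)
    band = np.abs(y_surr - THRESHOLD) < 1.0
    y_final = y_surr.copy()
    y_final[band] = true_model(x_mc[band])

    p_fail = np.mean(y_final > THRESHOLD)
    print(f"re-evaluated {band.sum()} of {len(x_mc)} samples; "
          f"P(failure) ~ {p_fail:.3e}")
    ```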

  18. Failure Mode Identification Through Clustering Analysis

    NASA Technical Reports Server (NTRS)

    Arunajadai, Srikesh G.; Stone, Robert B.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Research has shown that nearly 80% of costs and problems are created in product development and that cost and quality are essentially designed into products at the conceptual stage. Currently, failure identification procedures (such as FMEA (Failure Modes and Effects Analysis), FMECA (Failure Modes, Effects and Criticality Analysis), and FTA (Fault Tree Analysis)) and design of experiments are being used for quality control and for the detection of potential failure modes during the detail design stage or post-product launch. Though all of these methods have their own advantages, they do not indicate which predominant failures a designer should focus on while designing a product. This work uses a functional approach to identify failure modes, which hypothesizes that similarities exist between different failure modes based on the functionality of the product/component. In this paper, a statistical clustering procedure is proposed to retrieve information on the set of predominant failures that a function experiences. The various stages of the methodology are illustrated using a hypothetical design example.
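    A toy version of the clustering step might look as follows: represent each component by its normalized failure-mode frequency profile and cluster the profiles, so that components serving similar functions surface shared predominant failure modes. The data, the k-means choice, and k = 2 are illustrative assumptions, not the paper's procedure.

    ```python
    import numpy as np

    # Hypothetical data: rows = components grouped by the function they perform,
    # columns = counts of observed failure modes from historical reports.
    failure_modes = ["wear", "fatigue", "corrosion", "fracture"]
    counts = np.array([
        [12,  2, 1, 0],   # component A ("transfer torque")
        [10,  3, 2, 1],   # component B ("transfer torque")
        [ 1,  9, 0, 8],   # component C ("support load")
        [ 0, 11, 1, 7],   # component D ("support load")
    ], dtype=float)

    # Normalize to failure-mode frequency profiles and run a tiny k-means (k = 2)
    profiles = counts / counts.sum(axis=1, keepdims=True)
    centers = profiles[[0, 2]].copy()     # deterministic seed: one row per group
    for _ in range(20):
        dists = ((profiles[:, None, :] - centers[None, :, :])**2).sum(axis=2)
        labels = dists.argmin(axis=1)     # assign each component to nearest center
        centers = np.array([profiles[labels == k].mean(axis=0) for k in range(2)])

    for k in range(2):
        members = np.where(labels == k)[0]
        dominant = [failure_modes[i] for i in np.argsort(centers[k])[::-1][:2]]
        print(f"cluster {k}: components {members}, dominant modes: {dominant}")
    ```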

  19. A Hybrid Approach to Composite Damage and Failure Analysis Combining Synergistic Damage Mechanics and Peridynamics

    DTIC Science & Technology

    2017-03-30

    Report documentation excerpt: A Hybrid Approach to Composite Damage and Failure Analysis Combining Synergistic Damage Mechanics and Peridynamics. Award Number: N00014-16-1-2173. Sponsor: DOD-Navy, Office of Naval Research. Performing organization: Texas A&M Engineering Experiment Station (TEES), 400 Harvey Mitchell Parkway, Suite 300. PI: Ramesh

  20. Independent Orbiter Assessment (IOA): Analysis of the crew equipment subsystem

    NASA Technical Reports Server (NTRS)

    Sinclair, Susan; Graham, L.; Richard, Bill; Saxon, H.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results corresponding to the Orbiter crew equipment hardware are documented. The IOA analysis process utilized available crew equipment hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 352 failure modes analyzed, 78 were determined to be PCIs.

  1. Prediction of Composite Laminate Strength Properties Using a Refined Zigzag Plate Element

    NASA Technical Reports Server (NTRS)

    Barut, Atila; Madenci, Erdogan; Tessler, Alexander

    2013-01-01

    This study presents an approach that uses the refined zigzag element, RZE(exp2,2) in conjunction with progressive failure criteria to predict the ultimate strength of composite laminates based on only ply-level strength properties. The methodology involves four major steps: (1) Determination of accurate stress and strain fields under complex loading conditions using RZE(exp2,2)-based finite element analysis, (2) Determination of failure locations and failure modes using the commonly accepted Hashin's failure criteria, (3) Recursive degradation of the material stiffness, and (4) Non-linear incremental finite element analysis to obtain stress redistribution until global failure. The validity of this approach is established by considering the published test data and predictions for (1) strength of laminates under various off-axis loading, (2) strength of laminates with a hole under compression, and (3) strength of laminates with a hole under tension.

  2. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 1: Methodology and applications

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.

  3. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.

  4. Independent Orbiter Assessment (IOA): Analysis of the pyrotechnics subsystem

    NASA Technical Reports Server (NTRS)

    Robinson, W. W.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Pyrotechnics hardware. The IOA analysis process utilized available pyrotechnics hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  5. Quantitative Approach to Failure Mode and Effect Analysis for Linear Accelerator Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Daniel, Jennifer C., E-mail: jennifer.odaniel@duke.edu; Yin, Fang-Fang

    Purpose: To determine clinic-specific linear accelerator quality assurance (QA) TG-142 test frequencies, to maximize physicist time efficiency and patient treatment quality. Methods and Materials: A novel quantitative approach to failure mode and effect analysis is proposed. Nine linear accelerator-years of QA records provided data on failure occurrence rates. The severity of test failure was modeled by introducing corresponding errors into head and neck intensity modulated radiation therapy treatment plans. The relative risk of daily linear accelerator QA was calculated as a function of frequency of test performance. Results: Although the failure severity was greatest for daily imaging QA (imaging vs treatment isocenter and imaging positioning/repositioning), the failure occurrence rate was greatest for output and laser testing. The composite ranking results suggest that performing output and lasers tests daily, imaging versus treatment isocenter and imaging positioning/repositioning tests weekly, and optical distance indicator and jaws versus light field tests biweekly would be acceptable for non-stereotactic radiosurgery/stereotactic body radiation therapy linear accelerators. Conclusions: Failure mode and effect analysis is a useful tool to determine the relative importance of QA tests from TG-142. Because there are practical time limitations on how many QA tests can be performed, this analysis highlights which tests are the most important and suggests the frequency of testing based on each test's risk priority number.
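
    The frequency-dependent part of such a ranking can be sketched as follows. All failure rates and severities below are invented stand-ins (the paper derives them from QA records and planning-study simulations); the score simply scales occurrence and severity by the mean time a failure goes undetected under a given test interval.

```python
# Invented stand-in numbers. A failure occurring at some rate and checked every
# `interval` days goes undetected for interval/2 days on average, so a simple
# relative risk is rate * severity * interval / 2.
tests = {
    # test name: (failures per year, severity on a 1-10 scale)
    "output":                     (6.0, 6),
    "lasers":                     (5.0, 5),
    "imaging vs tx isocenter":    (1.0, 9),
    "positioning/repositioning":  (1.0, 8),
    "optical distance indicator": (2.0, 4),
    "jaws vs light field":        (1.5, 4),
}

for interval in (1, 7, 14):  # daily, weekly, biweekly
    print(f"\ntest interval: {interval} day(s)")
    scores = {name: rate / 365.0 * sev * interval / 2.0
              for name, (rate, sev) in tests.items()}
    for name, risk in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"  {name:27s} relative risk {risk:.4f}")
```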

  6. 3D visualization of membrane failures in fuel cells

    NASA Astrophysics Data System (ADS)

    Singh, Yadvinder; Orfino, Francesco P.; Dutta, Monica; Kjeang, Erik

    2017-03-01

    Durability issues in fuel cells, due to chemical and mechanical degradation, are potential impediments in their commercialization. Hydrogen leak development across degraded fuel cell membranes is deemed a lifetime-limiting failure mode and potential safety issue that requires thorough characterization for devising effective mitigation strategies. The scope and depth of failure analysis has, however, been limited by the 2D nature of conventional imaging. In the present work, X-ray computed tomography is introduced as a novel, non-destructive technique for 3D failure analysis. Its capability to acquire true 3D images of membrane damage is demonstrated for the very first time. This approach has enabled unique and in-depth analysis resulting in novel findings regarding the membrane degradation mechanism; these are: significant, exclusive membrane fracture development independent of catalyst layers, localized thinning at crack sites, and demonstration of the critical impact of cracks on fuel cell durability. Evidence of crack initiation within the membrane is demonstrated, and a possible new failure mode different from typical mechanical crack development is identified. X-ray computed tomography is hereby established as a breakthrough approach for comprehensive 3D characterization and reliable failure analysis of fuel cell membranes, and could readily be extended to electrolyzers and flow batteries having similar structure.

  7. IDHEAS – A NEW APPROACH FOR HUMAN RELIABILITY ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    G. W. Parry; J.A Forester; V.N. Dang

    2013-09-01

    This paper describes a method, IDHEAS (Integrated Decision-Tree Human Event Analysis System), that has been developed jointly by the US NRC and EPRI as an improved approach to Human Reliability Analysis (HRA) that is based on an understanding of the cognitive mechanisms and performance influencing factors (PIFs) that affect operator responses. The paper describes the various elements of the method, namely the performance of a detailed cognitive task analysis that is documented in a crew response tree (CRT), the development of the associated time-line to identify the critical tasks, i.e., those whose failure results in a human failure event (HFE), and an approach to quantification that is based on explanations of why the HFE might occur.

  8. Failure probability under parameter uncertainty.

    PubMed

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
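
    A small Monte Carlo illustration of the article's central point, with invented parameters: a decision-maker sets the threshold at the plug-in (1 - p) quantile estimated from n observations, and the achieved expected failure frequency exceeds the nominal p.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu, sigma = 0.0, 1.0       # true (unknown) parameters of the log-loss
p_nominal = 0.01           # required failure probability
n, trials = 30, 20_000     # available sample size; Monte Carlo repetitions
z = norm.ppf(1 - p_nominal)

freq = 0.0
for _ in range(trials):
    sample = rng.normal(mu, sigma, n)
    m, s = sample.mean(), sample.std(ddof=1)   # estimated parameters
    threshold = m + z * s                      # plug-in (1 - p) quantile
    freq += norm.sf((threshold - mu) / sigma)  # true exceedance probability
freq /= trials

print(f"nominal failure probability: {p_nominal:.4f}")
print(f"achieved expected frequency: {freq:.4f}")   # exceeds the nominal value
```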

  9. SU-F-T-247: Collision Risks in a Modern Radiation Oncology Department: An Efficient Approach to Failure Modes and Effects Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schubert, L; Westerly, D; Vinogradskiy, Y

    Purpose: Collisions between treatment equipment and patients are potentially catastrophic. Modern technology now commonly involves automated remote motion during imaging and treatment, yet a systematic assessment to identify and mitigate collision risks has yet to be performed. Failure modes and effects analysis (FMEA) is a method of risk assessment that has been increasingly used in healthcare, yet can be resource intensive. This work presents an efficient approach to FMEA to identify collision risks and implement practical interventions within a modern radiation therapy department. Methods: Potential collisions (e.g., failure modes) were assessed for all treatment and simulation rooms by teams consisting of physicists, therapists, and radiation oncologists. Failure modes were grouped into classes according to similar characteristics. A single group meeting was held to identify implementable interventions for the highest priority classes of failure modes. Results: A total of 60 unique failure modes were identified by 6 different teams of physicists, therapists, and radiation oncologists. Failure modes were grouped into four main classes: specific patient setups, automated equipment motion, manual equipment motion, and actions in QA or service mode. Two of these classes, unusual patient setups and automated machine motion, were identified as being high priority in terms of severity of consequence and addressability by interventions. The two highest risk classes consisted of 33 failure modes (55% of the total). In a single one-hour group meeting, 6 interventions were identified. Those interventions addressed 100% of the high-risk classes of failure modes (55% of all failure modes identified). Conclusion: A class-based approach to FMEA was developed to efficiently identify collision risks and implement interventions in a modern radiation oncology department. Failure modes and interventions will be listed, and a comparison of this approach against traditional FMEA methods will be presented.

  10. Continuum Damage Mechanics Models for the Analysis of Progressive Failure in Open-Hole Tension Laminates

    NASA Technical Reports Server (NTRS)

    Song, Kyonchan; Li, Yingyong; Rose, Cheryl A.

    2011-01-01

    The performance of a state-of-the-art continuum damage mechanics model for intralaminar damage, coupled with a cohesive zone model for delamination, is examined for failure prediction of quasi-isotropic open-hole tension laminates. Limitations of continuum representations of intra-ply damage and the effect of mesh orientation on the analysis predictions are discussed. It is shown that accurate prediction of matrix crack paths and stress redistribution after cracking requires a mesh aligned with the fiber orientation. Based on these results, an aligned mesh is proposed for analysis of the open-hole tension specimens consisting of different meshes within the individual plies, such that the element edges are aligned with the ply fiber direction. The modeling approach is assessed by comparison of analysis predictions to experimental data for specimen configurations in which failure is dominated by complex interactions between matrix cracks and delaminations. It is shown that the different failure mechanisms observed in the tests are well predicted. In addition, the modeling approach is demonstrated to predict proper trends in the effect of scaling on strength and failure mechanisms of quasi-isotropic open-hole tension laminates.

  11. Failure Engineering Study and Accelerated Stress Test Results for the Mars Global Surveyor Spacecraft's Power Shunt Assemblies

    NASA Technical Reports Server (NTRS)

    Gibbel, Mark; Larson, Timothy

    2000-01-01

    An Engineering-of-Failure approach was used to design and execute an accelerated product qualification test in support of a risk assessment of a "work-around" necessitated by an on-orbit failure of another piece of hardware on the Mars Global Surveyor spacecraft. The proposed work-around involved exceeding the previous qualification experience both in terms of extreme cold exposure level and in terms of demonstrated low-cycle fatigue life for the power shunt assemblies. An analysis was performed to identify potential failure sites, modes, and associated failure mechanisms consistent with the new use conditions. A test was then designed and executed which accelerated the failure mechanisms identified by analysis. Verification of the resulting failure mechanism concluded the effort.

  12. Fault management for the Space Station Freedom control center

    NASA Technical Reports Server (NTRS)

    Clark, Colin; Jowers, Steven; Mcnenny, Robert; Culbert, Chris; Kirby, Sarah; Lauritsen, Janet

    1992-01-01

    This paper describes model-based reasoning for fault isolation in complex systems using automated digraph analysis. It discusses the use of the digraph representation as the paradigm for modeling physical systems and a method for executing these failure models to provide real-time failure analysis. It also discusses the generality, ease of development and maintenance, complexity management, and amenability to verification and validation of digraph failure models. It specifically describes how a NASA-developed digraph evaluation tool and an automated process working with that tool can identify failures in a monitored system when supplied with one or more fault indications. This approach is well suited to commercial applications of real-time failure analysis in complex systems because it is both powerful and cost effective.
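
    A minimal sketch of the digraph idea, not the NASA-developed tool itself: failures propagate along directed edges, and one or more fault indications are explained by their common upstream candidates. The graph and node names below are invented.

```python
from collections import defaultdict

# Invented digraph: an edge points from a failure source to the downstream
# indications it can propagate to.
edges = {
    "pump_A":    ["low_flow"],
    "valve_3":   ["low_flow", "high_temp"],
    "sensor_7":  ["high_temp"],
    "low_flow":  ["alarm_flow"],
    "high_temp": ["alarm_temp"],
}

# Reverse adjacency: for each node, its direct upstream causes.
parents = defaultdict(set)
for src, effects in edges.items():
    for e in effects:
        parents[e].add(src)

def ancestors(node):
    """All upstream nodes that could explain an indication at `node`."""
    seen, stack = set(), [node]
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

# Multiple fault indications are explained by the intersection of their
# ancestor sets (the single-fault hypothesis).
alarms = ["alarm_flow", "alarm_temp"]
candidates = set.intersection(*(ancestors(a) for a in alarms))
print("single-fault candidates:", candidates)   # -> {'valve_3'}
```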

  13. Independent Orbiter Assessment (IOA): Analysis of the communication and tracking subsystem

    NASA Technical Reports Server (NTRS)

    Gardner, J. R.; Robinson, W. M.; Trahan, W. H.; Daley, E. S.; Long, W. C.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Communication and Tracking hardware. The IOA analysis process utilized available Communication and Tracking hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  14. Task Decomposition in Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald Laurids; Joe, Jeffrey Clark

    2014-06-01

    In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down, defined as a subset of the PSA, whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up, derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.

  15. Fault tree analysis of failure cause of crushing plant and mixing bed hall at Khoy cement factory in Iran☆

    PubMed Central

    Nouri.Gharahasanlou, Ali; Mokhtarei, Ashkan; Khodayarei, Aliasqar; Ataei, Mohammad

    2014-01-01

    Evaluating and analyzing risk in the mining industry is a new approach for improving machinery performance. Reliability, safety, and maintenance management based on risk analysis can enhance the overall availability and utilization of mining technological systems. This study investigates the failure occurrence probability of the crushing and mixing bed hall department at the Azarabadegan Khoy cement plant by using the fault tree analysis (FTA) method. The results of the analysis for a 200 h operating interval show that the probability of failure occurrence for the crushing system, the conveyor system, and the crushing and mixing bed hall department is 73, 64, and 95 percent, respectively, and the conveyor belt subsystem was found to be the most failure-prone subsystem. Finally, maintenance is proposed as a method to control and prevent the occurrence of failure. PMID:26779433

  16. Fault tree analysis of failure cause of crushing plant and mixing bed hall at Khoy cement factory in Iran.

    PubMed

    Nouri Gharahasanlou, Ali; Mokhtarei, Ashkan; Khodayarei, Aliasqar; Ataei, Mohammad

    2014-04-01

    Evaluating and analyzing risk in the mining industry is a new approach for improving machinery performance. Reliability, safety, and maintenance management based on risk analysis can enhance the overall availability and utilization of mining technological systems. This study investigates the failure occurrence probability of the crushing and mixing bed hall department at the Azarabadegan Khoy cement plant by using the fault tree analysis (FTA) method. The results of the analysis for a 200 h operating interval show that the probability of failure occurrence for the crushing system, the conveyor system, and the crushing and mixing bed hall department is 73, 64, and 95 percent, respectively, and the conveyor belt subsystem was found to be the most failure-prone subsystem. Finally, maintenance is proposed as a method to control and prevent the occurrence of failure.
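
    The arithmetic behind such an FTA result can be illustrated with a short sketch. Assuming constant failure rates and independent subsystems (the rates below are back-calculated from the reported 73% and 64% figures, not taken from the paper), an OR gate combines the subsystem probabilities:

```python
import math

t = 200.0  # operating interval, hours

# Constant failure rates (per hour) back-calculated from the reported
# subsystem probabilities at t = 200 h; independence is assumed.
rate_crushing = -math.log(1 - 0.73) / t
rate_conveyor = -math.log(1 - 0.64) / t

p_crushing = 1 - math.exp(-rate_crushing * t)   # 0.73 by construction
p_conveyor = 1 - math.exp(-rate_conveyor * t)   # 0.64 by construction

# OR gate: the department fails if either subsystem fails.
p_or = 1 - (1 - p_crushing) * (1 - p_conveyor)
print(f"crushing {p_crushing:.2f}, conveyor {p_conveyor:.2f}, "
      f"OR-gate combination {p_or:.2f}")        # ~0.90
```

    The two subsystems alone combine to roughly 0.90; the 95 percent reported for the department evidently reflects additional basic events in the full fault tree.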

  17. The analysis of the pilot's cognitive and decision processes

    NASA Technical Reports Server (NTRS)

    Curry, R. E.

    1975-01-01

    Articles are presented on pilot performance in zero-visibility precision approach, failure detection by pilots during automatic landing, experiments in pilot decision-making during simulated low visibility approaches, a multinomial maximum likelihood program, and a random search algorithm for laboratory computers. Other topics discussed include detection of system failures in multi-axis tasks and changes in pilot workload during an instrument landing.

  18. Simulation as a preoperative planning approach in advanced heart failure patients. A retrospective clinical analysis.

    PubMed

    Capoccia, Massimo; Marconi, Silvia; Singh, Sanjeet Avtaar; Pisanelli, Domenico M; De Lazzari, Claudio

    2018-05-02

    Modelling and simulation may become clinically applicable tools for detailed evaluation of the cardiovascular system and for clinical decision-making to guide therapeutic intervention. Models based on the pressure-volume relationship and a zero-dimensional representation of the cardiovascular system may be a suitable choice given their simplicity and versatility. This approach has great potential for application in heart failure, where left ventricular assist devices have played a significant role as a bridge to transplant and, more recently, as a long-term solution for non-eligible candidates. We sought to investigate the value of simulation in the context of three heart failure patients with a view to predicting or guiding further management. CARDIOSIM© was the software used for this purpose. The study was based on retrospective analysis of haemodynamic data previously discussed at a multidisciplinary meeting. The outcome of the simulations addressed the value of a more quantitative approach in the clinical decision process. Although previous experience, co-morbidities and the risk of potentially fatal complications play a role in clinical decision-making, patient-specific modelling may become a daily approach for selection and optimisation of device-based treatment for heart failure patients. Willingness to adopt this integrated approach may be the key to further progress.
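
    A zero-dimensional model of the kind described reduces the circulation to a few lumped parameters. Below is a minimal two-element Windkessel sketch in the same spirit; it is not CARDIOSIM, and all parameter values are generic textbook-scale numbers.

```python
import math

# Two-element Windkessel: C * dP/dt = Q_in(t) - P / R.
R = 1.0    # peripheral resistance, mmHg*s/mL
C = 1.2    # arterial compliance, mL/mmHg
T = 0.8    # cardiac period, s
SV = 70.0  # stroke volume, mL

def inflow(t):
    """Pulsatile aortic inflow: half-sine ejection over the first 0.3 s."""
    tau = t % T
    if tau >= 0.3:
        return 0.0
    return (math.pi * SV / (2 * 0.3)) * math.sin(math.pi * tau / 0.3)

P, dt = 80.0, 1e-4
p_min, p_max = float("inf"), float("-inf")
for step in range(int(10 * T / dt)):        # simulate 10 beats
    t = step * dt
    P += dt * (inflow(t) - P / R) / C       # forward-Euler update
    if t > 9 * T:                           # record extremes of the last beat
        p_min, p_max = min(p_min, P), max(p_max, P)

print(f"diastolic/systolic arterial pressure: {p_min:.0f}/{p_max:.0f} mmHg")
```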

  19. TU-AB-BRD-02: Failure Modes and Effects Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huq, M.

    2015-06-15

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are often caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100 that has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how these tools can be used in a given radiotherapy clinic to develop a risk-based quality management program. Learning Objectives: (1) Learn how to design a process map for a radiotherapy process; (2) learn how to perform failure modes and effects analysis for a given process; (3) learn what fault trees are all about; (4) learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Disclosures: Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.

  20. A quality risk management model approach for cell therapy manufacturing.

    PubMed

    Lopez, Fabio; Di Bartolo, Chiara; Piazza, Tommaso; Passannanti, Antonino; Gerlach, Jörg C; Gridelli, Bruno; Triolo, Fabio

    2010-12-01

    International regulatory authorities view risk management as an essential production need for the development of innovative, somatic cell-based therapies in regenerative medicine. The available risk management guidelines, however, provide little guidance on specific risk analysis approaches and procedures applicable in clinical cell therapy manufacturing. This raises a number of problems. Cell manufacturing is a poorly automated process, prone to operator-introduced variations, and affected by heterogeneity of the processed organs/tissues and lot-dependent variability of reagent (e.g., collagenase) efficiency. In this study, the principal challenges faced in a cell-based product manufacturing context (i.e., high dependence on human intervention and absence of reference standards for acceptable risk levels) are identified and addressed, and a risk management model approach applicable to manufacturing of cells for clinical use is described for the first time. The use of the heuristic and pseudo-quantitative failure mode and effect analysis/failure mode and critical effect analysis risk analysis technique associated with direct estimation of severity, occurrence, and detection is, in this specific context, as effective as, but more efficient than, the analytic hierarchy process. Moreover, a severity/occurrence matrix and Pareto analysis can be successfully adopted to identify priority failure modes on which to act to mitigate risks. The application of this approach to clinical cell therapy manufacturing in regenerative medicine is also discussed. © 2010 Society for Risk Analysis.

  1. Independent Orbiter Assessment (IOA): Analysis of the auxiliary power unit

    NASA Technical Reports Server (NTRS)

    Barnes, J. E.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Auxiliary Power Unit (APU). The APUs are required to provide power to the Orbiter hydraulics systems during ascent and entry flight phases for aerosurface actuation, main engine gimballing, landing gear extension, and other vital functions. For analysis purposes, the APU system was broken down into ten functional subsystems. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. A preponderance of 1/1 criticality items were related to failures that allowed the hydrazine fuel to escape into the Orbiter aft compartment, creating a severe fire hazard, and failures that caused loss of the gas generator injector cooling system.

  2. Progressive Failure And Life Prediction of Ceramic and Textile Composites

    NASA Technical Reports Server (NTRS)

    Xue, David Y.; Shi, Yucheng; Katikala, Madhu; Johnston, William M., Jr.; Card, Michael F.

    1998-01-01

    An engineering approach to predict the fatigue life and progressive failure of multilayered composite and textile laminates is presented. Analytical models which account for matrix cracking, statistical fiber failures, and nonlinear stress-strain behavior have been developed for both composites and textiles. The analysis method is based on a combined micromechanics, fracture mechanics, and failure statistics analysis. Experimentally derived empirical coefficients are used to account for the fiber-matrix interface, fiber strength, and fiber-matrix stiffness reductions. Similar approaches were applied to textiles using Repeating Unit Cells. In composite fatigue analysis, Walker's equation is applied for matrix fatigue cracking and Heywood's formulation is used for fiber strength fatigue degradation. The analysis has been compared with experiment with good agreement. Comparisons were made with Graphite-Epoxy, C/SiC, and Nicalon/CAS composite materials. For textile materials, comparisons were made with triaxial braided and plain weave materials under biaxial or uniaxial tension. Fatigue predictions were compared with test data obtained from plain weave C/SiC materials tested at AS&M. Computer codes were developed to perform the analysis. Composite Progressive Failure Analysis for Laminates is contained in the code CPFail. Micromechanics Analysis for Textile Composites is contained in the code MicroTex. Both codes were adapted to run as subroutines for the finite element code ABAQUS, as CPFail-ABAQUS and MicroTex-ABAQUS. A graphical user interface (GUI) was developed to connect CPFail and MicroTex with ABAQUS.

  3. Competing risk models in reliability systems, an exponential distribution model with Bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, I.

    2018-03-01

    The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lifetimes of many systems and is available in a simple statistical form. The characteristic of this distribution is a constant hazard rate. The exponential distribution is a special case of the Weibull distribution (shape parameter equal to one). In this paper we introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and present its analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. The model description covers the likelihood function, followed by the posterior function and the estimation of the point, interval, hazard function, and reliability quantities. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
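
    A small sketch of the quantities involved, under the stated assumptions (independent exponential causes, non-informative prior); the exposure time and failure counts below are invented. With total rate lambda = sum of the lambda_j, the net probability for risk j by time t is 1 - exp(-lambda_j * t) and the crude probability is (lambda_j / lambda) * (1 - exp(-lambda * t)).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: total time on test and failure counts per cause.
T_total = 5000.0                 # cumulative exposure, hours
deaths = {"cause_1": 7, "cause_2": 3}
t = 500.0                        # horizon for the probabilities below

# With the non-informative prior pi(lambda) ~ 1/lambda, the posterior of each
# exponential rate is Gamma(d_j, rate=T_total), i.e., scale = 1/T_total.
samples = {c: rng.gamma(d, 1.0 / T_total, 10_000) for c, d in deaths.items()}
lam_tot = sum(samples.values())

for c in deaths:
    net = 1 - np.exp(-samples[c] * t)                          # only risk c present
    crude = samples[c] / lam_tot * (1 - np.exp(-lam_tot * t))  # other risks active
    print(f"{c}: net {net.mean():.3f}, crude {crude.mean():.3f}, "
          f"95% interval for net {np.percentile(net, [2.5, 97.5]).round(3)}")
```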

  4. A Fault Tree Approach to Analysis of Behavioral Systems: An Overview.

    ERIC Educational Resources Information Center

    Stephens, Kent G.

    Developed at Brigham Young University, Fault Tree Analysis (FTA) is a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur. It provides a logical, step-by-step description of possible failure events within a system and their interaction--the combinations of potential…
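
    As a concrete illustration of the AND/OR gate logic this record describes, here is a minimal, hypothetical fault-tree evaluator; the tree, event names, and probabilities are invented, and basic events are assumed independent.

```python
# Minimal fault-tree evaluator: a node is either a basic event (string) or a
# tuple ("AND"|"OR", child, child, ...); basic events are assumed independent.
def evaluate(node, p_basic):
    if isinstance(node, str):
        return p_basic[node]
    gate, children = node[0], node[1:]
    probs = [evaluate(child, p_basic) for child in children]
    if gate == "AND":                 # all children must occur together
        result = 1.0
        for p in probs:
            result *= p
        return result
    if gate == "OR":                  # at least one child occurs
        result = 1.0
        for p in probs:
            result *= 1.0 - p
        return 1.0 - result
    raise ValueError(f"unknown gate: {gate}")

# Invented example: the training exercise fails if the instructions are wrong
# OR both the simulator and its backup are down.
tree = ("OR", "bad_instructions", ("AND", "sim_down", "backup_down"))
p = {"bad_instructions": 0.02, "sim_down": 0.10, "backup_down": 0.05}
print(f"P(top event) = {evaluate(tree, p):.4f}")   # 0.0249
```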

  5. Fuzzy Failure Analysis: A New Approach to Service Quality Analysis in Higher Education Institutions (Case Study: Vali-e-asr University of Rafsanjan-Iran)

    ERIC Educational Resources Information Center

    Takalo, Salim Karimi; Abadi, Ali Reza Naser Sadr; Vesal, Seyed Mahdi; Mirzaei, Amir; Nawaser, Khaled

    2013-01-01

    In recent years, concurrent with steep increase in the growth of higher education institutions, improving of educational service quality with an emphasis on students' satisfaction has become an important issue. The present study is going to use the Failure Mode and Effect Analysis (FMEA) in order to evaluate the quality of educational services in…

  6. [Failure mode and effects analysis (FMEA) of insulin in a mother-child university-affiliated health center].

    PubMed

    Berruyer, M; Atkinson, S; Lebel, D; Bussières, J-F

    2016-01-01

    Insulin is a high-alert drug. The main objective of this descriptive cross-sectional study was to evaluate the risks associated with insulin use in healthcare centers. The secondary objective was to propose corrective measures to reduce the main risks associated with the most critical failure modes in the analysis. We conducted a failure mode and effects analysis (FMEA) in obstetrics-gynecology, neonatology and pediatrics. Five multidisciplinary meetings occurred in August 2013. A total of 44 out of 49 failure modes were analyzed. Nine out of 44 (20%) failure modes were deemed critical, with a criticality score ranging from 540 to 720. Following the multidisciplinary meetings, everybody agreed that an FMEA was a useful tool to identify failure modes and their relative importance. This approach identified many corrective measures. This shared experience increased awareness of safety issues with insulin in our mother-child center. This study identified the main failure modes and associated corrective measures. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  7. Failure of Non-Circular Composite Cylinders

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.

    2004-01-01

    In this study, a progressive failure analysis is used to investigate leakage in internally pressurized non-circular composite cylinders. This type of approach accounts for the localized loss of stiffness when material failure occurs at some location in a structure by degrading the local material elastic properties by a certain factor. The manner in which this degradation of material properties takes place depends on the failure modes, which are determined by the application of a failure criterion. The finite-element code STAGS, which has the capability to perform progressive failure analysis using different degradation schemes and failure criteria, is utilized to analyze laboratory-scale, graphite-epoxy, elliptical cylinders with quasi-isotropic, circumferentially-stiff, and axially-stiff material orthotropies. The results are divided into two parts. The first part shows that leakage, which is assumed to develop if there is material failure in every layer at some axial and circumferential location within the cylinder, does not occur without failure of fibers. Moreover, before fibers begin to fail, only matrix tensile failure, or matrix cracking, takes place, and at least one layer in all three cylinders studied remains uncracked, preventing the formation of a leakage path. That determination is corroborated by the use of different degradation schemes and various failure criteria. Among the degradation schemes investigated are the degradation of different engineering properties, the use of various degradation factors, the recursive or non-recursive degradation of the engineering properties, and the degradation of material properties using different computational approaches. The failure criteria used in the analysis include the non-interactive maximum stress criterion and the interactive Hashin and Tsai-Wu criteria. The second part of the results shows that leakage occurs due to a combination of matrix tensile and compressive, fiber tensile and compressive, and in-plane shear failure modes in all three cylinders. Leakage develops after a relatively low amount of fiber damage, at about the same pressure for all three material orthotropies, and at approximately the same location.

  8. Determination of UAV pre-flight Checklist for flight test purpose using qualitative failure analysis

    NASA Astrophysics Data System (ADS)

    Hendarko; Indriyanto, T.; Syardianto; Maulana, F. A.

    2018-05-01

    Safety aspects are of paramount importance in flight, especially in the flight test phase. Before performing any flight tests of either manned or unmanned aircraft, one should include pre-flight checklists as a required safety document in the flight test plan. This paper reports on the development of a new approach for determining pre-flight checklists for UAV flight tests based on the aircraft's failure analysis. Lapan's LSA (Light Surveillance Aircraft) is used as a case study, assuming this aircraft has been transformed into an unmanned version. Failure analysis is performed on the LSA using the fault tree analysis (FTA) method. The analysis is focused on the propulsion system and flight control system, as failure of these systems would lead to catastrophic events. The pre-flight checklist of the UAV is then constructed based on the basic causes obtained from the failure analysis.

  9. Dynamics of functional failures and recovery in complex road networks

    NASA Astrophysics Data System (ADS)

    Zhan, Xianyuan; Ukkusuri, Satish V.; Rao, P. Suresh C.

    2017-11-01

    We propose a new framework for modeling the evolution of functional failures and recoveries in complex networks, with traffic congestion on road networks as the case study. Unlike conventional approaches, we transform the evolution of functional states into an equivalent dynamic structural process: dual-vertex splitting and coalescing embedded within the original network structure. The proposed model successfully explains traffic congestion and recovery patterns at the city scale based on high-resolution data from two megacities. Numerical analysis shows that certain network structural attributes can amplify or suppress cascading functional failures. Our approach represents a new general framework to model functional failures and recoveries in flow-based networks and allows understanding of the interplay between structure and function for flow-induced failure propagation and recovery.

  10. Reliability Quantification of Advanced Stirling Convertor (ASC) Components

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward

    2010-01-01

    The Advanced Stirling Convertor is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17 year mission, the ASC must provide power with desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system and component level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationships among the design variables based on physics, mechanics, material behavior models, the interaction of different components, and their respective disciplines such as structures, materials, fluid, thermal, mechanical, electrical, etc. In addition, these models are based on the available test data, which can be updated and the analysis refined as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has demonstrated itself to be a superior technique to conventional reliability approaches based on failure rates derived from similar equipment or simply expert judgment.

  11. Structural analysis considerations for wind turbine blades

    NASA Technical Reports Server (NTRS)

    Spera, D. A.

    1979-01-01

    Approaches to the structural analysis of wind turbine blade designs are reviewed. Specifications and materials data are discussed along with the analysis of vibrations, loads, stresses, and failure modes.

  12. The GRASP 3: Graphical Reliability Analysis Simulation Program. Version 3: A users' manual and modelling guide

    NASA Technical Reports Server (NTRS)

    Phillips, D. T.; Manseur, B.; Foster, J. W.

    1982-01-01

    Alternate definitions of system failure create complex analyses for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
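
    The GRASP-style simulation idea can be sketched briefly: sample alternating failure and repair times for each component and apply the chosen system-failure definition. The sketch below is illustrative, not the GRASP program; the MTBF/MTTR values and the redundant-pair failure definition are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def component_availability(mtbf, mttr, horizon=1e5):
    """Simulate alternating failure/repair cycles; return fraction of time up."""
    t, up_time = 0.0, 0.0
    while t < horizon:
        up = rng.exponential(mtbf)       # time to next failure
        down = rng.exponential(mttr)     # time to complete repair
        up_time += min(up, horizon - t)  # clip the last cycle at the horizon
        t += up + down
    return up_time / horizon

# Example system-failure definition: both redundant pumps down at once
# (independence between the pumps is assumed for the combination step).
a_pump = component_availability(mtbf=400.0, mttr=20.0)
print(f"single-pump availability:    {a_pump:.4f}")
print(f"redundant-pair availability: {1 - (1 - a_pump) ** 2:.4f}")
```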

  13. Fault Tree Analysis: An Operations Research Tool for Identifying and Reducing Undesired Events in Training.

    ERIC Educational Resources Information Center

    Barker, Bruce O.; Petersen, Paul D.

    This paper explores the fault-tree analysis approach to isolating failure modes within a system. Fault tree analysis investigates potentially undesirable events and then looks for failures in sequence that would lead to their occurrence. Relationships among these events are symbolized by AND or OR logic gates, with AND used when single events must coexist to…

  14. Failure Modes and Effects Analysis (FMEA) Assistant Tool Feasibility Study

    NASA Technical Reports Server (NTRS)

    Flores, Melissa; Malin, Jane T.

    2013-01-01

    An effort to determine the feasibility of a software tool to assist in Failure Modes and Effects Analysis (FMEA) has been completed. This new and unique approach to FMEA uses model based systems engineering concepts to recommend failure modes, causes, and effects to the user after they have made several selections from pick lists about a component's functions and inputs/outputs. Recommendations are made based on a library using common failure modes identified over the course of several major human spaceflight programs. However, the tool could be adapted for use in a wide range of applications from NASA to the energy industry.

  15. Failure Modes and Effects Analysis (FMEA) Assistant Tool Feasibility Study

    NASA Astrophysics Data System (ADS)

    Flores, Melissa D.; Malin, Jane T.; Fleming, Land D.

    2013-09-01

    An effort to determine the feasibility of a software tool to assist in Failure Modes and Effects Analysis (FMEA) has been completed. This new and unique approach to FMEA uses model based systems engineering concepts to recommend failure modes, causes, and effects to the user after they have made several selections from pick lists about a component's functions and inputs/outputs. Recommendations are made based on a library using common failure modes identified over the course of several major human spaceflight programs. However, the tool could be adapted for use in a wide range of applications from NASA to the energy industry.

  16. TU-AB-BRD-03: Fault Tree Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunscombe, P.

    2015-06-15

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are often caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100 that has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how these tools can be used in a given radiotherapy clinic to develop a risk-based quality management program. Learning Objectives: (1) Learn how to design a process map for a radiotherapy process; (2) learn how to perform failure modes and effects analysis for a given process; (3) learn what fault trees are all about; (4) learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Disclosures: Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.

  17. Regression analysis of case K interval-censored failure time data in the presence of informative censoring.

    PubMed

    Wang, Peijie; Zhao, Hui; Sun, Jianguo

    2016-12-01

    Interval-censored failure time data occur in many fields such as demography, economics, medical research, and reliability, and many inference procedures for them have been developed (Sun, 2006; Chen, Sun, and Peace, 2012). However, most of the existing approaches assume that the mechanism that yields interval censoring is independent of the failure time of interest, and it is clear that this may not be true in practice (Zhang et al., 2007; Ma, Hu, and Sun, 2015). In this article, we consider regression analysis of case K interval-censored failure time data when the censoring mechanism may be related to the failure time of interest. For the problem, an estimated sieve maximum-likelihood approach is proposed for data arising from the proportional hazards frailty model, and for estimation a two-step procedure is presented. In addition, the asymptotic properties of the proposed estimators of regression parameters are established, and an extensive simulation study suggests that the method works well. Finally, we apply the method to a set of real interval-censored data that motivated this study. © 2016, The International Biometric Society.

  18. An engineering approach to the prediction of fatigue behavior of unnotched/notched fiber reinforced composite laminates

    NASA Technical Reports Server (NTRS)

    Kulkarni, S. V.; Mclaughlin, P. V., Jr.

    1978-01-01

    An engineering approach is proposed for predicting unnotched/notched laminate fatigue behavior from basic lamina fatigue data. The fatigue analysis procedure was used to determine the laminate property (strength/stiffness) degradation as a function of fatigue cycles in uniaxial tension and in plane shear. These properties were then introduced into the failure model for a notched laminate to obtain damage growth, residual strength, and failure mode. The approach is thus essentially a combination of the cumulative damage accumulation (akin to the Miner-Palmgren hypothesis and its derivatives) and the damage growth rate (similar to the fracture mechanics approach) philosophies. An analysis/experiment correlation appears to confirm the basic postulates of material wearout and the predictability of laminate fatigue properties from lamina fatigue data.

  19. Independent Orbiter Assessment (IOA): Analysis of the life support and airlock support subsystems

    NASA Technical Reports Server (NTRS)

    Arbet, Jim; Duffy, R.; Barickman, K.; Saiidi, Mo J.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Life Support System (LSS) and Airlock Support System (ALSS). Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. The LSS provides for the management of the supply water, collection of metabolic waste, management of waste water, smoke detection, and fire suppression. The ALSS provides water, oxygen, and electricity to support an extravehicular activity in the airlock.

  20. FMEA of manual and automated methods for commissioning a radiotherapy treatment planning system.

    PubMed

    Wexler, Amy; Gu, Bruce; Goddu, Sreekrishna; Mutic, Maya; Yaddanapudi, Sridhar; Olsen, Lindsey; Harry, Taylor; Noel, Camille; Pawlicki, Todd; Mutic, Sasa; Cai, Bin

    2017-09-01

    To evaluate the level of risk involved in treatment planning system (TPS) commissioning using a manual test procedure (MTP), and to compare the associated process-based risk to that of an automated commissioning process (ACP) by performing an in-depth failure modes and effects analysis (FMEA). The authors collaborated to determine the potential failure modes of the TPS commissioning process using (a) approaches involving manual data measurement, modeling, and validation tests and (b) an automated process utilizing application programming interface (API) scripting, preloaded and premodeled standard radiation beam data, a digital heterogeneous phantom, and an automated commissioning test suite (ACTS). The severity (S), occurrence (O), and detectability (D) were scored for each failure mode and the risk priority numbers (RPN) were derived based on the TG-100 scale. Failure modes were then analyzed and ranked based on RPN. The total number of failure modes, RPN scores, and the top 10 failure modes with highest risk are described and cross-compared between the two approaches. RPN reduction analysis is also presented and used as another quantifiable metric to evaluate the proposed approach. The FMEA of the MTP resulted in 47 failure modes with an RPN_ave of 161 and S_ave of 6.7. The highest risk process of "Measurement Equipment Selection" resulted in an RPN_max of 640. The FMEA of the ACP resulted in 36 failure modes with an RPN_ave of 73 and S_ave of 6.7. The highest risk process of "EPID Calibration" resulted in an RPN_max of 576. An FMEA of treatment planning commissioning tests using automation and standardization via API scripting, preloaded and premodeled standard beam data, and digital phantoms suggests that errors and risks may be reduced through the use of an ACP. © 2017 American Association of Physicists in Medicine.
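
    The RPN bookkeeping used in such a comparison is easy to sketch. The failure modes and S/O/D scores below are invented, chosen only so that the two highest-risk items reproduce the reported RPN_max values of 640 and 576.

```python
# Invented failure modes and S/O/D scores on TG-100-style 1-10 scales; the two
# top items are chosen so their RPNs reproduce the reported maxima (640, 576).
manual = {
    "measurement equipment selection": (8, 8, 10),
    "beam data transcription":         (7, 6, 6),
    "model parameter entry":           (8, 4, 5),
}
automated = {
    "EPID calibration":     (8, 8, 9),
    "script configuration": (7, 3, 4),
}

def summarize(name, modes):
    rpn = {mode: s * o * d for mode, (s, o, d) in modes.items()}
    worst = max(rpn, key=rpn.get)
    print(f"{name:9s} RPN_ave = {sum(rpn.values()) / len(rpn):6.1f}, "
          f"RPN_max = {rpn[worst]} ({worst})")

summarize("manual", manual)
summarize("automated", automated)
```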

  1. Weighted Fuzzy Risk Priority Number Evaluation of Turbine and Compressor Blades Considering Failure Mode Correlations

    NASA Astrophysics Data System (ADS)

    Gan, Luping; Li, Yan-Feng; Zhu, Shun-Peng; Yang, Yuan-Jian; Huang, Hong-Zhong

    2014-06-01

    Failure mode, effects and criticality analysis (FMECA) and fault tree analysis (FTA) are powerful tools for evaluating the reliability of systems. Although single failure modes can be efficiently addressed by traditional FMECA, multiple failure modes and component correlations in complex systems cannot be effectively evaluated. In addition, correlated variables and parameters are often assumed to be precisely known in quantitative analysis. In fact, due to the lack of information, epistemic uncertainty commonly exists in engineering design. To solve these problems, the advantages of FMECA, FTA, fuzzy theory, and Copula theory are integrated into a unified hybrid method called the fuzzy probability weighted geometric mean (FPWGM) risk priority number (RPN) method. The epistemic uncertainty of risk variables and parameters is characterized by fuzzy numbers to obtain a fuzzy weighted geometric mean (FWGM) RPN for a single failure mode. Multiple failure modes are connected using minimum cut sets (MCS), and Boolean logic is used to combine the fuzzy risk priority number (FRPN) of each MCS. Moreover, Copula theory is applied to analyze the correlation of multiple failure modes in order to derive the failure probabilities of each MCS. Compared to the case where dependency among multiple failure modes is not considered, the Copula modeling approach eliminates the error of reliability analysis. Furthermore, for purposes of quantitative analysis, probability importance weights derived from the failure probabilities are assigned to the FWGM RPN to reassess the risk priority, which generalizes the definitions of probability weight and FRPN and results in a more accurate estimation than that of the traditional models. Finally, a basic fatigue analysis case drawn from turbine and compressor blades in an aeroengine is used to demonstrate the effectiveness and robustness of the presented method. The result provides some important insights on fatigue reliability analysis and risk priority assessment of structural systems under failure correlations.
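
    A minimal sketch of the fuzzy weighted-geometric-mean step only (the paper's minimum-cut-set and Copula machinery is omitted): S, O, and D are represented as triangular fuzzy numbers and combined component-wise, then defuzzified by a centroid. All scores and weights below are invented.

```python
import numpy as np

def fwgm(factors, weights):
    """Weighted geometric mean of triangular fuzzy numbers (l, m, h)."""
    tri = np.ones(3)
    for (l, m, h), w in zip(factors, weights):
        tri *= np.array([l, m, h], dtype=float) ** w
    return tri

def defuzzify(tri):
    """Centroid of a triangular fuzzy number."""
    return tri.sum() / 3.0

# Epistemic uncertainty about a hypothetical blade fatigue mode, 1-10 scales.
severity   = (7, 8, 9)
occurrence = (3, 4, 6)
detection  = (5, 6, 8)
weights    = (0.4, 0.35, 0.25)   # relative importance, summing to 1

frpn = fwgm([severity, occurrence, detection], weights)
print(f"fuzzy RPN (l, m, h) = {frpn.round(2)}, crisp value = {defuzzify(frpn):.2f}")
```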

  2. Risk assessment of component failure modes and human errors using a new FMECA approach: application in the safety analysis of HDR brachytherapy.

    PubMed

    Giardina, M; Castiglia, F; Tomarchio, E

    2014-12-01

    Failure mode, effects and criticality analysis (FMECA) is a safety technique extensively used in many industrial fields to identify and prevent potential failures. In traditional FMECA, the risk priority number (RPN) is determined to rank the failure modes; however, the method has been criticised for several weaknesses, and it is unable to deal adequately with human error or negligence. In this paper, a new versatile fuzzy rule-based assessment model is proposed to evaluate the RPN index and to rank both component failures and human errors. The proposed methodology is applied to potential radiological over-exposure of patients during high-dose-rate brachytherapy treatments. Critical analysis of the results provides recommendations and suggestions regarding safety provisions for the equipment and procedures required to reduce the occurrence of accidental events.
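
    A toy sketch of the rule-based idea: linguistic ratings are mapped to a risk level through an explicit rule base rather than through the product O*S*D. The rules and thresholds below are illustrative assumptions; the study's actual fuzzy rule base for HDR brachytherapy is not reproduced.

    ```python
    # Hypothetical rule base mapping linguistic ratings to a risk category.
    LEVELS = {"low": 1, "medium": 2, "high": 3}

    def rule_based_risk(occurrence, severity, detectability):
        score = LEVELS[occurrence] + LEVELS[severity] + LEVELS[detectability]
        if score >= 8:
            return "very high"
        if score >= 6:
            return "high"
        if score >= 4:
            return "medium"
        return "low"

    print(rule_based_risk("medium", "high", "high"))   # -> 'very high'
    ```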

  3. Design of Rock Slope Reinforcement: An Himalayan Case Study

    NASA Astrophysics Data System (ADS)

    Tiwari, Gaurav; Latha, Gali Madhavi

    2016-06-01

    The stability analysis of the two abutment slopes of a railway bridge proposed at about 359 m above ground level, crossing a river and connecting two hill faces in the Himalayas, India, is presented. The bridge is located in a zone of high seismic activity. The rock slopes are composed of a heavily jointed rock mass, and the spacing, dip, and dip direction of the joint sets vary at different locations. Geological mapping was carried out to characterize all discontinuities present along the slopes. Laboratory and field investigations were conducted to assess the geotechnical properties of the intact rock, rock mass, and joint infill. Stability analyses of these rock slopes were carried out using numerical programmes. Loads from the foundations resting on the slopes and seismic accelerations estimated from site-specific ground response analysis were considered. The proposed slope profile, with several berms between successive foundations, was simulated in the numerical model. An equivalent continuum approach with the Hoek-Brown failure criterion was initially used in a finite element model to assess the global stability of the slope abutments. In the second stage, a finite element analysis of the rock slopes, with all joint sets explicitly incorporated into the numerical model with their orientations, spacing, and properties, was carried out using a continuum-with-joints approach. The continuum-with-joints approach was able to capture local failures in some of the slope sections, which were verified using wedge failure analysis and stereographic projections. Based on the slope deformations and failure patterns observed in the numerical analyses, rock anchors were designed to achieve the target factors of safety against failure while keeping deformations within permissible limits. A detailed design of the rock anchors and a comparison of the stability of the slopes with and without reinforcement are presented.
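
    For reference, the generalized Hoek-Brown criterion used in the equivalent-continuum stage relates major and minor principal stresses at failure. A minimal sketch with illustrative parameter values (not those of the case study):

    ```python
    # Generalized Hoek-Brown criterion:
    # sigma1 = sigma3 + sigma_ci * (m_b * sigma3 / sigma_ci + s) ** a
    def hoek_brown_sigma1(sigma3, sigma_ci, m_b, s, a):
        return sigma3 + sigma_ci * (m_b * sigma3 / sigma_ci + s) ** a

    # Example: jointed rock mass with assumed parameters (stresses in MPa).
    print(hoek_brown_sigma1(sigma3=1.0, sigma_ci=50.0, m_b=2.0, s=0.004, a=0.51))
    ```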

  4. A comparison of two prospective risk analysis methods: Traditional FMEA and a modified healthcare FMEA.

    PubMed

    Rah, Jeong-Eun; Manger, Ryan P; Yock, Adam D; Kim, Gwe-Ya

    2016-12-01

    To examine the abilities of traditional failure mode and effects analysis (FMEA) and modified healthcare FMEA (m-HFMEA) scoring methods by comparing their degree of congruence in identifying high-risk failures. The authors applied the two prospective quality management methods to surface image guided, linac-based radiosurgery (SIG-RS). For the traditional FMEA, decisions on how to improve an operation were based on the risk priority number (RPN), the product of three indices: occurrence, severity, and detectability. The m-HFMEA approach utilized two indices, severity and frequency. A risk inventory matrix was divided into four categories: very low, low, high, and very high. For high-risk events, an additional evaluation was performed; based upon the criticality of the process, it was decided whether additional safety measures were needed and what they would comprise. The two methods were independently compared to determine whether the results and rated risks matched. The authors' results showed an agreement of 85% between the FMEA and m-HFMEA approaches for the top 20 risks among SIG-RS-specific failure modes. The main differences between the two approaches were the distribution of the values and the observation that failure modes (52, 54, 154) with high m-HFMEA scores do not necessarily have high FMEA-RPN scores. In the m-HFMEA analysis, once the risk score is determined, the failure mode should be investigated more thoroughly on the basis of the established HFMEA Decision Tree™. m-HFMEA is inductive, because it requires the identification of consequences from causes, and semi-quantitative, since it allows the prioritization of high risks and mitigation measures. It is therefore a useful prospective risk analysis tool for radiotherapy.
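
    A minimal sketch of the two-index m-HFMEA scoring described above: severity and frequency ratings map into a four-category risk inventory matrix. The rating scale and thresholds are illustrative assumptions, not those of the study.

    ```python
    # Hypothetical two-index m-HFMEA categorization (severity x frequency).
    def m_hfmea_category(severity, frequency):
        score = severity * frequency          # hazard score on assumed 1-4 scales
        if score >= 12:
            return "very high"
        if score >= 8:
            return "high"
        if score >= 4:
            return "low"
        return "very low"

    print(m_hfmea_category(severity=4, frequency=3))   # -> 'very high'
    ```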

  5. Life Cycle Costing: A Working Level Approach

    DTIC Science & Technology

    1981-06-01

    Effects Analysis (FMEA) ... Logistics Performance Factors (LPFs) ... Planning the Use of Life Cycle Cost in the Demonstration...form. Failure Mode and Effects Analysis (FMEA). Description. FMEA is a technique that attempts to improve the design of any particular unit. The FMEA ...failure modes and also eliminate extra parts or ones that are used to achieve more performance than is necessary [16:5-14]. Advantages. FMEA forces

  6. Sequential experimental design based generalised ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, surrogate modelling has gained wide popularity in the fields of uncertainty quantification, optimization, model exploration, and sensitivity analysis. This approach relies on an experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. To address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED is coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach is applied to predicting the probability of failure of three structural mechanics problems, and it is observed to yield accurate and computationally efficient estimates of the failure probability.
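
    Once a surrogate supplies the first two response moments, a failure probability can be estimated from them. A minimal sketch under a normality assumption that is ours, made for illustration (the paper's generalised analytical expressions are not reproduced):

    ```python
    from statistics import NormalDist

    # If the response is approximated as normal with mean mu and standard
    # deviation sigma, Pf ~= P(response > capacity) for a capacity threshold.
    def failure_probability(mu, sigma, capacity):
        return 1.0 - NormalDist(mu, sigma).cdf(capacity)

    print(failure_probability(mu=100.0, sigma=12.0, capacity=130.0))
    ```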

  7. Sequential experimental design based generalised ANOVA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

    Over the last decade, surrogate modelling has gained wide popularity in the fields of uncertainty quantification, optimization, model exploration, and sensitivity analysis. This approach relies on an experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. To address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED is coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach is applied to predicting the probability of failure of three structural mechanics problems, and it is observed to yield accurate and computationally efficient estimates of the failure probability.

  8. TU-AB-BRD-00: Task Group 100

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2015-06-15

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are caused by flaws in clinical processes rather than by device failures. This suggests the need for a quality management program based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, that they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100, which has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how they can be used in a given radiotherapy clinic to develop a risk-based quality management program. Learning Objectives: (1) Learn how to design a process map for a radiotherapy process; (2) Learn how to perform failure modes and effects analysis for a given process; (3) Learn what fault trees are all about; (4) Learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Disclosures: Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; Consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.

  9. TU-AB-BRD-01: Process Mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palta, J.

    2015-06-15

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are caused by flaws in clinical processes rather than by device failures. This suggests the need for a quality management program based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, that they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100, which has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how they can be used in a given radiotherapy clinic to develop a risk-based quality management program. Learning Objectives: (1) Learn how to design a process map for a radiotherapy process; (2) Learn how to perform failure modes and effects analysis for a given process; (3) Learn what fault trees are all about; (4) Learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Disclosures: Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; Consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.

  10. TU-AB-BRD-04: Development of Quality Management Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomadsen, B.

    2015-06-15

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are caused by flaws in clinical processes rather than by device failures. This suggests the need for a quality management program based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, that they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100, which has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how they can be used in a given radiotherapy clinic to develop a risk-based quality management program. Learning Objectives: (1) Learn how to design a process map for a radiotherapy process; (2) Learn how to perform failure modes and effects analysis for a given process; (3) Learn what fault trees are all about; (4) Learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Disclosures: Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; Consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.

  11. Experimental and Numerical Analysis of Triaxially Braided Composites Utilizing a Modified Subcell Modeling Approach

    NASA Technical Reports Server (NTRS)

    Cater, Christopher; Xiao, Xinran; Goldberg, Robert K.; Kohlman, Lee W.

    2015-01-01

    A combined experimental and analytical approach was used to characterize and model triaxially braided composites with a modified subcell modeling strategy. Tensile coupon tests were conducted on a [0°/60°/-60°] braided composite at angles of 0°, 30°, 45°, 60°, and 90° relative to the axial tow of the braid. The measured coupon strength varied significantly with the angle of the applied load, and each coupon direction exhibited a unique final failure. The subcell modeling approach, implemented in the finite element software LS-DYNA, was used to simulate the various tensile coupon test angles. The modeling approach was successful in predicting both the coupon strength and the reported failure mode for the 0°, 30°, and 60° loading directions. The model over-predicted the strength in the 90° direction; however, the experimental results show a strong influence of free-edge effects on damage initiation and failure. In the absence of these local free-edge effects, the subcell modeling approach showed promise as a viable and computationally efficient analysis tool for triaxially braided composite structures. Future work will focus on validating the approach for predicting the impact response of the braided composite against flat-panel impact tests.

  12. Experimental and Numerical Analysis of Triaxially Braided Composites Utilizing a Modified Subcell Modeling Approach

    NASA Technical Reports Server (NTRS)

    Cater, Christopher; Xiao, Xinran; Goldberg, Robert K.; Kohlman, Lee W.

    2015-01-01

    A combined experimental and analytical approach was used to characterize and model triaxially braided composites with a modified subcell modeling strategy. Tensile coupon tests were conducted on a [0°/60°/-60°] braided composite at angles of 0°, 30°, 45°, 60°, and 90° relative to the axial tow of the braid. The measured coupon strength varied significantly with the angle of the applied load, and each coupon direction exhibited a unique final failure. The subcell modeling approach, implemented in the finite element software LS-DYNA, was used to simulate the various tensile coupon test angles. The modeling approach was successful in predicting both the coupon strength and the reported failure mode for the 0°, 30°, and 60° loading directions. The model over-predicted the strength in the 90° direction; however, the experimental results show a strong influence of free-edge effects on damage initiation and failure. In the absence of these local free-edge effects, the subcell modeling approach showed promise as a viable and computationally efficient analysis tool for triaxially braided composite structures. Future work will focus on validating the approach for predicting the impact response of the braided composite against flat-panel impact tests.

  13. Potential of airborne LiDAR data analysis to detect subtle landforms of slope failure: Portainé, Central Pyrenees

    NASA Astrophysics Data System (ADS)

    Ortuño, María; Guinau, Marta; Calvet, Jaume; Furdada, Glòria; Bordonau, Jaume; Ruiz, Antonio; Camafort, Miquel

    2017-10-01

    Slope failures have traditionally been detected by field inspection and aerial-photo interpretation. These approaches are generally insufficient to identify subtle landforms, especially those generated during the early stages of failure, and particularly where the site is located in forested and remote terrain. We present the identification and characterization of several large and medium-sized slope failures, previously undetected, within the Orri massif, Central Pyrenees. Around 130 scarps were interpreted as being part of rock slope failures (RSFs), while other smaller and more superficial failures were interpreted as complex movements combining colluvium slow flow/slope creep and RSFs. Except for one, these slope failures had not been previously detected, although they extend across 15% of the studied region. The failures were identified through the analysis of a high-resolution (1 m) LiDAR-derived bare-earth digital elevation model (DEM). Most of the scarps are undetectable by fieldwork, photo interpretation, or 5 m resolution topography analysis owing to their small heights (0.5 to 2 m) and their location within forested areas. In many cases, these landforms are not evident in the field due to the presence of other minor irregularities in the slope and the lack of open views within the forest. 2D and 3D visualization of hillshade maps with different sun azimuths provided an overall picture of the scarp assemblage and permitted a more complete analysis of the geometry of the scarps with respect to the slope and the structural fabric. The sharpness of some of the landforms suggests ongoing activity, which should be explored in future detailed studies in order to assess potential hazards affecting the Portainé ski resort. Our results reveal that close analysis of the 1 m LiDAR-derived DEM can significantly help to detect early-stage slope deformations in high mountain regions, and that expert judgment of the DEM is essential when dealing with subtle landforms. The incorporation of this approach into regional mapping represents a great advance in completing the catalogue of slope failures and will eventually contribute to a better understanding of the spatial factors controlling them.
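
    A minimal sketch of hillshading a DEM with a chosen sun azimuth and altitude, the kind of visualization used above to reveal subtle scarps. Hillshade conventions for aspect and azimuth vary between tools; this follows one common formulation, and the toy DEM is a placeholder.

    ```python
    import numpy as np

    def hillshade(dem, dx=1.0, azimuth_deg=315.0, altitude_deg=45.0):
        # Slope and aspect from elevation gradients (cell size dx, e.g. 1 m).
        gy, gx = np.gradient(dem, dx)
        slope = np.arctan(np.hypot(gx, gy))
        aspect = np.arctan2(-gx, gy)
        az = np.radians(360.0 - azimuth_deg + 90.0)
        alt = np.radians(altitude_deg)
        shaded = (np.sin(alt) * np.cos(slope)
                  + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
        return np.clip(shaded, 0.0, 1.0)

    dem = np.random.default_rng(1).random((50, 50)).cumsum(axis=0)  # toy surface
    img = hillshade(dem, azimuth_deg=135.0)  # re-render under another azimuth
    ```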

  14. Independent Orbiter Assessment (IOA): Analysis of the atmospheric revitalization pressure control subsystem

    NASA Technical Reports Server (NTRS)

    Saiidi, M. J.; Duffy, R. E.; Mclaughlin, T. D.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis/Critical Items List (FMEA/CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results corresponding to the Orbiter Atmospheric Revitalization and Pressure Control Subsystem (ARPCS) are documented. The ARPCS hardware was categorized into the following subdivisions: (1) Atmospheric Make-up and Control (including the Auxiliary Oxygen Assembly, Oxygen Assembly, and Nitrogen Assembly); and (2) Atmospheric Vent and Control (including the Positive Relief Vent Assembly, Negative Relief Vent Assembly, and Cabin Vent Assembly). The IOA analysis process utilized available ARPCS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  15. Independent Orbiter Assessment (IOA): Analysis of the mechanical actuation subsystem

    NASA Technical Reports Server (NTRS)

    Bacher, J. L.; Montgomery, A. D.; Bradway, M. W.; Slaughter, W. T.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Mechanical Actuation System (MAS) hardware. Specifically, the MAS hardware consists of the following components: Air Data Probe (ADP); Elevon Seal Panel (ESP); External Tank Umbilical (ETU); Ku-Band Deploy (KBD); Payload Bay Doors (PBD); Payload Bay Radiators (PBR); Personnel Hatches (PH); Vent Door Mechanism (VDM); and Startracker Door Mechanism (SDM). The IOA analysis process utilized available MAS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  16. Regression analysis of clustered failure time data with informative cluster size under the additive transformation models.

    PubMed

    Chen, Ling; Feng, Yanqin; Sun, Jianguo

    2017-10-01

    This paper discusses regression analysis of clustered failure time data, which occur when the failure times of interest are collected from clusters. In particular, we consider the situation where the correlated failure times of interest may be related to cluster size. For inference, we present two estimation procedures, a weighted estimating equation-based method and a within-cluster resampling-based method, for the case in which the correlated failure times arise from a class of additive transformation models. The former uses the inverse of the cluster sizes as weights in the estimating equations, while the latter can be easily implemented using existing software packages for right-censored failure time data. An extensive simulation study indicates that the proposed approaches work well both with and without informative cluster size. They are applied to the dental study that motivated this work.
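
    A minimal sketch of the within-cluster resampling idea: repeatedly draw one subject per cluster, compute an estimate on the resampled data, and average across resamples. As a stand-in for the paper's additive transformation model fit, the per-resample estimate here is a simple constant-hazard (exponential) rate; the data are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy clustered data: cluster ids, failure times, event indicators.
    cluster_ids = np.repeat(np.arange(20), rng.integers(1, 5, size=20))
    times = rng.exponential(10.0, size=cluster_ids.size)
    events = rng.random(cluster_ids.size) < 0.8

    def exponential_rate(t, e):
        # MLE of a constant hazard: observed events / total exposure time.
        return e.sum() / t.sum()

    def wcr_estimate(n_resamples=1000):
        clusters = np.unique(cluster_ids)
        ests = []
        for _ in range(n_resamples):
            idx = np.array([rng.choice(np.flatnonzero(cluster_ids == c))
                            for c in clusters])   # one subject per cluster
            ests.append(exponential_rate(times[idx], events[idx]))
        return np.mean(ests)

    print(wcr_estimate())
    ```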

  17. Experiences with Extra-Vehicular Activities in Response to Critical ISS Contingencies

    NASA Technical Reports Server (NTRS)

    Van Cise, Edward A.; Kelly, Brian J.; Radigan, Jeffery P.; Cranmer, Curtis W.

    2016-01-01

    Initial "Big 14" work was put to the test for the first time in 2010, and deficiencies were found in some of the planning and approaches to that work. The Failure Response Assessment Team (FRAT) was created in 2010 to address these deficiencies by: identifying and performing engineering analysis in operations products prior to failure, and incorporating the results into those products; identifying actions for protecting the ISS against a next-worse failure after the first failure occurs; and better documenting not only EVA products but also planning products, assumptions, and open actions. Pre-failure investment against critical failures, a type of insurance policy, best postures the ISS for swift response and recovery, and it has proven effective in a number of contingency EVA cases since 2010, including planning for the MBSU R&R in 2012, the second PM R&R in 2013, and the EXT MDM R&R in 2014. The current FRAT schedule projects completion of all analysis in 2018.

  18. Risk-based planning analysis for a single levee

    NASA Astrophysics Data System (ADS)

    Hui, Rui; Jachens, Elizabeth; Lund, Jay

    2016-04-01

    Traditional risk-based analysis for levee planning focuses primarily on overtopping failure. Although many levees fail before overtopping, few planning studies explicitly include intermediate geotechnical failures in flood risk analysis. This study develops a risk-based model for two simplified levee failure modes: overtopping failure and overall intermediate geotechnical failure from through-seepage, determined by the levee cross section represented by levee height and crown width. Overtopping failure is based only on water level and levee height, while through-seepage failure depends on many geotechnical factors as well, mathematically represented here as a function of levee crown width using levee fragility curves developed from professional judgment or analysis. These levee planning decisions are optimized to minimize the annual expected total cost, which sums expected (residual) annual flood damage and annualized construction costs. Applicability of this optimization approach to planning new levees or upgrading existing levees is demonstrated preliminarily for a levee on a small river protecting agricultural land, and a major levee on a large river protecting a more valuable urban area. Optimized results show higher likelihood of intermediate geotechnical failure than overtopping failure. The effects of uncertainty in levee fragility curves, economic damage potential, construction costs, and hydrology (changing climate) are explored. Optimal levee crown width is more sensitive to these uncertainties than height, while the derived general principles and guidelines for risk-based optimal levee planning remain the same.
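
    A minimal sketch of the optimization described above: choose levee height h and crown width w to minimize annualized construction cost plus expected annual damage from overtopping and through-seepage failure. All cost constants and the exceedance/fragility functions below are illustrative assumptions, not the study's calibrated models.

    ```python
    import numpy as np

    def expected_annual_cost(h, w, damage=1e8):
        p_overtop = np.exp(-h / 2.0)            # stand-in water-level exceedance
        p_seep = 0.05 * np.exp(-w / 10.0)       # stand-in fragility vs crown width
        p_fail = p_overtop + (1.0 - p_overtop) * p_seep
        construction = 2e5 * h + 4e4 * w        # assumed annualized cost
        return construction + p_fail * damage

    # Grid search over candidate cross sections (h in m, w in m).
    grid = [(h, w) for h in np.arange(2, 10, 0.5) for w in np.arange(5, 40, 1.0)]
    best = min(grid, key=lambda hw: expected_annual_cost(*hw))
    print("optimal height, crown width:", best)
    ```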

  19. Revised Risk Priority Number in Failure Mode and Effects Analysis Model from the Perspective of Healthcare System

    PubMed Central

    Rezaei, Fatemeh; Yarmohammadian, Mohmmad H.; Haghshenas, Abbas; Fallah, Ali; Ferdosi, Masoud

    2018-01-01

    Background: The methodology of Failure Mode and Effects Analysis (FMEA) is known as an important risk assessment tool and an accreditation requirement for many organizations. For prioritizing failures, the index of "risk priority number (RPN)" is used, largely because of the ease of its subjective evaluations of the occurrence, severity, and detectability of each failure. In this study, we have tried to make the FMEA model more compatible with health-care systems by redefining the RPN index to be closer to reality. Methods: We used a combined quantitative and qualitative approach in this research. In the qualitative domain, focus group discussions were used to collect data, and a quantitative approach was used to calculate the RPN score. Results: We studied the patient's journey in the surgery ward from the holding area to the operating room. The highest-priority failures were determined based on (1) defining inclusion criteria as severity of the incident (clinical effect, claim consequence, waste of time, and financial loss), occurrence of the incident (time-unit occurrence and degree of exposure to risk), and preventability (degree of preventability and defensive barriers); then (2) quantifying the risk priority criteria using the RPN index (361 for the highest-ranked failure). Reassessment of the improved RPN scores by root cause analysis showed some variation. Conclusions: We conclude that standard criteria should be developed consistent with clinical terminology and the relevant scientific specialties. Therefore, cooperation and partnership between technical and clinical groups are necessary to modify these models. PMID:29441184

  20. Experiences with Probabilistic Analysis Applied to Controlled Systems

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Giesy, Daniel P.

    2004-01-01

    This paper presents a semi-analytic method for computing frequency-dependent means, variances, and failure probabilities for arbitrarily large-order closed-loop dynamical systems possessing a single uncertain parameter or multiple highly correlated uncertain parameters. The approach is shown not to suffer from the computational challenges associated with computing failure probabilities using conventional FORM/SORM techniques. The approach is demonstrated by computing the probabilistic frequency-domain performance of an optimal feed-forward disturbance rejection scheme.
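
    A minimal sketch of the single-uncertain-parameter case: at each frequency, compare the response magnitude against a limit and integrate the exceedance probability over the parameter's distribution by quadrature. The transfer-function magnitude and limit below are toy stand-ins, not the paper's system or its semi-analytic formulas.

    ```python
    import numpy as np

    def magnitude(w, theta):
        # Toy second-order magnitude with an uncertain stiffness term.
        return 1.0 / np.sqrt((1.0 + 0.2 * theta - w**2) ** 2 + (0.1 * w) ** 2)

    def failure_prob(w, limit=4.0, n=2001):
        thetas = np.linspace(-5.0, 5.0, n)        # theta ~ N(0, 1), truncated
        dtheta = thetas[1] - thetas[0]
        pdf = np.exp(-thetas**2 / 2.0) / np.sqrt(2.0 * np.pi)
        fails = magnitude(w, thetas) > limit
        return np.sum(pdf * fails) * dtheta       # quadrature over theta

    for w in (0.5, 1.0, 1.5):
        print(w, failure_prob(w))
    ```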

  1. Evaluation of Brazed Joints Using Failure Assessment Diagram

    NASA Technical Reports Server (NTRS)

    Flom, Yury

    2012-01-01

    A fitness-for-service approach was used to perform structural analysis of brazed joints consisting of several base metal/filler metal combinations. Failure assessment diagrams (FADs) based on tensile and shear stress ratios were constructed and experimentally validated. It was shown that such FADs can provide a conservative estimate of safe combinations of stresses in the brazed joints. Based on this approach, margins of safety (MS) of brazed joints subjected to multi-axial loading conditions can be evaluated.
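
    A minimal sketch of an FAD-style check for combined normal and shear loading. The elliptical interaction envelope below is an assumed form chosen for illustration; the paper's FADs were constructed and validated experimentally for specific base/filler metal combinations and need not have this shape.

    ```python
    # Margin of safety from an assumed elliptical failure assessment envelope:
    # the stress state is safe while (sigma/sigma_ult)^2 + (tau/tau_ult)^2 < 1.
    def fad_margin_of_safety(sigma, tau, sigma_ult, tau_ult):
        r = ((sigma / sigma_ult) ** 2 + (tau / tau_ult) ** 2) ** 0.5
        return 1.0 / r - 1.0      # MS > 0 means the point lies inside the FAD

    print(fad_margin_of_safety(sigma=60.0, tau=30.0, sigma_ult=150.0, tau_ult=90.0))
    ```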

  2. Failure mode and effects analysis and fault tree analysis of surface image guided cranial radiosurgery.

    PubMed

    Manger, Ryan P; Paxton, Adam B; Pawlicki, Todd; Kim, Gwe-Ya

    2015-05-01

    Surface image guided, Linac-based radiosurgery (SIG-RS) is a modern approach for delivering radiosurgery that utilizes optical stereoscopic imaging to monitor the surface of the patient during treatment in lieu of a head frame for patient immobilization. Considering the novelty of the SIG-RS approach and the severity of errors associated with the delivery of large doses per fraction, a risk assessment should be conducted to identify potential hazards, determine their causes, and formulate mitigation strategies. The purpose of this work is to investigate SIG-RS using the combined application of failure modes and effects analysis (FMEA) and fault tree analysis (FTA), to report on the effort required to complete the analysis, and to evaluate the use of FTA in conjunction with FMEA. A multidisciplinary team was assembled to conduct the FMEA on the SIG-RS process. A process map detailing the steps of SIG-RS was created to guide the FMEA. Failure modes were determined for each step in the SIG-RS process, and risk priority numbers (RPNs) were estimated for each failure mode to facilitate risk stratification. The failure modes were ranked by RPN, and FTA was used to determine the root factors contributing to the riskiest failure modes. Using the FTA, mitigation strategies were formulated to address the root factors and reduce the risk of the process. The RPNs were re-estimated based on the mitigation strategies to determine the margin of risk reduction. The FMEA and FTAs for the top two failure modes required an effort of 36 person-hours (30 person-hours for the FMEA and 6 person-hours for the two FTAs). The SIG-RS process consisted of 13 major subprocesses and 91 steps, which amounted to 167 failure modes. Of the 91 steps, 16 were directly related to surface imaging. Twenty-five failure modes resulted in an RPN of 100 or greater, and only one of these top 25 failure modes was specific to surface imaging. The riskiest surface imaging failure mode was ranked eighth overall by RPN. Mitigation strategies for the top failure mode decreased the RPN from 288 to 72. Based on the FMEA performed in this work, the use of surface imaging for monitoring intrafraction position in Linac-based stereotactic radiosurgery (SRS) did not greatly increase the risk of the Linac-based SRS process; in some cases, SIG helped to reduce the risk of Linac-based RS. The FMEA was augmented by the use of FTA, since FTA divided the failure modes into their fundamental components, which simplified the task of developing mitigation strategies.
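
    A minimal sketch of the quantitative side of fault tree analysis used alongside the FMEA above: basic-event probabilities combine through AND gates (product, assuming independence) and OR gates (complement of the product of complements). The tree fragment and probabilities below are illustrative assumptions, not the study's actual fault tree.

    ```python
    # Gate algebra for independent basic events.
    def and_gate(*ps):
        out = 1.0
        for p in ps:
            out *= p
        return out

    def or_gate(*ps):
        out = 1.0
        for p in ps:
            out *= (1.0 - p)
        return 1.0 - out

    # Hypothetical fragment: an undetected patient shift requires the surface
    # monitoring to fail AND the therapist to miss the alarm.
    camera_blocked = 0.01
    surface_model_stale = 0.005
    therapist_misses_alarm = 0.02
    monitoring_fails = or_gate(camera_blocked, surface_model_stale)
    print(and_gate(monitoring_fails, therapist_misses_alarm))
    ```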

  3. Pitfalls and Precautions When Using Predicted Failure Data for Quantitative Analysis of Safety Risk for Human Rated Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hatfield, Glen S.; Hark, Frank; Stott, James

    2016-01-01

    Launch vehicle reliability analysis is largely dependent upon predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account risks attributable to manufacturing, assembly, and process controls, and these sources often dominate component-level reliability or failure probability. While the consequences of failure are often understood in assessing risk, using predicted values in a risk model to estimate the probability of occurrence will likely underestimate the risk. Managers and decision makers often use the probability of occurrence in determining whether to accept the risk or require a design modification. Due to the absence of system-level test and operational data inherent in aerospace applications, the actual risk threshold for acceptance may not be appropriately characterized for decision-making purposes. This paper establishes a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data, and it provides a set of guidelines that may be useful in arriving at a more realistic quantification of risk prior to acceptance by a program.

  4. Integrating Insults: Using Fault Tree Analysis to Guide Schizophrenia Research across Levels of Analysis.

    PubMed

    MacDonald III, Angus W; Zick, Jennifer L; Chafee, Matthew V; Netoff, Theoden I

    2015-01-01

    The grand challenges of schizophrenia research are linking the causes of the disorder to its symptoms and finding ways to overcome those symptoms. We argue that the field will be unable to address these challenges within psychiatry's standard neo-Kraepelinian (DSM) perspective. At the same time, the current corrective, based in molecular genetics and cognitive neuroscience, is also likely to flounder due to its neglect of psychiatry's syndromal structure. We suggest adopting an approach long used in reliability engineering, which also serves as a synthesis of these approaches. This approach, known as fault tree analysis, can be combined with extant neuroscientific data collection and computational modeling efforts to uncover the causal structures underlying the cognitive and affective failures in people with schizophrenia, as well as other complex psychiatric phenomena. By making explicit how causes combine, from basic faults to downstream failures, this approach makes affordances for: (1) causes that are neither necessary nor sufficient in and of themselves; (2) within-diagnosis heterogeneity; and (3) between-diagnosis comorbidity.

  5. Failure mode and effect analysis: improving intensive care unit risk management processes.

    PubMed

    Askari, Roohollah; Shafii, Milad; Rafiei, Sima; Abolhassani, Mohammad Sadegh; Salarikhah, Elaheh

    2017-04-18

    Purpose: Failure modes and effects analysis (FMEA) is a practical tool to evaluate risks, discover failures in a proactive manner, and propose corrective actions to reduce or eliminate potential risks. The purpose of this paper is to apply the FMEA technique to examine the hazards associated with the process of service delivery in the intensive care unit (ICU) of a tertiary hospital in Yazd, Iran. Design/methodology/approach: This was a before-after study conducted between March 2013 and December 2014. By forming an FMEA team, all potential hazards associated with ICU services, along with their frequency and severity, were identified. A risk priority number was then calculated for each activity as an indicator of high-priority areas needing special attention and resource allocation. Findings: Eight failure modes with the highest priority scores, including endotracheal tube defects, wrong placement of the endotracheal tube, EVD interface failure, aspiration failure during suctioning, chest tube failure, tissue injury, and deep vein thrombosis, were selected for improvement. The findings affirmed that the improvement strategies were generally satisfactory and significantly decreased total failures. Practical implications: Application of FMEA in ICUs proved effective in proactively decreasing the risk of failures and brought the control measures up to acceptable levels in all eight areas of function. Originality/value: Using a prospective risk assessment approach such as FMEA can be beneficial in dealing with potential failures by proposing preventive actions in a proactive manner. The method can serve as a tool for continuous quality improvement in healthcare, since it identifies both systemic and human errors and offers practical advice for dealing effectively with them.

  6. Predicting Failure Progression and Failure Loads in Composite Open-Hole Tension Coupons

    NASA Technical Reports Server (NTRS)

    Arunkumar, Satyanarayana; Przekop, Adam

    2010-01-01

    Failure types and failure loads in carbon-epoxy [45_n/90_n/-45_n/0_n]_ms laminate coupons with central circular holes subjected to tensile load are simulated using a progressive failure analysis (PFA) methodology. The progressive failure methodology is implemented using a VUMAT subroutine within the ABAQUS™/Explicit nonlinear finite element code. The degradation model adopted in the present PFA methodology uses an instantaneous complete stress reduction (COSTR) approach to simulate damage at a material point when failure occurs. In-plane modeling parameters such as element size and shape are held constant in the finite element models, irrespective of laminate thickness and hole size, to predict failure loads and failure progression. Comparison to published test data indicates that this methodology accurately simulates brittle, pull-out, and delamination failure types. The sensitivity of the failure progression and the failure load to analytical loading rates and solver precision is demonstrated.
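
    A minimal sketch of the complete stress reduction (COSTR) idea: once a failure criterion is met at a material point, the stresses there are instantaneously dropped so the point no longer carries load. The max-stress criterion below is a simplified stand-in for the failure criteria in the actual VUMAT implementation.

    ```python
    import numpy as np

    def costr_update(stress, strength, failed):
        # Latch points that have ever exceeded the allowable, then zero them.
        failed = failed | (np.abs(stress) >= strength)
        return np.where(failed, 0.0, stress), failed

    stress = np.array([120.0, 450.0, 300.0])     # MPa at three material points
    failed = np.zeros(3, dtype=bool)
    stress, failed = costr_update(stress, strength=400.0, failed=failed)
    print(stress, failed)                        # the second point is knocked out
    ```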

  7. Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are composed of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds on the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets, while a sum-of-squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Among the most prominent features of the methodology are the substantial desensitization of the calculations to the assumed uncertainty model (i.e., the probability distribution describing the uncertainty), as well as the accommodation of changes in that model with a practically insignificant amount of computational effort.
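
    A minimal sketch of the bookkeeping behind "subsets of readily computable probability": if a hyper-rectangle in the space of independent standard-normal uncertainties is verified (e.g., via Bernstein expansion, not implemented here) to lie entirely inside the safe domain, its probability is a lower bound on the reliability. The box below is an assumed example, not a verified one.

    ```python
    from statistics import NormalDist

    # Probability of an axis-aligned box under independent standard normals.
    def box_probability(lo, hi):
        nd = NormalDist()
        p = 1.0
        for a, b in zip(lo, hi):
            p *= nd.cdf(b) - nd.cdf(a)
        return p

    # If this box is contained in the safe domain, reliability >= its probability,
    # so the failure probability is bounded above by the complement.
    safe_lower_bound = box_probability(lo=[-2.0, -1.5], hi=[1.8, 2.2])
    print("P(failure) <=", 1.0 - safe_lower_bound)
    ```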

  8. Independent Orbiter Assessment (IOA): Analysis of the active thermal control subsystem

    NASA Technical Reports Server (NTRS)

    Sinclair, S. K.; Parkman, W. E.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results corresponding to the Orbiter Active Thermal Control Subsystem (ATCS) are documented. The major purpose of the ATCS is to remove the heat generated during normal Shuttle operations from the Orbiter systems and subsystems. The four major components of the ATCS contributing to heat removal are: Freon Coolant Loops; Radiator and Flow Control Assembly; Flash Evaporator System; and Ammonia Boiler System. In order to perform the analysis, the IOA process utilized available ATCS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 310 failure modes analyzed, 101 were determined to be PCIs.

  9. Independent Orbiter Assessment (IOA): Analysis of the hydraulics/water spray boiler subsystem

    NASA Technical Reports Server (NTRS)

    Duval, J. D.; Davidson, W. R.; Parkman, William E.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results for the Orbiter Hydraulics/Water Spray Boiler Subsystem. The hydraulic system provides hydraulic power to gimbal the main engines, actuate the main engine propellant control valves, move the aerodynamic flight control surfaces, lower the landing gear, apply wheel brakes, steer the nosewheel, and dampen the external tank (ET) separation. Each hydraulic system has an associated water spray boiler which is used to cool the hydraulic fluid and APU lubricating oil. The IOA analysis process utilized available HYD/WSB hardware drawings, schematics and documents for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 430 failure modes analyzed, 166 were determined to be PCIs.

  10. Independent Orbiter Assessment (IOA): Analysis of the remote manipulator system

    NASA Technical Reports Server (NTRS)

    Tangorra, F.; Grasmeder, R. F.; Montgomery, A. D.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Remote Manipulator System (RMS) are documented. The RMS hardware and software are primarily required for deploying and/or retrieving up to five payloads during a single mission, capturing and retrieving free-flying payloads, and performing Manipulator Foot Restraint operations. Specifically, the RMS hardware consists of the following components: end effector; displays and controls; manipulator controller interface unit; arm-based electronics; and the arm. The IOA analysis process utilized available RMS hardware drawings, schematics, and documents for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 574 failure modes analyzed, 413 were determined to be PCIs.

  11. Risk factors for early failure after peripheral endovascular intervention: application of a reliability engineering approach.

    PubMed

    Meltzer, Andrew J; Graham, Ashley; Connolly, Peter H; Karwowski, John K; Bush, Harry L; Frazier, Peter I; Schneider, Darren B

    2013-01-01

    We apply a novel analytic approach, based on reliability engineering (RE) principles frequently used to characterize the behavior of manufactured products, to examine outcomes after peripheral endovascular intervention. We hypothesized that this would allow improved prediction of outcome after peripheral endovascular intervention, specifically with regard to identification of risk factors for early failure. Patients undergoing infrainguinal endovascular intervention for chronic lower-extremity ischemia from 2005 to 2010 were identified in a prospectively maintained database. The primary outcome of failure was defined as patency loss detected by duplex ultrasonography, with or without clinical failure. Analysis included univariate and multivariate Cox regression models, as well as RE-based analysis including product life-cycle models and Weibull failure plots. Early failures were distinguished using the RE principle of "basic rating life," and multivariate models identified independent risk factors for early failure. From 2005 to 2010, 434 primary endovascular peripheral interventions were performed for claudication (51.8%), rest pain (16.8%), or tissue loss (31.3%). Fifty-five percent of patients were aged ≥75 years; 57% were men. Failure was noted after 159 (36.6%) interventions during a mean follow-up of 18 months (range, 0-71 months). Using multivariate (Cox) regression analysis, rest pain and tissue loss were independent predictors of patency loss, with hazard ratios of 2.5 (95% confidence interval, 1.6-4.1; P < 0.001) and 3.2 (95% confidence interval, 2.0-5.2; P < 0.001), respectively. The distribution of failure times for both claudication and critical limb ischemia fit distinct Weibull plots with different characteristics: interventions for claudication demonstrated an increasing failure rate (β = 1.22, θ = 13.46, mean time to failure = 12.603 months, index of fit = 0.99037, R² = 0.98084), whereas interventions for critical limb ischemia demonstrated a decreasing failure rate, suggesting the predominance of early failures (β = 0.7395, θ = 6.8, mean time to failure = 8.2, index of fit = 0.99391, R² = 0.98786). By 3.1 months, 10% of interventions had failed; this point (90% reliability) was identified as the basic rating life. Using multivariate analysis of the failure data, independent predictors of early failure (before 3.1 months) included tissue loss, long lesion length, chronic total occlusions, heart failure, and end-stage renal disease. Application of an RE framework to the assessment of clinical outcomes after peripheral interventions is feasible, and potentially more informative than traditional techniques. Conceptualizing interventions as "products" permits the application of product life-cycle models that allow an empiric definition of "early failure," which may facilitate comparative effectiveness analysis and enable the development of individualized surveillance programs after endovascular interventions. Copyright © 2013 Annals of Vascular Surgery Inc. Published by Elsevier Inc. All rights reserved.
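
    A minimal sketch of the reliability-engineering quantities named above: for a Weibull failure-time model with shape β and scale θ, reliability is R(t) = exp(-(t/θ)^β), and the "basic rating life" is the time at which R(t) = 0.90 (10% of units have failed). The example uses the claudication cohort's reported parameters purely as inputs; nothing else is reproduced from the study.

    ```python
    import numpy as np

    def weibull_reliability(t, beta, theta):
        return np.exp(-((t / theta) ** beta))

    def basic_rating_life(beta, theta, reliability=0.90):
        # Invert R(t) = reliability for t.
        return theta * (-np.log(reliability)) ** (1.0 / beta)

    # Claudication cohort parameters reported above (beta=1.22, theta=13.46 mo).
    print(basic_rating_life(1.22, 13.46))
    ```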

  12. An Empirical Approach to Analysis of Similarities between Software Failure Regions

    DTIC Science & Technology

    1991-09-01

    cycle costs after the software has been marketed (Alberts, 1976). Unfortunately, extensive software testing is frequently necessary in spite of... incidence is primarily syntactic. This mixing of semantic and syntactic forms in the same analysis could lead to some distortion, especially since the... of formulae to improve readability or to indicate precedence of operations. * All definitions within "Condition I" of a failure region are assumed to

  13. A new statistical methodology predicting chip failure probability considering electromigration

    NASA Astrophysics Data System (ADS)

    Sun, Ted

    In this research thesis, we present a new approach to analyzing chip reliability subject to electromigration (EM); the fundamental causes of EM and the EM phenomena occurring in different materials are also presented. This new approach utilizes the statistical nature of EM failure in order to assess overall EM risk. It incorporates within-die temperature variation, via the chip's temperature map extracted by an Electronic Design Automation (EDA) tool, to estimate the failure probability of a design. Both the power estimation and the thermal analysis are performed in the EDA flow. We first used the traditional EM approach to analyze the design, which involves 6 metal and 5 via layers, with a single temperature across the entire chip. Next, we used the same traditional approach with a realistic temperature map. The traditional EM analysis approach, the same approach coupled with a temperature map, and the comparison between the results with and without the temperature map are all presented in this research; the comparison confirms that using a temperature map yields a less pessimistic estimate of the chip's EM risk. Finally, we employed the statistical methodology we developed, considering a temperature map and different use-condition voltages and frequencies, to estimate the overall failure probability of the chip. The statistical model considers scaling through the traditional Black equation and four major use conditions, and the comparisons of the statistical results are within our expectations. The results of this statistical analysis confirm that the chip-level failure probability is higher (i) at higher use-condition frequencies for all use-condition voltages, and (ii) when a single temperature, instead of a temperature map across the chip, is considered. The thesis begins with an overall review of current design types, common flows, and the necessary verification and reliability checking steps used in the IC design industry. The concepts of scripting automation, used to integrate the diverse EDA tools in this research, are described in detail with several examples, and the completed code is provided in the appendix for reference. This structure should give readers a thorough understanding of the research, from the automation of EDA tools to the generation of statistical data, from the nature of EM to the construction of the statistical model, and through the comparisons between the traditional and statistical EM analysis approaches.
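
    For reference, Black's equation, named above as the scaling model, gives the median time to failure as MTTF = A · J^(-n) · exp(Ea / (k·T)) for current density J and absolute temperature T. A minimal sketch with illustrative constants (A, n, and Ea below are assumptions; units of MTTF are arbitrary):

    ```python
    from math import exp

    K_BOLTZMANN = 8.617e-5            # eV/K

    def black_mttf(J, T, A=1e3, n=2.0, Ea=0.9):
        # Black's equation: MTTF = A * J**(-n) * exp(Ea / (k*T)).
        return A * J ** (-n) * exp(Ea / (K_BOLTZMANN * T))

    # A per-segment temperature map changes T (and hence MTTF) across the chip.
    print(black_mttf(J=2.0e6, T=378.15))   # hotter segment
    print(black_mttf(J=2.0e6, T=358.15))   # cooler segment lasts longer
    ```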

  14. Failure time analysis with unobserved heterogeneity: Earthquake duration time of Turkey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ata, Nihal, E-mail: nihalata@hacettepe.edu.tr; Kadilar, Gamze Özel, E-mail: gamzeozl@hacettepe.edu.tr

    Failure time models assume that all units are subject to the same risks embodied in the hazard functions. In this paper, unobserved sources of heterogeneity that are not captured by covariates are included in the failure time models. Destructive earthquakes in Turkey since 1900 are used to illustrate the models, with the inter-event time between two consecutive earthquakes defined as the failure time. The paper demonstrates how seismicity and tectonic/physical parameters can potentially influence the spatio-temporal variability of earthquakes, and the approach presents several advantages compared to more traditional ones.
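
    A minimal sketch of the unobserved-heterogeneity (frailty) idea: each unit's hazard is scaled by a latent gamma-distributed factor Z, h(t | Z) = Z·h0(t), so failure times are more dispersed than a single shared hazard would imply. The parameters are illustrative, not fitted to the Turkish earthquake catalogue.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n, base_rate, frailty_var = 10_000, 0.1, 0.5

    # Gamma frailty with mean 1 and variance frailty_var.
    Z = rng.gamma(shape=1 / frailty_var, scale=frailty_var, size=n)
    times = rng.exponential(1.0 / (Z * base_rate))     # heterogeneous hazards

    homog = rng.exponential(1.0 / base_rate, size=n)   # shared-hazard benchmark
    print("frailty     mean/var:", times.mean(), times.var())
    print("homogeneous mean/var:", homog.mean(), homog.var())
    ```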

  15. Verification and Validation Process for Progressive Damage and Failure Analysis Methods in the NASA Advanced Composites Consortium

    NASA Technical Reports Server (NTRS)

    Wanthal, Steven; Schaefer, Joseph; Justusson, Brian; Hyder, Imran; Engelstad, Stephen; Rose, Cheryl

    2017-01-01

    The Advanced Composites Consortium is a US Government/Industry partnership supporting technologies to enable timeline and cost reduction in the development of certified composite aerospace structures. A key component of the consortium's approach is the development and validation of improved progressive damage and failure analysis methods for composite structures. These methods will enable increased use of simulations in design trade studies and detailed design development, and thereby enable more targeted physical test programs to validate designs. To accomplish this goal with confidence, a rigorous verification and validation process was developed. The process was used to evaluate analysis methods and associated implementation requirements to ensure calculation accuracy and to gage predictability for composite failure modes of interest. This paper introduces the verification and validation process developed by the consortium during the Phase I effort of the Advanced Composites Project. Specific structural failure modes of interest are first identified, and a subset of standard composite test articles are proposed to interrogate a progressive damage analysis method's ability to predict each failure mode of interest. Test articles are designed to capture the underlying composite material constitutive response as well as the interaction of failure modes representing typical failure patterns observed in aerospace structures.

  16. Fracture and Failure at and Near Interfaces Under Pressure

    DTIC Science & Technology

    1998-06-18

    … realistic data for comparison with improved analytical results, and to 2) initiate a new computational approach for stress analysis of cracks in solid propellants at and near interfaces, which analysis can draw on the ever expanding … tactical and strategic missile systems. The most important and most difficult component of the system analysis has been the predictability or …

  17. An overview of engineering concepts and current design algorithms for probabilistic structural analysis

    NASA Technical Reports Server (NTRS)

    Duffy, S. F.; Hu, J.; Hopkins, D. A.

    1995-01-01

    The article begins by examining the fundamentals of traditional deterministic design philosophy. The initial section outlines the concepts of failure criteria and limit state functions, two traditional notions that are embedded in deterministic design philosophy. This is followed by a discussion of safety factors (a possible limit state function) and the common use of statistical concepts in deterministic engineering design approaches. Next, the fundamental aspects of a probabilistic failure analysis are explored, and it is shown that the deterministic design concepts mentioned in the initial portion of the article are embedded in probabilistic design methods. For components fabricated from ceramic materials (and other similarly brittle materials), the probabilistic design approach yields the widely used Weibull analysis after suitable assumptions are incorporated. The authors point out that Weibull analysis provides the rare instance in which closed-form solutions are available for a probabilistic failure analysis. Since numerical methods are usually required to evaluate component reliabilities, a section on Monte Carlo methods is included to introduce the concept. The article concludes with a presentation of the technical aspects that support the numerical method known as fast probability integration (FPI), including a discussion of the Hasofer-Lind and Rackwitz-Fiessler approximations.
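
    The closed-form Weibull result and the Monte Carlo alternative mentioned in the article can be contrasted in a few lines; the strength and load distributions below are invented purely for illustration.

      import numpy as np

      rng = np.random.default_rng(0)

      # Closed-form two-parameter Weibull failure probability at applied stress sigma:
      # Pf = 1 - exp(-(sigma / sigma0)**m); m and sigma0 are illustrative values
      m, sigma0, sigma = 10.0, 350.0, 300.0  # MPa
      pf_weibull = 1.0 - np.exp(-(sigma / sigma0) ** m)

      # Monte Carlo estimate for a limit state g = strength - load < 0,
      # with normal distributions chosen purely for illustration
      n = 1_000_000
      strength = rng.normal(350.0, 30.0, n)  # MPa
      load = rng.normal(300.0, 20.0, n)      # MPa
      pf_mc = np.mean(strength - load < 0.0)

      print(f"Weibull closed-form Pf: {pf_weibull:.4f}")
      print(f"Monte Carlo Pf:         {pf_mc:.4f}")

    Fast probability integration methods such as the Hasofer-Lind approximation replace the brute-force sampling with a search for the most probable failure point, which is why they matter when each limit-state evaluation is expensive.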

  18. Pitfalls and Precautions When Using Predicted Failure Data for Quantitative Analysis of Safety Risk for Human Rated Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hatfield, Glen S.; Hark, Frank; Stott, James

    2016-01-01

    Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account system integration risks such as those attributable to manufacturing and assembly, and these sources often dominate component level risk. While the consequence of failure is often understood, using predicted values in a risk model to estimate the probability of occurrence may underestimate the actual risk. Managers and decision makers use the probability of occurrence to inform the determination of whether to accept the risk or require a design modification. The actual risk threshold for acceptance may not be fully understood due to the absence of system level test data or operational data. This paper establishes a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. The approach provides a set of guidelines that may be useful in arriving at a more realistic quantification of risk prior to acceptance by a program.

  19. Independent Orbiter Assessment (IOA): Analysis of the ascent thrust vector control actuator subsystem

    NASA Technical Reports Server (NTRS)

    Wilson, R. E.; Riccio, J. R.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Ascent Thrust Vector Control (ATVC) Actuator hardware are documented. The function of the Ascent Thrust Vector Control Actuators (ATVC) is to gimbal the main engines to provide for attitude and flight path control during ascent. During first stage flight, the SRB nozzles provide nearly all the steering. After SRB separation, the Orbiter is steered by gimbaling of its main engines. There are six electrohydraulic servoactuators, one pitch and one yaw for each of the three main engines. Each servoactuator is composed of four electrohydraulic servovalve assemblies, one second stage power spool valve assembly, one primary piston assembly and a switching valve. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Critical failures resulting in loss of ATVC were mainly due to loss of hydraulic fluid, fluid contamination and mechanical failures.

  20. A mid-layer model for human reliability analysis : understanding the cognitive causes of human failure events.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Song-Hua; Chang, James Y. H.; Boring, Ronald L.

    2010-03-01

    The Office of Nuclear Regulatory Research (RES) at the US Nuclear Regulatory Commission (USNRC) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method's middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.

  1. A cost simulation for mammography examinations taking into account equipment failures and resource utilization characteristics.

    PubMed

    Coelli, Fernando C; Almeida, Renan M V R; Pereira, Wagner C A

    2010-12-01

    This work develops a cost estimation for a mammography clinic, taking into account resource utilization and equipment failure rates. Two standard clinic models were simulated, the first with one mammography machine, two technicians and one doctor, and the second (based on an actual operating clinic) with two machines, three technicians and one doctor. Cost data and model parameters were obtained by direct measurements, literature reviews and other hospital data. A discrete-event simulation model was developed in order to estimate the unit cost (total costs/number of examinations in a defined period) of mammography examinations at those clinics. The cost analysis considered simulated changes in resource utilization rates and in examination failure probabilities (failures of the image acquisition system). In addition, a sensitivity analysis was performed, taking into account changes in the probabilities of equipment failure types. For the two clinic configurations, the estimated mammography unit costs were, respectively, US$ 41.31 and US$ 53.46 in the absence of examination failures. As the examination failures increased up to 10% of total examinations, unit costs approached US$ 54.53 and US$ 53.95, respectively. The sensitivity analysis showed that increases in type 3 (the most serious) failures had a very large impact on patient attendance, up to the point of actually making attendance unfeasible. Discrete-event simulation allowed for the identification of the more efficient clinic, contingent on the expected prevalence of resource utilization and equipment failures. © 2010 Blackwell Publishing Ltd.
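
    A stripped-down stand-in for the record's discrete-event model (not the authors' implementation; the costs and failure rates below are invented) shows the mechanism by which acquisition failures raise the unit cost:

      import random

      random.seed(1)

      def unit_cost(n_patients=10_000, cost_per_slot=40.0, p_fail=0.05):
          # Unit cost = cost of all examination slots used / completed exams;
          # each failed image acquisition consumes a slot and forces a repeat
          slots_used = completed = 0
          for _ in range(n_patients):
              while True:
                  slots_used += 1
                  if random.random() >= p_fail:  # acquisition succeeded
                      completed += 1
                      break
          return slots_used * cost_per_slot / completed

      for p in (0.0, 0.05, 0.10):
          print(f"failure rate {p:.0%}: unit cost US$ {unit_cost(p_fail=p):.2f}")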

  2. Failure analysis in the identification of synergies between cleaning monitoring methods.

    PubMed

    Whiteley, Greg S; Derry, Chris; Glasbey, Trevor

    2015-02-01

    The 4 monitoring methods used to manage the quality assurance of cleaning outcomes within health care settings are visual inspection, microbial recovery, fluorescent marker assessment, and rapid ATP bioluminometry. These methods each generate different types of information, presenting a challenge to the successful integration of monitoring results. A systematic approach to safety and quality control can be used to interrogate the known qualities of cleaning monitoring methods and provide a prospective management tool for infection control professionals. We investigated the use of failure mode and effects analysis (FMEA) for measuring failure risk arising through each cleaning monitoring method. FMEA uses existing data in a structured risk assessment tool that identifies weaknesses in products or processes. Our FMEA approach used the literature and a small experienced team to construct a series of analyses to investigate the cleaning monitoring methods in a way that minimized identified failure risks. FMEA applied to each of the cleaning monitoring methods revealed failure modes for each. The combined use of cleaning monitoring methods in sequence is preferable to their use in isolation. When these 4 cleaning monitoring methods are used in combination in a logical sequence, the failure modes noted for any 1 can be complemented by the strengths of the alternatives, thereby circumventing the risk of failure of any individual cleaning monitoring method. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
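
    At its core, the FMEA scoring the study applies reduces to the classic risk priority number RPN = severity x occurrence x detection. The sketch below ranks the 4 methods with made-up ratings; the paper's actual team scores are not reproduced here.

      # Ratings on the usual 1-10 FMEA scales; all values are illustrative
      methods = {
          "visual inspection":  (7, 8, 6),   # (severity, occurrence, detection)
          "microbial recovery": (8, 4, 5),
          "fluorescent marker": (6, 5, 4),
          "ATP bioluminometry": (6, 4, 3),
      }

      # Risk priority number: RPN = S * O * D; higher means a riskier failure mode
      for name, (s, o, d) in sorted(methods.items(),
                                    key=lambda kv: -(kv[1][0] * kv[1][1] * kv[1][2])):
          print(f"{name:20s} RPN = {s * o * d}")

    Combining methods in sequence, as the study recommends, amounts to pairing a method that scores poorly on detection with one whose detection rating is strong.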

  3. Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls

    NASA Astrophysics Data System (ADS)

    Guha Ray, A.; Baidya, D. K.

    2012-09-01

    Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall focuses on the fact that high sensitivity of a particular variable on a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (R_f) for each random variable based on the combined effects of the failure probability (P_f) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables on these failure modes. P_f is calculated by Monte Carlo simulation and the sensitivity analysis of each random variable is carried out by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that R_f for the friction angle of the backfill soil (φ1) increases, and that for the cohesion of the foundation soil (c2) decreases, with an increase in the variation of φ1, while R_f for the unit weights of both soils (γ1 and γ2) and for the friction angle of the foundation soil (φ2) remains almost constant under variation of the soil properties. The results compared well with some of the existing deterministic and probabilistic methods, and the design was found to be cost-effective. It is seen that if the variation of φ1 remains within 5%, a significant reduction in cross-sectional area can be achieved, but if the variation exceeds 7-8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.

  4. Independent Orbiter Assessment (IOA): Analysis of the electrical power distribution and control subsystem, volume 1

    NASA Technical Reports Server (NTRS)

    Schmeckpeper, K. R.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C) hardware. The EPD and C hardware performs the functions of distributing, sensing, and controlling 28 volt DC power and of inverting, distributing, sensing, and controlling 117 volt 400 Hz AC power to all Orbiter subsystems from the three fuel cells in the Electrical Power Generation (EPG) subsystem. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 1671 failure modes analyzed, 9 single failures were determined to result in loss of crew or vehicle. Three single failures unique to intact abort were determined to result in possible loss of the crew or vehicle. A possible loss of mission could result if any of 136 single failures occurred. Six of the criticality 1/1 failures are in two rotary and two pushbutton switches that control External Tank and Solid Rocket Booster separation. The other 6 criticality 1/1 failures are fuses, one each per Aft Power Control Assembly (APCA) 4, 5, and 6 and one each per Forward Power Control Assembly (FPCA) 1, 2, and 3, that supply power to certain Main Propulsion System (MPS) valves and Forward Reaction Control System (RCS) circuits.

  5. Independent Orbiter Assessment (IOA): Analysis of the guidance, navigation, and control subsystem

    NASA Technical Reports Server (NTRS)

    Trahan, W. H.; Odonnell, R. A.; Pietz, K. C.; Hiott, J. M.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results corresponding to the Orbiter Guidance, Navigation, and Control (GNC) Subsystem hardware are documented. The function of the GNC hardware is to respond to guidance, navigation, and control software commands to effect vehicle control and to provide sensor and controller data to GNC software. Some of the GNC hardware for which failure modes analysis was performed includes: hand controllers; Rudder Pedal Transducer Assembly (RPTA); Speed Brake Thrust Controller (SBTC); Inertial Measurement Unit (IMU); Star Tracker (ST); Crew Optical Alignment Site (COAS); Air Data Transducer Assembly (ADTA); Rate Gyro Assemblies; Accelerometer Assembly (AA); Aerosurface Servo Amplifier (ASA); and Ascent Thrust Vector Control (ATVC). The IOA analysis process utilized available GNC hardware drawings, workbooks, specifications, schematics, and systems briefs for defining hardware assemblies, components, and circuits. Each hardware item was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  6. Independent Orbiter Assessment (IOA): Analysis of the manned maneuvering unit

    NASA Technical Reports Server (NTRS)

    Bailey, P. S.

    1986-01-01

    Results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Manned Maneuvering Unit (MMU) hardware. The MMU is a propulsive backpack, operated through separate hand controllers that input the pilot's translational and rotational maneuvering commands to the control electronics and then to the thrusters. The IOA analysis process utilized available MMU hardware drawings and schematics for defining hardware subsystems, assemblies, components, and hardware items. Final levels of detail were evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the worst case severity of the effect for each identified failure mode. The IOA analysis of the MMU found that the majority of the PCIs identified result from the loss of either the propulsion or control functions, or from an inability to perform an immediate or future mission. The five most severe criticalities identified all result from failures imposed on the MMU hand controllers, which have no redundancy within the MMU.

  7. The Behavior Analysis Follow Through Evaluation Strategy: A Multifaceted Approach.

    ERIC Educational Resources Information Center

    Green, Dan S.; And Others

    The Behavior Analysis (BA) approach to Project Follow Through, a federally funded education intervention program, has reversed the trend of academic failure among poor children by improving the educational experience of children from 12 communities in the urban East, Midwest, rural South, and on Indian reservations in the West. The BA model is…

  8. Independent Orbiter Assessment (IOA): Analysis of the electrical power generation/fuel cell powerplant subsystem

    NASA Technical Reports Server (NTRS)

    Brown, K. L.; Bertsch, P. J.

    1986-01-01

    Results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Generation (EPG)/Fuel Cell Powerplant (FCP) hardware. The EPG/FCP hardware is required for performing functions of electrical power generation and product water distribution in the Orbiter. Specifically, the EPG/FCP hardware consists of the following divisions: (1) Power Section Assembly (PSA); (2) Reactant Control Subsystem (RCS); (3) Thermal Control Subsystem (TCS); and (4) Water Removal Subsystem (WRS). The IOA analysis process utilized available EPG/FCP hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  9. Independent Orbiter Assessment (IOA): Analysis of the orbital maneuvering system

    NASA Technical Reports Server (NTRS)

    Prust, C. D.; Paul, D. J.; Burkemper, V. J.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbital Maneuvering System (OMS) hardware are documented. The OMS provides the thrust to perform orbit insertion, orbit circularization, orbit transfer, rendezvous, and deorbit. The OMS is housed in two independent pods located one on each side of the tail and consists of the following subsystems: Helium Pressurization; Propellant Storage and Distribution; Orbital Maneuvering Engine; and Electrical Power Distribution and Control. The IOA analysis process utilized available OMS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  10. Creep-Fatigue Failure Diagnosis

    PubMed Central

    Holdsworth, Stuart

    2015-01-01

    Failure diagnosis invariably involves consideration of both associated material condition and the results of a mechanical analysis of prior operating history. This Review focuses on these aspects with particular reference to creep-fatigue failure diagnosis. Creep-fatigue cracking can be due to a spectrum of loading conditions ranging from pure cyclic to mainly steady loading with infrequent off-load transients. These require a range of mechanical analysis approaches, a number of which are reviewed. The microstructural information revealing material condition can vary with alloy class. In practice, the detail of the consequent cracking mechanism(s) can be camouflaged by oxidation at high temperatures, although the presence of oxide on fracture surfaces can be used to date events leading to failure. Routine laboratory specimen post-test examination is strongly recommended to characterise the detail of deformation and damage accumulation under known and well-controlled loading conditions to improve the effectiveness and efficiency of failure diagnosis. PMID:28793676

  11. Characterization of emission microscopy and liquid crystal thermography in IC fault localization

    NASA Astrophysics Data System (ADS)

    Lau, C. K.; Sim, K. S.

    2013-05-01

    This paper characterizes two fault localization techniques, Emission Microscopy (EMMI) and Liquid Crystal Thermography (LCT), using integrated circuit (IC) leakage failures. The majority of today's semiconductor failures do not reveal a clear visual defect on the die surface and therefore require fault localization tools to identify the fault location. Among the various fault localization tools, liquid crystal thermography and frontside emission microscopy are commonly used in most semiconductor failure analysis laboratories. Many people wrongly assume that both techniques are the same and that both detect hot spots in chips failing with shorts or leakage. As a result, analysts tend to use only LCT, since this technique involves a very simple test setup compared to EMMI. The omission of EMMI as the alternative technique in fault localization often leads to incomplete analysis when LCT fails to localize any hot spot on a failing chip. Therefore, this research was established to characterize and compare both techniques in terms of their sensitivity in detecting the fault location in common semiconductor failures. A new method was also proposed as an alternative technique, i.e., the backside LCT technique. The research observed that both techniques successfully detected the defect locations resulting from the leakage failures. LCT was observed to be more sensitive than EMMI in the frontside analysis approach. On the other hand, EMMI performed better in the backside analysis approach. LCT was more sensitive in localizing ESD defect locations and EMMI was more sensitive in detecting non-ESD defect locations. Backside LCT was proven to work as effectively as frontside LCT and is ready to serve as an alternative technique to backside EMMI. The research confirmed that LCT detects heat generation and EMMI detects photon emission (recombination radiation). The analysis results also suggested that the two techniques complement each other in IC fault localization. It is necessary for a failure analyst to use both techniques when one of them produces no result.

  12. Safety evaluation of driver cognitive failures and driving errors on right-turn filtering movement at signalized road intersections based on Fuzzy Cellular Automata (FCA) model.

    PubMed

    Chai, Chen; Wong, Yiik Diew; Wang, Xuesong

    2017-07-01

    This paper proposes a simulation-based approach to estimating the safety impact of driver cognitive failures and driving errors. Fuzzy Logic, which involves linguistic terms and uncertainty, is incorporated with a Cellular Automata model to simulate the decision-making process of the right-turn filtering movement at signalized intersections. Simulation experiments are conducted to estimate the relationships of cognitive failures and driving errors with safety performance. Simulation results show that different types of cognitive failures have varied relationships with driving errors and safety performance. For the right-turn filtering movement, cognitive failures are more likely to result in driving errors with a denser conflicting traffic stream. Moreover, different driving errors are found to have different safety impacts. The study serves to provide a novel approach to linguistically assess cognitions and replicate decision-making procedures of the individual driver. Compared to crash analysis, the proposed FCA model allows quantitative estimation of particular cognitive failures, and of the impact of cognitions on driving errors and safety performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
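
    The fuzzy side of an FCA model can be illustrated with a single Mamdani-style rule; the membership functions and variables below are invented for illustration and are not the paper's calibrated model.

      def tri(x, a, b, c):
          # Triangular membership function peaking at b on the support [a, c]
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      density = 0.7  # normalized conflicting-traffic density (illustrative)
      gap_s = 3.0    # available gap in seconds (illustrative)

      mu_dense = tri(density, 0.4, 0.8, 1.0)  # degree to which traffic is "dense"
      mu_short = tri(gap_s, 0.0, 2.0, 4.0)    # degree to which the gap is "short"

      # Rule: IF traffic is dense AND gap is short THEN driving-error risk is high
      mu_risk_high = min(mu_dense, mu_short)  # Mamdani AND = min
      print(f"activation of 'driving-error risk high': {mu_risk_high:.2f}")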

  13. Independent Orbiter Assessment (IOA): Analysis of the purge, vent and drain subsystem

    NASA Technical Reports Server (NTRS)

    Bynum, M. C., III

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter PV and D (Purge, Vent and Drain) Subsystem hardware. The PV and D Subsystem controls the environment of unpressurized compartments and window cavities, senses hazardous gases, and purges the Orbiter/ET Disconnect. The subsystem is divided into six systems: Purge System (controls the environment of unpressurized structural compartments); Vent System (controls the pressure of unpressurized compartments); Drain System (removes water from unpressurized compartments); Hazardous Gas Detection System (HGDS) (monitors hazardous gas concentrations); Window Cavity Conditioning System (WCCS) (maintains clear windows and provides pressure control of the window cavities); and External Tank/Orbiter Disconnect Purge System (prevents cryo-pumping/icing of disconnect hardware). Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Four of the sixty-two failure modes analyzed were determined to be single failures which could result in the loss of crew or vehicle. A possible loss of mission could result if any of twelve single failures occurred. Two of the criticality 1/1 failures are in the Window Cavity Conditioning System (WCCS) outer window cavity, where leakage and/or restricted flow will cause failure to depressurize/repressurize the window cavity. Two criticality 1/1 failures represent leakage and/or restricted flow in the Orbiter/ET disconnect purge network, which prevents cryopumping/icing of disconnect hardware.

  14. Computer-aided operations engineering with integrated models of systems and operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Ryan, Dan; Fleming, Land

    1994-01-01

    CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.

  15. Independent Orbiter Assessment (IOA): Analysis of the displays and controls subsystem

    NASA Technical Reports Server (NTRS)

    Trahan, W. H.; Prust, E. E.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Displays and Controls (D and C) subsystem hardware. The function of the D and C hardware is to provide the crew with the monitor, command, and control capabilities required for management of all normal and contingency mission and flight operations. The D and C hardware for which failure modes analysis was performed consists of the following: Acceleration Indicator (G-METER); Head Up Display (HUD); Display Driver Unit (DDU); Alpha/Mach Indicator (AMI); Horizontal Situation Indicator (HSI); Attitude Director Indicator (ADI); Propellant Quantity Indicator (PQI); Surface Position Indicator (SPI); Altitude/Vertical Velocity Indicator (AVVI); Caution and Warning Assembly (CWA); Annunciator Control Assembly (ACA); Event Timer (ET); Mission Timer (MT); Interior Lighting; and Exterior Lighting. Each hardware item was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  16. Integrating Insults: Using Fault Tree Analysis to Guide Schizophrenia Research across Levels of Analysis

    PubMed Central

    MacDonald III, Angus W.; Zick, Jennifer L.; Chafee, Matthew V.; Netoff, Theoden I.

    2016-01-01

    The grand challenges of schizophrenia research are linking the causes of the disorder to its symptoms and finding ways to overcome those symptoms. We argue that the field will be unable to address these challenges within psychiatry's standard neo-Kraepelinian (DSM) perspective. At the same time the current corrective, based in molecular genetics and cognitive neuroscience, is also likely to flounder due to its neglect of psychiatry's syndromal structure. We suggest adopting a new approach long used in reliability engineering, which also serves as a synthesis of these approaches. This approach, known as fault tree analysis, can be combined with extant neuroscientific data collection and computational modeling efforts to uncover the causal structures underlying the cognitive and affective failures in people with schizophrenia as well as other complex psychiatric phenomena. By making explicit how causes combine from basic faults to downstream failures, this approach makes affordances for: (1) causes that are neither necessary nor sufficient in and of themselves; (2) within-diagnosis heterogeneity; and (3) between-diagnosis co-morbidity. PMID:26779007
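
    Mechanically, fault tree analysis propagates basic-fault probabilities through AND/OR gates. The toy tree below is entirely invented, not a model from the paper; it shows how a downstream failure can require a conjunction of causes, none of which is sufficient alone.

      def or_gate(*probs):
          # Failure if any independent input fails: 1 - prod(1 - p_i)
          out = 1.0
          for p in probs:
              out *= 1.0 - p
          return 1.0 - out

      def and_gate(*probs):
          # Failure only if all independent inputs fail: prod(p_i)
          out = 1.0
          for p in probs:
              out *= p
          return out

      # Hypothetical basic-fault probabilities
      p_synaptic, p_dopamine, p_network = 0.10, 0.15, 0.05

      # Downstream failure requires the network fault AND at least one upstream fault
      p_deficit = and_gate(p_network, or_gate(p_synaptic, p_dopamine))
      print(f"P(downstream cognitive failure) = {p_deficit:.4f}")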

  17. Combining System Safety and Reliability to Ensure NASA CoNNeCT's Success

    NASA Technical Reports Server (NTRS)

    Havenhill, Maria; Fernandez, Rene; Zampino, Edward

    2012-01-01

    Hazard Analysis, Failure Modes and Effects Analysis (FMEA), the Limited-Life Items List (LLIL), and the Single Point Failure (SPF) List were applied by System Safety and Reliability engineers on NASA's Communications, Navigation, and Networking reConfigurable Testbed (CoNNeCT) Project. The integrated approach, involving cross reviews of these reports by System Safety, Reliability, and Design engineers, resulted in the mitigation of all identified hazards. The outcome was that the system met all of its safety requirements.

  18. Global resilience analysis of water distribution systems.

    PubMed

    Diao, Kegong; Sweetapple, Chris; Farmani, Raziyeh; Fu, Guangtao; Ward, Sarah; Butler, David

    2016-12-01

    Evaluating and enhancing resilience in water infrastructure is a crucial step towards more sustainable urban water management. As a prerequisite to enhancing resilience, a detailed understanding is required of the inherent resilience of the underlying system. Differing from traditional risk analysis, here we propose a global resilience analysis (GRA) approach that shifts the objective from analysing multiple and unknown threats to analysing the more identifiable and measurable system responses to extreme conditions, i.e. potential failure modes. GRA aims to evaluate a system's resilience to a possible failure mode regardless of the causal threat(s) (known or unknown, external or internal). The method is applied to test the resilience of four water distribution systems (WDSs) with various features to three typical failure modes (pipe failure, excess demand, and substance intrusion). The study reveals GRA provides an overview of a water system's resilience to various failure modes. For each failure mode, it identifies the range of corresponding failure impacts and reveals extreme scenarios (e.g. the complete loss of water supply with only 5% pipe failure, or still meeting 80% of demand despite over 70% of pipes failing). GRA also reveals that increased resilience to one failure mode may decrease resilience to another and increasing system capacity may delay the system's recovery in some situations. It is also shown that selecting an appropriate level of detail for hydraulic models is of great importance in resilience analysis. The method can be used as a comprehensive diagnostic framework to evaluate a range of interventions for improving system resilience in future studies. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. A systems engineering approach to automated failure cause diagnosis in space power systems

    NASA Technical Reports Server (NTRS)

    Dolce, James L.; Faymon, Karl A.

    1987-01-01

    Automatic failure-cause diagnosis is a key element in autonomous operation of space power systems such as Space Station's. A rule-based diagnostic system has been developed for determining the cause of degraded performance. The knowledge required for such diagnosis is elicited from the system engineering process by using traditional failure analysis techniques. Symptoms, failures, causes, and detector information are represented with structured data; and diagnostic procedural knowledge is represented with rules. Detected symptoms instantiate failure modes and possible causes consistent with currently held beliefs about the likelihood of the cause. A diagnosis concludes with an explanation of the observed symptoms in terms of a chain of possible causes and subcauses.
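
    A toy forward-chaining rule base in the spirit described above; the symptoms, rules, and causes are invented here, not taken from the authors' knowledge base.

      RULES = [
          ({"bus_voltage_low", "array_current_normal"}, "battery_degradation"),
          ({"bus_voltage_low", "array_current_low"}, "array_string_open"),
          ({"battery_degradation", "temp_high"}, "cell_thermal_runaway_risk"),
      ]

      def diagnose(symptoms):
          # Fire rules until no new conclusions are derivable (forward chaining)
          facts = set(symptoms)
          changed = True
          while changed:
              changed = False
              for conditions, conclusion in RULES:
                  if conditions <= facts and conclusion not in facts:
                      facts.add(conclusion)
                      changed = True
          return facts - set(symptoms)

      print(diagnose({"bus_voltage_low", "array_current_normal", "temp_high"}))

    Tracing the fired rules back from conclusion to symptoms yields exactly the kind of chain of causes and subcauses the abstract describes as the diagnostic explanation.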

  20. A Mid-Layer Model for Human Reliability Analysis: Understanding the Cognitive Causes of Human Failure Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stacey M. L. Hendrickson; April M. Whaley; Ronald L. Boring

    The Office of Nuclear Regulatory Research (RES) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method's middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.

  1. Identification of Modeling Approaches To Support Common-Cause Failure Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korsah, Kofi; Wood, Richard Thomas

    2015-06-01

    Experience with applying current guidance and practices for common-cause failure (CCF) mitigation to digital instrumentation and control (I&C) systems has proven problematic, and the regulatory environment has been unpredictable. The impact of CCF vulnerability is to inhibit I&C modernization and, thereby, challenge the long-term sustainability of existing plants. For new plants and advanced reactor concepts, the issue of CCF vulnerability for highly integrated digital I&C systems imposes a design burden resulting in higher costs and increased complexity. The regulatory uncertainty regarding which mitigation strategies are acceptable (e.g., what diversity is needed and how much is sufficient) drives designers to adopt complicated, costly solutions devised for existing plants. The conditions that constrain the transition to digital I&C technology by the U.S. nuclear industry require crosscutting research to resolve uncertainty, demonstrate necessary characteristics, and establish an objective basis for qualification of digital technology for usage in Nuclear Power Plant (NPP) I&C applications. To fulfill this research need, Oak Ridge National Laboratory is conducting an investigation into mitigation of CCF vulnerability for nuclear-qualified applications. The outcome of this research is expected to contribute to a fundamentally sound, comprehensive technical basis for establishing the qualification of digital technology for nuclear power applications. This report documents the investigation of modeling approaches for representing failure of I&C systems. Failure models are used when there is a need to analyze how the probability of success (or failure) of a system depends on the success (or failure) of individual elements. If these failure models are extensible to represent CCF, then they can be employed to support analysis of CCF vulnerabilities and mitigation strategies. Specifically, the research findings documented in this report identify modeling approaches that can be adapted to contribute to the basis for developing systematic methods, quantifiable measures, and objective criteria for evaluating CCF vulnerabilities and mitigation strategies.

  2. Failure mode and effects analysis using intuitionistic fuzzy hybrid weighted Euclidean distance operator

    NASA Astrophysics Data System (ADS)

    Liu, Hu-Chen; Liu, Long; Li, Ping

    2014-10-01

    Failure mode and effects analysis (FMEA) has shown its effectiveness in examining potential failures in products, processes, designs or services and has been extensively used for safety and reliability analysis in a wide range of industries. However, its approach of prioritising failure modes through a crisp risk priority number (RPN) has been criticised as having several shortcomings. The aim of this paper is to develop an efficient and comprehensive risk assessment methodology using an intuitionistic fuzzy hybrid weighted Euclidean distance (IFHWED) operator to overcome the limitations and improve the effectiveness of traditional FMEA. The diversified and uncertain assessments given by FMEA team members are treated as linguistic terms expressed in intuitionistic fuzzy numbers (IFNs). An intuitionistic fuzzy weighted averaging (IFWA) operator is used to aggregate the FMEA team members' individual assessments into a group assessment, and the IFHWED operator is applied thereafter to the prioritisation and selection of failure modes. In particular, both subjective and objective weights of risk factors are considered during the risk evaluation process. Finally, a numerical example of risk assessment is given to illustrate the proposed method.
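
    Two of the paper's ingredients, IFWA aggregation and an intuitionistic-fuzzy Euclidean distance, have standard textbook forms that can be sketched directly. The ratings and weights below are invented, and the full IFHWED operator adds a hybrid ordered weighting that is not reproduced here.

      import math

      # An intuitionistic fuzzy number (IFN) is a pair (mu, nu) with mu + nu <= 1
      # and hesitancy pi = 1 - mu - nu

      def ifwa(ifns, weights):
          # Intuitionistic fuzzy weighted averaging:
          # mu = 1 - prod((1 - mu_i)**w_i), nu = prod(nu_i**w_i)
          mu = 1.0 - math.prod((1.0 - m) ** w for (m, _), w in zip(ifns, weights))
          nu = math.prod(n ** w for (_, n), w in zip(ifns, weights))
          return mu, nu

      def ifn_distance(a, b):
          # Normalized Euclidean distance between two IFNs, hesitancy included
          pa, pb = 1.0 - sum(a), 1.0 - sum(b)
          return math.sqrt(0.5 * ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 + (pa - pb) ** 2))

      # Three team members rate one failure mode; aggregate, then measure the
      # distance from the maximally risky rating (1, 0)
      ratings = [(0.7, 0.2), (0.6, 0.3), (0.8, 0.1)]
      group = ifwa(ratings, [0.4, 0.3, 0.3])
      print(f"group rating: mu = {group[0]:.3f}, nu = {group[1]:.3f}")
      print(f"distance to ideal: {ifn_distance(group, (1.0, 0.0)):.3f}")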

  3. A Study of Specific Fracture Energy at Percussion Drilling

    NASA Astrophysics Data System (ADS)

    Shadrina, A.; Kabanova, T.; Krets, V.; Saruev, L.

    2014-08-01

    The paper presents experimental studies of rock failure produced by percussion drilling. Quantitative and qualitative analyses were carried out to estimate critical values of rock failure depending on the hammer pre-impact velocity, the type of drill bit, the cylindrical hammer parameters (weight, length, diameter), and the turn angle of the drill bit. The data obtained in this work were compared with results reported by other researchers. The particle-size distribution in granite-cutting sludge was also analyzed. A statistical approach (Spearman's rank-order correlation, multiple regression analysis with dummy variables, and the Kruskal-Wallis nonparametric test) was used to analyze the drilling process. The experimental data will be useful for specialists engaged in the simulation and illustration of rock failure.

  4. Multiresolution Wavelet Analysis of Heartbeat Intervals Discriminates Healthy Patients from Those with Cardiac Pathology

    NASA Astrophysics Data System (ADS)

    Thurner, Stefan; Feurstein, Markus C.; Teich, Malvin C.

    1998-02-01

    We applied multiresolution wavelet analysis to the sequence of times between human heartbeats (R-R intervals) and have found a scale window, between 16 and 32 heartbeat intervals, over which the widths of the R-R wavelet coefficients fall into disjoint sets for normal and heart-failure patients. This has enabled us to correctly classify every patient in a standard data set as belonging either to the heart-failure or normal group with 100% accuracy, thereby providing a clinically significant measure of the presence of heart failure from the R-R intervals alone. Comparison is made with previous approaches, which have provided only statistically significant measures.
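
    The measurement itself is easy to reproduce in outline with the PyWavelets package (assumed installed); the surrogate R-R series below is synthetic noise, so it will not show the clinical separation the paper reports.

      import numpy as np
      import pywt

      rng = np.random.default_rng(0)

      # Surrogate R-R interval series in seconds; a real analysis would use
      # recorded interbeat intervals
      rr = 0.8 + 0.05 * rng.standard_normal(1024)

      # Multiresolution decomposition; detail coefficients at levels 4 and 5
      # correspond to fluctuations over roughly 16- and 32-beat windows
      coeffs = pywt.wavedec(rr, "db3", level=5)
      for level, detail in zip(range(5, 0, -1), coeffs[1:]):
          print(f"scale ~{2 ** level:2d} beats: coefficient std = {np.std(detail):.4f}")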

  5. Impact of Distributed Energy Resources on the Reliability of Critical Telecommunications Facilities: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, D. G.; Arent, D. J.; Johnson, L.

    2006-06-01

    This paper documents a probabilistic risk assessment of existing and alternative power supply systems at a large telecommunications office. The analysis characterizes the increase in the reliability of power supply through the use of two alternative power configurations. Failures in the power systems supporting major telecommunications service nodes are a main contributor to significant telecommunications outages. A logical approach to improving the robustness of telecommunication facilities is to increase the depth and breadth of technologies available to restore power during power outages. Distributed energy resources such as fuel cells and gas turbines could provide additional on-site electric power sources to provide backup power if batteries and diesel generators fail. The analysis is based on a hierarchical Bayesian approach and focuses on the failure probability associated with each of three possible facility configurations, along with assessment of the uncertainty or confidence level in the probability of failure. A risk-based characterization of the final best configuration is presented.
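
    The Bayesian machinery can be illustrated at its simplest level with a beta-binomial update of a backup-system failure probability; the prior and counts below are invented, and the study's full hierarchy over configurations is not reproduced.

      from scipy import stats

      # Weak beta prior on failure-on-demand probability (mean 0.025, illustrative)
      prior_a, prior_b = 0.5, 19.5
      failures, demands = 2, 480  # hypothetical backup-power demand history

      # Beta prior is conjugate to binomial demands, so the posterior is Beta
      post = stats.beta(prior_a + failures, prior_b + demands - failures)
      lo, hi = post.ppf([0.05, 0.95])
      print(f"posterior mean Pf: {post.mean():.4f}")
      print(f"90% credible interval: ({lo:.4f}, {hi:.4f})")

    Reporting the interval alongside the point estimate is what the abstract means by assessing the confidence level in the probability of failure.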

  6. Using Combined SFTA and SFMECA Techniques for Space Critical Software

    NASA Astrophysics Data System (ADS)

    Nicodemos, F. G.; Lahoz, C. H. N.; Abdala, M. A. D.; Saotome, O.

    2012-01-01

    This work addresses the combined Software Fault Tree Analysis (SFTA) and Software Failure Modes, Effects and Criticality Analysis (SFMECA) techniques applied to space critical software of satellite launch vehicles. The combined approach is under research as part of the Verification and Validation (V&V) efforts to increase software dependability, and for future application in other projects under development at Instituto de Aeronáutica e Espaço (IAE). The approach was exercised on a system software specification and applied to a case study based on the Brazilian Satellite Launcher (VLS). The main goal is to identify possible failure causes and obtain compensating provisions that lead to the inclusion of new functional and non-functional system software requirements.

  7. Life Prediction Issues in Thermal/Environmental Barrier Coatings in Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Brewer, David N.; Murthy, Pappu L. N.

    2001-01-01

    Issues and design requirements for environmental barrier coating (EBC)/thermal barrier coating (TBC) life, both general and those specific to the NASA Ultra-Efficient Engine Technology (UEET) development program, are described. The current state and trend of the research, the methods in vogue for failure analysis, and the long-term behavior and life prediction of EBC/TBC systems are reported. Also, the perceived failure mechanisms, variables, and related uncertainties governing EBC/TBC system life are summarized. A combined heat transfer and structural analysis approach, based on oxidation kinetics using Arrhenius theory, is proposed to develop a life prediction model for EBC/TBC systems. A stochastic process-based reliability approach that includes physical variables such as gas pressure, temperature, velocity, moisture content, crack density, and oxygen content is suggested. Benefits of the reliability-based approach are also discussed in the report.
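
    The proposed oxidation-kinetics basis can be sketched as an Arrhenius rate constant feeding a parabolic oxide-growth life estimate; the pre-exponential factor, activation energy, and critical scale thickness below are placeholders, not fitted EBC/TBC constants.

      import math

      R = 8.314  # gas constant, J/(mol*K)

      def arrhenius_kp(temp_k, a_pre=1.0e6, ea=250e3):
          # Arrhenius oxidation-rate constant kp = A * exp(-Ea / (R*T));
          # A and Ea are illustrative placeholders
          return a_pre * math.exp(-ea / (R * temp_k))

      def life_hours(x_crit_um, temp_k):
          # Parabolic oxide growth x**2 = kp * t, solved for the time to
          # reach a critical scale thickness x_crit
          return x_crit_um ** 2 / arrhenius_kp(temp_k)

      for t_c in (1100, 1200, 1300):
          print(f"{t_c} C: predicted life {life_hours(5.0, t_c + 273.15):,.0f} h")

    The strong temperature dependence of the rate constant is what drives the life prediction, and it is also where a stochastic treatment of temperature and the other variables enters the reliability approach.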

  8. Preoperative short hookwire placement for small pulmonary lesions: evaluation of technical success and risk factors for initial placement failure.

    PubMed

    Iguchi, Toshihiro; Hiraki, Takao; Matsui, Yusuke; Fujiwara, Hiroyasu; Masaoka, Yoshihisa; Tanaka, Takashi; Sato, Takuya; Gobara, Hideo; Toyooka, Shinichi; Kanazawa, Susumu

    2018-05-01

    To retrospectively evaluate the technical success of computed tomography fluoroscopy-guided short hookwire placement before video-assisted thoracoscopic surgery and to identify the risk factors for initial placement failure. In total, 401 short hookwire placements for 401 lesions (mean diameter 9.3 mm) were reviewed. Technical success was defined as correct positioning of the hookwire. Possible risk factors for initial placement failure (i.e., requirement for placement of an additional hookwire or aborting the attempt) were evaluated using logistic regression analysis for all procedures, and separately for procedures performed via the conventional route. Of the 401 initial placements, 383 were successful and 18 failed. Short hookwires were finally placed for 399 of 401 lesions (99.5%). Univariate logistic regression analyses revealed that across all 401 procedures only the transfissural approach was a significant independent predictor of initial placement failure (odds ratio, OR, 15.326; 95% confidence interval, CI, 5.429-43.267; p < 0.001), and for the 374 procedures performed via the conventional route only lesion size was a significant independent predictor of failure (OR 0.793, 95% CI 0.631-0.996; p = 0.046). The technical success of preoperative short hookwire placement was extremely high. The transfissural approach was a predictor of initial placement failure for all procedures, and small lesion size was a predictor of initial placement failure for procedures performed via the conventional route. • Technical success of preoperative short hookwire placement was extremely high. • The transfissural approach was a significant independent predictor of initial placement failure for all procedures. • Small lesion size was a significant independent predictor of initial placement failure for procedures performed via the conventional route.
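
    The univariate logistic regressions reported above can be mimicked on synthetic data with the statsmodels package (assumed installed); the coefficients below are chosen only to produce a large odds ratio and are not the study's data.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)

      # Synthetic stand-in: initial placement failure vs. transfissural route
      n = 400
      transfissural = rng.binomial(1, 0.07, n)
      p_fail = 1.0 / (1.0 + np.exp(-(-3.0 + 2.7 * transfissural)))
      failure = rng.binomial(1, p_fail)

      # Univariate logistic regression; exponentiating the slope gives the OR
      fit = sm.Logit(failure, sm.add_constant(transfissural)).fit(disp=0)
      or_est = np.exp(fit.params[1])
      ci_lo, ci_hi = np.exp(fit.conf_int()[1])
      print(f"odds ratio: {or_est:.1f} (95% CI {ci_lo:.1f}-{ci_hi:.1f})")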

  9. Abduction of Toe-excavation Induced Failure Process from LEM and FDM for a Dip Slope with Rock Anchorage in Taiwan

    NASA Astrophysics Data System (ADS)

    Huang, W.-S.; Lin, M.-L.; Liu, H.-C.; Lin, H.-H.

    2012-04-01

    On April 25, 2010, without rainfall or earthquake triggering, a massive landslide (200,000 m3) covered a 200 m stretch of Taiwan's National Freeway No. 3, killing 4 people, burying three cars and destroying a bridge. The failure mode appears to be a dip-slope failure occurring on a rock-anchored cut slope. The strike of the Tertiary sedimentary strata is northeast-southwest, dipping 15° toward the southeast. Based on the investigations of the Taiwan Geotechnical Society, three possible factors contributed to the failure mechanism: (1) toe-excavation during construction in 1998 daylighted the sliding layer and induced strength reduction in it; it also caused the loads on the anchors to increase rapidly and approach their ultimate capacity; (2) although the excavated area was soon stabilized with rock anchors and backfill, weathering and groundwater infiltration caused strength reduction of the overlying rock mass; (3) possible corrosion and the age of the ground anchors deteriorated their loading capacity. Considering that the strength of the sliding layer had reduced from peak to residual owing to the disturbance of excavation, a limit equilibrium method (LEM) back analysis was carried out first. The results showed that the stability condition of the slope approached the critical state (F.S. ≈ 1). The efficiency reduction of the rock anchors and the strength reduction of the overlying stratum (sandstone) were considered in the following analysis, which showed an unstable condition (F.S. < 1). This research also used laboratory test results, the geological strength index (GSI) and a finite difference method (FDM, FLAC 5.0) to examine the failure process under the interaction of toe-excavation disturbance, rock-mass weathering, groundwater infiltration and anchor efficiency reduction on the stability of the slope. The analysis indicated that the incremental anchor loads follow a tendency similar to the monitoring records during the toe-excavation stages, confirming that the strength of the sliding layer was significantly influenced by toe-excavation. The numerical model, calibrated against the monitoring records of the excavation stage, was then used to examine the failure process after backfilling; the results showed how the different factors interact in the failure process. Keywords: dip slope failure, rock anchor, LEM, FDM, GSI, back analysis.
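
    The limit-equilibrium check at the heart of the back analysis reduces to a planar-sliding factor of safety; the geometry, strengths, and anchor force below are invented to show the peak-to-residual drop, not the investigation's values.

      import math

      def planar_fs(W, alpha_deg, c, A, phi_deg, U=0.0, T=0.0):
          # Planar (dip-slope) sliding: FS = resisting / driving forces
          # = [c*A + (W*cos(a) - U)*tan(phi) + T] / (W*sin(a)),
          # with the anchor force T simplistically taken as acting up-dip
          a = math.radians(alpha_deg)
          resisting = c * A + (W * math.cos(a) - U) * math.tan(math.radians(phi_deg)) + T
          return resisting / (W * math.sin(a))

      # Peak vs. residual strength of the sliding layer (illustrative values)
      for label, c_kpa, phi in (("peak", 30.0, 25.0), ("residual", 5.0, 12.0)):
          fs = planar_fs(W=5.0e5, alpha_deg=15.0, c=c_kpa, A=2.0e3, phi_deg=phi, T=2.0e4)
          print(f"{label:8s} c = {c_kpa} kPa, phi = {phi} deg -> FS = {fs:.2f}")

    With these placeholder numbers the factor of safety falls from roughly 2.4 at peak strength to about 1.0 at residual strength, mirroring the near-critical state the back analysis found.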

  10. Independent Orbiter Assessment (IOA): Analysis of the elevon subsystem

    NASA Technical Reports Server (NTRS)

    Wilson, R. E.; Riccio, J. R.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results for the Orbiter Elevon system hardware. The elevon actuators are located at the trailing edge of the wing surface. The proper function of the elevons is essential during the dynamic flight phases of ascent and entry. In the ascent phase of flight, the elevons are used for relieving high wing loads. For entry, the elevons are used to pitch and roll the vehicle. Specifically, the elevon system hardware comprises the following components: flow cutoff valve; switching valve; electro-hydraulic (EH) servoactuator; secondary delta pressure transducer; bypass valve; power valve; power valve check valve; primary actuator; primary delta pressure transducer; and primary actuator position transducer. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 25 failure modes analyzed, 18 were determined to be PCIs.

  11. Application of Vibration and Oil Analysis for Reliability Information on Helicopter Main Rotor Gearbox

    NASA Astrophysics Data System (ADS)

    Murrad, Muhamad; Leong, M. Salman

    Based on the experiences of the Malaysian Armed Forces (MAF), failure of the main rotor gearbox (MRGB) was one of the major contributing factors to helicopter breakdowns. Even though vibration and oil analysis are effective techniques for monitoring the health of helicopter components, the two techniques were rarely combined to form an effective assessment tool in the MAF. Results of the oil analysis were often used only for the oil-change schedule, while assessments of MRGB condition were based mainly on overall vibration readings. A study group was formed and given a mandate to improve the maintenance strategy of the S61-A4 helicopter fleet in the MAF. The improvement consisted of a structured approach to the reassessment and redefinition of suitable maintenance actions for the MRGB. Basic and enhanced tools for condition monitoring (CM) are investigated to address the predominant failures of the MRGB. Quantitative accelerated life testing (QALT) was considered in this work with the intent of obtaining the required reliability information in a shorter time, with tests under normal stress conditions. These tests, when performed correctly, can provide valuable information about MRGB performance under normal operating conditions, enabling maintenance personnel to make decisions more quickly, accurately and economically. The time-to-failure and probability-of-failure information for the MRGB were generated by applying QALT analysis principles. This study is anticipated to make a dramatic change in the MAF's approach to CM, bringing significant savings and various benefits.

  12. Independent Orbiter Assessment (IOA): Analysis of the electrical power distribution and control/electrical power generation subsystem

    NASA Technical Reports Server (NTRS)

    Patton, Jeff A.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C)/Electrical Power Generation (EPG) hardware. The EPD and C/EPG hardware is required for performing critical functions of cryogenic reactant storage, electrical power generation and product water distribution in the Orbiter. Specifically, the EPD and C/EPG hardware consists of the following components: Power Section Assembly (PSA); Reactant Control Subsystem (RCS); Thermal Control Subsystem (TCS); Water Removal Subsystem (WRS); and Power Reactant Storage and Distribution System (PRSDS). The IOA analysis process utilized available EPD and C/EPG hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  13. Independent Orbiter Assessment (IOA): FMEA/CIL assessment

    NASA Technical Reports Server (NTRS)

    Saiidi, Mo J.; Swain, L. J.; Compton, J. M.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. Direction was given by the Orbiter and GFE Projects Office to perform the hardware analysis and assessment using the instructions and ground rules defined in NSTS 22206. The IOA analysis features a top-down approach to determine hardware failure modes, criticality, and potential critical items. To preserve independence, the analysis was accomplished without reliance upon the results contained within the NASA and prime contractor FMEA/CIL documentation. The assessment process compares the independently derived failure modes and criticality assignments to the proposed NASA Post 51-L FMEA/CIL documentation. When possible, assessment issues are discussed and resolved with the NASA subsystem managers. The assessment results for each subsystem are summarized. The most important Orbiter assessment finding was the previously unknown stuck autopilot push-button criticality 1/1 failure mode, having a worst-case effect of loss of crew/vehicle when a microwave landing system is not active.

  14. Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.

    PubMed

    Xin, Cao; Chongshi, Gu

    2016-01-01

    Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. A stability failure risk ratio described jointly by probability and possibility falls short in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to the stability failure risk analysis of gravity dams. The stability of a gravity dam is viewed as a hybrid event, considering both the fuzziness and the randomness of the failure criterion, design parameters, and measured data. A credibility distribution function is introduced as a novel way to represent the uncertainty of the factors influencing gravity dam stability. Combined with Monte Carlo simulation, a corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to risk calculation for both the dam foundation and double sliding surfaces is provided. The results show that the present method is feasible for analyzing the stability failure risk of gravity dams. The risk assessment obtained reflects both sorts of uncertainty and is suitable as an index value.
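
    A minimal sketch of the hybrid idea, assuming a toy sliding limit state and invented parameter values rather than the paper's dam section: the friction coefficient is treated as a triangular fuzzy variable sampled through the inverse of its credibility distribution, cohesion as a random variable, and the stability failure risk is estimated by Monte Carlo simulation.

        import random

        def sample_triangular_credibility(a, b, c):
            # Inverse of the credibility distribution of a triangular fuzzy
            # variable (a, b, c): u <= 0.5 lands in [a, b], u > 0.5 in [b, c].
            u = random.random()
            if u <= 0.5:
                return a + 2.0 * u * (b - a)
            return 2.0 * b - c + 2.0 * u * (c - b)

        def failure_risk(n_samples=100_000):
            failures = 0
            for _ in range(n_samples):
                f = sample_triangular_credibility(0.9, 1.0, 1.1)  # fuzzy friction
                c = random.gauss(0.45, 0.05)            # random cohesion, MPa
                normal, shear, area = 60.0, 58.0, 10.0  # hypothetical loads
                margin = f * normal + c * area - shear  # sliding safety margin
                if margin < 0.0:                        # margin < 0 is failure
                    failures += 1
            return failures / n_samples

        print(f"estimated stability failure risk: {failure_risk():.4f}")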

  15. Unique challenges of hospice for patients with heart failure: A qualitative study of hospice clinicians.

    PubMed

    Lum, Hillary D; Jones, Jacqueline; Lahoff, Dana; Allen, Larry A; Bekelman, David B; Kutner, Jean S; Matlock, Daniel D

    2015-09-01

    Patients with heart failure have end-of-life care needs that may benefit from hospice care. The goal of this descriptive study was to understand hospice clinicians' perspectives on the unique aspects of caring for patients with heart failure to inform approaches to improving end-of-life care. This qualitative study explored experiences, observations, and perspectives of hospice clinicians regarding hospice care for patients with heart failure. Thirteen hospice clinicians from a variety of professional disciplines and clinical roles, diverse geographic regions, and varying lengths of time working in hospice participated in semistructured interviews. Through team-based, iterative qualitative analysis, we identified 3 major themes regarding care for patients with heart failure. First, care for patients with heart failure involves clinical complexity and a tailored approach to cardiac medications and advanced cardiac technologies. Second, hospice clinicians described the difficulty patients with heart failure have in trusting hospice care due to patient optimism, prognostic uncertainty, and reliance on prehospice health care providers. Third, hospice clinicians described opportunities to improve heart failure-specific hospice care, highlighting the desire for collaboration with referring cardiologists. From a hospice clinician perspective, caring for patients with heart failure is unique compared with other hospice populations. This study suggests potential opportunities for hospice clinicians and referring providers who seek to collaborate to improve care for patients with heart failure during the transition to hospice care. Published by Elsevier Inc.

  16. Independent Orbiter Assessment (IOA): Analysis of the body flap subsystem

    NASA Technical Reports Server (NTRS)

    Wilson, R. E.; Riccio, J. R.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Body Flap (BF) subsystem hardware are documented. The BF is a large aerosurface located at the trailing edge of the lower aft fuselage of the Orbiter. The proper function of the BF is essential during the dynamic flight phases of ascent and entry. During the ascent phase of flight, the BF trails in a fixed position. For entry, the BF provides elevon load relief, trim control, and acts as a heat shield for the main engines. Specifically, the BF hardware comprises the following components: Power Drive Unit (PDU), rotary actuators, and torque tubes. The IOA analysis process utilized available BF hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 35 failure modes analyzed, 19 were determined to be PCIs.

  17. An analysis of policy success and failure in formal evaluations of Australia's national mental health strategy (1992-2012).

    PubMed

    Grace, Francesca C; Meurk, Carla S; Head, Brian W; Hall, Wayne D; Harris, Meredith G; Whiteford, Harvey A

    2017-05-30

    Heightened fiscal constraints and increases in the chronic disease burden and in consumer expectations are among several factors contributing to the global interest in evidence-informed health policy. The present article builds on previous work that explored how the Australian Federal Government applied five instruments of policy, or policy levers, to implement a series of reforms under the Australian National Mental Health Strategy (NMHS). The present article draws on theoretical insights from political science to analyse the relative successes and failures of these levers, as portrayed in formal government evaluations of the NMHS. Documentary analysis of six evaluation documents corresponding to three National Mental Health Plans was undertaken. Both the content and approach of these government-funded, independently conducted evaluations were appraised. An overall improvement was apparent in the development and application of policy levers over time. However, this finding should be interpreted with caution due to variations in evaluation approach across Plans and policy levers. Tabulated summaries of the success and failure of each policy initiative, ordered by lever type, are provided to establish a resource that could be consulted for future policy-making. This analysis highlights the complexities of health service reform and underscores the limitations of narrowly focused empirical approaches. A theoretical framework is provided that could inform the evaluation and targeted selection of appropriate policy levers in mental health.

  18. Ply-level failure analysis of a graphite/epoxy laminate under bearing-bypass loading

    NASA Technical Reports Server (NTRS)

    Naik, R. A.; Crews, J. H., Jr.

    1988-01-01

    A combined experimental and analytical study was conducted to investigate and predict the failure modes of a graphite/epoxy laminate subjected to combined bearing and bypass loading. Tests were conducted in a test machine that allowed the bearing-bypass load ratio to be controlled while a single-fastener coupon was loaded to failure in either tension or compression. Onset and ultimate failure modes and strengths were determined for each test case. The damage-onset modes were studied in detail by sectioning and micrographing the damaged specimens. A two-dimensional, finite-element analysis was conducted to determine lamina strains around the bolt hole. Damage onset consisted of matrix cracks, delamination, and fiber failures. Stiffness loss appeared to be caused by fiber failures rather than by matrix cracking and delamination. An unusual offset-compression mode was observed for compressive bearing-bypass loading in which the specimen failed across its width along a line offset from the hole. The computed lamina strains in the fiber direction were used in a combined analytical and experimental approach to predict bearing-bypass diagrams for damage onset from a few simple tests.

  19. Ply-level failure analysis of a graphite/epoxy laminate under bearing-bypass loading

    NASA Technical Reports Server (NTRS)

    Naik, R. A.; Crews, J. H., Jr.

    1990-01-01

    A combined experimental and analytical study was conducted to investigate and predict the failure modes of a graphite/epoxy laminate subjected to combined bearing and bypass loading. Tests were conducted in a test machine that allowed the bearing-bypass load ratio to be controlled while a single-fastener coupon was loaded to failure in either tension or compression. Onset and ultimate failure modes and strengths were determined for each test case. The damage-onset modes were studied in detail by sectioning and micrographing the damaged specimens. A two-dimensional, finite-element analysis was conducted to determine lamina strains around the bolt hole. Damage onset consisted of matrix cracks, delamination, and fiber failures. Stiffness loss appeared to be caused by fiber failures rather than by matrix cracking and delamination. An unusual offset-compression mode was observed for compressive bearing-bypass loading in which the specimen failed across its width along a line offset from the hole. The computed lamina strains in the fiber direction were used in a combined analytical and experimental approach to predict bearing-bypass diagrams for damage onset from a few simple tests.

  20. Basic failure mechanisms in advanced composites

    NASA Technical Reports Server (NTRS)

    Mullin, J. V.; Mazzio, V. F.; Mehan, R. L.

    1972-01-01

    Failure mechanisms in carbon-epoxy composites are identified as a basis for more reliable prediction of the performance of these materials. The approach involves both the study of local fracture events in model specimens containing small groups of filaments and fractographic examination of high fiber content engineering composites. Emphasis is placed on the correlation of model specimen observations with gross fracture modes. The effects of fiber surface treatment, resin modification and fiber content are studied and acoustic emission methods are applied. Some effort is devoted to analysis of the failure process in composite/metal specimens.

  1. Continuous infusion or bolus injection of loop diuretics for congestive heart failure?

    PubMed

    Zepeda, Patricio; Rain, Carmen; Sepúlveda, Paola

    2016-04-22

    Loop diuretics are widely used in acute heart failure. However, there is controversy about the superiority of continuous infusion over bolus administration. Searching the Epistemonikos database, which is maintained by screening 30 databases, we identified four systematic reviews that together include 11 pertinent randomized controlled trials. We combined the evidence using meta-analysis and generated a summary of findings following the GRADE approach. We concluded that continuous administration of loop diuretics probably reduces mortality and length of stay compared with intermittent administration in patients with acute heart failure.
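
    As a rough illustration of the pooling step behind such a conclusion, the sketch below combines log risk ratios from several trials with fixed-effect inverse-variance weighting; the trial values are hypothetical, not the data of the cited reviews, and a full GRADE summary would additionally rate the certainty of the pooled estimate.

        import math

        # (log risk ratio, standard error) per trial -- hypothetical values.
        trials = [(-0.35, 0.20), (-0.10, 0.15), (-0.25, 0.30), (-0.40, 0.25)]

        weights = [1.0 / se ** 2 for _, se in trials]  # inverse-variance weights
        pooled = sum(w * lrr for (lrr, _), w in zip(trials, weights)) / sum(weights)
        se_pooled = math.sqrt(1.0 / sum(weights))
        lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

        print(f"pooled RR = {math.exp(pooled):.2f} "
              f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")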

  2. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
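
    A minimal sketch of the adaptive idea, assuming a toy two-variable limit state where g(x) < 0 defines failure: each round recenters the Gaussian proposal density on the failure samples found so far, and the final estimate reweights samples by the ratio of the true joint PDF to the proposal density. The limit state, the initial center, and the tuning constants are hypothetical, and the paper's incremental growth of the sampling domain is simplified here to mean recentering.

        import math, random

        def g(x1, x2):
            # Hypothetical limit state on standard normal inputs: g < 0 is failure.
            return 5.0 - (x1 + x2)

        def phi(x, mu=0.0, sd=1.0):
            # Univariate normal density.
            return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

        def ais_failure_probability(n_rounds=5, n_per_round=20_000):
            mu1, mu2 = 2.0, 2.0  # initial proposal center, a rough guess
            for _ in range(n_rounds):
                pts = [(random.gauss(mu1, 1.0), random.gauss(mu2, 1.0))
                       for _ in range(n_per_round)]
                fails = [p for p in pts if g(*p) < 0.0]
                if fails:  # recenter the proposal on the failure samples
                    mu1 = sum(p[0] for p in fails) / len(fails)
                    mu2 = sum(p[1] for p in fails) / len(fails)
            total = 0.0
            for _ in range(n_per_round):
                x1, x2 = random.gauss(mu1, 1.0), random.gauss(mu2, 1.0)
                if g(x1, x2) < 0.0:
                    # Importance weight: true density over proposal density.
                    total += (phi(x1) * phi(x2)) / (phi(x1, mu1) * phi(x2, mu2))
            return total / n_per_round

        print(f"P_f = {ais_failure_probability():.2e}")  # exact value is about 2e-4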

  3. Fear of failure, psychological stress, and burnout among adolescent athletes competing in high level sport.

    PubMed

    Gustafsson, H; Sagar, S S; Stenling, A

    2017-12-01

    The purpose of this study was to investigate fear of failure in highly competitive junior athletes and its association with psychological stress and burnout. In total, 258 athletes (152 males and 108 females) ranging in age from 15 to 19 years (M = 17.4 years, SD = 1.08) participated. Athletes competed in a variety of sports, including both team and individual sports. Results from a variable-oriented approach using regression analyses showed that one dimension, fear of experiencing shame and embarrassment, had a statistically significant effect on perceived psychological stress and on one dimension of burnout, reduced sense of accomplishment. However, adopting a person-oriented approach using latent class analysis, we found that athletes with high levels of fear of failure on all dimensions scored high on burnout. We also found another class with high scores on burnout; these athletes had high scores on the individual-oriented dimensions of fear of failure and low scores on the other-oriented fear of failure dimensions. The findings indicate that fear of failure is related to burnout and psychological stress in athletes and that this relation is driven mainly by the individual-oriented dimensions of fear of failure. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  4. Probabilistic framework for product design optimization and risk management

    NASA Astrophysics Data System (ADS)

    Keski-Rahkonen, J. K.

    2018-05-01

    Probabilistic methods have gradually gained ground within engineering practice, but it is still the industry standard to use deterministic safety-margin approaches to dimension components and qualitative methods to manage product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, on how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes due to the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
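
    A minimal load-resistance Monte Carlo sketch, assuming normally distributed strength and stress with invented parameters; in the framework described, such a failure probability estimate would then feed quantitative risk management and life cycle cost calculations.

        import random

        def monte_carlo_pof(n=200_000):
            # Load-resistance interference: the component fails on any trial
            # where the sampled resistance falls below the sampled load.
            failures = 0
            for _ in range(n):
                resistance = random.gauss(500.0, 40.0)  # strength, MPa
                load = random.gauss(350.0, 50.0)        # applied stress, MPa
                if resistance < load:
                    failures += 1
            return failures / n

        pof = monte_carlo_pof()
        print(f"probability of failure = {pof:.2e}")
        # A risk cost could then be estimated as pof * consequence_cost.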

  5. The Effect of Delamination on Damage Path and Failure Load Prediction for Notched Composite Laminates

    NASA Technical Reports Server (NTRS)

    Satyanarayana, Arunkumar; Bogert, Philip B.; Chunchu, Prasad B.

    2007-01-01

    The influence of delamination on the progressing damage path and initial failure load in composite laminates is investigated. Results are presented from a numerical and an experimental study of center-notched tensile-loaded coupons. The numerical study includes two approaches. The first approach considers only intralaminar (fiber breakage and matrix cracking) damage modes in calculating the progression of the damage path. In the second approach, the model is extended to consider the effect of interlaminar (delamination) damage modes in addition to the intralaminar damage modes. The intralaminar damage is modeled using progressive damage analysis (PDA) methodology implemented with the VUMAT subroutine in the ABAQUS finite element code. The interlaminar damage mode has been simulated using cohesive elements in ABAQUS. In the experimental study, 2-3 specimens each of two different stacking sequences of center-notched laminates are tensile loaded. The numerical results from the two different modeling approaches are compared with each other and the experimentally observed results for both laminate types. The comparisons reveal that the second modeling approach, where the delamination damage mode is included together with the intralaminar damage modes, better simulates the experimentally observed damage modes and damage paths, which were characterized by splitting failures perpendicular to the notch tips in one or more layers. Additionally, the inclusion of the delamination mode resulted in a better prediction of the loads at which the failure took place, which were higher than those predicted by the first modeling approach which did not include delaminations.

  6. Independent Orbiter Assessment (IOA): Analysis of the DPS subsystem

    NASA Technical Reports Server (NTRS)

    Lowery, H. J.; Haufler, W. A.; Pietz, K. C.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis/Critical Items List (FMEA/CIL) are presented. The IOA approach features a top-down analysis of the hardware to independently determine failure modes, criticality, and potential critical items. The independent analysis results corresponding to the Orbiter Data Processing System (DPS) hardware are documented. The DPS hardware is required for performing critical functions of data acquisition, data manipulation, data display, and data transfer throughout the Orbiter. Specifically, the DPS hardware consists of the following components: Multiplexer/Demultiplexer (MDM); General Purpose Computer (GPC); Multifunction CRT Display System (MCDS); Data Buses and Data Bus Couplers (DBC); Data Bus Isolation Amplifiers (DBIA); Mass Memory Unit (MMU); and Engine Interface Unit (EIU). The IOA analysis process utilized available DPS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Due to the extensive redundancy built into the DPS, the number of critical items is small. Those identified resulted from premature operation and erroneous output of the GPCs.

  7. Independent Orbiter Assessment (IOA): Analysis of the nose wheel steering subsystem

    NASA Technical Reports Server (NTRS)

    Mediavilla, Anthony Scott

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Nose Wheel Steering (NWS) hardware are documented. The NWS hardware provides primary directional control for the Orbiter vehicle during landing rollout. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. The original NWS design was envisioned as a backup system to differential braking for directional control of the Orbiter during landing rollout. No real effort was made to design the NWS system as fail operational. The brakes have much redundancy built into their design but the poor brake/tire performance has forced the NSTS to upgrade NWS to the primary mode of directional control during rollout. As a result, a large percentage of the NWS system components have become Potential Critical Items (PCI).

  8. Application of Failure Mode and Effect Analysis (FMEA), cause and effect analysis, and Pareto diagram in conjunction with HACCP to a corn curl manufacturing plant.

    PubMed

    Varzakas, Theodoros H; Arvanitoyannis, Ioannis S

    2007-01-01

    The Failure Mode and Effect Analysis (FMEA) model has been applied for the risk assessment of corn curl manufacturing. A tentative approach of FMEA application to the snacks industry was attempted in an effort to exclude the presence of GMOs in the final product. This is of crucial importance both from the ethics and the legislation (Regulations EC 1829/2003; EC 1830/2003; Directive EC 18/2001) point of view. The Preliminary Hazard Analysis and the Fault Tree Analysis were used to analyze and predict the occurring failure modes in a food chain system (corn curls processing plant), based on the functions, characteristics, and/or interactions of the ingredients or the processes, upon which the system depends. Critical Control Points have been identified and implemented in the cause-and-effect diagram (also known as the Ishikawa, tree, or fishbone diagram). Finally, Pareto diagrams were employed toward optimizing the GMO detection potential of FMEA.
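
    The scoring machinery shared by FMEA and the Pareto step is compact enough to sketch. Below, each hypothetical failure mode gets severity, occurrence, and detection scores on a 1-10 scale, the risk priority number (RPN) is their product, and the Pareto pass flags the modes accounting for roughly 80% of the total RPN; the modes and scores are invented, not taken from the corn curl study.

        # Hypothetical failure modes: (severity, occurrence, detection), 1-10.
        modes = {
            "GMO cross-contamination of corn grits": (9, 3, 6),
            "metal fragments from extruder wear":    (8, 2, 4),
            "aflatoxin in incoming corn":            (9, 2, 5),
            "under-drying (moisture too high)":      (4, 5, 3),
            "seasoning allergen mislabelling":       (7, 2, 2),
        }

        rpn = {m: s * o * d for m, (s, o, d) in modes.items()}
        ranked = sorted(rpn.items(), key=lambda kv: kv[1], reverse=True)

        # Pareto pass: mark the modes covering ~80% of the total RPN.
        total, running = sum(rpn.values()), 0.0
        for mode, value in ranked:
            running += value
            marker = "*" if running <= 0.8 * total else " "
            print(f"{marker} RPN={value:3d}  {mode}")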

  9. Management of reliability and maintainability; a disciplined approach to fleet readiness

    NASA Technical Reports Server (NTRS)

    Willoughby, W. J., Jr.

    1981-01-01

    Material acquisition fundamentals were reviewed and include: mission profile definition, stress analysis, derating criteria, circuit reliability, failure modes, and worst case analysis. Military system reliability was examined with emphasis on the sparing of equipment. The Navy's organizational strategy for 1980 is presented.

  10. A model for the progressive failure of laminated composite structural components

    NASA Technical Reports Server (NTRS)

    Allen, D. H.; Lo, D. C.

    1991-01-01

    Laminated continuous fiber polymeric composites are capable of sustaining substantial load induced microstructural damage prior to component failure. Because this damage eventually leads to catastrophic failure, it is essential to capture the mechanics of progressive damage in any cogent life prediction model. For the past several years the authors have been developing one solution approach to this problem. In this approach the mechanics of matrix cracking and delamination are accounted for via locally averaged internal variables which account for the kinematics of microcracking. Damage progression is predicted by using phenomenologically based damage evolution laws which depend on the load history. The result is a nonlinear and path-dependent constitutive model which has previously been implemented in a finite element computer code for analysis of structural components. Using an appropriate failure model, this algorithm can be used to predict component life. In this paper the model will be utilized to demonstrate the ability to predict the load path dependence of the damage and stresses in plates subjected to fatigue loading.

  11. Failure Diagnosis for the Holdup Tank System via ISFA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Huijuan; Bragg-Sitton, Shannon; Smidts, Carol

    This paper discusses the use of the integrated system failure analysis (ISFA) technique for fault diagnosis for the holdup tank system. ISFA is a simulation-based, qualitative and integrated approach used to study fault propagation in systems containing both hardware and software subsystems. The holdup tank system consists of a tank containing a fluid whose level is controlled by an inlet valve and an outlet valve. We introduce the component and functional models of the system, quantify the main parameters and simulate possible failure-propagation paths based on the fault propagation approach, ISFA. The results show that most component failures in the holdup tank system can be identified clearly and that ISFA is viable as a technique for fault diagnosis. Since ISFA is a qualitative technique that can be used in the very early stages of system design, this case study provides indications that it can be used early to study design aspects that relate to robustness and fault tolerance.

  12. Independent Orbiter Assessment (IOA): Analysis of the orbiter main propulsion system

    NASA Technical Reports Server (NTRS)

    Mcnicoll, W. J.; Mcneely, M.; Holden, K. A.; Emmons, T. E.; Lowery, H. J.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Main Propulsion System (MPS) hardware are documented. The Orbiter MPS consists of two subsystems: the Propellant Management Subsystem (PMS) and the Helium Subsystem. The PMS is a system of manifolds, distribution lines and valves by which the liquid propellants pass from the External Tank (ET) to the Space Shuttle Main Engines (SSMEs) and gaseous propellants pass from the SSMEs to the ET. The Helium Subsystem consists of a series of helium supply tanks and their associated regulators, check valves, distribution lines, and control valves. The Helium Subsystem supplies helium that is used within the SSMEs for inflight purges and provides pressure for actuation of SSME valves during emergency pneumatic shutdowns. The balance of the helium is used to provide pressure to operate the pneumatically actuated valves within the PMS. Each component was evaluated and analyzed for possible failure modes and effects. Criticalities were assigned based on the worst possible effect of each failure mode. Of the 690 failure modes analyzed, 349 were determined to be PCIs.

  13. Design and implementation of a novel mechanical testing system for cellular solids.

    PubMed

    Nazarian, Ara; Stauber, Martin; Müller, Ralph

    2005-05-01

    Cellular solids constitute an important class of engineering materials encompassing both man-made and natural constructs. Materials such as wood, cork, coral, and cancellous bone are examples of cellular solids. The structural analysis of cellular solid failure has been limited to 2D sections to illustrate global fracture patterns. Due to the inherent destructiveness of 2D methods, dynamic assessment of fracture progression has not been possible. Image-guided failure assessment (IGFA), a noninvasive technique to analyze 3D progressive bone failure, has been developed utilizing stepwise microcompression in combination with time-lapsed microcomputed tomographic imaging (microCT). This method allows for the assessment of fracture progression in the plastic region, where much of the structural deformation/energy absorption is encountered in a cellular solid. Therefore, the goal of this project was to design and fabricate a novel micromechanical testing system to validate the effectiveness of the stepwise IGFA technique compared to classical continuous mechanical testing, using a variety of engineered and natural cellular solids. In our analysis, we found stepwise compression to be a valid approach for IGFA with high precision and accuracy comparable to classical continuous testing. Therefore, this approach complements the conventional mechanical testing methods by providing visual insight into the failure propagation mechanisms of cellular solids. (c) 2005 Wiley Periodicals, Inc.

  14. Fracture of a Brittle-Particle Ductile Matrix Composite with Applications to a Coating System

    NASA Astrophysics Data System (ADS)

    Bianculli, Steven J.

    In material systems consisting of hard second phase particles in a ductile matrix, failure initiating from cracking of the second phase particles is an important failure mechanism. This dissertation applies the principles of fracture mechanics to consider this problem, first from the standpoint of fracture of the particles, and then the onset of crack propagation from fractured particles. This research was inspired by the observation of the failure mechanism of a commercial zinc-based anti-corrosion coating, and the analysis was initially approached as a coatings problem. As the work progressed it became evident that the failure mechanism was relevant to a broad range of composite material systems, and the research approach was generalized to consider failure of a system consisting of ellipsoidal second phase particles in a ductile matrix. The starting point for the analysis is the classical Eshelby Problem, which considered stress transfer from the matrix to an ellipsoidal inclusion. The particle fracture problem is approached by considering cracks within particles and how they are affected by the particle/matrix interface, the difference in properties between the particle and matrix, and by particle shape. These effects are mapped out for a wide range of material combinations. The trends developed show that, although the particle fracture problem is very complex, the potential for fracture can, for certain ranges of particle shape, be assessed easily on the basis of the Eshelby stress alone. Additionally, the evaluation of cracks near the curved particle/matrix interface adds to the existing body of work on cracks approaching bi-material interfaces in layered material systems. The onset of crack propagation from fractured particles is then considered as a function of particle shape and mismatch in material properties between the particle and matrix. This behavior is mapped out for a wide range of material combinations. The final section of this dissertation qualitatively considers an approach to determine critical particle sizes, below which crack propagation will not occur for a coating system that exhibited stable cracks in an interfacial layer between the coating and substrate.

  15. Application of multi attribute failure mode analysis of milk production using analytical hierarchy process method

    NASA Astrophysics Data System (ADS)

    Rucitra, A. L.

    2018-03-01

    Pusat Koperasi Induk Susu (PKIS) Sekar Tanjung, East Java, is one of the modern dairy industries producing Ultra High Temperature (UHT) milk. A problem that often occurs in the production process at PKIS Sekar Tanjung is a mismatch between the production process and the predetermined standard. The purpose of applying the Analytical Hierarchy Process (AHP) was to identify the most potential cause of failure in the milk production process. The Multi Attribute Failure Mode Analysis (MAFMA) method was used to eliminate or reduce the possibility of failure when viewed from the failure causes. This method integrates the severity, occurrence, detection, and expected cost criteria, obtained from a depth interview with the head of the production department as an expert. The AHP approach was used to formulate the priority ranking of the causes of failure in the milk production process. At level 1, severity has the highest weight, 0.41 or 41%, compared with the other criteria. At level 2, identifying failure in the UHT milk production process, the most potential cause was an average mixing temperature of more than 70 °C, above the standard temperature (≤70 °C). This failure cause contributed a weight of 0.47, or 47%, across all criteria. Therefore, this study suggested that the company control the mixing temperature to minimise or eliminate failure in this process.
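
    The AHP weighting step can be sketched as follows: a pairwise comparison matrix over the four criteria is reduced to priority weights, here using the geometric-mean approximation of the principal eigenvector; the Saaty-scale judgments are invented, chosen only so that severity dominates at roughly the weight reported in the study.

        # Pairwise comparisons of severity, occurrence, detection, expected
        # cost (hypothetical Saaty-scale judgments, A[i][j] = 1 / A[j][i]).
        A = [
            [1.0, 2.0, 3.0, 2.0],     # severity
            [0.5, 1.0, 2.0, 1.0],     # occurrence
            [1 / 3, 0.5, 1.0, 0.5],   # detection
            [0.5, 1.0, 2.0, 1.0],     # expected cost
        ]

        # Geometric mean of each row approximates the principal eigenvector.
        gm = [(r[0] * r[1] * r[2] * r[3]) ** 0.25 for r in A]
        weights = [g / sum(gm) for g in gm]

        names = ["severity", "occurrence", "detection", "cost"]
        for name, w in zip(names, weights):
            print(f"{name:10s} weight = {w:.2f}")  # severity comes out near 0.4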

  16. Managing heart failure in the long-term care setting: nurses' experiences in Ontario, Canada.

    PubMed

    Strachan, Patricia H; Kaasalainen, Sharon; Horton, Amy; Jarman, Hellen; D'Elia, Teresa; Van Der Horst, Mary-Lou; Newhouse, Ian; Kelley, Mary Lou; McAiney, Carrie; McKelvie, Robert; Heckman, George A

    2014-01-01

    Implementation of heart failure guidelines in long-term care (LTC) settings is challenging. Understanding the conditions of nursing practice can improve management, reduce suffering, and prevent hospital admission of LTC residents living with heart failure. The aim of the study was to understand the experiences of LTC nurses managing care for residents with heart failure. This was a descriptive qualitative study nested in Phase 2 of a three-phase mixed methods project designed to investigate barriers and solutions to implementing the Canadian Cardiovascular Society heart failure guidelines into LTC homes. Five focus groups totaling 33 nurses working in LTC settings in Ontario, Canada, were audiorecorded, then transcribed verbatim, and entered into NVivo9. A complex adaptive systems framework informed this analysis. Thematic content analysis was conducted by the research team. Triangulation, rigorous discussion, and a search for negative cases were conducted. Data were collected between May and July 2010. Nurses characterized their experiences managing heart failure in relation to many influences on their capacity for decision-making in LTC settings: (a) a reactive versus proactive approach to chronic illness; (b) ability to interpret heart failure signs, symptoms, and acuity; (c) compromised information flow; (d) access to resources; and (e) moral distress. Heart failure guideline implementation reflects multiple dynamic influences. Leadership that addresses these factors is required to optimize the conditions of heart failure care and related nursing practice.

  17. Development of failure criterion for Kevlar-epoxy fabric laminates

    NASA Technical Reports Server (NTRS)

    Tennyson, R. C.; Elliott, W. G.

    1984-01-01

    The development of the tensor polynomial failure criterion for composite laminate analysis is discussed. In particular, emphasis is given to the fabrication and testing of Kevlar-49 fabric (Style 285)/Narmco 5208 Epoxy. The quadratic failure criterion with F(12)=0 provides accurate estimates of failure stresses for the Kevlar/Epoxy investigated. The cubic failure criterion was re-cast into an operationally easier form, providing the engineer with design curves that can be applied to laminates fabricated from unidirectional prepregs. In the form presented, no interaction strength tests are required, although recourse to the quadratic model and the principal strength parameters is necessary. However, insufficient test data exist at present to generalize this approach for all unidirectional prepregs, and its use must be restricted to the generic materials investigated to date.

  18. Independent Orbiter Assessment (IOA): Analysis of the electrical power distribution and control/remote manipulator system subsystem

    NASA Technical Reports Server (NTRS)

    Robinson, W. W.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the Electrical Power Distribution and Control (EPD and C)/Remote Manipulator System (RMS) hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained in the NASA FMEA/CIL documentation. This report documents the results of the independent analysis of the EPD and C/RMS (both port and starboard) hardware. The EPD and C/RMS subsystem hardware provides the electrical power and power control circuitry required to safely deploy, operate, control, and stow or guillotine and jettison two (one port and one starboard) RMSs. The EPD and C/RMS subsystem is subdivided into the following functional divisions: Remote Manipulator Arm; Manipulator Deploy Control; Manipulator Latch Control; Manipulator Arm Shoulder Jettison; and Retention Arm Jettison. The IOA analysis process utilized available EPD and C/RMS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based on the severity of the effect for each failure mode.

  19. Independent Orbiter Assessment (IOA): Analysis of the Orbiter Experiment (OEX) subsystem

    NASA Technical Reports Server (NTRS)

    Compton, J. M.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Experiments hardware. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. The Orbiter Experiments (OEX) Program consists of a multiple set of experiments for the purpose of gathering environmental and aerodynamic data to develop more accurate ground models for Shuttle performance and to facilitate the design of future spacecraft. This assessment only addresses currently manifested experiments and their support systems. Specifically, this list consists of: Shuttle Entry Air Data System (SEADS); Shuttle Upper Atmosphere Mass Spectrometer (SUMS); Forward Fuselage Support System for OEX (FFSSO); Shuttle Infrared Leeside Temperature Sensing (SILTS); Aerodynamic Coefficient Identification Package (ACIP); and Support System for OEX (SSO). There are only two potential critical items for the OEX, since the experiments only gather data for post-mission analysis and are totally independent systems except for power. Failure of any experiment component usually only causes a loss of experiment data and in no way jeopardizes the crew or mission.

  20. The Semantic Distance Task: Quantifying Semantic Distance with Semantic Network Path Length

    ERIC Educational Resources Information Center

    Kenett, Yoed N.; Levi, Effi; Anaki, David; Faust, Miriam

    2017-01-01

    Semantic distance is a determining factor in cognitive processes, such as semantic priming, operating upon semantic memory. The main computational approach to compute semantic distance is through latent semantic analysis (LSA). However, objections have been raised against this approach, mainly in its failure at predicting semantic priming. We…

  1. A Comparison of Functional Models for Use in the Function-Failure Design Method

    NASA Technical Reports Server (NTRS)

    Stock, Michael E.; Stone, Robert B.; Tumer, Irem Y.

    2006-01-01

    When failure analysis and prevention, guided by historical design knowledge, are coupled with product design at its conception, shorter design cycles are possible. By decreasing the design time of a product in this manner, design costs are reduced and the product will better suit the customer's needs. Prior work indicates that similar failure modes occur with products (or components) with similar functionality. To capitalize on this finding, a knowledge base of historical failure information linked to functionality is assembled for use by designers. One possible use for this knowledge base is within the Elemental Function-Failure Design Method (EFDM). This design methodology and failure analysis tool begins at conceptual design and keeps the designer cognizant of failures that are likely to occur based on the product's functionality. The EFDM offers potential improvement over current failure analysis methods, such as FMEA, FMECA, and Fault Tree Analysis, because it can be implemented hand in hand with other conceptual design steps and carried throughout a product's design cycle. These other failure analysis methods can only truly be effective after a physical design has been completed. The EFDM, however, is only as good as the knowledge base that it draws from, and therefore it is of utmost importance to develop a knowledge base that will be suitable for use across a wide spectrum of products. One fundamental question that arises in using the EFDM is: At what level of detail should functional descriptions of components be encoded? This paper explores two approaches to populating a knowledge base with actual failure occurrence information from Bell 206 helicopters. Functional models expressed at various levels of detail are investigated to determine the necessary detail for an applicable knowledge base that can be used by designers in both new designs as well as redesigns. High level and more detailed functional descriptions are derived for each failed component based on NTSB accident reports. To best record this data, standardized functional and failure mode vocabularies are used. Two separate function-failure knowledge bases are then created and compared. Results indicate that encoding failure data using more detailed functional models allows for a more robust knowledge base. Interestingly, however, when applying the EFDM, high level descriptions continue to produce useful results when using the knowledge base generated from the detailed functional models.
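
    A toy sketch of such a function-failure knowledge base, assuming a standardized functional vocabulary: historical failure-mode counts are stored per elemental function, and the functional model of a new design is queried for its most likely failure modes. The entries are invented, not the Bell 206 data used in the paper.

        from collections import defaultdict

        # function -> failure mode -> historical occurrence count.
        kb = defaultdict(lambda: defaultdict(int))
        kb["transmit torque"]["fatigue fracture"] += 12
        kb["transmit torque"]["fretting wear"] += 5
        kb["regulate pressure"]["seal leakage"] += 9
        kb["store energy"]["corrosion"] += 3

        def likely_failures(functions):
            # Rank historical failure modes across a design's functions.
            tally = defaultdict(int)
            for fn in functions:
                for mode, count in kb[fn].items():
                    tally[mode] += count
            return sorted(tally.items(), key=lambda kv: kv[1], reverse=True)

        # Query the knowledge base with a new design's functional model.
        print(likely_failures(["transmit torque", "regulate pressure"]))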

  2. Multi-institutional application of Failure Mode and Effects Analysis (FMEA) to CyberKnife Stereotactic Body Radiation Therapy (SBRT).

    PubMed

    Veronese, Ivan; De Martin, Elena; Martinotti, Anna Stefania; Fumagalli, Maria Luisa; Vite, Cristina; Redaelli, Irene; Malatesta, Tiziana; Mancosu, Pietro; Beltramo, Giancarlo; Fariselli, Laura; Cantone, Marie Claire

    2015-06-13

    A multidisciplinary and multi-institutional working group applied the Failure Mode and Effects Analysis (FMEA) approach to assess the risks for patients undergoing Stereotactic Body Radiation Therapy (SBRT) treatments for lesions located in spine and liver in two CyberKnife® Centres. The various sub-processes characterizing the SBRT treatment were identified to generate the process trees of both the treatment planning and delivery phases. This analysis led to the identification and subsequent scoring of the potential failure modes, together with their causes and effects, using the risk priority number (RPN) scoring system. Novel solutions aimed at increasing patient safety were accordingly considered. The process tree characterising the SBRT treatment planning stage was composed of 48 sub-processes. Similarly, 42 sub-processes were identified in the stage of delivery to liver tumours and 30 in the stage of delivery to spine lesions. All the sub-processes were judged to be potentially prone to one or more failure modes. Nineteen failures (i.e., 5 in the treatment planning stage, 5 in the delivery to liver lesions, and 9 in the delivery to spine lesions) were considered of high concern in view of the high RPN and/or severity index value. The analysis of the potential failures, their causes, and effects made it possible to improve the safety strategies already adopted in clinical practice with additional measures for optimizing the quality management workflow and increasing patient safety.

  3. Unified Approach to the Biomechanics of Dental Implantology

    NASA Technical Reports Server (NTRS)

    Grenoble, D. E.; Knoell, A. C.

    1973-01-01

    The human need for safe and effective dental implants is well-recognized. Although many implant designs have been tested and are in use today, a large number have resulted in clinical failure. These failures appear to be due to biomechanical effects, as well as biocompatibility and surgical factors. A unified approach is proposed using multidisciplinary systems technology, for the study of the biomechanical interactions between dental implants and host tissues. The approach progresses from biomechanical modeling and analysis, supported by experimental investigations, through implant design development, clinical verification, and education of the dental practitioner. The result of the biomechanical modeling, analysis, and experimental phases would be the development of scientific design criteria for implants. Implant designs meeting these criteria would be generated, fabricated, and tested in animals. After design acceptance, these implants would be tested in humans, using efficient and safe surgical and restorative procedures. Finally, educational media and instructional courses would be developed for training dental practitioners in the use of the resulting implants.

  4. Commercial transport aircraft composite structures

    NASA Technical Reports Server (NTRS)

    Mccarty, J. E.

    1983-01-01

    The role that analysis plays in the development, production, and substantiation of aircraft structures is discussed. The types, elements, and applications of failure analysis that are used and needed; the current application of analysis methods to commercial aircraft advanced composite structures, along with a projection of future needs; and some personal thoughts on analysis development goals and the elements of an approach to analysis development are discussed.

  5. Independent Orbiter Assessment (IOA): Analysis of the rudder/speed brake subsystem

    NASA Technical Reports Server (NTRS)

    Wilson, R. E.; Riccio, J. R.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Rudder/Speedbrake Actuation Mechanism are documented. The function of the Rudder/Speedbrake (RSB) is to provide directional control and a means of energy control during entry. The system consists of two panels on a vertical hinge mounted on the aft part of the vertical stabilizer. These two panels move together to form a rudder but split apart to act as a speedbrake. The Rudder/Speedbrake Actuation Mechanism consists of the following elements: (1) the Power Drive Unit (PDU), which is composed of a hydraulic valve module and a hydraulic motor-powered gearbox containing differentials and mixer gears to provide the PDU torque output; (2) four geared rotary actuators, which apply the PDU-generated torque to the rudder/speedbrake panels; and (3) ten torque shafts, which join the PDU to the rotary actuators and interconnect the four rotary actuators. Each level of hardware was evaluated and analyzed for possible failures and causes. Criticality was assigned based upon the severity of the effect for each failure mode. Critical RSB failures resulting in potential loss of vehicle control were mainly due to loss of hydraulic fluid, fluid contamination, and mechanical failures in gears and shafts.

  6. Is it possible to identify a trend in problem/failure data

    NASA Technical Reports Server (NTRS)

    Church, Curtis K.

    1990-01-01

    One of the major obstacles in identifying and interpreting a trend is the small number of data points. Future trending reports will begin with 1983 data. As the problem/failure data are aggregated by year, there are just seven observations (1983 to 1989) for the 1990 reports. Any statistical inferences with a small amount of data will have a large degree of uncertainty. Consequently, a regression technique approach to identify a trend is limited. Though trend determination by failure mode may be unrealistic, the data may be explored for consistency or stability and the failure rate investigated. Various alternative data analysis procedures are briefly discussed. Techniques that could be used to explore problem/failure data by failure mode are addressed. The data used are taken from Section One, Space Shuttle Main Engine, of the Calspan Quarterly Report dated April 2, 1990.
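
    The core difficulty can be seen with an ordinary least-squares fit to seven hypothetical yearly counts: with only five degrees of freedom, the confidence interval on the slope is wide and here includes zero, so no trend can be claimed.

        import math

        years = [1983, 1984, 1985, 1986, 1987, 1988, 1989]
        counts = [14, 11, 13, 9, 12, 8, 10]  # hypothetical problem reports/year

        n = len(years)
        xbar, ybar = sum(years) / n, sum(counts) / n
        sxx = sum((x - xbar) ** 2 for x in years)
        slope = sum((x - xbar) * (y - ybar)
                    for x, y in zip(years, counts)) / sxx
        resid = [y - (ybar + slope * (x - xbar)) for x, y in zip(years, counts)]
        se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)

        # With n = 7 there are 5 degrees of freedom, so t(0.975, 5) = 2.571.
        print(f"slope = {slope:.2f} +/- {2.571 * se:.2f} per year")
        # Output: slope = -0.68 +/- 0.84 -- the interval straddles zero.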

  7. Spatial correlation analysis of cascading failures: Congestions and Blackouts

    PubMed Central

    Daqing, Li; Yinan, Jiang; Rui, Kang; Havlin, Shlomo

    2014-01-01

    Cascading failures have become major threats to network robustness due to their potential catastrophic consequences, where local perturbations can induce global propagation of failures. Unlike failures spreading via direct contacts due to structural interdependencies, overload failures usually propagate through collective interactions among system components. Despite the critical need in developing protection or mitigation strategies in networks such as power grids and transportation, the propagation behavior of cascading failures is essentially unknown. Here we find by analyzing our collected data that jams in city traffic and faults in power grids are spatially long-range correlated, with correlations decaying slowly with distance. Moreover, we find in the daily traffic that the correlation length increases dramatically and reaches a maximum when the morning or evening rush hour is approaching. Our study can impact all efforts towards actively improving system resilience, ranging from the evaluation of design schemes and the development of protection strategies to the implementation of mitigation programs. PMID:24946927
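
    A sketch of the measurement itself, assuming a toy grid of sites with spatially clustered failures: the spatial correlation C(r) is taken as the Pearson correlation of failure indicators over all site pairs separated by approximately r. The synthetic field and its decay rate are illustrative, not the traffic or power grid data analyzed in the paper.

        import math, random

        random.seed(1)
        cx, cy = 10, 10  # failures cluster around this site
        sites = [(x, y, 1 if random.random() < math.exp(-0.2 * math.hypot(x - cx, y - cy)) else 0)
                 for x in range(20) for y in range(20)]

        def correlation_at_distance(r, tol=0.5):
            # Pearson correlation of failure states over site pairs ~r apart.
            pairs = [(a[2], b[2]) for i, a in enumerate(sites)
                     for b in sites[i + 1:]
                     if abs(math.hypot(a[0] - b[0], a[1] - b[1]) - r) < tol]
            n = len(pairs)
            ma = sum(p[0] for p in pairs) / n
            mb = sum(p[1] for p in pairs) / n
            cov = sum((p[0] - ma) * (p[1] - mb) for p in pairs) / n
            va = sum((p[0] - ma) ** 2 for p in pairs) / n
            vb = sum((p[1] - mb) ** 2 for p in pairs) / n
            return cov / math.sqrt(va * vb)

        for r in (1, 3, 6, 10):
            print(f"C(r={r:2d}) = {correlation_at_distance(r):+.3f}")  # decays with r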

  8. DATMAN: A reliability data analysis program using Bayesian updating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, M.; Feltus, M.A.

    1996-12-31

    Preventive maintenance (PM) techniques focus on the prevention of failures, in particular for system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. System reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing the tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
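
    The updating step is simple in the conjugate case. The sketch below, one of several prior/posterior choices a menu-driven tool like DATMAN might offer, updates a gamma prior on a component failure rate with a newly observed Poisson failure count; all numbers are hypothetical.

        import math

        # Gamma prior on the failure rate: roughly 2 failures per 1000 hours.
        alpha_prior, beta_prior = 2.0, 1000.0

        new_failures = 3     # failures observed in the latest period
        new_hours = 2500.0   # operating hours accumulated in that period

        # Conjugate update for Poisson-distributed failure counts.
        alpha_post = alpha_prior + new_failures
        beta_post = beta_prior + new_hours

        rate = alpha_post / beta_post  # posterior mean failure rate
        print(f"posterior mean failure rate = {rate:.2e} per hour")
        print(f"reliability over 100 h at the mean rate = {math.exp(-rate * 100.0):.4f}")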

  9. Root Cause Failure Analysis of Stator Winding Insulation failure on 6.2 MW hydropower generator

    NASA Astrophysics Data System (ADS)

    Adhi Nugroho, Agus; Widihastuti, Ida; Ary, As

    2017-04-01

    Insulation failure of the generator winding at the Wonogiri hydropower plant caused stator damage when a phase was short-circuited to ground. The fault forced the generator to stop operating. The Wonogiri hydropower plant is one of the hydroelectric plants run by PT. Indonesia Power UBP Mrica, with a capacity of 2 × 6.2 MW. To prevent the damage from recurring in hydropower generators, an analysis was carried out using Root Cause Failure Analysis (RCFA), a systematic approach to identifying the main or basic root cause of a problem or an unwanted condition. Several aspects are of concern, such as: loading pattern and operations, protection systems, generator insulation resistance, vibration, the cleanliness of the air, and the ambient air. Insulation damage caused by gradual, inhomogeneous cooling at the surface of the winding may lead to partial discharge. Inhomogeneous cooling may occur when the cooling airflow is hampered by dust and oil deposits. To avoid repeated defects and the unwanted conditions above, it is necessary to perform a major maintenance overhaul every 5000-6000 hours of operation.

  10. An Approach for Reducing the Error Rate in Automated Lung Segmentation

    PubMed Central

    Gill, Gurman; Beichel, Reinhard R.

    2016-01-01

    Robust lung segmentation is challenging, especially when tens of thousands of lung CT scans need to be processed, as required by large multi-center studies. The goal of this work was to develop and assess a method for the fusion of segmentation results from two different methods to generate lung segmentations that have a lower failure rate than individual input segmentations. As basis for the fusion approach, lung segmentations generated with a region growing and model-based approach were utilized. The fusion result was generated by comparing input segmentations and selectively combining them using a trained classification system. The method was evaluated on a diverse set of 204 CT scans of normal and diseased lungs. The fusion approach resulted in a Dice coefficient of 0.9855 ± 0.0106 and showed a statistically significant improvement compared to both input segmentation methods. In addition, the failure rate at different segmentation accuracy levels was assessed. For example, when requiring that lung segmentations must have a Dice coefficient of better than 0.97, the fusion approach had a failure rate of 6.13%. In contrast, the failure rate for region growing and model-based methods was 18.14% and 15.69%, respectively. Therefore, the proposed method improves the quality of the lung segmentations, which is important for subsequent quantitative analysis of lungs. Also, to enable a comparison with other methods, results on the LOLA11 challenge test set are reported. PMID:27447897
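
    A minimal sketch of the evaluation metric and the fusion decision, assuming binary masks stored as sets of voxel indices; the threshold rule is a crude stand-in for the trained classification system described in the paper.

        def dice(a, b):
            # Dice coefficient of two binary masks given as sets of voxels.
            if not a and not b:
                return 1.0
            return 2.0 * len(a & b) / (len(a) + len(b))

        # Toy masks standing in for the region-growing and model-based results.
        seg_region = {(x, y) for x in range(100) for y in range(100) if x < 60}
        seg_model = {(x, y) for x in range(100) for y in range(100) if x < 55}

        agreement = dice(seg_region, seg_model)
        print(f"inter-method Dice = {agreement:.4f}")

        # If the inputs agree closely, keep their consensus; otherwise fall
        # back to the union as the more conservative lung mask.
        fused = seg_region & seg_model if agreement > 0.95 else seg_region | seg_model
        print(f"fused mask size = {len(fused)} voxels")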

  11. Fully Coupled Micro/Macro Deformation, Damage, and Failure Prediction for SiC/Ti-15-3 Laminates

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.; Lerch, Brad A.

    2001-01-01

    The deformation, failure, and low cycle fatigue life of SCS-6/Ti-15-3 composites are predicted using a coupled deformation and damage approach in the context of the analytical generalized method of cells (GMC) micromechanics model. The local effects of inelastic deformation, fiber breakage, fiber-matrix interfacial debonding, and fatigue damage are included as sub-models that operate on the micro scale for the individual composite phases. For the laminate analysis, lamination theory is employed as the global or structural scale model, while GMC is embedded to operate on the meso scale to simulate the behavior of the composite material within each laminate layer. While the analysis approach is quite complex and multifaceted, it is shown, through comparison with experimental data, to be quite accurate and realistic while remaining extremely efficient.

  12. The Application of Failure Modes and Effects Analysis Methodology to Intrathecal Drug Delivery for Pain Management

    PubMed Central

    Patel, Teresa; Fisher, Stanley P.

    2016-01-01

    Objective This study aimed to utilize failure modes and effects analysis (FMEA) to transform clinical insights into a risk mitigation plan for intrathecal (IT) drug delivery in pain management. Methods The FMEA methodology, which has been used for quality improvement, was adapted to assess risks (i.e., failure modes) associated with IT therapy. Ten experienced pain physicians scored 37 failure modes in the following categories: patient selection for therapy initiation (efficacy and safety concerns), patient safety during IT therapy, and product selection for IT therapy. Participants assigned severity, probability, and detection scores for each failure mode, from which a risk priority number (RPN) was calculated. Failure modes with the highest RPNs (i.e., most problematic) were discussed, and strategies were proposed to mitigate risks. Results Strategic discussions focused on 17 failure modes with the most severe outcomes, the highest probabilities of occurrence, and the most challenging detection. The topic of the highest‐ranked failure mode (RPN = 144) was manufactured monotherapy versus compounded combination products. Addressing failure modes associated with appropriate patient and product selection was predicted to be clinically important for the success of IT therapy. Conclusions The methodology of FMEA offers a systematic approach to prioritizing risks in a complex environment such as IT therapy. Unmet needs and information gaps are highlighted through the process. Risk mitigation and strategic planning to prevent and manage critical failure modes can contribute to therapeutic success. PMID:27477689
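
    For reference, the risk priority number used in this study is the plain product of the three scores. A minimal sketch (the failure-mode names and scores below are illustrative, not taken from the study):

      # RPN = severity x occurrence x detection, each scored on a 1-10 scale
      failure_modes = {
          "hypothetical mode A": (8, 6, 3),   # (S, O, D)
          "hypothetical mode B": (6, 4, 6),
      }

      rpn = {name: s * o * d for name, (s, o, d) in failure_modes.items()}
      for name, score in sorted(rpn.items(), key=lambda kv: kv[1], reverse=True):
          print(f"{name}: RPN = {score}")  # highest RPN = discuss first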

  13. Estimation of probability of failure for damage-tolerant aerospace structures

    NASA Astrophysics Data System (ADS)

    Halbert, Keith

    The majority of aircraft structures are designed to be damage-tolerant such that safe operation can continue in the presence of minor damage. It is necessary to schedule inspections so that minor damage can be found and repaired. It is generally not possible to perform structural inspections prior to every flight. The scheduling is traditionally accomplished through a deterministic set of methods referred to as Damage Tolerance Analysis (DTA). DTA has proven to produce safe aircraft but does not provide estimates of the probability of failure of future flights or the probability of repair of future inspections. Without these estimates maintenance costs cannot be accurately predicted. Also, estimation of failure probabilities is now a regulatory requirement for some aircraft. The set of methods concerned with the probabilistic formulation of this problem are collectively referred to as Probabilistic Damage Tolerance Analysis (PDTA). The goal of PDTA is to control the failure probability while holding maintenance costs to a reasonable level. This work focuses specifically on PDTA for fatigue cracking of metallic aircraft structures. The growth of a crack (or cracks) must be modeled using all available data and engineering knowledge. The length of a crack can be assessed only indirectly through evidence such as non-destructive inspection results, failures or lack of failures, and the observed severity of usage of the structure. The current set of industry PDTA tools are lacking in several ways: they may in some cases yield poor estimates of failure probabilities, they cannot realistically represent the variety of possible failure and maintenance scenarios, and they do not allow for model updates which incorporate observed evidence. A PDTA modeling methodology must be flexible enough to estimate accurately the failure and repair probabilities under a variety of maintenance scenarios, and be capable of incorporating observed evidence as it becomes available. This dissertation describes and develops new PDTA methodologies that directly address the deficiencies of the currently used tools. The new methods are implemented as a free, publicly licensed and open source R software package that can be downloaded from the Comprehensive R Archive Network. The tools consist of two main components. First, an explicit (and expensive) Monte Carlo approach is presented which simulates the life of an aircraft structural component flight-by-flight. This straightforward MC routine can be used to provide defensible estimates of the failure probabilities for future flights and repair probabilities for future inspections under a variety of failure and maintenance scenarios. This routine is intended to provide baseline estimates against which to compare the results of other, more efficient approaches. Second, an original approach is described which models the fatigue process and future scheduled inspections as a hidden Markov model. This model is solved using a particle-based approximation and the sequential importance sampling algorithm, which provides an efficient solution to the PDTA problem. Sequential importance sampling is an extension of importance sampling to a Markov process, allowing for efficient Bayesian updating of model parameters. This model updating capability, the benefit of which is demonstrated, is lacking in other PDTA approaches. The results of this approach are shown to agree with the results of the explicit Monte Carlo routine for a number of PDTA problems. 
    Extensions to the typical PDTA problem, which cannot be solved using currently available tools, are presented and solved in this work. These extensions include incorporating observed evidence (such as non-destructive inspection results), more realistic treatment of possible future repairs, and the modeling of failure involving more than one crack (the so-called continuing damage problem). The described hidden Markov model / sequential importance sampling approach to PDTA has the potential to improve aerospace structural safety and reduce maintenance costs by providing a more accurate assessment of the risk of failure and the likelihood of repairs throughout the life of an aircraft.
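
    The flavor of the explicit flight-by-flight Monte Carlo routine can be conveyed in a few lines. This is a toy sketch only: the growth law, inspection rule, and every parameter below are invented for illustration, whereas the R package described above implements detailed crack-growth, probability-of-detection, and maintenance models:

      import random

      def simulate_component(n_flights=1000, insp_interval=100, pod=0.5,
                             a0=0.01, growth=1.005, a_detect=0.1, a_crit=1.0):
          # one life history: a crack grows each flight; scheduled inspections
          # find cracks above a_detect with probability pod and repair them
          a = a0
          for flight in range(1, n_flights + 1):
              a *= growth * random.lognormvariate(0.0, 0.01)  # noisy growth
              if a >= a_crit:
                  return True  # structural failure
              if flight % insp_interval == 0 and a > a_detect and random.random() < pod:
                  a = a0  # crack found and repaired
          return False

      trials = 5_000
      p_fail = sum(simulate_component() for _ in range(trials)) / trials
      print(f"estimated failure probability: {p_fail:.4f}")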

  14. Using cost-analyses to inform health professions education - The economic cost of pre-clinical failure.

    PubMed

    Foo, Jonathan; Ilic, Dragan; Rivers, George; Evans, Darrell J R; Walsh, Kieran; Haines, Terry P; Paynter, Sophie; Morgan, Prue; Maloney, Stephen

    2017-12-07

    Student failure creates additional economic costs. Knowing the cost of failure helps to frame its economic burden relative to other educational issues, providing an evidence-base to guide priority setting and allocation of resources. The Ingredients Method is a cost-analysis approach which has been previously applied to health professions education research. In this study, the Ingredients Method is introduced, and applied to a case study, investigating the cost of pre-clinical student failure. The four step Ingredients Method was introduced and applied: (1) identify and specify resource items, (2) measure volume of resources in natural units, (3) assign monetary prices to resource items, and (4) analyze and report costs. Calculations were based on a physiotherapy program at an Australian university. The cost of failure was £5991 per failing student, distributed across students (70%), the government (21%), and the university (8%). If the cost of failure and attrition is distributed among the remaining continuing cohort, the cost per continuing student educated increases from £9923 to £11,391 per semester. The economics of health professions education is complex. Researchers should consider both accuracy and feasibility in their costing approach, toward the goal of better informing cost-conscious decision-making.
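
    The reported figures imply a simple allocation arithmetic, sketched below (shares as published; they sum to 0.99 because the source rounds them):

      cost_per_failing_student = 5991.0  # GBP, from the study
      shares = {"students": 0.70, "government": 0.21, "university": 0.08}

      for bearer, share in shares.items():
          print(f"{bearer}: £{cost_per_failing_student * share:,.0f}")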

  15. Independent Orbiter Assessment (IOA): Assessment of the electrical power generation/power reactant storage and distribution subsystem FMEA/CIL

    NASA Technical Reports Server (NTRS)

    Ames, B. E.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Electrical Power Generation/Power Reactant Storage and Distribution (EPG/PRSD) subsystem hardware, generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the NASA FMEA/CIL baselines with proposed Post 51-L updates included. A resolution of each discrepancy from the comparison is provided through additional analysis as required. The results of that comparison are documented for the Orbiter EPG/PRSD hardware. The comparison produced agreement on all but 27 FMEAs and 9 CIL items. The discrepancy between the number of IOA findings and NASA FMEAs can be partially explained by the different approaches used by IOA and NASA to group failure modes together to form one FMEA. Also, several IOA items represented inner tank components and ground operations failure modes which were not in the NASA baseline.

  16. Fuzzy Bayesian Network-Bow-Tie Analysis of Gas Leakage during Biomass Gasification

    PubMed Central

    Yan, Fang; Xu, Kaili; Yao, Xiwen; Li, Yang

    2016-01-01

    Biomass gasification technology has developed rapidly in recent years, but fire and poisoning accidents caused by gas leakage restrict its development and promotion. Probabilistic safety assessment (PSA) is therefore necessary for biomass gasification systems. Accordingly, a Bayesian network-bow-tie (BN-bow-tie) analysis was proposed by mapping bow-tie analysis into a Bayesian network (BN). The causes of gas leakage and the accidents triggered by it were obtained by bow-tie analysis, and the BN was used to identify the critical nodes of accidents by introducing three corresponding importance measures. PSA also requires the occurrence probability of each failure. In view of the insufficient failure data for biomass gasification, occurrence probabilities that cannot be obtained from standard reliability data sources were determined by fuzzy methods based on expert judgment. An improved aggregation approach that applies expert weights to fuzzy numbers, both triangular and trapezoidal, was proposed, and the occurrence probabilities of failure were obtained. Finally, safety measures were indicated based on the identified critical nodes. With these safety measures, the theoretical one-year occurrence probabilities of gas leakage and of the accidents caused by it were reduced to 1/10.3 of their original values. PMID:27463975
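
    As a generic illustration of expert-weighted aggregation of fuzzy numbers (a simplified sketch with triangular numbers only; it is not the paper's exact aggregation rule or its fuzzy-to-probability conversion):

      # a triangular fuzzy number is (low, mode, high); weighted aggregation
      # averages each coordinate, and centroid defuzzification yields a crisp value
      def aggregate(judgments, weights):
          return tuple(sum(w * tfn[i] for w, tfn in zip(weights, judgments))
                       for i in range(3))

      def centroid(tfn):
          l, m, u = tfn
          return (l + m + u) / 3.0

      experts = [(0.1, 0.2, 0.3), (0.2, 0.3, 0.4), (0.0, 0.1, 0.2)]  # hypothetical
      weights = [0.5, 0.3, 0.2]  # expert weights, summing to 1
      print(centroid(aggregate(experts, weights)))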

  17. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    PubMed

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both the regression parameters and the baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well in practical situations.
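
    For context, the proportional hazards model and the likelihood contribution of an interval-censored observation take the standard textbook forms (this is background notation, not the paper's new estimators):

      \lambda(t \mid Z) = \lambda_0(t)\,\exp(\beta^{\top} Z), \qquad
      S(t \mid Z) = \exp\{-\Lambda_0(t)\, e^{\beta^{\top} Z}\},

      L_i(\beta, \Lambda_0) = S(L_i \mid Z_i) - S(R_i \mid Z_i),

    where the i-th event time is known only to lie in the interval (L_i, R_i]. The drawback noted above is that maximizing this likelihood involves the infinite-dimensional baseline cumulative hazard \Lambda_0 alongside the finite-dimensional \beta.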

  18. Modular Heat Dissipation Technique for a CubeSat

    DTIC Science & Technology

    2015-07-28

    ...failure percentage approaches 50% in university-led missions [Swartwout, 2013]. It can also be deduced from the analysis that on-orbit failures of... simulator is designed to achieve one sun equivalent illumination with three-degree collimation over a 12 in x 12 in area. A 1.6 kW lamp is used for the...

  19. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method.

    PubMed

    Deng, Xinyang; Jiang, Wen

    2017-09-12

    Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, the fuzzy risk evaluation in FMEA is studied from a perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and other existing method to show the effectiveness of the proposed model.

  20. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method

    PubMed Central

    Deng, Xinyang

    2017-01-01

    Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, the fuzzy risk evaluation in FMEA is studied from a perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and other existing method to show the effectiveness of the proposed model. PMID:28895905

  1. An improved method for risk evaluation in failure modes and effects analysis of CNC lathe

    NASA Astrophysics Data System (ADS)

    Rachieru, N.; Belu, N.; Anghel, D. C.

    2015-11-01

    Failure mode and effects analysis (FMEA) is one of the most popular reliability analysis tools for identifying, assessing and eliminating potential failure modes in a wide range of industries. In general, failure modes in FMEA are evaluated and ranked through the risk priority number (RPN), which is obtained by multiplying the crisp values of the risk factors, such as the occurrence (O), severity (S), and detection (D) of each failure mode. However, the crisp RPN method has been criticized for several deficiencies. In this paper, linguistic variables, expressed as Gaussian, trapezoidal or triangular fuzzy numbers, are used to assess the ratings and weights of the risk factors S, O and D. A new risk assessment system based on fuzzy set theory and fuzzy rule base theory is applied to assess and rank the risks associated with failure modes that could appear in the functioning of a Turn 55 Lathe CNC. Two case studies are presented to demonstrate the methodology, drawing a parallel between the RPNs obtained by the traditional method and by fuzzy logic. The results show that the proposed approach can reduce duplicated RPN numbers and yield a more accurate, reasonable risk assessment. As a result, the stability of the product and process can be assured.

  2. A Queueing Approach to Optimal Resource Replication in Wireless Sensor Networks

    DTIC Science & Technology

    2009-04-29

    ...replication strategies in wireless sensor networks. The model can be used either to minimize the total transmission rate of the network (an energy-centric approach) or to ensure the proportion of query failures does not exceed a predetermined threshold (a failure-centric approach). The model explicitly...

  3. Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model: A Web-based program designed to evaluate the cost-effectiveness of disease management programs in heart failure.

    PubMed

    Reed, Shelby D; Neilson, Matthew P; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H; Polsky, Daniel E; Graham, Felicia L; Bowers, Margaret T; Paul, Sara C; Granger, Bradi B; Schulman, Kevin A; Whellan, David J; Riegel, Barbara; Levy, Wayne C

    2015-11-01

    Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics; use of evidence-based medications; and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model. Projections of resource use and quality of life are modeled using relationships with time-varying Seattle Heart Failure Model scores. The model can be used to evaluate parallel-group and single-cohort study designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. The Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. Copyright © 2015 Elsevier Inc. All rights reserved.
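
    The headline output of such simulations is the incremental cost-effectiveness ratio, the ratio of mean cost and effectiveness differences between arms. A minimal sketch (the simulated cohorts below are placeholders, not outputs of the actual model):

      import numpy as np

      def icer(costs_new, effects_new, costs_usual, effects_usual):
          # (mean incremental cost) / (mean incremental effectiveness)
          d_cost = np.mean(costs_new) - np.mean(costs_usual)
          d_eff = np.mean(effects_new) - np.mean(effects_usual)
          return d_cost / d_eff

      rng = np.random.default_rng(0)  # hypothetical 10,000-patient cohorts
      print(icer(rng.normal(52_000, 5_000, 10_000), rng.normal(4.1, 0.3, 10_000),
                 rng.normal(48_000, 5_000, 10_000), rng.normal(3.9, 0.3, 10_000)))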

  4. A Case Study on Improving Intensive Care Unit (ICU) Services Reliability: By Using Process Failure Mode and Effects Analysis (PFMEA)

    PubMed Central

    Yousefinezhadi, Taraneh; Jannesar Nobari, Farnaz Attar; Goodari, Faranak Behzadi; Arab, Mohammad

    2016-01-01

    Introduction: In any complex human system, human error is inevitable and cannot be eliminated by blaming wrongdoers. So, with the aim of improving Intensive Care Unit (ICU) reliability in hospitals, this research tries to identify and analyze ICU process failure modes from the standpoint of a systematic approach to errors. Methods: In this descriptive research, data was gathered qualitatively by observations, document reviews, and Focus Group Discussions (FGDs) with the process owners in two selected ICUs in Tehran in 2014. Data analysis, however, was quantitative, based on failures’ Risk Priority Number (RPN) using the Failure Modes and Effects Analysis (FMEA) method. In addition, some causes of failures were analyzed with the qualitative Eindhoven Classification Model (ECM). Results: Through the FMEA methodology, 378 potential failure modes from 180 ICU activities in hospital A and 184 potential failures from 99 ICU activities in hospital B were identified and evaluated. Then, with 90% reliability (RPN≥100), a total of 18 failures in hospital A and 42 in hospital B were identified as non-acceptable risks, and their causes were analyzed by ECM. Conclusions: Applying the modified PFMEA to improve the process reliability of two selected ICUs in two different kinds of hospitals shows that this method empowers staff to identify, evaluate, prioritize and analyze all potential failure modes, and also makes them eager to identify causes, recommend corrective actions and even participate in improving processes without feeling blamed by top management. Moreover, by combining FMEA and ECM, team members can easily identify failure causes from a health care perspective. PMID:27157162

  5. Design, Analysis and Testing of a PRSEUS Pressure Cube to Investigate Assembly Joints

    NASA Technical Reports Server (NTRS)

    Yovanof, Nicolette; Lovejoy, Andrew E.; Baraja, Jaime; Gould, Kevin

    2012-01-01

    Due to its potential to significantly increase fuel efficiency, the current focus of NASA's Environmentally Responsible Aviation Program is the hybrid wing body (HWB) aircraft. Due to the complex load condition that exists in HWB structure, as compared to traditional aircraft configurations, light-weight, cost-effective and manufacturable structural concepts are required to enable the HWB. The Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) concept is one such structural concept. A building block approach for technology development of the PRSEUS concept is being conducted. As part of this approach, a PRSEUS pressure cube was developed as a risk reduction test article to examine a new integral cap joint concept. This paper describes the design, analysis and testing of the PRSEUS pressure cube test article. The pressure cube was required to withstand a 2P, 18.4 psi, overpressure load requirement. The pristine pressure cube was tested to 2.2P with no catastrophic failure. After the addition of barely visible impact damage, the cube was pressure loaded to 48 psi where catastrophic failure occurred, meeting the scale-up requirement. Comparison of pretest and posttest analyses with the cube test response agree well, and indicate that current analysis methods can be used to accurately analyze PRSEUS structure for initial failure response.

  6. Electromechanical actuators affected by multiple failures: Prognostic method based on spectral analysis techniques

    NASA Astrophysics Data System (ADS)

    Belmonte, D.; Vedova, M. D. L. Dalla; Ferro, C.; Maggiore, P.

    2017-06-01

    The proposal of prognostic algorithms able to identify precursors of incipient failures of primary flight command electromechanical actuators (EMA) is beneficial for the anticipation of the incoming failure: an early and correct interpretation of the failure degradation pattern, in fact, can trig an early alert of the maintenance crew, who can properly schedule the servomechanism replacement. An innovative prognostic model-based approach, able to recognize the EMA progressive degradations before his anomalous behaviors become critical, is proposed: the Fault Detection and Identification (FDI) of the considered incipient failures is performed analyzing proper system operational parameters, able to put in evidence the corresponding degradation path, by means of a numerical algorithm based on spectral analysis techniques. Subsequently, these operational parameters will be correlated with the actual EMA health condition by means of failure maps created by a reference monitoring model-based algorithm. In this work, the proposed method has been tested in case of EMA affected by combined progressive failures: in particular, partial stator single phase turn to turn short-circuit and rotor static eccentricity are considered. In order to evaluate the prognostic method, a numerical test-bench has been conceived. Results show that the method exhibit adequate robustness and a high degree of confidence in the ability to early identify an eventual malfunctioning, minimizing the risk of fake alarms or unannounced failures.
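
    The core of such spectral monitoring is tracking the amplitude of fault-sensitive frequency lines over time. A minimal sketch (the signal, monitored frequency, and threshold are illustrative; the paper's algorithm and failure maps are far more elaborate):

      import numpy as np

      def band_amplitude(signal, fs, f_target, half_width=1.0):
          # amplitude spectrum summed in a narrow band around f_target; growth
          # of such a line can flag an incipient short circuit or eccentricity
          spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
          freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
          band = (freqs >= f_target - half_width) & (freqs <= f_target + half_width)
          return spectrum[band].sum()

      fs = 2000.0
      t = np.arange(0, 2.0, 1.0 / fs)
      current = np.sin(2 * np.pi * 50 * t) + 0.02 * np.sin(2 * np.pi * 150 * t)
      print(band_amplitude(current, fs, 150.0))  # monitor a fault-related harmonic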

  7. Independent Orbiter Assessment (IOA): Analysis of the landing/deceleration subsystem

    NASA Technical Reports Server (NTRS)

    Compton, J. M.; Beaird, H. G.; Weissinger, W. D.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Landing/Deceleration Subsystem hardware. The Landing/Deceleration Subsystem is utilized to allow the Orbiter to perform a safe landing, allowing for landing-gear deploy activities, steering and braking control throughout the landing rollout to wheel-stop, and to allow for ground-handling capability during the ground-processing phase of the flight cycle. Specifically, the Landing/Deceleration hardware consists of the following components: Nose Landing Gear (NLG); Main Landing Gear (MLG); Brake and Antiskid (B and AS) Electrical Power Distribution and Controls (EPD and C); Nose Wheel Steering (NWS); and Hydraulics Actuators. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Due to the lack of redundancy in the Landing/Deceleration Subsystems there is a high number of critical items.

  8. Independent Orbiter Assessment (IOA): Analysis of the extravehicular mobility unit

    NASA Technical Reports Server (NTRS)

    Raffaelli, Gary G.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Extravehicular Mobility Unit (EMU) hardware. The EMU is an independent anthropomorphic system that provides environmental protection, mobility, life support, and communications for the Shuttle crewmember to perform Extravehicular Activity (EVA) in Earth orbit. Two EMUs are included on each baseline Orbiter mission, and consumables are provided for three two-man EVAs. The EMU consists of the Life Support System (LSS), Caution and Warning System (CWS), and the Space Suit Assembly (SSA). Each level of hardware was evaluated and analyzed for possible failure modes and effects. The majority of these PCIs result from failures which cause loss of one or more primary functions: pressurization, oxygen delivery, environmental maintenance, and thermal maintenance. It should also be noted that the quantity of PCIs would significantly increase if the SOP were to be treated as an emergency system rather than as an unlike redundant element.

  9. Influence of Finite Element Size in Residual Strength Prediction of Composite Structures

    NASA Technical Reports Server (NTRS)

    Satyanarayana, Arunkumar; Bogert, Philip B.; Karayev, Kazbek Z.; Nordman, Paul S.; Razi, Hamid

    2012-01-01

    The sensitivity of failure load to the element size used in a progressive failure analysis (PFA) of carbon composite center notched laminates is evaluated. The sensitivity study employs a PFA methodology previously developed by the authors consisting of Hashin-Rotem intra-laminar fiber and matrix failure criteria and a complete stress degradation scheme for damage simulation. The approach is implemented with a user defined subroutine in the ABAQUS/Explicit finite element package. The effect of element size near the notch tips on residual strength predictions was assessed for a brittle failure mode with a parametric study that included three laminates of varying material system, thickness and stacking sequence. The study resulted in the selection of an element size of 0.09 in. X 0.09 in., which was later used for predicting crack paths and failure loads in sandwich panels and monolithic laminated panels. Comparison of predicted crack paths and failure loads for these panels agreed well with experimental observations. Additionally, the element size vs. normalized failure load relationship, determined in the parametric study, was used to evaluate strength-scaling factors for three different element sizes. With these scaling factors applied, the failure loads predicted with all three element sizes converged to the value corresponding to the 0.09 in. X 0.09 in. element size. Though preliminary in nature, the strength-scaling concept has the potential to greatly reduce the computational time required for PFA and can enable the analysis of large scale structural components where failure is dominated by fiber failure in tension.

  10. Independent predictors of retrograde failure in CTO-PCI after successful collateral channel crossing.

    PubMed

    Suzuki, Yoriyasu; Muto, Makoto; Yamane, Masahisa; Muramatsu, Toshiya; Okamura, Atsunori; Igarashi, Yasumi; Fujita, Tsutomu; Nakamura, Shigeru; Oida, Akitsugu; Tsuchikane, Etsuo

    2017-07-01

    To evaluate factors for predicting retrograde CTO-PCI failure after successful collateral channel crossing. Successful guidewire/catheter collateral channel crossing is important for the retrograde approach in percutaneous coronary intervention (PCI) for chronic total occlusion (CTO). A total of 5984 CTO-PCI procedures performed in 45 centers in Japan from 2009 to 2012 were studied. The retrograde approach was used in 1656 CTO-PCIs (27.7%). We investigated these retrograde procedures to evaluate factors for predicting retrograde CTO-PCI failure even after successful collateral channel crossing. Successful guidewire/catheter collateral crossing was achieved in 77.1% (n = 1,276) of 1656 retrograde CTO-PCI procedures. Retrograde procedural success after successful collateral crossing was achieved in 89.4% (n = 1,141). Univariate analysis showed that the predictors for retrograde CTO-PCI failure were in-stent occlusion (OR = 1.9829, 95%CI = 1.1783 - 3.3370, P = 0.0088), calcified lesions (OR = 1.9233, 95%CI = 1.2463 - 2.9679, P = 0.0027), and lesion tortuosity (OR = 1.5244, 95%CI = 1.0618 - 2.1883, P = 0.0216). On multivariate analysis, lesion calcification was an independent predictor of retrograde CTO-PCI failure after successful collateral channel crossing (OR = 1.3472, 95%CI = 1.0614 - 1.7169, P = 0.0141). The success rate of retrograde CTO-PCI following successful guidewire/catheter collateral channel crossing was high in this registry. Lesion calcification was an independent predictor of retrograde CTO-PCI failure after successful collateral channel crossing. Devices and techniques to overcome complex CTO lesion morphology, such as lesion calcification, are required to further improve the retrograde CTO-PCI success rate. © 2016 Wiley Periodicals, Inc.

  11. Analysis and Test Correlation of Proof of Concept Box for Blended Wing Body-Low Speed Vehicle

    NASA Technical Reports Server (NTRS)

    Spellman, Regina L.

    2003-01-01

    The Low Speed Vehicle (LSV) is a 14.2% scale remotely piloted vehicle of the revolutionary Blended Wing Body concept. The design of the LSV includes an all composite airframe. Due to internal manufacturing capability restrictions, room temperature layups were necessary. An extensive materials testing and manufacturing process development effort was undertaken to establish a process that would achieve the high modulus/low weight properties required to meet the design requirements. The analysis process involved a loads development effort that incorporated aero loads to determine internal forces that could be applied to a traditional FEM of the vehicle and to conduct detailed component analyses. A new tool, Hypersizer, was added to the design process to address various composite failure modes and to optimize the skin panel thickness of the upper and lower skins for the vehicle. The analysis required an iterative approach as material properties were continually changing. As a part of the material characterization effort, test articles, including a proof of concept wing box and a full-scale wing, were fabricated. The proof of concept box was fabricated based on very preliminary material studies and tested in bending, torsion, and shear. The box was then tested to failure under shear. The proof of concept box was also analyzed using Nastran and Hypersizer. The results of both analyses were scaled to determine the predicted failure load. The test results were compared to both the Nastran and Hypersizer analytical predictions. The actual failure occurred at 899 lbs. The failure was predicted at 1167 lbs based on the Nastran analysis. The Hypersizer analysis predicted a lower failure load of 960 lbs. The Nastran analysis alone was not sufficient to predict the failure load because it does not identify local composite failure modes. This analysis has traditionally been done using closed form solutions. Although Hypersizer is typically used as an optimizer for the design process, the failure prediction was used to help gain acceptance and confidence in this new tool. The correlated models and process were to be used to analyze the full BWB-LSV airframe design. The analysis and correlation with test results of the proof of concept box are presented here, including the comparison of the Nastran and Hypersizer results.

  12. Integrated failure detection and management for the Space Station Freedom external active thermal control system

    NASA Technical Reports Server (NTRS)

    Mesloh, Nick; Hill, Tim; Kosyk, Kathy

    1993-01-01

    This paper presents the integrated approach toward failure detection, isolation, and recovery/reconfiguration to be used for the Space Station Freedom External Active Thermal Control System (EATCS). The on-board and on-ground diagnostic capabilities of the EATCS are discussed. Time and safety critical features, as well as noncritical failures, and the detection coverage for each provided by existing capabilities are reviewed. The allocation of responsibility between on-board software and ground-based systems, to be shown during ground testing at the Johnson Space Center, is described. Failure isolation capabilities allocated to the ground include some functionality originally found on orbit but moved to the ground to reduce on-board resource requirements. Complex failures requiring the analysis of multiple external variables, such as environmental conditions, heat loads, or station attitude, are also allocated to ground personnel.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mossahebi, S; Feigenberg, S; Nichols, E

    Purpose: GammaPod™, the first stereotactic radiotherapy device for early stage breast cancer treatment, has been recently installed and commissioned at our institution. A multidisciplinary working group applied the failure mode and effects analysis (FMEA) approach to perform a risk analysis. Methods: FMEA was applied to the GammaPod™ treatment process by: 1) generating process maps for each stage of treatment; 2) identifying potential failure modes and outlining their causes and effects; 3) scoring the potential failure modes using the risk priority number (RPN) system based on the product of severity, frequency of occurrence, and detectability (each ranging 1–10). An RPN higher than 150 was set as the threshold for minimal concern of risk. For these high-risk failure modes, potential quality assurance procedures and risk control techniques have been proposed. A new set of severity, occurrence, and detectability values was re-assessed in the presence of the suggested mitigation strategies. Results: In the single-day image-and-treat workflow, 19, 22, and 27 sub-processes were identified for the stages of simulation, treatment planning, and delivery processes, respectively. During the simulation stage, 38 potential failure modes were found and scored, in terms of RPN, in the range of 9-392. 34 potential failure modes were analyzed in treatment planning with a score range of 16-200. For the treatment delivery stage, 47 potential failure modes were found with an RPN score range of 16-392. The most critical failure modes consisted of breast-cup pressure loss and incorrect target localization due to patient upper-body alignment inaccuracies. The final RPN scores of these failure modes, based on the recommended actions, were assessed to be below 150. Conclusion: The FMEA risk analysis technique was applied to the treatment process of GammaPod™, a new stereotactic radiotherapy technology. Application of systematic risk analysis methods is projected to lead to improved quality of GammaPod™ treatments. Ying Niu and Cedric Yu are affiliated with Xcision Medical Systems.

  14. Nodal failure index approach to groundwater remediation design

    USGS Publications Warehouse

    Lee, J.; Reeves, H.W.; Dowding, C.H.

    2008-01-01

    Computer simulations often are used to design and to optimize groundwater remediation systems. We present a new computationally efficient approach that calculates the reliability of remedial design at every location in a model domain with a single simulation. The estimated reliability and other model information are used to select a best remedial option for given site conditions, conceptual model, and available data. To evaluate design performance, we introduce the nodal failure index (NFI) to determine the number of nodal locations at which the probability of success is below the design requirement. The strength of the NFI approach is that selected areas of interest can be specified for analysis and the best remedial design determined for this target region. An example application of the NFI approach using a hypothetical model shows how the spatial distribution of reliability can be used for a decision support system in groundwater remediation design. ?? 2008 ASCE.
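
    The NFI itself reduces to a count over the model grid, optionally restricted to a target region. A minimal sketch (the probabilities are placeholders for the reliability field produced by the simulation):

      import numpy as np

      def nodal_failure_index(p_success, requirement=0.95, mask=None):
          # count nodes whose probability of remedial success falls below
          # the design requirement; mask restricts the count to a target region
          below = np.asarray(p_success) < requirement
          if mask is not None:
              below &= np.asarray(mask, dtype=bool)
          return int(below.sum())

      p = np.random.default_rng(1).uniform(0.8, 1.0, size=(50, 50))  # hypothetical
      print(nodal_failure_index(p))  # the lower the NFI, the better the design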

  15. Independent Orbiter Assessment (IOA): Analysis of the reaction control system, volume 1

    NASA Technical Reports Server (NTRS)

    Burkemper, V. J.; Haufler, W. A.; Odonnell, R. A.; Paul, D. J.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results for the Reaction Control System (RCS). The purpose of the RCS is to provide thrust in and about the X, Y, Z axes for External Tank (ET) separation; orbit insertion maneuvers; orbit translation maneuvers; on-orbit attitude control; rendezvous; proximity operations (payload deploy and capture); deorbit maneuvers; and abort attitude control. The RCS is situated in three independent modules, one forward in the orbiter nose and one in each OMS/RCS pod. Each RCS module consists of the following subsystems: Helium Pressurization Subsystem; Propellant Storage and Distribution Subsystem; Thruster Subsystem; and Electrical Power Distribution and Control Subsystem. Of the failure modes analyzed, 307 could potentially result in a loss of life and/or loss of vehicle.

  16. Independent Orbiter Assessment (IOA): Analysis of the reaction control system, volume 3

    NASA Technical Reports Server (NTRS)

    Burkemper, V. J.; Haufler, W. A.; Odonnell, R. A.; Paul, D. J.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results for the Reaction Control System (RCS). The RCS is situated in three independent modules, one forward in the orbiter nose and one in each OMS/RCS pod. Each RCS module consists of the following subsystems: Helium Pressurization Subsystem; Propellant Storage and Distribution Subsystem; Thruster Subsystem; and Electrical Power Distribution and Control Subsystem. Volume 3 continues the presentation of IOA analysis worksheets and the potential critical items list.

  17. Failure mode analysis of silicon-based intracortical microelectrode arrays in non-human primates

    PubMed Central

    Barrese, James C; Rao, Naveen; Paroo, Kaivon; Triebwasser, Corey; Vargas-Irwin, Carlos; Franquemont, Lachlan; Donoghue, John P

    2016-01-01

    Objective Brain–computer interfaces (BCIs) using chronically implanted intracortical microelectrode arrays (MEAs) have the potential to restore lost function to people with disabilities if they work reliably for years. Current sensors fail to provide reliably useful signals over extended periods of time for reasons that are not clear. This study reports a comprehensive retrospective analysis from a large set of implants of a single type of intracortical MEA in a single species, with a common set of measures in order to evaluate failure modes. Approach Since 1996, 78 silicon MEAs were implanted in 27 monkeys (Macaca mulatta). We used two approaches to find reasons for sensor failure. First, we classified the time course leading up to complete recording failure as acute (abrupt) or chronic (progressive). Second, we evaluated the quality of electrode recordings over time based on signal features and electrode impedance. Failure modes were divided into four categories: biological, material, mechanical, and unknown. Main results Recording duration ranged from 0 to 2104 days (5.75 years), with a mean of 387 days and a median of 182 days (n = 78). Sixty-two arrays failed completely with a mean time to failure of 332 days (median = 133 days) while nine array experiments were electively terminated for experimental reasons (mean = 486 days). Seven remained active at the close of this study (mean = 753 days). Most failures (56%) occurred within a year of implantation, with acute mechanical failures the most common class (48%), largely because of connector issues (83%). Among grossly observable biological failures (24%), a progressive meningeal reaction that separated the array from the parenchyma was most prevalent (14.5%). In the absence of acute interruptions, electrode recordings showed a slow progressive decline in spike amplitude, noise amplitude, and number of viable channels that predicts complete signal loss by about eight years. Impedance measurements showed systematic early increases, which did not appear to affect recording quality, followed by a slow decline over years. The combination of slowly falling impedance and signal quality in these arrays indicates that insulating material failure is the most significant factor. Significance This is the first long-term failure mode analysis of an emerging BCI technology in a large series of non-human primates. The classification system introduced here may be used to standardize how neuroprosthetic failure modes are evaluated. The results demonstrate the potential for these arrays to record for many years, but achieving reliable sensors will require replacing connectors with implantable wireless systems, controlling the meningeal reaction, and improving insulation materials. These results will focus future research in order to create clinical neuroprosthetic sensors, as well as valuable research tools, that are able to safely provide reliable neural signals for over a decade. PMID:24216311

  18. Failure mode analysis of silicon-based intracortical microelectrode arrays in non-human primates

    NASA Astrophysics Data System (ADS)

    Barrese, James C.; Rao, Naveen; Paroo, Kaivon; Triebwasser, Corey; Vargas-Irwin, Carlos; Franquemont, Lachlan; Donoghue, John P.

    2013-12-01

    Objective. Brain-computer interfaces (BCIs) using chronically implanted intracortical microelectrode arrays (MEAs) have the potential to restore lost function to people with disabilities if they work reliably for years. Current sensors fail to provide reliably useful signals over extended periods of time for reasons that are not clear. This study reports a comprehensive retrospective analysis from a large set of implants of a single type of intracortical MEA in a single species, with a common set of measures in order to evaluate failure modes. Approach. Since 1996, 78 silicon MEAs were implanted in 27 monkeys (Macaca mulatta). We used two approaches to find reasons for sensor failure. First, we classified the time course leading up to complete recording failure as acute (abrupt) or chronic (progressive). Second, we evaluated the quality of electrode recordings over time based on signal features and electrode impedance. Failure modes were divided into four categories: biological, material, mechanical, and unknown. Main results. Recording duration ranged from 0 to 2104 days (5.75 years), with a mean of 387 days and a median of 182 days (n = 78). Sixty-two arrays failed completely with a mean time to failure of 332 days (median = 133 days) while nine array experiments were electively terminated for experimental reasons (mean = 486 days). Seven remained active at the close of this study (mean = 753 days). Most failures (56%) occurred within a year of implantation, with acute mechanical failures the most common class (48%), largely because of connector issues (83%). Among grossly observable biological failures (24%), a progressive meningeal reaction that separated the array from the parenchyma was most prevalent (14.5%). In the absence of acute interruptions, electrode recordings showed a slow progressive decline in spike amplitude, noise amplitude, and number of viable channels that predicts complete signal loss by about eight years. Impedance measurements showed systematic early increases, which did not appear to affect recording quality, followed by a slow decline over years. The combination of slowly falling impedance and signal quality in these arrays indicates that insulating material failure is the most significant factor. Significance. This is the first long-term failure mode analysis of an emerging BCI technology in a large series of non-human primates. The classification system introduced here may be used to standardize how neuroprosthetic failure modes are evaluated. The results demonstrate the potential for these arrays to record for many years, but achieving reliable sensors will require replacing connectors with implantable wireless systems, controlling the meningeal reaction, and improving insulation materials. These results will focus future research in order to create clinical neuroprosthetic sensors, as well as valuable research tools, that are able to safely provide reliable neural signals for over a decade.

  19. Real-time diagnostics of the reusable rocket engine using on-line system identification

    NASA Technical Reports Server (NTRS)

    Guo, T.-H.; Merrill, W.; Duyar, A.

    1990-01-01

    A model-based failure diagnosis system has been proposed for real-time diagnosis of SSME failures. Actuation, sensor, and system degradation failure modes are all considered by the proposed system. In the case of SSME actuation failures, it was shown that real-time identification can effectively be used for failure diagnosis purposes. It is a direct approach since it reduces the detection, isolation, and the estimation of the extent of the failures to the comparison of parameter values before and after the failure. As with any model-based failure detection system, the proposed approach requires a fault model that embodies the essential characteristics of the failure process. The proposed diagnosis approach has the added advantage that it can be used as part of an intelligent control system for failure accommodation purposes.
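
    The essence of identification-based diagnosis is that a failure shows up as a shift in identified parameters, so detection, isolation, and extent estimation reduce to a before/after comparison. A toy sketch with a single least-squares gain (the SSME work identifies full multivariable fault models; the tolerance here is arbitrary):

      import numpy as np

      def estimate_gain(u, y):
          # least-squares estimate of a static gain k in y ~ k*u
          u, y = np.asarray(u, float), np.asarray(y, float)
          return float(u @ y / (u @ u))

      def detect_fault(u_ref, y_ref, u_now, y_now, tol=0.10):
          # flag a fault when the identified gain drifts more than tol
          # (fractionally) from its pre-failure value; the drift size
          # estimates the extent of the degradation
          k0 = estimate_gain(u_ref, y_ref)
          k1 = estimate_gain(u_now, y_now)
          drift = abs(k1 - k0) / abs(k0)
          return drift > tol, drift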

  20. An Aircraft Lifecycle Approach for the Cost-Benefit Analysis of Prognostics and Condition-Based Maintenance-Based on Discrete-Event Simulation

    DTIC Science & Technology

    2014-10-02

    ...MPD. This manufacturer documentation contains maintenance tasks with specification of intervals and required man-hours that are to be carried out... failures, without consideration of false alarms and missed failures (see also section 4.1.3). The task redundancy rate is the percentage of preventive...

  1. Failure of wooden sandwich beam reinforced with glass/epoxy faces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papakaliatakis, G. E.; Zacharopoulos, D. A.

    2015-12-31

    The mechanical properties and failure of a wooden beam strengthened with two glass/epoxy composite faces were studied and compared with those of an unstrengthened wooden beam. Stresses and deflections in both beams under three-point bending were computed with a detailed nonlinear orthotropic finite element analysis of the idealized specimen geometry. The failure of the wooden beams was studied by applying the Tsai-Hill criterion, with the shear strength of the adhesive taken into account. All specimens were tested in three-point bending, and the experimental results were compared with those of the theoretical finite element analysis. The comparison makes the advantage of the strengthened wooden beam over the simple wooden beam obvious, and theoretical predictions were in good agreement with experimental results.
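
    For reference, the Tsai-Hill criterion for a single orthotropic ply under plane stress predicts failure when the index below reaches one. A minimal sketch (the stresses and strengths are placeholder values; sign-dependent tensile/compressive strengths are omitted for brevity):

      def tsai_hill_index(s1, s2, t12, X, Y, S):
          # (s1/X)^2 - s1*s2/X^2 + (s2/Y)^2 + (t12/S)^2 ; failure when >= 1
          return (s1 / X) ** 2 - (s1 * s2) / X ** 2 + (s2 / Y) ** 2 + (t12 / S) ** 2

      # hypothetical ply stresses and strengths, MPa
      print(tsai_hill_index(s1=300.0, s2=20.0, t12=30.0, X=600.0, Y=40.0, S=60.0))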

  2. Interdependent Multi-Layer Networks: Modeling and Survivability Analysis with Applications to Space-Based Networks

    PubMed Central

    Castet, Jean-Francois; Saleh, Joseph H.

    2013-01-01

    This article develops a novel approach and algorithmic tools for the modeling and survivability analysis of networks with heterogeneous nodes, and examines their application to space-based networks. Space-based networks (SBNs) allow the sharing of spacecraft on-orbit resources, such as data storage, processing, and downlink. Each spacecraft in the network can have different subsystem composition and functionality, thus resulting in node heterogeneity. Most traditional survivability analyses of networks assume node homogeneity and as a result, are not suited for the analysis of SBNs. This work proposes that heterogeneous networks can be modeled as interdependent multi-layer networks, which enables their survivability analysis. The multi-layer aspect captures the breakdown of the network according to common functionalities across the different nodes, and it allows the emergence of homogeneous sub-networks, while the interdependency aspect constrains the network to capture the physical characteristics of each node. Definitions of primitives of failure propagation are devised. Formal characterization of interdependent multi-layer networks, as well as algorithmic tools for the analysis of failure propagation across the network are developed and illustrated with space applications. The SBN applications considered consist of several networked spacecraft that can tap into each other's Command and Data Handling subsystem, in case of failure of its own, including the Telemetry, Tracking and Command, the Control Processor, and the Data Handling sub-subsystems. Various design insights are derived and discussed, and the capability to perform trade-space analysis with the proposed approach for various network characteristics is indicated. The select results here shown quantify the incremental survivability gains (with respect to a particular class of threats) of the SBN over the traditional monolith spacecraft. Failure of the connectivity between nodes is also examined, and the results highlight the importance of the reliability of the wireless links between spacecraft (nodes) to enable any survivability improvements for space-based networks. PMID:23599835

  3. Interdependent multi-layer networks: modeling and survivability analysis with applications to space-based networks.

    PubMed

    Castet, Jean-Francois; Saleh, Joseph H

    2013-01-01

    This article develops a novel approach and algorithmic tools for the modeling and survivability analysis of networks with heterogeneous nodes, and examines their application to space-based networks. Space-based networks (SBNs) allow the sharing of spacecraft on-orbit resources, such as data storage, processing, and downlink. Each spacecraft in the network can have different subsystem composition and functionality, thus resulting in node heterogeneity. Most traditional survivability analyses of networks assume node homogeneity and as a result, are not suited for the analysis of SBNs. This work proposes that heterogeneous networks can be modeled as interdependent multi-layer networks, which enables their survivability analysis. The multi-layer aspect captures the breakdown of the network according to common functionalities across the different nodes, and it allows the emergence of homogeneous sub-networks, while the interdependency aspect constrains the network to capture the physical characteristics of each node. Definitions of primitives of failure propagation are devised. Formal characterization of interdependent multi-layer networks, as well as algorithmic tools for the analysis of failure propagation across the network are developed and illustrated with space applications. The SBN applications considered consist of several networked spacecraft that can tap into each other's Command and Data Handling subsystem, in case of failure of its own, including the Telemetry, Tracking and Command, the Control Processor, and the Data Handling sub-subsystems. Various design insights are derived and discussed, and the capability to perform trade-space analysis with the proposed approach for various network characteristics is indicated. The select results here shown quantify the incremental survivability gains (with respect to a particular class of threats) of the SBN over the traditional monolith spacecraft. Failure of the connectivity between nodes is also examined, and the results highlight the importance of the reliability of the wireless links between spacecraft (nodes) to enable any survivability improvements for space-based networks.
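
    The failure-propagation primitives can be pictured as a cascade over an interdependency graph; a toy breadth-first sketch (the article's primitives are richer, covering probabilistic and partial propagation across layers):

      from collections import deque

      def propagate_failures(dependents, initially_failed):
          # dependents[n] lists nodes that fail if node n fails; returns the
          # full failed set once the cascade settles
          failed = set(initially_failed)
          queue = deque(initially_failed)
          while queue:
              node = queue.popleft()
              for nxt in dependents.get(node, ()):
                  if nxt not in failed:
                      failed.add(nxt)
                      queue.append(nxt)
          return failed

      # hypothetical SBN: spacecraft B and C route data handling through A
      deps = {"A.cdh": ["B.downlink", "C.downlink"]}
      print(propagate_failures(deps, {"A.cdh"}))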

  4. Workplace Learning: A Trade Union Failure to Service Needs

    ERIC Educational Resources Information Center

    Stroud, Dean; Fairbrother, Peter

    2008-01-01

    Purpose: The purpose of this paper is to open up discussion about the relationship between trade unions and workplace learning. Design/methodology/approach: The paper is based on an analysis of a series of case-studies of restructuring in the European steel industry, incorporating interviews, observation and documentary analysis. Findings: The…

  5. Generalization of the slip line field theory for temperature sensitive visco-plastic materials

    NASA Astrophysics Data System (ADS)

    Paesold, Martin; Peters, Max; Regenauer-Lieb, Klaus; Veveakis, Manolis; Bassom, Andrew

    2015-04-01

    Geological processes can be a combination of various effects such as heat production or consumption, chemical reactions, or fluid flow. These individual effects are coupled to each other via feedbacks, and the mathematical analysis becomes challenging because of these interdependencies. Here we concentrate solely on thermo-mechanical coupling, and a main result of this work is that the strength of the coupling depends on material parameters and boundary conditions. The transitions from weak to strong coupling can be studied in the context of a bifurcation analysis. Classically, material instabilities in solids are approached as material bifurcations of a rate-independent, isothermal, elasto-plastic solid. However, previous research has shown that temperature and deformation rate are important factors and are fully coupled with the mechanical deformation. Early experiments in steel revealed a distinct pattern of localized heat dissipation and plastic deformation known as heat lines. Further, earth materials, soils, rocks and ceramics are known to be greatly influenced by temperature, with strain localization being strongly affected by thermal loading. In this work, we provide a theoretical framework for the evolution of plastic deformation in such coupled systems, with a two-pronged approach to the prediction of localized failure. First, slip line field theory is employed to predict the geometry of the failure patterns, and second, failure criteria are derived from an energy bifurcation analysis. The bifurcation analysis is concerned with the local energy balance of the material and compares the effects of heat diffusion against heat production due to mechanical processes. Commonly, the heat is produced locally along the slip lines, and if the heat production outweighs diffusion the material is locally weakened, which eventually leads to failure. The balance of diffusion and heat production is captured by a dimensionless quantity, the Gruntfest number; only if the Gruntfest number is larger than a critical value does localized failure occur. This critical Gruntfest number depends on boundary conditions such as temperature or pressure, and hence the critical value gives rise to localization criteria. We find that the results of this approach agree with earlier contributions to the theory of plasticity but offer the advantage of a unified framework, which might prove useful in numerical schemes for visco-plasticity.

  6. Sensory redundancy management: The development of a design methodology for determining threshold values through a statistical analysis of sensor output data

    NASA Technical Reports Server (NTRS)

    Scalzo, F.

    1983-01-01

    Sensor redundancy management (SRM) requires a system which will detect failures and reconfigure avionics accordingly. A probability density function for determining false alarm rates was generated using an algorithmic approach. Microcomputer software was developed to print out tables of values for the cumulative probability of being in the domain of failure; system reliability; and the false alarm probability, given that a signal is in the domain of failure. The microcomputer software was applied to the sensor output data for various AFTI F-16 flights and sensor parameters. Practical recommendations for further research were made.
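
    The quantities tabulated above lend themselves to a short numeric sketch (the Gaussian sensor statistics, threshold, and prior below are hypothetical, not values from the report):

    ```python
    # Illustrative only: false-alarm probability for a Gaussian sensor whose
    # output beyond a threshold defines the "domain of failure".
    from scipy.stats import norm

    mu_ok, sigma = 0.0, 1.0      # healthy sensor output statistics
    mu_bad = 4.0                 # mean output under a true failure
    threshold = 3.0              # edge of the failure domain
    p_fail = 1e-3                # prior probability a failure is present

    # Probability the signal lands in the failure domain in each state.
    p_exceed_ok = norm.sf(threshold, mu_ok, sigma)
    p_exceed_bad = norm.sf(threshold, mu_bad, sigma)

    # Total probability of being in the failure domain (total probability).
    p_domain = (1 - p_fail) * p_exceed_ok + p_fail * p_exceed_bad

    # False-alarm probability given the signal is in the domain (Bayes).
    p_false_alarm = (1 - p_fail) * p_exceed_ok / p_domain
    print(f"P(in domain) = {p_domain:.2e}, "
          f"P(false alarm | in domain) = {p_false_alarm:.2f}")
    ```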

  7. Advanced Composite Wind Turbine Blade Design Based on Durability and Damage Tolerance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abumeri, Galib; Abdi, Frank

    2012-02-16

    The objective of the program was to demonstrate and verify Certification-by-Analysis (CBA) capability for wind turbine blades made from advanced lightweight composite materials. The approach integrated durability and damage tolerance analysis with robust design and virtual testing capabilities to deliver superior, durable, low weight, low cost, long life, and reliable wind blade design. The GENOA durability and life prediction software suite was used as the primary simulation tool. First, a micromechanics-based computational approach was used to assess the durability of composite laminates with ply drop features commonly used in wind turbine applications. Ply drops occur in composite joints and closures of wind turbine blades to reduce skin thicknesses along the blade span. They increase localized stress concentration, which may cause premature delamination failure in composites and reduce fatigue service life. Durability and damage tolerance (D&DT) were evaluated utilizing a multi-scale micro-macro progressive failure analysis (PFA) technique. PFA is finite element based and is capable of detecting all stages of material damage, including initiation and propagation of delamination. It assesses multiple failure criteria and includes the effects of manufacturing anomalies (i.e., void, fiber waviness). Two different approaches have been used within PFA. The first approach is Virtual Crack Closure Technique (VCCT) PFA while the second one is strength-based. Constituent stiffness and strength properties for glass and carbon based material systems were reverse engineered for use in D&DT evaluation of coupons with ply drops under static loading. Lamina and laminate properties calculated using manufacturing and composite architecture details closely matched published test data. Similarly, resin properties were determined for fatigue life calculation. The simulation not only reproduced static strength and fatigue life as observed in the test, it also showed composite damage and fracture modes that resemble those reported in the tests. The results show that computational simulation can be relied on to enhance the design of tapered composite structures such as the ones used in turbine wind blades. A computational simulation for durability, damage tolerance (D&DT) and reliability of composite wind turbine blade structures in the presence of uncertainties in material properties was performed. A composite turbine blade was first assessed with finite element based multi-scale progressive failure analysis to determine failure modes and locations as well as the fracture load. D&DT analyses were then validated with a static test performed at Sandia National Laboratories. The work was followed by a detailed weight analysis to identify the contribution of various materials to the overall weight of the blade. The methodology ensured that certain types of failure modes, such as delamination progression, are contained to reduce risk to the structure. Probabilistic analysis indicated that composite shear strength has a great influence on the blade ultimate load under static loading. Weight was reduced by 12% with robust design without loss in reliability or D&DT. Structural benefits obtained with the use of enhanced matrix properties through nanoparticle infusion were also assessed. Thin unidirectional fiberglass layers enriched with silica nanoparticles were applied to the outer surfaces of a wind blade to improve its overall structural performance and durability. The wind blade was a 9-meter prototype structure manufactured and tested under three-saddle static loading at Sandia National Laboratories (SNL). The blade manufacturing did not include the use of any nano-material. With silica nanoparticles in glass composite applied to the exterior surfaces of the blade, the durability and damage tolerance (D&DT) results from multi-scale PFA showed an increase in ultimate load of the blade by 9.2% as compared to baseline structural performance (without nano). The use of nanoparticles led to a delay in the onset of delamination. Load-displacement relationships obtained from testing of the blade with baseline neat material were compared to the ones from analytical simulation using neat resin and using silica nanoparticles in the resin. Multi-scale PFA results for the neat material construction closely matched those from the test for both load displacement and the location and type of damage and failure. AlphaSTAR demonstrated that wind blade structures made from advanced composite materials can be certified with multi-scale progressive failure analysis by following a building-block verification approach.

  8. "If at first you don't succeed": using failure to improve teaching.

    PubMed

    Pinsky, L E; Irby, D M

    1997-11-01

    The authors surveyed a group of distinguished clinical teachers regarding episodes of failure that had subsequently led to improvements in their teaching. Specifically, they examined how these teachers had used reflection on failed approaches as a tool for experiential learning. The respondents believed that failures were as important as successes in learning to be a good teacher. Using qualitative content analysis of the respondents' comments, the authors identified eight common types of failure distributed across the three phases of teaching: planning, teaching, and reflection. Common failures associated with the planning stage were misjudging learners, lack of preparation, presenting too much content, lack of purpose, and difficulties with audiovisuals. The primary failure associated with actual teaching was inflexibly using a single teaching method. In the reflection phase, respondents said they most often realized that they had made one of two common errors: selecting the wrong teaching strategy or incorrectly implementing a sound strategy. For each identified failure, the respondents made recommendations for improvement. The deliberative process that had guided planning, teaching, and reflecting had helped all of them transform past failures into successes.

  9. Experiences with Extra-Vehicular Activities in Response to Critical ISS Contingencies

    NASA Technical Reports Server (NTRS)

    Van Cise, E. A.; Kelly, B. J.; Radigan, J. P.; Cranmer, C. W.

    2016-01-01

    The maturation of the International Space Station (ISS) design from the proposed Space Station Freedom to today's current implementation resulted in external hardware redundancy vulnerabilities in the final design. Failure to compensate for or respond to these vulnerabilities could put the ISS in a posture where it could no longer function as a habitable space station. In the first years of ISS assembly, these responses were to largely be addressed by the continued resupply and Extra-Vehicular Activity (EVA) capabilities of the Space Shuttle. Even prior to the decision to retire the Space Shuttle, it was realized that ISS needed to have its own capability to be able to rapidly repair or replace external hardware without needing to wait for the next cargo resupply mission. As documented in a previous publication, in 2006 development was started to baseline Extra-Vehicular Activity (EVA, or spacewalk) procedures to replace hardware components whose failure would expose some of the ISS vulnerabilities should a second failure occur. This development work laid the groundwork for the onboard crews and the ground operations and engineering teams to be ready to replace any of this failed hardware. In 2010, this development work was put to the test when one of these pieces of hardware failed. This paper will provide a brief summary of the planning and processes established in the original Contingency EVA development phase. It will then review how those plans and processes were implemented in 2010, highlighting what went well as well as where there were deficiencies between theory and reality. This paper will show that the original approach and analyses, though sound, were not as thorough as they should have been in the realm of planning for next worse failures, for documenting Programmatic approval of key assumptions, and not pursuing sufficient engineering analysis prior to the failure of the hardware. The paper will further highlight the changes made to the Contingency EVA preparation team structure, approach, goals, and the resources allocated to its work after the 2010 events. Finally, the authors will overview the implementation of these updates in addressing failures onboard the ISS in 2012, 2013, and 2014. The successful use of the updated approaches, and the application of the approaches to other spacewalks, will demonstrate the effectiveness of this additional work and make a case for putting significant time and resources into pre-failure planning and analysis for critical hardware items on human-tended spacecraft.

  10. Identification and classification of failure modes in laminated composites by using a multivariate statistical analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Baccar, D.; Söffker, D.

    2017-11-01

    Acoustic Emission (AE) is a suitable method to monitor the health of composite structures in real-time. However, AE-based failure mode identification and classification are still complex to apply due to the fact that AE waves are generally released simultaneously from all AE-emitting damage sources. Hence, the use of advanced signal processing techniques in combination with pattern recognition approaches is required. In this paper, AE signals generated from laminated carbon fiber reinforced polymer (CFRP) subjected to an indentation test are examined and analyzed. A new pattern recognition approach involving a number of processing steps able to be implemented in real-time is developed. Unlike common classification approaches, here only CWT coefficients are extracted as relevant features. First, the Continuous Wavelet Transform (CWT) is applied to the AE signals. Then, a dimensionality reduction process using Principal Component Analysis (PCA) is carried out on the coefficient matrices. The PCA-based feature distribution is analyzed using Kernel Density Estimation (KDE), allowing the determination of a specific pattern for each fault-specific AE signal. Moreover, the waveform and frequency content of the AE signals are examined in depth and compared with fundamental assumptions reported in this field. A correlation between the identified patterns and failure modes is achieved. The introduced method improves damage classification and can be used as a non-destructive evaluation tool.
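
    A minimal sketch of the CWT-PCA-KDE chain described above (the wavelet choice, scales, and the synthetic stand-in signal are assumptions for illustration; PyWavelets, scikit-learn, and SciPy supply the building blocks):

    ```python
    # Sketch: time-scale decomposition, dimensionality reduction, and a
    # density-based "pattern" for one (here synthetic) AE signal.
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    signal = rng.standard_normal(1024)           # stand-in for one AE burst

    # 1) Continuous Wavelet Transform: time-scale coefficient matrix.
    scales = np.arange(1, 65)
    coefs, _ = pywt.cwt(signal, scales, "morl")  # shape (64, 1024)

    # 2) PCA on the coefficient matrix to reduce dimensionality.
    pca = PCA(n_components=2)
    features = pca.fit_transform(np.abs(coefs))  # one 2-D point per scale

    # 3) KDE of the PCA scores: the density shape is the signal's pattern.
    kde = gaussian_kde(features.T)
    print(kde(features[:5].T))                   # density at sample points
    ```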

  11. Intelligent data analysis: the best approach for chronic heart failure (CHF) follow up management.

    PubMed

    Mohammadzadeh, Niloofar; Safdari, Reza; Baraani, Alireza; Mohammadzadeh, Farshid

    2014-08-01

    Intelligent data analysis has the ability to prepare and present complex relations between symptoms and diseases and between medical and treatment consequences, and it definitely has a significant role in improving the follow-up management of chronic heart failure (CHF) patients, increasing speed and accuracy in diagnosis and treatment, reducing costs, and supporting the design and implementation of clinical guidelines. The aim of this article is to describe intelligent data analysis methods for improving patient monitoring in the follow-up and treatment of chronic heart failure patients, as the best approach for CHF follow-up management. A minimum data set (MDS) for monitoring and follow-up of CHF patients was designed as a checklist with six main parts. All CHF patients discharged from Tehran Heart Center in 2013 were selected. The MDS items for monitoring CHF patient status were collected over 5 months at three different follow-up times. The gathered data were imported into the RapidMiner 5 software. Modeling was based on decision tree methods such as C4.5, CHAID, and ID3, and the k-nearest neighbors algorithm (k-NN) with k=1. The final analysis was based on the voting method. The decision trees and k-NN were evaluated by cross-validation. Creating and using standard terminologies, and databases consistent with these terminologies, helps to meet the challenges related to data collection from various places and data application in intelligent data analysis. It should be noted that intelligent analysis of health data and intelligent systems can never replace cardiologists; they can only act as a helpful tool for the cardiologist's decision making.
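
    A hedged sketch of the described modeling setup (scikit-learn's CART-style trees stand in for C4.5/CHAID/ID3, and the data are synthetic; the hard-vote combination and k=1 follow the abstract):

    ```python
    # Sketch: several base learners combined by majority vote, scored by
    # cross-validation, on synthetic stand-in data.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=300, n_features=12, random_state=0)

    vote = VotingClassifier(
        estimators=[
            ("tree_gini", DecisionTreeClassifier(criterion="gini")),
            ("tree_entropy", DecisionTreeClassifier(criterion="entropy")),
            ("knn", KNeighborsClassifier(n_neighbors=1)),  # k-NN with k=1
        ],
        voting="hard",  # final label decided by majority vote
    )
    scores = cross_val_score(vote, X, y, cv=5)  # cross-validated accuracy
    print(scores.mean())
    ```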

  12. Analysis of rainfall-induced slope instability using a field of local factor of safety

    USGS Publications Warehouse

    Lu, Ning; Şener-Kaya, Başak; Wayllace, Alexandra; Godt, Jonathan W.

    2012-01-01

    Slope-stability analyses are mostly conducted by identifying or assuming a potential failure surface and assessing the factor of safety (FS) of that surface. This approach of assigning a single FS to a potentially unstable slope provides little insight on where the failure initiates or the ultimate geometry and location of a landslide rupture surface. We describe a method to quantify a scalar field of FS based on the concept of the Coulomb stress and the shift in the state of stress toward failure that results from rainfall infiltration. The FS at each point within a hillslope is called the local factor of safety (LFS) and is defined as the ratio of the Coulomb stress of the potential failure state to the Coulomb stress at the current state of stress under the Mohr-Coulomb criterion. Comparative assessment with limit-equilibrium and hybrid finite element limit-equilibrium methods shows that the proposed LFS is consistent with these approaches and yields additional insight into the geometry and location of the potential failure surface and how instability may initiate and evolve with changes in pore water conditions. Quantitative assessments applying the new LFS field method to slopes under infiltration conditions demonstrate that the LFS has the potential to overcome several major limitations of the classical FS methodologies, such as the assumed shape of the failure surface and the inherent underestimation of slope instability. Comparison with infinite-slope methods, including a recent extension to variably saturated conditions, shows further enhancement in assessing shallow landslide occurrence using the LFS methodology. Although we use only a linear elastic solution for the state of stress, with no post-failure analysis that would require more sophisticated elastoplastic or other theories, the LFS provides a new means to quantify the potential instability zones in hillslopes under variably saturated conditions using stress-field based methods.
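
    In compact form, with symbols as commonly used for the Mohr-Coulomb criterion (an editorial summary in standard notation, not quoted from the paper), the definition reads:

    ```latex
    \tau^{*} = c' + \sigma' \tan\phi', \qquad
    \mathrm{LFS}(\mathbf{x}) = \frac{\tau^{*}(\mathbf{x})}{\tau(\mathbf{x})},
    ```

    where tau is the current Coulomb (shear) stress at a point and tau* is the Coulomb stress at the potential failure state, given cohesion c', friction angle phi', and effective normal stress sigma'. LFS values at or below unity flag local failure; infiltration lowers LFS by reducing the effective stress.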

  13. Design and Rationale of the Cognitive Intervention to Improve Memory in Heart Failure Patients Study.

    PubMed

    Pressler, Susan J; Giordani, Bruno; Titler, Marita; Gradus-Pizlo, Irmina; Smith, Dean; Dorsey, Susan G; Gao, Sujuan; Jung, Miyeon

    Memory loss is an independent predictor of mortality among heart failure patients. Twenty-three percent to 50% of heart failure patients have comorbid memory loss, but few interventions are available to treat the memory loss. The aims of this 3-arm randomized controlled trial were to (1) evaluate the efficacy of a computerized cognitive training intervention using BrainHQ to improve the primary outcomes of memory and serum brain-derived neurotrophic factor levels and the secondary outcomes of working memory, instrumental activities of daily living, and health-related quality of life among heart failure patients; (2) evaluate the incremental cost-effectiveness of BrainHQ; and (3) examine depressive symptoms and genomic moderators of the BrainHQ effect. A sample of 264 heart failure patients within 4 equal-sized blocks (normal/low baseline cognitive function and gender) will be randomly assigned to (1) BrainHQ, (2) active control computer-based crossword puzzles, and (3) usual care control groups. BrainHQ is an 8-week, 40-hour program individualized to each patient's performance. Data collection will be completed at baseline and at 10 weeks and 4 and 8 months. Descriptive statistics, mixed model analyses, and cost-utility analysis using an intent-to-treat approach will be computed. This research will provide new knowledge about the efficacy of BrainHQ to improve memory and increase serum brain-derived neurotrophic factor levels in heart failure. If efficacious, the intervention will provide a new therapeutic approach that is easy to disseminate to treat a serious comorbid condition of heart failure.

  14. Multi-Scale Impact and Compression-After-Impact Modeling of Reinforced Benzoxazine/Epoxy Composites using Micromechanics Approach

    NASA Astrophysics Data System (ADS)

    Montero, Marc Villa; Barjasteh, Ehsan; Baid, Harsh K.; Godines, Cody; Abdi, Frank; Nikbin, Kamran

    A multi-scale micromechanics approach, along with a finite element (FE) predictive tool, is developed to analyze the low-energy-impact damage footprint and compression-after-impact (CAI) response of composite laminates, and is tested and verified against experimental data. Effective fiber and matrix properties were reverse-engineered from lamina properties using an optimization algorithm and used to assess damage at the micro-level during impact and post-impact FE simulations. Progressive failure dynamic analysis (PFDA) was performed as a two-step process simulation. Damage mechanisms at the micro-level were continuously evaluated during the analyses. The contribution of each failure mode was tracked during the simulations, and the damage and delamination footprint size and shape were predicted to understand when, where and why failure occurred during both the impact and CAI events. The composite laminate was manufactured by vacuum infusion of an aero-grade toughened Benzoxazine system into the fabric preform. The delamination footprint was measured using C-scan data from the impacted panels and compared with the predicted values obtained from the proposed multi-scale micromechanics approach coupled with FE analysis. Furthermore, the residual strength was predicted from the load-displacement curve and compared with the experimental values as well.

  15. Probability techniques for reliability analysis of composite materials

    NASA Technical Reports Server (NTRS)

    Wetherhold, Robert C.; Ucci, Anthony M.

    1994-01-01

    Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods which will be evaluated are: first-order, second-moment FPI methods; second-order, second-moment FPI methods; simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservatism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
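
    The two Monte Carlo variants can be sketched in a few lines (a stress-strength toy problem with invented distributions, not the report's laminae data):

    ```python
    # Sketch: estimate P(failure) = P(strength < stress) by simple Monte
    # Carlo and by importance sampling with a shifted proposal density.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    strength = stats.norm(loc=600.0, scale=40.0)   # lamina strength, MPa
    stress = stats.norm(loc=400.0, scale=30.0)     # applied stress, MPa
    n = 200_000

    # Simple Monte Carlo: sample both variables and count failures.
    pf_simple = np.mean(strength.rvs(n, random_state=rng) <
                        stress.rvs(n, random_state=rng))

    # Importance sampling: draw strength from a proposal shifted toward
    # the failure region, then reweight by the density ratio.
    proposal = stats.norm(loc=480.0, scale=40.0)
    r = proposal.rvs(n, random_state=rng)
    s = stress.rvs(n, random_state=rng)
    w = strength.pdf(r) / proposal.pdf(r)          # likelihood-ratio weights
    pf_is = np.mean(w * (r < s))

    print(pf_simple, pf_is)  # both approximate the same small probability
    ```

    For rare failures, the importance-sampled estimate has far lower variance at the same sample count, which is the efficiency argument the report weighs against the FPI methods.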

  16. Parts and Components Reliability Assessment: A Cost Effective Approach

    NASA Technical Reports Server (NTRS)

    Lee, Lydia

    2009-01-01

    System reliability assessment is a methodology which incorporates reliability analyses performed at the parts and components level, such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA) and Fault Tree Analysis (FTA), to assess risks, perform design tradeoffs, and thereby ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standard-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published by United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages, when the system design is still in development and hard failure data is not yet available, or manufacturers are not contractually obliged by their customers to publish the reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success in an efficient manner, at low cost, and on a tight schedule.

  17. Development of an engineering analysis of progressive damage in composites during low velocity impact

    NASA Technical Reports Server (NTRS)

    Humphreys, E. A.

    1981-01-01

    A computerized, analytical methodology was developed to study damage accumulation during low velocity lateral impact of layered composite plates. The impact event was modeled as perfectly plastic with complete momentum transfer to the plate structure. A transient dynamic finite element approach was selected to predict the displacement time response of the plate structure. Composite ply and interlaminar stresses were computed at selected time intervals and subsequently evaluated to predict layer and interlaminar damage. The effects of damage on elemental stiffness were then incorporated back into the analysis for subsequent time steps. Damage predicted included fiber failure, matrix ply failure and interlaminar delamination.

  18. TU-FG-201-11: Evaluating the Validity of Prospective Risk Analysis Methods: A Comparison of Traditional FMEA and Modified Healthcare FMEA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lah, J; Manger, R; Kim, G

    Purpose: To examine the ability of traditional Failure Mode and Effects Analysis (FMEA) and a light version of Healthcare FMEA (HFMEA), called Scenario Analysis of FMEA (SAFER), by comparing their outputs in terms of the risks identified and their severity rankings. Methods: We applied the two prospective quality management methods to surface image-guided, linac-based radiosurgery (SIG-RS). For the traditional FMEA, decisions on how to improve an operation are based on the risk priority number (RPN), the product of three indices: occurrence, severity, and detectability. The SAFER approach utilized two indices, frequency and severity, which were defined by a multidisciplinary team. A criticality matrix was divided into four categories: very low, low, high, and very high. For high-risk events, an additional evaluation was performed. Based upon the criticality of the process, it was decided whether additional safety measures were needed and what they should comprise. Results: The two methods were independently compared to determine whether the results and rated risks matched. Our results showed an agreement of 67% between the FMEA and SAFER approaches for the 15 riskiest SIG-specific failure modes. The main differences between the two approaches were the distribution of the values, and the failure modes (Nos. 52, 54, and 154) that have high SAFER scores do not necessarily have high FMEA RPN scores. In our results, there were also additional risks identified by both methods with little correspondence between them. In SAFER, once the risk score is determined, the underlying decision tree or failure mode should be investigated further. Conclusion: The FMEA method takes into account the probability that an error passes without being detected. SAFER is inductive, because it requires the identification of consequences from causes, and semi-quantitative, since it allows the prioritization of risks and mitigation measures, and thus is directly applicable to the clinical parts of radiotherapy.
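
    The two scorings are easy to contrast numerically (the index scales and matrix thresholds below are illustrative, not the clinic's):

    ```python
    # Toy comparison: three-index FMEA RPN vs. a two-index SAFER-style
    # criticality matrix. A failure mode can rank high on one and not the
    # other, which is the divergence the study reports.
    def rpn(occurrence, severity, detectability):
        """Traditional FMEA risk priority number (each index 1-10)."""
        return occurrence * severity * detectability

    def safer_category(frequency, severity):
        """Two-index criticality matrix (each index 1-4 here)."""
        score = frequency * severity
        if score >= 12:
            return "very high"
        if score >= 8:
            return "high"
        if score >= 4:
            return "low"
        return "very low"

    # A severe, hard-to-detect event that is nonetheless very rare:
    print(rpn(occurrence=2, severity=9, detectability=9))  # RPN = 162
    print(safer_category(frequency=1, severity=4))         # 'low'
    ```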

  19. Assessing performance and validating finite element simulations using probabilistic knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolin, Ronald M.; Rodriguez, E. A.

    Two probabilistic approaches for assessing performance are presented. The first approach assesses the probability of failure by simultaneously modeling all likely events. The probability that each event causes failure, along with the event's likelihood of occurrence, contributes to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur, while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.
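
    Latin-hypercube sampling of this kind is a few lines in SciPy (the limit state and distributions below are invented for illustration):

    ```python
    # Sketch: stratified Latin-hypercube samples pushed through a simple
    # load-vs-capacity failure criterion.
    from scipy.stats import qmc, norm

    sampler = qmc.LatinHypercube(d=2, seed=42)
    u = sampler.random(n=10_000)            # stratified samples in [0,1)^2

    # Map to physical variables (hypothetical statistics).
    load = norm(1000.0, 150.0).ppf(u[:, 0])
    capacity = norm(1400.0, 100.0).ppf(u[:, 1])

    failed = load > capacity                # event-level failure check
    print("P(failure) ~", failed.mean())
    ```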

  1. Evaluation of failure criterion for graphite/epoxy fabric laminates

    NASA Technical Reports Server (NTRS)

    Tennyson, R. C.; Wharram, G. E.

    1985-01-01

    The development and application of the tensor polynomial failure criterion for composite laminate analysis is described. Emphasis is given to the fabrication and testing of Narmco Rigidite 5208-WT300, a plain weave fabric of Thornel 300 graphite fibers impregnated with Narmco 5208 resin. The quadratic failure criterion with F12 = 0 provides accurate estimates of failure stresses for the graphite/epoxy investigated. The cubic failure criterion was recast into an operationally easier form, providing design curves that can be applied to laminates fabricated from orthotropic woven fabric prepregs. In the form presented, no interaction strength tests are required, although recourse to the quadratic model and the principal strength parameters is necessary. However, insufficient test data exist at present to generalize this approach for all prepreg constructions, and its use must be restricted to the generic materials and configurations investigated to date.
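
    For reference, the tensor polynomial (Tsai-Wu type) criterion referred to above is conventionally written, in contracted notation, as:

    ```latex
    F_i\,\sigma_i + F_{ij}\,\sigma_i\sigma_j + F_{ijk}\,\sigma_i\sigma_j\sigma_k = 1,
    \qquad i, j, k = 1,\dots,6,
    ```

    where the quadratic criterion truncates the series after the F_{ij} term (with the normal-stress interaction coefficient F_{12} set to zero in the variant used above), and the cubic criterion retains the F_{ijk} terms.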

  2. Tutoring for Success: Empowering Graduate Nurses After Failure on the NCLEX-RN.

    PubMed

    Lutter, Stacy L; Thompson, Cheryl W; Condon, Marian C

    2017-12-01

    Failure on the National Council Licensure Examination for Registered Nurses (NCLEX-RN) is a devastating experience. Most research related to NCLEX-RN is focused on predicting and preventing failure. Despite these efforts, more than 20,000 nursing school graduates experience failure on the NCLEX-RN each year, and there is a paucity of literature regarding remediation after failure. The aim of this article is to describe an individualized tutoring approach centered on establishing a trusting relationship and incorporating two core strategies for remediation: the nugget method, and a six-step strategy for question analysis. This individualized tutoring method has been used by three nursing faculty with a 95% success rate on an NCLEX retake attempt. Further research is needed to identify the elements of this tutoring method that influence success. [J Nurs Educ. 2017;56(12):758-761.]. Copyright 2017, SLACK Incorporated.

  3. Fault tree analysis for system modeling in case of intentional EMI

    NASA Astrophysics Data System (ADS)

    Genender, E.; Mleczko, M.; Döring, O.; Garbe, H.; Potthast, S.

    2011-08-01

    The complexity of modern systems on the one hand, and the rising threat of intentional electromagnetic interference (IEMI) on the other, increase the necessity for systematic risk analysis. Most of the problems cannot be treated deterministically, since slight changes in the configuration (source, position, polarization, ...) can dramatically change the outcome of an event. For that purpose, methods known from probabilistic risk analysis can be applied. One of the most common approaches is fault tree analysis (FTA). The FTA is used to determine the system failure probability and also the main contributors to its failure. In this paper the fault tree analysis is introduced, and a possible application of the method is shown using a small computer network as an example. The constraints of this method are explained and conclusions for further research are drawn.
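
    At its core, evaluating such a fault tree reduces to combining basic-event probabilities through AND/OR gates; a minimal sketch (the tree structure and probabilities are invented, and the basic events are assumed independent):

    ```python
    # Sketch: top-event probability of a tiny fault tree for a computer
    # network that fails if the switch fails, or if both PCs fail.
    def p_and(*ps):
        """AND gate: all independent inputs must fail."""
        out = 1.0
        for p in ps:
            out *= p
        return out

    def p_or(*ps):
        """OR gate: any independent input failing suffices."""
        out = 1.0
        for p in ps:
            out *= (1.0 - p)
        return 1.0 - out

    p_switch, p_pc1, p_pc2 = 1e-3, 5e-2, 5e-2
    p_top = p_or(p_switch, p_and(p_pc1, p_pc2))
    print(f"top-event probability: {p_top:.4e}")  # ~3.5e-3
    ```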

  4. New Approach For Prediction Groundwater Depletion

    NASA Astrophysics Data System (ADS)

    Moustafa, Mahmoud

    2017-01-01

    Current approaches to quantifying groundwater depletion involve water balance and satellite gravity. However, the water balance technique includes uncertain estimation of parameters such as evapotranspiration and runoff, while the satellite method consumes time and effort. The work reported in this paper proposes using failure theory in a novel way to predict groundwater saturated thickness depletion. An important issue in the failure theory proposed is to determine the failure point (depletion case). The proposed technique uses the depth of water, as the net result of recharge/discharge processes in the aquifer, to calculate the remaining saturated thickness resulting from the applied pumping rates in an area, in order to evaluate the groundwater depletion. The two-parameter Weibull function and Bayesian analysis were used to model and analyze data collected from 1962 to 2009. The proposed methodology was tested in a nonrenewable aquifer, with no recharge. Consequently, the continuous decline in water depth has been the main criterion used to estimate the depletion. The value of the proposed approach is to predict the probable effect of the currently applied pumping rates on the saturated thickness, based on the remaining saturated thickness data. The limitation of the suggested approach is that it assumes the applied management practices are constant during the prediction period. The study predicted that after 300 years there would be an 80% probability that the saturated thickness of the aquifer would be depleted. Lifetime or failure theory can give a simple alternative way to predict the remaining saturated thickness depletion without time-consuming processes or sophisticated software.
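
    A hedged sketch of the lifetime-theory idea (the failure-time data are synthetic, not the 1962-2009 record): fit a two-parameter Weibull model and read off the time at which the depletion probability reaches 80%.

    ```python
    # Sketch: Weibull lifetime fit and an 80%-probability depletion time.
    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(7)
    times = weibull_min(c=2.0, scale=250.0).rvs(100, random_state=rng)  # years

    shape, loc, scale = weibull_min.fit(times, floc=0)  # two-parameter fit
    t80 = weibull_min(shape, scale=scale).ppf(0.80)     # 80% quantile
    print(f"shape={shape:.2f}, scale={scale:.1f}, t(80%)={t80:.0f} years")
    ```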

  5. Future Issues and Approaches to Health Monitoring and Failure Prevention for Oil-Free Gas Turbines

    NASA Technical Reports Server (NTRS)

    DellaCorte, Christopher

    2004-01-01

    Recent technology advances in foil air bearings, high temperature solid lubricants, and computer-based modeling have enabled the development of small Oil-Free gas turbines. These turbomachines are currently commercialized as small (<100 kW) microturbine generators, and larger machines are being developed. Based upon these successes and the high potential payoffs offered by Oil-Free systems, NASA, industry, and other government entities anticipate that Oil-Free gas turbine propulsion systems will proliferate in future markets. Since an Oil-Free engine has no oil system, traditional approaches to health monitoring and diagnostics, such as chip detection, oil analysis, and possibly vibration signature analyses (e.g., ball pass frequency), will be unavailable. As such, new approaches will need to be considered. These could include shaft orbit analyses, foil bearing temperature measurements, embedded wear sensors and start-up/coast-down speed analysis. In addition, novel, as yet undeveloped techniques may emerge based upon concurrent developments in MEMS technology. This paper introduces Oil-Free technology, reviews the current state of the art and potential for future turbomachinery applications, and discusses possible approaches to health monitoring, diagnostics and failure prevention.

  6. Reduced Data Dualscale Entropy Analysis of HRV Signals for Improved Congestive Heart Failure Detection

    NASA Astrophysics Data System (ADS)

    Kuntamalla, Srinivas; Lekkala, Ram Gopal Reddy

    2014-10-01

    Heart rate variability (HRV) is an important dynamic variable of the cardiovascular system, which operates on multiple time scales. In this study, Multiscale Entropy (MSE) analysis is applied to HRV signals taken from Physiobank to discriminate Congestive Heart Failure (CHF) patients from healthy young and elderly subjects. The discriminating power of the MSE method decreases as the amount of data is reduced, and the smallest amount of data at which there is a clear discrimination between CHF and normal subjects is found to be 4000 samples. Further, the method fails to discriminate CHF from healthy elderly subjects. In view of this, the Reduced Data Dualscale Entropy Analysis method is proposed, which reduces the data size required (to as few as 500 samples) for clearly discriminating the CHF patients from young and elderly subjects using only two scales. Further, an easy-to-interpret index is derived using this new approach for the diagnosis of CHF. This index shows 100% accuracy and correlates well with the pathophysiology of heart failure.
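
    The two ingredients of such an analysis, coarse-graining and sample entropy, can be sketched as follows (a generic implementation with the common HRV parameter choices m = 2 and r = 0.15 times the standard deviation; the input series here is synthetic, not a Physiobank record):

    ```python
    # Sketch: coarse-grained sample entropy at two scales.
    import numpy as np

    def coarse_grain(x, scale):
        """Average non-overlapping windows of length `scale`."""
        n = len(x) // scale
        return x[:n * scale].reshape(n, scale).mean(axis=1)

    def sample_entropy(x, m=2, r_factor=0.15):
        """SampEn(m, r): -ln(A/B), Chebyshev distance, r = r_factor*std."""
        r = r_factor * np.std(x)
        def count_matches(mm):
            templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            count = 0
            for i in range(len(templ)):
                d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
                count += np.sum(d <= r)
            return count
        b, a = count_matches(m), count_matches(m + 1)
        return -np.log(a / b)

    rng = np.random.default_rng(3)
    rr = rng.standard_normal(1000)       # stand-in for an RR-interval series
    for scale in (1, 2):                 # a "dualscale" analysis
        print(scale, sample_entropy(coarse_grain(rr, scale)))
    ```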

  7. Bounding the Failure Probability Range of Polynomial Systems Subject to P-box Uncertainties

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper proposes a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on the uncertainty. Uncertainty is prescribed by probability boxes, also known as p-boxes, whose distribution functions have free or fixed functional forms. An approach based on the Bernstein expansion of polynomials and optimization is proposed. In particular, we search for the elements of a multi-dimensional p-box that minimize (i.e., the best case) and maximize (i.e., the worst case) the probability of inner and outer bounding sets of the failure domain. This technique yields intervals that bound the range of failure probabilities. The offset between this bounding interval and the actual failure probability range can be made arbitrarily small with additional computational effort.
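
    A toy version of the bounding idea (the polynomial limit state and the parameter box are invented; a full p-box optimization would also range over distribution shapes, not just one parameter):

    ```python
    # Sketch: bound P(failure) by sweeping a free distribution parameter
    # over its box for a polynomial requirement g(x) = 3 - x^2 >= 0.
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize_scalar

    # x ~ Normal(mu, 1) with the mean only known to lie in [-0.5, 0.5].
    def p_fail(mu):
        # P(x^2 > 3) = P(x > sqrt(3)) + P(x < -sqrt(3))
        return norm.sf(np.sqrt(3.0), mu, 1.0) + norm.cdf(-np.sqrt(3.0), mu, 1.0)

    lo = minimize_scalar(p_fail, bounds=(-0.5, 0.5), method="bounded")
    hi = minimize_scalar(lambda m: -p_fail(m), bounds=(-0.5, 0.5),
                         method="bounded")
    print(f"P(failure) in [{lo.fun:.4f}, {-hi.fun:.4f}]")  # best/worst case
    ```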

  8. Probabilistic Analysis of a Composite Crew Module

    NASA Technical Reports Server (NTRS)

    Mason, Brian H.; Krishnamurthy, Thiagarajan

    2011-01-01

    An approach for conducting reliability-based analysis (RBA) of a Composite Crew Module (CCM) is presented. The goal is to identify and quantify the benefits of probabilistic design methods for the CCM and future space vehicles. The coarse finite element model from a previous NASA Engineering and Safety Center (NESC) project is used as the baseline deterministic analysis model to evaluate the performance of the CCM using a strength-based failure index. The first step in the probabilistic analysis process is the determination of the uncertainty distributions for key parameters in the model. Analytical data from water landing simulations are used to develop an uncertainty distribution, but such data were unavailable for other load cases. The uncertainty distributions for the other load scale factors and the strength allowables are generated based on assumed coefficients of variation. Probability of first-ply failure is estimated using three methods: the first order reliability method (FORM), Monte Carlo simulation, and conditional sampling. Results for the three methods were consistent. The reliability is shown to be driven by first ply failure in one region of the CCM at the high altitude abort load set. The final predicted probability of failure is on the order of 10^-11 due to the conservative nature of the factors of safety on the deterministic loads.

  9. Qualification of computerized monitoring systems in a cell therapy facility compliant with the good manufacturing practices.

    PubMed

    Del Mazo-Barbara, Anna; Mirabel, Clémentine; Nieto, Valentín; Reyes, Blanca; García-López, Joan; Oliver-Vila, Irene; Vives, Joaquim

    2016-09-01

    Computerized systems (CS) are essential in the development and manufacture of cell-based medicines and must comply with good manufacturing practice, thus pushing academic developers to implement methods that are typically found within pharmaceutical industry environments. Qualitative and quantitative risk analyses were performed by Ishikawa and Failure Mode and Effects Analysis, respectively. A process for qualification of a CS that keeps track of environmental conditions was designed and executed. The simplicity of the Ishikawa analysis permitted the identification of critical parameters that were subsequently quantified by Failure Mode and Effects Analysis, resulting in a list of tests included in the qualification protocols. The approach presented here contributes to simplifying and streamlining the qualification of CS in compliance with pharmaceutical quality standards.

  10. A micromechanics-based strength prediction methodology for notched metal matrix composites

    NASA Technical Reports Server (NTRS)

    Bigelow, C. A.

    1992-01-01

    An analytical micromechanics-based strength prediction methodology was developed to predict failure of notched metal matrix composites. The stress-strain behavior and notched strength of two metal matrix composites, boron/aluminum (B/Al) and silicon-carbide/titanium (SCS-6/Ti-15-3), were predicted. The prediction methodology combines analytical techniques ranging from a three-dimensional finite element analysis of a notched specimen to a micromechanical model of a single fiber. In the B/Al laminates, a fiber failure criterion based on the axial and shear stress in the fiber accurately predicted laminate failure for a variety of layups and notch-length to specimen-width ratios with both circular holes and sharp notches when matrix plasticity was included in the analysis. For the SCS-6/Ti-15-3 laminates, a fiber failure criterion based on the axial stress in the fiber correlated well with experimental results for static and post-fatigue residual strengths when fiber-matrix debonding and matrix cracking were included in the analysis. The micromechanics-based strength prediction methodology offers a direct approach to strength prediction by modeling behavior and damage on a constituent level, thus explicitly including matrix nonlinearity, fiber-matrix debonding, and matrix cracking.

  11. A micromechanics-based strength prediction methodology for notched metal-matrix composites

    NASA Technical Reports Server (NTRS)

    Bigelow, C. A.

    1993-01-01

    An analytical micromechanics-based strength prediction methodology was developed to predict failure of notched metal matrix composites. The stress-strain behavior and notched strength of two metal matrix composites, boron/aluminum (B/Al) and silicon-carbide/titanium (SCS-6/Ti-15-3), were predicted. The prediction methodology combines analytical techniques ranging from a three-dimensional finite element analysis of a notched specimen to a micromechanical model of a single fiber. In the B/Al laminates, a fiber failure criterion based on the axial and shear stress in the fiber accurately predicted laminate failure for a variety of layups and notch-length to specimen-width ratios with both circular holes and sharp notches when matrix plasticity was included in the analysis. For the SCS-6/Ti-15-3 laminates, a fiber failure criterion based on the axial stress in the fiber correlated well with experimental results for static and postfatigue residual strengths when fiber-matrix debonding and matrix cracking were included in the analysis. The micromechanics-based strength prediction methodology offers a direct approach to strength prediction by modeling behavior and damage on a constituent level, thus explicitly including matrix nonlinearity, fiber-matrix debonding, and matrix cracking.

  12. A Hybrid Approach to Composite Damage and Failure Analysis Combining Synergistic Damage Mechanics and Peridynamics

    DTIC Science & Technology

    2017-06-30

    along the intermetallic component or at the interface between the two components of the composite. The availability of microscale experimental data in... obtained with the PD model; (c) map of strain energy density; (d) the new quasi-damage index is a predictor of failure. As in the case of FRCs, one... which points are most likely to fail, before actual failure happens. The "quasi-damage index", shown in the formula below, is a point-wise measure

  13. Risk, Issues and Lessons Learned: Maximizing Risk Management in the DoD Ground Domain

    DTIC Science & Technology

    2011-10-01

    Carnegie Mellon University, "Risk Management Overview for TACOM". Benefits of Risk Management include: risk management is a proactive approach, preventing... chili (no beans); hot dog sub-assembly... How does the FMEA work? Execute the analysis and discover the potential failures and effects... Benefits of FMEAs: prevent major risks, reduce failures, minimize cost and reduce development time; do it right the first time.

  14. A relation to predict the failure of materials and potential application to volcanic eruptions and landslides.

    PubMed

    Hao, Shengwang; Liu, Chao; Lu, Chunsheng; Elsworth, Derek

    2016-06-16

    A theoretical explanation of a time-to-failure relation is presented, and this relationship is then used to describe the failure of materials. It provides the potential to predict the time to failure (t_f - t) immediately before failure by extrapolating the trajectory as it asymptotes to zero, with no need to fit unknown exponents as previously proposed for critical power-law behaviors. This generalized relation is verified by comparison with approaches to criticality for volcanic eruptions and creep failure. A new relation based on changes with stress is proposed as an alternative expression of Voight's relation, which is widely used to describe the accelerating precursory signals before material failure and is broadly applied to volcanic eruptions, landslides and other phenomena. The new generalized relation reduces to Voight's relation if stress is limited to increase at a constant rate with time. This implies that the time derivatives in Voight's analysis may be a subset of a more general expression connecting stress derivatives, and thus provides a potential method for forecasting these events.
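
    For context, Voight's relation is commonly written in terms of the rate of a precursor observable, and for the frequently used exponent alpha = 2 it yields the linear inverse-rate extrapolation behind the (t_f - t) forecast (a standard statement from the failure-forecast literature, not quoted from this paper):

    ```latex
    \ddot{\Omega} = A\,\dot{\Omega}^{\alpha},
    \qquad \alpha = 2 \;\Rightarrow\;
    \frac{1}{\dot{\Omega}(t)} = \frac{1}{\dot{\Omega}(t_0)} - A\,(t - t_0),
    ```

    so the inverse rate of the observable Omega decreases linearly in time, and the failure time t_f is read off where the extrapolated inverse rate reaches zero.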

  15. Agent autonomy approach to probabilistic physics-of-failure modeling of complex dynamic systems with interacting failure mechanisms

    NASA Astrophysics Data System (ADS)

    Gromek, Katherine Emily

    A novel computational and inference framework for physics-of-failure (PoF) reliability modeling of complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that system-level reliability modeling constitutes inferences from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e., PPoF-based) modeling. This concept originated in artificial intelligence (AI) as a leading intelligent computational inference approach in the modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and autonomy properties of the intelligent agents, and the modeling of interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as an agent's ability to self-activate, deactivate or completely redefine its role in the analysis. This property of agents, together with the ability to model interacting failure mechanisms of the system elements, makes agent autonomy fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.

  16. One versus two venous anastomoses in microsurgical head and neck reconstruction: a cumulative meta-analysis.

    PubMed

    Christianto, S; Lau, A; Li, K Y; Yang, W F; Su, Y X

    2018-05-01

    Venous compromise is still the most common cause of free flap failure. The use of two venous anastomoses has been advocated to reduce venous compromise; however, the effectiveness of this approach remains controversial. A systematic review and cumulative meta-analysis was performed to assess the effect of one versus two venous anastomoses on venous compromise and free flap failure in head and neck microsurgical reconstruction. A total of 27 articles reporting 7389 flaps were included in this study. On comparison of one versus two venous anastomoses, the odds ratio (OR) for flap failure was 1.66 (95% confidence interval 1.11-2.50; P=0.014) and for venous compromise was 1.50 (95% confidence interval 1.10-2.05; P=0.011), indicating a significant increase in the flap failure rate and the venous compromise rate in the single venous anastomosis group. These results show that performing two venous anastomoses has a significant effect in reducing vascular compromise and the free flap failure rate in head and neck reconstruction. Copyright © 2018 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  17. Evaluation of a Linear Cumulative Damage Failure Model for Epoxy Adhesive

    NASA Technical Reports Server (NTRS)

    Richardson, David E.; Batista-Rodriquez, Alicia; Macon, David; Totman, Peter; McCool, Alex (Technical Monitor)

    2001-01-01

    Recently, a significant amount of work has been conducted to provide more complex and accurate material models for use in the evaluation of adhesive bondlines. Some of this has been prompted by recent studies into the effects of residual stresses on the integrity of bondlines. Several techniques have been developed for the analysis of bondline residual stresses. Key to these analyses is the criterion that is used for predicting failure. Residual stress loading of an adhesive bondline can occur over the life of the component; for many bonded systems, this can be several years. It is impractical to directly characterize failure of adhesive bondlines under a constant load for several years. Therefore, alternative approaches for predicting bondline failures are required. In the past, cumulative damage failure models have been developed, ranging from very simple to very complex. This paper documents the generation and evaluation of some of the simplest linear damage-accumulation tensile failure models for an epoxy adhesive. The paper shows how several variations on the failure model were generated, and presents an evaluation of the accuracy of these failure models in predicting creep failure of the adhesive. It shows that a simple failure model can be generated from short-term failure data for accurate predictions of long-term adhesive performance.
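
    A linear damage-accumulation model of the kind described takes the generic Miner-type form (an editorial restatement of the general technique, not the paper's exact model):

    ```latex
    D(t) = \int_{0}^{t} \frac{dt'}{t_f\!\left(\sigma(t')\right)},
    \qquad D = 1 \;\Rightarrow\; \text{failure},
    ```

    where t_f(sigma) is the measured time to failure under constant stress sigma. Damage accrues linearly at each stress level, so short-term constant-load tests calibrate t_f(sigma), and the long-term (e.g., residual-stress) prediction is the time at which the accumulated damage D reaches unity.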

  18. [Application of root cause analysis in healthcare].

    PubMed

    Hsu, Tsung-Fu

    2007-12-01

    The main purpose of this study was to explore various aspects of root cause analysis (RCA), including its definition, underlying concept, main objective, implementation procedures, most common analysis methodology (fault tree analysis, FTA), and its advantages and methodologic limitations in regard to healthcare. Several adverse events that occurred at a certain hospital were also analyzed by the author using FTA as part of this study. RCA is a process employed to identify basic and contributing causal factors underlying performance variations associated with adverse events. The underlying concept of RCA offers a systemic approach to improving patient safety that does not assign blame or liability to individuals. The four-step process involved in conducting an RCA includes: RCA preparation, proximate cause identification, root cause identification, and recommendation generation and implementation. FTA is a logical, structured process that can help identify potential causes of system failure before actual failures occur. Some advantages and significant methodologic limitations of RCA were discussed. Finally, we emphasized that errors stem principally from faults attributable to system design, practice guidelines, work conditions, and other human factors, which lead health professionals to commit negligence or mistakes in healthcare. We must explore the root causes of medical errors to eliminate potential system failure factors. Also, a systemic approach is needed to resolve medical errors and move beyond a current culture centered on assigning fault to individuals. By constructing a true environment of patient-centered safety in healthcare, we can help encourage clients to accept state-of-the-art healthcare services.

  19. Independent Orbiter Assessment (IOA): Analysis of the Electrical Power Distribution and Control Subsystem, Volume 2

    NASA Technical Reports Server (NTRS)

    Schmeckpeper, K. R.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C) hardware. The EPD and C hardware performs the functions of distributing, sensing, and controlling 28 volt DC power and of inverting, distributing, sensing, and controlling 117 volt 400 Hz AC power to all Orbiter subsystems from the three fuel cells in the Electrical Power Generation (EPG) subsystem. Volume 2 continues the presentation of IOA analysis worksheets and contains the potential critical items list.

  20. Ares-I-X Vehicle Preliminary Range Safety Malfunction Turn Analysis

    NASA Technical Reports Server (NTRS)

    Beaty, James R.; Starr, Brett R.; Gowan, John W., Jr.

    2008-01-01

    Ares-I-X is the designation given to the flight test version of the Ares-I rocket (also known as the Crew Launch Vehicle - CLV) being developed by NASA. As part of the preliminary flight plan approval process for the test vehicle, a range safety malfunction turn analysis was performed to support the launch area risk assessment and vehicle destruct criteria development processes. Several vehicle failure scenarios were identified which could cause the vehicle trajectory to deviate from its normal flight path, and the effects of these failures were evaluated with an Ares-I-X 6 degrees-of-freedom (6-DOF) digital simulation, using the Program to Optimize Simulated Trajectories Version 2 (POST2) simulation framework. The Ares-I-X simulation analysis provides output files containing vehicle state information, which are used by other risk assessment and vehicle debris trajectory simulation tools to determine the risk to personnel and facilities in the vicinity of the launch area at Kennedy Space Center (KSC), and to develop the vehicle destruct criteria used by the flight test range safety officer. The simulation analysis approach used for this study is described, including descriptions of the failure modes which were considered and the underlying assumptions and ground rules of the study, and preliminary results are presented, determined by analysis of the trajectory deviation of the failure cases, compared with the expected vehicle trajectory.

  1. A Review of Diagnostic Techniques for ISHM Applications

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, Ann; Biswas, Gautam; Aaseng, Gordon; Narasimhan, Sriam; Pattipati, Krishna

    2005-01-01

    System diagnosis is an integral part of any Integrated System Health Management application. Diagnostic applications make use of system information from the design phase, such as safety and mission assurance analysis, failure modes and effects analysis, hazards analysis, functional models, fault propagation models, and testability analysis. In modern process control and equipment monitoring systems, topological and analytic models of the nominal system, derived from design documents, are also employed for fault isolation and identification. Depending on the complexity of the monitored signals from the physical system, diagnostic applications may involve straightforward trending and feature extraction techniques to retrieve the parameters of importance from the sensor streams. They may also involve very complex analysis routines, such as signal processing, learning or classification methods, to derive the parameters of importance to diagnosis. The process that is used to diagnose anomalous conditions from monitored system signals varies widely across the different approaches to system diagnosis. Rule-based expert systems, case-based reasoning systems, model-based reasoning systems, learning systems, and probabilistic reasoning systems are examples of the many diverse approaches to diagnostic reasoning. Many engineering disciplines have specific approaches to modeling, monitoring and diagnosing anomalous conditions. Therefore, there is no "one-size-fits-all" approach to building diagnostic and health monitoring capabilities for a system. For instance, the conventional approaches to diagnosing failures in rotorcraft applications are very different from those used in communications systems. Further, online and offline automated diagnostic applications must be integrated into an operations framework with flight crews, flight controllers and maintenance teams. While the emphasis of this paper is automation of health management functions, striking the correct balance between automated and human-performed tasks is a vital concern.

  2. A New, More Powerful Approach to Multitrait-Multimethod Analyses: An Application of Second-Order Confirmatory Factor Analysis.

    ERIC Educational Resources Information Center

    Marsh, Herbert W.; Hocevar, Dennis

    The advantages of applying confirmatory factor analysis (CFA) to multitrait-multimethod (MTMM) data are widely recognized. However, because CFA as traditionally applied to MTMM data incorporates single indicators of each scale (i.e., each trait/method combination), important weaknesses are the failure to: (1) correct appropriately for measurement…

  3. Critical Factors Analysis for Offshore Software Development Success by Structural Equation Modeling

    NASA Astrophysics Data System (ADS)

    Wada, Yoshihisa; Tsuji, Hiroshi

    To analyze the success/failure factors in offshore software development services by structural equation modeling, this paper proposes to combine two approaches: domain-knowledge-based heuristic analysis and factor-analysis-based rational analysis. The former generates and verifies hypotheses to find factors and causalities; the latter verifies factors introduced by theory to build the model without heuristics. Applying the combined approaches to questionnaire responses from skilled project managers, this paper found that vendor properties have a stronger causal effect on success than software properties or project properties.

  4. Effective properties of dispersed phase reinforced composite materials with perfect and imperfect interfaces

    NASA Astrophysics Data System (ADS)

    Han, Ru

    This thesis focuses on the analysis of dispersed phase reinforced composite materials with perfect as well as imperfect interfaces using the Boundary Element Method (BEM). Two problems of interest are considered, namely, to determine the limitations in the use of effective properties and the analysis of failure progression at the inclusion-matrix interface. The effective moduli (effective Young's modulus, effective Poisson's ratio, effective shear modulus, and effective bulk modulus) of composite materials can be determined at the mesoscopic level using three-dimensional parallel BEM simulations. By comparing the mesoscopic BEM results and the macroscopic results based on effective properties, limitations in the effective property approach can be determined. Decohesion is an important failure mode associated with fiber-reinforced composite materials. Analysis of failure progression at the fiber-matrix interface in fiber-reinforced composite materials is considered using a softening decohesion model consistent with thermodynamic concepts. In this model, the initiation of failure is given directly by a failure criterion. Damage is interpreted by the development of a discontinuity of displacement. The formulation describing the potential development of damage is governed by a discrete decohesive constitutive equation. Numerical simulations are performed using the direct boundary element method. Incremental decohesion simulations illustrate the progressive evolution of debonding zones and the propagation of cracks along the interfaces. The effect of decohesion on the macroscopic response of composite materials is also investigated.
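
    As a rough, hedged illustration of the softening decohesion idea (a generic bilinear cohesive law, not the thesis's thermodynamic formulation), the sketch below evaluates interface traction as a function of the displacement jump: linear elastic up to damage initiation, then linear softening to zero at complete debonding. All parameter names and values are assumptions.

```python
def bilinear_traction(delta, delta0, deltaf, t_max):
    """Bilinear cohesive law: traction rises linearly to the interface
    strength t_max at the initiation opening delta0, then softens
    linearly to zero at the complete-decohesion opening deltaf."""
    if delta <= 0.0:
        return 0.0
    if delta <= delta0:
        return t_max * delta / delta0                        # undamaged
    if delta < deltaf:
        return t_max * (deltaf - delta) / (deltaf - delta0)  # softening
    return 0.0                                               # debonded

# Interface with 50 MPa strength: initiation at 1 um, failure at 10 um.
for d in (0.5e-6, 1.0e-6, 5.0e-6, 12.0e-6):
    t = bilinear_traction(d, 1e-6, 10e-6, 50e6)
    print(f"opening {d:.1e} m -> traction {t:.3e} Pa")
```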

  5. Clinical and economic analysis of rescue intracytoplasmic sperm injection cycles.

    PubMed

    Shalom-paz, Einat; Alshalati, Jana; Shehata, Fady; Jimenez, Luis; Son, Weon-Young; Holzer, Hananel; Tan, Seang Lin; Almog, Benny

    2011-12-01

    To identify clinical and embryological factors that may predict success in rescue intracytoplasmic sperm injection (ICSI) cycles (after total fertilization failure has occurred) and to evaluate the cost effectiveness of the rescue ICSI strategy. Additionally, follow-up of 20 rescue ICSI pregnancies is reported. Retrospective analysis of total fertilization failure cycles. University-based tertiary medical center. In total, 92 patients who had undergone conventional in-vitro fertilization (IVF) cycles with total fertilization failure were included. The patients were divided into two subgroups: those who conceived through rescue ICSI and those who did not. The pregnant members of the rescue ICSI subgroup were found to be significantly younger (32.9 ± 4.2 vs. 36.3 ± 4.5, respectively, p = 0.0035) and to have better-quality embryos than those who did not conceive (cumulative embryo score: 38.3 ± 20.4 vs. 29.3 ± 14.7, p = 0.025). Cost effectiveness analysis showed a 25% reduction in the cost per live birth when rescue ICSI is compared with the cycle-cancellation approach. Follow-up of the pregnancies did not show adverse perinatal outcomes. Rescue ICSI is an option for salvaging IVF cycles complicated by total fertilization failure. Success in rescue ICSI was found to be associated with younger age and higher embryo quality. Furthermore, the cost effectiveness of rescue ICSI in cases of total fertilization failure was found to be worthwhile.

  6. Local Failure in Resected N1 Lung Cancer: Implications for Adjuvant Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Higgins, Kristin A., E-mail: kristin.higgins@duke.edu; Chino, Junzo P.; Berry, Mark

    2012-06-01

    Purpose: To evaluate actuarial rates of local failure in patients with pathologic N1 non-small-cell lung cancer and to identify clinical and pathologic factors associated with an increased risk of local failure after resection. Methods and Materials: All patients who underwent surgery for non-small-cell lung cancer with pathologically confirmed N1 disease at Duke University Medical Center from 1995-2008 were identified. Patients receiving any preoperative therapy or postoperative radiotherapy or with positive surgical margins were excluded. Local failure was defined as disease recurrence within the ipsilateral hilum, mediastinum, or bronchial stump/staple line. Actuarial rates of local failure were calculated with the Kaplan-Meier method. A Cox multivariate analysis was used to identify factors independently associated with a higher risk of local recurrence. Results: Among 1,559 patients who underwent surgery during the time interval, 198 met the inclusion criteria. Of these patients, 50 (25%) received adjuvant chemotherapy. Actuarial (5-year) rates of local failure, distant failure, and overall survival were 40%, 55%, and 33%, respectively. On multivariate analysis, factors associated with an increased risk of local failure included a video-assisted thoracoscopic surgery approach (hazard ratio [HR], 2.5; p = 0.01), visceral pleural invasion (HR, 2.1; p = 0.04), and increasing number of positive N1 lymph nodes (HR, 1.3 per involved lymph node; p = 0.02). Chemotherapy was associated with a trend toward decreased risk of local failure that was not statistically significant (HR, 0.61; p = 0.2). Conclusions: Actuarial rates of local failure in pN1 disease are high. Further investigation of conformal postoperative radiotherapy may be warranted.

  7. A fuzzy set approach for reliability calculation of valve controlling electric actuators

    NASA Astrophysics Data System (ADS)

    Karmachev, D. P.; Yefremov, A. A.; Luneva, E. E.

    2017-02-01

    Oil and gas equipment, and electric actuators in particular, frequently perform in various operational modes and under dynamic environmental conditions. These factors affect equipment reliability measures in a vague, uncertain way. To eliminate the ambiguity, reliability model parameters can be defined as fuzzy numbers. We suggest a technique for constructing fundamental fuzzy-valued performance reliability measures based on an analysis of electric actuator failure data expressed in terms of the amount of work completed before failure, rather than failure time. This paper also provides a computation example of fuzzy-valued reliability and hazard rate functions, assuming a Kumaraswamy complementary Weibull geometric distribution as the lifetime (reliability) model for electric actuators.
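
    A hedged sketch of the fuzzy-valued reliability idea follows. For simplicity it uses a plain two-parameter Weibull reliability function rather than the Kumaraswamy complementary Weibull geometric model of the paper, and treats one alpha-cut of the fuzzy parameters as interval bounds, scanning the parameter box for the lower and upper reliability envelope; all numbers are illustrative.

```python
import numpy as np

def weibull_reliability(w, beta, eta):
    """Weibull reliability as a function of completed work w."""
    return np.exp(-(w / eta) ** beta)

def fuzzy_reliability_band(w, beta_cut, eta_cut, n=21):
    """Lower/upper reliability at work w when the shape and scale are
    only known as intervals (one alpha-cut of fuzzy parameters); the
    band is found by scanning a grid over the parameter box."""
    betas = np.linspace(*beta_cut, n)
    etas = np.linspace(*eta_cut, n)
    vals = weibull_reliability(w, betas[:, None], etas[None, :])
    return vals.min(), vals.max()

# Amount-of-work "lifetime" with fuzzy shape [1.4, 1.8], scale [900, 1100].
for w in (200.0, 500.0, 1000.0):
    lo, hi = fuzzy_reliability_band(w, (1.4, 1.8), (900.0, 1100.0))
    print(f"work={w:6.0f}  R in [{lo:.3f}, {hi:.3f}]")
```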

  8. Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX) and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows that pilot-induced oscillations on a straight-in approach were less prevalent with the nonlinear algorithm than with the optimal algorithm. The augmented turbulence cues increased workload on an offset approach but were deemed more realistic by the pilots compared with the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthias C. M. Troffaes; Gero Walter; Dana Kelly

    In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors, but which was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the common-cause failure model.
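
    The imprecise Dirichlet update described here has a compact closed form: if the prior expectation of each alpha-factor ranges over [l_k, u_k] and s is the learning parameter, the posterior expectation after observing counts n_k (N in total) ranges over [(n_k + s*l_k)/(N + s), (n_k + s*u_k)/(N + s)]. The sketch below implements that standard update; the counts and bounds are invented for illustration.

```python
def idm_posterior_bounds(counts, lower, upper, s):
    """Posterior lower/upper expectations of alpha-factors under an
    imprecise Dirichlet model: prior expectations range over
    [lower[k], upper[k]], s is the learning parameter, and counts[k]
    is the number of events failing exactly k+1 components."""
    N = sum(counts)
    post_lo = [(n + s * l) / (N + s) for n, l in zip(counts, lower)]
    post_hi = [(n + s * u) / (N + s) for n, u in zip(counts, upper)]
    return post_lo, post_hi

# Illustrative 3-component system: 30 single, 4 double, 1 triple failure.
lo, hi = idm_posterior_bounds([30, 4, 1],
                              lower=[0.80, 0.05, 0.01],
                              upper=[0.95, 0.15, 0.05], s=2.0)
for k, (a, b) in enumerate(zip(lo, hi), start=1):
    print(f"alpha_{k}: [{a:.3f}, {b:.3f}]")
```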

  10. A Diagnostic Approach for Electro-Mechanical Actuators in Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai Frank; Stoelting, Paul; Curran, Simon

    2009-01-01

    Electro-mechanical actuators (EMA) are finding increasing use in aerospace applications, especially with the trend towards all-electric aircraft and spacecraft designs. However, electro-mechanical actuators still lack the knowledge base accumulated for other fielded actuator types, particularly with regard to fault detection and characterization. This paper presents a thorough analysis of some of the critical failure modes documented for EMAs and describes experiments conducted on detecting and isolating a subset of them. The list of failures has been prepared through an extensive Failure Modes, Effects and Criticality Analysis (FMECA) reference, literature review, and accessible industry experience. Methods for data acquisition and validation of algorithms on EMA test stands are described. A variety of condition indicators were developed that enabled detection, identification, and isolation among the various fault modes. A diagnostic algorithm based on an artificial neural network is shown to operate successfully using these condition indicators, and furthermore, robustness of these diagnostic routines to sensor faults is demonstrated by showing their ability to distinguish between sensor faults and component failures. The paper concludes with a roadmap leading from this effort towards developing successful prognostic algorithms for electromechanical actuators.

  11. Dam break analysis and flood inundation map of Krisak dam for emergency action plan

    NASA Astrophysics Data System (ADS)

    Juliastuti, Setyandito, Oki

    2017-11-01

    Indonesian regulation, which follows the ICOLD (International Commission on Large Dams) guidelines, requires an Emergency Action Plan (EAP) because dams carry a potential for failure. The EAP guidelines include evacuation management, in which the inundation map is determined from flood modeling. The purpose of the EAP is to minimize the risk of loss of life and property downstream caused by dam failure. This paper describes the development of flood modeling and an inundation map for Krisak dam using numerical methods, through a dam break analysis (DBA) with the Zhong Xing HY-21 hydraulic model. The dam failure simulations consider overtopping and piping. Overtopping is simulated with quadrangular, triangular, and trapezoidal breach shapes; piping is simulated with orifice-type cracks. Based on the DBA results, the hazard classification of Krisak dam is very high. The nearest village affected by dam failure is Singodutan (1.45 kilometers from the dam), with an inundation depth of 1.85 meters. These results can be used by stakeholders, such as emergency responders and the community at risk, in formulating evacuation procedures.

  12. Failure Analysis in Platelet Molded Composite Systems

    NASA Astrophysics Data System (ADS)

    Kravchenko, Sergii G.

    Long-fiber discontinuous composite systems in the form of chopped prepreg tapes provide an advanced, structural-grade molding compound allowing for fabrication of complex three-dimensional components. Understanding the process-structure-property relationship is essential for the application of prepreg platelet molded components, especially because of their possibly irregular, disordered, heterogeneous morphology. Herein, a structure-property relationship was analyzed in composite systems of many platelets. Regular and irregular morphologies were considered. Platelet-based systems with more ordered morphology possess superior mechanical performance. While regular morphologies allow for a careful inspection of failure mechanisms derived from the morphological characteristics, irregular morphologies are representative of the composite architectures resulting from uncontrolled deposition and molding with chopped prepregs. Progressive failure analysis (PFA) was used to study the damaged deformation up to ultimate failure in a platelet-based composite system. Computational damage mechanics approaches were utilized to conduct the PFA. The developed computational models provided understanding of how the composite structure details, meaning the platelet geometry and system morphology (geometrical arrangement and orientation distribution of platelets), define the effective mechanical properties of a platelet-molded composite system: its stiffness, strength, and variability in properties.

  13. The application of encapsulation material stability data to photovoltaic module life assessment

    NASA Technical Reports Server (NTRS)

    Coulbert, C. D.

    1983-01-01

    For any piece of hardware that degrades when subject to environmental and application stresses, the route or sequence that describes the degradation process may be summarized in terms of six key words: LOADS, RESPONSE, CHANGE, DAMAGE, FAILURE, and PENALTY. Applied to photovoltaic modules, these six factors form the core outline of an expanded failure analysis matrix for unifying and integrating relevant material degradation data and analyses. An important feature of this approach is the deliberate differentiation between factors such as CHANGE, DAMAGE, and FAILURE. The application of this outline to materials degradation research facilitates the distinction between quantifying material property changes and quantifying module damage or power loss with their economic consequences. The approach recommended for relating material stability data to photovoltaic module life is to use the degree of DAMAGE to (1) optical coupling, (2) encapsulant package integrity, (3) PV circuit integrity or (4) electrical isolation as the quantitative criterion for assessing module potential service life rather than simply using module power loss.

  14. Preventing medical errors by designing benign failures.

    PubMed

    Grout, John R

    2003-07-01

    One way to successfully reduce medical errors is to design health care systems that are more resistant to the tendencies of human beings to err. One interdisciplinary approach entails creating design changes, mitigating human errors, and making human error irrelevant to outcomes. This approach is intended to facilitate the creation of benign failures, which have been called mistake-proofing devices and forcing functions elsewhere. USING FAULT TREES TO DESIGN FORCING FUNCTIONS: A fault tree is a graphical tool used to understand the relationships that either directly cause or contribute to the cause of a particular failure. A careful analysis of a fault tree enables the analyst to anticipate how the process will behave after the change. EXAMPLE OF AN APPLICATION: A scenario in which a patient is scalded while bathing can serve as an example of how multiple fault trees can be used to design forcing functions. The first fault tree shows the undesirable event--patient scalded while bathing. The second fault tree has a benign event--no water. Adding a scald valve changes the outcome from the undesirable event ("patient scalded while bathing") to the benign event ("no water"). Analysis of fault trees does not ensure or guarantee that changes necessary to eliminate error actually occur. Most mistake-proofing is used to prevent simple errors and to create well-defended processes, but complex errors can also result. The utilization of mistake-proofing or forcing functions can be thought of as changing the logic of a process. Errors that formerly caused undesirable failures can be converted into the causes of benign failures. The use of fault trees can provide a variety of insights into the design of forcing functions that will improve patient safety.
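
    The scald-valve example lends itself to a tiny quantitative sketch. Assuming independent basic events with invented probabilities, the fragment below evaluates the top-event probability of the "patient scalded" fault tree before and after the forcing function is added; with the valve, the hot-water branch instead terminates in the benign "no water" event, so the scald path also requires a valve failure.

```python
def and_gate(*p):
    """Probability that all independent input events occur."""
    prob = 1.0
    for x in p:
        prob *= x
    return prob

# Without the scald valve: scald = water too hot AND patient exposed.
p_water_too_hot, p_patient_exposed = 1e-3, 0.9
print("P(scald)             =", and_gate(p_water_too_hot, p_patient_exposed))

# With the valve, overheated water yields the benign "no water" outcome
# unless the valve itself fails, adding a third AND input.
p_valve_fails = 1e-2
print("P(scald, with valve) =",
      and_gate(p_water_too_hot, p_valve_fails, p_patient_exposed))
```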

  15. Semiparametric regression analysis of failure time data with dependent interval censoring.

    PubMed

    Chen, Chyong-Mei; Shen, Pao-Sheng

    2017-09-20

    Interval-censored failure-time data arise when subjects are examined or observed periodically, such that the failure time of interest is not observed exactly but is only known to be bracketed between two adjacent observation times. The commonly used approaches assume that the examination times and the failure time are independent or conditionally independent given covariates. In many practical applications, patients who are already in poor health or have a weak immune system before treatment usually tend to visit physicians more often after treatment than those with better health or immune systems. In this situation, the visiting rate is positively correlated with the risk of failure due to the health status, which results in dependent interval-censored data. While some measurable factors affecting health status, such as age, gender, and physical symptoms, can be included in the covariates, some health-related latent variables cannot be observed or measured. To deal with dependent interval censoring involving an unobserved latent variable, we characterize the visiting/examination process as a recurrent event process and propose a joint frailty model to account for the association between the failure time and the visiting process. A shared gamma frailty is incorporated into the Cox model and the proportional intensity model for the failure time and visiting process, respectively, in a multiplicative way. We propose a semiparametric maximum likelihood approach for estimating model parameters and show the asymptotic properties, including consistency and weak convergence. Extensive simulation studies are conducted, and a data set of bladder cancer is analyzed for illustrative purposes. Copyright © 2017 John Wiley & Sons, Ltd.
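
    A hedged simulation sketch makes the dependence structure concrete: a shared gamma frailty multiplies both the failure hazard and the visit intensity, so frailer subjects are examined more often and fail sooner, and the failure time is only known to lie between adjacent visits. The distributions and rates below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_subject(base_hazard=0.05, visit_rate=0.2, tau=24.0):
    """One subject with dependent interval censoring: a shared Gamma
    frailty z scales both the exponential failure hazard and the
    Poisson visit intensity. Returns the interval (L, R] bracketing
    the failure time; R is inf if failure falls after the last visit
    within the follow-up horizon tau."""
    z = rng.gamma(shape=2.0, scale=0.5)              # E[z] = 1
    t_fail = rng.exponential(1.0 / (z * base_hazard))
    visits, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / (z * visit_rate))
        if t > tau:
            break
        visits.append(t)
    left = max([0.0] + [v for v in visits if v < t_fail])
    right = min([v for v in visits if v >= t_fail], default=np.inf)
    return left, right

print([simulate_subject() for _ in range(3)])
```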

  16. How Analysis Informs Regulation: Success and Failure of ...

    EPA Pesticide Factsheets

    How Analysis Informs Regulation: Success and Failure of Evolving Approaches to Polyfluoroalkyl Acid Contamination. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of the EPA mission to protect human health and the environment. The HEASD research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of the EPA strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between, and characterize processes that link, source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.

  17. Design for testability and diagnosis at the system-level

    NASA Technical Reports Server (NTRS)

    Simpson, William R.; Sheppard, John W.

    1993-01-01

    The growing complexity of full-scale systems has surpassed the capabilities of most simulation software to provide detailed models or gate-level failure analyses. The process of system-level diagnosis approaches the fault-isolation problem in a manner that differs significantly from the traditional and exhaustive failure mode search. System-level diagnosis is based on a functional representation of the system. For example, one can exercise one portion of a radar algorithm (the Fast Fourier Transform (FFT) function) by injecting several standard input patterns and comparing the results to standardized output results. An anomalous output would point to one of several items (including the FFT circuit) without specifying the gate or failure mode. For system-level repair, identifying an anomalous chip is sufficient. We describe here an information theoretic and dependency modeling approach that discards much of the detailed physical knowledge about the system and analyzes its information flow and functional interrelationships. The approach relies on group and flow associations and, as such, is hierarchical. Its hierarchical nature allows the approach to be applicable to any level of complexity and to any repair level. This approach has been incorporated in a product called STAMP (System Testability and Maintenance Program) which was developed and refined through more than 10 years of field-level applications to complex system diagnosis. The results have been outstanding, even spectacular in some cases. In this paper we describe system-level testability, system-level diagnoses, and the STAMP analysis approach, as well as a few STAMP applications.
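
    The dependency-modeling idea can be sketched compactly (this is a toy in the spirit of the paper's FFT example, not the STAMP product): each test exercises a known set of components, failing tests implicate their components, passing tests exonerate theirs, and under a single-fault assumption the suspects are the intersection of the implicated sets minus the exonerated set.

```python
def isolate(dependency, outcomes):
    """dependency: dict test -> set of components the test exercises.
    outcomes: dict test -> 'pass' or 'fail'.
    Returns the suspect components under a single-fault assumption."""
    suspects, exonerated = None, set()
    for test, comps in dependency.items():
        if outcomes[test] == 'fail':
            suspects = set(comps) if suspects is None else suspects & comps
        else:
            exonerated |= comps
    return (suspects or set()) - exonerated

dependency = {
    't_fft': {'adc', 'fft', 'mem'},   # end-to-end FFT pattern test
    't_mem': {'mem'},
    't_adc': {'adc'},
}
outcomes = {'t_fft': 'fail', 't_mem': 'pass', 't_adc': 'pass'}
print(isolate(dependency, outcomes))   # -> {'fft'}
```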

  18. Joint scale-change models for recurrent events and failure time.

    PubMed

    Xu, Gongjun; Chiou, Sy Han; Huang, Chiung-Yu; Wang, Mei-Cheng; Yan, Jun

    2017-01-01

    Recurrent event data arise frequently in various fields such as biomedical sciences, public health, engineering, and social sciences. In many instances, the observation of the recurrent event process can be stopped by the occurrence of a correlated failure event, such as treatment failure and death. In this article, we propose a joint scale-change model for the recurrent event process and the failure time, where a shared frailty variable is used to model the association between the two types of outcomes. In contrast to the popular Cox-type joint modeling approaches, the regression parameters in the proposed joint scale-change model have marginal interpretations. The proposed approach is robust in the sense that no parametric assumption is imposed on the distribution of the unobserved frailty and that we do not need the strong Poisson-type assumption for the recurrent event process. We establish consistency and asymptotic normality of the proposed semiparametric estimators under suitable regularity conditions. To estimate the corresponding variances of the estimators, we develop a computationally efficient resampling-based procedure. Simulation studies and an analysis of hospitalization data from the Danish Psychiatric Central Register illustrate the performance of the proposed method.

  19. Impact of dam failure-induced flood on road network using combined remote sensing and geospatial approach

    NASA Astrophysics Data System (ADS)

    Foumelis, Michael

    2017-01-01

    The applicability of the normalized difference water index (NDWI) to the delineation of dam failure-induced floods is demonstrated for the case of the Sparmos dam (Larissa, Central Greece). The approach followed was based on the differentiation of NDWI maps to accurately define the extent of the inundated area over different time spans using multimission Earth observation optical data. Besides using Landsat data, for which the index was initially designed, higher spatial resolution data from Sentinel-2 mission were also successfully exploited. A geospatial analysis approach was then introduced to rapidly identify potentially affected segments of the road network. This allowed for further correlation to actual damages in the following damage assessment and remediation activities. The proposed combination of geographic information systems and remote sensing techniques can be easily implemented by local authorities and civil protection agencies for mapping and monitoring flood events.
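
    As a hedged sketch of the differencing idea: NDWI in the McFeeters form is (green - NIR)/(green + NIR), so computing it for pre- and post-event scenes and thresholding the increase yields a mask of newly inundated pixels. The threshold and the toy arrays below are assumptions, not values from the study.

```python
import numpy as np

def ndwi(green, nir, eps=1e-9):
    """McFeeters NDWI; positive values indicate open water."""
    return (green - nir) / (green + nir + eps)

def flood_mask(green_pre, nir_pre, green_post, nir_post, thresh=0.3):
    """Flag pixels whose NDWI increased by more than thresh between
    the pre- and post-event acquisitions as newly inundated."""
    delta = ndwi(green_post, nir_post) - ndwi(green_pre, nir_pre)
    return delta > thresh

# Tiny synthetic 2x2 scene: the upper-right pixel becomes water.
g0 = np.array([[0.10, 0.12], [0.11, 0.10]])
n0 = np.array([[0.30, 0.28], [0.29, 0.31]])
g1 = np.array([[0.10, 0.25], [0.11, 0.10]])
n1 = np.array([[0.30, 0.05], [0.29, 0.31]])
print(flood_mask(g0, n0, g1, n1))
```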

  20. Predicting Failure Under Laboratory Conditions: Learning the Physics of Slow Frictional Slip and Dynamic Failure

    NASA Astrophysics Data System (ADS)

    Rouet-Leduc, B.; Hulbert, C.; Riviere, J.; Lubbers, N.; Barros, K.; Marone, C.; Johnson, P. A.

    2016-12-01

    Forecasting failure is a primary goal in diverse domains that include earthquake physics, materials science, nondestructive evaluation of materials and other engineering applications. Due to the highly complex physics of material failure and limitations on gathering data in the failure nucleation zone, this goal has often appeared out of reach; however, recent advances in instrumentation sensitivity, instrument density and data analysis show promise toward forecasting failure times. Here, we show that we can predict frictional failure times of both slow and fast stick slip failure events in the laboratory. This advance is made possible by applying a machine learning approach known as Random Forests (RF) [1] to the continuous acoustic emission (AE) time series recorded by detectors located on the fault blocks. The RF is trained using a large number of statistical features derived from the AE time series signal. The model is then applied to data not previously analyzed. Remarkably, we find that the RF method predicts upcoming failure time far in advance of a stick slip event, based only on a short time window of data. Further, the algorithm accurately predicts the time of the beginning and end of the next slip event. The predicted time improves as failure is approached, as other data features add to prediction. Our results show robust predictions of slow and dynamic failure based on acoustic emissions from the fault zone throughout the laboratory seismic cycle. The predictions are based on previously unidentified tremor-like acoustic signals that occur during stress build up and the onset of macroscopic frictional weakening. We suggest that the tremor-like signals carry information about fault zone processes and allow precise predictions of failure at any time in the slow slip or stick slip cycle [2]. If the laboratory experiments represent Earth frictional conditions, it could well be that signals are being missed that contain highly useful predictive information. [1] Breiman, L. Random forests. Machine Learning 45, 5-32 (2001). [2] Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros and P. A. Johnson, Learning the physics of failure, in review (2016).
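
    A hedged sketch of the workflow (with synthetic data standing in for the laboratory AE record): window the continuous signal, compute a few statistical features per window, and fit a Random Forest regressor mapping features to the time remaining before the next slip. The feature choices and the variance-ramp toy signal are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Toy stand-in for an AE record: noise variance grows through each
# stick-slip cycle and resets at the slip event.
cycle_len, n_cycles, win = 5000, 8, 250
ramp = 0.1 + 0.9 * np.arange(cycle_len) / cycle_len
signal = np.concatenate([rng.normal(0.0, ramp) for _ in range(n_cycles)])
ttf = np.concatenate([np.arange(cycle_len, 0, -1)] * n_cycles)

# One feature vector per non-overlapping window; the label is the time
# to the next slip at the window's end.
n_win = len(signal) // win
windows = signal[:n_win * win].reshape(n_win, win)
X = np.array([[w.mean(), w.std(), np.abs(w).max(),
               np.percentile(np.abs(w), 90)] for w in windows])
y = ttf[win - 1::win][:n_win]

split = int(0.75 * n_win)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])
print("R^2 on held-out windows:", model.score(X[split:], y[split:]))
```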

  1. Innovative approach to improving the care of acute decompensated heart failure.

    PubMed

    Merhaut, Shawn; Trupp, Robin

    2011-06-01

    The care of patients presenting to hospitals with acute decompensated heart failure remains a challenging and multifaceted dilemma across the continuum of care. The combination of improved survival rates and the rising incidence of heart failure has created both a clinical and an economic burden for hospitals of epidemic proportions. With limited clinical resources, hospitals are expected to provide efficient, comprehensive, and quality care to a population laden with multiple comorbidities and social constraints. Further, this care must be provided in the setting of a volatile economic climate heavily affected by prolonged lengths of stay, high readmission rates, and changing healthcare policy. Although problems continue to mount, solutions remain scarce. In an effort to help hospitals identify gaps in care, control costs, streamline processes, and ultimately improve outcomes for these patients, the Society of Chest Pain Centers launched Heart Failure Accreditation in July 2009. Rooted in process improvement science, the Society's approach includes utilization of a tiered Accreditation tool to identify best practices, facilitate an internal gap analysis, and generate opportunities for improvement. In contrast to other organizations that require compliance with predetermined specifications, the Society's Heart Failure Accreditation focuses on the overall process, including the continuum of care from emergency medical services, emergency department care, inpatient management, and transition from hospital to home, to community outreach. As partners in the process, the Society strives to build relationships with facilities and share best practices, with the ultimate goal of improving outcomes for heart failure patients.

  2. ANN based Performance Evaluation of BDI for Condition Monitoring of Induction Motor Bearings

    NASA Astrophysics Data System (ADS)

    Patel, Raj Kumar; Giri, V. K.

    2017-06-01

    Bearings are among the most critical parts of rotating machines, and most failures arise from defective bearings. Bearing failure leads to failure of the machine and unplanned productivity loss. Therefore, bearing fault detection and prognosis is an integral part of preventive maintenance procedures. In this paper, vibration signals for four conditions of a deep groove ball bearing - normal (N), inner race defect (IRD), ball defect (BD), and outer race defect (ORD) - were acquired from a customized bearing test rig at three different fault sizes. Two approaches were adopted for statistical feature extraction from the vibration signal. In the first approach, the raw signal is used for statistical feature extraction; in the second approach, the statistical features extracted are based on a bearing damage index (BDI). The proposed BDI technique uses a wavelet packet node energy coefficient analysis method. Both feature sets are used as inputs to an ANN classifier to evaluate its performance. A comparison of ANN performance is made based on raw vibration data and on data chosen using the BDI. The ANN performance was found to be noticeably higher when BDI-based features were used as inputs to the classifier.
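
    A hedged sketch of a BDI-style pipeline follows: wavelet packet node energies (via PyWavelets) serve as features for a small neural network classifier over four synthetic bearing conditions. The synthetic fault tones, wavelet choice, and network size are assumptions, not the paper's setup.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

def wp_energies(sig, wavelet='db4', level=3):
    """Energy of each wavelet-packet node at the given level -- the
    kind of coefficients a bearing damage index is built from."""
    wp = pywt.WaveletPacket(data=sig, wavelet=wavelet, maxlevel=level)
    return np.array([np.sum(node.data ** 2)
                     for node in wp.get_level(level, order='natural')])

def synth(fault_freq, n=1024, fs=1024.0):
    """Synthetic vibration: white noise plus a fault-specific tone."""
    t = np.arange(n) / fs
    return rng.normal(0.0, 0.5, n) + np.sin(2 * np.pi * fault_freq * t)

# Classes 0..3: normal (no tone) plus three illustrative fault tones.
freqs = [0.0, 60.0, 120.0, 240.0]
X = np.array([wp_energies(synth(f)) for f in freqs for _ in range(40)])
y = np.repeat(np.arange(len(freqs)), 40)

idx = rng.permutation(len(y))
X, y = X[idx], y[idx]
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X[:120], y[:120])
print("held-out accuracy:", clf.score(X[120:], y[120:]))
```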

  3. Sustainability of transport structures - some aspects of the nonlinear reliability assessment

    NASA Astrophysics Data System (ADS)

    Pukl, Radomír; Sajdlová, Tereza; Strauss, Alfred; Lehký, David; Novák, Drahomír

    2017-09-01

    Efficient techniques for nonlinear numerical analysis of concrete structures and advanced stochastic simulation methods have been combined to offer an advanced tool for the assessment of realistic behaviour, failure, and safety of transport structures. The utilized approach is based on randomization of the nonlinear finite element analysis of the structural models. Degradation aspects such as carbonation of concrete can be accounted for in order to predict the durability of the investigated structure and its sustainability. Results can serve as a rational basis for performance and sustainability assessment based on advanced nonlinear computer analysis of transport infrastructure structures such as bridges or tunnels. In the stochastic simulation, the input material parameters obtained from material tests, including their randomness and uncertainty, are represented as random variables or fields. Appropriate identification of material parameters is crucial for the virtual failure modelling of structures and structural elements. An inverse analysis approach using artificial neural networks and virtual stochastic simulations is applied to determine the fracture-mechanical parameters of the structural material and its numerical model. Structural response, reliability, and sustainability have been investigated on different types of transport structures made from various materials using the above-mentioned methodology and tools.

  4. Detection of Impaired Cerebral Autoregulation Using Selected Correlation Analysis: A Validation Study

    PubMed Central

    Brawanski, Alexander

    2017-01-01

    Multimodal brain monitoring has been utilized to optimize treatment of patients with critical neurological diseases. However, the amount of data requires an integrative tool set to unmask pathological events in a timely fashion. Recently we have introduced a mathematical model allowing the simulation of pathophysiological conditions such as reduced intracranial compliance and impaired autoregulation. Utilizing a mathematical tool set called selected correlation analysis (sca), correlation patterns, which indicate impaired autoregulation, can be detected in patient data sets (scp). In this study we compared the results of the sca with the pressure reactivity index (PRx), an established marker for impaired autoregulation. Mean PRx values were significantly higher in time segments identified as scp compared to segments showing no selected correlations (nsc). The sca based approach predicted cerebral autoregulation failure with a sensitivity of 78.8% and a specificity of 62.6%. Autoregulation failure, as detected by the results of both analysis methods, was significantly correlated with poor outcome. Sca of brain monitoring data detects impaired autoregulation with high sensitivity and sufficient specificity. Since the sca approach allows the simultaneous detection of both major pathological conditions, disturbed autoregulation and reduced compliance, it may become a useful analysis tool for brain multimodal monitoring data. PMID:28255331

  5. Detection of Impaired Cerebral Autoregulation Using Selected Correlation Analysis: A Validation Study.

    PubMed

    Proescholdt, Martin A; Faltermeier, Rupert; Bele, Sylvia; Brawanski, Alexander

    2017-01-01

    Multimodal brain monitoring has been utilized to optimize treatment of patients with critical neurological diseases. However, the amount of data requires an integrative tool set to unmask pathological events in a timely fashion. Recently we have introduced a mathematical model allowing the simulation of pathophysiological conditions such as reduced intracranial compliance and impaired autoregulation. Utilizing a mathematical tool set called selected correlation analysis (sca), correlation patterns, which indicate impaired autoregulation, can be detected in patient data sets (scp). In this study we compared the results of the sca with the pressure reactivity index (PRx), an established marker for impaired autoregulation. Mean PRx values were significantly higher in time segments identified as scp compared to segments showing no selected correlations (nsc). The sca based approach predicted cerebral autoregulation failure with a sensitivity of 78.8% and a specificity of 62.6%. Autoregulation failure, as detected by the results of both analysis methods, was significantly correlated with poor outcome. Sca of brain monitoring data detects impaired autoregulation with high sensitivity and sufficient specificity. Since the sca approach allows the simultaneous detection of both major pathological conditions, disturbed autoregulation and reduced compliance, it may become a useful analysis tool for brain multimodal monitoring data.
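
    For context, the PRx benchmark used in this study is commonly computed as a moving Pearson correlation between short-term averages of arterial blood pressure and intracranial pressure. The sketch below implements that generic recipe; the window lengths and the synthetic pressure-passive data are assumptions, and this is not the authors' selected correlation method.

```python
import numpy as np

def prx(abp, icp, avg=10, window=30):
    """PRx-style index: sliding Pearson correlation between `window`
    consecutive `avg`-sample means of arterial blood pressure (abp)
    and intracranial pressure (icp). Persistently positive values
    suggest pressure-passive, i.e. impaired, autoregulation."""
    n = min(len(abp), len(icp)) // avg
    a = np.asarray(abp)[:n * avg].reshape(n, avg).mean(axis=1)
    b = np.asarray(icp)[:n * avg].reshape(n, avg).mean(axis=1)
    return np.array([np.corrcoef(a[k:k + window], b[k:k + window])[0, 1]
                     for k in range(n - window + 1)])

rng = np.random.default_rng(3)
abp = 80 + 5 * np.sin(np.linspace(0, 20, 6000)) + rng.normal(0, 1, 6000)
icp = 12 + 0.8 * (abp - 80) + rng.normal(0, 1, 6000)  # pressure-passive
print("mean PRx:", prx(abp, icp).mean())   # near +1 -> impaired
```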

  6. Peridynamic theory for modeling three-dimensional damage growth in metallic and composite structures

    NASA Astrophysics Data System (ADS)

    Ochoa-Ricoux, Juan Pedro

    A recently introduced nonlocal peridynamic theory removes the obstacles present in classical continuum mechanics that limit the prediction of crack initiation and growth in materials. It is also applicable at different length scales. This study presents an alternative approach for the derivation of peridynamic equations of motion based on the principle of virtual work. It also presents solutions for the longitudinal vibration of a bar subjected to an initial stretch, propagation of a pre-existing crack in a plate subjected to velocity boundary conditions, and crack initiation and growth in a plate with a circular cutout. Furthermore, damage growth in composites involves complex and progressive failure modes. Current computational tools are incapable of predicting failure in composite materials mainly due to their mathematical structure. However, the peridynamic theory removes these obstacles by taking into account non-local interactions between material points. Hence, an application of the peridynamic theory to predict how damage propagates in fiber reinforced composite materials subjected to mechanical and thermal loading conditions is presented. Finally, an analysis approach based on a merger of the finite element method and the peridynamic theory is proposed. Its validity is established through qualitative and quantitative comparisons against the test results for a stiffened composite curved panel with a central slot under combined internal pressure and axial tension. The predicted initial and final failure loads, as well as the final failure modes, are in close agreement with the experimental observations. This proposed approach demonstrates the capability of the PD approach to assess the durability of complex composite structures.

  7. MAXimising Involvement in MUltiMorbidity (MAXIMUM) in primary care: protocol for an observation and interview study of patients, GPs and other care providers to identify ways of reducing patient safety failures

    PubMed Central

    Daker-White, Gavin; Hays, Rebecca; Esmail, Aneez; Minor, Brian; Barlow, Wendy; Brown, Benjamin; Blakeman, Thomas; Bower, Peter

    2014-01-01

    Introduction Increasing numbers of older people are living with multiple long-term health conditions but global healthcare systems and clinical guidelines have traditionally focused on the management of single conditions. Having two or more long-term conditions, or ‘multimorbidity’, is associated with a range of adverse consequences and poor outcomes and could put patients at increased risk of safety failures. Traditionally, most research into patient safety failures has explored hospital or inpatient settings. Much less is known about patient safety failures in primary care. Our core aims are to understand the mechanisms by which multimorbidity leads to safety failures, to explore the different ways in which patients and services respond (or fail to respond), and to identify opportunities for intervention. Methods and analysis We plan to undertake an applied ethnographic study of patients with multimorbidity. Patients’ interactions and environments, relevant to their healthcare, will be studied through observations, diary methods and semistructured interviews. A framework, based on previous studies, will be used to organise the collection and analysis of field notes, observations and other qualitative data. This framework includes the domains: access breakdowns, communication breakdowns, continuity of care errors, relationship breakdowns and technical errors. Ethics and dissemination Ethical approval was received from the National Health Service Research Ethics Committee for Wales. An individual case study approach is likely to be most fruitful for exploring the mechanisms by which multimorbidity leads to safety failures. A longitudinal and multiperspective approach will allow for the constant comparison of patient, carer and healthcare worker expectations and experiences related to the provision, integration and management of complex care. This data will be used to explore ways of engaging patients and carers more in their own care using shared decision-making, patient empowerment or other relevant models. PMID:25138807

  8. SU-E-T-87: A TG-100 Approach for Quality Improvement of Associated Dosimetry Equipment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manger, R; Pawlicki, T; Kim, G

    2015-06-15

    Purpose: Dosimetry protocols devote so much time to the discussion of ionization chamber choice, use, and performance that it is easy to forget about the importance of the associated dosimetry equipment (ADE) in radiation dosimetry - barometer, thermometer, electrometer, phantoms, triaxial cables, etc. Improper use and inaccuracy of these devices may significantly affect the accuracy of radiation dosimetry. The purpose of this study is to evaluate the risk factors in the monthly output dosimetry procedure and recommend corrective actions using a TG-100 approach. Methods: A failure mode and effects analysis (FMEA) of the monthly linac output check procedure was performed to determine which steps and failure modes carried the greatest risk. In addition, a fault tree analysis (FTA) was performed to expand the initial list of failure modes, making sure that none were overlooked. After determining the failure modes with the highest risk priority numbers (RPNs), 11 physicists were asked to score corrective actions based on their ease of implementation and potential impact. The results were aggregated into an impact map to determine the implementable corrective actions. Results: Three of the top five failure modes were related to the thermometer and barometer. The two highest RPN-ranked failure modes were related to barometric pressure inaccuracy due to their high lack-of-detectability scores. Six corrective actions were proposed to address barometric pressure inaccuracy, and the survey results found the following two corrective actions to be implementable: 1) send the barometer for recalibration at a calibration laboratory and 2) check the barometer accuracy against the local airport and correct for elevation. Conclusion: An FMEA on monthly output measurements displayed the importance of ADE for accurate radiation dosimetry. When brainstorming for corrective actions, an impact map is helpful for visualizing the overall impact versus the ease of implementation.
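
    The ranking step of such an FMEA reduces to arithmetic: each failure mode is scored (commonly 1-10) for severity, occurrence, and lack of detectability, and the Risk Priority Number is their product. The sketch below ranks a few invented monthly-output failure modes this way; all scores are illustrative, not the study's data.

```python
# (name, severity, occurrence, lack-of-detectability), each scored 1-10.
failure_modes = [
    ("barometer reads low, drift unnoticed", 7, 4, 9),
    ("thermometer miscalibrated",            6, 3, 8),
    ("electrometer leakage current",         5, 2, 6),
    ("wrong phantom setup distance",         8, 2, 3),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3],
                reverse=True)
for name, s, o, d in ranked:
    print(f"RPN {s * o * d:4d}  {name}")
```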

  9. The struggling student: a thematic analysis from the self-regulated learning perspective.

    PubMed

    Patel, Rakesh; Tarrant, Carolyn; Bonas, Sheila; Yates, Janet; Sandars, John

    2015-04-01

    Students who engage in self-regulated learning (SRL) are more likely to achieve academic success compared with students who have deficits in SRL and tend to struggle with academic performance. Understanding how poor SRL affects the response to failure at assessment will inform the development of better remediation. Semi-structured interviews were conducted with 55 students who had failed the final re-sit assessment at two medical schools in the UK to explore their use of SRL processes. A thematic analysis approach was used to identify the factors, from an SRL perspective, that prevented students from appropriately and adaptively overcoming failure, and confined them to a cycle of recurrent failure. Struggling students did not utilise key SRL processes, which caused them to make inappropriate choices of learning strategies for written and clinical formats of assessment, and to use maladaptive strategies for coping with failure. Their normalisation of the experience and external attribution of failure represented barriers to their taking up of formal support and seeking informal help from peers. This study identified that struggling students had problems with SRL, which caused them to enter a cycle of failure as a result of their limited attempts to access formal and informal support. Implications for how medical schools can create a culture that supports the seeking of help and the development of SRL, and improves remediation for struggling students, are discussed. © 2015 John Wiley & Sons Ltd.

  10. Recent advances in computational structural reliability analysis methods

    NASA Astrophysics Data System (ADS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-10-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
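
    The contrast with the safety-factor approach can be made concrete with a minimal Monte Carlo sketch: for a limit state g = R - S (resistance minus load), the failure probability is estimated directly as the fraction of sampled realizations with g <= 0. The distributions and numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def failure_probability(n=1_000_000):
    """Crude Monte Carlo estimate of P(g <= 0) for the limit state
    g = R - S: lognormal resistance R versus normal load S. A
    reliability analysis reports this probability, where a
    deterministic design would report only a safety factor."""
    R = rng.lognormal(mean=np.log(500.0), sigma=0.10, size=n)  # kN
    S = rng.normal(loc=350.0, scale=40.0, size=n)              # kN
    return np.mean(R - S <= 0.0)

print(f"P_f ~ {failure_probability():.2e}")
```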

  11. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  12. Optimization of composite box-beam structures including effects of subcomponent interactions

    NASA Technical Reports Server (NTRS)

    Ragon, Scott A.; Guerdal, Zafer; Starnes, James H., Jr.

    1995-01-01

    Minimum mass designs are obtained for a simple box beam structure subject to bending, torque and combined bending/torque load cases. These designs are obtained subject to point strain and linear buckling constraints. The present work differs from previous efforts in that special attention is paid to including the effects of subcomponent panel interaction in the optimal design process. Two different approaches are used to impose the buckling constraints. When the global approach is used, buckling constraints are imposed on the global structure via a linear eigenvalue analysis. This approach allows the subcomponent panels to interact in a realistic manner. The results obtained using this approach are compared to results obtained using a traditional, less expensive approach, called the local approach. When the local approach is used, in-plane loads are extracted from the global model and used to impose buckling constraints on each subcomponent panel individually. In the global cases, it is found that there can be significant interaction between skin, spar, and rib design variables. This coupling is weak or nonexistent in the local designs. It is determined that weight savings of up to 7% may be obtained by using the global approach instead of the local approach to design these structures. Several of the designs obtained using the linear buckling analysis are subjected to a geometrically nonlinear analysis. For the designs which were subjected to bending loads, the innermost rib panel begins to collapse at less than half the intended design load and in a mode different from that predicted by linear analysis. The discrepancy between the predicted linear and nonlinear responses is attributed to the effects of the nonlinear rib crushing load, and the parameter which controls this rib collapse failure mode is shown to be the rib thickness. The rib collapse failure mode may be avoided by increasing the rib thickness above the value obtained from the (linear analysis based) optimizer. It is concluded that it would be necessary to include geometric nonlinearities in the design optimization process if the true optimum in this case were to be found.

  13. Detection of system failures in multi-axes tasks. [pilot monitored instrument approach

    NASA Technical Reports Server (NTRS)

    Ephrath, A. R.

    1975-01-01

    The effects of the pilot's participation mode in the control task on workload level and failure detection performance were examined for a low-visibility landing approach. It was found that the participation mode had a strong effect on the pilot's workload, the induced workload being lowest when the pilot acted as a monitoring element during a coupled approach and highest when the pilot was an active element in the control loop. The effects of workload and participation mode on failure detection were separated. The participation mode was shown to have a dominant effect on the failure detection performance, with a failure in a monitored (coupled) axis being detected significantly faster than a comparable failure in a manually controlled axis.

  14. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1986-01-01

    Multiversion or N-version programming was proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. Specific topics addressed are: failure probabilities in N-version systems, consistent comparison in N-version systems, descriptions of the faults found in the Knight and Leveson experiment, analytic models of comparison testing, characteristics of the input regions that trigger faults, fault tolerance through data diversity, and the relationship between failures caused by automatically seeded faults.
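
    A minimal sketch of N-version execution with voting follows; it also hints at the consistent comparison problem the abstract lists, since floating-point outputs must be compared with a tolerance rather than exact equality. The three toy versions and the seeded fault are assumptions.

```python
def n_version(args, versions, tol=1e-9):
    """Run all versions on the same inputs and return the majority
    output, treating outputs as equal when they differ by at most tol
    (exact equality is unsafe for floating point)."""
    outputs = [v(*args) for v in versions]
    best, support = None, 0
    for cand in outputs:
        votes = sum(1 for o in outputs if abs(o - cand) <= tol)
        if votes > support:
            best, support = cand, votes
    if support <= len(versions) // 2:
        raise RuntimeError("no majority -- versions disagree")
    return best

# Three independently written versions of one computation; the third
# carries a seeded fault, analogous to the experiment's seeded faults.
v1 = lambda x: x * x
v2 = lambda x: x ** 2
v3 = lambda x: x * x + 1.0   # faulty version
print(n_version((3.0,), [v1, v2, v3]))   # -> 9.0 by 2-of-3 majority
```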

  15. On the Formulation of Anisotropic-Polyaxial Failure Criteria: A Comparative Study

    NASA Astrophysics Data System (ADS)

    Parisio, Francesco; Laloui, Lyesse

    2018-02-01

    The correct representation of the failure of geomaterials that feature strength anisotropy and polyaxiality is crucial for many applications. In this contribution, we propose and evaluate through a comparative study a generalized framework that covers both features. Polyaxiality of strength is modeled with a modified Van Eekelen approach, while the anisotropy is modeled using a fabric tensor approach of the Pietruszczak and Mroz type. Both approaches share the same philosophy as they can be applied to simpler failure surfaces, allowing great flexibility in model formulation. The new failure surface is tested against experimental data and its performance compared against classical failure criteria commonly used in geomechanics. Our study finds that the global error between predictions and data is generally smaller for the proposed framework compared to other classical approaches.

  16. The fluoroscopy time, door to balloon time, contrast volume use and prevalence of vascular access site failure with transradial versus transfemoral approach in ST segment elevation myocardial infarction: A systematic review & meta-analysis.

    PubMed

    Singh, Sukhchain; Singh, Mukesh; Grewal, Navsheen; Khosla, Sandeep

    2015-12-01

    The authors aimed to conduct the first systematic review and meta-analysis in STEMI patients evaluating the vascular access site failure rate, fluoroscopy time, door to balloon time, and contrast volume used with the transradial versus transfemoral approach (TRA vs TFA) for PCI. The PubMed, CINAHL, clinicaltrials.gov, Embase and CENTRAL databases were searched for randomized trials comparing TRA versus TFA. Random effect models were used to conduct this meta-analysis. Fourteen randomized trials comprising 3758 patients met the inclusion criteria. The access site failure rate was significantly higher with TRA than with TFA (RR 3.30, CI 2.16-5.03; P=0.000). Random effect inverse-variance weighted prevalence rate meta-analysis predicted an access site failure rate of 4% (95% CI 3.0-6.0%) with TRA versus 1% (95% CI 0.0-1.0%) with TFA. Door to balloon time (standardized mean difference [SMD] 0.30 min, 95% CI 0.23-0.37 min; P=0.000) and fluoroscopy time (SMD 0.14 min, 95% CI 0.06-0.23 min; P=0.001) were also significantly higher with TRA. There was no difference in the amount of contrast volume used with TRA versus TFA (SMD -0.05 ml, 95% CI -0.14 to 0.04 ml; P=0.275). Statistical heterogeneity was low in the cross-over rate and contrast volume comparisons, moderate in fluoroscopy time, but high in the door to balloon time comparison. Operators need to consider the higher cross-over rate with TRA compared to TFA in STEMI patients while attempting PCI. Fluoroscopy and door to balloon times are negligibly higher with TRA, but there is no difference in terms of contrast volume use. Copyright © 2015 Elsevier Inc. All rights reserved.
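
    The pooling machinery behind such a review can be sketched briefly: with per-trial log risk ratios and their variances, the DerSimonian-Laird random-effects method estimates the between-trial variance from Cochran's Q and then inverse-variance weights the trials. The four trials below are invented for illustration and are not the review's data.

```python
import numpy as np

def dersimonian_laird(log_rr, var):
    """Random-effects pooling of log risk ratios (DerSimonian-Laird):
    estimate between-trial variance tau^2 from Cochran's Q, then
    inverse-variance weight each trial by 1/(var + tau^2)."""
    w = 1.0 / var
    fixed = np.sum(w * log_rr) / np.sum(w)
    Q = np.sum(w * (log_rr - fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(log_rr) - 1)) / c)
    w_re = 1.0 / (var + tau2)
    est = np.sum(w_re * log_rr) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return np.exp(est), np.exp(est - 1.96 * se), np.exp(est + 1.96 * se)

log_rr = np.log(np.array([3.1, 2.5, 4.0, 3.6]))   # invented trials
var = np.array([0.10, 0.15, 0.20, 0.12])
rr, lo, hi = dersimonian_laird(log_rr, var)
print(f"pooled RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```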

  17. Dynamic analysis method for prevention of failure in the first-stage low-pressure turbine blade with two-finger root

    NASA Astrophysics Data System (ADS)

    Park, Jung-Yong; Jung, Yong-Keun; Park, Jong-Jin; Kang, Yong-Ho

    2002-05-01

    Failures of turbine blades are identified as the leading cause of unplanned outages for steam turbines, and low-pressure turbine blade failures account for more than 70 percent of turbine component failures. The prevention of failures in low-pressure turbine blades is therefore clearly needed. In this study, we identify factors contributing to LP turbine blade failures and take a three-step approach toward solving the blade failure problem; the procedure, illustrated by a case study, is used to guide and support the plant manager's decisions to avoid a costly, unplanned outage. The first step is to measure the natural frequencies in a mockup test and compare them with the nozzle passing frequency. The second step is to use FEM (the BLADE code) to calculate the natural frequencies of groups of 7 and 10 blades. The third step is to tune the natural frequencies of the grouped blades away from the nozzle passing frequency.
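
    The first step's resonance screening amounts to checking blade natural frequencies against the nozzle passing frequency and its harmonics; a minimal sketch with hypothetical numbers (not the paper's measurements):

        def resonance_margins(natural_freqs_hz, rpm, n_nozzles, harmonics=3,
                              margin=0.10):
            """Flag blade natural frequencies within `margin` (fractional)
            of the nozzle passing frequency (NPF) or its harmonics."""
            npf = n_nozzles * rpm / 60.0  # nozzle passing frequency, Hz
            hits = []
            for f in natural_freqs_hz:
                for k in range(1, harmonics + 1):
                    if abs(f - k * npf) / (k * npf) < margin:
                        hits.append((f, k, k * npf))
            return npf, hits

        # Hypothetical grouped-blade modes for a 3600 rpm rotor with 60 nozzles:
        print(resonance_margins([210.0, 3550.0, 7300.0], 3600, 60))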

  18. Improved FTA methodology and application to subsea pipeline reliability design.

    PubMed

    Lin, Jing; Yuan, Yongbo; Zhang, Mingyuan

    2014-01-01

    An innovative logic tree, the Failure Expansion Tree (FET), is proposed in this paper, which improves on traditional Fault Tree Analysis (FTA). It describes a different way of thinking about risk factor identification and reliability risk assessment. By providing a more comprehensive and objective methodology, the rather subjective nature of FTA node discovery is significantly reduced and the resulting mathematical calculations for quantitative analysis are greatly simplified. Applied to the useful-life phase of a subsea pipeline engineering project, the approach provides a more structured analysis by constructing a tree following the laws of physics and geometry. The resulting improvements are summarized in a comparison table.

  19. Improved FTA Methodology and Application to Subsea Pipeline Reliability Design

    PubMed Central

    Lin, Jing; Yuan, Yongbo; Zhang, Mingyuan

    2014-01-01

    An innovative logic tree, the Failure Expansion Tree (FET), is proposed in this paper, which improves on traditional Fault Tree Analysis (FTA). It describes a different way of thinking about risk factor identification and reliability risk assessment. By providing a more comprehensive and objective methodology, the rather subjective nature of FTA node discovery is significantly reduced and the resulting mathematical calculations for quantitative analysis are greatly simplified. Applied to the useful-life phase of a subsea pipeline engineering project, the approach provides a more structured analysis by constructing a tree following the laws of physics and geometry. The resulting improvements are summarized in a comparison table. PMID:24667681

  20. Safety analysis of occupational exposure of healthcare workers to residual contaminations of cytotoxic drugs using FMECA security approach.

    PubMed

    Le, Laetitia Minh Mai; Reitter, Delphine; He, Sophie; Bonle, Franck Té; Launois, Amélie; Martinez, Diane; Prognon, Patrice; Caudron, Eric

    2017-12-01

    Handling cytotoxic drugs is associated with chemical contamination of workplace surfaces. The potential mutagenic, teratogenic and oncogenic properties of those drugs create a risk of occupational exposure for healthcare workers, from reception of starting materials to the preparation and administration of cytotoxic therapies. The Security Failure Mode Effects and Criticality Analysis (FMECA) was used as a proactive method to assess the risks involved in the chemotherapy compounding process. FMECA was carried out by a multidisciplinary team from 2011 to 2016. Potential failure modes of the process were identified based on the Risk Priority Number (RPN) that prioritizes corrective actions. Twenty-five potential failure modes were identified. Based on RPN results, the corrective actions plan was revised annually to reduce the risk of exposure and improve practices. Since 2011, 16 specific measures were implemented successively. In six years, a cumulative RPN reduction of 626 was observed, with a decrease from 912 to 286 (-69%) despite an increase of cytotoxic compounding activity of around 23.2%. In order to anticipate and prevent occupational exposure, FMECA is a valuable tool to identify, prioritize and eliminate potential failure modes for operators involved in the cytotoxic drug preparation process before the failures occur. Copyright © 2017 Elsevier B.V. All rights reserved.
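
    The ranking logic underlying the corrective actions plan is the standard RPN product; a small sketch with hypothetical failure modes and scores (not those of the study):

        def rpn(occurrence, severity, detectability):
            """Risk Priority Number: RPN = O x S x D."""
            return occurrence * severity * detectability

        failure_modes = {  # hypothetical (O, S, D) scores on 1-10 scales
            "glove breach during compounding": (4, 8, 3),
            "vial surface contamination at reception": (5, 6, 4),
            "spill during transport to ward": (2, 7, 5),
        }
        ranked = sorted(failure_modes.items(),
                        key=lambda kv: rpn(*kv[1]), reverse=True)
        for name, osd in ranked:
            print(f"RPN={rpn(*osd):3d}  {name}")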

  1. A relation to predict the failure of materials and potential application to volcanic eruptions and landslides

    PubMed Central

    Hao, Shengwang; Liu, Chao; Lu, Chunsheng; Elsworth, Derek

    2016-01-01

    A theoretical explanation of a time-to-failure relation is presented, with this relationship then used to describe the failure of materials. This provides the potential to predict timing (tf − t) immediately before failure by extrapolating the trajectory as it asymptotes to zero with no need to fit unknown exponents as previously proposed in critical power law behaviors. This generalized relation is verified by comparison with approaches to criticality for volcanic eruptions and creep failure. A new relation based on changes with stress is proposed as an alternative expression of Voight’s relation, which is widely used to describe the accelerating precursory signals before material failure and broadly applied to volcanic eruptions, landslides and other phenomena. The new generalized relation reduces to Voight’s relation if stress is limited to increase at a constant rate with time. This implies that the time-derivatives in Voight’s analysis may be a subset of a more general expression connecting stress derivatives, and thus provides a potential method for forecasting these events. PMID:27306851
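
    For reference, Voight's relation mentioned above is conventionally written as

        \ddot{\Omega} = A\,\dot{\Omega}^{\alpha}

    where \Omega is a monitored quantity such as strain or cumulative seismicity and A and \alpha are empirical constants; for \alpha = 2 the inverse rate 1/\dot{\Omega} decreases linearly to zero at the failure time, which underlies the extrapolation idea discussed here. The paper's generalization replaces these time derivatives with stress derivatives.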

  2. Space Shuttle Main Engine Quantitative Risk Assessment: Illustrating Modeling of a Complex System with a New QRA Software Package

    NASA Technical Reports Server (NTRS)

    Smart, Christian

    1998-01-01

    During 1997, a team from Hernandez Engineering, MSFC, Rocketdyne, Thiokol, Pratt & Whitney, and USBI completed the first phase of a two-year Quantitative Risk Assessment (QRA) of the Space Shuttle. The models for the Shuttle systems were entered and analyzed by a new QRA software package. This system, termed the Quantitative Risk Assessment System (QRAS), was designed by NASA and programmed by the University of Maryland. The software is a groundbreaking PC-based risk assessment package that allows the user to model complex systems in a hierarchical fashion. Features of the software include the ability to easily select quantifications of failure modes, draw Event Sequence Diagrams (ESDs) interactively, perform uncertainty and sensitivity analysis, and document the modeling. This paper illustrates both the approach used in modeling and the particular features of the software package. The software is general and can be used in a QRA of any complex engineered system. The author is the project lead for the modeling of the Space Shuttle Main Engines (SSMEs), and this paper focuses on the modeling completed for the SSMEs during 1997. In particular, the groundrules for the study, the databases used, the way in which ESDs were used to model catastrophic failure of the SSMEs, the methods used to quantify the failure rates, and how QRAS was used in the modeling effort are discussed. Groundrules were necessary to limit the scope of such a complex study, especially with regard to a liquid rocket engine such as the SSME, which can be shut down after ignition either on the pad or in flight. The SSME was divided into its constituent components and subsystems. These were ranked on the basis of the possibility of being upgraded and the risk of catastrophic failure. Once this was done, the Shuttle program Hazard Analysis and Failure Modes and Effects Analysis (FMEA) were used to create a list of potential failure modes to be modeled. The groundrules and other criteria were used to screen out the many failure modes that did not contribute significantly to the catastrophic risk. The Hazard Analysis and FMEA for the SSME were also used to build ESDs that show the chain of events leading from the failure mode occurrence to one of the following end states: catastrophic failure, engine shutdown, or successful operation (successful with respect to the failure mode under consideration).

  3. Evaluation of marginal failures of dental composite restorations by acoustic emission analysis.

    PubMed

    Gu, Ja-Uk; Choi, Nak-Sam

    2013-01-01

    In this study, a nondestructive method based on acoustic emission (AE) analysis was developed to evaluate the marginal failure states of dental composite restorations. Three types of ring-shaped substrates, which were modeled after a Class I cavity, were prepared from polymethyl methacrylate, stainless steel, and human molar teeth. A bonding agent and a composite resin were applied to the ring-shaped substrates and cured by light exposure. At each time-interval measurement, the tooth substrate presented a higher number of AE hits than the polymethyl methacrylate and steel substrates. Marginal disintegration estimates derived from cumulative AE hits and cumulative AE energy parameters showed that a significant portion of marginal gap formation was already realized within 1 min of the initial light-curing stage. Estimation based on cumulative AE energy gave a higher level of marginal failure than that based on AE hits. It was concluded that the AE analysis method developed in this study is a viable approach for predicting the clinical survival of dental composite restorations efficiently within a short test period.

  4. Automatically Detecting Failures in Natural Language Processing Tools for Online Community Text.

    PubMed

    Park, Albert; Hartzler, Andrea L; Huh, Jina; McDonald, David W; Pratt, Wanda

    2015-08-31

    The prevalence and value of patient-generated health text are increasing, but processing such text remains problematic. Although existing biomedical natural language processing (NLP) tools are appealing, most were developed to process clinician- or researcher-generated text, such as clinical notes or journal articles. In addition to being constructed for different types of text, other challenges of using existing NLP tools include constantly changing technologies, source vocabularies, and characteristics of text. These continuously evolving challenges warrant the need for low-cost, systematic assessment. However, the primarily accepted evaluation method in NLP, manual annotation, requires tremendous effort and time. The primary objective of this study is to explore an alternative approach: using low-cost, automated methods to detect failures (eg, incorrect boundaries, missed terms, mismapped concepts) when processing patient-generated text with existing biomedical NLP tools. We first characterize common failures that NLP tools make in processing online community text. We then demonstrate the feasibility of our automated approach in detecting these common failures using one of the most popular biomedical NLP tools, MetaMap. Using 9657 posts from an online cancer community, we explored our automated failure detection approach in two steps: (1) to characterize the failure types, we first manually reviewed MetaMap's commonly occurring failures, grouped the inaccurate mappings into failure types, and then identified the causes of the failures through iterative rounds of manual review using open coding; and (2) to automatically detect these failure types, we then explored combinations of existing NLP techniques and dictionary-based matching for each failure cause. Finally, we manually evaluated the automatically detected failures. From our manual review, we characterized three types of failure: (1) boundary failures, (2) missed term failures, and (3) word ambiguity failures. Within these three failure types, we discovered 12 causes of inaccurate concept mappings. Using automated methods, we detected almost half of MetaMap's 383,572 mappings as problematic. Word sense ambiguity failure was the most widely occurring, comprising 82.22% of failures. Boundary failure was the second most frequent, amounting to 15.90% of failures, while missed term failures were the least common, making up 1.88% of failures. The automated failure detection achieved precision, recall, accuracy, and F1 score of 83.00%, 92.57%, 88.17%, and 87.52%, respectively. We illustrate the challenges of processing patient-generated online health community text and characterize the failures of NLP tools on this text, demonstrating the feasibility of our low-cost approach to automatically detect those failures. Our approach shows the potential for scalable and effective solutions to automatically assess constantly evolving NLP tools and source vocabularies for processing patient-generated text.
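
    The reported metrics follow from standard confusion-matrix arithmetic; a quick sketch (the counts below are illustrative, chosen only to land near the reported figures):

        def metrics(tp, fp, fn, tn):
            """Precision, recall, accuracy and F1 from confusion-matrix counts."""
            precision = tp / (tp + fp)
            recall = tp / (tp + fn)
            accuracy = (tp + tn) / (tp + fp + fn + tn)
            f1 = 2 * precision * recall / (precision + recall)
            return precision, recall, accuracy, f1

        p, r, a, f = metrics(tp=832, fp=170, fn=67, tn=931)
        print(f"precision={p:.2%} recall={r:.2%} accuracy={a:.2%} F1={f:.2%}")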

  5. Relationship between sponsorship and failure rate of dental implants: a systematic approach.

    PubMed

    Popelut, Antoine; Valet, Fabien; Fromentin, Olivier; Thomas, Aurélie; Bouchard, Philippe

    2010-04-21

    The number of dental implant treatments increases annually. Dental implants are manufactured by competing companies. Systematic reviews and meta-analyses have shown a clear association between pharmaceutical industry funding of clinical trials and pro-industry results. So far, the impact of industry sponsorship on the outcomes and conclusions of dental implant clinical trials has never been explored. The aim of the present study was to examine the financial sponsorship of dental implant trials and to evaluate whether research funding sources may affect the annual failure rate. A systematic approach was used to identify systematic reviews published between January 1993 and December 2008 that specifically deal with the length of survival of dental implants. Primary articles were extracted from these reviews. The failure rate of the dental implants included in the trials was calculated. Data on publication year, Impact Factor, prosthetic design, periodontal status reporting, number of dental implants included in the trials, methodological quality of the studies, presence of a statistical advisor, and financial sponsorship were extracted by two independent reviewers (kappa = 0.90; CI(95%) [0.77-1.00]). Univariate quasi-Poisson regression models and multivariate analysis were used to identify variables that were significantly associated with failure rates. Five systematic reviews were identified, from which 41 analyzable trials were extracted. The mean annual failure rate estimate was 1.09% (CI(95%) [0.84-1.42]). The funding source was not reported in 63% of the trials (26/41). Sixty-six percent of the trials were considered as having a risk of bias (27/41). Given study age, both industry-associated (OR = 0.21; CI(95%) [0.12-0.38]) and unknown funding source trials (OR = 0.33; CI(95%) [0.21-0.51]) had lower annual failure rates compared with non-industry-associated trials. A conflict of interest statement was disclosed in 2 trials. When controlling for other factors, the probability of annual failure for industry-associated trials is significantly lower compared with non-industry-associated trials. This bias may have significant implications for tooth extraction decision making, research on tooth preservation, and governmental health care policies.

  6. Optimized Vertex Method and Hybrid Reliability

    NASA Technical Reports Server (NTRS)

    Smith, Steven A.; Krishnamurthy, T.; Mason, B. H.

    2002-01-01

    A method of calculating the fuzzy response of a system is presented. This method, called the Optimized Vertex Method (OVM), is based upon the vertex method but requires considerably fewer function evaluations. The method is demonstrated by calculating the response membership function of strain-energy release rate for a bonded joint with a crack. The possibility of failure of the bonded joint was determined over a range of loads. After completing the possibilistic analysis, the possibilistic (fuzzy) membership functions were transformed to probability density functions and the probability of failure of the bonded joint was calculated. This approach is called a possibility-based hybrid reliability assessment. The possibility and probability of failure are presented and compared to a Monte Carlo Simulation (MCS) of the bonded joint.
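
    A minimal sketch of the plain vertex method that the OVM accelerates (the optimization that reduces the number of function evaluations is the paper's contribution and is not reproduced here): at each alpha-cut the fuzzy inputs become intervals, and the response interval is the min/max of the function over all combinations of interval endpoints, assuming monotonic behaviour in each input.

        from itertools import product

        def vertex_method(f, intervals):
            """Response interval of f over all 2^n combinations of the
            input interval endpoints (valid when f is monotonic in each input)."""
            values = [f(*v) for v in product(*intervals)]
            return min(values), max(values)

        # Hypothetical response: strain-energy release rate ~ load^2 / stiffness
        g = lambda load, stiffness: load ** 2 / stiffness
        print(vertex_method(g, [(1.0, 2.0), (8.0, 10.0)]))  # -> (0.1, 0.5)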

  7. Failure mode and effects analysis based risk profile assessment for stereotactic radiosurgery programs at three cancer centers in Brazil.

    PubMed

    Teixeira, Flavia C; de Almeida, Carlos E; Saiful Huq, M

    2016-01-01

    The goal of this study was to evaluate the safety and quality management program for stereotactic radiosurgery (SRS) treatment processes at three radiotherapy centers in Brazil by using three industrial engineering tools (1) process mapping, (2) failure modes and effects analysis (FMEA), and (3) fault tree analysis. The recommendations of Task Group 100 of American Association of Physicists in Medicine were followed to apply the three tools described above to create a process tree for SRS procedure for each radiotherapy center and then FMEA was performed. Failure modes were identified for all process steps and values of risk priority number (RPN) were calculated from O, S, and D (RPN = O × S × D) values assigned by a professional team responsible for patient care. The subprocess treatment planning was presented with the highest number of failure modes for all centers. The total number of failure modes were 135, 104, and 131 for centers I, II, and III, respectively. The highest RPN value for each center is as follows: center I (204), center II (372), and center III (370). Failure modes with RPN ≥ 100: center I (22), center II (115), and center III (110). Failure modes characterized by S ≥ 7, represented 68% of the failure modes for center III, 62% for center II, and 45% for center I. Failure modes with RPNs values ≥100 and S ≥ 7, D ≥ 5, and O ≥ 5 were considered as high priority in this study. The results of the present study show that the safety risk profiles for the same stereotactic radiotherapy process are different at three radiotherapy centers in Brazil. Although this is the same treatment process, this present study showed that the risk priority is different and it will lead to implementation of different safety interventions among the centers. Therefore, the current practice of applying universal device-centric QA is not adequate to address all possible failures in clinical processes at different radiotherapy centers. Integrated approaches to device-centric and process specific quality management program specific to each radiotherapy center are the key to a safe quality management program.

  8. Regression analysis of informative current status data with the additive hazards model.

    PubMed

    Zhao, Shishun; Hu, Tao; Ma, Ling; Wang, Peijie; Sun, Jianguo

    2015-04-01

    This paper discusses regression analysis of current status failure time data arising from the additive hazards model in the presence of informative censoring. Many methods have been developed for regression analysis of current status data under various regression models when the censoring is noninformative, and there also exists a large literature on parametric analysis of informative current status data in the context of tumorigenicity experiments. In this paper, a semiparametric maximum likelihood estimation procedure is presented in which a copula model is employed to describe the relationship between the failure time of interest and the censoring time. Furthermore, I-splines are used to approximate the nonparametric functions involved, and the asymptotic consistency and normality of the proposed estimators are established. A simulation study indicates that the proposed approach works well in practical situations. An illustrative example is also provided.

  9. Fatigue crack growth in an aluminum alloy-fractographic study

    NASA Astrophysics Data System (ADS)

    Salam, I.; Muhammad, W.; Ejaz, N.

    2016-08-01

    A two-fold approach was adopted to understand the fatigue crack growth process in an aluminum alloy: fatigue crack growth testing of samples and analysis of the fractured surfaces. Fatigue crack growth tests were conducted on middle tension M(T) samples prepared from an aluminum alloy cylinder. The tests were conducted under constant amplitude loading at an R ratio of 0.1. The applied stress was 20, 30 and 40 percent of the yield stress of the material, and the fatigue crack growth data were recorded. After fatigue testing, the samples were subjected to detailed scanning electron microscopic (SEM) analysis. The resulting fracture surfaces were subjected to qualitative and quantitative fractographic examinations. Quantitative fracture analysis included an estimation of the crack growth rate (CGR) in different regions. The effect of microstructural features on fatigue crack growth was examined. It was observed that in stage II (the crack growth region), the failure mode changes from intergranular to transgranular as the stress level increases. In the region of intergranular failure, localized brittle failure was observed and fatigue striations were difficult to reveal. In the region of transgranular failure, however, the crack path is independent of the microstructural features; localized ductile failure was observed, and well-defined fatigue striations were present in the wake of the fatigue crack. The effect of the interaction of the growing fatigue crack with microstructural features was not substantial. The final fracture (stage III) was ductile in all cases.

  10. The competing risks Cox model with auxiliary case covariates under weaker missing-at-random cause of failure.

    PubMed

    Nevo, Daniel; Nishihara, Reiko; Ogino, Shuji; Wang, Molin

    2017-08-04

    In the analysis of time-to-event data with multiple causes using a competing risks Cox model, often the cause of failure is unknown for some of the cases. The probability of a missing cause is typically assumed to be independent of the cause given the time of the event and covariates measured before the event occurred. In practice, however, the underlying missing-at-random assumption does not necessarily hold. Motivated by colorectal cancer molecular pathological epidemiology analysis, we develop a method to conduct valid analysis when additional auxiliary variables are available for cases only. We consider a weaker missing-at-random assumption, with missing pattern depending on the observed quantities, which include the auxiliary covariates. We use an informative likelihood approach that will yield consistent estimates even when the underlying model for missing cause of failure is misspecified. The superiority of our method over naive methods in finite samples is demonstrated by simulation study results. We illustrate the use of our method in an analysis of colorectal cancer data from the Nurses' Health Study cohort, where, apparently, the traditional missing-at-random assumption fails to hold.

  11. Independent Orbiter Assessment (IOA): Assessment of the mechanical actuation subsystem, volume 1

    NASA Technical Reports Server (NTRS)

    Bradway, M. W.; Slaughter, W. T.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine draft failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the proposed Post 51-L NASA FMEA/CIL baseline that was available. A resolution of each discrepancy from the comparison was provided through additional analysis as required. These discrepancies were flagged as issues, and recommendations were made based on the FMEA data available at the time. This report documents the results of that comparison for the Orbiter Mechanical Actuation System (MAS) hardware. Specifically, the MAS hardware consists of the following components: Air Data Probe (ADP); Elevon Seal Panel (ESP); External Tank Umbilical (ETU); Ku-Band Deploy (KBD); Payload Bay Doors (PBD); Payload Bay Radiators (PBR); Personnel Hatches (PH); Vent Door Mechanism (VDM); and Startracker Door Mechanism (SDM). Criticality was assigned based upon the severity of the effect for each failure mode.

  12. Operation reliability analysis of independent power plants of gas-transmission system distant production facilities

    NASA Astrophysics Data System (ADS)

    Piskunov, Maksim V.; Voytkov, Ivan S.; Vysokomornaya, Olga V.; Vysokomorny, Vladimir S.

    2015-01-01

    A new approach was developed to analyze the causes of operational failures of independent power supply sources (mini-CHP plants) at remote linear facilities of the gas-transmission system in the eastern part of Russia. The conditions that trigger the maximum allowable working-substance temperature at the condenser outlet were determined using mathematical simulation of the unsteady heat and mass transfer processes in the condenser of mini-CHP plants; under these conditions, the probability of failure of the independent power supply sources increases. The influence of environmental factors (in particular, ambient temperature) and of the plant's electrical output on mini-CHP plant operating reliability was analyzed. Values of the mean time to failure and the failure density of power plants operating in different regions of Eastern Siberia and the Russian Far East were obtained using numerical simulation results for the heat and mass transfer processes during working-substance condensation.

  13. Clipping in Awake Surgery as End-Stage in a Complex Internal Carotid Artery Aneurysm After Failure of Multimodal Endovascular and Extracranial-Intracranial Bypass Treatment.

    PubMed

    Cannizzaro, Delia; Peschillo, Simone; Mancarella, Cristina; La Pira, Biagia; Rastelli, Emanuela; Passacantilli, Emiliano; Santoro, Antonio

    2017-06-01

    Intracranial carotid artery aneurysms can be treated via microsurgical or endovascular techniques. Optimal planning is the result of careful patient selection through clinical, anatomic, and angiographic analysis. We present a case of a ruptured internal carotid artery (ICA) aneurysm that became a complex aneurysm after failure of multimodal endovascular and surgical treatment. We describe complete trapping in an awake craniotomy after failure of coiling, stenting, and bypassing. ICA aneurysms can become complex aneurysms following multi-treatment failure. Endovascular approaches to treat ICA aneurysms include coiling, stenting, flow diverter stenting, and the stent-assisted coiling technique. The role of surgery remains relevant. To avoid severe neurologic deficits, recurrence, and the need for retreatment, multidisciplinary discussion with experienced endovascular and vascular neurosurgeons is mandatory in such complex cases. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  14. Spatial and temporal analyses for multiscale monitoring of landslides: Examples from Northern Ireland

    NASA Astrophysics Data System (ADS)

    Bell, Andrew; McKinley, Jennifer; Hughes, David

    2013-04-01

    Landslides in the form of debris flows, large-scale rotational features and composite mudflows impact transport corridors, cutting off local communities and, in some instances, resulting in loss of life. This study presents landslide monitoring methods used for predicting and characterising landslide activity along transport corridors. A variety of approaches are discussed: desk-based risk assessment of slopes using Geographical Information Systems (GIS); aerial LiDAR surveys; and terrestrial LiDAR monitoring and field instrumentation of selected sites. A GIS-based case study is discussed which provides risk assessment of the potential for slope stability issues. Layers incorporated within the system include a Digital Elevation Model (DEM), slope, aspect, solid and drift geology, and groundwater conditions; additional datasets include the consequence of failure. These are combined within a risk model and presented as likelihoods of failure. This integrated spatial approach for slope risk assessment provides the user with a preliminary risk assessment of sites. An innovative "Flexviewer" web-based server interface allows users to gather information about selected areas without needing advanced GIS techniques. On a macro landscape scale, aerial LiDAR (ALS) surveys are used to characterise landslides within the surrounding terrain. DEMs are generated along with terrain derivatives: slope, curvature and various measures of terrain roughness. Spatial analysis of terrain morphological parameters allows characterisation of slope stability issues and is used to predict areas of potential failure or recently failed terrain. On a local scale, ground-based approaches are employed to monitor changes in selected slopes using ALS and risk assessment approaches. Results are shown from ongoing bimonthly terrestrial LiDAR (TLS) monitoring of a slope within a site-specific, geodetically referenced network. This has allowed a classification of changes in the slopes, with DEMs of difference showing areas of recent movement, erosion and deposition. In addition, changes in the structure of the slope, characterised by DEMs of difference and morphological parameters in the form of roughness, slope and curvature measures, are progressively linked to failures indicated by temporal DEM monitoring. Preliminary results are presented for a case site at Straidkilly Point, Glenarm, Co. Antrim, Northern Ireland, illustrating multiple approaches to the spatial and temporal monitoring of landslides. These indicate how spatial morphological approaches and risk assessment frameworks, coupled with TLS monitoring and field instrumentation, enable characterisation and prediction of potential areas of slope stability issues. On-site weather instrumentation and piezometers document changes in pore water pressures, providing site-specific information with geotechnical observations parameterised within the temporal LiDAR monitoring. This provides a multifaceted approach to the characterisation and analysis of slope stability issues. The presented methodology of multiscale datasets and surveying approaches, utilising spatial parameters and risk index mapping, enables a more comprehensive and effective prediction of landslides, resulting in effective characterisation and remediation strategies.
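
    The DEMs of difference mentioned above are, at their core, raster subtractions with a change threshold that separates survey noise from real movement; a schematic sketch (hypothetical arrays and level of detection):

        import numpy as np

        def dem_of_difference(dem_new, dem_old, lod=0.05):
            """Difference two co-registered DEMs, masking |change| below the
            level of detection (lod, in metres) as survey noise."""
            dod = np.asarray(dem_new, float) - np.asarray(dem_old, float)
            return np.where(np.abs(dod) >= lod, dod, 0.0)

        # Hypothetical 2x2 DEM tiles from successive TLS surveys:
        print(dem_of_difference([[10.0, 10.2], [9.8, 10.0]],
                                [[10.0, 10.0], [10.0, 10.0]]))
        # negative cells indicate erosion, positive cells deposition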

  15. Trade Studies of Space Launch Architectures using Modular Probabilistic Risk Analysis

    NASA Technical Reports Server (NTRS)

    Mathias, Donovan L.; Go, Susie

    2006-01-01

    A top-down risk assessment in the early phases of space exploration architecture development can provide understanding and intuition of the potential risks associated with new designs and technologies. In this approach, risk analysts draw from their past experience and the heritage of similar existing systems as a source for reliability data. This top-down approach captures the complex interactions of the risk driving parts of the integrated system without requiring detailed knowledge of the parts themselves, which is often unavailable in the early design stages. Traditional probabilistic risk analysis (PRA) technologies, however, suffer several drawbacks that limit their timely application to complex technology development programs. The most restrictive of these is a dependence on static planning scenarios, expressed through fault and event trees. Fault trees incorporating comprehensive mission scenarios are routinely constructed for complex space systems, and several commercial software products are available for evaluating fault statistics. These static representations cannot capture the dynamic behavior of system failures without substantial modification of the initial tree. Consequently, the development of dynamic models using fault tree analysis has been an active area of research in recent years. This paper discusses the implementation and demonstration of dynamic, modular scenario modeling for integration of subsystem fault evaluation modules using the Space Architecture Failure Evaluation (SAFE) tool. SAFE is a C++ code that was originally developed to support NASA's Space Launch Initiative. It provides a flexible framework for system architecture definition and trade studies. SAFE supports extensible modeling of dynamic, time-dependent risk drivers of the system and functions at the level of fidelity for which design and failure data exists. The approach is scalable, allowing inclusion of additional information as detailed data becomes available. The tool performs a Monte Carlo analysis to provide statistical estimates. Example results of an architecture system reliability study are summarized for an exploration system concept using heritage data from liquid-fueled expendable Saturn V/Apollo launch vehicles.

  16. Quantile Regression with Censored Data

    ERIC Educational Resources Information Center

    Lin, Guixian

    2009-01-01

    The Cox proportional hazards model and the accelerated failure time model are frequently used in survival data analysis. They are powerful, yet have limitation due to their model assumptions. Quantile regression offers a semiparametric approach to model data with possible heterogeneity. It is particularly powerful for censored responses, where the…

  17. How Analysis Informs Regulation: Success and Failure of Evolving Approaches to Polyfluoroalkyl Acid Contamination

    EPA Science Inventory

    Worldwide attention has recently been focused on Per- and Polyfluorinated Alkyl Substances (PFAS) due to the growing body of evidence indicating that many of these compounds are toxic, bioaccumulative, and persistent in the environment. Advances in analytical chemistry have play...

  18. Towards Real-time, On-board, Hardware-Supported Sensor and Software Health Management for Unmanned Aerial Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Rozier, Kristin Y.; Reinbacher, Thomas; Mengshoel, Ole J.; Mbaya, Timmy; Ippolito, Corey

    2013-01-01

    Unmanned aerial systems (UASs) can only be deployed if they can effectively complete their missions and respond to failures and uncertain environmental conditions while maintaining safety with respect to other aircraft as well as humans and property on the ground. In this paper, we design a real-time, on-board system health management (SHM) capability to continuously monitor sensors, software, and hardware components for detection and diagnosis of failures and violations of safety or performance rules during the flight of a UAS. Our approach to SHM is three-pronged, providing: (1) real-time monitoring of sensor and/or software signals; (2) signal analysis, preprocessing, and advanced on-the-fly temporal and Bayesian probabilistic fault diagnosis; (3) an unobtrusive, lightweight, read-only, low-power realization using Field Programmable Gate Arrays (FPGAs) that avoids overburdening limited computing resources or costly re-certification of flight software due to instrumentation. Our implementation provides a novel approach of combining modular building blocks, integrating responsive runtime monitoring of temporal logic system safety requirements with model-based diagnosis and Bayesian network-based probabilistic analysis. We demonstrate this approach using actual data from the NASA Swift UAS, an experimental all-electric aircraft.

  19. A holistic approach to managing a patient with heart failure.

    PubMed

    Duncan, Alison; Cunnington, Colin

    2013-03-01

    Despite varied and complex therapeutic strategies for managing patients with heart failure, the prognosis may remain poor in certain groups. Recognition that patients with heart failure frequently require input from many care groups formed the basis of The British Society of Heart Failure Annual Autumn Meeting in London (UK), in November 2012, entitled: 'Heart failure: a multidisciplinary approach'. Experts in cardiology, cardiac surgery, general practice, care of the elderly, palliative care and cardiac imaging shared their knowledge and expertise. The 2-day symposium was attended by over 500 participants from the UK, Europe and North America, and hosted physicians, nurses, scientists, trainees and representatives from the industry, as well as patient and community groups. The symposium, accredited by the Royal College of Physicians and the Royal College of Nursing, focused on the multidisciplinary approach to heart failure, in particular, current therapeutic advances, cardiac remodeling, palliative care, atrial fibrillation, heart rate-lowering therapies, management of acute heart failure and the management of patients with mitral regurgitation and heart failure.

  20. Failure modes and effects criticality analysis and accelerated life testing of LEDs for medical applications

    NASA Astrophysics Data System (ADS)

    Sawant, M.; Christou, A.

    2012-12-01

    While the use of LEDs in fiber optics and lighting applications is common, their use in medical diagnostic applications is not very extensive. Since the precise value of light intensity will be used to interpret patient results, understanding failure modes [1-4] is very important. We used the Failure Modes and Effects Criticality Analysis (FMECA) tool to identify the critical failure modes of the LEDs. FMECA involves identification of the various failure modes, their effects on the system (LED optical output in this context), their frequency of occurrence, their severity and the criticality of the failure modes. The competing failure modes/mechanisms were degradation of: the active layer (where electron-hole recombination occurs to emit light), the electrodes (providing electrical contact to the semiconductor chip), the Indium Tin Oxide (ITO) surface layer (used to improve current spreading and light extraction), the plastic encapsulation (protective polymer layer), and packaging failures (bond wires, heat sink separation). A FMECA table is constructed, and the criticality is calculated by estimating the failure effect probability (β), the failure mode ratio (α), the failure rate (λ) and the operating time. Once the critical failure modes were identified, the next steps were generating prior time-to-failure distributions and comparing them with our accelerated life test data. To generate the prior distributions, data and results from previous investigations [5-33], in which reliability test results of similar LEDs were reported, were utilized. From the graphs or tabular data, we extracted the time required for the optical power output to reach 80% of its initial value; this is our failure criterion for the medical diagnostic application. Analysis of published data for different LED materials (AlGaInP, GaN, AlGaAs), semiconductor structures (DH, MQW) and modes of testing (DC, pulsed) was carried out, with the data categorized according to materials system and LED structure, such as AlGaInP-DH-DC, AlGaInP-MQW-DC, and GaN-DH-DC. Although the reported testing was carried out at different temperatures and currents, the reported data were converted to the application conditions of the present medical environment. Comparisons between the model data and the accelerated test results carried out in the present work are reported. Accelerating-agent modeling and regression analysis were also carried out: we used the Inverse Power Law model with the current density J as the accelerating agent and the Arrhenius model with temperature as the accelerating agent. Finally, our methodology is presented as an approach for analyzing LED suitability for the target medical diagnostic applications.
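
    The criticality computation described above follows the familiar MIL-STD-1629-style product of the listed factors; a small sketch with hypothetical values for a single failure mode (not the paper's data):

        def mode_criticality(beta, alpha, lam, t):
            """Failure-mode criticality Cm = beta * alpha * lambda * t, with
            beta = failure effect probability, alpha = failure mode ratio,
            lam = part failure rate (failures/hour), t = operating hours."""
            return beta * alpha * lam * t

        # Hypothetical: active-layer degradation of an LED
        cm = mode_criticality(beta=1.0, alpha=0.4, lam=2e-7, t=50_000)
        print(f"Cm = {cm:.4f}")  # expected number of failures from this mode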

  1. The geomechanical strength of carbonate rock in Kinta valley, Ipoh, Perak Malaysia

    NASA Astrophysics Data System (ADS)

    Mazlan, Nur Amanina; Lai, Goh Thian; Razib, Ainul Mardhiyah Mohd; Rafek, Abdul Ghani; Serasa, Ailie Sofyiana; Simon, Norbert; Surip, Noraini; Ern, Lee Khai; Mohamed, Tuan Rusli

    2018-04-01

    The stability of both rock cuts and underground openings is influenced by the geomechanical strength of the rock materials, while the strength characteristics are influenced by both material characteristics and the degree of weathering. This paper presents a systematic approach to quantifying the rock material strength characteristics of carbonate rocks for material failure and for material & discontinuities failure, using the uniaxial compressive strength, the point load strength index and the Brazilian tensile strength. Statistical analysis of the results at the 95 percent confidence level showed that the mean uniaxial compressive strengths for material failure and for material & discontinuities failure were 76.8 ± 4.5 MPa and 41.2 ± 4.1 MPa, with standard deviations of 15.2 and 6.5 MPa, respectively. The point load strength indices for material failure and material & discontinuities failure were 3.1 ± 0.2 MPa and 1.8 ± 0.3 MPa, with standard deviations of 0.9 and 0.6 MPa, respectively. The Brazilian tensile strengths for material failure and material & discontinuities failure were 7.1 ± 0.3 MPa and 4.1 ± 0.3 MPa, with standard deviations of 1.4 and 0.6 MPa, respectively. These results reveal that the geomechanical strength of carbonate rock material under material & discontinuities failure deteriorates to approximately half of that for material failure.

  2. Fault tree analysis for integrated and probabilistic risk analysis of drinking water systems.

    PubMed

    Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof

    2009-04-01

    Drinking water systems are vulnerable and subject to a wide range of risks. To avoid sub-optimisation of risk-reduction options, risk analyses need to include the entire drinking water system, from source to tap. Such an integrated approach demands tools that are able to model interactions between different events. Fault tree analysis is a risk estimation tool with the ability to model interactions between events. Using fault tree analysis on an integrated level, a probabilistic risk analysis of a large drinking water system in Sweden was carried out. The primary aims of the study were: (1) to develop a method for integrated and probabilistic risk analysis of entire drinking water systems; and (2) to evaluate the applicability of Customer Minutes Lost (CML) as a measure of risk. The analysis included situations where no water is delivered to the consumer (quantity failure) and situations where water is delivered but does not comply with water quality standards (quality failure). Hard data as well as expert judgements were used to estimate probabilities of events and uncertainties in the estimates. The calculations were performed using Monte Carlo simulations. CML is shown to be a useful measure of risks associated with drinking water systems. The method presented provides information on risk levels, probabilities of failure, failure rates and downtimes of the system. This information is available for the entire system as well as its different sub-systems. Furthermore, the method enables comparison of the results with performance targets and acceptable levels of risk. The method thus facilitates integrated risk analysis and consequently helps decision-makers to minimise sub-optimisation of risk-reduction options.
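
    As a toy illustration of the Monte Carlo evaluation of a fault tree with uncertain basic-event probabilities (the structure and distributions below are hypothetical and far simpler than the study's source-to-tap tree):

        import numpy as np

        rng = np.random.default_rng(1)
        N = 100_000  # Monte Carlo samples of the uncertain event probabilities

        # Basic events with uncertainty (beta distributions as an example):
        p_source = rng.beta(2, 200, N)   # raw-water source failure
        p_plant = rng.beta(2, 400, N)    # treatment plant failure
        p_dist = rng.beta(2, 300, N)     # distribution network failure

        # OR gate: quantity failure if any subsystem fails (independence assumed)
        p_top = 1 - (1 - p_source) * (1 - p_plant) * (1 - p_dist)
        print(f"top event: mean={p_top.mean():.4f}, "
              f"90% interval=({np.quantile(p_top, 0.05):.4f}, "
              f"{np.quantile(p_top, 0.95):.4f})")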

  3. New medicinal products for chronic heart failure: advances in clinical trial design and efficacy assessment.

    PubMed

    Cowie, Martin R; Filippatos, Gerasimos S; Alonso Garcia, Maria de Los Angeles; Anker, Stefan D; Baczynska, Anna; Bloomfield, Daniel M; Borentain, Maria; Bruins Slot, Karsten; Cronin, Maureen; Doevendans, Pieter A; El-Gazayerly, Amany; Gimpelewicz, Claudio; Honarpour, Narimon; Janmohamed, Salim; Janssen, Heidi; Kim, Albert M; Lautsch, Dominik; Laws, Ian; Lefkowitz, Martin; Lopez-Sendon, Jose; Lyon, Alexander R; Malik, Fady I; McMurray, John J V; Metra, Marco; Figueroa Perez, Santiago; Pfeffer, Marc A; Pocock, Stuart J; Ponikowski, Piotr; Prasad, Krishna; Richard-Lordereau, Isabelle; Roessig, Lothar; Rosano, Giuseppe M C; Sherman, Warren; Stough, Wendy Gattis; Swedberg, Karl; Tyl, Benoit; Zannad, Faiez; Boulton, Caroline; De Graeff, Pieter

    2017-06-01

    Despite the availability of a number of different classes of therapeutic agents with proven efficacy in heart failure, the clinical course of heart failure patients is characterized by a reduction in life expectancy, a progressive decline in health-related quality of life and functional status, as well as a high risk of hospitalization. New approaches are needed to address the unmet medical needs of this patient population. The European Medicines Agency (EMA) is undertaking a revision of its Guideline on Clinical Investigation of Medicinal Products for the Treatment of Chronic Heart Failure. The draft version of the Guideline was released for public consultation in January 2016. The Cardiovascular Round Table of the European Society of Cardiology (ESC), in partnership with the Heart Failure Association of the ESC, convened a dedicated two-day workshop to discuss three main topic areas of major interest in the field and addressed in this draft EMA guideline: (i) assessment of efficacy (i.e. endpoint selection and statistical analysis); (ii) clinical trial design (i.e. issues pertaining to patient population, optimal medical therapy, run-in period); and (iii) research approaches for testing novel therapeutic principles (i.e. cell therapy). This paper summarizes the key outputs from the workshop, reviews areas of expert consensus, and identifies gaps that require further research or discussion. Collaboration between regulators, industry, clinical trialists, cardiologists, health technology assessment bodies, payers, and patient organizations is critical to address the ongoing challenge of heart failure and to ensure the development and market access of new therapeutics in a scientifically robust, practical and safe way. © 2017 The Authors. European Journal of Heart Failure © 2017 European Society of Cardiology.

  4. A Framework for Final Drive Simultaneous Failure Diagnosis Based on Fuzzy Entropy and Sparse Bayesian Extreme Learning Machine

    PubMed Central

    Ye, Qing; Pan, Hao; Liu, Changhua

    2015-01-01

    This research proposes a novel framework for final drive simultaneous failure diagnosis comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. In the feature extraction module, the wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probabilistic classifiers based on paired sparse Bayesian extreme learning machines, which are trained only on single failure modes and inherit the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold, which converts the probability outputs obtained from the classifiers into final simultaneous failure modes, this research proposes using samples containing both single and simultaneous failure modes together with a grid search method, which is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to those of the existing approaches. PMID:25722717
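
    The recognition step reduces to thresholding per-mode probabilities into a declared set of simultaneous failure modes; a minimal sketch (hypothetical mode names, probabilities and threshold, not the trained classifiers):

        def decide_modes(probs, threshold):
            """Convert per-failure-mode probabilities into the set of
            simultaneously declared modes via a common decision threshold."""
            return [mode for mode, p in probs.items() if p >= threshold]

        # Hypothetical paired-classifier outputs for one sample:
        probs = {"gear wear": 0.81, "bearing fault": 0.64,
                 "shaft misalignment": 0.12}
        print(decide_modes(probs, threshold=0.5))
        # -> ['gear wear', 'bearing fault'] declared as a simultaneous failure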

  5. Controllability Analysis for Multirotor Helicopter Rotor Degradation and Failure

    NASA Astrophysics Data System (ADS)

    Du, Guang-Xun; Quan, Quan; Yang, Binxian; Cai, Kai-Yuan

    2015-05-01

    This paper considers the controllability analysis problem for a class of multirotor systems subject to rotor failure/wear. It is shown that classical controllability theories of linear systems are not sufficient to test the controllability of the considered multirotors. Owing to this, an easy-to-use measurement index is introduced to assess the available control authority. Based on it, a new necessary and sufficient condition for the controllability of multirotors is derived. Furthermore, a controllability test procedure is developed. The proposed controllability test method is applied to a class of hexacopters with different rotor configurations and different rotor efficiency parameters to show its effectiveness. The analysis results show that hexacopters with different rotor configurations have different fault-tolerant capabilities. It is therefore necessary to test the controllability of multirotors before any fault-tolerant control strategies are employed.

  6. A Cross-Cultural Comparison of Symptom Reporting and Symptom Clusters in Heart Failure.

    PubMed

    Park, Jumin; Johantgen, Mary E

    2017-07-01

    An understanding of symptoms in heart failure (HF) among different cultural groups has become increasingly important. The purpose of this study was to compare symptom reporting and symptom clusters in HF patients between a Western (the United States) and an Eastern Asian sample (China and Taiwan). A secondary analysis of a cross-sectional observational study was conducted. The data were obtained from matched HF patient samples from the United States and China/Taiwan (N = 240 in each). Eight selected items related to HF symptoms from the Minnesota Living with Heart Failure Questionnaire were analyzed. Compared with the U.S. sample, HF patients from China/Taiwan reported a lower level of symptom distress. Analysis of the two regional groups using a latent class approach did not result in the same number of clusters: the United States (four classes) and China/Taiwan (three classes). The study demonstrated that symptom reporting and the identification of symptom clusters might be influenced by cultural factors.

  7. Using Failure Mode and Effects Analysis to design a comfortable automotive driver seat.

    PubMed

    Kolich, Mike

    2014-07-01

    Given enough time and use, all designs will fail. There are no fail-free designs. This is especially true when it comes to automotive seating comfort where the characteristics and preferences of individual customers are many and varied. To address this problem, individuals charged with automotive seating comfort development have, traditionally, relied on iterative and, as a result, expensive build-test cycles. Cost pressures being placed on today's vehicle manufacturers have necessitated the search for more efficient alternatives. This contribution aims to fill this need by proposing the application of an analytical technique common to engineering circles (but new to seating comfort development), namely Design Failure Mode and Effects Analysis (DFMEA). An example is offered to describe how development teams can use this systematic and disciplined approach to highlight potential seating comfort failure modes, reduce their risk, and bring capable designs to life. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  8. An Improved Design Methodology for Modeling Thick-Section Composite Structures Using a Multiscale Approach

    DTIC Science & Technology

    2012-09-01

    Helius was developed as a user material subroutine for ABAQUS and ANSYS (9). Through an ABAQUS plug-in and graphical interface, a ... incorporated into an ABAQUS subroutine and compared to experimental data. Xie and Biggers (18) look at the effect of the width-to-hole-diameter ratio on open-hole ... "smearing-unsmearing" approach, nonlinear anisotropy, and progressive failure analysis into ABAQUS. The subroutine UMAT is used to define the ...

  9. Bruxism and dental implant failures: a multilevel mixed effects parametric survival analysis approach.

    PubMed

    Chrcanovic, B R; Kisch, J; Albrektsson, T; Wennerberg, A

    2016-11-01

    Recent studies have suggested that the insertion of dental implants in patients diagnosed with bruxism negatively affects implant failure rates. The aim of the present study was to investigate the association between bruxism and the risk of dental implant failure. This retrospective study is based on 2670 patients who received 10 096 implants at one specialist clinic. Implant- and patient-related data were collected. Descriptive statistics were used to describe the patients and implants. Multilevel mixed effects parametric survival analysis was used to test the association between bruxism and the risk of implant failure, adjusting for several potential confounders. Criteria from a recent international consensus (Lobbezoo et al., J Oral Rehabil, 40, 2013, 2) and from the International Classification of Sleep Disorders (International classification of sleep disorders, revised: diagnostic and coding manual, American Academy of Sleep Medicine, Chicago, 2014) were used to define and diagnose the condition. The number of implants with information available for all variables totalled 3549, placed in 994 patients, with 179 implants reported as failures. The implant failure rates were 13.0% (24/185) for bruxers and 4.6% (155/3364) for non-bruxers (P < 0.001). The statistical model showed that bruxism was a statistically significant risk factor for implant failure (HR 3.396; 95% CI 1.314, 8.777; P = 0.012), as were implant length, implant diameter, implant surface, bone quantity D in relation to quantity A, bone quality 4 in relation to quality 1 (Lekholm and Zarb classification), smoking, and the intake of proton pump inhibitors. It is suggested that bruxism may be associated with an increased risk of dental implant failure. © 2016 John Wiley & Sons Ltd.

  10. Statistical analysis of early failures in electromigration

    NASA Astrophysics Data System (ADS)

    Gall, M.; Capasso, C.; Jawarani, D.; Hernandez, R.; Kawasaki, H.; Ho, P. S.

    2001-07-01

    The detection of early failures in electromigration (EM) and the complicated statistical nature of this important reliability phenomenon have been difficult issues to treat in the past. A satisfactory experimental approach for the detection and the statistical analysis of early failures has not yet been established. This is mainly due to the rare occurrence of early failures and difficulties in testing of large sample populations. Furthermore, experimental data on the EM behavior as a function of varying number of failure links are scarce. In this study, a technique utilizing large interconnect arrays in conjunction with the well-known Wheatstone Bridge is presented. Three types of structures with a varying number of Ti/TiN/Al(Cu)/TiN-based interconnects were used, starting from a small unit of five lines in parallel. A serial arrangement of this unit enabled testing of interconnect arrays encompassing 480 possible failure links. In addition, a Wheatstone Bridge-type wiring using four large arrays in each device enabled simultaneous testing of 1920 interconnects. In conjunction with a statistical deconvolution to the single interconnect level, the results indicate that the electromigration failure mechanism studied here follows perfect lognormal behavior down to the four sigma level. The statistical deconvolution procedure is described in detail. Over a temperature range from 155 to 200 °C, a total of more than 75 000 interconnects were tested. None of the samples have shown an indication of early, or alternate, failure mechanisms. The activation energy of the EM mechanism studied here, namely the Cu incubation time, was determined to be Q=1.08±0.05 eV. We surmise that interface diffusion of Cu along the Al(Cu) sidewalls and along the top and bottom refractory layers, coupled with grain boundary diffusion within the interconnects, constitutes the Cu incubation mechanism.
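
    The statistical deconvolution to the single-interconnect level rests on the weakest-link identity for a series arrangement of N nominally identical, independent links (a standard result, stated here for context):

        F_N(t) = 1 - \bigl[1 - F_1(t)\bigr]^{N}
        \quad\Longleftrightarrow\quad
        F_1(t) = 1 - \bigl[1 - F_N(t)\bigr]^{1/N}

    so the cumulative failure distribution measured on, for example, a 480-line array can be mapped back to the distribution of a single line, extending the observable range deep into the early-failure (low-percentile) tail.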

  11. Fault Injection Techniques and Tools

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen; Tsai, Timothy K.; Iyer, Ravishankar K.

    1997-01-01

    Dependability evaluation involves the study of failures and errors. The destructive nature of a crash and long error latency make it difficult to identify the causes of failures in the operational environment. It is particularly hard to recreate a failure scenario for a large, complex system. To identify and understand potential failures, we use an experiment-based approach for studying the dependability of a system. Such an approach is applied not only during the conception and design phases, but also during the prototype and operational phases. To take an experiment-based approach, we must first understand a system's architecture, structure, and behavior. Specifically, we need to know its tolerance for faults and failures, including its built-in detection and recovery mechanisms, and we need specific instruments and tools to inject faults, create failures or errors, and monitor their effects.
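
    As a toy illustration of software-implemented fault injection in the spirit described above (a hypothetical bit-flip injector, not one of the tools surveyed): corrupt a computed value and observe whether a built-in check detects the error.

        import random, struct

        def flip_bit(x: float, bit: int) -> float:
            """Inject a single-bit fault into the IEEE-754 encoding of x."""
            (bits,) = struct.unpack("<Q", struct.pack("<d", x))
            (faulty,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
            return faulty

        def guarded_sqrt(x):
            r = x ** 0.5
            r = flip_bit(r, random.randrange(64))  # fault injection point
            # Built-in detection: a cheap inverse check; `not <=` also traps NaN.
            if not abs(r * r - x) <= 1e-6 * max(1.0, abs(x)):
                raise ValueError("error detected")
            return r

        detected = 0
        for _ in range(1000):
            try:
                guarded_sqrt(2.0)
            except ValueError:
                detected += 1
        print(f"{detected}/1000 injected faults detected; "
              "the rest stayed below the check's tolerance")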

  12. Modelling Coastal Cliff Recession Based on the GIM-DDD Method

    NASA Astrophysics Data System (ADS)

    Gong, Bin; Wang, Shanyong; Sloan, Scott William; Sheng, Daichao; Tang, Chun'an

    2018-04-01

    The unpredictable and instantaneous collapse behaviour of coastal rocky cliffs may cause damage that extends significantly beyond the area of failure. Gravitational movements that occur during coastal cliff recession involve two major stages: the small deformation stage and the large displacement stage. In this paper, a method of simulating the entire progressive failure process of coastal rocky cliffs is developed based on the gravity increase method (GIM), the rock failure process analysis method and the discontinuous deformation analysis method, and it is referred to as the GIM-DDD method. The small deformation stage, which includes crack initiation, propagation and coalescence processes, and the large displacement stage, which includes block translation and rotation processes during the rocky cliff collapse, are modelled using the GIM-DDD method. In addition, acoustic emissions, stress field variations, crack propagation and failure mode characteristics are further analysed to provide insights that can be used to predict, prevent and minimize potential economic losses and casualties. The calculation and analytical results are consistent with previous studies, which indicate that the developed method provides an effective and reliable approach for performing rocky cliff stability evaluations and coastal cliff recession analyses and has considerable potential for improving the safety and protection of seaside cliff areas.
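
    A schematic of the gravity increase method (GIM) driver loop, with a hypothetical is_stable(g) check standing in for the coupled RFPA/DDA simulation: gravity is scaled up until the slope fails, and the failure load factor is taken as the factor of safety.

      # Gravity increase method (GIM) driver sketch. The stability check is a
      # hypothetical stand-in for the full numerical model of the cliff.
      G0 = 9.81  # in-situ gravitational acceleration, m/s^2

      def is_stable(g, capacity=25.0):
          """Placeholder: stable while the driving 'load' g stays below a
          capacity. A real implementation would run the simulation at g."""
          return g < capacity

      def gim_safety_factor(step=0.05, max_factor=10.0):
          """Scale gravity upward until failure; the failure load factor
          g_fail/G0 is taken as the factor of safety."""
          factor = 1.0
          while factor <= max_factor:
              if not is_stable(G0 * factor):
                  return factor
              factor += step
          return None  # no failure found within the search range

      print(f"factor of safety ~ {gim_safety_factor():.2f}")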

  13. Finite element modelling of woven composite failure modes at the mesoscopic scale: deterministic versus stochastic approaches

    NASA Astrophysics Data System (ADS)

    Roirand, Q.; Missoum-Benziane, D.; Thionnet, A.; Laiarinandrasana, L.

    2017-09-01

    Textile composites have a complex 3D architecture. To assess the durability of such engineering structures, the failure mechanisms must be highlighted. Examinations of the degradation have been carried out using tomography. The present work addresses a numerical damage model dedicated to the simulation of the crack initiation and propagation at the scale of the warp yarns. For the 3D woven composites under study, loadings in tension and combined tension and bending were considered. Based on an erosion procedure of broken elements, the failure mechanisms have been modelled on 3D periodic cells by finite element calculations. The breakage of one element was determined using a failure criterion at the mesoscopic scale based on the yarn stress at failure. The results were found to be in good agreement with the experimental data for the two kinds of macroscopic loadings. The deterministic approach assumed a homogeneously distributed stress at failure all over the integration points in the meshes of woven composites. A stochastic approach was applied to a simple representative elementary periodic cell. The distribution of the Weibull stress at failure was assigned to the integration points using a Monte Carlo simulation. It was shown that this stochastic approach allowed more realistic failure simulations, avoiding the idealised symmetry due to the deterministic modelling. In particular, the stochastic simulations showed variations in the stress and strain at failure as well as in the failure modes of the yarn.
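
    A minimal sketch of the stochastic strength assignment described above (Weibull parameters are illustrative, not the study's values): failure stresses are drawn per integration point by inverting the Weibull CDF, breaking the artificial symmetry of a uniform strength field.

      # Monte Carlo assignment of Weibull-distributed failure stresses to the
      # integration points of a periodic cell (illustrative parameters).
      import numpy as np

      def weibull_strengths(n_points, sigma_0=2200.0, m=18.0, rng=None):
          """Draw per-integration-point failure stresses (MPa) by inverting
          the Weibull CDF: sigma = sigma_0 * (-ln(1 - U))**(1/m)."""
          rng = rng or np.random.default_rng()
          u = rng.uniform(size=n_points)
          return sigma_0 * (-np.log1p(-u)) ** (1.0 / m)

      strengths = weibull_strengths(n_points=8 * 64, rng=np.random.default_rng(1))
      print(f"min {strengths.min():.0f} MPa, median {np.median(strengths):.0f} MPa")
      # An element is eroded once the yarn-direction stress at one of its
      # integration points exceeds that point's assigned strength.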

  14. Peridynamics for failure and residual strength prediction of fiber-reinforced composites

    NASA Astrophysics Data System (ADS)

    Colavito, Kyle

    Peridynamics is a reformulation of classical continuum mechanics that utilizes integral equations in place of partial differential equations to remove the difficulty in handling discontinuities, such as cracks or interfaces, within a body. Damage is included within the constitutive model; initiation and propagation can occur without resorting to special crack growth criteria necessary in other commonly utilized approaches. Predicting damage and residual strengths of composite materials involves capturing complex, distinct and progressive failure modes. The peridynamic laminate theory correctly predicts the load redistribution in general laminate layups in the presence of complex failure modes through the use of multiple interaction types. This study presents two approaches to obtain the critical peridynamic failure parameters necessary to capture the residual strength of a composite structure. The validity of both approaches is first demonstrated by considering the residual strength of isotropic materials. The peridynamic theory is used to predict the crack growth and final failure load in both a diagonally loaded square plate with a center crack, as well as a four-point shear specimen subjected to asymmetric loading. This study also establishes the validity of each approach by considering composite laminate specimens in which each failure mode is isolated. Finally, the failure loads and final failure modes are predicted in a laminate with various hole diameters subjected to tensile and compressive loads.
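
    A minimal bond-based peridynamics fragment (a generic textbook form, not the laminate theory of this study): a bond between two material points breaks when its stretch exceeds a critical value, and local damage is the fraction of broken bonds at a point.

      # Bond-based peridynamic damage sketch: a bond fails when its stretch
      # s = (|xi + eta| - |xi|) / |xi| exceeds a critical stretch s0.
      import numpy as np

      def bond_stretch(x_i, x_j, u_i, u_j):
          xi = x_j - x_i                      # reference bond vector
          eta = u_j - u_i                     # relative displacement
          L0 = np.linalg.norm(xi)
          return (np.linalg.norm(xi + eta) - L0) / L0

      def update_damage(points, disp, neighbors, intact, s0=0.01):
          """Break bonds past the critical stretch; return per-point damage
          (fraction of initially intact bonds now broken)."""
          damage = np.zeros(len(points))
          for i, nbrs in enumerate(neighbors):
              for k, j in enumerate(nbrs):
                  if intact[i][k] and bond_stretch(points[i], points[j],
                                                   disp[i], disp[j]) > s0:
                      intact[i][k] = False
              damage[i] = 1.0 - np.mean(intact[i])
          return damage

      # two points stretched apart beyond s0 -> the bond breaks
      pts = np.array([[0.0, 0.0], [1.0, 0.0]])
      disp = np.array([[0.0, 0.0], [0.02, 0.0]])
      print(update_damage(pts, disp, neighbors=[[1], [0]],
                          intact=[[True], [True]], s0=0.01))   # [1. 1.]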

  15. WE-G-BRA-08: Failure Modes and Effects Analysis (FMEA) for Gamma Knife Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Bhatnagar, J; Bednarz, G

    2015-06-15

    Purpose: To perform a failure modes and effects analysis (FMEA) study for Gamma Knife (GK) radiosurgery processes at our institution based on our experience with the treatment of more than 13,000 patients. Methods: A team consisting of medical physicists, nurses, radiation oncologists, neurosurgeons at the University of Pittsburgh Medical Center and an external physicist expert was formed for the FMEA study. A process tree and a failure mode table were created for the GK procedures using the Leksell GK Perfexion and 4C units. Three scores for the probability of occurrence (O), the severity (S), and the probability of no detection (D) were assigned to each failure mode by each professional on a scale from 1 to 10. The risk priority number (RPN) for each failure mode was then calculated (RPN = O×S×D) using the average scores from all data sets collected. Results: The established process tree for GK radiosurgery consists of 10 sub-processes and 53 steps, including a sub-process for frame placement and 11 steps that are directly related to the frame-based nature of the GK radiosurgery. Out of the 86 failure modes identified, 40 failure modes are GK specific, caused by the potential for inappropriate use of the radiosurgery head frame, the imaging fiducial boxes, the GK helmets and plugs, and the GammaPlan treatment planning system. The other 46 failure modes are associated with the registration, imaging, image transfer, and contouring processes that are common for all radiation therapy techniques. The failure modes with the highest hazard scores are related to imperfect frame adaptor attachment, bad fiducial box assembly, overlooked target areas, inaccurate previous treatment information and excessive patient movement during MRI scan. Conclusion: The implementation of the FMEA approach for Gamma Knife radiosurgery enabled deeper understanding of the overall process among all professionals involved in the care of the patient and helped identify potential weaknesses in the overall process.
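
    A minimal risk-priority-number bookkeeping sketch matching the scoring described above (failure modes and scores are invented examples): each rater assigns O, S and D on a 1-10 scale, scores are averaged across raters, and RPN = O×S×D ranks the failure modes.

      # FMEA risk priority numbers: average each rater's O, S, D scores per
      # failure mode, then rank by RPN = O * S * D. Data are invented examples.
      from statistics import mean

      # failure mode -> list of (O, S, D) triples, one per professional
      scores = {
          "frame adaptor attachment": [(4, 9, 6), (5, 8, 7), (4, 9, 5)],
          "fiducial box assembly":    [(3, 9, 5), (4, 8, 6), (3, 9, 6)],
          "patient movement in MRI":  [(5, 7, 6), (6, 7, 5), (5, 8, 6)],
      }

      def rpn(triples):
          o = mean(t[0] for t in triples)
          s = mean(t[1] for t in triples)
          d = mean(t[2] for t in triples)
          return o * s * d

      for mode, triples in sorted(scores.items(), key=lambda kv: -rpn(kv[1])):
          print(f"{mode:28s} RPN = {rpn(triples):6.1f}")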

  16. Advanced Fault Diagnosis Methods in Molecular Networks

    PubMed Central

    Habibi, Iman; Emamian, Effat S.; Abdi, Ali

    2014-01-01

    Analysis of the failure of cell signaling networks is an important topic in systems biology and has applications in target discovery and drug development. In this paper, some advanced methods for fault diagnosis in signaling networks are developed and then applied to a caspase network and an SHP2 network. The goal is to understand how, and to what extent, the dysfunction of molecules in a network contributes to the failure of the entire network. Network dysfunction (failure) is defined as failure to produce the expected outputs in response to the input signals. The vulnerability level of a molecule is defined as the probability of network failure when the molecule is dysfunctional. In this study, a method to calculate the vulnerability level of single molecules for different combinations of input signals is developed. Furthermore, a more complex yet biologically meaningful method for calculating the multi-fault vulnerability levels is suggested, in which two or more molecules are simultaneously dysfunctional. Finally, a method is developed for fault diagnosis of networks based on a ternary logic model, which considers three activity levels for a molecule instead of the previously published binary logic model, and provides equations for the vulnerabilities of molecules in a ternary framework. Multi-fault analysis shows that the pairs of molecules with high vulnerability typically include a highly vulnerable molecule identified by the single fault analysis. The ternary fault analysis for the caspase network shows that predictions obtained using the more complex ternary model are about the same as the predictions of the simpler binary approach. This study suggests that as the number of activity levels increases, the complexity of the model grows; however, the predictive power of the ternary model does not appear to be increased proportionally. PMID:25290670
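
    A small sketch of the single-fault vulnerability computation under the binary logic model, on an invented three-gate toy network rather than the caspase or SHP2 networks: the vulnerability of a molecule is the fraction of input patterns for which clamping it to a dysfunctional state changes the network output.

      # Single-fault vulnerability in a toy Boolean signaling network.
      # Network (invented): out = (a AND b) OR (NOT c).
      from itertools import product

      def network(a, b, c, fault=None):
          g1 = 0 if fault == "g1" else (a & b)     # clamp a faulty node to 0
          g2 = 0 if fault == "g2" else (1 - c)
          return 0 if fault == "out" else (g1 | g2)

      def vulnerability(node):
          """Probability (over uniform inputs) that the faulty network
          output differs from the fault-free output."""
          diffs = sum(network(a, b, c) != network(a, b, c, fault=node)
                      for a, b, c in product((0, 1), repeat=3))
          return diffs / 8.0

      for node in ("g1", "g2", "out"):
          print(node, vulnerability(node))   # 0.125, 0.375, 0.625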

  17. Current trends and outcomes of breast reconstruction following nipple-sparing mastectomy: results from a national multicentric registry with 1006 cases over a 6-year period.

    PubMed

    Casella, Donato; Calabrese, Claudio; Orzalesi, Lorenzo; Gaggelli, Ilaria; Cecconi, Lorenzo; Santi, Caterina; Murgo, Roberto; Rinaldi, Stefano; Regolo, Lea; Amanti, Claudio; Roncella, Manuela; Serra, Margherita; Meneghini, Graziano; Bortolini, Massimiliano; Altomare, Vittorio; Cabula, Carlo; Catalano, Francesca; Cirilli, Alfredo; Caruso, Francesco; Lazzaretti, Maria Grazia; Meattini, Icro; Livi, Lorenzo; Cataliotti, Luigi; Bernini, Marco

    2017-05-01

    Reconstruction options following nipple-sparing mastectomy (NSM) are diverse and not yet investigated with level IA evidence. The analysis of surgical and oncological outcomes of NSM from the Italian National Registry shows its safety and wide acceptance both for prophylactic and therapeutic cases. A further in-depth analysis of the reconstructive approaches with their trend over time and their failures is the aim of this study. Data extraction from the National Database was performed restricting cases to the 2009-2014 period. Different reconstruction procedures were analyzed in terms of their distribution over time and with respect to specific indications. A 1-year minimum follow-up was conducted to assess unsuccessful reconstructive events. Univariate and multivariate analyses were performed to investigate the causes of both prosthetic and autologous failures. 913 patients, for a total of 1006 procedures, are included in the analysis. A prosthetic only reconstruction is accomplished in 92.2 % of cases, while pure autologous tissues are employed in 4.2 % and a hybrid (prosthetic plus autologous) in 3.6 %. Direct-to-implant (DTI) reaches 48.7 % of all reconstructions in the year 2014. Prophylactic NSMs have a DTI reconstruction in 35.6 % of cases and an autologous tissue flap in 12.9 % of cases. Failures are 2.7 % overall: 0 % in pure autologous flaps and 9.1 % in hybrid cases. Significant risk factors for failure are diabetes and previous radiation therapy of the operated breast. Reconstruction following NSM is mostly prosthetic in Italy, with DTI gaining large acceptance over time. Failure rates are low; at multivariate analysis, failures occur in diabetic and irradiated patients.

  18. Compounding effects of sea level rise and fluvial flooding.

    PubMed

    Moftakhari, Hamed R; Salvadori, Gianfausto; AghaKouchak, Amir; Sanders, Brett F; Matthew, Richard A

    2017-09-12

    Sea level rise (SLR), a well-documented and urgent aspect of anthropogenic global warming, threatens population and assets located in low-lying coastal regions all around the world. Common flood hazard assessment practices typically account for one driver at a time (e.g., either fluvial flooding only or ocean flooding only), whereas coastal cities vulnerable to SLR are at risk for flooding from multiple drivers (e.g., extreme coastal high tide, storm surge, and river flow). Here, we propose a bivariate flood hazard assessment approach that accounts for compound flooding from river flow and coastal water level, and we show that a univariate approach may not appropriately characterize the flood hazard if there are compounding effects. Using copulas and bivariate dependence analysis, we also quantify the increases in failure probabilities for 2030 and 2050 caused by SLR under representative concentration pathways 4.5 and 8.5. Additionally, the increase in failure probability is shown to be strongly affected by compounding effects. The proposed failure probability method offers an innovative tool for assessing compounding flood hazards in a warming climate.
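
    A hedged sketch of the bivariate failure-probability idea using a Gaussian copula (the study fits models to data; the marginals, dependence and thresholds below are invented): positive dependence between river flow and coastal level inflates the probability that both drivers exceed their thresholds, relative to an independence assumption.

      # Bivariate compound-flooding failure probability via a Gaussian copula.
      # Marginals, dependence and design thresholds are invented examples.
      from scipy import stats

      rho = 0.6                                        # invented dependence
      flow = stats.gumbel_r(loc=500.0, scale=150.0)    # river discharge marginal
      level = stats.gumbel_r(loc=1.0, scale=0.3)       # coastal level marginal
      q, h = 900.0, 1.8                                # invented thresholds

      u, v = flow.cdf(q), level.cdf(h)
      biv = stats.multivariate_normal(mean=[0.0, 0.0],
                                      cov=[[1.0, rho], [rho, 1.0]])
      c_uv = biv.cdf([stats.norm.ppf(u), stats.norm.ppf(v)])   # C(u, v)

      p_either = 1.0 - c_uv            # P(Q > q or H > h)
      p_both = 1.0 - u - v + c_uv      # P(Q > q and H > h)
      print(f"either: {p_either:.4f}, both: {p_both:.4f}, "
            f"both if independent: {(1 - u) * (1 - v):.4f}")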

  19. Impact of distributed energy resources on the reliability of a critical telecommunications facility.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, David; Zuffranieri, Jason V.; Atcitty, Christopher B.

    2006-03-01

    This report documents a probabilistic risk assessment of an existing power supply system at a large telecommunications office. The focus is on characterizing the increase in the reliability of power supply through the use of two alternative power configurations. Telecommunications has been identified by the Department of Homeland Security as a critical infrastructure to the United States. Failures in the power systems supporting major telecommunications service nodes are a main contributor to major telecommunications outages. A logical approach to improve the robustness of telecommunication facilities would be to increase the depth and breadth of technologies available to restore power in the face of power outages. Distributed energy resources such as fuel cells and gas turbines could provide one more onsite electric power source to provide backup power, if batteries and diesel generators fail. The analysis is based on a hierarchical Bayesian approach and focuses on the failure probability associated with each of three possible facility configurations, along with assessment of the uncertainty or confidence level in the probability of failure. A risk-based characterization of final best configuration is presented.
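
    A minimal Bayesian reliability fragment (the report's hierarchical model is more elaborate; this is the simplest conjugate version): a Beta prior on the per-demand failure probability, updated with observed counts, yields both a point estimate and a credible interval for each candidate configuration.

      # Conjugate Beta-Binomial update for a per-demand failure probability,
      # with a credible interval expressing confidence in the estimate.
      from scipy import stats

      def posterior(failures, demands, a0=0.5, b0=0.5):
          """Jeffreys prior Beta(0.5, 0.5) updated with observed counts."""
          return stats.beta(a0 + failures, b0 + demands - failures)

      # invented counts for three candidate power configurations
      for name, (fails, demands) in {
          "baseline (battery + diesel)": (4, 200),
          "baseline + fuel cell":        (2, 200),
          "baseline + gas turbine":      (1, 200),
      }.items():
          post = posterior(fails, demands)
          lo, hi = post.ppf([0.05, 0.95])
          print(f"{name:30s} mean={post.mean():.4f}  90% CI=({lo:.4f}, {hi:.4f})")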

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnham, A K; Weese, R K; Adrzejewski, W J

    Accelerated aging tests play an important role in assessing the lifetime of manufactured products. There are two basic approaches to lifetime qualification. One tests a product to failure over a range of accelerated conditions to calibrate a model, which is then used to calculate the failure time for conditions of use. A second approach is to test a component to a lifetime-equivalent dose (thermal or radiation) to see if it still functions to specification. Both methods have their advantages and limitations. A disadvantage of the second method is that one does not know how close one is to incipient failure. This limitation can be mitigated by testing to some higher level of dose as a safety margin, but having a predictive model of failure via the first approach provides an additional measure of confidence. Even so, proper calibration of a failure model is non-trivial, and the extrapolated failure predictions are only as good as the model and the quality of the calibration. This paper outlines results for predicting the potential failure point of a system involving a mixture of two energetic materials, HMX (nitramine octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) and CP (2-(5-cyanotetrazalato) pentaammine cobalt (III) perchlorate). Global chemical kinetic models for the two materials individually and as a mixture are developed and calibrated from a variety of experiments. These include traditional thermal analysis experiments run on time scales from hours to a couple of days, detonator aging experiments with exposures up to 50 months, and sealed-tube aging experiments for up to 5 years. Decomposition kinetics are determined for HMX and CP separately and together. For high levels of thermal stress, the two materials decompose faster as a mixture than individually. This effect is observed both in high-temperature thermal analysis experiments and in long-term thermal aging experiments. An Arrhenius plot of the 10% level of HMX decomposition by itself from a diverse set of experiments is linear from 120 to 260 C, with an apparent activation energy of 165 kJ/mol. Similar but less extensive thermal analysis data for the mixture suggest a slightly lower activation energy for the mixture, and an analogous extrapolation is consistent with the amount of gas observed in the long-term detonator aging experiments, which is about 30 times greater than expected from HMX by itself for 50 months at 100 C. Even with this acceleration, however, it would take approximately 10,000 years to achieve 10% decomposition at approximately 30 C. Correspondingly, negligible decomposition is predicted by this kinetic model for a few decades of aging at temperatures slightly above ambient. This prediction is consistent with additional sealed-tube aging experiments at 100-120 C, which are estimated to have an effective thermal dose greater than that from decades of exposure to temperatures slightly above ambient.
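
    A small Arrhenius extrapolation sketch in the spirit of the abstract (the 165 kJ/mol activation energy is taken from the text; anchoring the curve at the approximate 10,000-year/30 C figure is an assumption for illustration): the time to reach a fixed decomposition level scales as exp(Ea/RT).

      # Arrhenius scaling of time-to-10%-decomposition between temperatures.
      import math

      R = 8.314      # J/(mol K)
      EA = 165e3     # J/mol, apparent activation energy from the abstract

      def scale_time(t_ref, T_ref_C, T_C, ea=EA):
          """t(T) = t_ref * exp(Ea/R * (1/T - 1/T_ref)), T in kelvin."""
          T_ref, T = T_ref_C + 273.15, T_C + 273.15
          return t_ref * math.exp(ea / R * (1.0 / T - 1.0 / T_ref))

      t30 = 10_000.0 * 8766.0    # anchor: ~10,000 years at ~30 C, in hours
      for T in (40.0, 100.0, 120.0):
          print(f"time to 10% decomposition at {T:.0f} C: "
                f"{scale_time(t30, 30.0, T):,.0f} h")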

  1. Reconfigurable Flight Control Using Nonlinear Dynamic Inversion with a Special Accelerometer Implementation

    NASA Technical Reports Server (NTRS)

    Bacon, Barton J.; Ostroff, Aaron J.

    2000-01-01

    This paper presents an approach to on-line control design for aircraft that have suffered either actuator failure, missing effector surfaces, surface damage, or any combination. The approach is based on a modified version of nonlinear dynamic inversion. The approach does not require a model of the baseline vehicle (effectors at zero deflection), but does require feedback of accelerations and effector positions. Implementation issues are addressed and the method is demonstrated on an advanced tailless aircraft. An experimental simulation analysis tool is used to directly evaluate the nonlinear system's stability robustness.
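
    A schematic of the acceleration-feedback inversion idea (a generic incremental form, not the paper's exact control law): the effector increment is computed from the acceleration error through a pseudo-inverse of an estimated control-effectiveness matrix, so no model of the baseline vehicle is required and a failed effector simply appears as a degraded column.

      # Incremental dynamic-inversion step from measured accelerations:
      # delta_u = pinv(B_hat) @ (a_cmd - a_meas). Generic sketch; B_hat is
      # an assumed control-effectiveness estimate, not the paper's values.
      import numpy as np

      def ndi_step(u_prev, a_cmd, a_meas, B_hat, u_limits):
          """One control update: drive measured acceleration to the command."""
          delta_u = np.linalg.pinv(B_hat) @ (a_cmd - a_meas)
          return np.clip(u_prev + delta_u, *u_limits)

      # 3 rotational axes, 5 effectors; a failed effector -> zero column
      B_hat = np.array([[ 2.0, -2.0, 0.5, -0.5, 0.0],
                        [ 0.8,  0.8, 0.0,  0.0, 1.5],
                        [ 0.3, -0.3, 1.0, -1.0, 0.2]])
      B_hat[:, 1] = 0.0                       # e.g., effector 2 lost
      u = ndi_step(np.zeros(5), a_cmd=np.array([0.1, 0.0, -0.05]),
                   a_meas=np.zeros(3), B_hat=B_hat, u_limits=(-0.5, 0.5))
      print(u)   # remaining effectors share the commanded acceleration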

  2. Risk-based maintenance of ethylene oxide production facilities.

    PubMed

    Khan, Faisal I; Haddara, Mahmoud R

    2004-05-20

    This paper discusses a methodology for the design of an optimum inspection and maintenance program. The methodology, called risk-based maintenance (RBM), is based on integrating a reliability approach and a risk assessment strategy to obtain an optimum maintenance schedule. First, the likely equipment failure scenarios are formulated. Out of the many likely failure scenarios, those that are most probable are subjected to a detailed study. Detailed consequence analysis is done for the selected scenarios. Subsequently, these failure scenarios are subjected to a fault tree analysis to determine their probabilities. Finally, risk is computed by combining the results of the consequence and the probability analyses. The calculated risk is compared against known acceptable criteria. The frequencies of the maintenance tasks are obtained by minimizing the estimated risk. A case study involving an ethylene oxide production facility is presented. Out of the five most hazardous units considered, the pipeline used for the transportation of ethylene is found to have the highest risk. Using available failure data and a lognormal reliability distribution function, human health risk factors are calculated. Both societal risk factors and individual risk factors exceeded the acceptable risk criteria. To determine an optimal maintenance interval, a reverse fault tree analysis was used. The maintenance interval was determined such that the original high risk is brought down to an acceptable level. A sensitivity analysis is also undertaken to study the impact of changing the distribution of the reliability model as well as the error in the distribution parameters on the maintenance interval.
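
    A compact numerical sketch of the final step (all parameter values invented): with a lognormal time-to-failure model, the risk at interval t is the failure probability times the consequence, and the maintenance interval is the largest t whose risk does not exceed the acceptable level.

      # Risk-based maintenance interval: largest inspection interval whose
      # estimated risk stays at the acceptable threshold. Invented numbers.
      from scipy import stats
      from scipy.optimize import brentq

      ttf = stats.lognorm(s=0.8, scale=12.0)   # time to failure, years
      consequence = 5.0e6                      # failure consequence, $
      acceptable = 2.0e5                       # acceptable risk, $

      def risk(t_years):
          return ttf.cdf(t_years) * consequence

      # risk(t) is increasing, so solve risk(t) = acceptable for the interval
      t_star = brentq(lambda t: risk(t) - acceptable, 1e-6, 12.0)
      print(f"maintenance interval ~ {t_star:.2f} years, "
            f"risk at interval ${risk(t_star):,.0f}")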

  3. Independent Orbiter Assessment (IOA): FMEA/CIL assessment

    NASA Technical Reports Server (NTRS)

    Hinsdale, L. W.; Swain, L. J.; Barnes, J. E.

    1988-01-01

    The McDonnell Douglas Astronautics Company (MDAC) was selected to perform an Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL). Direction was given by the Orbiter and GFE Projects Office to perform the hardware analysis and assessment using the instructions and ground rules defined in NSTS 22206. The IOA analysis featured a top-down approach to determine hardware failure modes, criticality, and potential critical items. To preserve independence, the analysis was accomplished without reliance upon the results contained within the NASA and Prime Contractor FMEA/CIL documentation. The assessment process compared the independently derived failure modes and criticality assignments to the proposed NASA post 51-L FMEA/CIL documentation. When possible, assessment issues were discussed and resolved with the NASA subsystem managers. Unresolved issues were elevated to the Orbiter and GFE Projects Office manager, Configuration Control Board (CCB), or Program Requirements Control Board (PRCB) for further resolution. The most important Orbiter assessment finding was the previously unknown stuck autopilot push-button criticality 1/1 failure mode. The worst case effect could cause loss of crew/vehicle when the microwave landing system is not active. It is concluded that the NASA and Prime Contractor post 51-L FMEA/CIL documentation assessed by IOA is technically accurate and complete. All CIL issues were resolved. No FMEA issues remain that have safety implications. Consideration should be given, however, to upgrading NSTS 22206 with definitive ground rules which more clearly spell out the limits of redundancy.

  4. Supporting secure programming in web applications through interactive static analysis.

    PubMed

    Zhu, Jun; Xie, Jing; Lipford, Heather Richter; Chu, Bill

    2014-07-01

    Many security incidents are caused by software developers' failure to adhere to secure programming practices. Static analysis tools have been used to detect software vulnerabilities. However, their wide usage by developers is limited by the special training required to write rules customized to application-specific logic. Our approach is interactive static analysis, to integrate static analysis into Integrated Development Environment (IDE) and provide in-situ secure programming support to help developers prevent vulnerabilities during code construction. No additional training is required nor are there any assumptions on ways programs are built. Our work is motivated in part by the observation that many vulnerabilities are introduced due to failure to practice secure programming by knowledgeable developers. We implemented a prototype interactive static analysis tool as a plug-in for Java in Eclipse. Our technical evaluation of our prototype detected multiple zero-day vulnerabilities in a large open source project. Our evaluations also suggest that false positives may be limited to a very small class of use cases.

  5. Independent Orbiter Assessment (IOA): Analysis of the electrical power generation/power reactant storage and distribution subsystem

    NASA Technical Reports Server (NTRS)

    Gotch, S. M.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NAA FMEA/CIL documentation. The independent analysis results corresponding to the Orbiter Electrical Power Generation (EPG)/Power Reactants Storage and Distribution (PRSD) System Hardware are documented. The EPG/PRSD hardware is required for performing critical functions of cryogenic hydrogen and oxygen storage and distribution to the Fuel Cell Powerplants (FCP) and Atmospheric Revitalization Pressure Control Subsystem (ARPCS). Specifically, the EPG/PRSD hardware consists of the following: Hydrogen (H2) tanks; Oxygen (O2) tanks; H2 Relief Valve/Filter Packages (HRVFP); O2 Relief Valve/Filter Packages (ORVFP); H2 Valve Modules (HVM); O2 Valve Modules (OVM); and O2 and H2 lines, components, and fittings.

  6. Supporting secure programming in web applications through interactive static analysis

    PubMed Central

    Zhu, Jun; Xie, Jing; Lipford, Heather Richter; Chu, Bill

    2013-01-01

    Many security incidents are caused by software developers’ failure to adhere to secure programming practices. Static analysis tools have been used to detect software vulnerabilities. However, their wide usage by developers is limited by the special training required to write rules customized to application-specific logic. Our approach is interactive static analysis, to integrate static analysis into Integrated Development Environment (IDE) and provide in-situ secure programming support to help developers prevent vulnerabilities during code construction. No additional training is required nor are there any assumptions on ways programs are built. Our work is motivated in part by the observation that many vulnerabilities are introduced due to failure to practice secure programming by knowledgeable developers. We implemented a prototype interactive static analysis tool as a plug-in for Java in Eclipse. Our technical evaluation of our prototype detected multiple zero-day vulnerabilities in a large open source project. Our evaluations also suggest that false positives may be limited to a very small class of use cases. PMID:25685513

  7. Use of Probabilistic Engineering Methods in the Detailed Design and Development Phases of the NASA Ares Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Fayssal, Safie; Weldon, Danny

    2008-01-01

    The United States National Aeronautics and Space Administration (NASA) is in the midst of a space exploration program called Constellation to send crew and cargo to the international Space Station, to the moon, and beyond. As part of the Constellation program, a new launch vehicle, Ares I, is being developed by NASA Marshall Space Flight Center. Designing a launch vehicle with high reliability and increased safety requires a significant effort in understanding design variability and design uncertainty at the various levels of the design (system, element, subsystem, component, etc.) and throughout the various design phases (conceptual, preliminary design, etc.). In a previous paper [1] we discussed a probabilistic functional failure analysis approach intended mainly to support system requirements definition, system design, and element design during the early design phases. This paper provides an overview of the application of probabilistic engineering methods to support the detailed subsystem/component design and development as part of the "Design for Reliability and Safety" approach for the new Ares I Launch Vehicle. Specifically, the paper discusses probabilistic engineering design analysis cases that had major impact on the design and manufacturing of the Space Shuttle hardware. The cases represent important lessons learned from the Space Shuttle Program and clearly demonstrate the significance of probabilistic engineering analysis in better understanding design deficiencies and identifying potential design improvement for Ares I. The paper also discusses the probabilistic functional failure analysis approach applied during the early design phases of Ares I and the forward plans for probabilistic design analysis in the detailed design and development phases.

  8. Failure of engineering artifacts: a life cycle approach.

    PubMed

    Del Frate, Luca

    2013-09-01

    Failure is a central notion both in ethics of engineering and in engineering practice. Engineers devote considerable resources to assure their products will not fail and considerable progress has been made in the development of tools and methods for understanding and avoiding failure. Engineering ethics, on the other hand, is concerned with the moral and social aspects related to the causes and consequences of technological failures. But what is meant by failure, and what does it mean that a failure has occurred? The subject of this paper is how engineers use and define this notion. Although a traditional definition of failure can be identified that is shared by a large part of the engineering community, the literature shows that engineers are willing to consider as failures also events and circumstance that are at odds with this traditional definition. These cases violate one or more of three assumptions made by the traditional approach to failure. An alternative approach, inspired by the notion of product life cycle, is proposed which dispenses with these assumptions. Besides being able to address the traditional cases of failure, it can deal successfully with the problematic cases. The adoption of a life cycle perspective allows the introduction of a clearer notion of failure and allows a classification of failure phenomena that takes into account the roles of stakeholders involved in the various stages of a product life cycle.

  9. Failure Assessment of Brazed Structures

    NASA Technical Reports Server (NTRS)

    Flom, Yuri

    2012-01-01

    Despite the great advances in analytical methods available to structural engineers, designers of brazed structures have great difficulties in addressing fundamental questions related to the load-carrying capabilities of brazed assemblies. In this chapter we will review why such common engineering tools as Finite Element Analysis (FEA) as well as many well-established theories (Tresca, von Mises, Highest Principal Stress, etc.) don't work well for brazed joints. This chapter will show how the classic approach of using interaction equations and the less known Coulomb-Mohr failure criterion can be employed to estimate Margins of Safety (MS) in brazed joints.
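
    A small margin-of-safety sketch in the spirit of the chapter (generic interaction-equation and Coulomb-Mohr forms; allowables and stresses are invented, not the chapter's worked values):

      # Margin of safety for a brazed joint via (a) a normal/shear
      # interaction equation and (b) the Coulomb-Mohr criterion.
      import math

      def ms_interaction(sigma, tau, sigma_allow, tau_allow):
          """MS from the Rn^2 + Rs^2 = 1 interaction curve:
          MS = 1/sqrt(Rn^2 + Rs^2) - 1."""
          r = math.hypot(sigma / sigma_allow, tau / tau_allow)
          return 1.0 / r - 1.0

      def ms_coulomb_mohr(sigma_1, sigma_3, s_ut, s_uc):
          """Coulomb-Mohr: failure when sigma_1/S_ut - sigma_3/S_uc = 1;
          S_ut and S_uc are tensile/compressive strength magnitudes."""
          load = sigma_1 / s_ut - sigma_3 / s_uc
          return 1.0 / load - 1.0

      print(ms_interaction(sigma=60.0, tau=35.0,
                           sigma_allow=140.0, tau_allow=90.0))   # ~0.73
      print(ms_coulomb_mohr(sigma_1=80.0, sigma_3=-20.0,
                            s_ut=160.0, s_uc=320.0))             # ~0.78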

  10. CONFIG: Integrated engineering of systems and their operation

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Ryan, Dan; Fleming, Land

    1994-01-01

    This article discusses CONFIG 3, a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operations of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. CONFIG supports integration among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. CONFIG is designed to support integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems.

  11. A Case Study for Probabilistic Methods Validation (MSFC Center Director's Discretionary Fund, Project No. 94-26)

    NASA Technical Reports Server (NTRS)

    Price J. M.; Ortega, R.

    1998-01-01

    The probabilistic method is not a universally accepted approach for the design and analysis of aerospace structures. The validity of this approach must be demonstrated to encourage its acceptance as a viable design and analysis tool to estimate structural reliability. The objective of this study is to develop a well-characterized finite population of similar aerospace structures that can be used to (1) validate probabilistic codes, (2) demonstrate the basic principles behind probabilistic methods, (3) formulate general guidelines for characterization of material drivers (such as elastic modulus) when limited data are available, and (4) investigate how the drivers affect the results of sensitivity analysis at the component/failure mode level.

  12. The use of supportive-educative and mutual goal-setting strategies to improve self-management for patients with heart failure.

    PubMed

    Kline, Kay Setter; Scott, Linda D; Britton, Agnes S

    2007-09-01

    This study examined the effects of 2 home healthcare nursing approaches--supportive-educative and mutual goal setting--on self-management for patients with heart failure. Both approaches are specifically related to participants' understanding of heart failure and self-efficacy in managing the condition. An experimental, longitudinal, repeated-measures design was used with a sample of 88 participants. Although no significant difference was demonstrated in participants' understanding of heart failure, the supportive-educative group showed a significantly increased self-efficacy in managing heart failure symptoms.

  13. Advances in gene therapy for heart failure.

    PubMed

    Fish, Kenneth M; Ishikawa, Kiyotake

    2015-04-01

    Chronic heart failure is expected to increase its social and economic burden as a consequence of improved survival in patients with acute cardiac events. Cardiac gene therapy holds significant promise in heart failure treatment for patients with currently very limited or no treatment options. The introduction of adeno-associated virus (AAV) gene vector changed the paradigm of cardiac gene therapy, and now it is the primary vector of choice for chronic heart failure gene therapy in clinical and preclinical studies. Recently, there has been significant progress towards clinical translation in this field spearheaded by AAV-1 mediated sarcoplasmic reticulum Ca2+ ATPase (SERCA2a) gene therapy targeting chronic advanced heart failure patients. Meanwhile, several independent laboratories are reporting successful gene therapy approaches in clinically relevant large animal models of heart failure and some of these approaches are expected to enter clinical trials in the near future. This review will focus on gene therapy approaches targeting heart failure that is in clinical trials and those close to its initial clinical trial application.

  14. Medical students' personal experience of high-stakes failure: case studies using interpretative phenomenological analysis.

    PubMed

    Patel, R S; Tarrant, C; Bonas, S; Shaw, R L

    2015-05-12

    Failing a high-stakes assessment at medical school is a major event for those who go through the experience. Students who fail at medical school may be more likely to struggle in professional practice; therefore, helping individuals overcome problems and respond appropriately is important. There is little understanding about what factors influence how individuals experience failure or make sense of the failing experience in remediation. The aim of this study was to investigate the complexity surrounding the failure experience from the student's perspective using interpretative phenomenological analysis (IPA). The accounts of three medical students who had failed final re-sit exams were subjected to in-depth analysis using IPA methodology. IPA was used to analyse each transcript case-by-case allowing the researcher to make sense of the participant's subjective world. The analysis process allowed the complexity surrounding the failure to be highlighted, alongside a narrative describing how students made sense of the experience. The circumstances surrounding students as they approached assessment and experienced failure at finals were a complex interaction between academic problems, personal problems (specifically finance and relationships), strained relationships with friends, family or faculty, and various mental health problems. Each student experienced multi-dimensional issues, each with their own individual combination of problems, but experienced remediation as a one-dimensional intervention with focus only on improving performance in written exams. What these students needed included help with clinical skills, plus social and emotional support. Fear of termination of their course was a barrier to open communication with staff. These students' experience of failure was complex. The experience of remediation is influenced by the way in which students make sense of failing. Generic remediation programmes may fail to meet the needs of students for whom personal, social and mental health issues are a part of the picture.

  15. Multiple perspective vulnerability analysis of the power network

    NASA Astrophysics Data System (ADS)

    Wang, Shuliang; Zhang, Jianhua; Duan, Na

    2018-02-01

    To understand the vulnerability of the power network from multiple perspectives, multi-angle and multi-dimensional vulnerability analysis as well as community-based vulnerability analysis are proposed in this paper. Taking the central China power grid as an example, correlation analysis of different vulnerability models is discussed. Then, vulnerabilities produced by different vulnerability metrics under the given vulnerability models and failure scenarios are analyzed. Finally, applying the community detection approach, critical areas of the central China power grid are identified, and vulnerable and robust communities from both topological and functional perspectives are acquired and analyzed. The approach introduced in this paper can be used to help decision makers develop optimal protection strategies. It will also be useful for multiple-perspective vulnerability analysis of other infrastructure systems.
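
    A minimal functional-vulnerability sketch (a generic efficiency-based metric on a toy graph, not the paper's full multi-perspective analysis): the vulnerability of a node is the relative drop in global efficiency when that node fails.

      # Node vulnerability as the relative drop in global efficiency after
      # removing the node. Toy graph; the paper studies the central China grid.
      import networkx as nx

      def vulnerability(G):
          e0 = nx.global_efficiency(G)
          out = {}
          for n in G.nodes:
              H = G.copy()
              H.remove_node(n)
              out[n] = (e0 - nx.global_efficiency(H)) / e0
          return out

      G = nx.powerlaw_cluster_graph(30, 2, 0.3, seed=7)  # stand-in topology
      ranked = sorted(vulnerability(G).items(), key=lambda kv: -kv[1])
      print("most critical nodes:", ranked[:5])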

  16. Frailty Assessment in Heart Failure: an Overview of the Multi-domain Approach.

    PubMed

    McDonagh, Julee; Ferguson, Caleb; Newton, Phillip J

    2018-02-01

    The study aims (1) to provide a contemporary description of frailty assessment in heart failure and (2) to provide an overview of multi-domain frailty assessment in heart failure. Frailty assessment is an important predictive measure for mortality and hospitalisation in individuals with heart failure. To date, there are no frailty assessment instruments validated for use in heart failure. This has resulted in significant heterogeneity between studies regarding the assessment of frailty. The most common frailty assessment instrument used in heart failure is the Frailty Phenotype, which focuses on five physical domains of frailty; the appropriateness of a purely physical measure of frailty in individuals with heart failure, who frequently experience decreased exercise tolerance and shortness of breath, is yet to be determined. A limited number of studies have approached frailty assessment using a multi-domain view which may be more clinically relevant in heart failure. There remains a lack of consensus regarding frailty assessment and an absence of a validated instrument in heart failure. Despite this, frailty continues to be assessed frequently, primarily for research purposes, using predominantly physical frailty measures. A more multidimensional view of frailty assessment using a multi-domain approach will likely be more sensitive to identifying at-risk patients.

  17. Independent Orbiter Assessment (IOA): Assessment of the mechanical actuation subsystem, volume 2

    NASA Technical Reports Server (NTRS)

    Bradway, M. W.; Slaughter, W. T.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine draft failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the proposed Post 51-L NASA FMEA/CIL baseline that was available. A resolution of each discrepancy from the comparison was provided through additional analysis as required. These discrepancies were flagged as issues, and recommendations were made based on the FMEA data available at the time. This report documents the results of that comparison for the Orbiter Mechanical Actuation System (MAS) hardware. Specifically, the MAS hardware consists of the following components: Air Data Probe (ADP); Elevon Seal Panel (ESP); External Tank Umbilical (ETU); Ku-Band Deploy (KBD); Payload Bay Doors (PBD); Payload Bay Radiators (PBR); Personnel Hatches (PH); Vent Door Mechanism (VDM); and Startracker Door Mechanism (SDM). Criticality was assigned based upon the severity of the effect for each failure mode. Volume 2 continues the presentation of IOA analysis worksheets and contains the potential critical items list, detailed analysis, and NASA FMEA/CIL to IOA worksheet cross reference and recommendations.

  18. Progressive Failure Analysis Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Sleight, David W.

    1999-01-01

    A progressive failure analysis method has been developed for predicting the failure of laminated composite structures under geometrically nonlinear deformations. The progressive failure analysis uses C(exp 1) shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms and several options are available to degrade the material properties after failures. The progressive failure analysis method is implemented in the COMET finite element analysis code and can predict the damage and response of laminated composite structures from initial loading to final failure. The different failure criteria and material degradation methods are compared and assessed by performing analyses of several laminated composite structures. Results from the progressive failure method indicate good correlation with the existing test data except in structural applications where interlaminar stresses are important, which may cause failure mechanisms such as debonding or delaminations.
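
    A bare-bones fragment of the ply-discount idea used in such analyses (a maximum strain check with invented allowables; the actual implementation also covers Hashin's and Christensen's criteria and geometric nonlinearity):

      # Maximum-strain failure check with a simple ply-discount degradation:
      # once a ply fails in a mode, the associated moduli are knocked down.
      # Allowables and the degradation factor are invented examples.
      FIBER_ALLOW, MATRIX_ALLOW, SHEAR_ALLOW = 0.012, 0.006, 0.015  # strains
      DEGRADE = 1e-3   # stiffness retention factor after failure

      def check_ply(eps1, eps2, gamma12, props, failed):
          """Flag failure modes and degrade ply properties in place."""
          if abs(eps1) > FIBER_ALLOW and "fiber" not in failed:
              failed.add("fiber");  props["E1"] *= DEGRADE
          if abs(eps2) > MATRIX_ALLOW and "matrix" not in failed:
              failed.add("matrix"); props["E2"] *= DEGRADE
          if abs(gamma12) > SHEAR_ALLOW and "shear" not in failed:
              failed.add("shear");  props["G12"] *= DEGRADE
          return failed

      props = {"E1": 140e9, "E2": 10e9, "G12": 5e9}
      failed = check_ply(0.004, 0.007, 0.002, props, set())
      print(failed, props)   # matrix failure -> E2 degraded; reanalyse and
                             # re-increment the load until final failure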

  19. Failure mode and effects analysis: too little for too much?

    PubMed

    Dean Franklin, Bryony; Shebl, Nada Atef; Barber, Nick

    2012-07-01

    Failure mode and effects analysis (FMEA) is a structured prospective risk assessment method that is widely used within healthcare. FMEA involves a multidisciplinary team mapping out a high-risk process of care, identifying the failures that can occur, and then characterising each of these in terms of probability of occurrence, severity of effects and detectability, to give a risk priority number used to identify failures most in need of attention. One might assume that such a widely used tool would have an established evidence base. This paper considers whether or not this is the case, examining the evidence for the reliability and validity of its outputs, the mathematical principles behind the calculation of a risk prioirty number, and variation in how it is used in practice. We also consider the likely advantages of this approach, together with the disadvantages in terms of the healthcare professionals' time involved. We conclude that although FMEA is popular and many published studies have reported its use within healthcare, there is little evidence to support its use for the quantitative prioritisation of process failures. It lacks both reliability and validity, and is very time consuming. We would not recommend its use as a quantitative technique to prioritise, promote or study patient safety interventions. However, the stage of FMEA involving multidisciplinary mapping process seems valuable and work is now needed to identify the best way of converting this into plans for action.

  20. Critical Review: Medical Students' Motivation after Failure

    ERIC Educational Resources Information Center

    Holland, Chris

    2016-01-01

    About 10% of students in each year's entrants to medical school will encounter academic failure at some stage in their programme. The usual approach to supporting these students is to offer them short-term remedial study programmes that often enhance approaches to study that are orientated towards avoiding failure. In this critical review I will…

  1. TEXCAD: Textile Composite Analysis for Design. Version 1.0: User's manual

    NASA Technical Reports Server (NTRS)

    Naik, Rajiv A.

    1994-01-01

    The Textile Composite Analysis for Design (TEXCAD) code provides the materials/design engineer with a user-friendly desktop computer (IBM PC compatible or Apple Macintosh) tool for the analysis of a wide variety of fabric reinforced woven and braided composites. It can be used to calculate overall thermal and mechanical properties along with engineering estimates of damage progression and strength. TEXCAD also calculates laminate properties for stacked, oriented fabric constructions. It discretely models the yarn centerline paths within the textile repeating unit cell (RUC) by assuming sinusoidal undulations at yarn cross-over points and uses a yarn discretization scheme (which subdivides each yarn into smaller, piecewise straight yarn slices) together with a 3-D stress averaging procedure to compute overall stiffness properties. In the calculations for strength, it uses a curved beam-on-elastic foundation model for yarn undulating regions together with an incremental approach in which stiffness properties for the failed yarn slices are reduced based on the predicted yarn slice failure mode. Nonlinear shear effects and nonlinear geometric effects can be simulated. Input to TEXCAD consists of: (1) materials parameters like impregnated yarn and resin properties such as moduli, Poisson's ratios, coefficients of thermal expansion, nonlinear parameters, axial failure strains and in-plane failure stresses; and (2) fabric parameters like yarn sizes, braid angle, yarn packing density, filament diameter and overall fiber volume fraction. Output consists of overall thermoelastic constants, yarn slice strains/stresses, yarn slice failure history, in-plane stress-strain response and ultimate failure strength. Strength can be computed under the combined action of thermal and mechanical loading (tension, compression and shear).
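
    A small sketch of the yarn-path discretization step (a sinusoidal undulation subdivided into straight slices; amplitude and wavelength are illustrative values, not TEXCAD defaults):

      # Discretize a sinusoidally undulating yarn centerline into straight
      # slices, as in TEXCAD-style models. Geometry values are illustrative.
      import numpy as np

      def yarn_slices(wavelength=2.0, amplitude=0.15, n_slices=16):
          """Return per-slice inclination angles (deg) and lengths for one
          undulation period z(x) = amplitude * sin(2*pi*x / wavelength)."""
          x = np.linspace(0.0, wavelength, n_slices + 1)
          z = amplitude * np.sin(2.0 * np.pi * x / wavelength)
          dx, dz = np.diff(x), np.diff(z)
          angles = np.degrees(np.arctan2(dz, dx))   # slice off-axis angle
          lengths = np.hypot(dx, dz)
          return angles, lengths

      angles, lengths = yarn_slices()
      print(f"max inclination {angles.max():.1f} deg, "
            f"path length {lengths.sum():.3f} vs span 2.000")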

  2. Patterns of failure after the reduced volume approach for elective nodal irradiation in nasopharyngeal carcinoma.

    PubMed

    Seol, Ki Ho; Lee, Jeong Eun

    2016-03-01

    To evaluate the patterns of nodal failure after radiotherapy (RT) with the reduced volume approach for elective neck nodal irradiation (ENI) in nasopharyngeal carcinoma (NPC). Fifty-six NPC patients who underwent definitive chemoradiotherapy with the reduced volume approach for ENI were reviewed. The ENI included retropharyngeal and level II lymph nodes, and only encompassed the echelon inferior to the involved level to eliminate the entire neck irradiation. Patients received either moderate hypofractionated intensity-modulated RT for a total of 72.6 Gy (49.5 Gy to elective nodal areas) or a conventional fractionated three-dimensional conformal RT for a total of 68.4-72 Gy (39.6-45 Gy to elective nodal areas). Patterns of failure, locoregional control, and survival were analyzed. The median follow-up was 38 months (range, 3 to 80 months). No out-of-field nodal failure occurred in the region where ENI was omitted. Three patients developed neck recurrences (one in-field recurrence in the 72.6 Gy irradiated nodal area and two in the elective irradiated region of 39.6 Gy). Overall disease failure at any site developed in 11 patients (19.6%). Among these, there were six local failures (10.7%), three regional failures (5.4%), and five distant metastases (8.9%). The 3-year locoregional control rate was 87.1%, and the distant failure-free rate was 90.4%; disease-free survival and overall survival at 3 years was 80% and 86.8%, respectively. No patient developed nodal failure in the omitted ENI site. Our investigation has demonstrated that the reduced volume approach for ENI appears to be a safe treatment approach in NPC.

  3. Patterns of failure after the reduced volume approach for elective nodal irradiation in nasopharyngeal carcinoma

    PubMed Central

    Seol, Ki Ho

    2016-01-01

    Purpose To evaluate the patterns of nodal failure after radiotherapy (RT) with the reduced volume approach for elective neck nodal irradiation (ENI) in nasopharyngeal carcinoma (NPC). Materials and Methods Fifty-six NPC patients who underwent definitive chemoradiotherapy with the reduced volume approach for ENI were reviewed. The ENI included retropharyngeal and level II lymph nodes, and only encompassed the echelon inferior to the involved level to eliminate the entire neck irradiation. Patients received either moderate hypofractionated intensity-modulated RT for a total of 72.6 Gy (49.5 Gy to elective nodal areas) or a conventional fractionated three-dimensional conformal RT for a total of 68.4–72 Gy (39.6–45 Gy to elective nodal areas). Patterns of failure, locoregional control, and survival were analyzed. Results The median follow-up was 38 months (range, 3 to 80 months). No out-of-field nodal failure occurred in the region where ENI was omitted. Three patients developed neck recurrences (one in-field recurrence in the 72.6 Gy irradiated nodal area and two in the elective irradiated region of 39.6 Gy). Overall disease failure at any site developed in 11 patients (19.6%). Among these, there were six local failures (10.7%), three regional failures (5.4%), and five distant metastases (8.9%). The 3-year locoregional control rate was 87.1%, and the distant failure-free rate was 90.4%; disease-free survival and overall survival at 3 years was 80% and 86.8%, respectively. Conclusion No patient developed nodal failure in the omitted ENI site. Our investigation has demonstrated that the reduced volume approach for ENI appears to be a safe treatment approach in NPC. PMID:27104162

  4. Determination of fiber-matrix interface failure parameters from off-axis tests

    NASA Technical Reports Server (NTRS)

    Naik, Rajiv A.; Crews, John H., Jr.

    1993-01-01

    Critical fiber-matrix (FM) interface strength parameters were determined using a micromechanics-based approach together with failure data from off-axis tension (OAT) tests. The ply stresses at failure for a range of off-axis angles were used as input to a micromechanics analysis that was performed using the personal computer-based MICSTRAN code. FM interface stresses at the failure loads were calculated for both the square and the diamond array models. A simple procedure was developed to determine which array had the more severe FM interface stresses and the location of these critical stresses on the interface. For the cases analyzed, critical FM interface stresses were found to occur with the square array model and were located at a point where adjacent fibers were closest together. The critical FM interface stresses were used together with the Tsai-Wu failure theory to determine a failure criterion for the FM interface. This criterion was then used to predict the onset of ply cracking in angle-ply laminates for a range of laminate angles. Predictions for the onset of ply cracking in angle-ply laminates agreed with the test data trends.
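
    A compact Tsai-Wu evaluation sketch (the standard plane-stress form; strength values are invented, not the critical fiber-matrix interface allowables determined in the paper):

      # Plane-stress Tsai-Wu failure index: failure is predicted when the
      # index reaches 1. Strengths are invented illustration values (MPa).
      import math

      def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
          """Xc and Yc are positive magnitudes of compressive strengths."""
          F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
          F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
          F12 = -0.5 * math.sqrt(F11 * F22)   # common default interaction term
          return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2
                  + F66*t12**2 + 2*F12*s1*s2)

      idx = tsai_wu_index(s1=60.0, s2=20.0, t12=30.0,
                          Xt=120.0, Xc=140.0, Yt=40.0, Yc=120.0, S=60.0)
      print(f"Tsai-Wu index = {idx:.2f}  ({'fail' if idx >= 1 else 'safe'})")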

  5. NASA's Evolutionary Xenon Thruster (NEXT) Power Processing Unit (PPU) Capacitor Failure Root Cause Analysis

    NASA Technical Reports Server (NTRS)

    Soeder, James F.; Pinero, Luis; Schneidegger, Robert; Dunning, John; Birchenough, Art

    2012-01-01

    NASA's Evolutionary Xenon Thruster (NEXT) project is developing an advanced ion propulsion system for future NASA missions for solar system exploration. A critical element of the propulsion system is the Power Processing Unit (PPU), which supplies regulated power to the key components of the thruster. The PPU contains six different power supplies including the beam, discharge, discharge heater, neutralizer, neutralizer heater, and accelerator supplies. The beam supply is the largest and processes up to 93+% of the power. The NEXT PPU has been operated for more than 200 hours and has experienced a series of three capacitor failures in the beam supply. The capacitors are in the same, nominally non-critical location: the input filter capacitor to a full-wave switching inverter. The three failures occurred after about 20, 30, and 135 hours of operation. This paper provides background on the NEXT PPU and the capacitor failures. It discusses the failure investigation approach, the beam supply power switching topology and its operating modes, capacitor characteristics and circuit testing. Finally, it identifies the root cause of the failures to be the unusual confluence of circuit switching frequency, the physical layout of the power circuits, and the characteristics of the capacitor.

  6. NASA's Evolutionary Xenon Thruster (NEXT) Power Processing Unit (PPU) Capacitor Failure Root Cause Analysis

    NASA Technical Reports Server (NTRS)

    Soeder, James F.; Scheidegger, Robert J.; Pinero, Luis R.; Birchenough, Arthur J.; Dunning, John W.

    2012-01-01

    NASA's Evolutionary Xenon Thruster (NEXT) project is developing an advanced ion propulsion system for future NASA missions for solar system exploration. A critical element of the propulsion system is the Power Processing Unit (PPU), which supplies regulated power to the key components of the thruster. The PPU contains six different power supplies including the beam, discharge, discharge heater, neutralizer, neutralizer heater, and accelerator supplies. The beam supply is the largest and processes up to 93+% of the power. The NEXT PPU has been operated for more than 200 hr and has experienced a series of three capacitor failures in the beam supply. The capacitors are in the same, nominally non-critical location: the input filter capacitor to a full-wave switching inverter. The three failures occurred after about 20, 30, and 135 hr of operation. This paper provides background on the NEXT PPU and the capacitor failures. It discusses the failure investigation approach, the beam supply power switching topology and its operating modes, capacitor characteristics and circuit testing. Finally, it identifies the root cause of the failures to be the unusual confluence of circuit switching frequency, the physical layout of the power circuits, and the characteristics of the capacitor.

  7. A DMAIC approach for process capability improvement an engine crankshaft manufacturing process

    NASA Astrophysics Data System (ADS)

    Sharma, G. V. S. S.; Rao, P. Srinivasa

    2014-05-01

    The define-measure-analyze-improve-control (DMAIC) approach is a five-strata approach comprising the define, measure, analyze, improve and control strata. It is a scientific approach for reducing deviations and improving the capability levels of manufacturing processes. The present work elaborates on the DMAIC approach applied to reducing the process variations of the stub-end-hole boring operation in the manufacture of a crankshaft. This statistical process control study starts with selection of the critical-to-quality (CTQ) characteristic in the define stratum. The next stratum constitutes the collection of dimensional measurement data for the identified CTQ characteristic. This is followed by the analysis and improvement strata, where various quality control tools like the Ishikawa diagram, physical mechanism analysis, failure modes effects analysis and analysis of variance are applied. Finally, the process monitoring charts are deployed at the workplace for regular monitoring and control of the concerned CTQ characteristic. By adopting the DMAIC approach, the standard deviation was reduced from 0.003 to 0.002. The process potential capability index (Cp) improved from 1.29 to 2.02 and the process performance capability index (Cpk) improved from 0.32 to 1.45.
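
    A quick capability-index sketch using the standard formulas behind the abstract's figures (specification limits and means are invented, so the printed indices illustrate the calculation rather than reproduce the paper's exact values):

      # Process capability indices: Cp = (USL - LSL) / (6 * sigma),
      # Cpk = min(USL - mu, mu - LSL) / (3 * sigma). Spec limits invented.
      def capability(mu, sigma, lsl, usl):
          cp = (usl - lsl) / (6.0 * sigma)
          cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)
          return cp, cpk

      LSL, USL = 25.000, 25.024   # invented stub-end-hole spec limits, mm
      for label, mu, sigma in [("before", 25.014, 0.003),
                               ("after",  25.012, 0.002)]:
          cp, cpk = capability(mu, sigma, LSL, USL)
          print(f"{label}: Cp = {cp:.2f}, Cpk = {cpk:.2f}")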

  8. Evaluation of a Progressive Failure Analysis Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Sleight, David W.; Knight, Norman F., Jr.; Wang, John T.

    1997-01-01

    A progressive failure analysis methodology has been developed for predicting the nonlinear response and failure of laminated composite structures. The progressive failure analysis uses C1 plate and shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms. The progressive failure analysis model is implemented in a general-purpose finite element code and can predict the damage and response of laminated composite structures from initial loading to final failure.
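
    The ply-level failure checks named above are evaluated directly from the in-plane stresses; the following is a minimal sketch of a 2D Hashin-type fiber/matrix check, with hypothetical strength values and a simplified plane-stress form that omits the through-thickness terms of the full criterion:

        def hashin_2d(s11, s22, s12, Xt, Xc, Yt, Yc, S):
            """Return (fiber_index, matrix_index); failure predicted when >= 1."""
            if s11 >= 0.0:
                fiber = (s11 / Xt) ** 2 + (s12 / S) ** 2          # fiber tension
            else:
                fiber = (s11 / Xc) ** 2                           # fiber compression
            if s22 >= 0.0:
                matrix = (s22 / Yt) ** 2 + (s12 / S) ** 2         # matrix tension
            else:
                matrix = ((s22 / (2.0 * S)) ** 2                  # matrix compression
                          + ((Yc / (2.0 * S)) ** 2 - 1.0) * (s22 / Yc)
                          + (s12 / S) ** 2)
            return fiber, matrix

        # Hypothetical ply stresses and strengths (MPa) for a carbon/epoxy ply
        f_idx, m_idx = hashin_2d(900.0, 30.0, 40.0,
                                 Xt=1500.0, Xc=1200.0, Yt=40.0, Yc=200.0, S=70.0)
        print(f"fiber index={f_idx:.2f}, matrix index={m_idx:.2f}")

    In a progressive failure loop, plies whose index reaches 1 would have their stiffnesses degraded before the next load increment.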

  9. Resilience Engineering in Critical Long Term Aerospace Software Systems: A New Approach to Spacecraft Software Safety

    NASA Astrophysics Data System (ADS)

    Dulo, D. A.

    Safety-critical software systems permeate spacecraft, and in a long-term venture like a starship they would be pervasive in every system of the spacecraft. Yet software failure today continues to plague both the systems and the organizations that develop them, resulting in the loss of life, time, money, and valuable system platforms. A starship cannot afford this type of software failure on long journeys away from home: a single software failure could have catastrophic results for the spaceship and the crew onboard. This paper offers a new approach to developing safe, reliable software systems by focusing not on the traditional safety/reliability engineering paradigms but on a new paradigm: resilience and failure obviation engineering. The foremost objective of this approach is the obviation of failure, coupled with the ability of a software system to prevent or adapt to complex, changing conditions in real time, acting as a safety valve that ensures safe system continuity should failure occur. Through this approach, safety is ensured through foresight that anticipates failure and adapts to risk in real time before failure occurs. In a starship, this type of software engineering is vital. With software developed in a resilient manner, a starship would have reduced or eliminated software failure and the ability to adapt rapidly should a software system become unstable or unsafe. As a result, long-term software safety, reliability, and resilience would be present for a successful long-term starship mission.

  10. Independent Orbiter Assessment (IOA): Analysis of the instrumentation subsystem

    NASA Technical Reports Server (NTRS)

    Howard, B. S.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Instrumentation Subsystem are documented. The Instrumentation Subsystem (SS) consists of transducers, signal conditioning equipment, pulse code modulation (PCM) encoding equipment, tape recorders, frequency division multiplexers, and timing equipment. For this analysis, the SS is broken into two major groupings: Operational Instrumentation (OI) equipment and Modular Auxiliary Data System (MADS) equipment. The OI equipment is required to acquire, condition, scale, digitize, interleave/multiplex, format, and distribute operational Orbiter and payload data and voice for display, recording, telemetry, and checkout. It also must provide accurate timing for time critical functions for crew and payload specialist use. The MADS provides additional instrumentation to measure and record selected pressure, temperature, strain, vibration, and event data for post-flight playback and analysis. MADS data is used to assess vehicle responses to the flight environment and to permit correlation of such data from flight to flight. The IOA analysis utilized available SS hardware drawings and schematics for identifying hardware assemblies and components and their interfaces. Criticality for each item was assigned on the basis of the worst-case effect of the failure modes identified.

  11. A palliative approach for heart failure end-of-life care

    PubMed Central

    Maciver, Jane; Ross, Heather J.

    2018-01-01

    Purpose of review The current review discusses the integration of guideline and evidence-based palliative care into heart failure end-of-life (EOL) care. Recent findings North American and European heart failure societies recommend the integration of palliative care into heart failure programs. Advance care planning, shared decision-making, routine measurement of symptoms and quality of life and specialist palliative care at heart failure EOL are identified as key components to an effective heart failure palliative care program. There is limited evidence to support the effectiveness of the individual elements. However, results from the palliative care in heart failure trial suggest an integrated heart failure palliative care program can significantly improve quality of life for heart failure patients at EOL. Summary Integration of a palliative approach to heart failure EOL care helps to ensure patients receive the care that is congruent with their values, wishes and preferences. Specialist palliative care referrals are limited to those who are truly at heart failure EOL. PMID:29135524

  12. Experimental micromechanical approach to failure process in CFRP cross-ply laminates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeda, N.; Ogihara, S.; Kobayashi, A.

    The microscopic failure process of three different types of cross-ply laminates, (0/90_n/0) (n = 4, 8, 12), was investigated at room temperature and 80 °C. Progressive damage parameters, the transverse crack density and the delamination ratio, were measured. A simple modified shear-lag analysis including the thermal residual strains was conducted to predict the transverse crack density as a function of laminate strain, considering the constraint effect as well as the strength distribution of the transverse layer. The analysis was also extended to the system containing delamination to predict the delamination length. A prediction was also presented for the transverse crack density including the effect of delamination growth. The prediction showed good agreement with the experimental results.

  13. A Fault Tree Approach to Needs Assessment -- An Overview.

    ERIC Educational Resources Information Center

    Stephens, Kent G.

    A "failsafe" technology is presented based on a new unified theory of needs assessment. Basically the paper discusses fault tree analysis as a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur and then suggesting high priority avoidance strategies for those…

  14. A Potentially Heteroglossic Policy Becomes Monoglossic in Context: An Ethnographic Analysis of Paraguayan Bilingual Education Policy

    ERIC Educational Resources Information Center

    Mortimer, Katherine S.

    2016-01-01

    Ethnographic and discursive approaches to educational language policy (ELP) that explore how policy is appropriated in context are important for understanding policy success/failure in meeting goals of educational equity for language-minoritized students. This study describes how Paraguayan national policy for universal bilingual education…

  15. Academic Self-Concept and Academic Self-Efficacy: Self-Beliefs Enable Academic Achievement of Twice-Exceptional Students

    ERIC Educational Resources Information Center

    Wang, Clare Wen; Neihart, Maureen

    2015-01-01

    Many studies have reported that twice-exceptional (2e) students were vulnerable in psychological traits and exhibited low-academic self-concept and academic self-efficacy. Such vulnerability may cause their academic failures. This study applied interpretative phenomenological analysis (IPA), a qualitative approach to investigate the perceptions of…

  16. Measuring the Impact of Technology on Nurse Workflow: A Mixed Methods Approach

    ERIC Educational Resources Information Center

    Cady, Rhonda Guse

    2012-01-01

    Background. Investment in health information technology (HIT) is rapidly accelerating. The absence of contextual or situational analysis of the environment in which HIT is incorporated makes it difficult to measure success or failure. The methodology introduced in this paper combines observational research with time-motion study to measure the…

  17. The quandaries and promise of risk management: a scientist's perspective on integration of science and management.

    Treesearch

    B.G. Marcot

    2007-01-01

    This paper briefly lists constraints and problems of traditional approaches to natural resource risk analysis and risk management. Such problems include disparate definitions of risk, multiple and conflicting objectives and decisions, conflicting interpretations of uncertainty, and failure to articulate decision criteria, risk attitudes, modeling assumptions, and...

  18. Mechanisms of action of sacubitril/valsartan on cardiac remodeling: a systems biology approach.

    PubMed

    Iborra-Egea, Oriol; Gálvez-Montón, Carolina; Roura, Santiago; Perea-Gil, Isaac; Prat-Vidal, Cristina; Soler-Botija, Carolina; Bayes-Genis, Antoni

    2017-01-01

    Sacubitril/Valsartan has proved superior to other conventional heart failure management treatments, but its mechanism of action remains obscure. In this study, we sought to explore the mechanistic details of Sacubitril/Valsartan in heart failure and post-myocardial infarction remodeling, using an in silico, systems biology approach. The myocardial transcriptome obtained in response to myocardial infarction in swine was analyzed to address post-infarction ventricular remodeling. Swine transcriptome hits were mapped to their human equivalents using Reciprocal Best BLAST Hits, Gene Name Correspondence, and the InParanoid database. Heart failure remodeling was studied using public data available in the Gene Expression Omnibus (accession GSE57345, subseries GSE57338), processed using the GEO2R tool. Using the Therapeutic Performance Mapping System technology, dedicated mathematical models trained to fit a set of molecular criteria, defining both pathologies and including all the information available on Sacubitril/Valsartan, were generated. All relationships incorporated into the biological network were drawn from public resources (including KEGG, REACTOME, INTACT, BIOGRID, and MINT). An artificial neural network analysis revealed that Sacubitril/Valsartan acts synergistically against cardiomyocyte cell death and left ventricular extracellular matrix remodeling via eight principal synergistic nodes. When studying each pathway independently, Valsartan was found to improve cardiac remodeling by inhibiting members of the guanine nucleotide-binding protein family, while Sacubitril attenuated cardiomyocyte cell death, hypertrophy, and impaired myocyte contractility by inhibiting PTEN. The complex molecular mechanisms of action of Sacubitril/Valsartan upon post-myocardial infarction and heart failure cardiac remodeling were delineated using a systems biology approach. Further, this dataset provides a pathophysiological rationale for the use of Sacubitril/Valsartan to prevent post-infarct remodeling.

  19. Closed-Loop Evaluation of an Integrated Failure Identification and Fault Tolerant Control System for a Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine; Khong, Thuan

    2006-01-01

    Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. Such systems developed for failure detection, identification, and reconfiguration, as well as upset recovery, need to be evaluated over broad regions of the flight envelope or under extreme flight conditions, and should include various sources of uncertainty. To apply formal robustness analysis, formulation of linear fractional transformation (LFT) models of complex parameter-dependent systems is required, which represent system uncertainty due to parameter uncertainty and actuator faults. This paper describes a detailed LFT model formulation procedure from the nonlinear model of a transport aircraft by using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The closed-loop system is evaluated over the entire flight envelope based on the generated LFT model which can cover nonlinear dynamics. The robustness analysis results of the closed-loop fault tolerant control system of a transport aircraft are presented. A reliable flight envelope (safe flight regime) is also calculated from the robust performance analysis results, over which the closed-loop system can achieve the desired performance of command tracking and failure detection.

  20. Failure analysis of fuel cell electrodes using three-dimensional multi-length scale X-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Pokhrel, A.; El Hannach, M.; Orfino, F. P.; Dutta, M.; Kjeang, E.

    2016-10-01

    X-ray computed tomography (XCT), a non-destructive technique, is proposed for three-dimensional, multi-length scale characterization of complex failure modes in fuel cell electrodes. Comparative tomography data sets are acquired for a conditioned beginning of life (BOL) and a degraded end of life (EOL) membrane electrode assembly subjected to cathode degradation by voltage cycling. Micro length scale analysis shows a five-fold increase in crack size and 57% thickness reduction in the EOL cathode catalyst layer, indicating widespread action of carbon corrosion. Complementary nano length scale analysis shows a significant reduction in porosity, increased pore size, and dramatically reduced effective diffusivity within the remaining porous structure of the catalyst layer at EOL. Collapsing of the structure is evident from the combination of thinning and reduced porosity, as uniquely determined by the multi-length scale approach. Additionally, a novel image processing based technique developed for nano scale segregation of pore, ionomer, and Pt/C dominated voxels shows an increase in ionomer volume fraction, Pt/C agglomerates, and severe carbon corrosion at the catalyst layer/membrane interface at EOL. In summary, XCT based multi-length scale analysis enables detailed information needed for comprehensive understanding of the complex failure modes observed in fuel cell electrodes.
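
    As a hedged sketch of how segmented tomography voxels translate into the transport metrics quoted above, the following computes porosity and a Bruggeman-type effective diffusivity estimate; the synthetic volume and the Bruggeman correlation are illustrative assumptions, not the paper's actual image-processing pipeline:

        import numpy as np

        # Synthetic segmented XCT volume: 0 = pore, 1 = ionomer, 2 = Pt/C voxels
        rng = np.random.default_rng(0)
        volume = rng.choice([0, 1, 2], size=(64, 64, 64), p=[0.45, 0.25, 0.30])

        porosity = np.mean(volume == 0)
        ionomer_fraction = np.mean(volume == 1)

        # Bruggeman-type estimate D_eff/D_bulk = porosity**1.5, a common
        # approximation; the paper derives effective diffusivity differently
        d_eff_ratio = porosity ** 1.5
        print(f"porosity={porosity:.3f}, ionomer={ionomer_fraction:.3f}, "
              f"D_eff/D_bulk~{d_eff_ratio:.3f}")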

  1. The use of failure mode and effect analysis in a radiation oncology setting: the Cancer Treatment Centers of America experience.

    PubMed

    Denny, Diane S; Allen, Debra K; Worthington, Nicole; Gupta, Digant

    2014-01-01

    Delivering radiation therapy in an oncology setting is a high-risk process where system failures are more likely to occur because of increasing utilization, complexity, and sophistication of the equipment and related processes. Healthcare failure mode and effect analysis (FMEA) is a method used to proactively detect risks to the patient in a particular healthcare process and correct potential errors before adverse events occur. FMEA is a systematic, multidisciplinary, team-based approach to error prevention and enhancing patient safety. We describe our experience of using FMEA as a prospective risk-management technique in radiation oncology at a national network of oncology hospitals in the United States, capitalizing not only on the use of a team-based tool but also on creating momentum across a network of collaborative facilities seeking to learn from and share best practices with each other. The major steps of our analysis, conducted at 4 sites individually and collectively, were: choosing the process and subprocesses to be studied, assembling a multidisciplinary team at each site responsible for conducting the hazard analysis, and developing and implementing actions related to our findings. We identified 5 areas of performance improvement for which risk-reducing actions were successfully implemented across our enterprise. © 2012 National Association for Healthcare Quality.

  2. Update: Acute Heart Failure (VII): Nonpharmacological Management of Acute Heart Failure.

    PubMed

    Plácido, Rui; Mebazaa, Alexandre

    2015-09-01

    Acute heart failure is a major and growing public health problem worldwide with high morbidity, mortality, and cost. Despite recent advances in pharmacological management, the prognosis of patients with acute decompensated heart failure remains poor. Consequently, nonpharmacological approaches are being developed and increasingly used. Such techniques may include several modalities of ventilation, ultrafiltration, mechanical circulatory support, myocardial revascularization, and surgical treatment, among others. This document reviews the nonpharmacological approach in acute heart failure, indications, and prognostic implications. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  3. Analysis and Characterization of Damage and Failure Utilizing a Generalized Composite Material Model Suitable for Use in Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Khaled, Bilal; Hoffarth, Canio; Rajan, Subramaniam; Blankenhorn, Gunther

    2016-01-01

    A material model which incorporates several key capabilities which have been identified by the aerospace community as lacking in state-of-the-art composite impact models is under development. In particular, a next-generation composite impact material model, jointly developed by the FAA and NASA, is being implemented into the commercial transient dynamic finite element code LS-DYNA. The material model, which incorporates plasticity, damage, and failure, utilizes experimentally based tabulated input to define the evolution of plasticity and damage and the initiation of failure, as opposed to specifying discrete input parameters (such as modulus and strength). The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. For the damage model, a strain-equivalent formulation is utilized to allow for the uncoupling of the deformation and damage analyses. In the damage model, a semi-coupled approach is employed where the overall damage in a particular coordinate direction is assumed to be a multiplicative combination of the damage in that direction resulting from the applied loads in the various coordinate directions. Because the plasticity and damage models are uncoupled, test procedures and methods have been developed both to characterize the damage model and to convert the material stress-strain curves from the true (damaged) stress space to the effective (undamaged) stress space. A methodology has been developed to input the experimentally determined composite failure surface in a tabulated manner. An analytical approach is then utilized to track how close the current stress state is to the failure surface.
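
    The true-to-effective stress conversion mentioned above follows, in strain-equivalent damage mechanics, from dividing by the remaining load-carrying fraction; here is a minimal sketch with a hypothetical tabulated damage curve, not the model's actual input tables:

        import numpy as np

        # Hypothetical tabulated damage parameter d versus strain (one direction)
        strain_tab = np.array([0.000, 0.005, 0.010, 0.015, 0.020])
        damage_tab = np.array([0.00, 0.05, 0.15, 0.30, 0.50])

        def effective_stress(strain, true_stress):
            """Map a true (damaged) stress into effective (undamaged) stress space."""
            d = np.interp(strain, strain_tab, damage_tab)  # interpolate damage level
            return true_stress / (1.0 - d)                 # strain-equivalence rule

        print(f"{effective_stress(0.012, 310.0):.1f} MPa")  # hypothetical data point

    Tabulated input of this kind is what lets the model follow experimentally measured damage evolution instead of a handful of fixed parameters.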

  4. The effect of a novel minimally invasive strategy for infected necrotizing pancreatitis.

    PubMed

    Tong, Zhihui; Shen, Xiao; Ke, Lu; Li, Gang; Zhou, Jing; Pan, Yiyuan; Li, Baiqiang; Yang, Dongliang; Li, Weiqin; Li, Jieshou

    2017-11-01

    The step-up approach, consisting of multiple minimally invasive techniques, has gradually become the mainstream for managing infected pancreatic necrosis (IPN). In the present study, we aimed to compare the safety and efficacy of a novel four-step approach and the conventional approach in managing IPN. According to the treatment strategy, consecutive patients fulfilling the inclusion criteria were assigned to two time intervals for a before-and-after comparison: the conventional group (2010-2011) and the novel four-step group (2012-2013). The conventional approach was essentially open necrosectomy for any patient who failed percutaneous drainage of infected necrosis, while the novel approach consisted of four steps in sequence: percutaneous drainage, negative pressure irrigation, endoscopic necrosectomy, and open necrosectomy. The primary endpoint was major complications (new-onset organ failure, sepsis or local complications, etc.). Secondary endpoints included mortality during hospitalization, need for emergency surgery, and duration of organ failure and sepsis. Of the 229 recruited patients, 92 were treated with the conventional approach and the remaining 137 were managed with the novel four-step approach. New-onset major complications occurred in 72 patients (78.3%) in the conventional group and 75 patients (54.7%) in the four-step group (p < 0.001). Although there was no statistical difference in mortality between the two groups (p = 0.403), significantly fewer patients in the four-step group required emergency surgery compared with the conventional group [14.6% (20/137) vs. 45.6% (42/92), p < 0.001]. In addition, stratified analysis revealed that the four-step group presented a significantly lower incidence of new-onset organ failure and other major complications in patients with the most severe type of acute pancreatitis. Compared with the conventional approach, the novel four-step approach significantly reduced the rate of new-onset major complications and the need for emergency operations in treating IPN, especially in patients with the most severe type of acute pancreatitis.

  5. Flight Test Comparison of Different Adaptive Augmentations for Fault Tolerant Control Laws for a Modified F-15 Aircraft

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Hanson, Curtis E.; Lee, James A.; Kaneshige, John T.

    2009-01-01

    This report describes the improvements and enhancements to a neural network based approach for directly adapting to aerodynamic changes resulting from damage or failures. This research is a follow-on effort to flight tests performed on the NASA F-15 aircraft as part of the Intelligent Flight Control System research effort. Previous flight test results demonstrated the potential for performance improvement under destabilizing damage conditions. Little or no improvement was provided under simulated control surface failures, however, and the adaptive system was prone to pilot-induced oscillations. An improved controller was designed to reduce the occurrence of pilot-induced oscillations and increase robustness to failures in general. This report presents an analysis of the neural networks used in the previous flight test, the improved adaptive controller, and the baseline case with no adaptation. Flight test results demonstrate significant improvement in performance by using the new adaptive controller compared with the previous adaptive system and the baseline system for control surface failures.

  6. Measurement of multiaxial ply strength by an off-axis flexure test

    NASA Technical Reports Server (NTRS)

    Crews, John H., Jr.; Naik, Rajiv A.

    1992-01-01

    An off-axis flexure (OAF) test was performed to measure ply strength under multiaxial stress states. This test involves unidirectional off-axis specimens loaded in bending, using an apparatus that allows these anisotropic specimens to twist as well as flex without the complications of a resisting torque. A 3D finite element stress analysis verified that simple beam theory could be used to compute the specimen bending stresses at failure. Unidirectional graphite/epoxy specimens with fiber angles ranging from 90 deg to 15 deg have combined normal and shear stresses on their failure planes that are typical of 45 deg plies in structural laminates. Tests for a range of stress states with AS4/3501-6 specimens showed that both normal and shear stresses on the failure plane influenced cracking resistance. This OAF test may prove to be useful for generating data needed to predict ply cracking in composite structures and may also provide an approach for studying fiber-matrix interface failures under stress states typical of structures.

  7. Relationship between Sponsorship and Failure Rate of Dental Implants: A Systematic Approach

    PubMed Central

    Popelut, Antoine; Valet, Fabien; Fromentin, Olivier; Thomas, Aurélie; Bouchard, Philippe

    2010-01-01

    Background The number of dental implant treatments increases annually. Dental implants are manufactured by competing companies. Systematic reviews and meta-analyses have shown a clear association between pharmaceutical industry funding of clinical trials and pro-industry results. So far, the impact of industry sponsorship on the outcomes and conclusions of dental implant clinical trials has never been explored. The aim of the present study was to examine financial sponsorship of dental implant trials, and to evaluate whether research funding sources may affect the annual failure rate. Methods and Findings A systematic approach was used to identify systematic reviews published between January 1993 and December 2008 that specifically deal with the length of survival of dental implants. Primary articles were extracted from these reviews. The failure rate of the dental implants included in the trials was calculated. Data on publication year, Impact Factor, prosthetic design, periodontal status reporting, number of dental implants included in the trials, methodological quality of the studies, presence of a statistical advisor, and financial sponsorship were extracted by two independent reviewers (kappa = 0.90; 95% CI [0.77–1.00]). Univariate quasi-Poisson regression models and multivariate analysis were used to identify variables that were significantly associated with failure rates. Five systematic reviews were identified, from which 41 analyzable trials were extracted. The mean annual failure rate estimate was 1.09% (95% CI [0.84–1.42]). The funding source was not reported in 63% of the trials (26/41). Sixty-six percent of the trials were considered as having a risk of bias (27/41). Given study age, both industry-associated (OR = 0.21; 95% CI [0.12–0.38]) and unknown funding source trials (OR = 0.33; 95% CI [0.21–0.51]) had lower annual failure rates compared with non-industry-associated trials. A conflict of interest statement was disclosed in 2 trials. Conclusions When controlling for other factors, the probability of annual failure for industry-associated trials is significantly lower compared with non-industry-associated trials. This bias may have significant implications on tooth extraction decision making, research on tooth preservation, and governmental health care policies. PMID:20422000
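
    A minimal sketch of the kind of quasi-Poisson regression described above, using statsmodels; the failure counts, exposures, and funding indicator below are fabricated placeholders rather than the study's data:

        import numpy as np
        import statsmodels.api as sm

        # Fabricated per-trial data: implant failures, implant-years at risk,
        # and an industry-funding indicator (1 = industry associated)
        failures = np.array([3, 1, 7, 2, 5, 0, 4, 6])
        exposure = np.array([280.0, 150.0, 420.0, 300.0, 350.0, 120.0, 390.0, 410.0])
        industry = np.array([1, 1, 0, 1, 0, 1, 0, 0])

        X = sm.add_constant(industry)
        # scale='X2' estimates a dispersion parameter: a quasi-Poisson rate model
        model = sm.GLM(failures, X, family=sm.families.Poisson(),
                       exposure=exposure).fit(scale='X2')
        print(model.summary())

    The quasi-Poisson scale inflates the standard errors when the counts are overdispersed, which matters when comparing failure rates across heterogeneous trials.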

  8. Reliability Assessment for Low-cost Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Freeman, Paul Michael

    Existing low-cost unmanned aerospace systems are unreliable, and engineers must blend reliability analysis with fault-tolerant control in novel ways. This dissertation introduces the University of Minnesota unmanned aerial vehicle flight research platform, a comprehensive simulation and flight test facility for reliability and fault-tolerance research. An industry-standard reliability assessment technique, the failure modes and effects analysis, is performed for an unmanned aircraft. Particular attention is afforded to the control surface and servo-actuation subsystem. Maintaining effector health is essential for safe flight; failures may lead to loss of control incidents. Failure likelihood, severity, and risk are qualitatively assessed for several effector failure modes. Design changes are recommended to improve aircraft reliability based on this analysis. Most notably, the control surfaces are split, providing independent actuation and dual-redundancy. The simulation models for control surface aerodynamic effects are updated to reflect the split surfaces using a first-principles geometric analysis. The failure modes and effects analysis is extended by using a high-fidelity nonlinear aircraft simulation. A trim state discovery is performed to identify the achievable steady, wings-level flight envelope of the healthy and damaged vehicle. Tolerance of elevator actuator failures is studied using familiar tools from linear systems analysis. This analysis reveals significant inherent performance limitations for candidate adaptive/reconfigurable control algorithms used for the vehicle. Moreover, it demonstrates how these tools can be applied in a design feedback loop to make safety-critical unmanned systems more reliable. Control surface impairments that do occur must be quickly and accurately detected. This dissertation also considers fault detection and identification for an unmanned aerial vehicle using model-based and model-free approaches and applies those algorithms to experimental faulted and unfaulted flight test data. Flight tests are conducted with actuator faults that affect the plant input and sensor faults that affect the vehicle state measurements. A model-based detection strategy is designed and uses robust linear filtering methods to reject exogenous disturbances, e.g. wind, while providing robustness to model variation. A data-driven algorithm is developed to operate exclusively on raw flight test data without physical model knowledge. The fault detection and identification performance of these complementary but different methods is compared. Together, enhanced reliability assessment and multi-pronged fault detection and identification techniques can help to bring about the next generation of reliable low-cost unmanned aircraft.

  9. Use of failure mode, effect and criticality analysis to improve safety in the medication administration process.

    PubMed

    Rodriguez-Gonzalez, Carmen Guadalupe; Martin-Barbero, Maria Luisa; Herranz-Alonso, Ana; Durango-Limarquez, Maria Isabel; Hernandez-Sampelayo, Paloma; Sanjurjo-Saez, Maria

    2015-08-01

    To critically evaluate the causes of preventable adverse drug events during the nurse medication administration process in inpatient units with computerized prescription order entry and profiled automated dispensing cabinets in order to prioritize interventions that need to be implemented and to evaluate the impact of specific interventions on the criticality index. This is a failure mode, effects and criticality analysis (FMECA) study. A multidisciplinary consensus committee composed of pharmacists, nurses and doctors evaluated the process of administering medications in a hospital setting in Spain. By analysing the process, all failure modes were identified and criticality was determined by rating severity, frequency and likelihood of failure detection on a scale of 1 to 10, using adapted versions of already published scales. Safety strategies were identified and prioritized. Through consensus, the committee identified eight processes and 40 failure modes, of which 20 were classified as high risk. The sum of the criticality indices was 5254. For the potential high-risk failure modes, 21 different potential causes were found resulting in 24 recommendations. Thirteen recommendations were prioritized and developed over a 24-month period, reducing total criticality from 5254 to 3572 (a 32.0% reduction). The recommendations with a greater impact on criticality were the development of an electronic medication administration record (-582) and the standardization of intravenous drug compounding in the unit (-168). Other improvements, such as barcode medication administration technology (-1033), were scheduled for a longer period of time because of lower feasibility. FMECA is a useful approach that can improve the medication administration process. © 2015 John Wiley & Sons, Ltd.

  10. Intelligent on-line fault tolerant control for unanticipated catastrophic failures.

    PubMed

    Yen, Gary G; Ho, Liang-Wei

    2004-10-01

    As dynamic systems become increasingly complex, experience rapidly changing environments, and encounter a greater variety of unexpected component failures, solving the control problems of such systems is a grand challenge for control engineers. Traditional control design techniques are not adequate to cope with these systems, which may suffer from unanticipated dynamic failures. In this research work, we investigate the on-line fault tolerant control problem and propose an intelligent on-line control strategy to handle the desired trajectories tracking problem for systems suffering from various unanticipated catastrophic faults. Through theoretical analysis, the sufficient condition of system stability has been derived and two different on-line control laws have been developed. The approach of the proposed intelligent control strategy is to continuously monitor the system performance and identify what the system's current state is by using a fault detection method based upon our best knowledge of the nominal system and nominal controller. Once a fault is detected, the proposed intelligent controller will adjust its control signal to compensate for the unknown system failure dynamics by using an artificial neural network as an on-line estimator to approximate the unexpected and unknown failure dynamics. The first control law is derived directly from the Lyapunov stability theory, while the second control law is derived based upon the discrete-time sliding mode control technique. Both control laws have been implemented in a variety of failure scenarios to validate the proposed intelligent control scheme. The simulation results, including a three-tank benchmark problem, comply with theoretical analysis and demonstrate a significant improvement in trajectory following performance based upon the proposed intelligent control strategy.

  11. Application of failure mode and effects analysis (FMEA) to pretreatment phases in tomotherapy.

    PubMed

    Broggi, Sara; Cantone, Marie Claire; Chiara, Anna; Di Muzio, Nadia; Longobardi, Barbara; Mangili, Paola; Veronese, Ivan

    2013-09-06

    The aim of this paper was the application of the failure mode and effects analysis (FMEA) approach to assess the risks for patients undergoing radiotherapy treatments performed by means of a helical tomotherapy unit. FMEA was applied to the preplanning imaging, volume determination, and treatment planning stages of the tomotherapy process and consisted of three steps: 1) identification of the involved subprocesses; 2) identification and ranking of the potential failure modes, together with their causes and effects, using the risk priority number (RPN) scoring system; and 3) identification of additional safety measures to be proposed for process quality and safety improvement. The RPN upper threshold for little concern of risk was set at 125. A total of 74 failure modes were identified: 38 in the stage of preplanning imaging and volume determination, and 36 in the stage of planning. The threshold of 125 for RPN was exceeded in four cases: one case in the phase of preplanning imaging and volume determination, and three cases in the stage of planning. The most critical failures appeared related to (i) the wrong or missing definition and contouring of the overlapping regions, (ii) the wrong assignment of the overlap priority to each anatomical structure, (iii) the wrong choice of the computed tomography calibration curve for dose calculation, and (iv) the wrong (or not performed) choice of the number of fractions in the planning station. On the basis of these findings, in addition to the safety strategies already adopted in the clinical practice, novel solutions have been proposed for mitigating the risk of these failures and to increase patient safety.
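
    A minimal sketch of the RPN bookkeeping described above, using the paper's threshold of 125; the occurrence/severity/detectability ratings here are invented for illustration, not the study's values:

        # Hypothetical failure modes with (occurrence, severity, detectability),
        # each rated 1-10
        failure_modes = {
            "wrong overlap-region contouring": (4, 8, 6),
            "wrong CT calibration curve":      (3, 9, 5),
            "wrong number of fractions":       (2, 9, 8),
            "missing structure priority":      (3, 7, 4),
        }

        THRESHOLD = 125  # the paper's upper limit for "little concern of risk"

        def rpn(scores):
            o, s, d = scores
            return o * s * d

        for mode, scores in sorted(failure_modes.items(),
                                   key=lambda kv: -rpn(kv[1])):
            value = rpn(scores)
            flag = "ACTION NEEDED" if value > THRESHOLD else "acceptable"
            print(f"{mode:35s} RPN={value:3d}  {flag}")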

  12. A Hybrid Approach to Composite Damage and Failure Analysis Combining Synergistic Damage Mechanics and Peridynamics

    DTIC Science & Technology

    2016-09-30

    …far from uniform. The final nonuniform distribution of fibers consists of clustered regions and resin pockets. The clustered fiber regions promote… Approach and Results: A novel procedure has been devised to create nonuniform fiber distributions from the initial fiber bundle, which are then used in simulations to produce nonuniform configurations.

  13. Field Programmable Gate Array Reliability Analysis Guidelines for Launch Vehicle Reliability Block Diagrams

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.

    2017-01-01

    Field Programmable Gate Array (FPGA) integrated circuits (ICs) are among the key electronic components in today's complex launch and space vehicle avionic systems, largely due to their superb reprogrammable and reconfigurable capabilities combined with relatively low non-recurring engineering (NRE) costs and a short design cycle. Consequently, FPGAs are prevalent ICs in communication protocols and control signal commands. This paper identifies reliability concerns and high-level guidelines for estimating FPGA total failure rates in a launch vehicle application. The paper discusses hardware, hardware description language, and radiation-induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion discusses the high-level FPGA programming languages and software/code reliability growth. The radiation portion discusses FPGA susceptibility to space environment radiation.
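
    A minimal sketch of the kind of roll-up such guidelines imply, summing hardware, logic-design, and radiation-induced contributions into a total FPGA failure rate; all rates below are fabricated placeholders, not values from the paper:

        import math

        # Hypothetical per-mechanism failure rates in FITs (failures per 1e9 hours)
        lambda_hw = 25.0   # physical die/package failures
        lambda_hdl = 10.0  # residual logic/design faults after reliability growth
        lambda_rad = 40.0  # unmitigated radiation-induced upsets

        lambda_total = lambda_hw + lambda_hdl + lambda_rad  # series roll-up
        mission_hours = 500.0
        p_fail = 1.0 - math.exp(-lambda_total * 1e-9 * mission_hours)
        print(f"total rate: {lambda_total:.0f} FIT, "
              f"mission failure prob: {p_fail:.2e}")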

  14. Experimental analysis of computer system dependability

    NASA Technical Reports Server (NTRS)

    Iyer, Ravishankar K.; Tang, Dong

    1993-01-01

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.

  15. Predictive modeling of dynamic fracture growth in brittle materials with machine learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Bryan A.; Rougier, Esteban; O’Malley, Daniel

    We use simulation data from a high-fidelity Finite-Discrete Element Model to build an efficient Machine Learning (ML) approach to predict fracture growth and coalescence. Our goal is for the ML approach to be used as an emulator in place of the computationally intensive high-fidelity models in an uncertainty quantification framework where thousands of forward runs are required. The failure of materials with various fracture configurations (size, orientation, and the number of initial cracks) is explored and used as data to train our ML model. This novel approach has shown promise in predicting spatial (path to failure) and temporal (time to failure) aspects of brittle material failure. Predictions of where dominant fracture paths formed within a material were ~85% accurate, and the time of material failure deviated from the actual failure time by an average of ~16%. Additionally, the ML model achieves a reduction in computational cost of multiple orders of magnitude.

  16. Predictive modeling of dynamic fracture growth in brittle materials with machine learning

    DOE PAGES

    Moore, Bryan A.; Rougier, Esteban; O’Malley, Daniel; ...

    2018-02-22

    We use simulation data from a high-fidelity Finite-Discrete Element Model to build an efficient Machine Learning (ML) approach to predict fracture growth and coalescence. Our goal is for the ML approach to be used as an emulator in place of the computationally intensive high-fidelity models in an uncertainty quantification framework where thousands of forward runs are required. The failure of materials with various fracture configurations (size, orientation, and the number of initial cracks) is explored and used as data to train our ML model. This novel approach has shown promise in predicting spatial (path to failure) and temporal (time to failure) aspects of brittle material failure. Predictions of where dominant fracture paths formed within a material were ~85% accurate, and the time of material failure deviated from the actual failure time by an average of ~16%. Additionally, the ML model achieves a reduction in computational cost of multiple orders of magnitude.
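
    In the same spirit as the emulator described above, though not the authors' finite-discrete element pipeline, the following is a minimal sketch of training a regression surrogate that maps initial crack configurations to a time-to-failure; the features and target are fabricated:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        n = 500
        # Hypothetical features per sample: crack count, mean size, mean orientation
        X = np.column_stack([rng.integers(1, 20, n).astype(float),
                             rng.uniform(0.1, 2.0, n),
                             rng.uniform(0.0, 90.0, n)])
        # Fabricated target: time to failure falling with crack count and size
        y = 100.0 / (X[:, 0] * X[:, 1]) + rng.normal(0.0, 0.5, n)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        emulator = RandomForestRegressor(n_estimators=200, random_state=0)
        emulator.fit(X_tr, y_tr)
        print(f"held-out R^2: {emulator.score(X_te, y_te):.2f}")

    Once trained, such a surrogate can answer the thousands of forward queries an uncertainty quantification loop requires at a tiny fraction of the simulation cost.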

  17. Lithographic chip identification: meeting the failure analysis challenge

    NASA Astrophysics Data System (ADS)

    Perkins, Lynn; Riddell, Kevin G.; Flack, Warren W.

    1992-06-01

    This paper describes a novel method using stepper photolithography to uniquely identify individual chips for permanent traceability. A commercially available 1X stepper is used to mark chips with an identifier or 'serial number' which can be encoded with relevant information for the integrated circuit manufacturer. The permanent identification of individual chips can improve current methods of quality control, failure analysis, and inventory control. The need for this technology is escalating as manufacturers seek to provide six sigma quality control for their products and trace fabrication problems to their source. This need is especially acute for parts that fail after packaging and are returned to the manufacturer for analysis. Using this novel approach, failure analysis data can be tied back to a particular batch, wafer, or even a position within a wafer. Process control can be enhanced by identifying the root cause of chip failures. Chip identification also addresses manufacturers' concerns about increasing incidences of chip theft. Since chips currently carry no identification other than the manufacturer's name and part number, recovery efforts are hampered by the inability to determine the sales history of a specific packaged chip. A definitive identifier or serial number for each chip would address this concern. The results of chip identification (patent pending) are easily viewed through a low-power microscope. Batch number, wafer number, exposure step, and chip location within the exposure step can be recorded, as can dates and other items of interest. The chip identification procedure and its processing requirements are described. Experimental testing and results are presented, and potential applications are discussed.

  18. An approach to solving large reliability models

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Veeraraghavan, Malathi; Dugan, Joanne Bechta; Trivedi, Kishor S.

    1988-01-01

    This paper describes a unified approach to the problem of solving large, realistic reliability models. The methodology integrates behavioral decomposition, state truncation, and efficient sparse matrix-based numerical methods. The use of fault trees, together with ancillary information regarding dependencies, to automatically generate the underlying Markov model state space is proposed. The effectiveness of this approach is illustrated by modeling a state-of-the-art flight control system and a multiprocessor system. Nonexponential distributions for times to failure of components are assumed in the latter example. The modeling tool used for most of this analysis is HARP (the Hybrid Automated Reliability Predictor).
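
    For the Markov-model step mentioned above, a minimal sketch of solving a small continuous-time Markov reliability model by matrix exponentiation; the three-state structure and the rates are hypothetical, and far smaller than the models HARP targets:

        import numpy as np
        from scipy.linalg import expm

        # States: 0 = both units up, 1 = one unit up, 2 = system failed (absorbing)
        lam, mu = 1e-3, 1e-2  # hypothetical failure and repair rates, per hour
        Q = np.array([[-2 * lam,      2 * lam,  0.0],
                      [      mu, -(mu + lam),   lam],
                      [     0.0,          0.0,  0.0]])  # CTMC generator matrix

        p0 = np.array([1.0, 0.0, 0.0])  # start with both units healthy
        t = 1000.0                      # mission time, hours
        p_t = p0 @ expm(Q * t)
        print(f"unreliability at t={t:.0f} h: {p_t[2]:.4e}")

    State truncation and sparse solvers become essential precisely because realistic models have far more than three states.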

  19. Experience with modified aerospace reliability and quality assurance method for wind turbines

    NASA Technical Reports Server (NTRS)

    Klein, W. E.

    1982-01-01

    The SR&QA approach assures that the machine is not hazardous to the public or operating personnel, can operate unattended on a utility grid, demonstrates reliable operation, and helps establish the quality assurance and maintainability requirements for future wind turbine projects. The approach consisted of a modified failure modes and effects analysis (FMEA) during the design phase, minimal hardware inspection during parts fabrication, and three simple documents to control activities during machine construction and operation. Five years' experience shows that this low-cost approach works well enough that it should be considered by others for similar projects.

  20. A Decentralized Adaptive Approach to Fault Tolerant Flight Control

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Nikulin, Vladimir; Heimes, Felix; Shormin, Victor

    2000-01-01

    This paper briefly reports some results of our study on the application of a decentralized adaptive control approach to a 6 DOF nonlinear aircraft model. The simulation results showed the potential of using this approach to achieve fault tolerant control. Based on this observation and some analysis, the paper proposes a multiple channel adaptive control scheme that makes use of the functionally redundant actuating and sensing capabilities in the model, and explains how to implement the scheme to tolerate actuator and sensor failures. The conditions, under which the scheme is applicable, are stated in the paper.

  1. Use of failure modes, effects, and criticality analysis to compare the vulnerabilities of laparoscopic versus open appendectomy.

    PubMed

    Guida, Edoardo; Rosati, Ubaldo; Pini Prato, Alessio; Avanzini, Stefano; Pio, Luca; Ghezzi, Michele; Jasonni, Vincenzo; Mattioli, Girolamo

    2015-06-01

    To assess the feasibility of applying FMECA to surgery and then to compare the vulnerabilities of laparoscopic versus open appendectomy by using FMECA. The FMECA study was performed on each selected phase of appendectomy and on complication-related data during the period January 1, 2009, to December 31, 2010. The risk analysis phase was completed by evaluation of the criticality index (CI) of each appendectomy-related failure mode (FM). The CI is calculated by multiplying the estimated frequency of occurrence (O) of the FM by the expected severity of the injury to the patient (S) and by the detectability (D) of the FM. In the first year of analysis (2009), 177 appendectomies were performed, 110 open and 67 laparoscopic. Eleven adverse events were related to open appendectomy: 1 bleeding (CI: 8) and 10 postoperative infections (CI: 32). Three adverse events related to the laparoscopic approach were recorded: 1 postoperative infection (CI: 8) and 2 incorrect extractions of the appendix through the umbilical port (CI: 6). In the second year of analysis (2010), 158 appendectomies were performed, 69 open and 89 laparoscopic. Four adverse events were related to open appendectomy: 1 incorrect management of the histological specimen (CI: 2), 1 dehiscence of the surgical wound (CI: 6), and 2 infections (CI: 6). No adverse events were recorded for the laparoscopic approach. FMECA helped the staff compare the 2 approaches through an accurate step-by-step analysis, highlighting that laparoscopic appendectomy is feasible and safe, and is associated with a lower incidence of infection and other complications, a reduced length of hospital stay, and an apparently lower procedure-related risk.

  2. A Market Failure Approach to Linguistic Justice

    ERIC Educational Resources Information Center

    Robichaud, David

    2017-01-01

    This paper will consider language management from the perspective of efficiency, and will set the grounds for a new approach to linguistic justice: a market failure approach. The principle of efficiency emphasises the need to satisfy individuals' preferences in an optimal way. Applying this principle with regard to language would justify language…

  3. Prognostics of Power MOSFET

    NASA Technical Reports Server (NTRS)

    Celaya, Jose Ramon; Saxena, Abhinav; Vashchenko, Vladislav; Saha, Sankalita; Goebel, Kai Frank

    2011-01-01

    This paper demonstrates how to apply prognostics to power MOSFETs (metal oxide semiconductor field effect transistors). The methodology uses thermal cycling to age devices and Gaussian process regression to perform prognostics. The approach is validated with experiments on 100 V power MOSFETs. The failure mechanism for the stress conditions is determined to be die-attachment degradation. The change in ON-state resistance is used as a precursor of failure due to its dependence on junction temperature. The experimental data are augmented with a finite element analysis simulation based on a two-transistor model. The simulation assists in the interpretation of the degradation phenomena and of the change in SOA (safe operating area).
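
    A minimal sketch of the Gaussian process regression step, extrapolating a degradation precursor (normalized ON-resistance increase) toward a failure threshold; the aging data, the linear-trend kernel, and the threshold are all fabricated assumptions, not the paper's values:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel

        # Hypothetical aging data: thermal cycles vs. % increase in ON-resistance
        cycles = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])[:, None]
        delta_r = np.array([0.0, 0.8, 1.9, 3.4, 5.2, 7.6])  # percent

        # A simple linear-trend kernel keeps the extrapolation monotone
        kernel = DotProduct(sigma_0=1.0) + WhiteKernel(noise_level=0.1)
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gpr.fit(cycles, delta_r)

        future = np.arange(0.0, 501.0, 25.0)[:, None]
        mean, std = gpr.predict(future, return_std=True)

        FAIL_AT = 10.0  # assumed failure threshold (% increase)
        crossed = future[mean >= FAIL_AT]
        if crossed.size:
            print(f"predicted threshold crossing near cycle {crossed[0, 0]:.0f}")
        else:
            print("threshold not reached within prediction horizon")

    The predictive standard deviation returned alongside the mean is what makes GPR attractive for prognostics: it yields an uncertainty band on the remaining useful life, not just a point estimate.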

  4. USAF Evaluation of an Automated Adaptive Flight Training System

    DTIC Science & Technology

    1975-10-01

    …system. C. What is the most effective way to utilize the system in operational training? Student opinion on this question is equally divided… None, utility hydraulic failure, flap failure, left engine failure, right engine failure, stab aug failure, no-gyro approach procedure…

  5. Visualization of Concurrent Program Executions

    NASA Technical Reports Server (NTRS)

    Artho, Cyrille; Havelund, Klaus; Honiden, Shinichi

    2007-01-01

    Various program analysis techniques are efficient at discovering failures and properties. However, it is often difficult to evaluate results, such as program traces. This calls for abstraction and visualization tools. We propose an approach based on UML sequence diagrams, addressing shortcomings of such diagrams for concurrency. The resulting visualization is expressive and provides all the necessary information at a glance.

  6. Teachers Making Sense of Result-Oriented Teams: A Cognitive Anthropological Approach to Educational Change

    ERIC Educational Resources Information Center

    Wierenga, Sijko J.; Kamsteeg, Frans H.; Simons, P. Robert Jan; Veenswijk, Marcel

    2015-01-01

    Studies on educational change efforts abound but generally limit themselves to post hoc explanations of failure and success. Such explanations are rarely turned into attempts at providing models for predicting change outcomes. The present study tries to develop such a model based on the teachers' impact analysis of a management-driven…

  7. Narratives of Success: A Retrospective Trajectory Analysis of Men of Color Who Successfully Transferred from the Community College

    ERIC Educational Resources Information Center

    Urias, Marissa Vasquez; Falcon, Vannessa; Harris, Frank, III; Wood, J. Luke

    2016-01-01

    With the use of a narrative approach to inquiry, this chapter seeks to reframe deficit-oriented research on men of color, which often focuses on patterns of failure and underachievement, by exploring the pathways of community college men of color who successfully transferred to 4-year institutions.

  8. Factors Leading to Success in Diversified Occupation: A Livelihood Analysis in India

    ERIC Educational Resources Information Center

    Saha, Biswarup; Bahal, Ram

    2015-01-01

    Purpose: Livelihood diversification is a sound alternative for higher economic growth and its success or failure is conditioned by the interplay of a multitude of factors. The study of the profile of the farmers in which they operate is important to highlight the factors leading to success in diversified livelihoods. Design/Methodology/Approach: A…

  9. Generic Sensor Failure Modeling for Cooperative Systems.

    PubMed

    Jäger, Georg; Zug, Sebastian; Casimiro, António

    2018-03-20

    The advent of cooperative systems entails a dynamic composition of their components. As this contrasts with current, statically composed systems, new approaches for maintaining their safety are required. In that endeavor, we propose an integration step that evaluates the failure model of shared information in relation to an application's fault tolerance, and thereby promises maintainability of such a system's safety. However, it also poses new requirements on failure models, which are not fulfilled by state-of-the-art approaches. Consequently, this work presents a mathematically defined generic failure model as well as a processing chain for automatically extracting such failure models from empirical data. By examining data from a Sharp GP2D12 distance sensor, we show that the generic failure model not only fulfills the predefined requirements but also models failure characteristics appropriately when compared to traditional techniques.

  10. Generic Sensor Failure Modeling for Cooperative Systems

    PubMed Central

    Jäger, Georg; Zug, Sebastian

    2018-01-01

    The advent of cooperative systems entails a dynamic composition of their components. As this contrasts with current, statically composed systems, new approaches for maintaining their safety are required. In that endeavor, we propose an integration step that evaluates the failure model of shared information in relation to an application's fault tolerance, and thereby promises maintainability of such a system's safety. However, it also poses new requirements on failure models, which are not fulfilled by state-of-the-art approaches. Consequently, this work presents a mathematically defined generic failure model as well as a processing chain for automatically extracting such failure models from empirical data. By examining data from a Sharp GP2D12 distance sensor, we show that the generic failure model not only fulfills the predefined requirements but also models failure characteristics appropriately when compared to traditional techniques. PMID:29558435
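
    A minimal sketch of the kind of empirical failure-model extraction described, fitting a simple bias/noise/fault-rate model to sensor readings; the trace below is synthetic, standing in for a real GP2D12 recording:

        import numpy as np

        # Synthetic stand-in for a GP2D12 trace: fixed target at 40 cm
        rng = np.random.default_rng(2)
        true_dist = 40.0
        readings = true_dist + rng.normal(0.0, 1.2, 1000)  # Gaussian sensor noise
        faulty = rng.random(1000) < 0.01                   # sporadic fault events
        readings[faulty] = 80.0                            # hypothetical fault value

        errors = readings - true_dist
        nominal = errors[~faulty]
        print(f"bias={nominal.mean():.2f} cm, sigma={nominal.std(ddof=1):.2f} cm, "
              f"fault rate={faulty.mean():.3%}")

    A cooperative application could compare such an extracted model against its own fault-tolerance budget before accepting the shared sensor data.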

  11. Potent influence of obesity on suppression of plasma B-type natriuretic peptide levels in patients with acute heart failure: An approach using covariance structure analysis.

    PubMed

    Kinoshita, Koji; Kawai, Makoto; Minai, Kosuke; Ogawa, Kazuo; Inoue, Yasunori; Yoshimura, Michihiro

    2016-07-15

    Plasma B-type natriuretic peptide (BNP) levels may vary widely among patients with similar stages of heart failure, in whom obesity might be the only factor reducing plasma BNP levels. We investigated the effect of obesity and body mass index (BMI) on plasma BNP levels using serial measurements before and after treatment (pre- and post-BNP and pre- and post-BMI) in patients with acute heart failure. Multiple regression analysis and covariance structure analysis were performed to study the interactions between clinical factors in 372 patients. The pre-BMI was shown as a combination index of obesity and fluid accumulation, whereas the post-BMI was a conventional index of obesity. There was a significant inverse correlation between BMI and BNP in each condition before and after treatment for heart failure. The direct significant associations of the log pre-BNP with the log post-BNP (β: 0.387), the post-BMI (β: -0.043), and the pre-BMI (β: 0.030) were analyzed by using structural equation modeling. The post-BMI was inversely correlated, but importantly, the pre-BMI was positively correlated, with the log pre-BNP, because the pre-BMI probably entailed an element of fluid accumulation. There were few patients with extremely high levels of pre-BNP among those with high post-BMI, due to suppressed secretion of BNP. The low plasma BNP levels in truly obese patients with acute heart failure are of concern, because plasma BNP cannot increase in such patients. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  12. Physics-based Entry, Descent and Landing Risk Model

    NASA Technical Reports Server (NTRS)

    Gee, Ken; Huynh, Loc C.; Manning, Ted

    2014-01-01

    A physics-based risk model was developed to assess the risk associated with thermal protection system (TPS) failures during the entry, descent and landing phase of a manned spacecraft mission. In the model, entry trajectories were computed using a three-degree-of-freedom trajectory tool, the aerothermodynamic heating environment was computed using an engineering-level computational tool, and the thermal response of the TPS material was modeled using a one-dimensional thermal response tool. The model was capable of modeling the effect of micrometeoroid and orbital debris (MMOD) impact damage on the TPS thermal response. A Monte Carlo analysis was used to determine the effects of uncertainties in the vehicle state at Entry Interface, aerothermodynamic heating and material properties on the performance of the TPS design. The failure criterion was set as a temperature limit at the bondline between the TPS and the underlying structure. Both direct computation and response surface approaches were used to compute the risk. The model was applied to a generic manned space capsule design. The effects of material property uncertainty and MMOD damage on the risk of failure were analyzed, and a comparison of the direct computation and response surface approaches was undertaken.
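
    A schematic version of the Monte Carlo step is sketched below: uncertain inputs are sampled, a one-line surrogate stands in for the trajectory/aerothermal/thermal-response tool chain, and the failure probability is the fraction of samples whose bondline temperature exceeds the limit. All distributions, the surrogate, and the 560 K limit are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(42)
      N = 100_000
      T_LIMIT = 560.0                                   # bondline temperature limit (K), assumed

      heat_load = rng.normal(1.0, 0.08, N)              # normalized aerothermal uncertainty
      conductivity = rng.normal(1.0, 0.05, N)           # TPS material property uncertainty
      thickness = rng.normal(1.0, 0.02, N)              # as-built thickness variation

      # Stand-in for the 1-D thermal response tool: hotter bondline for higher
      # heat load, higher conductivity, and thinner TPS.
      T_bondline = 500.0 * heat_load * conductivity / thickness

      p_fail = np.mean(T_bondline > T_LIMIT)
      print(f"estimated probability of bondline over-temperature: {p_fail:.2e}")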

  13. Complexity in congestive heart failure: A time-frequency approach

    NASA Astrophysics Data System (ADS)

    Banerjee, Santo; Palit, Sanjay K.; Mukherjee, Sayan; Ariffin, MRK; Rondoni, Lamberto

    2016-03-01

    Reconstruction of phase space is an effective method to quantify the dynamics of a signal or a time series. Various phase space reconstruction techniques have been investigated; however, there are open issues concerning optimal reconstructions and the best possible choice of the reconstruction parameters. This research introduces the idea of gradient cross recurrence (GCR) and mean gradient cross recurrence density, and shows that reconstructions in the time-frequency domain preserve more information about the dynamics than the optimal reconstructions in the time domain. This analysis is further extended to ECG signals of normal and congestive heart failure patients. Using another newly introduced measure, the gradient cross recurrence period density entropy, the two classes of ECG signals can be classified with a proper threshold. This analysis can be applied to quantifying and distinguishing biomedical and other nonlinear signals.

  14. Reliability modelling and analysis of thermal MEMS

    NASA Astrophysics Data System (ADS)

    Muratet, Sylvaine; Lavu, Srikanth; Fourniols, Jean-Yves; Bell, George; Desmulliez, Marc P. Y.

    2006-04-01

    This paper presents a MEMS reliability study methodology based on the novel concept of 'virtual prototyping'. This methodology can be used for the development of reliable sensors or actuators and also to characterize their behaviour under specific use conditions and applications. The methodology is demonstrated on the U-shaped micro electro-thermal actuator used as a test vehicle. To demonstrate this approach, a 'virtual prototype' has been developed with the modeling tools MATLAB and VHDL-AMS. A best-practice FMEA (Failure Mode and Effect Analysis) is applied to the thermal MEMS to investigate and assess the failure mechanisms. The reliability study is performed by injecting the identified faults into the 'virtual prototype'. The reliability characterization methodology predicts the evolution of the behavior of these MEMS as a function of the number of cycles of operation and of specific operational conditions.

  15. Optimization Testbed Cometboards Extended into Stochastic Domain

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.; Patnaik, Surya N.

    2010-01-01

    COMparative Evaluation Testbed of Optimization and Analysis Routines for the Design of Structures (CometBoards) is a multidisciplinary design optimization software. It was originally developed for deterministic calculation and has now been extended into the stochastic domain for structural design problems. For deterministic problems, CometBoards is introduced through its subproblem solution strategy as well as the approximation concept in optimization. In the stochastic domain, a design is formulated as a function of the risk, or reliability. The optimum solution, including the weight of the structure, is also obtained as a function of reliability. Weight plotted versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to 50 percent probability of success, or one failure in two samples. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, corresponding to a reliability of unity. Weight can be reduced to a small value for the most failure-prone design, with a compromised reliability approaching zero. The stochastic design optimization (SDO) capability for an industrial problem was obtained by combining three codes: the MSC/Nastran code was the deterministic analysis tool; the fast probabilistic integrator (the FPI module of the NESSUS software) was the probabilistic calculator; and CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated with an academic example and a real-life airframe component made of metallic and composite materials.
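
    The inverted-S relationship can be reproduced with a simple stress-strength interference model, sketched below. The assumption that structural capacity scales linearly with weight, and the load statistics, are ours for illustration, not CometBoards' internal formulation; reliability is the probability that capacity exceeds the random load.

      import numpy as np
      from scipy import stats

      load_mean, load_sd = 100.0, 15.0                  # applied load statistics, assumed
      for weight in [50, 75, 100, 125, 150]:
          capacity = 1.2 * weight                       # strength grows with structural weight
          reliability = stats.norm.cdf(capacity, load_mean, load_sd)
          print(f"weight={weight:5.1f}  reliability={reliability:.4f}")
      # Sweeping weight finely traces the inverted-S curve; 50 percent
      # reliability occurs where the capacity equals the mean load.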

  16. EEMD-based multiscale ICA method for slewing bearing fault detection and diagnosis

    NASA Astrophysics Data System (ADS)

    Žvokelj, Matej; Zupan, Samo; Prebil, Ivan

    2016-05-01

    A novel multivariate and multiscale statistical process monitoring method is proposed with the aim of detecting incipient failures in large slewing bearings, where subjective influence plays a minor role. The proposed method integrates the strengths of the Independent Component Analysis (ICA) multivariate monitoring approach with the benefits of Ensemble Empirical Mode Decomposition (EEMD), which adaptively decomposes signals into different time scales and can thus cope with multiscale system dynamics. The method, named EEMD-based multiscale ICA (EEMD-MSICA), not only enables bearing fault detection but also offers a mechanism for multivariate signal denoising and, in combination with Envelope Analysis (EA), a diagnostic tool. The multiscale nature of the proposed approach makes the method well suited to data which emanate from bearings in complex real-world rotating machinery and frequently represent the cumulative effect of many underlying phenomena occupying different regions of the time-frequency plane. The efficiency of the proposed method was tested on simulated as well as real vibration and Acoustic Emission (AE) signals obtained by conducting an accelerated run-to-failure lifetime experiment on a purpose-built laboratory slewing bearing test stand. The ability to detect and locate the early-stage rolling-sliding contact fatigue failure of the bearing indicates that AE and vibration signals carry sufficient information on the bearing condition and that the developed EEMD-MSICA method is able to effectively extract it, thereby representing a reliable bearing fault detection and diagnosis strategy.
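
    Of the building blocks named above, the Envelope Analysis (EA) step is easy to sketch in isolation: a bearing defect amplitude-modulates a structural resonance, and demodulating with the Hilbert transform exposes the fault repetition rate. The simulated signal, sampling rate, and 37 Hz fault frequency below are illustrative; the EEMD and ICA stages are omitted.

      import numpy as np
      from scipy.signal import hilbert

      fs = 20_000                                       # sampling rate (Hz), assumed
      t = np.arange(0, 1.0, 1 / fs)
      f_fault = 37.0                                    # hypothetical defect repetition rate (Hz)
      # Bearing-like signal: 3 kHz resonance amplitude-modulated at f_fault.
      carrier = np.sin(2 * np.pi * 3000 * t)
      signal = (1 + 0.5 * np.sin(2 * np.pi * f_fault * t)) * carrier

      envelope = np.abs(hilbert(signal))                # demodulate the resonance
      spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
      freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
      print(f"dominant envelope frequency: {freqs[np.argmax(spectrum)]:.1f} Hz")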

  17. Vascular access in lipoprotein apheresis: a retrospective analysis from the UK's largest lipoprotein apheresis centre.

    PubMed

    Doherty, Daniel J; Pottle, Alison; Malietzis, George; Hakim, Nadey; Barbir, Mahmoud; Crane, Jeremy S

    2018-01-01

    Lipoprotein apheresis (LA) has proven to be an effective, safe and life-saving therapy. Vascular access is needed to facilitate this treatment but has recognised complications. Despite consistency in treatment indication and duration, there are no guidelines in place. The aim of this study is to characterise vascular access practice at the UK's largest LA centre and put forward suggestions for future approaches. A retrospective analysis of vascular access strategies was undertaken in all patients who received LA treatment in the low-density lipoprotein (LDL) Apheresis Unit at Harefield Hospital (Middlesex, UK) from November 2000 to March 2016. Fifty-three former and current patients underwent 4260 LA treatments. Peripheral vein cannulation represented 79% of initial vascular access strategies, with arteriovenous (AV) fistula use accounting for 15%. The last used method of vascular access was peripheral vein cannulation in 57% versus AV fistula in 32%. The total AV fistula failure rate was 37%. Peripheral vein cannulation remains the most common method to facilitate LA. Practice trends indicate a move towards AV fistula creation, the approach favoured by the expert body in this area. The AV fistula failure rate is high and of great concern; we therefore suggest the implementation of upper limb ultrasound vascular mapping in all patients who meet treatment eligibility criteria. We encourage close ties between apheresis units and specialist surgical centres to facilitate patient counselling and monitoring. Further prospective data regarding fistula failure are needed in this expanding treatment field.

  18. Failure Predictions of Out-of-Autoclave Sandwich Joints with Delaminations under Flexure Loads

    NASA Technical Reports Server (NTRS)

    Nordendale, Nikolas; Goyal, Vinay; Lundgren, Eric; Patel, Dhruv; Farrokh, Babak; Jones, Justin; Fischetti, Grace; Segal, Kenneth

    2015-01-01

    An analysis and a test program were conducted to investigate the damage tolerance of composite sandwich joints. The joints contained a single circular delamination between the face-sheet and the doubler. The coupons were fabricated through out-of-autoclave (OOA) processes, a technology NASA is investigating for joining large composite sections. The four-point bend flexure test was used to induce compression loading into the side of the joint where the delamination was placed. The compression side was chosen since compression tends to be one of the most critical loads in launch vehicles. Autoclave cure was used to manufacture the composite sandwich sections, while the doubler was co-bonded onto the sandwich face-sheet using an OOA process after the sandwich panels were cured. A building block approach was adopted to characterize the mechanical properties of the joint material, including the fracture toughness between the doubler and face-sheet. Twelve four-point-bend samples were tested, six in the sandwich core ribbon orientation and six in the sandwich core cross-ribbon direction. Analysis predicted failure initiation and propagation at the pre-delaminated location, consistent with experimental observations. A building block approach using fracture analysis methods predicted failure loads in close agreement with tests. This investigation demonstrated a small strength reduction due to a flaw of significant size compared to the width of the sample. Therefore, concerns about bonding an OOA material to an in-autoclave material were mitigated for the geometries, materials, and load configurations considered.

  19. Failure Predictions of Out-of-Autoclave Sandwich Joints with Delaminations Under Flexure Loads

    NASA Technical Reports Server (NTRS)

    Nordendale, Nikolas A.; Goyal, Vinay K.; Lundgren, Eric C.; Patel, Dhruv N.; Farrokh, Babak; Jones, Justin; Fischetti, Grace; Segal, Kenneth N.

    2015-01-01

    An analysis and a test program were conducted to investigate the damage tolerance of composite sandwich joints. The joints contained a single circular delamination between the face-sheet and the doubler. The coupons were fabricated through out-of-autoclave (OOA) processes, a technology NASA is investigating for joining large composite sections. The four-point bend flexure test was used to induce compression loading into the side of the joint where the delamination was placed. The compression side was chosen since compression tends to be one of the most critical loads in launch vehicles. Autoclave cure was used to manufacture the composite sandwich sections, while the doubler was co-bonded onto the sandwich face-sheet using an OOA process after the sandwich panels were cured. A building block approach was adopted to characterize the mechanical properties of the joint material, including the fracture toughness between the doubler and face-sheet. Twelve four-point-bend samples were tested, six in the sandwich core ribbon orientation and six in the sandwich core cross-ribbon direction. Analysis predicted failure initiation and propagation at the pre-delaminated location, consistent with experimental observations. A building block approach using fracture analysis methods predicted failure loads in close agreement with tests. This investigation demonstrated a small strength reduction due to a flaw of significant size compared to the width of the sample. Therefore, concerns about bonding an OOA material to an in-autoclave material were mitigated for the geometries, materials, and load configurations considered.

  20. Documenting Liquefaction Failures Using Satellite Remote Sensing and Artificial Intelligence Algorithms

    NASA Astrophysics Data System (ADS)

    Oommen, T.; Baise, L. G.; Gens, R.; Prakash, A.; Gupta, R. P.

    2009-12-01

    Historically, earthquake induced liquefaction is known to have caused extensive damage around the world. Therefore, there is a compelling need to characterize and map liquefaction after a seismic event. Currently, after an earthquake event, field-based mapping of liquefaction is sporadic and limited due to inaccessibility, the short life of the failures, difficulties in mapping large areal extents, and lack of resources. We hypothesize that as liquefaction occurs in saturated granular soils due to an increase in pore pressure, the liquefaction related terrain changes should have an associated increase in soil moisture with respect to the surrounding non-liquefied regions. The increase in soil moisture affects the thermal emittance and, hence, change detection using pre- and post-event thermal infrared (TIR) imagery is suitable for identifying areas that have undergone post-earthquake liquefaction. Though change detection using TIR images gives the first indication of areas of liquefaction, the spatial resolution of TIR images is typically coarser than the resolution of corresponding visible, near-infrared (NIR), and shortwave infrared (SWIR) images. We hypothesize that liquefaction induced changes in the soil and associated surface effects cause textural and spectral changes in images acquired in the visible, NIR, and SWIR. Although these changes can be from various factors, a synergistic approach taking advantage of the thermal signature variation due to changing soil moisture condition, together with the spectral information from high resolution visible, NIR, and SWIR bands, can help to narrow down the locations of post-event liquefaction for regional documentation. In this study, we analyze the applicability of combining various spectral bands from different satellites (Landsat, Terra-MISR, IRS-1C, and IRS-1D) for documenting liquefaction failures associated with the magnitude 7.6 earthquake that occurred in Bhuj, India, in 2001. We combine the various spectral bands by neighborhood correlation image analysis using an artificial intelligence algorithm called the support vector machine to remotely identify and document liquefaction failures across a region, and assess the reliability and accuracy of the thermal remote sensing approach in documenting regional liquefaction failures. Finally, we present the applicability of the satellite data analyzed and the appropriateness of a multisensor and multispectral approach for documenting liquefaction related failures.
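
    A toy version of the supervised classification step is sketched below: a support vector machine separating liquefied from non-liquefied pixels using pre/post-event band changes. The three synthetic features (a thermal change standing in for the soil-moisture signal plus two reflective-band changes) and their class statistics are stand-ins for the real multisensor stack.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n = 1000
      # Columns: delta-TIR (soil-moisture proxy), delta-NIR, delta-SWIR texture.
      liquefied = rng.normal([1.5, -0.8, 1.0], 0.5, (n, 3))
      unaffected = rng.normal([0.0, 0.0, 0.0], 0.5, (n, 3))
      X = np.vstack([liquefied, unaffected])
      y = np.array([1] * n + [0] * n)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      clf = SVC(kernel="rbf").fit(X_tr, y_tr)             # the SVM classifier
      print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")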

  1. The Effect of Fiber Strength Stochastics and Local Fiber Volume Fraction on Multiscale Progressive Failure of Composites

    NASA Technical Reports Server (NTRS)

    Ricks, Trenton M.; Lacy, Jr., Thomas E.; Bednarcyk, Brett A.; Arnold, Steven M.

    2013-01-01

    Continuous fiber unidirectional polymer matrix composites (PMCs) can exhibit significant local variations in fiber volume fraction as a result of processing conditions, which can lead to further local differences in material properties and failure behavior. In this work, the coupled effects of both local variations in fiber volume fraction and the empirically-based statistical distribution of fiber strengths on the predicted longitudinal modulus and local tensile strength of a unidirectional AS4 carbon fiber/Hercules 3502 epoxy composite were investigated using the special purpose NASA Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC); local effective composite properties were obtained by homogenizing the material behavior over repeating unit cells (RUCs). The predicted effective longitudinal modulus was relatively insensitive to small (8%) variations in local fiber volume fraction. The composite tensile strength, however, was highly dependent on the local distribution in fiber strengths. The RUC-averaged constitutive response can be used to characterize lower length scale material behavior within a multiscale analysis framework that couples the NASA code FEAMAC and the ABAQUS finite element solver. Such an approach can be effectively used to analyze the progressive failure of PMC structures whose failure initiates at the RUC level. Consideration of the effect of local variations in constituent properties and morphologies on the progressive failure of PMCs is a central aspect of the application of Integrated Computational Materials Engineering (ICME) principles for composite materials.
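
    The fiber-strength stochastics can be sketched with a Weibull draw per fiber and a weakest-link failure proxy over a repeating unit cell, as below. The Weibull modulus, scale, and the weakest-link simplification are illustrative assumptions; the MAC/GMC analyses redistribute load progressively rather than failing the cell at the first fiber break.

      import numpy as np

      rng = np.random.default_rng(1)
      shape, scale = 8.0, 4.5                           # Weibull modulus and scale (GPa), assumed
      n_fibers, n_rucs = 64, 10_000

      strengths = scale * rng.weibull(shape, (n_rucs, n_fibers))
      # Simplest failure proxy: the unit cell fails at its weakest fiber.
      ruc_strength = strengths.min(axis=1)
      print(f"mean RUC strength {ruc_strength.mean():.2f} GPa, "
            f"CoV {ruc_strength.std() / ruc_strength.mean():.1%}")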

  2. Skin-Stiffener Debond Prediction Based on Computational Fracture Analysis

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Minguet, Pierre J.; Gates, Tom (Technical Monitor)

    2005-01-01

    Interlaminar fracture mechanics has proven useful for characterizing the onset of delaminations in composites and has been used with limited success, primarily to investigate onset in fracture toughness specimens and laboratory-size coupon-type specimens. Future acceptance of the methodology by industry and certification authorities, however, requires successful demonstration of the methodology at the structural level. For this purpose, a panel reinforced with stringers was selected. Shear loading causes the panel to buckle, and the resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. For finite element analysis, the panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot and the panel in the vicinity of the embedded defect were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with the mixed-mode failure criterion of the graphite/epoxy material. For small applied loads the failure index is well below one across the entire width. With increasing load, the failure index approaches one first near the edge of the stringer foot, from which delamination is expected to grow. With increasing delamination lengths, the buckling pattern of the panel changes and the failure index increases, which suggests that rapid delamination growth from the initial defect is to be expected.
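
    The failure-index computation can be illustrated compactly: the virtual crack closure technique yields mode I and II energy release rates, and a mixed-mode criterion turns them into a scalar that signals delamination growth at one. The power-law form and the toughness values below are generic placeholders; the paper uses the measured criterion for its graphite/epoxy material.

      # Power-law mixed-mode criterion; delamination grows when the index reaches 1.
      def failure_index(G_I, G_II, G_Ic=0.21, G_IIc=0.77, alpha=1.0):
          return (G_I / G_Ic) ** alpha + (G_II / G_IIc) ** alpha

      # Energy release rates (kJ/m^2) across the stringer-foot width, hypothetical:
      for G_I, G_II in [(0.02, 0.10), (0.08, 0.30), (0.15, 0.45)]:
          print(f"G_I={G_I:.2f}, G_II={G_II:.2f} -> index={failure_index(G_I, G_II):.2f}")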

  3. Forensic analysis of rockfall scars

    NASA Astrophysics Data System (ADS)

    de Vilder, Saskia J.; Rosser, Nick J.; Brain, Matthew J.

    2017-10-01

    We characterise and analyse the detachment (scar) surfaces of rockfalls to understand the mechanisms that underpin their failure. Rockfall scars are variously weathered and composed of both discontinuity release surfaces and surfaces indicative of fracturing through zones of previously intact rock, known as rock bridges. The presence of rock bridges and pre-existing discontinuities is challenging to quantify due to the difficulty in determining discontinuity persistence below the surface of a rock slope. Rock bridges form an important control in holding blocks onto rock slopes, with their frequency, extent and location commonly modelled from the surface exposure of daylighting discontinuities. We explore an alternative approach to assessing their role by characterising failure scars. We analyse a database of multiple rockfall scar surfaces detailing the areal extent, shape, and location of broken rock bridges and weathered surfaces. Terrestrial laser scanning and gigapixel imagery were combined to record the detailed texture and surface morphology. From this, scar surfaces were mapped via automated classification based on RGB pixel values. Our analysis of the resulting data from scars on the North Yorkshire coast (UK) indicates a wide variation in both weathering and rock bridge properties, controlled by lithology and associated rock mass structure. Importantly, the proportion of rock bridges in a rockfall failure surface does not increase with failure size. Rather, larger failures display fracturing through multiple rock bridges, whereas in smaller failures fracture occurs only through a single critical rock bridge. This holds implications for how failure mechanisms change with rockfall size and shape. Additionally, the location of rock bridges with respect to the geometry of an incipient rockfall is shown to determine the failure mode. Weathering can occur both along discontinuity surfaces and on previously broken rock bridges, indicating the sequential stages of a progressively detaching rockfall. Our findings have wider implications for hazard assessment where rock slope stability is dependent on the nature of rock bridges, for how this is accounted for in slope stability modelling, and for the role of rock bridges in long-term rock slope evolution.

  4. Reconstruction of multistage massive rock slope failure: Polymethodical approach in Lake Oeschinen (CH)

    NASA Astrophysics Data System (ADS)

    Knapp, Sibylle; Gilli, Adrian; Anselmetti, Flavio S.; Hajdas, Irka

    2016-04-01

    Lateglacial and Holocene rock-slope failures often occur as multistage failures, in which paraglacial adjustment and stress adaptation are hypothesised to control the stages of detachment. However, we have only limited datasets from which to reconstruct the detailed stages of large multistage rock-slope failures, and we still aim at improving our models in terms of geohazard assessment. Here we use lake sediments, well established for paleoclimate and paleoseismological reconstruction, with a focus on the reconstruction of rock-slope failures. We present a unique inventory from Lake Oeschinen (Bernese Alps, Switzerland) covering about 2.4 kyr of rock-slope failure history. The lake sediments have been analysed using sediment-core analysis, radiocarbon dating and seismic-to-core and core-to-core correlations, and these were linked to historical and meteorological records. The results imply that the lake is significantly younger than the ~9 kyr old Kandersteg rock avalanche (Tinner et al., 2005) and record multiple rock-slope failures, two of which could be 14C-dated. Several events detached from the same area, potentially initiated by prehistoric earthquakes (Monecke et al., 2006) and later by stress-relaxation processes. The data imply unexpectedly short recurrence intervals that can be related to specific detachment scarps and also help to explain the generation of a historical lake-outburst flood. Here we show how polymethodical analysis of lake sediments can help to decipher massive multistage rock-slope failure. References: Monecke, K., Anselmetti, F.S., Becker, A., Schnellmann, M., Sturm, M., Giardini, D., 2006. Earthquake-induced deformation structures in lake deposits: A Late Pleistocene to Holocene paleoseismic record for Central Switzerland. Eclogae Geologicae Helvetiae, 99(3), 343-362. Tinner, W., Kaltenrieder, P., Soom, M., Zwahlen, P., Schmidhalter, M., Boschetti, A., Schlüchter, C., 2005. Der nacheiszeitliche Bergsturz im Kandertal (Schweiz): Alter und Auswirkungen auf die damalige Umwelt. Eclogae Geologicae Helvetiae, 98(1), 83-95.

  5. Application of Failure Mode and Effects Analysis to Intraoperative Radiation Therapy Using Mobile Electron Linear Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciocca, Mario, E-mail: mario.ciocca@cnao.it; Cantone, Marie-Claire; Veronese, Ivan

    2012-02-01

    Purpose: Failure mode and effects analysis (FMEA) represents a prospective approach for risk assessment. A multidisciplinary working group of the Italian Association for Medical Physics applied FMEA to electron beam intraoperative radiation therapy (IORT) delivered using mobile linear accelerators, aiming at preventing accidental exposures to the patient. Methods and Materials: FMEA was applied to the IORT process, for the stages of treatment delivery and verification, and consisted of three steps: 1) identification of the involved subprocesses; 2) identification and ranking of the potential failure modes, together with their causes and effects, using the risk probability number (RPN) scoring system, based on the product of three parameters (severity, frequency of occurrence and detectability, each ranging from 1 to 10); 3) identification of additional safety measures to be proposed for process quality and safety improvement. The RPN upper threshold for little concern of risk was set at 125. Results: Twenty-four subprocesses were identified. Ten potential failure modes were found and scored, in terms of RPN, in the range of 42-216. The most critical failure modes consisted of internal shield misalignment, wrong Monitor Unit calculation and incorrect data entry at the treatment console. Potential causes of failure included shield displacement; human errors, such as underestimation of CTV extension, mainly because of lack of adequate training and time pressures; failure in the communication between operators; and machine malfunctioning. The main effects of failure were CTV underdose, wrong dose distribution and/or delivery, and unintended normal tissue irradiation. As additional safety measures, the utilization of dedicated staff for IORT, double-checking of MU calculation and data entry, and implementation of in vivo dosimetry were suggested. Conclusions: FMEA appeared to be a useful tool for prospective evaluation of patient safety in radiotherapy. The application of this method to IORT led to the identification of three safety measures for risk mitigation.
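
    The RPN arithmetic described above is direct: RPN = severity x occurrence x detectability, compared against the 125 threshold. The sketch below uses the three most critical failure modes named in the abstract, with example scores of our own choosing rather than the working group's actual ratings.

      failure_modes = [
          ("internal shield misalignment", 9, 4, 6),
          ("wrong Monitor Unit calculation", 8, 3, 9),
          ("incorrect data entry at console", 7, 4, 8),
      ]
      RPN_THRESHOLD = 125                                # upper limit for little concern of risk
      for name, severity, occurrence, detectability in failure_modes:
          rpn = severity * occurrence * detectability
          flag = "needs mitigation" if rpn > RPN_THRESHOLD else "acceptable"
          print(f"{name:35s} RPN={rpn:4d}  {flag}")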

  6. Association between vitamin D deficiency and heart failure risk in the elderly.

    PubMed

    Porto, Catarina Magalhães; Silva, Vanessa De Lima; da Luz, João Soares Brito; Filho, Brivaldo Markman; da Silveira, Vera Magalhães

    2018-02-01

    The aim of this study was to evaluate the association between vitamin D deficiency and risk of heart failure in elderly patients of cardiology outpatient clinics. A cross-sectional study with an analytical approach was employed. Clinical data were collected from the elderly from August 2015 to February 2016. The dependent variable was the risk of heart failure; the independent variable was vitamin D deficiency; and intervening factors were age, gender, education, ethnicity, hypertension, diabetes mellitus, hypothyroidism, renal failure, dementia, stroke, dyslipidaemia, depression, smoking, alcoholism, obesity, andropause, and cardiac arrhythmia. To analyse the association between vitamin D deficiency and risk of heart failure, we used bivariate logistic analysis, followed by a multivariate logistic regression model. Of the 137 elderly, the study found the following: women (75.9%); overweight (48.2%); obese (30.6%); increased waist/hip index (88.3%); dyslipidaemia (94.2%) and hypertension (91.2%); coronary artery disease (35.0%); and 27.7% with cardiac arrhythmia or left ventricular hypertrophy. Sixty-five per cent of the elderly were deficient in vitamin D. The risk of heart failure was significantly associated with vitamin D deficiency [odds ratio (OR): 12.19; 95% confidence interval (CI) = 4.23-35.16; P = 0.000], male gender (OR: 15.32; 95% CI = 3.39-69.20, P = 0.000), obesity (OR: 4.17; 95% CI = 1.36-12.81; P = 0.012), and cardiac arrhythmia (OR: 3.69; 95% CI = 1.23-11.11; P = 0.020). There was a high prevalence of vitamin D deficiency in the elderly, and the evidence shows a strong association between vitamin D deficiency and increased risk of heart failure in this population. © 2017 The Authors. ESC Heart Failure published by John Wiley & Sons Ltd on behalf of the European Society of Cardiology.
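
    The reported odds ratios come from exponentiating logistic regression coefficients. A minimal sketch of that step is given below on simulated data; the prevalences and effect sizes are rough stand-ins for illustration, not the study's data.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n = 137
      vit_d_deficient = rng.binomial(1, 0.65, n)
      obesity = rng.binomial(1, 0.31, n)
      logit = -1.0 + 2.5 * vit_d_deficient + 1.4 * obesity   # assumed true effects
      hf_risk = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      X = sm.add_constant(np.column_stack([vit_d_deficient, obesity]))
      fit = sm.Logit(hf_risk, X).fit(disp=False)
      odds_ratios = np.exp(fit.params[1:])               # exp(coefficient) = odds ratio
      print(f"OR(vitamin D deficiency)={odds_ratios[0]:.2f}, OR(obesity)={odds_ratios[1]:.2f}")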

  7. How Do Tissues Respond and Adapt to Stresses Around a Prosthesis? A Primer on Finite Element Stress Analysis for Orthopaedic Surgeons

    PubMed Central

    Brand, Richard A; Stanford, Clark M; Swan, Colby C

    2003-01-01

    Joint implant design clearly affects long-term outcome. While many implant designs have been empirically based, finite element analysis has the potential to identify beneficial and deleterious features prior to clinical trials. Finite element analysis is a powerful analytic tool allowing computation of the stress and strain distribution throughout an implant construct. Whether it is useful depends upon many assumptions and details of the model. Chief among these is whether stresses or strains computed under a limited set of loading conditions relate to outcome, since ultimate failure is related to biological factors in addition to mechanical ones, and since the mechanical causes of failure are related to load history rather than to a few loading conditions. Newer approaches can minimize this and the many other model limitations. If the surgeon is to critically and properly interpret the results in scientific articles and sales literature, he or she must have a fundamental understanding of finite element analysis. We outline here the major capabilities of finite element analysis, as well as its assumptions and limitations. PMID:14575244

  8. Proactive Approaches to Improving Outcomes for At-Risk Students.

    ERIC Educational Resources Information Center

    Freeman, G.; Gum, M.; Blackbourn, J. M.

    This paper outlines two approaches for improving outcomes for students at risk for academic failure. Both take a systemic approach to the problem by focusing on how specific circumstances create a reality of failure for many students. One school analyzed factors related to retention/promotion decisions and determined that four factors directly…

  9. Association between Vancomycin Day 1 Exposure Profile and Outcomes among Patients with Methicillin-Resistant Staphylococcus aureus Infective Endocarditis

    PubMed Central

    Casapao, Anthony M.; Lodise, Thomas P.; Davis, Susan L.; Claeys, Kimberly C.; Kullar, Ravina; Levine, Donald P.

    2015-01-01

    Given the critical importance of early appropriate therapy, a retrospective cohort study (2002 to 2013) was performed at the Detroit Medical Center to evaluate the association between the day 1 vancomycin exposure profile and outcomes among patients with MRSA infective endocarditis (IE). The day 1 vancomycin area under the concentration-time curve (AUC0-24) and the minimum concentration at 24 h (Cmin24) were estimated for each patient using the Bayesian procedure in ADAPT 5, an approach shown to accurately predict the vancomycin exposure with low bias and high precision with limited pharmacokinetic sampling. Initial MRSA isolates were collected and vancomycin MIC was determined by broth microdilution (BMD) and Etest. The primary outcome was failure, defined as persistent bacteremia (≥7 days) or 30-day attributable mortality. Classification and regression tree analysis (CART) was used to determine the vancomycin exposure variables associated with an increased probability of failure. In total, 139 patients met study criteria; 76.3% had right-sided IE, 16.5% had left-sided IE, and 7.2% had both left and right-sided IE. A total of 89/139 (64%) experienced failure by the composite definition. In the CART analysis, failure was more pronounced in patients with an AUC0-24/MIC as determined by BMD of ≤600 relative to those with an AUC0-24/MIC as determined by BMD of >600 (69.8% versus 54.7%, respectively, P = 0.073). In the logistic regression analysis, an AUC/MIC as determined by BMD of ≤600 (adjusted odds ratio, 2.3; 95% confidence interval, 1.01 to 5.37; P = 0.047) was independently associated with failure. Given the retrospective nature of the present study, further prospective studies are required, but these data suggest that patients with an AUC0-24/MIC as determined by BMD of ≤600 present an increased risk of failure. PMID:25753631
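
    The CART step amounts to finding the exposure split that best separates failures; a depth-1 decision tree does exactly that, as sketched below. The simulated exposures and the failure rates on either side of 600 are taken loosely from the abstract, so the recovered breakpoint is approximate.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(11)
      auc_mic = rng.uniform(200, 1200, 139)[:, None]     # simulated AUC/MIC exposures
      p_fail = np.where(auc_mic.ravel() <= 600, 0.70, 0.55)
      failure = rng.binomial(1, p_fail)                  # 1 = composite failure

      tree = DecisionTreeClassifier(max_depth=1).fit(auc_mic, failure)
      print(f"CART-selected AUC/MIC breakpoint: {tree.tree_.threshold[0]:.0f}")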

  10. High Speed Dynamics in Brittle Materials

    NASA Astrophysics Data System (ADS)

    Hiermaier, Stefan

    2015-06-01

    Brittle materials under high-speed and shock loading provide a continuing challenge in experimental physics, analysis and numerical modelling, and consequently for engineering design. The dependence of damage and fracture processes on material-inherent length and time scales, the influence of defects, rate-dependent material properties and inertia effects on different scales make their understanding a true multi-scale problem. In addition, it is not uncommon for materials to show a transition from ductile to brittle behavior when the loading rate is increased. A particular case is spallation, a brittle tensile failure induced by the interaction of stress waves leading to a sudden change from compressive to tensile loading states, which can be invoked in various materials. This contribution highlights typical phenomena occurring when brittle materials are exposed to high loading rates in applications such as blast and impact on protective structures, or meteorite impact on geological materials. A short review of experimental methods used for the dynamic characterization of brittle materials is given. A close interaction of experimental analysis and numerical simulation has turned out to be very helpful in analyzing experimental results, and adequate numerical methods are required for this purpose. Cohesive zone models are one possible method for the analysis of brittle failure as long as some degree of tension is present. Their recent successful application in meso-mechanical simulations of concrete in Hopkinson-type spallation tests provides new insight into the dynamic failure process. Failure under compressive loading is a particular challenge for numerical simulations, as it involves crushing of material which in turn influences stress states in other parts of a structure. On a continuum scale, it can be modeled using more or less complex plasticity models combined with failure surfaces, as will be demonstrated for ceramics. Models which take microstructural cracking directly into account may provide a more physics-based approach for compressive failure in the future.

  11. Model selection criterion in survival analysis

    NASA Astrophysics Data System (ADS)

    Karabey, Uǧur; Tutkun, Nihal Ata

    2017-07-01

    Survival analysis deals with the time until occurrence of an event of interest, such as death, recurrence of an illness, the failure of equipment, or divorce. There are various survival models with semi-parametric or parametric approaches used in the medical, natural and social sciences. The decision on the most appropriate model for the data is an important point of the analysis. In the literature, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) are used to select among nested models. In this study, the behavior of these information criteria is discussed for a real data set.
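
    A minimal model-selection sketch in this spirit is shown below: two candidate lifetime distributions are fitted by maximum likelihood and compared via AIC and BIC. The simulated failure times are illustrative, and censoring, which a full survival analysis must handle, is ignored for brevity.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      times = stats.weibull_min.rvs(1.8, scale=12.0, size=200, random_state=rng)

      # (distribution, params fitted with location pinned at 0, number of free params)
      candidates = {
          "exponential": (stats.expon, stats.expon.fit(times, floc=0), 1),
          "weibull": (stats.weibull_min, stats.weibull_min.fit(times, floc=0), 2),
      }
      n = len(times)
      for name, (dist, params, k) in candidates.items():
          loglik = np.sum(dist.logpdf(times, *params))
          aic = 2 * k - 2 * loglik                       # lower is better
          bic = k * np.log(n) - 2 * loglik
          print(f"{name:12s} AIC={aic:8.1f}  BIC={bic:8.1f}")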

  12. A combined field/remote sensing approach for characterizing landslide risk in coastal areas

    NASA Astrophysics Data System (ADS)

    Francioni, Mirko; Coggan, John; Eyre, Matthew; Stead, Doug

    2018-05-01

    Understanding the key factors controlling slope failure mechanisms in coastal areas is the first and most important step for analyzing, reconstructing and predicting the scale, location and extent of future instability in rocky coastlines. Different failure mechanisms may be possible depending on the influence of the engineering properties of the rock mass (including the fracture network), the persistence and type of discontinuity, and the relative aspect or orientation of the coastline. Using a section of the North Coast of Cornwall, UK, as an example, we present a multi-disciplinary approach for characterizing landslide risk associated with coastal instabilities in a blocky rock mass. Remotely captured terrestrial and aerial LiDAR and photogrammetric data were interrogated using Geographic Information System (GIS) techniques to provide a framework for subsequent analysis, interpretation and validation. The remote sensing mapping data were used to define the rock mass discontinuity network of the area and to differentiate between major and minor geological structures controlling the evolution of the North Coast of Cornwall. Kinematic instability maps generated from aerial LiDAR data using GIS techniques, together with results from structural and engineering geological surveys, are presented. With this method, it was possible to highlight the types of kinematic failure mechanism that may generate coastal landslides and the areas that are more susceptible to instability or at increased risk of future instability. Multi-temporal aerial LiDAR data and orthophotos were also studied using GIS techniques to locate recent landslide failures, validate the results obtained from the kinematic instability maps through site observations, and provide improved understanding of the factors controlling the coastal geomorphology. The approach adopted is useful not only for academic research but also for local authorities and consultancies when assessing the likely risks of coastal instability.

  13. Improving kNowledge Transfer to Efficaciously RAise the level of Contemporary Treatment in Heart Failure (INTERACT-in-HF): Study protocol of a mixed methods study.

    PubMed

    Baldewijns, Karolien; Bektas, Sema; Boyne, Josiane; Rohde, Carla; De Maesschalck, Lieven; De Bleser, Leentje; Brandenburg, Vincent; Knackstedt, Christian; Devillé, Aleidis; Sanders-Van Wijk, Sandra; Brunner La Rocca, Hans-Peter

    2017-12-01

    Heart failure is a complex disease with poor outcome. This complexity may prevent care providers from covering all aspects of care, which may be relevant not only for individual patient care but also for care organisation. Disease management programmes applying a multidisciplinary approach are recommended to improve heart failure care. However, there is a scarcity of research considering how disease management programmes perform, in what form they should be offered, and from what care and support patients and care providers would benefit most. Therefore, the Improving kNowledge Transfer to Efficaciously Raise the level of Contemporary Treatment in Heart Failure (INTERACT-in-HF) study aims to explore the current processes of heart failure care and to identify factors that may facilitate and factors that may hamper heart failure care and guideline adherence. Within a cross-sectional mixed-methods design in three regions of the north-west part of Europe, patients (n = 88) and their care providers (n = 59) were interviewed. Prior to the in-depth interviews, patients were asked to complete three questionnaires: the Dutch Heart Failure Knowledge Scale, the European Heart Failure Self-care Behaviour Scale, and a questionnaire on global health status and socio-economic status. In parallel, retrospective data based on records from these (n = 88) and additional patients (n = 82) are reviewed. All interviews were audiotaped and transcribed verbatim for analysis.

  14. Improving kNowledge Transfer to Efficaciously RAise the level of Contemporary Treatment in Heart Failure (INTERACT-in-HF): Study protocol of a mixed methods study

    PubMed Central

    Boyne, Josiane; Rohde, Carla; De Maesschalck, Lieven; De Bleser, Leentje; Brandenburg, Vincent; Knackstedt, Christian; Devillé, Aleidis; Sanders-Van Wijk, Sandra; Brunner La Rocca, Hans-Peter

    2017-01-01

    Heart failure is a complex disease with poor outcome. This complexity may prevent care providers from covering all aspects of care, which may be relevant not only for individual patient care but also for care organisation. Disease management programmes applying a multidisciplinary approach are recommended to improve heart failure care. However, there is a scarcity of research considering how disease management programmes perform, in what form they should be offered, and from what care and support patients and care providers would benefit most. Therefore, the Improving kNowledge Transfer to Efficaciously Raise the level of Contemporary Treatment in Heart Failure (INTERACT-in-HF) study aims to explore the current processes of heart failure care and to identify factors that may facilitate and factors that may hamper heart failure care and guideline adherence. Within a cross-sectional mixed-methods design in three regions of the north-west part of Europe, patients (n = 88) and their care providers (n = 59) were interviewed. Prior to the in-depth interviews, patients were asked to complete three questionnaires: the Dutch Heart Failure Knowledge Scale, the European Heart Failure Self-care Behaviour Scale, and a questionnaire on global health status and socio-economic status. In parallel, retrospective data based on records from these (n = 88) and additional patients (n = 82) are reviewed. All interviews were audiotaped and transcribed verbatim for analysis. PMID:29472989

  15. Cycles till failure of silver-zinc cells with competing failure modes - Preliminary data analysis

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.; Leibecki, H. F.; Bozek, J. M.

    1980-01-01

    A data analysis of cycles to failure of silver-zinc electrochemical cells with competing failure modes is presented. The test ran 129 cells through charge-discharge cycles until failure; the preliminary data analysis consisted of a response-surface estimate of life. The cells fail through a low-voltage condition or an internal shorting condition; a competing-failure-modes analysis was performed using maximum likelihood estimation for the extreme value life distribution. Extensive residual plotting and probability plotting were used to verify data quality and the selection of the model.
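
    The competing-failure-modes setup can be sketched as two latent extreme-value lifetimes per cell, with the observed life being their minimum and the observed mode being whichever came first. The smallest-extreme-value parameters below are illustrative, not the fitted values from the 129-cell test.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      n_cells = 129
      # Latent cycles-to-failure per mode: smallest-extreme-value on a log-cycles scale.
      low_voltage = np.exp(stats.gumbel_l.rvs(loc=6.0, scale=0.3, size=n_cells, random_state=rng))
      shorting = np.exp(stats.gumbel_l.rvs(loc=6.2, scale=0.5, size=n_cells, random_state=rng))

      cycles = np.minimum(low_voltage, shorting)         # observed life = first failure
      mode = np.where(low_voltage < shorting, "low-voltage", "short")
      print(f"median cycles to failure: {np.median(cycles):.0f}")
      print(f"failures by mode: {dict(zip(*np.unique(mode, return_counts=True)))}")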

  16. Probability of Failure Analysis Standards and Guidelines for Expendable Launch Vehicles

    NASA Astrophysics Data System (ADS)

    Wilde, Paul D.; Morse, Elisabeth L.; Rosati, Paul; Cather, Corey

    2013-09-01

    Recognizing the central importance of probability of failure estimates to ensuring public safety for launches, the Federal Aviation Administration (FAA), Office of Commercial Space Transportation (AST), the National Aeronautics and Space Administration (NASA), and the U.S. Air Force (USAF), through the Common Standards Working Group (CSWG), developed a guide for conducting valid probability of failure (POF) analyses for expendable launch vehicles (ELVs), with an emphasis on POF analysis for new ELVs. A probability of failure analysis for an ELV produces estimates of the likelihood of occurrence of potentially hazardous events, which are critical inputs to launch risk analysis of debris, toxic, or explosive hazards. This guide is intended to document a framework for POF analyses commonly accepted in the US, and should be useful to anyone who performs or evaluates launch risk analyses for new ELVs. The CSWG guidelines provide performance standards and definitions of key terms, and are being revised to address allocation to flight times and vehicle response modes. The POF performance standard allows a launch operator to employ alternative, potentially innovative methodologies so long as the results satisfy the performance standard. Current POF analysis practice at US ranges includes multiple methodologies described in the guidelines as accepted methods, but not necessarily the only methods available to demonstrate compliance with the performance standard. The guidelines include illustrative examples for each POF analysis method, which are intended to illustrate an acceptable level of fidelity for ELV POF analyses used to ensure public safety. The focus is on providing guiding principles rather than "recipe lists." Independent reviews of these guidelines were performed to assess their logic, completeness, accuracy, self-consistency, consistency with risk analysis practices, use of available information, and ease of applicability. The independent reviews confirmed the general validity of the performance standard approach and suggested potential updates to improve the accuracy of each of the example methods, especially to address reliability growth.

  17. Integrated Safety Risk Reduction Approach to Enhancing Human-Rated Spaceflight Safety

    NASA Astrophysics Data System (ADS)

    Mikula, J. F. Kip

    2005-12-01

    This paper explores and defines the currently accepted concept and philosophy of safety improvement based on reliability enhancement (called here Reliability Enhancement Based Safety Theory [REBST]). In this theory, a reliability calculation is used as a measure of the safety achieved on the program. This calculation may be based on a math model or a Fault Tree Analysis (FTA) of the system, or on an Event Tree Analysis (ETA) of the system's operational mission sequence. In each case, the numbers used in this calculation are hardware failure rates gleaned from past similar programs. As part of this paper, a fictional but representative case study is provided that helps to illustrate the problems and inaccuracies of this approach to safety determination. Then a safety determination and enhancement approach based on hazard analysis, worst-case analysis, and safety risk determination (called here Worst Case Based Safety Theory [WCBST]) is included; this approach is defined and detailed using the same example case study as the REBST case study. In the end, it is concluded that an approach combining the two theories works best to reduce safety risk.

  18. RBFox1-mediated RNA splicing regulates cardiac hypertrophy and heart failure.

    PubMed

    Gao, Chen; Ren, Shuxun; Lee, Jae-Hyung; Qiu, Jinsong; Chapski, Douglas J; Rau, Christoph D; Zhou, Yu; Abdellatif, Maha; Nakano, Astushi; Vondriska, Thomas M; Xiao, Xinshu; Fu, Xiang-Dong; Chen, Jau-Nian; Wang, Yibin

    2016-01-01

    RNA splicing is a major contributor to total transcriptome complexity; however, the functional role and regulation of splicing in heart failure remain poorly understood. Here, we used a total transcriptome profiling and bioinformatic analysis approach and identified a muscle-specific isoform of an RNA splicing regulator, RBFox1 (also known as A2BP1), as a prominent regulator of alternative RNA splicing during heart failure. Evaluation of developing murine and zebrafish hearts revealed that RBFox1 is induced during postnatal cardiac maturation. However, we found that RBFox1 is markedly diminished in failing human and mouse hearts. In a mouse model, RBFox1 deficiency in the heart promoted pressure overload-induced heart failure. We determined that RBFox1 is a potent regulator of RNA splicing and is required for a conserved splicing process of transcription factor MEF2 family members that yields different MEF2 isoforms with differential effects on cardiac hypertrophic gene expression. Finally, induction of RBFox1 expression in murine pressure overload models substantially attenuated cardiac hypertrophy and pathological manifestations. Together, this study identifies regulation of RNA splicing by RBFox1 as an important player in transcriptome reprogramming during heart failure that influences the pathogenesis of the disease.

  19. RBFox1-mediated RNA splicing regulates cardiac hypertrophy and heart failure

    PubMed Central

    Gao, Chen; Ren, Shuxun; Lee, Jae-Hyung; Qiu, Jinsong; Chapski, Douglas J.; Rau, Christoph D.; Zhou, Yu; Abdellatif, Maha; Nakano, Astushi; Vondriska, Thomas M.; Xiao, Xinshu; Fu, Xiang-Dong; Chen, Jau-Nian; Wang, Yibin

    2015-01-01

    RNA splicing is a major contributor to total transcriptome complexity; however, the functional role and regulation of splicing in heart failure remain poorly understood. Here, we used a total transcriptome profiling and bioinformatic analysis approach and identified a muscle-specific isoform of an RNA splicing regulator, RBFox1 (also known as A2BP1), as a prominent regulator of alternative RNA splicing during heart failure. Evaluation of developing murine and zebrafish hearts revealed that RBFox1 is induced during postnatal cardiac maturation. However, we found that RBFox1 is markedly diminished in failing human and mouse hearts. In a mouse model, RBFox1 deficiency in the heart promoted pressure overload–induced heart failure. We determined that RBFox1 is a potent regulator of RNA splicing and is required for a conserved splicing process of transcription factor MEF2 family members that yields different MEF2 isoforms with differential effects on cardiac hypertrophic gene expression. Finally, induction of RBFox1 expression in murine pressure overload models substantially attenuated cardiac hypertrophy and pathological manifestations. Together, this study identifies regulation of RNA splicing by RBFox1 as an important player in transcriptome reprogramming during heart failure that influences the pathogenesis of the disease. PMID:26619120

  20. Robustness analysis of complex networks with power decentralization strategy via flow-sensitive centrality against cascading failures

    NASA Astrophysics Data System (ADS)

    Guo, Wenzhang; Wang, Hao; Wu, Zhengping

    2018-03-01

    Most existing cascading-failure mitigation strategies for power grids based on complex network theory ignore the impact of electrical characteristics on dynamic performance. In this paper, the robustness of the power grid under a power decentralization strategy is analysed through cascading failure simulation based on AC flow theory. The flow-sensitive (FS) centrality is introduced by integrating topological features and electrical properties to help determine the siting of the generation nodes. The simulation results for the IEEE bus systems show that the flow-sensitive centrality method is a more stable and accurate approach and can enhance the robustness of the network remarkably. Through the study of the optimal flow-sensitive centrality selection for different networks, we find that the robustness of networks with a pronounced small-world effect depends more on the contribution of generation nodes identified by community structure, whereas otherwise the contribution of generation nodes with an important influence on power flow is more critical. In addition, community structure plays a significant role in balancing the power flow distribution and further slowing the propagation of failures. These results are useful in power grid planning and cascading failure prevention.
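
    A toy cascading-failure experiment conveys the mechanics: node capacity is set proportional to initial load, the most loaded node is removed, and overloads propagate until the network stabilizes. The sketch below uses purely topological betweenness as the load measure; the paper's flow-sensitive centrality additionally folds in AC power flow, and the tolerance margin here is an assumption.

      import networkx as nx

      G = nx.watts_strogatz_graph(100, 4, 0.1, seed=1)   # small-world test network
      alpha = 1.3                                        # capacity tolerance margin, assumed
      load = nx.betweenness_centrality(G)
      capacity = {v: alpha * load[v] for v in G}

      G.remove_node(max(load, key=load.get))             # attack the most loaded node
      changed = True
      while changed:
          changed = False
          load = nx.betweenness_centrality(G)            # loads redistribute after failures
          overloaded = [v for v in G if load[v] > capacity[v]]
          if overloaded:
              G.remove_nodes_from(overloaded)
              changed = True
      print(f"surviving nodes after cascade: {G.number_of_nodes()} / 100")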

  1. A novel approach on accelerated ageing towards reliability optimization of high concentration photovoltaic cells

    NASA Astrophysics Data System (ADS)

    Tsanakas, John A.; Jaffre, Damien; Sicre, Mathieu; Elouamari, Rachid; Vossier, Alexis; de Salins, Jean-Edouard; Bechou, Laurent; Levrier, Bruno; Perona, Arnaud; Dollet, Alain

    2014-09-01

    This paper presents a preliminary study of a novel approach proposed for highly accelerated ageing and reliability optimization of high concentrating photovoltaic (HCPV) cells and assemblies. The intended approach aims to overcome several limitations of current accelerated ageing tests (AATs) adopted to date, proposing the use of an alternative experimental set-up for performing faster and more realistic thermal cycles, under real sun, without the involvement of an environmental chamber. The study also includes specific characterization techniques, applied before and after each AAT sequence, which respectively provide the initial and final diagnosis of the condition of the tested sample. The acquired data from these diagnostic/characterization methods are then used as indices to determine, both quantitatively and qualitatively, the severity of degradation and thus the ageing level for each tested HCPV assembly or cell sample. The ultimate goal of such "initial diagnosis - AAT - final diagnosis" sequences is to provide the basis for future work on the reliability analysis of the main degradation mechanisms and confident prediction of failure propagation in HCPV cells, by means of acceleration factor (AF) and mean-time-to-failure (MTTF) estimations.
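
    A worked example of the AF/MTTF arithmetic closes the loop: under a standard Arrhenius model (our choice for illustration; thermal-cycling ageing is often treated with a Coffin-Manson model instead), the projected field lifetime is the accelerated-test lifetime multiplied by the acceleration factor. The activation energy, temperatures, and test MTTF below are hypothetical.

      import math

      Ea = 0.7                                           # activation energy (eV), assumed
      k = 8.617e-5                                       # Boltzmann constant (eV/K)
      T_use, T_stress = 298.15, 358.15                   # field vs. accelerated temperature (K)

      AF = math.exp((Ea / k) * (1 / T_use - 1 / T_stress))
      mttf_stress_hours = 2_000                          # hypothetical accelerated-test result
      print(f"AF = {AF:.1f}, projected field MTTF = {AF * mttf_stress_hours:,.0f} h")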

  2. 75 FR 34956 - Airworthiness Directives; Robert E. Rust, Jr. Model DeHavilland DH.C1 Chipmunk 21, DH.C1 Chipmunk...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-21

    ... This failure could result in an un-commanded retraction of the flaps and could lead to a stall during a landing approach. ...

  3. Cost-utility analysis of the EVOLVO study on remote monitoring for heart failure patients with implantable defibrillators: randomized controlled trial.

    PubMed

    Zanaboni, Paolo; Landolina, Maurizio; Marzegalli, Maurizio; Lunati, Maurizio; Perego, Giovanni B; Guenzati, Giuseppe; Curnis, Antonio; Valsecchi, Sergio; Borghetti, Francesca; Borghi, Gabriella; Masella, Cristina

    2013-05-30

    Heart failure patients with implantable defibrillators place a significant burden on health care systems. Remote monitoring allows assessment of device function and heart failure parameters, and may represent a safe, effective, and cost-saving method compared to conventional in-office follow-up. We hypothesized that remote device monitoring represents a cost-effective approach. This paper summarizes the economic evaluation of the Evolution of Management Strategies of Heart Failure Patients With Implantable Defibrillators (EVOLVO) study, a multicenter clinical trial aimed at measuring the benefits of remote monitoring for heart failure patients with implantable defibrillators. Two hundred patients implanted with a wireless transmission-enabled implantable defibrillator were randomized to receive either remote monitoring or the conventional method of in-person evaluations. Patients were followed for 16 months with a protocol of scheduled in-office and remote follow-ups. The economic evaluation of the intervention was conducted from the perspectives of the health care system and the patient. A cost-utility analysis was performed to measure whether the intervention was cost-effective in terms of cost per quality-adjusted life year (QALY) gained. Overall, remote monitoring did not show significant annual cost savings for the health care system (€1962.78 versus €2130.01; P=.80). There was a significant reduction of the annual cost for the patients in the remote arm in comparison to the standard arm (€291.36 versus €381.34; P=.01). Cost-utility analysis was performed for 180 patients for whom QALYs were available. The patients in the remote arm gained 0.065 QALYs more than those in the standard arm over 16 months, with a cost savings of €888.10 per patient. Results from the cost-utility analysis of the EVOLVO study show that remote monitoring is a cost-effective and dominant solution. Remote management of heart failure patients with implantable defibrillators appears to be cost-effective compared to the conventional method of in-person evaluations. ClinicalTrials.gov NCT00873899; http://clinicaltrials.gov/show/NCT00873899 (Archived by WebCite at http://www.webcitation.org/6H0BOA29f).
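
    The cost-utility conclusion follows directly from two figures reported above: remote monitoring both saved money and gained QALYs, so it dominates and no positive cost-per-QALY ratio needs to be quoted. The check is a few lines of arithmetic:

      delta_cost = -888.10        # EUR per patient over 16 months (negative = savings)
      delta_qaly = 0.065          # QALYs gained per patient

      if delta_cost < 0 and delta_qaly > 0:
          print("dominant strategy: cheaper and more effective")
      else:
          print(f"ICER = {delta_cost / delta_qaly:,.0f} EUR per QALY")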

  4. Accidental Water Pollution Risk Analysis of Mine Tailings Ponds in Guanting Reservoir Watershed, Zhangjiakou City, China.

    PubMed

    Liu, Renzhi; Liu, Jing; Zhang, Zhijiao; Borthwick, Alistair; Zhang, Ke

    2015-12-02

    Over the past half century, a surprising number of major pollution incidents have occurred due to tailings dam failures. Most previous studies of such incidents comprised forensic analyses of environmental impacts after a tailings dam failure, with few considering the combined pollution risk before incidents occur at a watershed scale. We therefore propose Watershed-scale Tailings-pond Pollution Risk Analysis (WTPRA), designed for multiple mine tailings ponds and stemming from previous watershed-scale accidental pollution risk assessments. Transferred and combined risk is embedded using risk rankings of multiple "source-pathway-target" routes in the WTPRA. The previous approach is extended using multi-criteria analysis, dam failure models, and instantaneous water quality models, which are adapted for application to multiple tailings ponds. The study area covers the basin of Guanting Reservoir (the largest backup drinking water source for Beijing) in Zhangjiakou City, where many mine tailings ponds are located. The resultant map shows that risk is higher downstream of Guanting Reservoir and in its two tributary basins (i.e., Qingshui River and Longyang River). Conversely, risk is lower in the midstream and upstream reaches. The analysis also indicates that the most hazardous mine tailings ponds are located in Chongli and Xuanhua, and that Guanting Reservoir is the most vulnerable receptor. Sensitivity and uncertainty analyses are performed to validate the robustness of the WTPRA method.

  5. Application of ICH Q9 Quality Risk Management Tools for Advanced Development of Hot Melt Coated Multiparticulate Systems.

    PubMed

    Stocker, Elena; Becker, Karin; Hate, Siddhi; Hohl, Roland; Schiemenz, Wolfgang; Sacher, Stephan; Zimmer, Andreas; Salar-Behzadi, Sharareh

    2017-01-01

This study aimed to apply quality risk management based on the International Conference on Harmonisation guideline Q9 to the early development stage of hot melt coated multiparticulate systems for oral administration. N-acetylcysteine crystals were coated with a formulation composed of tripalmitin and polysorbate 65. The critical quality attributes (CQAs) were initially prioritized using failure mode and effects analysis. The CQAs of the coated material were defined as particle size, taste-masking efficiency, and immediate release profile. The hot melt coating process was characterized via a flowchart, based on the identified potential critical process parameters (CPPs) and their impact on the CQAs. These CPPs were prioritized using a process failure mode, effects, and criticality analysis, and their critical impact on the CQAs was experimentally confirmed using a statistical design of experiments. Spray rate, atomization air pressure, and air flow rate were identified as CPPs. Coating amount and content of polysorbate 65 in the coating formulation were identified as critical material attributes. A hazard and critical control points analysis was applied to define control strategies at the critical process points. A fault tree analysis evaluated causes of potential process failures. We successfully demonstrated that a standardized quality risk management approach makes product development more sustainable and supports regulatory compliance. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
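
    For readers unfamiliar with FMEA prioritization, a minimal sketch of the risk priority number calculation follows; the failure modes and O/S/D scores are invented for illustration and are not the study's values, and 125 is used only as a commonly cited action threshold.

    ```python
    # Sketch of FMEA prioritization via the risk priority number,
    # RPN = occurrence (O) x severity (S) x detectability (D).
    # Failure modes and scores below are hypothetical examples.

    failure_modes = [
        ("spray rate drift",        {"O": 5, "S": 7, "D": 4}),
        ("atomization air too low", {"O": 3, "S": 8, "D": 5}),
        ("coating amount off-spec", {"O": 4, "S": 6, "D": 3}),
    ]

    ranked = sorted(
        ((name, s["O"] * s["S"] * s["D"]) for name, s in failure_modes),
        key=lambda x: x[1], reverse=True,
    )
    for name, rpn in ranked:
        flag = "  <- prioritize" if rpn >= 125 else ""  # illustrative threshold
        print(f"{name:25s} RPN={rpn}{flag}")
    ```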

  6. Estimation of failure criteria in multivariate sensory shelf life testing using survival analysis.

    PubMed

    Giménez, Ana; Gagliardi, Andrés; Ares, Gastón

    2017-09-01

For most food products, shelf life is determined by changes in their sensory characteristics. A predetermined increase or decrease in the intensity of a sensory characteristic has frequently been used to signal that a product has reached the end of its shelf life. Considering that all attributes change simultaneously, the concept of multivariate shelf life allows a single measurement of deterioration that takes into account all these sensory changes at a certain storage time. The aim of the present work was to apply survival analysis to estimate failure criteria in multivariate sensory shelf life testing using two case studies, hamburger buns and orange juice, by modelling the relationship between consumers' rejection of the product and a deterioration index estimated using PCA. In both studies, a panel of 13 trained assessors evaluated the samples using descriptive analysis, whereas a panel of 100 consumers answered a "yes" or "no" question regarding intention to buy or consume the product. PC1 explained the great majority of the variance, indicating that all sensory characteristics evolved similarly with storage time. Thus, PC1 could be regarded as an index of sensory deterioration, and a single failure criterion could be estimated through survival analysis for 25% and 50% consumer rejection. The proposed approach based on multivariate shelf life testing may increase the accuracy of shelf life estimations. Copyright © 2017 Elsevier Ltd. All rights reserved.
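
    A simplified sketch of the core idea: reduce the trained-panel attribute scores to a single PC1 deterioration index, then locate the index values at 25% and 50% consumer rejection. All data are invented, and plain interpolation stands in for the interval-censored survival analysis the paper actually uses.

    ```python
    import numpy as np

    # rows = storage times, columns = sensory attributes (trained-panel means)
    scores = np.array([
        [1.0, 0.8, 1.2, 0.9],
        [2.1, 1.9, 2.4, 2.0],
        [3.3, 3.0, 3.6, 3.1],
        [4.6, 4.2, 4.9, 4.4],
        [5.8, 5.5, 6.1, 5.6],
    ])
    centered = scores - scores.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pc1 = centered @ vt[0]              # PC1 score per storage time
    if pc1[-1] < pc1[0]:                # fix the arbitrary PCA sign so the
        pc1 = -pc1                      # index increases with storage time

    rejection = np.array([0.02, 0.10, 0.28, 0.55, 0.80])  # consumer "no" share

    # failure criteria: index values where rejection crosses 25% and 50%
    for target in (0.25, 0.50):
        crit = np.interp(target, rejection, pc1)
        print(f"{target:.0%} rejection -> deterioration index {crit:.2f}")
    ```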

  7. Accidental Water Pollution Risk Analysis of Mine Tailings Ponds in Guanting Reservoir Watershed, Zhangjiakou City, China

    PubMed Central

    Liu, Renzhi; Liu, Jing; Zhang, Zhijiao; Borthwick, Alistair; Zhang, Ke

    2015-01-01

    Over the past half century, a surprising number of major pollution incidents occurred due to tailings dam failures. Most previous studies of such incidents comprised forensic analyses of environmental impacts after a tailings dam failure, with few considering the combined pollution risk before incidents occur at a watershed-scale. We therefore propose Watershed-scale Tailings-pond Pollution Risk Analysis (WTPRA), designed for multiple mine tailings ponds, stemming from previous watershed-scale accidental pollution risk assessments. Transferred and combined risk is embedded using risk rankings of multiple routes of the “source-pathway-target” in the WTPRA. The previous approach is modified using multi-criteria analysis, dam failure models, and instantaneous water quality models, which are modified for application to multiple tailings ponds. The study area covers the basin of Gutanting Reservoir (the largest backup drinking water source for Beijing) in Zhangjiakou City, where many mine tailings ponds are located. The resultant map shows that risk is higher downstream of Gutanting Reservoir and in its two tributary basins (i.e., Qingshui River and Longyang River). Conversely, risk is lower in the midstream and upstream reaches. The analysis also indicates that the most hazardous mine tailings ponds are located in Chongli and Xuanhua, and that Guanting Reservoir is the most vulnerable receptor. Sensitivity and uncertainty analyses are performed to validate the robustness of the WTPRA method. PMID:26633450

  8. A New Approach to Fibrous Composite Laminate Strength Prediction

    NASA Technical Reports Server (NTRS)

    Hart-Smith, L. J.

    1990-01-01

A method of predicting the strength of cross-plied fibrous composite laminates is based on expressing the classical maximum-shear-stress failure criterion for ductile metals in terms of strains. Starting with such a formulation for classical isotropic materials, the derivation is extended to orthotropic materials having a longitudinal axis of symmetry, to represent the fibers in a unidirectional composite lamina. The only modification needed to represent those same fibers with properties normalized to the lamina rather than the fiber is a change in axial modulus. A mirror image is added to the strain-based lamina failure criterion for fiber-dominated failures to reflect the cutoffs due to the presence of orthogonal fibers. The combined failure envelope is then identical to the well-known maximum-strain failure model in the tension-tension and compression-compression quadrants but is truncated in the shear quadrants. The successive application of this simple failure model to fibers in the 0/90 degree and +/- 45 degree orientations, in turn, is shown to be a necessary and sufficient characterization of the fiber-dominated failures of laminates made from fibers having the same tensile and compressive strengths. When one such strength is greater than the other, the failure envelope is appropriately truncated for the lesser direct strain. The shear-failure cutoffs are then based on the higher axial strain to failure, since they occur at lower strains than, and are usually not affected by, such mechanisms as microbuckling. Premature matrix failures can also be covered by appropriately truncating the fiber failure envelope. Matrix failures are excluded from consideration for conventional fiber/polymer composites, but the additional features needed for a more rigorous analysis of exotic materials are covered. The new failure envelope is compared with published biaxial test data. The theory is developed for unnotched laminates but is easily shrunk to incorporate strength reductions that allow for bolt holes, cutouts, reduced compressive strength after impact, and the like.
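
    A schematic of the truncated envelope can be written in a few lines; the strain allowables below are hypothetical, and the shear-quadrant cutoff is expressed, as in the abstract, through the difference of the two direct strains.

    ```python
    # Schematic check against a truncated maximum-strain failure envelope:
    # the usual max-strain box in the tension-tension and compression-
    # compression quadrants, cut off in the shear quadrants. Allowables
    # are hypothetical illustrative values.

    def fails(eps1, eps2,
              eps_t=0.012,   # tensile strain allowable (fiber direction)
              eps_c=0.010):  # compressive strain allowable (magnitude)
        # classical maximum-strain limits on each fiber direction
        if not (-eps_c <= eps1 <= eps_t and -eps_c <= eps2 <= eps_t):
            return True
        # shear-quadrant cutoff: limit on (eps1 - eps2), the in-plane shear
        # strain seen by +/-45 deg fibers, set here by the larger uniaxial
        # strain to failure, as described in the abstract
        return abs(eps1 - eps2) > max(eps_t, eps_c)

    print(fails(0.011, 0.011))   # False: inside the box, no shear
    print(fails(0.008, -0.008))  # True: cut off in the shear quadrant
    ```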

  9. The Relationship between Serum Zinc Level and Heart Failure: A Meta-Analysis

    PubMed Central

    Yu, Xuefang; Huang, Lei; Zhao, Jinyan; Wang, Zhuoqun; Yao, Wei; Wu, Xianming; Huang, Jingjing

    2018-01-01

Zinc is essential for the maintenance of normal cellular structure and functions. Zinc dyshomeostasis can lead to many diseases, such as cardiovascular disease. However, there are conflicting reports on the relationship between serum zinc levels and heart failure (HF). The purpose of the present study is to explore the relationship between serum zinc levels and HF by using a meta-analysis approach. PubMed, Web of Science, and OVID databases were searched for reports on the association between serum zinc levels and HF through June 2016. Twelve reports with 1453 subjects from 27 case-control studies were chosen for the meta-analysis. Overall, the pooled analysis indicated that patients with HF had lower zinc levels than the control subjects. Further subgroup analysis stratified by geographic location also showed that HF patients had lower zinc levels than the control subjects. In addition, subgroup analysis stratified by HF subtype found that patients with idiopathic dilated cardiomyopathy (IDCM) had lower zinc levels than the control subjects, whereas this was not observed for patients with ischemic cardiomyopathy (ICM). In conclusion, the results of the meta-analysis indicate that there is a significant association between low serum zinc levels and HF. PMID:29682528
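
    The pooling step behind such a meta-analysis is simple inverse-variance weighting; the sketch below uses invented per-study effects and variances, not the review's data, and shows the fixed-effect form only.

    ```python
    # Sketch of inverse-variance (fixed-effect) pooling, the arithmetic
    # underlying a pooled analysis like the one above. Effects are
    # per-study mean differences in serum zinc (HF minus control); all
    # numbers are invented.
    import math

    effects   = [-0.42, -0.31, -0.55, -0.20]   # per-study effect estimates
    variances = [0.040, 0.025, 0.060, 0.030]   # per-study variances

    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))

    print(f"pooled effect = {pooled:.3f} "
          f"(95% CI {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f})")
    ```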

  10. Application of dynamic programming to evaluate the slope stability of a vertical extension to a balefill.

    PubMed

    Kremen, Arie; Tsompanakis, Yiannis

    2010-04-01

The slope stability of a proposed vertical extension of a balefill was investigated in the present study, in an attempt to determine a geotechnically conservative design, compliant with New Jersey Department of Environmental Protection regulations, that maximizes the utilization of unclaimed disposal capacity. Conventional geotechnical analytical methods are generally limited to well-defined failure modes, which may not occur in landfills or balefills due to the presence of preferential slip surfaces. In addition, these models assume an a priori stress distribution to solve essentially indeterminate problems. In this work, a different approach has been applied, which avoids several of the drawbacks of conventional methods. Specifically, the analysis was performed in a two-stage process: (a) calculation of the stress distribution, and (b) application of an optimization technique to identify the most probable failure surface. The stress analysis was performed using a finite element formulation, and the failure surface was located by a dynamic programming optimization method. A sensitivity analysis was performed to evaluate the effect of the various waste strength parameters of the underlying mathematical model on the results, namely the factor of safety of the landfill. Although this study focuses on the stability investigation of an expanded balefill, the methodology presented can easily be applied to general geotechnical investigations.
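
    The dynamic-programming step can be illustrated schematically: once a stress analysis has assigned each cell of a grid a local "instability cost", the critical surface is the minimum-cost path across the grid. The grid, the cost values, and the one-step transition rule below are all invented for illustration.

    ```python
    # Highly schematic dynamic-programming search for a critical surface:
    # find the left-to-right path of minimum accumulated cost, where each
    # step may move up, down, or stay level by one row. Costs are invented.
    import numpy as np

    cost = np.array([
        [4, 5, 6, 7],
        [3, 2, 4, 5],
        [5, 1, 1, 2],
        [6, 4, 2, 1],
    ], dtype=float)

    rows, cols = cost.shape
    best = cost[:, 0].copy()                 # best accumulated cost per row
    for j in range(1, cols):
        prev = best.copy()
        for i in range(rows):
            lo, hi = max(i - 1, 0), min(i + 1, rows - 1)
            best[i] = cost[i, j] + prev[lo:hi + 1].min()  # up/flat/down step

    print("minimum accumulated cost of a candidate surface:", best.min())
    ```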

  11. Quantifying effectiveness of failure prediction and response in HPC systems : methodology and example.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Chen, Frank Xiaoxiao; Pebay, Philippe Pierre

    2010-06-01

Effective failure prediction and mitigation strategies in high-performance computing systems could provide huge gains in resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates these using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors. While many research studies have quantified the expected impact of growing system size, and the associated shortened mean time to failure (MTTF), on application performance in large-scale high-performance computing (HPC) platforms, there has been little if any work to quantify the possible gains from predicting system resource failures with significant but imperfect accuracy. This possibly stems from HPC system complexity and the fact that, to date, no one has established any good predictors of failure in these systems. Our work in the OVIS project aims to discover these predictors via a variety of data collection techniques and statistical analysis methods that yield probabilistic predictions. The question then is, 'How good or useful are these predictions?' We investigate methods for answering this question in a general setting, and illustrate them using a specific failure predictor discovered on a production system at Sandia.
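
    A back-of-the-envelope version of the cost-benefit question the paper poses might look as follows; the cost model and all numbers are illustrative assumptions, not results from the study.

    ```python
    # Toy model: how accurate must a failure predictor be to pay off?
    # Savings come from mitigating predicted failures; costs come from
    # acting on every prediction, including false alarms.

    def net_benefit(recall, precision, failures_per_day,
                    lost_work_per_failure=2.0,   # node-hours lost if unmitigated
                    mitigation_cost=0.25):       # node-hours per predicted event
        caught = recall * failures_per_day
        predictions = caught / precision         # includes false alarms
        saved = caught * lost_work_per_failure
        spent = predictions * mitigation_cost
        return saved - spent                     # node-hours per day

    for recall, precision in [(0.9, 0.9), (0.9, 0.3), (0.3, 0.9)]:
        print(recall, precision, f"{net_benefit(recall, precision, 10):+.1f}")
    ```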

  12. Impact of different variables on the outcome of patients with clinically confined prostate carcinoma: prediction of pathologic stage and biochemical failure using an artificial neural network.

    PubMed

    Ziada, A M; Lisle, T C; Snow, P B; Levine, R F; Miller, G; Crawford, E D

    2001-04-15

The advent of advanced computing techniques has provided the opportunity to analyze clinical data using artificial intelligence techniques. This study was designed to determine whether a neural network could be developed using preoperative prognostic indicators to predict the pathologic stage and time of biochemical failure for patients who undergo radical prostatectomy. The preoperative information included TNM stage, prostate size, prostate specific antigen (PSA) level, biopsy results (Gleason score and percentage of positive biopsies), as well as patient age. All 309 patients underwent radical prostatectomy at the University of Colorado Health Sciences Center. The data from all patients were used to train a multilayer perceptron artificial neural network. Biochemical failure was defined as a rise in the PSA level > 0.2 ng/mL; the biochemical failure rate in the database used was 14.2%. Univariate and multivariate analyses were performed to validate the results. The neural network statistics for the validation set showed a sensitivity and specificity of 79% and 81%, respectively, for the prediction of pathologic stage, with an overall accuracy of 80% compared with an overall accuracy of 67% using multivariate regression analysis. The sensitivity and specificity for the prediction of failure were 67% and 85%, respectively, demonstrating high confidence in predicting failure. The overall accuracy rates for the artificial neural network and the multivariate analysis were similar. Neural networks can offer a convenient vehicle for clinicians to assess the preoperative risk of disease progression for patients who are about to undergo radical prostatectomy. Continued investigation of this approach with larger data sets seems warranted. Copyright 2001 American Cancer Society.
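
    The reported sensitivity and specificity reduce to confusion-matrix arithmetic, sketched below with an invented 2x2 matrix chosen only to reproduce figures of similar magnitude; it is not the study's validation data.

    ```python
    # Confusion-matrix arithmetic behind reported sensitivity/specificity.
    # The counts below are invented for illustration.

    tp, fn = 79, 21    # positives correctly / incorrectly predicted
    tn, fp = 81, 19    # negatives correctly / incorrectly predicted

    sensitivity = tp / (tp + fn)           # true-positive rate
    specificity = tn / (tn + fp)           # true-negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)

    print(f"sensitivity={sensitivity:.0%} specificity={specificity:.0%} "
          f"accuracy={accuracy:.0%}")
    ```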

  13. Application of failure mode and effects analysis (FMEA) to pretreatment phases in tomotherapy

    PubMed Central

    Broggi, Sara; Cantone, Marie Claire; Chiara, Anna; Muzio, Nadia Di; Longobardi, Barbara; Mangili, Paola

    2013-01-01

The aim of this paper was the application of the failure mode and effects analysis (FMEA) approach to assess the risks for patients undergoing radiotherapy treatments performed by means of a helical tomotherapy unit. FMEA was applied to the preplanning imaging, volume determination, and treatment planning stages of the tomotherapy process and consisted of three steps: 1) identification of the involved subprocesses; 2) identification and ranking of the potential failure modes, together with their causes and effects, using the risk probability number (RPN) scoring system; and 3) identification of additional safety measures to be proposed for process quality and safety improvement. The RPN upper threshold for risks of little concern was set at 125. A total of 74 failure modes were identified: 38 in the stage of preplanning imaging and volume determination, and 36 in the stage of planning. The threshold of 125 for RPN was exceeded in four cases: one case in the phase of preplanning imaging and volume determination, and three cases in the stage of planning. The most critical failures appeared to be related to (i) the wrong or missing definition and contouring of the overlapping regions, (ii) the wrong assignment of the overlap priority to each anatomical structure, (iii) the wrong choice of the computed tomography calibration curve for dose calculation, and (iv) the wrong (or not performed) choice of the number of fractions at the planning station. On the basis of these findings, in addition to the safety strategies already adopted in clinical practice, novel solutions have been proposed to mitigate the risk of these failures and increase patient safety. PACS number: 87.55.Qr PMID:24036868

  14. An accelerating precursor to predict "time-to-failure" in creep and volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Hao, Shengwang; Yang, Hang; Elsworth, Derek

    2017-09-01

Real-time prediction of rock failure from the monitored evolution of response variables is a central goal. A linear relation Ω̇·Ω̈⁻¹ = C(t_f − t) has been developed to describe the time to failure, where Ω represents a response quantity, Ω̇ and Ω̈ are its first and second time derivatives, C is a constant, and t_f represents the failure time. Observations from laboratory creep failure experiments and precursors to volcanic eruptions are used to test the validity of the approach. Both cumulative and simple moving window techniques are developed to perform predictions and to illustrate the effects of data selection on the results. Laboratory creep failure experiments on granites show that the linear relation works well during the final approach to failure. For blind prediction, the simple moving window technique is preferred because it always uses the most recent data and excludes effects of early data that deviate significantly from the predicted trend. When the predicted results show only small fluctuations, failure is imminent.
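
    The prediction step implied by the relation is a straight-line fit: the ratio of the first to the second derivative falls linearly to zero at t_f. The sketch below fits a moving window of synthetic, noisy data; the constants are invented, not the experiments' values.

    ```python
    # Sketch of the time-to-failure estimate implied by
    # ratio = dΩ/dt / d²Ω/dt² = C (t_f - t):
    # fit ratio = a + b*t over a recent window, then t_f = -a/b.
    import numpy as np

    C, t_f = 2.0, 100.0                              # invented ground truth
    t = np.linspace(60.0, 90.0, 31)                  # observation times
    ratio = C * (t_f - t) * (1 + 0.02 * np.random.randn(t.size))  # noisy data

    window = 10                                      # simple moving window
    slope, intercept = np.polyfit(t[-window:], ratio[-window:], 1)
    print("predicted failure time:", -intercept / slope)   # ~100
    ```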

  15. Implantable Hemodynamic Monitoring for Heart Failure Patients.

    PubMed

    Abraham, William T; Perl, Leor

    2017-07-18

    Rates of heart failure hospitalization remain unacceptably high. Such hospitalizations are associated with substantial patient, caregiver, and economic costs. Randomized controlled trials of noninvasive telemedical systems have failed to demonstrate reduced rates of hospitalization. The failure of these technologies may be due to the limitations of the signals measured. Intracardiac and pulmonary artery pressure-guided management has become a focus of hospitalization reduction in heart failure. Early studies using implantable hemodynamic monitors demonstrated the potential of pressure-based heart failure management, whereas subsequent studies confirmed the clinical utility of this approach. One large pivotal trial proved the safety and efficacy of pulmonary artery pressure-guided heart failure management, showing a marked reduction in heart failure hospitalizations in patients randomized to active pressure-guided management. "Next-generation" implantable hemodynamic monitors are in development, and novel approaches for the use of this data promise to expand the use of pressure-guided heart failure management. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  16. Failure Analysis of Discrete Damaged Tailored Extension-Shear-Coupled Stiffened Composite Panels

    NASA Technical Reports Server (NTRS)

    Baker, Donald J.

    2005-01-01

The results of an analytical and experimental investigation of the failure of composite stiffener panels with extension-shear coupling are presented. This tailored concept, when used in the cover skins of a tiltrotor aircraft wing, has the potential for increasing the aeroelastic stability margins and improving the aircraft productivity. The extension-shear coupling is achieved by using unbalanced 45-degree plies in the skin. The failure analysis of two tailored panel configurations that have the center stringer and adjacent skin severed is presented. Finite element analysis of the damaged panels was conducted using STAGS (STructural Analysis of General Shells), a general-purpose finite element program that includes a progressive failure capability for laminated composite structures based on point-stress analysis, traditional failure criteria, and ply discounting for material degradation. The progressive failure analysis predicted the failure path and maximum load capability. There is less than 12 percent difference between the predicted and experimental failure loads, and a good match of panel stiffness and strength between the progressive failure analysis and the experimental results. The results indicate that the tailored concept would be feasible to use in the wing skin of a tiltrotor aircraft.

  17. PRO-Elicere: A Study for Create a New Process of Dependability Analysis of Space Computer Systems

    NASA Astrophysics Data System (ADS)

    da Silva, Glauco; Netto Lahoz, Carlos Henrique

    2013-09-01

This paper presents a new approach to computer system dependability analysis, called PRO-ELICERE, which introduces data mining concepts and intelligent decision-support mechanisms to analyze the potential hazards and failures of critical computer systems. Some techniques and tools that support traditional dependability analysis are also presented, and the concept of knowledge discovery and intelligent databases for critical computer systems is briefly discussed. The paper then introduces the PRO-ELICERE process, an intelligent approach that automates ELICERE, a process created to extract non-functional requirements for critical computer systems. PRO-ELICERE can be used in the V&V activities of projects at the Institute of Aeronautics and Space, such as the Brazilian Satellite Launcher (VLS-1).

  18. Minding the Cyber-Physical Gap: Model-Based Analysis and Mitigation of Systemic Perception-Induced Failure.

    PubMed

    Mordecai, Yaniv; Dori, Dov

    2017-07-17

The cyber-physical gap (CPG) is the difference between the 'real' state of the world and the way the system perceives it. This discrepancy often stems from the limitations of sensing and data collection technologies and capabilities, and is inevitable to some degree in any cyber-physical system (CPS). Ignoring or misrepresenting such limitations during system modeling, specification, design, and analysis can potentially result in systemic misconceptions, disrupted functionality and performance, system failure, severe damage, and potential detrimental impacts on the system and its environment. We propose CPG-Aware Modeling & Engineering (CPGAME), a conceptual model-based approach to capturing, explaining, and mitigating the CPG. CPGAME enhances the systems engineer's ability to cope with CPGs, mitigate them by design, and prevent erroneous decisions and actions. We demonstrate CPGAME by applying it to modeling and analysis of the 1979 Three Mile Island 2 nuclear accident, and show how its meltdown could have been mitigated. We use the ISO 19450:2015 Object-Process Methodology as our conceptual modeling framework.

  19. Testing and analysis of flat and curved panels with multiple cracks

    NASA Technical Reports Server (NTRS)

    Broek, David; Jeong, David Y.; Thomson, Douglas

    1994-01-01

An experimental and analytical investigation of multiple cracking in various types of test specimens is described in this paper. The testing phase comprised a flat unstiffened panel series and curved stiffened and unstiffened panel series. The test specimens contained various configurations of initial damage. Static loading was applied to these specimens until ultimate failure, while loads and crack propagation were recorded. These data provide the basis for developing and validating methodologies for predicting linkup of multiple cracks, progression to failure, and overall residual strength. The results from twelve flat coupon and ten full-scale curved panel tests are presented. In addition, an engineering analysis procedure was developed to predict multiple-crack linkup. Reasonable agreement was found between predictions and actual test results for linkup and residual strength for both flat and curved panels. The results indicate that an engineering analysis approach has the potential to quantitatively assess the effect of multiple cracks on the arrest capability of an aircraft fuselage structure.

  20. Common cause evaluations in applied risk analysis of nuclear power plants. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taniguchi, T.; Ligon, D.; Stamatelatos, M.

    1983-04-01

Qualitative and quantitative approaches were developed for the evaluation of common cause failures (CCFs) in nuclear power plants and were applied to the analysis of the auxiliary feedwater systems of several pressurized water reactors (PWRs). Key CCF variables were identified through a survey of experts in the field and a review of failure experience in operating PWRs. These variables were classified into categories of high, medium, and low defense against a CCF. Based on the results, a checklist was developed for analyzing CCFs of systems. Several known techniques for quantifying CCFs were also reviewed. The information provided valuable insights into the development of a new model for estimating CCF probabilities, which is an extension of and improvement over the Beta Factor method. As applied to the analysis of the PWR auxiliary feedwater systems, the method yielded much more realistic values than the original Beta Factor method for a one-out-of-three system.
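
    A minimal sketch of the Beta Factor idea referenced above, applied to a one-out-of-three system; all numbers are illustrative, not the report's.

    ```python
    # Beta-factor common-cause model: a fraction beta of each train's
    # failure probability is assumed common to all trains. Numbers invented.

    q_total = 1e-2        # failure probability of one train on demand
    beta = 0.1            # fraction attributed to common cause

    q_ind = (1 - beta) * q_total      # independent part
    q_ccf = beta * q_total            # common-cause part

    # one-out-of-three system: fails only if all three trains fail
    q_independent_only = q_total ** 3          # naive model, no CCF
    q_with_ccf = q_ind ** 3 + q_ccf            # beta-factor approximation

    # the common-cause term dominates by orders of magnitude
    print(f"no CCF: {q_independent_only:.2e}  with CCF: {q_with_ccf:.2e}")
    ```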

  1. Learning from the Failures of Others: The Effects of Post-Exit Knowledge Spillovers on Recipient Firms

    ERIC Educational Resources Information Center

    Amankwah-Amoah, Joseph

    2011-01-01

Purpose: The purpose of this study is to examine the effects of post-exit knowledge diffusion created by departed firms on recipient firms. Design/methodology/approach: This is an inductive and exploratory study that addresses questions of how and why. The research used a qualitative interview methodology and data analysis using within…

  2. Placement Model for First-Time Freshmen in Calculus I (Math 131): University of Northern Colorado

    ERIC Educational Resources Information Center

    Heiny, Robert L.; Heiny, Erik L.; Raymond, Karen

    2017-01-01

Two approaches, Linear Discriminant Analysis and Logistic Regression, are used and compared to predict success or failure for first-time freshmen in the first calculus course at a medium-sized public, 4-year institution prior to Fall registration. The predictor variables are high school GPA and the number and GPAs of college prep mathematics…
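
    A sketch of such a comparison using scikit-learn, with synthetic stand-ins for the predictors; the data-generating rule and all numbers are invented, not the institution's records.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 400
    X = np.column_stack([
        rng.normal(3.2, 0.5, n),     # high school GPA (invented)
        rng.integers(2, 7, n),       # number of college-prep math courses
        rng.normal(3.0, 0.6, n),     # prep-math GPA
    ])
    # success more likely with stronger preparation (toy rule plus noise)
    p = 1 / (1 + np.exp(-(1.5 * (X[:, 0] - 3.2) + 0.3 * (X[:, 1] - 4)
                          + 1.0 * (X[:, 2] - 3.0))))
    y = rng.random(n) < p

    for model in (LinearDiscriminantAnalysis(),
                  LogisticRegression(max_iter=1000)):
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(type(model).__name__, f"{acc:.2f}")
    ```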

  3. Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data

    PubMed Central

    Zhao, Shanshan

    2014-01-01

Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling design. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469

  4. Covariate measurement error correction methods in mediation analysis with failure time data.

    PubMed

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  5. Development of a realistic stress analysis for fatigue analysis of notched composite laminates

    NASA Technical Reports Server (NTRS)

    Humphreys, E. A.; Rosen, B. W.

    1979-01-01

A finite element stress analysis consisting of a membrane and interlaminar shear spring analysis was developed. This approach was utilized in order to model physically realistic failure mechanisms while maintaining a high degree of computational economy. The accuracy of the stress analysis predictions is verified through comparisons with other solutions to the composite laminate edge effect problem. The stress analysis model was incorporated into an existing fatigue analysis methodology and the entire procedure was computerized. A fatigue analysis is performed on a square laminated composite plate with a circular central hole. A complete description and user's guide for the computer code FLAC (Fatigue of Laminated Composites) is included as an appendix.

  6. Application of the NUREG/CR-6850 EPRI/NRC Fire PRA Methodology to a DOE Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom Elicson; Bentley Harwood; Richard Yorg

    2011-03-01

The application of the NUREG/CR-6850 EPRI/NRC fire PRA methodology to a DOE facility presented several challenges. This paper documents the process and discusses several insights gained during development of the fire PRA. A brief review of the tasks performed is provided, with particular focus on the following: • Tasks 5 and 14: Fire-induced risk model and fire risk quantification. A key lesson learned was to begin model development and quantification as early as possible in the project, using screening values and simplified modeling if necessary. • Tasks 3 and 9: Fire PRA cable selection and detailed circuit failure analysis. In retrospect, it would have been beneficial to perform the model development and quantification in two phases, with detailed circuit analysis applied during phase 2. This would have allowed for development of a robust model and quantification earlier in the project and would have provided insights into where to focus the detailed circuit analysis efforts. • Tasks 8 and 11: Scoping fire modeling and detailed fire modeling. More focus should be placed on detailed fire modeling and less on scoping fire modeling; this was the approach taken for the fire PRA. • Task 14: Fire risk quantification. Typically, multiple safe shutdown (SSD) components fail during a given fire scenario, so dependent failure analysis is critical to obtaining a meaningful fire risk quantification. Dependent failure analysis for the fire PRA presented several challenges, which are discussed in the full paper.

  7. Optimum design of bolted composite lap joints under mechanical and thermal loading

    NASA Astrophysics Data System (ADS)

    Kradinov, Vladimir Yurievich

A new approach is developed for the analysis and design of mechanically fastened composite lap joints under mechanical and thermal loading. Based on a combined complex potential and variational formulation, the solution method satisfies the equilibrium equations exactly, while the boundary conditions are satisfied by minimizing the total potential. This approach is capable of modeling finite laminate planform dimensions, uniform and variable laminate thickness, laminate lay-up, interaction among bolts, bolt torque, bolt flexibility, bolt size, bolt-hole clearance and interference, insert dimensions, and insert material properties. Compared to finite element analysis, the robustness of the method does not decrease when modeling the interaction of many bolts; the method is also more suitable for parametric studies and design optimization. The Genetic Algorithm (GA), a powerful optimization technique for functions with multiple extrema in multidimensional search spaces, is applied in conjunction with the complex potential and variational formulation to achieve optimum designs of bolted composite lap joints. The objective of the optimization is to acquire a design that ensures the highest strength of the joint. The fitness function for the GA optimization is based on the average stress failure criterion, predicting net-section, shear-out, and bearing failure modes in bolted lap joints. The criterion accounts for the stress distribution in the thickness direction at the bolt location by applying a beam-on-elastic-foundation formulation.

  8. Accounting for failure: risk-based regulation and the problems of ensuring healthcare quality in the NHS

    PubMed Central

    Beaussier, Anne-Laure; Demeritt, David; Griffiths, Alex; Rothstein, Henry

    2016-01-01

    In this paper, we examine why risk-based policy instruments have failed to improve the proportionality, effectiveness, and legitimacy of healthcare quality regulation in the National Health Service (NHS) in England. Rather than trying to prevent all possible harms, risk-based approaches promise to rationalise and manage the inevitable limits of what regulation can hope to achieve by focusing regulatory standard-setting and enforcement activity on the highest priority risks, as determined through formal assessments of their probability and consequences. As such, risk-based approaches have been enthusiastically adopted by healthcare quality regulators over the last decade. However, by drawing on historical policy analysis and in-depth interviews with 15 high-level UK informants in 2013–2015, we identify a series of practical problems in using risk-based policy instruments for defining, assessing, and ensuring compliance with healthcare quality standards. Based on our analysis, we go on to consider why, despite a succession of failures, healthcare regulators remain committed to developing and using risk-based approaches. We conclude by identifying several preconditions for successful risk-based regulation: goals must be clear and trade-offs between them amenable to agreement; regulators must be able to reliably assess the probability and consequences of adverse outcomes; regulators must have a range of enforcement tools that can be deployed in proportion to risk; and there must be political tolerance for adverse outcomes. PMID:27499677

  9. Accounting for failure: risk-based regulation and the problems of ensuring healthcare quality in the NHS.

    PubMed

    Beaussier, Anne-Laure; Demeritt, David; Griffiths, Alex; Rothstein, Henry

    2016-05-18

    In this paper, we examine why risk-based policy instruments have failed to improve the proportionality, effectiveness, and legitimacy of healthcare quality regulation in the National Health Service (NHS) in England. Rather than trying to prevent all possible harms, risk-based approaches promise to rationalise and manage the inevitable limits of what regulation can hope to achieve by focusing regulatory standard-setting and enforcement activity on the highest priority risks, as determined through formal assessments of their probability and consequences. As such, risk-based approaches have been enthusiastically adopted by healthcare quality regulators over the last decade. However, by drawing on historical policy analysis and in-depth interviews with 15 high-level UK informants in 2013-2015, we identify a series of practical problems in using risk-based policy instruments for defining, assessing, and ensuring compliance with healthcare quality standards. Based on our analysis, we go on to consider why, despite a succession of failures, healthcare regulators remain committed to developing and using risk-based approaches. We conclude by identifying several preconditions for successful risk-based regulation: goals must be clear and trade-offs between them amenable to agreement; regulators must be able to reliably assess the probability and consequences of adverse outcomes; regulators must have a range of enforcement tools that can be deployed in proportion to risk; and there must be political tolerance for adverse outcomes.

  10. Modeling of damage driven fracture failure of fiber post-restored teeth.

    PubMed

    Xu, Binting; Wang, Yining; Li, Qing

    2015-09-01

Mechanical failure of biomaterials, which can be initiated either by violent force or by progressive stress fatigue, is a serious issue. Great efforts have been made to improve the mechanical performance of dental restorations. Virtual simulation is a promising approach for biomechanical investigations, presenting significant advantages in efficiency over traditional in vivo/in vitro studies. Over the past few decades, a number of virtual studies have been conducted to investigate biomechanical issues concerning dental biomaterials, but only with limited incorporation of brittle failure phenomena. Motivated by the contradictory findings between several finite element analyses and common clinical observations on the fracture resistance of post-restored teeth, this study aimed to provide an approach using numerical simulations for investigating the fracture failure process through a non-linear fracture mechanics model. The ability of this approach to predict fracture initiation and propagation in a complex biomechanical setting, based on intrinsic material properties, was investigated. Results of the virtual simulations matched the findings of experimental tests in terms of the ultimate fracture failure strengths and the predicted areas at risk of clinical failure. This study revealed that the failure of post-restored teeth is a typical damage-driven continuum-to-discrete process. This approach is anticipated to have ramifications not only for modeling fracture events, but also for the design and optimization of the mechanical properties of biomaterials for specific clinically determined requirements. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Sparse Group Penalized Integrative Analysis of Multiple Cancer Prognosis Datasets

    PubMed Central

    Liu, Jin; Huang, Jian; Xie, Yang; Ma, Shuangge

    2014-01-01

In cancer research, high-throughput profiling studies have been extensively conducted, searching for markers associated with prognosis. Because of the “large d, small n” characteristic, results generated from the analysis of a single dataset can be unsatisfactory. Recent studies have shown that integrative analysis, which simultaneously analyzes multiple datasets, can be more effective than single-dataset analysis and classic meta-analysis. Most existing integrative analyses assume the homogeneity model, which postulates that different datasets share the same set of markers, and several approaches have been designed to reinforce this assumption. In practice, different datasets may differ in terms of patient selection criteria, profiling techniques, and many other aspects; such differences may make the homogeneity model too restrictive. In this study, we assume the heterogeneity model, under which different datasets are allowed to have different sets of markers. With multiple cancer prognosis datasets, we adopt the AFT (accelerated failure time) model to describe survival; this model may have the lowest computational cost among popular semiparametric survival models. For marker selection, we adopt a sparse group MCP (minimax concave penalty) approach, which has an intuitive formulation and can be computed using an effective group coordinate descent algorithm. A simulation study shows that it outperforms the existing approaches under both the homogeneity and heterogeneity models. Data analysis further demonstrates the merit of the heterogeneity model and the proposed approach. PMID:23938111
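
    For concreteness, the standard MCP penalty and the corresponding one-dimensional "firm" thresholding rule used inside coordinate descent are sketched below; this is the generic textbook form of the penalty, not the authors' full sparse group algorithm.

    ```python
    # Minimax concave penalty (MCP) with regularization lam and
    # concavity gam > 1, plus its one-dimensional solution for a
    # standardized design (the 'firm' threshold).
    import numpy as np

    def mcp_penalty(t, lam, gam):
        t = np.abs(t)
        return np.where(t <= gam * lam,
                        lam * t - t**2 / (2 * gam),
                        0.5 * gam * lam**2)

    def mcp_threshold(z, lam, gam):
        """Shrink small coefficients, leave large ones unpenalized."""
        if abs(z) <= gam * lam:
            return np.sign(z) * max(abs(z) - lam, 0.0) / (1 - 1 / gam)
        return z   # large coefficients escape the bias of soft thresholding

    print(mcp_threshold(0.8, lam=0.5, gam=3.0))   # shrunk toward zero: 0.45
    print(mcp_threshold(2.0, lam=0.5, gam=3.0))   # returned as-is: 2.0
    ```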

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teixeira, Flavia C., E-mail: flavitiz@gmail.com; Almeida, Carlos E. de; Saiful Huq, M.

Purpose: The goal of this study was to evaluate the safety and quality management program for stereotactic radiosurgery (SRS) treatment processes at three radiotherapy centers in Brazil by using three industrial engineering tools: (1) process mapping, (2) failure modes and effects analysis (FMEA), and (3) fault tree analysis. Methods: The recommendations of Task Group 100 of the American Association of Physicists in Medicine were followed to apply the three tools described above to create a process tree for the SRS procedure at each radiotherapy center, after which FMEA was performed. Failure modes were identified for all process steps, and values of the risk priority number (RPN) were calculated from the O, S, and D values (RPN = O × S × D) assigned by the professional team responsible for patient care. Results: The subprocess treatment planning presented the highest number of failure modes for all centers. The total numbers of failure modes were 135, 104, and 131 for centers I, II, and III, respectively. The highest RPN value for each center was as follows: center I (204), center II (372), and center III (370). Failure modes with RPN ≥ 100: center I (22), center II (115), and center III (110). Failure modes characterized by S ≥ 7 represented 68% of the failure modes for center III, 62% for center II, and 45% for center I. Failure modes with RPN values ≥ 100 and S ≥ 7, D ≥ 5, and O ≥ 5 were considered high priority in this study. Conclusions: The results of the present study show that the safety risk profiles for the same stereotactic radiotherapy process are different at three radiotherapy centers in Brazil. Although the treatment process is the same, this study showed that the risk priorities differ, which will lead to the implementation of different safety interventions among the centers. Therefore, the current practice of applying universal device-centric QA is not adequate to address all possible failures in clinical processes at different radiotherapy centers. Integrated approaches combining device-centric and process-specific quality management, tailored to each radiotherapy center, are the key to a safe quality management program.

  13. Simulation of Mechanical Behavior and Damage of a Large Composite Wind Turbine Blade under Critical Loads

    NASA Astrophysics Data System (ADS)

    Tarfaoui, M.; Nachtane, M.; Khadimallah, H.; Saifaoui, D.

    2018-04-01

Energy generation/transmission and greenhouse gas emissions are two of the pressing energy problems we face today. In this context, renewable energy sources are a necessary part of the solution, especially wind power, which is one of the most cost-competitive alternatives to new fossil energy facilities. This paper presents the simulation of the mechanical behavior and damage of a 48 m composite wind turbine blade under critical wind loads. The finite element analysis was performed using the ABAQUS code to predict the most critical damage behavior and to gain insight into the complex structural behavior of wind turbine blades. The approach developed is based on nonlinear FE analysis using mean values for the material properties and the Tsai-Hill failure criterion to predict failure modes in large structures and to identify the sensitive zones.
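
    The Tsai-Hill criterion used in the blade analysis reduces to a single ply-level index; the sketch below uses generic illustrative strength values, not the blade's material data, and shows only the simple tension-side form of the criterion.

    ```python
    # Tsai-Hill failure index for a single ply under in-plane stress.
    # Strength values are generic illustrative numbers, not the blade's;
    # for compressive stresses the corresponding compressive strengths
    # would be substituted.

    def tsai_hill_index(s1, s2, t12, X=1000.0, Y=40.0, S=60.0):
        """s1, s2: stresses along/transverse to the fibers; t12: shear (MPa).
        X, Y, S: longitudinal, transverse, and shear strengths (MPa).
        Failure is predicted when the index reaches 1."""
        return (s1 / X)**2 - (s1 * s2) / X**2 + (s2 / Y)**2 + (t12 / S)**2

    print(tsai_hill_index(400.0, 10.0, 20.0))   # < 1: ply predicted safe
    print(tsai_hill_index(800.0, 30.0, 40.0))   # > 1: ply predicted failed
    ```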

  14. Fundamental analysis of the failure of polymer-based fiber reinforced composites

    NASA Technical Reports Server (NTRS)

    Kanninen, M. F.; Rybicki, E. F.; Griffith, W. I.; Broek, D.

    1975-01-01

A mathematical model predicting the strength of unidirectional fiber reinforced composites containing known flaws and exhibiting linear elastic-brittle material behavior was developed. The approach was to embed a local heterogeneous region surrounding the crack tip into an anisotropic elastic continuum. This (1) permits an explicit analysis of the micromechanical processes involved in the fracture, and (2) remains simple enough to be useful in practical computations. Computations for arbitrary flaw size and orientation under arbitrary applied loads were performed. The mechanical properties were those of graphite epoxy. With the rupture properties arbitrarily varied to test the capabilities of the model to reflect real fracture modes, it was shown that fiber breakage, matrix crazing, crack bridging, matrix-fiber debonding, and axial splitting can all occur during a period of (gradually) increasing load prior to catastrophic failure. The calculations also reveal the sequential nature of the stable crack growth process preceding fracture.

  15. Assessment of Intralaminar Progressive Damage and Failure Analysis Using an Efficient Evaluation Framework

    NASA Technical Reports Server (NTRS)

    Hyder, Imran; Schaefer, Joseph; Justusson, Brian; Wanthal, Steve; Leone, Frank; Rose, Cheryl

    2017-01-01

Reducing the timeline for development and certification of composite structures has been a long-standing objective of the aerospace industry. This timeline can be further extended when attempting to integrate new fiber-reinforced composite materials, due to the large amount of testing required at every level of design. Computational progressive damage and failure analysis (PDFA) attempts to mitigate this effect; however, new PDFA methods have been slow to be adopted in industry since material model evaluation techniques have not been fully defined. This study presents an efficient evaluation framework that uses a piecewise verification and validation (V&V) approach for PDFA methods. Specifically, the framework is applied to evaluate PDFA research codes within the context of intralaminar damage. Methods are incrementally taken through various V&V exercises specifically tailored to study PDFA intralaminar damage modeling capability. Finally, methods are evaluated against a defined set of success criteria to highlight successes and limitations.

  16. Applicability of NASA contract quality management and failure mode effect analysis procedures to the USGS Outer Continental Shelf oil and gas lease management program

    NASA Technical Reports Server (NTRS)

    Dyer, M. K.; Little, D. G.; Hoard, E. G.; Taylor, A. C.; Campbell, R.

    1972-01-01

    An approach that might be used for determining the applicability of NASA management techniques to benefit almost any type of down-to-earth enterprise is presented. A study was made to determine the following: (1) the practicality of adopting NASA contractual quality management techniques to the U.S. Geological Survey Outer Continental Shelf lease management function; (2) the applicability of failure mode effects analysis to the drilling, production, and delivery systems in use offshore; (3) the impact on industrial offshore operations and onshore management operations required to apply recommended NASA techniques; and (4) the probable changes required in laws or regulations in order to implement recommendations. Several management activities that have been applied to space programs are identified, and their institution for improved management of offshore and onshore oil and gas operations is recommended.

  17. Failure analysis of braided U-shaped metal bellows flexible hoses

    NASA Astrophysics Data System (ADS)

    Pierce, Stephen O.

Most prior research has extensively reviewed the effects of non-reinforced metal bellows and their pressurized characteristics. However, the majority of flex hoses are manufactured with reinforcement in the form of interwoven wire braids. In this research, outer-braid-reinforced metal bellows flex hoses are examined for failure at differing lengths. For bellows expansion joints, as the length of the flex hose increases, the pressure at which squirm occurs decreases; the same trend toward failure is expected in the testing performed here. As the length of the flex hose increases, it is predicted that the hose will fail at a decreasing pressure. Since the braid is the only element that prevents squirm from occurring, more of the load will be transferred from the bellows into the braid. This will ultimately cause failure of the braid to occur at a lower pressure as the length of the hose increases, due to more of the load being transmitted from the bellows into the braid.

  18. An efficient scan diagnosis methodology according to scan failure mode for yield enhancement

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Tae; Seo, Nam-Sik; Oh, Ghil-Geun; Kim, Dae-Gue; Lee, Kyu-Taek; Choi, Chi-Young; Kim, InSoo; Min, Hyoung Bok

    2008-12-01

Yield has always been a driving consideration in modern semiconductor fabrication. Statistically, the largest portion of wafer yield loss stems from defective scan failures. This paper presents efficient failure analysis methods, based on scan diagnosis, for initial yield ramp-up and ongoing products. Results of our analysis show that more than 60% of scan failure dies fall into the shift-mode category in very deep submicron (VDSM) devices. However, localization of scan shift mode failures is much more difficult than for capture mode failures, because shift failures are caused by malfunction of the scan chain itself. Addressing this challenge, we propose the most suitable analysis method for each scan failure mode (capture/shift) for yield enhancement. For the capture failure mode, this paper describes a method that integrates the scan diagnosis flow with backside probing technology to obtain more accurate candidates. We also describe several unique techniques, such as a bulk back-grinding solution, efficient backside probing, and a signal analysis method. Lastly, we introduce a blocked-chain analysis algorithm for efficient analysis of the shift failure mode. The combination of these two methods contributes to yield enhancement. We confirm the failure candidates with the physical failure analysis (PFA) method. Direct feedback from defect visualization is useful for bringing devices to mass production in a shorter time. Experimental data on mass products show that our method produces an average reduction of 13.7% in defective SCAN & SRAM-BIST failure rates and an average improvement of 18.2% in wafer yield.

  19. Operations analysis (study 2.1). Contingency analysis. [of failure modes anticipated during space shuttle upper stage planning

    NASA Technical Reports Server (NTRS)

    1974-01-01

Future operational concepts for the space transportation system were studied in terms of space shuttle upper stage failure contingencies possible during deployment, retrieval, or space servicing of automated satellite programs. Problems anticipated during mission planning were isolated using a modified 'fault tree' technique normally used in safety analyses. A comprehensive space servicing hazard analysis is presented which classifies possible failure modes under the categories of catastrophic collision, failure to rendezvous and dock, servicing failure, and failure to undock. The failure contingencies defined are to be taken into account during design of the upper stage.

  20. 49 CFR Appendix A to Part 214 - Schedule of Civil Penalties 1

    Code of Federal Regulations, 2011 CFR

    2011-10-01

... ring buoys 5,000 10,000 (ii) Failure to use ring buoys 1,500 (f)(i) Failure to provide skiff 1,000 2,500 (ii) Failure to use skiff 1,500 214.109 Scaffolding: (a)-(f) Failure to provide conforming... approach warning signal 2,000 (e) Failure to communicate proper warning signal 1,500 3,000 (f)(1...

  1. 49 CFR Appendix A to Part 214 - Schedule of Civil Penalties 1

    Code of Federal Regulations, 2012 CFR

    2012-10-01

... ring buoys 5,000 10,000 (ii) Failure to use ring buoys 1,500 (f)(i) Failure to provide skiff 1,000 2,500 (ii) Failure to use skiff 1,500 214.109 Scaffolding: (a)-(f) Failure to provide conforming... train approach warning signal 2,000 (e) Failure to communicate proper warning signal 1,500 3,000 (f)(1...

  2. 49 CFR Appendix A to Part 214 - Schedule of Civil Penalties 1

    Code of Federal Regulations, 2014 CFR

    2014-10-01

... ring buoys 5,000 10,000 (ii) Failure to use ring buoys 1,500 (f)(i) Failure to provide skiff 1,000 2,500 (ii) Failure to use skiff 1,500 214.109 Scaffolding: (a)-(f) Failure to provide conforming... approach warning signal 2,000 (e) Failure to communicate proper warning signal 1,500 3,000 (f)(1...

  3. Monte Carlo Simulation of Markov, Semi-Markov, and Generalized Semi- Markov Processes in Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    English, Thomas

    2005-01-01

A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with a fault tree potentially existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted: typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. First, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Second, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than the usual static probability-derived end states.
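
    A toy Monte Carlo of a semi-Markov process shows how random sojourn times attach a time-to-failure distribution to event-tree end states; the state structure, distributions, and parameters below are invented, not the study's four-element model.

    ```python
    # Toy semi-Markov simulation: an embedded Markov chain chooses the
    # next state, while Weibull sojourn times accumulate into a
    # time-to-failure distribution for the absorbing end state.
    import numpy as np

    rng = np.random.default_rng(1)

    # states: 0 = nominal, 1 = degraded, 2 = failed (absorbing)
    next_state = {0: ([1], [1.0]), 1: ([0, 2], [0.4, 0.6])}
    sojourn = {0: (2.0, 100.0), 1: (1.5, 20.0)}   # Weibull (shape, scale)

    def time_to_failure():
        state, t = 0, 0.0
        while state != 2:
            shape, scale = sojourn[state]
            t += scale * rng.weibull(shape)       # random sojourn time
            states, probs = next_state[state]
            state = rng.choice(states, p=probs)   # embedded chain step
        return t

    samples = np.array([time_to_failure() for _ in range(5000)])
    print(f"mean TTF {samples.mean():.0f}, "
          f"5th percentile {np.percentile(samples, 5):.0f}")
    ```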

  4. Application of Failure Mode and Effect Analysis (FMEA) and cause and effect analysis in conjunction with ISO 22000 to a snails (Helix aspersa) processing plant; A case study.

    PubMed

    Arvanitoyannis, Ioannis S; Varzakas, Theodoros H

    2009-08-01

Failure Mode and Effect Analysis (FMEA) has been applied to the risk assessment of snail manufacturing. A tentative approach to FMEA application in the snail industry was attempted in conjunction with ISO 22000. Preliminary Hazard Analysis was used to analyze and predict the occurring failure modes in a food chain system (a snail processing plant), based on the functions, characteristics, and/or interactions of the ingredients or the processes upon which the system depends. Critical control points were identified and implemented in the cause-and-effect diagram (also known as the Ishikawa, tree, or fishbone diagram). In this work, a comparison of ISO 22000 analysis with HACCP is carried out for snail processing and packaging. However, the main emphasis was put on the quantification of risk assessment by determining the RPN per identified processing hazard. Sterilization of tins, bioaccumulation of heavy metals, packaging of shells, and poisonous mushrooms were the processes identified with the highest RPNs (280, 240, 147, and 144, respectively), and corrective actions were undertaken. Following the application of corrective actions, a second calculation of RPN values was carried out, leading to considerably lower values (below the upper acceptable limit of 130). It is noteworthy that the application of the Ishikawa (cause-and-effect or tree) diagram led to converging results, thus corroborating the validity of conclusions derived from risk assessment and FMEA. Therefore, the incorporation of FMEA analysis within the ISO 22000 system of a snail processing plant is considered imperative.

  5. Know thy eHealth user: Development of biopsychosocial personas from a study of older adults with heart failure.

    PubMed

    Holden, Richard J; Kulanthaivel, Anand; Purkayastha, Saptarshi; Goggins, Kathryn M; Kripalani, Sunil

    2017-12-01

    Personas are a canonical user-centered design method increasingly used in health informatics research. Personas, which are empirically derived user archetypes, can be used by eHealth designers to gain a robust understanding of their target end users, such as patients. Our objective was to develop biopsychosocial personas of older patients with heart failure using quantitative analysis of survey data. Data were collected using standardized surveys and medical record abstraction from 32 older adults with heart failure recently hospitalized for acute heart failure exacerbation. Hierarchical cluster analysis was performed on a final dataset of n=30. Nonparametric analyses were used to identify differences between clusters on 30 clustering variables and seven outcome variables. Six clusters were produced, ranging in size from two to eight patients per cluster. Clusters differed significantly on these biopsychosocial domains and subdomains: demographics (age, sex); medical status (comorbid diabetes); functional status (exhaustion, household work ability, hygiene care ability, physical ability); psychological status (depression, health literacy, numeracy); technology (Internet availability); healthcare system (visits by home healthcare, trust in providers); social context (informal caregiver support, cohabitation, marital status); and economic context (employment status). Tabular and narrative persona descriptions provide an easy reference guide for informatics designers. Persona development using approaches such as clustering of structured survey data is an important tool for health informatics professionals. We describe insights from our study of patients with heart failure, then recommend a generic ten-step persona development process. Methodological strengths and limitations of the study, and of persona development generally, are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
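
    The sketch below illustrates the general workflow the study describes: hierarchical cluster analysis of standardized survey variables, with the tree cut into six clusters. The synthetic data, variable count, and Ward linkage are assumptions for illustration, not the study's dataset or exact method.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    # Hierarchical clustering of synthetic stand-in data: 30 "patients" by
    # 5 z-scored survey variables (the study used 30 clustering variables).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 5))

    Z = linkage(X, method="ward")                    # agglomerative, Ward's criterion (assumed)
    labels = fcluster(Z, t=6, criterion="maxclust")  # cut the dendrogram into 6 clusters

    for k in range(1, 7):
        print(f"cluster {k}: {np.sum(labels == k)} patients")
    ```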

  6. Pilot performance in zero-visibility precision approach. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Ephrath, A. R.

    1975-01-01

    The pilot's short-term decisions regarding performance assessment and failure monitoring are examined. The performance of airline pilots who flew simulated zero-visibility landing approaches is reported. Results indicate that the pilot's mode of participation in the control task has a strong effect on his workload, the induced workload being lowest when the pilot acts as a monitor during a coupled approach and highest when the pilot is an active element in the control loop. A marked increase in workload at altitudes below 500 ft is documented in all participation modes; this increase is inversely related to distance-to-go. The participation mode is shown to have a dominant effect on failure-detection performance, with a failure in a monitored (coupled) axis being detected faster than a comparable failure in a manually controlled axis. Touchdown performance is also documented. It is concluded that the conventional instrument panel and its associated displays are inadequate for zero-visibility operations in the final phases of the landing approach.

  7. Use of failure mode and effects analysis for proactive identification of communication and handoff failures from organ procurement to transplantation.

    PubMed

    Steinberger, Dina M; Douglas, Stephen V; Kirschbaum, Mark S

    2009-09-01

    A multidisciplinary team from the University of Wisconsin Hospital and Clinics transplant program used failure mode and effects analysis to proactively examine opportunities for communication and handoff failures across the continuum of care from organ procurement to transplantation. The team performed a modified failure mode and effects analysis that isolated the multiple linked, serial, and complex information exchanges occurring during the transplantation of one solid organ. Failure mode and effects analysis proved effective for engaging a diverse group of invested stakeholders in analyzing and discussing opportunities to improve the system's resilience for avoiding errors during a time-pressured and complex process.

  8. Failure Mode, Effects, and Criticality Analysis (FMECA)

    DTIC Science & Technology

    1993-04-01

    Preliminary Failure Modes, Effects and Criticality Analysis (FMECA) of the Brayton Isotope Power System Ground Demonstration System, Report No. TID 27301...No. TID/SNA - 3015, Aerojet Nuclear Systems Co., Sacramento, California: 1970. 95. Taylor, J.R. A Formalization of Failure Mode Analysis of Control...Roskilde, Denmark: 1973. 96. Taylor, J.R. A Semi-Automatic Method for Qualitative Failure Mode Analysis. Report No. RISO-M-1707. Available from a

  9. Comprehension and retrieval of failure cases in airborne observatories

    NASA Technical Reports Server (NTRS)

    Alvarado, Sergio J.; Mock, Kenrick J.

    1995-01-01

    This paper describes research dealing with the computational problem of analyzing and repairing failures of electronic and mechanical systems of telescopes in NASA's airborne observatories, such as KAO (Kuiper Airborne Observatory) and SOFIA (Stratospheric Observatory for Infrared Astronomy). The research has resulted in the development of an experimental system that acquires knowledge of failure analysis from input text, and answers questions regarding failure detection and correction. The system's design builds upon previous work on text comprehension and question answering, including: knowledge representation for conceptual analysis of failure descriptions, strategies for mapping natural language into conceptual representations, case-based reasoning strategies for memory organization and indexing, and strategies for memory search and retrieval. These techniques have been combined into a model that accounts for: (a) how to build a knowledge base of system failures and repair procedures from descriptions that appear in telescope-operators' logbooks and FMEA (failure modes and effects analysis) manuals; and (b) how to use that knowledge base to search and retrieve answers to questions about causes and effects of failures, as well as diagnosis and repair procedures. This model has been implemented in FANSYS (Failure ANalysis SYStem), a prototype text comprehension and question answering program for failure analysis.

  10. Comprehension and retrieval of failure cases in airborne observatories

    NASA Astrophysics Data System (ADS)

    Alvarado, Sergio J.; Mock, Kenrick J.

    1995-05-01

    This paper describes research dealing with the computational problem of analyzing and repairing failures of electronic and mechanical systems of telescopes in NASA's airborne observatories, such as KAO (Kuiper Airborne Observatory) and SOFIA (Stratospheric Observatory for Infrared Astronomy). The research has resulted in the development of an experimental system that acquires knowledge of failure analysis from input text, and answers questions regarding failure detection and correction. The system's design builds upon previous work on text comprehension and question answering, including: knowledge representation for conceptual analysis of failure descriptions, strategies for mapping natural language into conceptual representations, case-based reasoning strategies for memory organization and indexing, and strategies for memory search and retrieval. These techniques have been combined into a model that accounts for: (a) how to build a knowledge base of system failures and repair procedures from descriptions that appear in telescope-operators' logbooks and FMEA (failure modes and effects analysis) manuals; and (b) how to use that knowledge base to search and retrieve answers to questions about causes and effects of failures, as well as diagnosis and repair procedures. This model has been implemented in FANSYS (Failure ANalysis SYStem), a prototype text comprehension and question answering program for failure analysis.
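
    As a rough illustration of case-based indexing and retrieval of failure reports, the sketch below matches symptom sets against a tiny case base. The case representation and overlap-based similarity are simplifying assumptions, not the FANSYS implementation.

    ```python
    from dataclasses import dataclass

    @dataclass
    class FailureCase:
        component: str
        symptoms: frozenset
        cause: str
        repair: str

    # Hypothetical cases of the kind found in operators' logbooks and FMEA manuals.
    CASE_BASE = [
        FailureCase("telescope drive", frozenset({"oscillation", "position drift"}),
                    "worn servo bearing", "replace bearing, recalibrate encoder"),
        FailureCase("IR detector", frozenset({"no signal", "cryostat warm"}),
                    "cryogen depletion", "refill cryostat, check vacuum seal"),
    ]

    def retrieve(symptoms):
        """Return cases ranked by symptom overlap (a crude similarity index)."""
        scored = [(len(c.symptoms & symptoms), c) for c in CASE_BASE]
        return [c for score, c in sorted(scored, key=lambda sc: -sc[0]) if score > 0]

    for case in retrieve({"no signal"}):
        print(case.component, "->", case.cause, "/", case.repair)
    ```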

  11. Photoresist and stochastic modeling

    NASA Astrophysics Data System (ADS)

    Hansen, Steven G.

    2018-01-01

    Analysis of physical modeling results can provide unique insights into extreme ultraviolet stochastic variation, which augment, and sometimes refute, conclusions based on physical intuition and even wafer experiments. Simulations verify the primacy of "imaging critical" counting statistics (photons, electrons, and net acids) and the image/blur-dependent dose sensitivity in describing the local edge or critical dimension variation. But the failure of simple counting when resist thickness is varied highlights a limitation of this exact analytical approach, so a calibratable empirical model offers useful simplicity and convenience. Results presented here show that a wide range of physical simulation results can be well matched by an empirical two-parameter model based on blurred image log-slope (ILS) for lines/spaces and normalized ILS for holes. These results are largely consistent with a wide range of published experimental results; however, there is some disagreement with the recently published dataset of De Bisschop. The present analysis suggests that the origin of this model failure is an unexpected breakdown of the blurred-ILS/dose-sensitivity relationship in that resist process. It is shown that a photoresist mechanism based on high photodecomposable quencher loading and high quencher diffusivity can give rise to pitch-dependent blur, which may explain the discrepancy.
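
    As one way to picture a calibratable two-parameter empirical model of this kind, the sketch below fits the form sigma = a + b/ILS to synthetic data by least squares. The functional form and numbers are assumptions for illustration, not the author's model.

    ```python
    import numpy as np

    # Synthetic stand-in data: blurred image log-slope (ILS) versus local CD
    # variation. Values are invented for illustration only.
    ils = np.array([2.0, 3.0, 4.0, 6.0, 8.0])      # blurred ILS (1/um)
    sigma = np.array([4.1, 2.9, 2.3, 1.7, 1.4])    # CD variation (nm)

    # Linear least squares on the assumed two-parameter form sigma = a + b/ILS.
    A = np.column_stack([np.ones_like(ils), 1.0 / ils])
    (a, b), *_ = np.linalg.lstsq(A, sigma, rcond=None)
    print(f"fit: sigma = {a:.2f} + {b:.2f}/ILS")
    ```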

  12. The challenge of measuring emergency preparedness: integrating component metrics to build system-level measures for strategic national stockpile operations.

    PubMed

    Jackson, Brian A; Faith, Kay Sullivan

    2013-02-01

    Although significant progress has been made in measuring public health emergency preparedness, system-level performance measures are lacking. This report examines a potential approach to such measures for Strategic National Stockpile (SNS) operations. We adapted failure mode and effects analysis, an engineering technique used to assess the reliability of technological systems, to assess preparedness. That technique, which includes systematic mapping of the response system and identification of possible breakdowns that affect performance, provides a path to using data from existing SNS assessment tools to estimate the likely future performance of the system overall. Systems models of SNS operations were constructed and failure mode analyses were performed for each component. Linking data from existing assessments, including the technical assistance review and functional drills, to reliability assessment was demonstrated using publicly available information. The use of failure mode and effects estimates to assess overall response system reliability was demonstrated with a simple simulation example. Reliability analysis appears to be an attractive way to integrate information from the substantial investment in detailed assessments of stockpile delivery and dispensing to provide a view of likely future response performance.
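
    The sketch below illustrates the basic rollup the report demonstrates: component-level failure probabilities, of the kind a failure mode and effects analysis produces, combined by simulation into a system-level reliability estimate. The response functions and probabilities are illustrative, not SNS assessment data.

    ```python
    import random

    # Assumed per-mission failure probabilities for each response function.
    FAILURE_PROB = {
        "request/approval":   0.02,
        "stockpile delivery": 0.05,
        "local distribution": 0.08,
        "dispensing":         0.10,
    }

    def mission_succeeds():
        # Series system: the response fails if any single function fails.
        return all(random.random() > p for p in FAILURE_PROB.values())

    trials = 100_000
    reliability = sum(mission_succeeds() for _ in range(trials)) / trials
    print(f"estimated system-level reliability: {reliability:.3f}")
    ```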

  13. Motivational Interviewing Tailored Interventions for Heart Failure (MITI-HF): study design and methods.

    PubMed

    Masterson Creber, Ruth; Patey, Megan; Dickson, Victoria Vaughan; DeCesaris, Marissa; Riegel, Barbara

    2015-03-01

    Lack of engagement in self-care is common among patients needing to follow a complex treatment regimen, especially patients with heart failure, who are affected by comorbidity, disability, and the side effects of polypharmacy. The purpose of Motivational Interviewing Tailored Interventions for Heart Failure (MITI-HF) is to test the feasibility and comparative efficacy of a motivational interviewing (MI) intervention on self-care, acute heart failure physical symptoms, and quality of life. We are conducting a brief, nurse-led motivational interviewing randomized controlled trial to address behavioral and motivational issues related to heart failure self-care. Participants in the intervention group receive home- and phone-based motivational interviewing sessions over 90 days, and those in the control group receive care as usual. Participants in both groups receive patient education materials. The primary study outcome is change in self-care maintenance from baseline to 90 days. This article presents the study design, methods, plans for statistical analysis, and descriptive characteristics of the study sample for MITI-HF. Study findings will contribute to the literature on the efficacy of motivational interviewing to promote heart failure self-care. We anticipate that using an MI approach can help patients with heart failure focus on their internal motivation to change in a non-confrontational, patient-centered, and collaborative way. It also affirms their ability to practice competent self-care relevant to their personal health goals. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. 14 CFR 417.309 - Flight safety system analysis.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...

  15. 14 CFR 417.309 - Flight safety system analysis.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...

  16. 14 CFR 417.309 - Flight safety system analysis.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...

  17. 14 CFR 417.309 - Flight safety system analysis.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...

  18. 14 CFR 417.309 - Flight safety system analysis.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...

  19. Service Life Extension of the Propulsion System of Long-Term Manned Orbital Stations

    NASA Technical Reports Server (NTRS)

    Kamath, Ulhas; Kuznetsov, Sergei; Spencer, Victor

    2014-01-01

    One of the critical non-replaceable systems of a long-term manned orbital station is the propulsion system. Since the propulsion system operates beginning with the launch of station elements into orbit, its service life determines the service life of the station overall. Weighing almost a million pounds, the International Space Station (ISS) is about four times as large as the Russian space station Mir and about five times as large as the U.S. Skylab. Constructed over a span of more than a decade with the help of over 100 space flights, the elements and modules of the ISS provide more research space than any spacecraft ever built. Originally envisaged for a service life of fifteen years, this Earth-orbiting laboratory has been in orbit since 1998. Some elements launched later in the assembly sequence were not yet built when the first elements were placed in orbit. Hence, some of the early modules launched at the inception of the program were already nearing the end of their design life when the ISS was finally complete and operational. To maximize the return on global investments in the ISS, it is essential for the valuable research on the ISS to continue as long as the station can be sustained safely in orbit. This paper describes the work performed to extend the service life of the ISS propulsion system. A system comprises many components with varying failure rates. Reliability of a system is the probability that it will perform its intended function under encountered operating conditions for a specified period of time. As we are interested in how reliable a system will be in the future, reliability expressed as a function of time provides valuable insight. In a hypothetical bathtub-shaped failure rate curve, the failure rate, defined as the number of failures per unit time that a currently healthy component will suffer in a given future time interval, decreases during the infant-mortality period, stays nearly constant during the service life, and increases at the end, when the design service life ends and the wear-out phase begins. However, component failure rates do not remain constant over the entire cycle life. The failure rate depends on various factors such as design complexity, the current age of the component, operating conditions, and the severity of environmental stress factors. Development, qualification, and acceptance test processes provide rigorous screening of components to weed out imperfections that might otherwise cause infant-mortality failures. If sufficient samples are tested to failure, failure time versus failure quantity can be analyzed statistically to develop a failure probability distribution function (PDF), a statistical model of the probability of failure versus time. Driven by cost and schedule constraints, however, spacecraft components are generally not tested in large numbers. Uncertainties in failure rate and remaining-life estimates increase when fewer units are tested. To account for this, spacecraft operators prefer to limit useful operations to a period shorter than the maximum demonstrated service life of the weakest component. Running each component to failure to determine the maximum possible service life of a system can become overly expensive and impractical. Spacecraft operators therefore specify the required service life and an acceptable factor of safety (FOS). The designers use these requirements to limit the life test duration.
    Midway through the design life, when benefits justify additional investment, a supplementary life test may be performed to demonstrate the capability to safely extend the service life of the system. An innovative approach is required to evaluate the entire system without going through an elaborate test program of propulsion system elements. Evaluating every component through a brute-force test program would be a cost-prohibitive and time-consuming endeavor. ISS propulsion system components were designed and built decades ago, and there are no representative ground test articles for some of the components; a 'test everything' approach would require manufacturing new test articles. The paper outlines some of the techniques used for selective testing, cherry-picking candidate components based on failure mode and effects analysis, system-level impacts, hazard analysis, and so on. The type of testing required for extending the service life depends on the design and criticality of the component, the failure modes and failure mechanisms, the life cycle margin provided by the original certification, and the operational and environmental stresses encountered. When the specific failure mechanism being considered, and the underlying relationship of that mode to the stresses applied in the test, can be correlated by supporting analysis, the time and effort required for life extension testing can be significantly reduced. Exposure to corrosive propellants over long periods of time, for instance, leads to specific failure mechanisms in several components used in the propulsion system. Using the Arrhenius model, which is tied to chemically dependent failure mechanisms such as corrosion or chemical reactions, it is possible to subject carefully selected test articles to accelerated life testing. The Arrhenius model reflects the proportional relationship between a component's time to failure and the exponential of the inverse of the absolute temperature acting on it. The acceleration factor is used to perform tests at higher stresses, allowing direct correlation between times to failure at a high test temperature and those expected at the temperatures of actual use. As long as the temperatures are such that new failure mechanisms are not introduced, this becomes a very useful method for testing to failure a relatively small sample of items in a much shorter amount of time. In this article, based on the example of the propulsion system of the first ISS module, Zarya, theoretical approaches and practical activities for extending the service life of the propulsion system are reviewed, with the goal of determining the maximum duration of its safe operation.
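
    A minimal sketch of the Arrhenius acceleration-factor arithmetic described above; the activation energy and temperatures are illustrative assumptions, not values from the ISS life extension program.

    ```python
    import math

    K_BOLTZMANN = 8.617e-5  # Boltzmann constant, eV/K

    def acceleration_factor(ea_ev, t_use_k, t_test_k):
        """AF = exp[(Ea/k) * (1/T_use - 1/T_test)]: each hour at the test
        temperature counts as AF hours at the use temperature."""
        return math.exp((ea_ev / K_BOLTZMANN) * (1.0 / t_use_k - 1.0 / t_test_k))

    # Assumed: Ea = 0.7 eV, use at 300 K, accelerated test at 340 K.
    af = acceleration_factor(ea_ev=0.7, t_use_k=300.0, t_test_k=340.0)
    print(f"AF = {af:.1f}; one test-year demonstrates ~{af:.0f} service-years")
    ```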

  20. Nurses' strategies to address self-care aspects related to medication adherence and symptom recognition in heart failure patients: an in-depth look.

    PubMed

    Jaarsma, Tiny; Nikolova-Simons, Mariana; van der Wal, Martje H L

    2012-01-01

    Despite an increasing body of knowledge on self-care in heart failure patients, the need for effective interventions remains. We sought to deepen the understanding of interventions that heart failure nurses use in clinical practice to improve patient adherence to medication and symptom monitoring. A qualitative study with a directed content analysis was performed, using data from a selected sample of Dutch-speaking heart failure nurses who completed booklets with two vignettes involving medication adherence and symptom recognition. Nurses regularly assess and reassess patients before they decide on an intervention. They evaluate basic/factual information and barriers in a patient's behavior, and try to find room for improvement in a patient's behavior. Interventions that heart failure nurses use to improve adherence to medication and symptom monitoring were grouped into the themes of increasing knowledge, increasing motivation, and providing patients with practical tools. Nurses also described using technology-based tools, increased social support, alternative communication, partnership approaches, and coordination of care to improve adherence to medications and symptom monitoring. Despite a strong focus on educational strategies, nurses also reported other strategies to increase patient adherence. Nurses use several strategies to improve patient adherence that are not incorporated into guidelines. These interventions need to be evaluated for further applications in improving heart failure management. Copyright © 2012 Elsevier Inc. All rights reserved.
