Sample records for failure process analysis

  1. Practical, transparent prospective risk analysis for the clinical laboratory.

    PubMed

    Janssens, Pim Mw

    2014-11-01

    Prospective risk analysis (PRA) is an essential element in quality assurance for clinical laboratories. Practical approaches to conducting PRA in laboratories, however, are scarce. On the basis of the classical Failure Mode and Effect Analysis method, an approach to PRA was developed for application to key laboratory processes. First, the separate, major steps of the process under investigation are identified. Scores are then given for the Probability (P) and Consequence (C) of predefined types of failures and the chances of Detecting (D) these failures. Based on the P and C scores (on a 10-point scale), an overall Risk score (R) is calculated. The scores for each process were recorded in a matrix table. Based on predetermined criteria for R and D, it was determined whether a more detailed analysis was required for potential failures and, ultimately, where risk-reducing measures were necessary, if any. As an illustration, this paper presents the results of the application of PRA to our pre-analytical and analytical activities. The highest R scores were obtained in the stat processes, the most common failure type in the collective process steps was 'delayed processing or analysis', the failure type with the highest mean R score was 'inappropriate analysis' and the failure type most frequently rated as suboptimal was 'identification error'. The PRA designed is a useful semi-objective tool to identify process steps with potential failures rated as risky. Its systematic design and convenient output in matrix tables makes it easy to perform, practical and transparent.
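The scoring scheme described above can be sketched as follows. The abstract does not give the exact formula for R, so the common FMEA-style product R = P × C is assumed here, and the thresholds and (P, C, D) entries are illustrative placeholders, not the paper's values:

```python
# Sketch of the PRA scoring matrix described above.
# Assumption: R is the product of P and C (the abstract does not state the
# formula); R_CRIT and D_CRIT are hypothetical decision thresholds.

R_CRIT = 25   # hypothetical: R at or above this triggers detailed analysis
D_CRIT = 4    # hypothetical: D at or below this means failure is hard to detect

def risk_score(p, c):
    """Overall Risk score from Probability and Consequence (1-10 each)."""
    return p * c

# One row per (process step, failure type): (P, C, D) scores, illustrative.
matrix = {
    ("stat analysis", "delayed processing or analysis"): (6, 7, 5),
    ("sample reception", "identification error"):        (3, 9, 2),
}

for (step, failure), (p, c, d) in matrix.items():
    r = risk_score(p, c)
    flagged = r >= R_CRIT or d <= D_CRIT
    print(f"{step:16s} | {failure:30s} | R={r:3d} D={d}"
          f"{' -> detailed analysis' if flagged else ''}")
```

The matrix-table output is what makes the approach transparent: each row carries its own scores and its own flag.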

  2. How to apply clinical cases and medical literature in the framework of a modified "failure mode and effects analysis" as a clinical reasoning tool--an illustration using the human biliary system.

    PubMed

    Wong, Kam Cheong

    2016-04-06

    Clinicians use various clinical reasoning tools such as the Ishikawa diagram to enhance their clinical experience and reasoning skills. Failure mode and effects analysis, which is an engineering methodology in origin, can be modified and applied to provide inputs into an Ishikawa diagram. The human biliary system is used to illustrate a modified failure mode and effects analysis. The anatomical and physiological processes of the biliary system are reviewed. Failure is defined as an abnormality caused by infective, inflammatory, obstructive, malignant, autoimmune and other pathological processes. The potential failures, their effect(s), main clinical features, and the investigations that can help a clinician diagnose at each anatomical part and physiological process are reviewed and documented in a modified failure mode and effects analysis table. Relevant medical and surgical cases are retrieved from the medical literature and woven into the table. A total of 80 clinical cases relevant to the modified failure mode and effects analysis for the human biliary system have been reviewed and woven into a designated table. The table is the backbone and framework for further expansion. Reviewing and updating the table is an iterative and continual process. The relevant clinical features in the modified failure mode and effects analysis are then extracted and included in the relevant Ishikawa diagram. This article illustrates an application of engineering methodology in medicine, and it sows the seeds of potential cross-pollination between engineering and medicine. Establishing a modified failure mode and effects analysis can be a teamwork project, a self-directed learning process, or a mix of both. A modified failure mode and effects analysis can be deployed to obtain inputs for an Ishikawa diagram, which in turn can be used to enhance clinical experience and clinical reasoning skills for clinicians, medical educators, and students.

  3. TU-AB-BRD-02: Failure Modes and Effects Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huq, M.

    2015-06-15

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are often caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100 that has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how these tools can be used in a given radiotherapy clinic to develop a risk based quality management program.
Learning Objectives: (1) Learn how to design a process map for a radiotherapy process; (2) learn how to perform failure modes and effects analysis for a given process; (3) learn what fault trees are all about; (4) learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis and fault tree analysis. Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; Consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.
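The risk-ranking step at the heart of the FMEA described above can be sketched in a few lines. Scoring each failure mode for Occurrence, Severity and Detectability and ranking by their product (the risk priority number) follows the general FMEA convention; the failure modes and scores below are illustrative, not TG-100's actual tables:

```python
# Minimal FMEA ranking sketch: each failure mode gets Occurrence (O),
# Severity (S) and Detectability (D) scores on 1-10 scales and is
# ranked by the risk priority number RPN = O x S x D.
# The entries are hypothetical examples, not values from TG-100.

failure_modes = [
    {"mode": "wrong patient plan loaded",  "O": 2, "S": 9, "D": 4},
    {"mode": "incorrect MLC calibration",  "O": 4, "S": 7, "D": 6},
    {"mode": "transcription error in Rx",  "O": 5, "S": 8, "D": 3},
]

for fm in failure_modes:
    fm["RPN"] = fm["O"] * fm["S"] * fm["D"]

# Highest RPN first: these are the pathways that get controls designed first.
for fm in sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True):
    print(f'{fm["mode"]:26s} RPN={fm["RPN"]}')
```

Ranking before implementation, as the session description emphasizes, lets the clinic spend its control effort on the highest-risk pathways rather than uniformly.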

  4. Failure Modes and Effects Analysis (FMEA): A Bibliography

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Failure modes and effects analysis (FMEA) is a bottom-up analytical process that identifies process hazards, helping managers understand the vulnerabilities of systems as well as assess and mitigate risk. It is one of several engineering tools and techniques available to program and project managers aimed at increasing the likelihood of safe and successful NASA programs and missions. This bibliography references 465 documents in the NASA STI Database that contain the major concepts, failure modes or failure analysis, in either the basic index or the major subject terms.

  5. Stingray Failure Mode, Effects and Criticality Analysis: WEC Risk Registers

    DOE Data Explorer

    Ken Rhinefrank

    2016-07-25

    An analysis method to systematically identify all potential failure modes and their effects on the Stingray WEC system. This analysis is incorporated early in the development cycle so that mitigation of the identified failure modes can be achieved cost-effectively and efficiently. The FMECA can begin once there is enough detail to define the functions and failure modes of a given system and its interfaces with other systems. The FMECA occurs coincidently with the design process and is an iterative process which allows for design changes to overcome deficiencies identified in the analysis. Risk Registers for the major subsystems were completed according to the methodology described in the "Failure Mode Effects and Criticality Analysis Risk Reduction Program Plan.pdf" document below, in compliance with the DOE Risk Management Framework developed by NREL.

  6. Application of multi attribute failure mode analysis of milk production using analytical hierarchy process method

    NASA Astrophysics Data System (ADS)

    Rucitra, A. L.

    2018-03-01

    Pusat Koperasi Induk Susu (PKIS) Sekar Tanjung, East Java, is one of the modern dairy industries producing Ultra High Temperature (UHT) milk. A problem that often occurs in the production process at PKIS Sekar Tanjung is a mismatch between the production process and the predetermined standard. The purpose of applying the Analytical Hierarchy Process (AHP) was to identify the most likely cause of failure in the milk production process. The Multi Attribute Failure Mode Analysis (MAFMA) method was used to eliminate or reduce the possibility of failure as viewed from the failure causes. This method integrates the severity, occurrence, detection, and expected cost criteria obtained from an in-depth interview with the head of the production department as an expert. The AHP approach was used to formulate the priority ranking of the causes of failure in the milk production process. At level 1, severity had the highest weight, 0.41 (41%), compared with the other criteria. At level 2, which identifies failures in the UHT milk production process, the most likely cause was an average mixing temperature of more than 70 °C, higher than the standard temperature (≤70 °C). This failure cause contributed a weight of 0.47 (47%) among all criteria. The study therefore suggested that the company control the mixing temperature to minimise or eliminate this failure in the process.
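The AHP weighting step described above can be sketched as follows: a pairwise comparison matrix over the four criteria is reduced to a priority vector. The geometric-mean approximation of the principal eigenvector is used here, and the matrix entries are illustrative stand-ins, not the study's actual expert judgments:

```python
# Sketch of AHP criteria weighting. The pairwise comparison matrix a
# is hypothetical; a[i][j] says how much more important criterion i
# is than criterion j (Saaty's 1-9 scale). The priority vector is
# approximated by normalized row geometric means.
import math

criteria = ["severity", "occurrence", "detection", "expected cost"]

a = [
    [1,   2,   3,   2  ],
    [1/2, 1,   2,   1  ],
    [1/3, 1/2, 1,   1/2],
    [1/2, 1,   2,   1  ],
]

geo = [math.prod(row) ** (1 / len(row)) for row in a]
total = sum(geo)
weights = [g / total for g in geo]

for name, w in zip(criteria, weights):
    print(f"{name:14s} {w:.2f}")
```

With these illustrative judgments, severity comes out with the largest weight, mirroring the level-1 result reported in the abstract.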

  7. Verification and Validation Process for Progressive Damage and Failure Analysis Methods in the NASA Advanced Composites Consortium

    NASA Technical Reports Server (NTRS)

    Wanthal, Steven; Schaefer, Joseph; Justusson, Brian; Hyder, Imran; Engelstad, Stephen; Rose, Cheryl

    2017-01-01

    The Advanced Composites Consortium is a US Government/Industry partnership supporting technologies to enable timeline and cost reduction in the development of certified composite aerospace structures. A key component of the consortium's approach is the development and validation of improved progressive damage and failure analysis methods for composite structures. These methods will enable increased use of simulations in design trade studies and detailed design development, and thereby enable more targeted physical test programs to validate designs. To accomplish this goal with confidence, a rigorous verification and validation process was developed. The process was used to evaluate analysis methods and associated implementation requirements to ensure calculation accuracy and to gage predictability for composite failure modes of interest. This paper introduces the verification and validation process developed by the consortium during the Phase I effort of the Advanced Composites Project. Specific structural failure modes of interest are first identified, and a subset of standard composite test articles are proposed to interrogate a progressive damage analysis method's ability to predict each failure mode of interest. Test articles are designed to capture the underlying composite material constitutive response as well as the interaction of failure modes representing typical failure patterns observed in aerospace structures.

  8. [Failure modes and effects analysis in the prescription, validation and dispensing process].

    PubMed

    Delgado Silveira, E; Alvarez Díaz, A; Pérez Menéndez-Conde, C; Serna Pérez, J; Rodríguez Sagrado, M A; Bermejo Vicedo, T

    2012-01-01

    To apply a failure modes and effects analysis to the prescription, validation and dispensing process for hospitalised patients. A work group analysed all of the stages included in the process from prescription to dispensing, identifying the most critical errors and establishing potential failure modes which could produce a mistake. The possible causes, their potential effects, and the existing control systems were analysed to try to stop them from developing. The Hazard Score was calculated, and failure modes scoring ≥ 8 were chosen; those with a Severity Index of 4 were selected independently of the Hazard Score value. Corrective measures and an implementation plan were proposed. A flow diagram describing the whole process was obtained. A risk analysis was conducted of the chosen critical points, indicating: failure mode, cause, effect, severity, probability, Hazard Score, suggested preventative measure and the strategy to achieve it. The failure modes chosen were: prescription on the nurse's form, progress or treatment order (paper); prescription to incorrect patient; transcription error by nursing staff and pharmacist; and error preparing the trolley. By applying a failure modes and effects analysis to the prescription, validation and dispensing process, we were able to identify critical aspects, the stages in which errors may occur, and their causes. It allowed us to analyse the effects on the safety of the process and establish measures to prevent or reduce them.
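The selection rule described above (Hazard Score ≥ 8, or maximal severity regardless of score) can be sketched directly. The abstract does not define the Hazard Score formula, so the usual HFMEA convention (severity × probability on 1-4 scales) is assumed, and the scored entries are illustrative:

```python
# Sketch of the failure-mode selection rule described above.
# Assumption: Hazard Score = severity x probability on 1-4 scales
# (the HFMEA convention); the scores below are illustrative.

HAZARD_THRESHOLD = 8
CRITICAL_SEVERITY = 4

failure_modes = [
    {"mode": "prescription to incorrect patient", "severity": 4, "probability": 1},
    {"mode": "transcription error by nurse",      "severity": 3, "probability": 3},
    {"mode": "error preparing the trolley",       "severity": 2, "probability": 2},
]

selected = []
for fm in failure_modes:
    fm["hazard_score"] = fm["severity"] * fm["probability"]
    # Keep if the Hazard Score crosses the threshold, or if severity is
    # maximal regardless of the score (rare but catastrophic failures).
    if fm["hazard_score"] >= HAZARD_THRESHOLD or fm["severity"] == CRITICAL_SEVERITY:
        selected.append(fm["mode"])

print(selected)
```

The severity override is the important design choice: a wrong-patient prescription is kept for corrective action even when its low probability drags the Hazard Score under the threshold.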

  9. Quantitative method of medication system interface evaluation.

    PubMed

    Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F

    2007-01-01

    The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of estimated failure rates provided quantitative data for fault analysis. Authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.

  10. TU-AB-BRD-03: Fault Tree Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunscombe, P.

    2015-06-15

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are often caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100 that has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how these tools can be used in a given radiotherapy clinic to develop a risk based quality management program.
Learning Objectives: (1) Learn how to design a process map for a radiotherapy process; (2) learn how to perform failure modes and effects analysis for a given process; (3) learn what fault trees are all about; (4) learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis and fault tree analysis. Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; Consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.

  11. Study of the Rock Mass Failure Process and Mechanisms During the Transformation from Open-Pit to Underground Mining Based on Microseismic Monitoring

    NASA Astrophysics Data System (ADS)

    Zhao, Yong; Yang, Tianhong; Bohnhoff, Marco; Zhang, Penghai; Yu, Qinglei; Zhou, Jingren; Liu, Feiyue

    2018-05-01

    To quantitatively understand the failure process and failure mechanism of a rock mass during the transformation from open-pit mining to underground mining, the Shirengou Iron Mine was selected as an engineering case study. The study area was determined using the rock mass basic quality classification method and the kinematic analysis method. Based on the analysis of the variations in apparent stress and apparent volume over time, the rock mass failure process was analyzed. Based on the temporal and spatial changes of microseismic events in location, energy, apparent stress, and displacement, the migration characteristics of rock mass damage were studied. A hybrid moment tensor inversion method was used to determine the rock mass fracture source mechanisms, the fracture orientations, and the fracture scales. The fracture area can be divided into three zones: Zone A, Zone B, and Zone C. A statistical analysis of the fracture plane orientations was carried out, and four dominant fracture planes were obtained. Finally, the slip tendency analysis method was employed, and the unstable fracture planes were identified. The results show that: (1) microseismic monitoring and hybrid moment tensor analysis can effectively characterize the failure process and failure mechanism of a rock mass; (2) during the transformation from open-pit to underground mining, the failure type of the rock mass is mainly shear failure, with tensile failure mostly concentrated in the roofs of goafs; and (3) the rock mass at the pit bottom and in the upper part of goaf No. 18 may undergo further damage.

  12. Analysis and Test Correlation of Proof of Concept Box for Blended Wing Body-Low Speed Vehicle

    NASA Technical Reports Server (NTRS)

    Spellman, Regina L.

    2003-01-01

    The Low Speed Vehicle (LSV) is a 14.2% scale remotely piloted vehicle of the revolutionary Blended Wing Body concept. The design of the LSV includes an all-composite airframe. Due to internal manufacturing capability restrictions, room temperature layups were necessary. An extensive materials testing and manufacturing process development effort was undertaken to establish a process that would achieve the high modulus/low weight properties required to meet the design requirements. The analysis process involved a loads development effort that incorporated aero loads to determine internal forces that could be applied to a traditional FEM of the vehicle and to conduct detailed component analyses. A new tool, Hypersizer, was added to the design process to address various composite failure modes and to optimize the skin panel thickness of the upper and lower skins for the vehicle. The analysis required an iterative approach as material properties were continually changing. As a part of the material characterization effort, test articles, including a proof of concept wing box and a full-scale wing, were fabricated. The proof of concept box was fabricated based on very preliminary material studies and tested in bending, torsion, and shear. The box was then tested to failure under shear. The proof of concept box was also analyzed using Nastran and Hypersizer. The results of both analyses were scaled to determine the predicted failure load. The test results were compared to both the Nastran and Hypersizer analytical predictions. The actual failure occurred at 899 lbs. The failure was predicted at 1167 lbs based on the Nastran analysis. The Hypersizer analysis predicted a lower failure load of 960 lbs. The Nastran analysis alone was not sufficient to predict the failure load because it does not identify local composite failure modes. This analysis has traditionally been done using closed form solutions.
Although Hypersizer is typically used as an optimizer for the design process, the failure prediction was used to help gain acceptance and confidence in this new tool. The correlated models and process were to be used to analyze the full BWB-LSV airframe design. The analysis and correlation with test results of the proof of concept box is presented here, including the comparison of the Nastran and Hypersizer results.
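The test-analysis correlation quoted above reduces to simple relative errors of each prediction against the measured failure load:

```python
# Relative prediction error of each analysis against the measured
# failure load of the proof of concept box (values from the abstract).

actual = 899                                   # measured failure load, lbs
predictions = {"Nastran": 1167, "Hypersizer": 960}

for model, p in predictions.items():
    err = (p - actual) / actual
    print(f"{model:10s} predicted {p} lbs, error {err:+.1%}")
```

Nastran overpredicts by roughly 30% while Hypersizer is within about 7%, consistent with the observation that Nastran alone misses the local composite failure modes.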

  13. Modeling Finite-Time Failure Probabilities in Risk Analysis Applications.

    PubMed

    Dimitrova, Dimitrina S; Kaishev, Vladimir K; Zhao, Shouqi

    2015-10-01

    In this article, we introduce a framework for analyzing the risk of systems failure based on estimating the failure probability. The latter is defined as the probability that a certain risk process, characterizing the operations of a system, reaches a possibly time-dependent critical risk level within a finite-time interval. Under general assumptions, we define two dually connected models for the risk process and derive explicit expressions for the failure probability and also the joint probability of the time of the occurrence of failure and the excess of the risk process over the risk level. We illustrate how these probabilistic models and results can be successfully applied in several important areas of risk analysis, among which are systems reliability, inventory management, flood control via dam management, infectious disease spread, and financial insolvency. Numerical illustrations are also presented.
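The failure probability defined above (the chance that a risk process crosses a possibly time-dependent critical level within a finite horizon) can be estimated by Monte Carlo even when no closed form is available. The random walk and linear risk level below are illustrative stand-ins, not the paper's specific models:

```python
# Monte Carlo sketch of a finite-time failure probability: the
# probability that a risk process reaches a time-dependent critical
# level within a finite horizon. The Gaussian random walk and the
# linearly growing level are hypothetical illustrations.
import random

def failure_probability(n_paths=20000, horizon=50, seed=1):
    random.seed(seed)
    failures = 0
    for _ in range(n_paths):
        x = 0.0
        for t in range(1, horizon + 1):
            x += random.gauss(0.1, 1.0)   # one step of the risk process
            level = 5.0 + 0.05 * t        # time-dependent critical level
            if x >= level:                # failure: level crossed in time
                failures += 1
                break
    return failures / n_paths

print(f"estimated failure probability: {failure_probability():.3f}")
```

The same skeleton covers the application areas listed in the abstract by swapping in the relevant process (reservoir inflow, inventory depletion, surplus process) and level.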

  14. Risk management of key issues of FPSO

    NASA Astrophysics Data System (ADS)

    Sun, Liping; Sun, Hai

    2012-12-01

    Risk analysis of key systems has become a growing topic of late because of the development of offshore structures. Equipment failures of the offloading system and fire accidents were analyzed based on the features of the floating production, storage and offloading (FPSO) unit. Fault tree analysis (FTA) and failure modes and effects analysis (FMEA) methods were examined based on information already researched in modules of Relex Reliability Studio (RRS). Given the shortage of failure cases and statistical data, equipment failures were also analyzed qualitatively by establishing a fault tree and a Boolean structure function, and risk control measures were examined. Failure modes of fire accidents were classified according to the different areas of fire occurrence during the FMEA process, using risk priority number (RPN) methods to evaluate their severity rank. The qualitative FTA gave basic insight into the formation of the failure modes of FPSO offloading, and the fire FMEA gave the priorities and suggested processes. The research has practical importance for the security analysis problems of FPSOs.
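A Boolean structure function like the one built for the offloading system above maps component states to the top-event state through AND/OR gates. The tree below is hypothetical (it is not the paper's actual fault tree): the top event occurs if the pump fails OR both redundant valves fail:

```python
# Sketch of a fault-tree Boolean structure function. The tree is a
# hypothetical illustration: an AND gate (redundant valves) feeding
# an OR gate (either the pump or the valve pair brings the system down).

def top_event(pump_fail, valve_a_fail, valve_b_fail):
    """Boolean structure function: OR(pump, AND(valve_a, valve_b))."""
    return pump_fail or (valve_a_fail and valve_b_fail)

# Minimal cut sets of this tree: {pump} and {valve_a, valve_b}.
print(top_event(True, False, False))   # pump alone fails the system
print(top_event(False, True, False))   # one valve alone does not
print(top_event(False, True, True))    # both valves together do
```

Enumerating the minimal cut sets of the structure function is exactly the qualitative analysis the abstract describes when statistical failure data are too scarce for a quantitative evaluation.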

  15. Use of Failure Mode and Effects Analysis to Improve Emergency Department Handoff Processes.

    PubMed

    Sorrentino, Patricia

    2016-01-01

    The purpose of this article is to describe a quality improvement process using failure mode and effects analysis (FMEA) to evaluate systems handoff communication processes, improve emergency department (ED) throughput and reduce crowding through development of a standardized handoff, and, ultimately, improve patient safety. Risk of patient harm through ineffective communication during handoff transitions is a major reason for breakdown of systems. Complexities of ED processes put patient safety at risk. An increased incidence of submitted patient safety event reports for handoff communication failures between the ED and inpatient units solidified a decision to implement the use of FMEA to identify handoff failures and mitigate patient harm through redesign. The clinical nurse specialist implemented an FMEA. Handoff failure themes were created from deidentified retrospective reviews. Weekly meetings were held over a 3-month period to identify failure modes and determine cause and effect on the process. A functional block diagram process map tool was used to illustrate handoff processes. An FMEA grid was used to list failure modes and assign a risk priority number to quantify results. Multiple areas with actionable failures were identified. A majority of causes for high-priority failure modes were specific to communications. Findings demonstrate the complexity of transition and handoff processes. The FMEA served to identify and evaluate the risk of handoff failures and provide a framework for process improvement. A focus on mentoring nurses in quality handoff processes so that they become habitual practice is crucial to safe patient transitions. Standardizing content and hardwiring it within the system are best practice. The clinical nurse specialist is prepared to provide strong leadership to drive and implement system-wide quality projects.

  16. Tribology symposium 1995. PD-Volume 72

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masudi, H.

    After the keynote presentation by Professor Aaron Cohen of Texas A&M University, entitled Processes Used in Design, the program is divided into five major sessions: Research and Development -- recent research and development of tribological components; Tribology in Manufacturing -- the impact of tribology on modern manufacturing; Design/Design Representation -- aspects of design related to tribological systems; Tribo-Chemistry/Tribo-Physics -- discussion of chemical and physical behavior of substances as related to tribology; and Failure Analysis -- an analysis of failure, failure detection, and failure monitoring as related to manufacturing processes. Papers have been processed separately for inclusion in the database.

  17. Use of a systematic risk analysis method to improve safety in the production of paediatric parenteral nutrition solutions

    PubMed Central

    Bonnabry, P; Cingria, L; Sadeghipour, F; Ing, H; Fonzo-Christe, C; Pfister, R

    2005-01-01

    Background: Until recently, the preparation of paediatric parenteral nutrition formulations in our institution included re-transcription and manual compounding of the mixture. Although no significant clinical problems have occurred, re-engineering of this high risk activity was undertaken to improve its safety. Several changes have been implemented including new prescription software, direct recording on a server, automatic printing of the labels, and creation of a file used to pilot a BAXA MM 12 automatic compounder. The objectives of this study were to compare the risks associated with the old and new processes, to quantify the improved safety with the new process, and to identify the major residual risks. Methods: A failure modes, effects, and criticality analysis (FMECA) was performed by a multidisciplinary team. A cause-effect diagram was built, the failure modes were defined, and the criticality index (CI) was determined for each of them on the basis of the likelihood of occurrence, the severity of the potential effect, and the detection probability. The CIs for each failure mode were compared for the old and new processes and the risk reduction was quantified. Results: The sum of the CIs of all 18 identified failure modes was 3415 for the old process and 1397 for the new (reduction of 59%). The new process reduced the CIs of the different failure modes by a mean factor of 7. The CI was smaller with the new process for 15 failure modes, unchanged for two, and slightly increased for one. The greatest reduction (by a factor of 36) concerned re-transcription errors, followed by readability problems (by a factor of 30) and chemical cross contamination (by a factor of 10). The most critical steps in the new process were labelling mistakes (CI 315, maximum 810), failure to detect a dosage or product mistake (CI 288), failure to detect a typing error during the prescription (CI 175), and microbial contamination (CI 126). 
Conclusions: Modification of the process resulted in a significant risk reduction as shown by risk analysis. Residual failure opportunities were also quantified, allowing additional actions to be taken to reduce the risk of labelling mistakes. This study illustrates the usefulness of prospective risk analysis methods in healthcare processes. More systematic use of risk analysis is needed to guide continuous safety improvement of high risk activities. PMID:15805453

  18. Use of a systematic risk analysis method to improve safety in the production of paediatric parenteral nutrition solutions.

    PubMed

    Bonnabry, P; Cingria, L; Sadeghipour, F; Ing, H; Fonzo-Christe, C; Pfister, R E

    2005-04-01

    Until recently, the preparation of paediatric parenteral nutrition formulations in our institution included re-transcription and manual compounding of the mixture. Although no significant clinical problems have occurred, re-engineering of this high risk activity was undertaken to improve its safety. Several changes have been implemented including new prescription software, direct recording on a server, automatic printing of the labels, and creation of a file used to pilot a BAXA MM 12 automatic compounder. The objectives of this study were to compare the risks associated with the old and new processes, to quantify the improved safety with the new process, and to identify the major residual risks. A failure modes, effects, and criticality analysis (FMECA) was performed by a multidisciplinary team. A cause-effect diagram was built, the failure modes were defined, and the criticality index (CI) was determined for each of them on the basis of the likelihood of occurrence, the severity of the potential effect, and the detection probability. The CIs for each failure mode were compared for the old and new processes and the risk reduction was quantified. The sum of the CIs of all 18 identified failure modes was 3415 for the old process and 1397 for the new (reduction of 59%). The new process reduced the CIs of the different failure modes by a mean factor of 7. The CI was smaller with the new process for 15 failure modes, unchanged for two, and slightly increased for one. The greatest reduction (by a factor of 36) concerned re-transcription errors, followed by readability problems (by a factor of 30) and chemical cross contamination (by a factor of 10). The most critical steps in the new process were labelling mistakes (CI 315, maximum 810), failure to detect a dosage or product mistake (CI 288), failure to detect a typing error during the prescription (CI 175), and microbial contamination (CI 126). 
Modification of the process resulted in a significant risk reduction as shown by risk analysis. Residual failure opportunities were also quantified, allowing additional actions to be taken to reduce the risk of labelling mistakes. This study illustrates the usefulness of prospective risk analysis methods in healthcare processes. More systematic use of risk analysis is needed to guide continuous safety improvement of high risk activities.
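The risk quantification above can be reduced to simple arithmetic over criticality indices. A minimal sketch, assuming hypothetical per-failure-mode scores; only the reported totals (3415 for the old process, 1397 for the new) come from the study:

```python
# Sketch of the FMECA comparison described above: the criticality index
# (CI) of each failure mode is the product of likelihood of occurrence,
# severity, and detection probability scores, and overall risk reduction
# is computed from the summed CIs of the old and new processes.

def criticality_index(occurrence: int, severity: int, detection: int) -> int:
    """CI as the product of likelihood, severity, and detection scores."""
    return occurrence * severity * detection

def risk_reduction(ci_old_total: float, ci_new_total: float) -> float:
    """Fractional reduction in summed criticality between two processes."""
    return 1.0 - ci_new_total / ci_old_total

# Totals reported for the 18 identified failure modes
reduction = risk_reduction(3415, 1397)
print(f"{reduction:.0%}")  # → 59%
```

The same ratio taken per failure mode (old CI divided by new CI) yields the per-mode reduction factors the study reports, such as the factor of 36 for re-transcription errors.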

  19. TU-AB-BRD-01: Process Mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palta, J.

    2015-06-15

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are caused by flaws in clinical processes rather than by device failures. This suggests the need for a quality management program based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do occur, that they are more likely to be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100, which has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how they can be used in a given radiotherapy clinic to develop a risk-based quality management program. 
Learning Objectives: (1) Learn how to design a process map for a radiotherapy process; (2) learn how to perform failure modes and effects analysis for a given process; (3) learn what fault trees are; (4) learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Disclosures: Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.

  20. Multi-institutional application of Failure Mode and Effects Analysis (FMEA) to CyberKnife Stereotactic Body Radiation Therapy (SBRT).

    PubMed

    Veronese, Ivan; De Martin, Elena; Martinotti, Anna Stefania; Fumagalli, Maria Luisa; Vite, Cristina; Redaelli, Irene; Malatesta, Tiziana; Mancosu, Pietro; Beltramo, Giancarlo; Fariselli, Laura; Cantone, Marie Claire

    2015-06-13

    A multidisciplinary and multi-institutional working group applied the Failure Mode and Effects Analysis (FMEA) approach to assess the risks to patients undergoing Stereotactic Body Radiation Therapy (SBRT) treatments for lesions located in the spine and liver at two CyberKnife® Centres. The various sub-processes characterising the SBRT treatment were identified to generate process trees for both the treatment planning and delivery phases. This analysis led to the identification and subsequent scoring of the potential failure modes, together with their causes and effects, using the risk priority number (RPN) scoring system. Novel solutions aimed at increasing patient safety were considered accordingly. The process tree characterising the SBRT treatment planning stage comprised a total of 48 sub-processes. Similarly, 42 sub-processes were identified in the stage of delivery to liver tumours and 30 in the stage of delivery to spine lesions. All the sub-processes were judged to be potentially prone to one or more failure modes. Nineteen failures (i.e. 5 in the treatment planning stage, 5 in delivery to liver lesions and 9 in delivery to spine lesions) were considered of high concern in view of their high RPN and/or severity index value. The analysis of the potential failures, their causes and effects made it possible to supplement the safety strategies already adopted in clinical practice with additional measures for optimising quality management workflow and increasing patient safety.

  1. Risk analysis by FMEA as an element of analytical validation.

    PubMed

    van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Oldenhof, M T; Vredenbregt, M J; Barends, D M

    2009-12-05

    We subjected a Near-Infrared (NIR) analytical procedure used for screening drugs for authenticity to a Failure Mode and Effects Analysis (FMEA), covering technical risks as well as risks related to human failure. An FMEA team broke down the NIR analytical method into process steps and identified possible failure modes for each step. Each failure mode was ranked on estimated frequency of occurrence (O), probability that the failure would remain undetected later in the process (D) and severity (S), each on a scale of 1-10. Human errors turned out to be the most common cause of failure modes. Failure risks were calculated as Risk Priority Numbers (RPNs) = O x D x S. Failure modes with the highest RPN scores were subjected to corrective actions and the FMEA was repeated, showing reductions in RPN scores and resulting in improvement indices up to 5.0. We recommend risk analysis as an addition to the usual analytical validation, as the FMEA enabled us to detect previously unidentified risks.
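The scoring scheme in this abstract is directly computable. A minimal sketch of RPN = O x D x S and the improvement index (RPN before corrective action divided by RPN after); the failure-mode name and scores are invented for illustration:

```python
# FMEA scoring sketch: each failure mode carries occurrence (O),
# detectability (D) and severity (S) scores on a 1-10 scale;
# RPN = O * D * S, and the improvement index is the ratio of RPNs
# before and after a corrective action.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    occurrence: int  # O, 1-10
    detection: int   # D, 1-10 (10 = least likely to be detected)
    severity: int    # S, 1-10

    @property
    def rpn(self) -> int:
        return self.occurrence * self.detection * self.severity

def improvement_index(before: FailureMode, after: FailureMode) -> float:
    """Factor by which a corrective action reduced the RPN."""
    return before.rpn / after.rpn

before = FailureMode("sample mix-up", occurrence=5, detection=8, severity=5)
after = FailureMode("sample mix-up", occurrence=2, detection=4, severity=5)
print(before.rpn, after.rpn, improvement_index(before, after))  # 200 40 5.0
```

With these example scores, the corrective action yields an improvement index of 5.0, matching the maximum reported in the study.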

  2. TU-AB-BRD-00: Task Group 100

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2015-06-15


  3. TU-AB-BRD-04: Development of Quality Management Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomadsen, B.

    2015-06-15


  4. Tribology symposium -- 1994. PD-Volume 61

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masudi, H.

    This year marks the first Tribology Symposium within the Energy-Sources Technology Conference, sponsored by the ASME Petroleum Division. The program was divided into five sessions: Tribology in High Technology, a historical discussion of some watershed events in tribology; Research/Development, design, research and development on modern manufacturing; Tribology in Manufacturing, the impact of tribology on modern manufacturing; Design/Design Representation, aspects of design related to tribological systems; and Failure Analysis, an analysis of failure, failure detection, and failure monitoring as relating to manufacturing processes. Eleven papers have been processed separately for inclusion on the data base.

  5. Failure mode and effects analysis and fault tree analysis of surface image guided cranial radiosurgery.

    PubMed

    Manger, Ryan P; Paxton, Adam B; Pawlicki, Todd; Kim, Gwe-Ya

    2015-05-01

    Surface image guided, Linac-based radiosurgery (SIG-RS) is a modern approach for delivering radiosurgery that utilizes optical stereoscopic imaging to monitor the surface of the patient during treatment in lieu of using a head frame for patient immobilization. Considering the novelty of the SIG-RS approach and the severity of errors associated with delivery of large doses per fraction, a risk assessment should be conducted to identify potential hazards, determine their causes, and formulate mitigation strategies. The purpose of this work is to investigate SIG-RS using the combined application of failure modes and effects analysis (FMEA) and fault tree analysis (FTA), report on the effort required to complete the analysis, and evaluate the use of FTA in conjunction with FMEA. A multidisciplinary team was assembled to conduct the FMEA on the SIG-RS process. A process map detailing the steps of the SIG-RS was created to guide the FMEA. Failure modes were determined for each step in the SIG-RS process, and risk priority numbers (RPNs) were estimated for each failure mode to facilitate risk stratification. The failure modes were ranked by RPN, and FTA was used to determine the root factors contributing to the riskiest failure modes. Using the FTA, mitigation strategies were formulated to address the root factors and reduce the risk of the process. The RPNs were re-estimated based on the mitigation strategies to determine the margin of risk reduction. The FMEA and FTAs for the top two failure modes required an effort of 36 person-hours (30 person-hours for the FMEA and 6 person-hours for two FTAs). The SIG-RS process consisted of 13 major subprocesses and 91 steps, which amounted to 167 failure modes. Of the 91 steps, 16 were directly related to surface imaging. Twenty-five failure modes resulted in a RPN of 100 or greater. Only one of these top 25 failure modes was specific to surface imaging. 
The riskiest surface imaging failure mode had an overall RPN-rank of eighth. Mitigation strategies for the top failure mode decreased the RPN from 288 to 72. Based on the FMEA performed in this work, the use of surface imaging for monitoring intrafraction position in Linac-based stereotactic radiosurgery (SRS) did not greatly increase the risk of the Linac-based SRS process. In some cases, SIG helped to reduce the risk of Linac-based RS. The FMEA was augmented by the use of FTA since it divided the failure modes into their fundamental components, which simplified the task of developing mitigation strategies.
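The FTA step described above decomposes a high-RPN failure mode into root factors connected by logic gates. A toy sketch of how gate probabilities combine (AND gates multiply, OR gates combine as 1 minus the product of complements); the event names and probabilities are illustrative, not taken from the study:

```python
# Minimal fault tree arithmetic: basic events carry probabilities,
# an AND gate requires all inputs to occur (product of probabilities),
# an OR gate fires if any input occurs (1 - prod(1 - p)).
from math import prod

def and_gate(*probs: float) -> float:
    return prod(probs)

def or_gate(*probs: float) -> float:
    return 1.0 - prod(1.0 - p for p in probs)

# Hypothetical top event: a patient surface mismatch goes unnoticed.
# It requires BOTH a setup deviation AND a monitoring failure, where
# a monitoring failure arises from ANY of several root causes.
monitoring_failure = or_gate(0.01,   # camera occluded
                             0.02,   # wrong reference surface loaded
                             0.005)  # tolerance threshold set too loose
top_event = and_gate(0.05, monitoring_failure)  # setup deviation AND monitoring failure
print(round(top_event, 5))
```

Reading the tree this way makes mitigation planning concrete: lowering any single root-cause probability under an OR gate, or adding an independent input to an AND gate, measurably lowers the top-event probability.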

  6. Application of failure mode and effect analysis in a radiology department.

    PubMed

    Thornton, Eavan; Brook, Olga R; Mendiratta-Lala, Mishal; Hallett, Donna T; Kruskal, Jonathan B

    2011-01-01

    With increasing deployment, complexity, and sophistication of equipment and related processes within the clinical imaging environment, system failures are more likely to occur. These failures may have varying effects on the patient, ranging from no harm to devastating harm. Failure mode and effect analysis (FMEA) is a tool that permits the proactive identification of possible failures in complex processes and provides a basis for continuous improvement. This overview of the basic principles and methodology of FMEA provides an explanation of how FMEA can be applied to clinical operations in a radiology department to reduce, predict, or prevent errors. The six sequential steps in the FMEA process are explained, and clinical magnetic resonance imaging services are used as an example for which FMEA is particularly applicable. A modified version of traditional FMEA called Healthcare Failure Mode and Effect Analysis, which was introduced by the U.S. Department of Veterans Affairs National Center for Patient Safety, is briefly reviewed. In conclusion, FMEA is an effective and reliable method to proactively examine complex processes in the radiology department. FMEA can be used to highlight the high-risk subprocesses and allows these to be targeted to minimize the future occurrence of failures, thus improving patient safety and streamlining the efficiency of the radiology department. RSNA, 2010

  7. Use of FMEA analysis to reduce risk of errors in prescribing and administering drugs in paediatric wards: a quality improvement report

    PubMed Central

    Lago, Paola; Bizzarri, Giancarlo; Scalzotto, Francesca; Parpaiola, Antonella; Amigoni, Angela; Putoto, Giovanni; Perilongo, Giorgio

    2012-01-01

    Objective Administering medication to hospitalised infants and children is a complex process at high risk of error. Failure mode and effect analysis (FMEA) is a proactive tool used to analyse risks, identify failures before they happen and prioritise remedial measures. To examine the hazards associated with the process of drug delivery to children, we performed a proactive risk-assessment analysis. Design and setting Five multidisciplinary teams, representing different divisions of the paediatric department at Padua University Hospital, were trained to analyse the drug-delivery process, to identify possible causes of failures and their potential effects, to calculate a risk priority number (RPN) for each failure and to plan changes in practices. Primary outcome To identify higher-priority potential failure modes as defined by RPNs and to plan changes in clinical practice to reduce the risk of patient harm and improve safety in the process of medication use in children. Results In all, 37 higher-priority potential failure modes and 71 associated causes and effects were identified. The highest RPNs (>48) related mainly to errors in calculating drug doses and concentrations. Many of these failure modes were found in all five units, suggesting the presence of common targets for improvement, particularly in enhancing the safety of prescription and preparation of intravenous drugs. The introduction of new activities into the revised process of administering drugs reduced the high-risk failure modes by 60%. Conclusions FMEA is an effective proactive risk-assessment tool, useful for aiding multidisciplinary groups in understanding a care process and identifying errors that may occur, prioritising remedial interventions and possibly enhancing the safety of drug delivery in children. PMID:23253870

  9. A Case Study on Improving Intensive Care Unit (ICU) Services Reliability: By Using Process Failure Mode and Effects Analysis (PFMEA)

    PubMed Central

    Yousefinezhadi, Taraneh; Jannesar Nobari, Farnaz Attar; Goodari, Faranak Behzadi; Arab, Mohammad

    2016-01-01

    Introduction: In any complex human system, human error is inevitable and cannot be eliminated by blaming wrongdoers. With the aim of improving the reliability of Intensive Care Units (ICUs) in hospitals, this research tries to identify and analyze ICU process failure modes from the standpoint of a systematic approach to errors. Methods: In this descriptive research, data was gathered qualitatively by observations, document reviews, and Focus Group Discussions (FGDs) with the process owners in two selected ICUs in Tehran in 2014. Data analysis, however, was quantitative, based on each failure's Risk Priority Number (RPN) according to the Failure Modes and Effects Analysis (FMEA) method. In addition, some causes of failures were analyzed with the qualitative Eindhoven Classification Model (ECM). Results: Through the FMEA methodology, 378 potential failure modes from 180 ICU activities in hospital A and 184 potential failure modes from 99 ICU activities in hospital B were identified and evaluated. Then, with 90% reliability (RPN ≥ 100), a total of 18 failures in hospital A and 42 in hospital B were identified as non-acceptable risks, and their causes were analyzed by ECM. Conclusions: Application of the modified PFMEA to improve the reliability of processes in two selected ICUs in two different kinds of hospitals shows that this method empowers staff to identify, evaluate, prioritize and analyze all potential failure modes, and also makes them eager to identify causes, recommend corrective actions and even participate in improving the process without feeling blamed by top management. Moreover, by combining FMEA and ECM, team members can easily identify failure causes from a healthcare perspective. PMID:27157162
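The two-stage analysis described above (quantitative RPN screening followed by qualitative cause classification) can be sketched as a filter-then-group step. The failure records and their Eindhoven-style category labels below are invented for illustration; only the RPN ≥ 100 cut-off comes from the study:

```python
# Stage 1: FMEA-style screening keeps failure modes with RPN >= 100
# (non-acceptable risks). Stage 2: their causes are grouped by an
# Eindhoven-style category (technical / organisational / human).
from collections import Counter

failures = [
    # (failure mode, O, S, D, cause category) -- hypothetical examples
    ("ventilator alarm unheard",  6, 9, 3, "human"),          # RPN 162
    ("drug dose miscalculated",   4, 8, 4, "human"),          # RPN 128
    ("monitor cable loose",       3, 6, 2, "technical"),      # RPN 36
    ("handover note missing",     5, 5, 5, "organisational"), # RPN 125
]

non_acceptable = [(name, o * s * d, cat)
                  for name, o, s, d, cat in failures
                  if o * s * d >= 100]

by_category = Counter(cat for _, _, cat in non_acceptable)
for name, rpn, cat in sorted(non_acceptable, key=lambda t: -t[1]):
    print(f"{name}: RPN={rpn} [{cat}]")
print(dict(by_category))
```

Grouping the surviving failure modes by cause category is what lets a team see, as in this study, whether the dominant remediation targets are human, technical, or organisational.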

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mossahebi, S; Feigenberg, S; Nichols, E

    Purpose: GammaPod™, the first stereotactic radiotherapy device for early-stage breast cancer treatment, has recently been installed and commissioned at our institution. A multidisciplinary working group applied the failure mode and effects analysis (FMEA) approach to perform a risk analysis. Methods: FMEA was applied to the GammaPod™ treatment process by: 1) generating process maps for each stage of treatment; 2) identifying potential failure modes and outlining their causes and effects; 3) scoring the potential failure modes using the risk priority number (RPN) system based on the product of severity, frequency of occurrence, and detectability (each ranging 1-10). An RPN higher than 150 was set as the threshold of concern. For these high-risk failure modes, potential quality assurance procedures and risk control techniques were proposed. A new set of severity, occurrence, and detectability values was then assessed in the presence of the suggested mitigation strategies. Results: In the single-day image-and-treat workflow, 19, 22, and 27 sub-processes were identified for the simulation, treatment planning, and delivery stages, respectively. During the simulation stage, 38 potential failure modes were found and scored, in terms of RPN, in the range of 9-392. In treatment planning, 34 potential failure modes were analyzed, with a score range of 16-200. For the treatment delivery stage, 47 potential failure modes were found, with an RPN score range of 16-392. The most critical failure modes were breast-cup pressure loss and incorrect target localization due to patient upper-body alignment inaccuracies. The final RPN scores of these failure modes, re-assessed with the recommended actions in place, were below 150. Conclusion: The FMEA risk analysis technique was applied to the treatment process of GammaPod™, a new stereotactic radiotherapy technology. Application of systematic risk analysis methods is projected to lead to improved quality of GammaPod™ treatments. Ying Niu and Cedric Yu are affiliated with Xcision Medical Systems.
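The re-assessment step this abstract describes (score, mitigate, re-score, and verify every residual RPN falls below the threshold) can be sketched as a small check. The before/after scores below are hypothetical; only the 150 threshold and the names of the two critical failure modes come from the abstract:

```python
# GammaPod-style FMEA re-assessment: each failure mode above the
# RPN threshold gets a mitigation, is re-scored with new (O, S, D)
# values, and the residual RPN is verified to fall below the threshold.
THRESHOLD = 150

def rpn(occurrence: int, severity: int, detection: int) -> int:
    return occurrence * severity * detection

# (failure mode, (O, S, D) before mitigation, (O, S, D) after mitigation)
reassessed = [
    ("breast-cup pressure loss",   (7, 8, 7), (3, 8, 4)),
    ("target localization error",  (6, 9, 6), (2, 9, 5)),
]

for name, before, after in reassessed:
    assert rpn(*before) > THRESHOLD, f"{name} was not high-risk to begin with"
    residual = rpn(*after)
    print(f"{name}: {rpn(*before)} -> {residual}")
    assert residual < THRESHOLD, f"{name} still above threshold after mitigation"
```

Note that mitigations typically cannot reduce severity (the harm if the failure occurs is unchanged); they act on occurrence and detectability, as the second example's unchanged S score reflects.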

  11. Failure mode and effects analysis based risk profile assessment for stereotactic radiosurgery programs at three cancer centers in Brazil.

    PubMed

    Teixeira, Flavia C; de Almeida, Carlos E; Saiful Huq, M

    2016-01-01

    The goal of this study was to evaluate the safety and quality management program for stereotactic radiosurgery (SRS) treatment processes at three radiotherapy centers in Brazil by using three industrial engineering tools: (1) process mapping, (2) failure modes and effects analysis (FMEA), and (3) fault tree analysis. Following the recommendations of Task Group 100 of the American Association of Physicists in Medicine, a process tree was created for the SRS procedure at each radiotherapy center and FMEA was then performed. Failure modes were identified for all process steps, and risk priority number (RPN) values were calculated from the O, S, and D values (RPN = O × S × D) assigned by a professional team responsible for patient care. The treatment planning subprocess presented the highest number of failure modes at all centers. The total numbers of failure modes were 135, 104, and 131 for centers I, II, and III, respectively. The highest RPN values were 204 for center I, 372 for center II, and 370 for center III; the numbers of failure modes with RPN ≥ 100 were 22, 115, and 110, respectively. Failure modes characterized by S ≥ 7 represented 68% of the failure modes for center III, 62% for center II, and 45% for center I. Failure modes with RPN values ≥ 100 and S ≥ 7, D ≥ 5, and O ≥ 5 were considered high priority in this study. The results show that the safety risk profiles for the same stereotactic radiotherapy process differ across the three radiotherapy centers in Brazil. Although the treatment process is the same, the risk priorities differ, which will lead to the implementation of different safety interventions at each center. Therefore, the current practice of applying universal device-centric QA is not adequate to address all possible failures in clinical processes at different radiotherapy centers. Integrated approaches combining device-centric QA with a process-specific quality management program tailored to each radiotherapy center are the key to a safe quality management program.
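The study's high-priority criterion combines an RPN cut-off with per-factor floors. A minimal filter implementing that rule; the example failure modes and their scores are invented for illustration:

```python
# High-priority rule from the study: RPN >= 100 AND S >= 7 AND
# D >= 5 AND O >= 5. A large RPN alone is not enough; each factor
# must also clear its own floor.
def high_priority(o: int, s: int, d: int) -> bool:
    return (o * s * d >= 100) and s >= 7 and d >= 5 and o >= 5

cases = [
    ("wrong isocenter coordinates", 5, 9, 5),  # RPN 225, all floors met
    ("plan name typo",              6, 2, 9),  # RPN 108, but S < 7
    ("collimator size mistyped",    3, 8, 5),  # RPN 120, but O < 5
]

for name, o, s, d in cases:
    print(name, o * s * d, high_priority(o, s, d))
```

The compound rule matters: a frequent but harmless failure (high O, low S) and a severe but rare one (high S, low O) can both reach RPN ≥ 100, yet only failures that are simultaneously severe, reasonably frequent, and hard to detect are flagged.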

  12. EVALUATION OF SAFETY IN A RADIATION ONCOLOGY SETTING USING FAILURE MODE AND EFFECTS ANALYSIS

    PubMed Central

    Ford, Eric C.; Gaudette, Ray; Myers, Lee; Vanderver, Bruce; Engineer, Lilly; Zellars, Richard; Song, Danny Y.; Wong, John; DeWeese, Theodore L.

    2013-01-01

    Purpose Failure mode and effects analysis (FMEA) is a widely used tool for prospectively evaluating safety and reliability. We report our experiences in applying FMEA in the setting of radiation oncology. Methods and Materials We performed an FMEA analysis for our external beam radiation therapy service, which consisted of the following tasks: (1) create a visual map of the process, (2) identify possible failure modes; assign risk probability numbers (RPN) to each failure mode based on tabulated scores for the severity, frequency of occurrence, and detectability, each on a scale of 1 to 10; and (3) identify improvements that are both feasible and effective. The RPN scores can span a range of 1 to 1000, with higher scores indicating the relative importance of a given failure mode. Results Our process map consisted of 269 different nodes. We identified 127 possible failure modes with RPN scores ranging from 2 to 160. Fifteen of the top-ranked failure modes were considered for process improvements, representing RPN scores of 75 and more. These specific improvement suggestions were incorporated into our practice with a review and implementation by each department team responsible for the process. Conclusions The FMEA technique provides a systematic method for finding vulnerabilities in a process before they result in an error. The FMEA framework can naturally incorporate further quantification and monitoring. A general-use system for incident and near miss reporting would be useful in this regard. PMID:19409731

  13. TU-FG-201-12: Designing a Risk-Based Quality Assurance Program for a Newly Implemented Y-90 Microspheres Procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vile, D; Zhang, L; Cuttino, L

    2016-06-15

    Purpose: To create a quality assurance program based upon a risk-based assessment of a newly implemented SirSpheres Y-90 procedure. Methods: A process map was created for a newly implemented SirSpheres procedure at a community hospital. The process map documented each step of this collaborative procedure, as well as the roles and responsibilities of each member. From the process map, potential failure modes were determined, along with any current controls in place. From this list, a full failure mode and effects analysis (FMEA) was performed by grading each failure mode's likelihood of occurrence, likelihood of detection, and potential severity. These numbers were then multiplied to compute the risk priority number (RPN) for each potential failure mode. Failure modes were then ranked by their RPN. Additional controls were added, with the failure modes corresponding to the highest RPNs taking priority. Results: A process map was created that succinctly outlined each step in the SirSpheres procedure as currently implemented. From this, 72 potential failure modes were identified and ranked according to their associated RPN. Quality assurance controls and safety barriers were then added, with the highest-risk failure modes addressed first. Conclusion: A quality assurance program was created from a risk-based assessment of the SirSpheres process. Process mapping and FMEA were effective in identifying potential high-risk failure modes for this new procedure, which were prioritized for new quality assurance controls. TG 100 recommends the fault tree analysis methodology to design a comprehensive and effective QC/QM program, yet we found that simply introducing additional safety barriers to address high-RPN failure modes makes the whole process simpler and safer.

  14. Use of failure mode and effects analysis for proactive identification of communication and handoff failures from organ procurement to transplantation.

    PubMed

    Steinberger, Dina M; Douglas, Stephen V; Kirschbaum, Mark S

    2009-09-01

    A multidisciplinary team from the University of Wisconsin Hospital and Clinics transplant program used failure mode and effects analysis to proactively examine opportunities for communication and handoff failures across the continuum of care from organ procurement to transplantation. The team performed a modified failure mode and effects analysis that isolated the multiple linked, serial, and complex information exchanges occurring during the transplantation of one solid organ. Failure mode and effects analysis proved effective for engaging a diverse group of invested stakeholders in analyzing and discussing opportunities to improve the system's resilience against errors during a time-pressured and complex process.

  15. Failure mode and effects analysis of witnessing protocols for ensuring traceability during IVF.

    PubMed

    Rienzi, Laura; Bariani, Fiorenza; Dalla Zorza, Michela; Romano, Stefania; Scarica, Catello; Maggiulli, Roberta; Nanni Costa, Alessandro; Ubaldi, Filippo Maria

    2015-10-01

    Traceability of cells during IVF is a fundamental aspect of treatment, and involves witnessing protocols. Failure mode and effects analysis (FMEA) is a method of identifying real or potential breakdowns in processes, and allows strategies to mitigate risks to be developed. To examine the risks associated with witnessing protocols, an FMEA was carried out in a busy IVF centre, before and after implementation of an electronic witnessing system (EWS). A multidisciplinary team was formed and moderated by human factors specialists. Possible causes of failures, and their potential effects, were identified and risk priority number (RPN) for each failure calculated. A second FMEA analysis was carried out after implementation of an EWS. The IVF team identified seven main process phases, 19 associated process steps and 32 possible failure modes. The highest RPN was 30, confirming the relatively low risk that mismatches may occur in IVF when a manual witnessing system is used. The introduction of the EWS allowed a reduction in the moderate-risk failure mode by two-thirds (highest RPN = 10). In our experience, FMEA is effective in supporting multidisciplinary IVF groups to understand the witnessing process, identifying critical steps and planning changes in practice to enable safety to be enhanced. Copyright © 2015 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  16. SU-E-T-421: Failure Mode and Effects Analysis (FMEA) of Xoft Electronic Brachytherapy for the Treatment of Superficial Skin Cancers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoisak, J; Manger, R; Dragojevic, I

    Purpose: To perform a failure mode and effects analysis (FMEA) of the process for treating superficial skin cancers with the Xoft Axxent electronic brachytherapy (eBx) system, given the recent introduction of expanded quality control (QC) initiatives at our institution. Methods: A process map was developed listing all steps in superficial treatments with Xoft eBx, from the initial patient consult to the completion of the treatment course. The process map guided the FMEA to identify the failure modes for each step in the treatment workflow and assign Risk Priority Numbers (RPN), calculated as the product of the failure mode's probability of occurrence (O), severity (S) and lack of detectability (D). FMEA was done with and without the inclusion of recent QC initiatives such as increased staffing, physics oversight, standardized source calibration, treatment planning and documentation. The failure modes with the highest RPNs were identified and contrasted before and after introduction of the QC initiatives. Results: Based on the FMEA, the failure modes with the highest RPN were related to source calibration, treatment planning, and patient setup/treatment delivery (Fig. 1). The introduction of additional physics oversight, standardized planning and safety initiatives such as checklists and time-outs reduced the RPNs of these failure modes. High-risk failure modes that could be mitigated with improved hardware and software interlocks were identified. Conclusion: The FMEA analysis identified the steps in the treatment process presenting the highest risk. The introduction of enhanced QC initiatives mitigated the risk of some of these failure modes by decreasing their probability of occurrence and increasing their detectability. This analysis demonstrates the importance of well-designed QC policies, procedures and oversight in a Xoft eBx programme for treatment of superficial skin cancers.
    Unresolved high-risk failure modes highlight the need for non-procedural quality initiatives such as improved planning software and more robust hardware interlock systems.

  17. CAPSULE REPORT: REVERSE OSMOSIS PROCESS

    EPA Science Inventory

    A failure analysis has been completed for the reverse osmosis (RO) process. The focus was on process failures that result in releases of liquids and vapors to the environment. The report includes the following: 1) A description of RO and coverage of the principles behind the proc...

  18. Preventing blood transfusion failures: FMEA, an effective assessment method.

    PubMed

    Najafpour, Zhila; Hasoumi, Mojtaba; Behzadi, Faranak; Mohamadi, Efat; Jafary, Mohamadreza; Saeedi, Morteza

    2017-06-30

    Failure Mode and Effect Analysis (FMEA) is a method used to assess the risk of failures and harms to patients during the medical process and to identify the associated clinical issues. The aim of this study was to assess the blood transfusion process in a teaching general hospital using FMEA. A structured FMEA was conducted in 2014, and corrective actions were implemented and re-evaluated after 6 months. Sixteen 2-h sessions were held to perform FMEA on the blood transfusion process, covering five steps: establishing the context, selecting team members, analyzing the processes, analyzing hazards, and developing a risk reduction protocol for blood transfusion. Failure modes with the highest risk priority numbers (RPNs) were identified. The overall RPN scores ranged from 5 to 100, among which four failure modes were associated with RPNs over 75. The data analysis indicated that the failures with the highest RPNs were: labelling (RPN: 100), transfusion of blood or the component (RPN: 100), patient identification (RPN: 80) and sampling (RPN: 75). The results demonstrated that mis-transfusion of blood or a blood component is the most important error, which can lead to serious morbidity or mortality. Training personnel on blood transfusion, raising awareness of hazards and appropriate preventative measures, and developing standard safety guidelines are essential, and these must be implemented during all steps of blood and blood component transfusion.

  19. WE-G-BRA-08: Failure Modes and Effects Analysis (FMEA) for Gamma Knife Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Bhatnagar, J; Bednarz, G

    2015-06-15

    Purpose: To perform a failure modes and effects analysis (FMEA) study for Gamma Knife (GK) radiosurgery processes at our institution based on our experience with the treatment of more than 13,000 patients. Methods: A team consisting of medical physicists, nurses, radiation oncologists, and neurosurgeons at the University of Pittsburgh Medical Center, together with an external physicist expert, was formed for the FMEA study. A process tree and a failure mode table were created for the GK procedures using the Leksell GK Perfexion and 4C units. Three scores, for the probability of occurrence (O), the severity (S), and the probability of no detection (D), were assigned to each failure mode by each professional on a scale from 1 to 10. The risk priority number (RPN) for each failure mode was then calculated (RPN = O×S×D) as the average over all data sets collected. Results: The established process tree for GK radiosurgery consists of 10 sub-processes and 53 steps, including a sub-process for frame placement and 11 steps that are directly related to the frame-based nature of GK radiosurgery. Of the 86 failure modes identified, 40 are GK-specific, caused by the potential for inappropriate use of the radiosurgery head frame, the imaging fiducial boxes, the GK helmets and plugs, and the GammaPlan treatment planning system. The other 46 failure modes are associated with the registration, imaging, image transfer, and contouring processes that are common to all radiation therapy techniques. The failure modes with the highest hazard scores are related to imperfect frame adaptor attachment, bad fiducial box assembly, overlooked target areas, inaccurate previous treatment information and excessive patient movement during MRI scan.
    Conclusion: The implementation of the FMEA approach for Gamma Knife radiosurgery enabled deeper understanding of the overall process among all professionals involved in the care of the patient and helped identify potential weaknesses in the overall process.

  20. Application of failure mode and effects analysis to intracranial stereotactic radiation surgery by linear accelerator.

    PubMed

    Masini, Laura; Donis, Laura; Loi, Gianfranco; Mones, Eleonora; Molina, Elisa; Bolchini, Cesare; Krengli, Marco

    2014-01-01

    The aim of this study was to analyze the application of failure modes and effects analysis (FMEA) to intracranial stereotactic radiation surgery (SRS) by linear accelerator, in order to identify the potential failure modes in the process tree and adopt appropriate safety measures to prevent adverse events (AEs) and near-misses, thus improving process quality. A working group was set up to perform FMEA for intracranial SRS in the framework of a quality assurance program. FMEA was performed in 4 consecutive tasks: (1) creation of a visual map of the process; (2) identification of possible failure modes; (3) assignment of a risk probability number (RPN) to each failure mode based on tabulated scores of severity, frequency of occurrence, and detectability; and (4) identification of preventive measures to minimize the risk of occurrence. The whole SRS procedure was subdivided into 73 single steps; 116 possible failure modes were identified, and a score for severity, occurrence, and detectability was assigned to each. Based on these scores, an RPN was calculated for each failure mode, yielding values from 1 to 180. In our analysis, 112/116 (96.6%) RPN values were <60, 2 (1.7%) were between 60 and 125 (63, 70), and 2 (1.7%) were >125 (135, 180). The 2 highest RPN scores were assigned to the risk of using the wrong collimator size and incorrect coordinates on the laser target localizer frame. Failure modes and effects analysis is a simple and practical proactive tool for systematic analysis of risks in radiation therapy. In our experience of SRS, FMEA led to the adoption of major changes in various steps of the SRS procedure.

  1. Proactive risk assessment of blood transfusion process, in pediatric emergency, using the Health Care Failure Mode and Effects Analysis (HFMEA).

    PubMed

    Dehnavieh, Reza; Ebrahimipour, Hossein; Molavi-Taleghani, Yasamin; Vafaee-Najar, Ali; Noori Hekmat, Somayeh; Esmailzdeh, Hamid

    2014-12-25

    Pediatric emergency has been considered a high-risk area, and blood transfusion is known as a unique clinical measure; therefore, this study was conducted to proactively assess the risk of the blood transfusion process in the Pediatric Emergency of the Qaem education-treatment center in Mashhad, using the Healthcare Failure Mode and Effects Analysis (HFMEA) methodology. This cross-sectional study analyzed the failure modes and effects of the blood transfusion process with a mixed quantitative-qualitative method. The proactive HFMEA was used to identify and analyze the potential failures of the process. The information for the items in the HFMEA forms was collected after obtaining a consensus of the experts' panel views via interviews and focus group discussion sessions. A total of 77 failure modes were identified for 24 sub-processes within 8 processes of blood transfusion. In total, 13 failure modes were identified as non-acceptable risks (a hazard score above 8) in the blood transfusion process and were transferred to the decision tree. Root causes of high-risk modes were discussed in cause-effect meetings and were classified based on the UK National Health Service (NHS) approved classification model. Action types were classified as acceptance (11.6%), control (74.2%) and elimination (14.2%). Recommendations were placed in 7 categories using TRIZ ("Theory of Inventive Problem Solving"). Re-engineering the process for the required changes, standardizing and updating the blood transfusion procedure, root cause analysis of catastrophic blood transfusion events, patient identification bracelets, training classes and educational pamphlets for raising personnel awareness, and monthly meetings of the transfusion medicine committee have all been adopted as executive strategies in the pediatric emergency work agenda.
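    The HFMEA triage step above (hazard score with a cut-off of 8 deciding which modes go to the decision tree) can be sketched as follows. The failure modes and scores here are hypothetical placeholders; HFMEA typically scores severity and probability on 1-4 scales, and their product is the hazard score.

```python
# Hypothetical HFMEA data: (failure mode, severity 1-4, probability 1-4).
modes = [
    ("wrong blood unit issued", 4, 3),
    ("label smudged", 2, 3),
    ("delayed crossmatch", 3, 3),
]

# Hazard score = severity x probability; scores above the threshold are
# treated as non-acceptable risk and forwarded to the HFMEA decision tree.
THRESHOLD = 8
to_decision_tree = [
    (name, sev * prob) for name, sev, prob in modes if sev * prob > THRESHOLD
]
print(to_decision_tree)
```

    Modes that pass the threshold are then examined for criticality, existing controls, and detectability before corrective actions (accept, control, or eliminate) are assigned.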

  2. Proactive Risk Assessment of Blood Transfusion Process, in Pediatric Emergency, Using the Health Care Failure Mode and Effects Analysis (HFMEA)

    PubMed Central

    Dehnavieh, Reza; Ebrahimipour, Hossein; Molavi-Taleghani, Yasamin; Vafaee-Najar, Ali; Hekmat, Somayeh Noori; Esmailzdeh, Hamid

    2015-01-01

    Introduction: Pediatric emergency has been considered a high-risk area, and blood transfusion is known as a unique clinical measure; therefore, this study was conducted to proactively assess the risk of the blood transfusion process in the Pediatric Emergency of the Qaem education-treatment center in Mashhad, using the Healthcare Failure Mode and Effects Analysis (HFMEA) methodology. Methodology: This cross-sectional study analyzed the failure modes and effects of the blood transfusion process with a mixed quantitative-qualitative method. The proactive HFMEA was used to identify and analyze the potential failures of the process. The information for the items in the HFMEA forms was collected after obtaining a consensus of the experts' panel views via interviews and focus group discussion sessions. Results: A total of 77 failure modes were identified for 24 sub-processes within 8 processes of blood transfusion. In total, 13 failure modes were identified as non-acceptable risks (a hazard score above 8) in the blood transfusion process and were transferred to the decision tree. Root causes of high-risk modes were discussed in cause-effect meetings and were classified based on the UK National Health Service (NHS) approved classification model. Action types were classified as acceptance (11.6%), control (74.2%) and elimination (14.2%). Recommendations were placed in 7 categories using TRIZ ("Theory of Inventive Problem Solving"). Conclusion: Re-engineering the process for the required changes, standardizing and updating the blood transfusion procedure, root cause analysis of catastrophic blood transfusion events, patient identification bracelets, training classes and educational pamphlets for raising personnel awareness, and monthly meetings of the transfusion medicine committee have all been adopted as executive strategies in the pediatric emergency work agenda. PMID:25560332

  3. Finite Element Creep-Fatigue Analysis of a Welded Furnace Roll for Identifying Failure Root Cause

    NASA Astrophysics Data System (ADS)

    Yang, Y. P.; Mohr, W. C.

    2015-11-01

    Creep-fatigue induced failures are often observed in engineering components operating under high temperature and cyclic loading. Understanding the creep-fatigue damage process and identifying the failure root cause are very important for preventing such failures and improving the lifetime of engineering components. Finite element analyses, including a heat transfer analysis and a creep-fatigue analysis, were conducted to model the cyclic thermal and mechanical process of a furnace roll in a continuous hot-dip coating line. Typically, the roll has a short life, <1 year, which has been a long-standing problem. The failure occurred in the weld joining an end bell to a roll shell and resulted in the complete 360° separation of the end bell from the roll shell. The heat transfer analysis was conducted to predict the temperature history of the roll by modeling heat convection from hot air inside the furnace. The creep-fatigue analysis was performed by inputting the predicted temperature history and applying mechanical loads. The analysis results showed that the failure resulted from a creep-fatigue mechanism rather than a pure creep mechanism. The difference in material properties between the filler metal and the base metal is the root cause of the roll failure, as it induces higher creep strain and stress at the interface between the weld and the heat-affected zone (HAZ).

  4. SU-F-T-246: Evaluation of Healthcare Failure Mode And Effect Analysis For Risk Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harry, T; University of California, San Diego, La Jolla, CA; Manger, R

    Purpose: To compare the Veterans Affairs Healthcare Failure Modes and Effects Analysis (HFMEA) and the AAPM Task Group 100 Failure Mode and Effects Analysis (FMEA) risk assessment techniques in the setting of a stereotactic radiosurgery (SRS) procedure. Understanding the differences in the techniques' methodologies and outcomes will provide further insight into the applicability and utility of risk assessment exercises in radiation therapy. Methods: An HFMEA risk assessment was performed on a stereotactic radiosurgery procedure. A previous study from our institution completed an FMEA of our SRS procedure, and the process map generated from this work was used for the HFMEA. The process of performing the HFMEA scoring was analyzed, and the results from both analyses were compared. Results: The key differences between the two risk assessments are the scoring criteria for failure modes and the identification of critical failure modes for potential hazards. The general consensus among the team performing the analyses was that scoring for the HFMEA was simpler and more intuitive than for the FMEA. The FMEA identified 25 critical failure modes while the HFMEA identified 39. Seven of the FMEA critical failure modes were not identified by the HFMEA, and 21 of the HFMEA critical failure modes were not identified by the FMEA. HFMEA as described by the Veterans Affairs provides guidelines on which failure modes to address first. Conclusion: HFMEA is a more efficient model than FMEA for identifying gross risks in a process. Clinics with minimal staff, time and resources can benefit from this type of risk assessment to eliminate or mitigate high-risk hazards with nominal effort. FMEA can provide more in-depth details, but at the cost of elevated effort.

  5. Extended Testability Analysis Tool

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin; Maul, William A.; Fulton, Christopher

    2012-01-01

    The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.

  6. Note: Durability analysis of optical fiber hydrogen sensor based on Pd-Y alloy film.

    PubMed

    Huang, Peng-cheng; Chen, You-ping; Zhang, Gang; Song, Han; Liu, Yi

    2016-02-01

    The Pd-Y alloy sensing film has excellent properties for hydrogen detection, but within just one month its sensing performance degrades seriously. To study the failure of the sensing film, XPS spectral analysis was used to explore the chemical composition of the Pd-Y alloy film; the results demonstrate that the yttrium was oxidized. The paper proposes that this oxidation process is the likely reason for the failure of the sensing film. By better understanding the reason for the failure of the sensing film, we can improve the manufacturing process to enhance the performance of the hydrogen sensor.

  7. Application of ISO22000 and Failure Mode and Effect Analysis (fmea) for Industrial Processing of Poultry Products

    NASA Astrophysics Data System (ADS)

    Varzakas, Theodoros H.; Arvanitoyannis, Ioannis S.

    A Failure Mode and Effect Analysis (FMEA) model has been applied to the risk assessment of poultry slaughtering and manufacturing. In this work, a comparison of ISO 22000 analysis with HACCP is carried out for poultry slaughtering, processing and packaging. Critical Control Points and prerequisite programs (PrPs) have been identified and implemented in the cause-and-effect diagram (also known as the Ishikawa, tree, or fishbone diagram).

  8. Academic-Community Hospital Comparison of Vulnerabilities in Door-to-Needle Process for Acute Ischemic Stroke.

    PubMed

    Prabhakaran, Shyam; Khorzad, Rebeca; Brown, Alexandra; Nannicelli, Anna P; Khare, Rahul; Holl, Jane L

    2015-10-01

    Although best practices have been developed for achieving door-to-needle (DTN) times ≤60 minutes for stroke thrombolysis, critical DTN process failures persist. We sought to compare these failures in the Emergency Department at an academic medical center and a community hospital. Failure modes, effects, and criticality analysis was used to identify system and process failures. Multidisciplinary teams involved in DTN care participated in moderated sessions at each site. As a result, DTN process maps were created and potential failures and their causes, frequency, severity, and existing safeguards were identified. For each failure, a risk priority number and criticality score were calculated; failures were then ranked, with the highest scores representing the most critical failures and targets for intervention. We detected a total of 70 failures in 50 process steps at the community hospital and 76 failures in 42 process steps at the academic medical center. At the community hospital, critical failures included (1) delay in registration because of Emergency Department overcrowding, (2) incorrect triage diagnosis among walk-in patients, and (3) delay in obtaining consent for thrombolytic treatment. At the academic medical center, critical failures included (1) incorrect triage diagnosis among walk-in patients, (2) delay in stroke team activation, and (3) delay in obtaining computed tomographic imaging. Although the identification of common critical failures suggests opportunities for a generalizable process redesign, differences in the criticality and nature of failures must be addressed at the individual hospital level to develop robust and sustainable solutions that reduce DTN time. © 2015 American Heart Association, Inc.

  9. A streamlined failure mode and effects analysis.

    PubMed

    Ford, Eric C; Smith, Koren; Terezakis, Stephanie; Croog, Victoria; Gollamudi, Smitha; Gage, Irene; Keck, Jordie; DeWeese, Theodore; Sibley, Greg

    2014-06-01

    To explore the feasibility and impact of a streamlined failure mode and effects analysis (FMEA) using a structured process that is designed to minimize staff effort. FMEA for the external beam process was conducted at an affiliate radiation oncology center that treats approximately 60 patients per day. A structured FMEA process was developed which included clearly defined roles and goals for each phase. A core group of seven people was identified and a facilitator was chosen to lead the effort. Failure modes were identified and scored according to the FMEA formalism. A risk priority number, RPN, was calculated and used to rank failure modes. Failure modes with RPN > 150 received safety improvement interventions. Staff effort was carefully tracked throughout the project. Fifty-two failure modes were identified: 22 collected during meetings and 30 from take-home worksheets. The four top-ranked failure modes were: delay in film check, missing pacemaker protocol/consent, critical structures not contoured, and pregnant patient simulated without the team's knowledge of the pregnancy. These four failure modes had RPN > 150 and received safety interventions. The FMEA was completed in one month in four 1-h meetings. A total of 55 staff hours were required, plus an additional 20 h by the facilitator. Streamlined FMEA provides a means of accomplishing a relatively large-scale analysis with modest effort. One potential value of FMEA is that it provides a means of measuring the impact of quality improvement efforts through a reduction in risk scores. Future study of this possibility is needed.

  10. Independent Orbiter Assessment (IOA): Analysis of the crew equipment subsystem

    NASA Technical Reports Server (NTRS)

    Sinclair, Susan; Graham, L.; Richard, Bill; Saxon, H.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results corresponding to the Orbiter crew equipment hardware are documented. The IOA analysis process utilized available crew equipment hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 352 failure modes analyzed, 78 were determined to be PCIs.

  11. Failure modes and effects analysis automation

    NASA Technical Reports Server (NTRS)

    Kamhieh, Cynthia H.; Cutts, Dannie E.; Purves, R. Byron

    1988-01-01

    A failure modes and effects analysis (FMEA) assistant was implemented as a knowledge based system and will be used during design of the Space Station to aid engineers in performing the complex task of tracking failures throughout the entire design effort. The three major directions in which automation was pursued were the clerical components of the FMEA process, the knowledge acquisition aspects of FMEA, and the failure propagation/analysis portions of the FMEA task. The system is accessible to design, safety, and reliability engineers at single user workstations and, although not designed to replace conventional FMEA, it is expected to decrease by many man years the time required to perform the analysis.

  12. Validating FMEA output against incident learning data: A study in stereotactic body radiation therapy.

    PubMed

    Yang, F; Cao, N; Young, L; Howard, J; Logan, W; Arbuckle, T; Sponseller, P; Korssjoen, T; Meyer, J; Ford, E

    2015-06-01

    Though failure mode and effects analysis (FMEA) is becoming more widely adopted for risk assessment in radiation therapy, to our knowledge its output has never been validated against data on errors that actually occur. The objective of this study was to perform an FMEA of a stereotactic body radiation therapy (SBRT) treatment planning process and validate the results against data recorded within an incident learning system. FMEA on the SBRT treatment planning process was carried out by a multidisciplinary group including radiation oncologists, medical physicists, dosimetrists, and IT technologists. Potential failure modes were identified through a systematic review of the process map. Failure modes were rated for severity, occurrence, and detectability on a scale of one to ten, and a risk priority number (RPN) was computed. Failure modes were then compared with historical reports identified as relevant to SBRT planning within a departmental incident learning system that had been active for two and a half years. Differences between the failure modes anticipated by FMEA and the existing incidents were identified. FMEA identified 63 failure modes. RPN values for the top 25% of failure modes ranged from 60 to 336. Analysis of the incident learning database identified 33 reported near-miss events related to SBRT planning. Combining both methods yielded a total of 76 possible process failures, of which 13 (17%) were missed by FMEA while 43 (57%) were identified by FMEA only. When scored for RPN, the 13 events missed by FMEA ranked within the lower half of all failure modes and exhibited significantly lower severity relative to those identified by FMEA (p = 0.02). FMEA, though valuable, is subject to certain limitations. In this study, FMEA failed to identify 17% of actual failure modes, though these were of lower risk. Similarly, an incident learning system alone fails to identify a large number of potentially high-severity process errors.
Using FMEA in combination with incident learning may render an improved overview of risks within a process.
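    The validation exercise above is essentially a comparison of two sets of failure modes: those anticipated by FMEA and those observed in the incident learning system. A minimal sketch, using hypothetical placeholder failure-mode names rather than the study's actual data:

```python
# Hypothetical failure-mode sets for illustration only.
fmea_modes = {"wrong isocenter", "missing contour", "stale image set"}
incident_modes = {"wrong isocenter", "mislabeled plan"}

# Union: every distinct process failure surfaced by either method.
all_failures = fmea_modes | incident_modes
# Observed in practice but not anticipated by FMEA.
missed_by_fmea = incident_modes - fmea_modes
# Anticipated by FMEA but not (yet) observed as incidents.
fmea_only = fmea_modes - incident_modes

print(len(all_failures), sorted(missed_by_fmea), sorted(fmea_only))
```

    Counting these three sets gives exactly the kind of coverage statistics the study reports (total failures, percentage missed by FMEA, percentage identified by FMEA only).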

  13. FMEA of manual and automated methods for commissioning a radiotherapy treatment planning system.

    PubMed

    Wexler, Amy; Gu, Bruce; Goddu, Sreekrishna; Mutic, Maya; Yaddanapudi, Sridhar; Olsen, Lindsey; Harry, Taylor; Noel, Camille; Pawlicki, Todd; Mutic, Sasa; Cai, Bin

    2017-09-01

    To evaluate the level of risk involved in treatment planning system (TPS) commissioning using a manual test procedure (MTP), and to compare the associated process-based risk to that of an automated commissioning process (ACP), by performing an in-depth failure modes and effects analysis (FMEA). The authors collaborated to determine the potential failure modes of the TPS commissioning process using (a) approaches involving manual data measurement, modeling, and validation tests and (b) an automated process utilizing application programming interface (API) scripting, preloaded and premodeled standard radiation beam data, a digital heterogeneous phantom, and an automated commissioning test suite (ACTS). The severity (S), occurrence (O), and detectability (D) were scored for each failure mode and the risk priority numbers (RPN) were derived based on the TG-100 scale. Failure modes were then analyzed and ranked based on RPN. The total number of failure modes, the RPN scores, and the ten highest-risk failure modes were described and cross-compared between the two approaches. An RPN reduction analysis is also presented and used as another quantifiable metric to evaluate the proposed approach. The FMEA of the MTP resulted in 47 failure modes with an average RPN of 161 and an average severity of 6.7; the highest-risk process, "Measurement Equipment Selection", had a maximum RPN of 640. The FMEA of the ACP resulted in 36 failure modes with an average RPN of 73 and an average severity of 6.7; the highest-risk process, "EPID Calibration", had a maximum RPN of 576. An FMEA of treatment planning commissioning tests using automation and standardization via API scripting, preloaded and premodeled standard beam data, and digital phantoms suggests that errors and risks may be reduced through the use of an ACP. © 2017 American Association of Physicists in Medicine.

  14. [Failure mode effect analysis applied to preparation of intravenous cytostatics].

    PubMed

    Santos-Rubio, M D; Marín-Gil, R; Muñoz-de la Corte, R; Velázquez-López, M D; Gil-Navarro, M V; Bautista-Paloma, F J

    2016-01-01

    To proactively identify risks in the preparation of intravenous cytostatic drugs, and to prioritise and establish measures to improve safety procedures. Failure Mode Effect Analysis methodology was used. A multidisciplinary team identified potential failure modes of the procedure through a brainstorming session. The impact associated with each failure mode was assessed with the Risk Priority Number (RPN), which involves three variables: occurrence, severity, and detectability. Improvement measures were established for all identified failure modes, with those with RPN > 100 considered critical. The final (theoretical) RPN that would result from the proposed measures was also calculated and the process was redesigned. A total of 34 failure modes were identified. The initial accumulated RPN was 3022 (range: 3-252), and after the recommended actions the final RPN was 1292 (range: 3-189). RPN scores > 100 were obtained in 13 failure modes; only the dispensing sub-process was free of critical points (RPN > 100). A final RPN reduction of > 50% was achieved in 9 failure modes. This prospective risk analysis methodology allows the weaknesses of the procedure to be prioritised, the use of resources to be optimised, and the safety of cytostatic drug preparation to be substantially improved through the introduction of double checking and intermediate product labelling. Copyright © 2015 SECA. Published by Elsevier España. All rights reserved.
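
    The before/after comparison this record describes (critical points at RPN > 100, per-mode reductions of more than 50%, and an accumulated RPN total) can be sketched as follows; the failure modes and scores are invented for illustration:

```python
# Invented example of an initial-vs-final RPN comparison after
# improvement measures; the critical threshold follows the abstract.

CRITICAL = 100  # failure modes with RPN above this are critical points

def reduction(initial: int, final: int) -> float:
    """Fractional RPN reduction achieved by the recommended actions."""
    return (initial - final) / initial

failure_modes = [
    # (name, initial RPN, RPN after recommended actions)
    ("wrong diluent volume", 252, 90),
    ("label mix-up", 180, 60),
    ("late dispensing", 48, 48),
]

critical = [name for name, before, _ in failure_modes if before > CRITICAL]
halved = [name for name, before, after in failure_modes
          if reduction(before, after) > 0.5]
total_before = sum(before for _, before, _ in failure_modes)
total_after = sum(after for _, _, after in failure_modes)
```

    Comparing the accumulated RPN before and after the redesign gives the single summary figure (here 480 → 198) that the abstract reports as 3022 → 1292.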

  15. Poster - Thur Eve - 05: Safety systems and failure modes and effects analysis for a magnetic resonance image guided radiation therapy system.

    PubMed

    Lamey, M; Carlone, M; Alasti, H; Bissonnette, J P; Borg, J; Breen, S; Coolens, C; Heaton, R; Islam, M; van Proojen, M; Sharpe, M; Stanescu, T; Jaffray, D

    2012-07-01

    An online Magnetic Resonance guided Radiation Therapy (MRgRT) system is under development. The system comprises an MRI scanner capable of travelling between, and into, the HDR brachytherapy and external beam radiation therapy vaults. The system will provide online MR images immediately prior to radiation therapy; these will be registered to a planning image and used for image guidance. To address system safety, we performed a failure modes and effects analysis. A process tree of the facility function was developed. Using the process tree and an initial design of the facility as guidelines, possible failure modes were identified, and for each of these failure modes root causes were identified. Severity, detectability, and occurrence scores were then assigned for each possible failure. Finally, suggestions were developed to reduce the possibility of an event. The process tree consists of nine main inputs, each comprising 5-10 sub-inputs; tertiary inputs were also defined. The process tree ensures that the overall safety of the system has been considered. Several possible failure modes were identified, relevant to the design, construction, commissioning, and operating phases of the facility. The utility of the analysis can be seen in that it has spawned projects prior to installation and has led to suggestions in the design of the facility. © 2012 American Association of Physicists in Medicine.

  16. Proposal on How To Conduct a Biopharmaceutical Process Failure Mode and Effect Analysis (FMEA) as a Risk Assessment Tool.

    PubMed

    Zimmermann, Hartmut F; Hentschel, Norbert

    2011-01-01

    With the publication of the quality guideline ICH Q9 "Quality Risk Management" by the International Conference on Harmonization, risk management has become a standard requirement during the life cycle of a pharmaceutical product. Failure mode and effect analysis (FMEA) is a powerful risk analysis tool that has been used for decades in the mechanical and electrical industries. However, the adaptation of the FMEA methodology to biopharmaceutical processes brings about some difficulties. The proposal presented here is intended to serve as a brief but nevertheless comprehensive and detailed guideline on how to conduct a biopharmaceutical process FMEA. It includes a detailed 1-to-10-scale FMEA rating table for occurrence, severity, and detectability of failures that has been especially designed for typical biopharmaceutical processes. The potential applications of such a biopharmaceutical process FMEA are broad. It can be useful whenever a biopharmaceutical manufacturing process is developed or scaled up, or when it is transferred to a different manufacturing site. It may also be conducted during substantial optimization of an existing process or the development of a second-generation process. According to their resulting risk ratings, process parameters can be ranked for importance, and important variables for process development, characterization, or validation can be identified. Health authorities around the world ask pharmaceutical companies to manage risk during development and manufacturing of pharmaceuticals. The so-called failure mode and effect analysis (FMEA) is an established risk analysis tool that has been used for decades in the mechanical and electrical industries. However, the adaptation of the FMEA methodology to pharmaceutical processes that use modern biotechnology (biopharmaceutical processes) brings about some difficulties, because those biopharmaceutical processes differ from processes in the mechanical and electrical industries.
    The proposal presented here explains how a biopharmaceutical process FMEA can be conducted. It includes a detailed 1-to-10-scale FMEA rating table for occurrence, severity, and detectability of failures that has been especially designed for typical biopharmaceutical processes. With the help of this guideline, different details of the manufacturing process can be ranked according to their potential risks, and this can help pharmaceutical companies to identify aspects with high potential risks and to react accordingly to improve the safety of medicines.

  17. Independent Orbiter Assessment (IOA): Analysis of the pyrotechnics subsystem

    NASA Technical Reports Server (NTRS)

    Robinson, W. W.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Pyrotechnics hardware. The IOA analysis process utilized available pyrotechnics hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  18. Improving the treatment planning and delivery process of Xoft electronic skin brachytherapy.

    PubMed

    Manger, Ryan; Rahn, Douglas; Hoisak, Jeremy; Dragojević, Irena

    2018-05-14

    To develop an improved Xoft electronic skin brachytherapy process and identify areas of further improvement. A multidisciplinary team conducted a failure modes and effects analysis (FMEA) by developing a process map and a corresponding list of failure modes. The failure modes were scored for their occurrence, severity, and detectability, and a risk priority number (RPN) was calculated for each failure mode as the product of occurrence, severity, and detectability. Corrective actions were implemented to address the higher-risk failure modes, and a revised process was generated. The RPNs of the failure modes were compared between the initial process and the final process to assess the perceived benefits of the corrective actions. The final treatment process comprises 100 steps, within which 114 failure modes were identified. The FMEA took approximately 20 person-hours (one physician, three physicists, and two therapists) to complete. The 10 most dangerous failure modes had RPNs ranging from 336 to 630. Corrective actions were effective at addressing most failure modes (10 riskiest RPNs ranging from 189 to 310), yet the RPNs were higher than those published for alternative systems. Many of these high-risk failure modes remained due to hardware design limitations. FMEA helps guide process improvement efforts by emphasizing the riskiest steps. Significant risks are apparent when using a Xoft treatment unit for skin brachytherapy due to hardware limitations such as the lack of several interlocks, a short source lifespan, and variability in source output. The process presented in this article is expected to reduce but not eliminate these risks. Copyright © 2018 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  19. A streamlined failure mode and effects analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ford, Eric C., E-mail: eford@uw.edu; Smith, Koren; Terezakis, Stephanie

    Purpose: Explore the feasibility and impact of a streamlined failure mode and effects analysis (FMEA) using a structured process that is designed to minimize staff effort. Methods: FMEA for the external beam process was conducted at an affiliate radiation oncology center that treats approximately 60 patients per day. A structured FMEA process was developed which included clearly defined roles and goals for each phase. A core group of seven people was identified and a facilitator was chosen to lead the effort. Failure modes were identified and scored according to the FMEA formalism. A risk priority number (RPN) was calculated and used to rank failure modes. Failure modes with RPN > 150 received safety improvement interventions. Staff effort was carefully tracked throughout the project. Results: Fifty-two failure modes were identified, 22 collected during meetings and 30 from take-home worksheets. The four top-ranked failure modes were: delay in film check, missing pacemaker protocol/consent, critical structures not contoured, and pregnant patient simulated without the team's knowledge of the pregnancy. These four failure modes had RPN > 150 and received safety interventions. The FMEA was completed in one month in four 1-h meetings. A total of 55 staff hours were required and, additionally, 20 h by the facilitator. Conclusions: Streamlined FMEA provides a means of accomplishing a relatively large-scale analysis with modest effort. One potential value of FMEA is that it provides a means of measuring the impact of quality improvement efforts through a reduction in risk scores. Future study of this possibility is needed.

  20. Using the failure mode and effects analysis model to improve parathyroid hormone and adrenocorticotropic hormone testing

    PubMed Central

    Magnezi, Racheli; Hemi, Asaf; Hemi, Rina

    2016-01-01

    Background: Risk management in health care systems applies to all hospital employees and directors as they deal with human life and emergency routines. There is a constant need to decrease risk and increase patient safety in the hospital environment. The purpose of this article is to review the laboratory testing procedures for parathyroid hormone and adrenocorticotropic hormone (which are characterized by short half-lives) and to track failure modes and risks, and offer solutions to prevent them. During a routine quality improvement review at the Endocrine Laboratory in Tel Hashomer Hospital, we discovered these tests are frequently repeated unnecessarily due to multiple failures. The repetition of the tests inconveniences patients and leads to extra work for the laboratory and logistics personnel as well as the nurses and doctors who have to perform many tasks with limited resources. Methods: A team of eight staff members accompanied by the Head of the Endocrine Laboratory formed the team for analysis. The failure mode and effects analysis model (FMEA) was used to analyze the laboratory testing procedure and was designed to simplify the process steps and indicate and rank possible failures. Results: A total of 23 failure modes were found within the process, 19 of which were ranked by level of severity. The FMEA model prioritizes failures by their risk priority number (RPN). For example, the most serious failure was the delay after the samples were collected from the department (RPN = 226.1). Conclusion: This model helped us to visualize the process in a simple way. After analyzing the information, solutions were proposed to prevent failures, and a method to completely avoid the top four problems was also developed. PMID:27980440

  1. Predictive failure analysis: planning for the worst so that it never happens!

    PubMed

    Hipple, Jack

    2008-01-01

    This article reviews an alternative approach to failure analysis that adopts a deliberate saboteur's perspective rather than a checklist approach to disaster and emergency preparedness. The process takes the form of an algorithm that is easily applied to any planning situation.

  2. Failures to further developing orphan medicinal products after designation granted in Europe: an analysis of marketing authorisation failures and abandoned drugs.

    PubMed

    Giannuzzi, Viviana; Landi, Annalisa; Bosone, Enrico; Giannuzzi, Floriana; Nicotri, Stefano; Torrent-Farnell, Josep; Bonifazi, Fedele; Felisi, Mariagrazia; Bonifazi, Donato; Ceci, Adriana

    2017-09-11

    The research and development process in the field of rare diseases is characterised by many well-known difficulties, and a large percentage of orphan medicinal products do not reach marketing approval. This work aims at identifying orphan medicinal products that failed the development process and investigating reasons for, and possible factors influencing, failures. Drugs designated in Europe under Regulation (EC) No 141/2000 in the period 2000-2012 were investigated in terms of the following failures: (1) marketing authorisation failures (refused or withdrawn) and (2) drugs abandoned by sponsors during development. Possible risk factors for failure were analysed using statistically validated methods. This study points out that 437 out of 788 designations are still under development, while 219 failed the development process. Among the latter, 34 failed the marketing authorisation process and 185 were abandoned during development. In the first group (marketing authorisation failures), 50% reached phase II, 47% reached phase III and 3% reached phase I, while in the second group (abandoned drugs), the majority of orphan medicinal products apparently never started the development process, since no data were published on 48.1% of them and 3.2% did not progress beyond the non-clinical stage. The reasons for marketing authorisation failures were: efficacy/safety issues (26), insufficient data (12), quality issues (7), regulatory issues on trials (4) and commercial reasons (1). The main causes for abandoned drugs were efficacy/safety issues (reported in 54 cases), inactive companies (25.4%), change of company strategy (8.1%) and drug competition (10.8%). No information concerning reasons for failure was available for 23.2% of the analysed products. This analysis shows that failures occurred in 27.8% of all designations granted in Europe, the main reasons being safety and efficacy issues.
    Moreover, the stage of development reached by a drug represents a specific risk factor for failure. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  3. Interfacing LabVIEW With Instrumentation for Electronic Failure Analysis and Beyond

    NASA Technical Reports Server (NTRS)

    Buchanan, Randy K.; Bryan, Coleman; Ludwig, Larry

    1996-01-01

    The Laboratory Virtual Instrumentation Engineering Workstation (LabVIEW) software is designed so that equipment and processes related to control systems can be operationally linked and controlled from a computer. Various processes within the failure analysis laboratories of NASA's Kennedy Space Center (KSC) demonstrate the need for modernization and, in some cases, automation, using LabVIEW. An examination of procedures and practices within the Failure Analysis Laboratory led to the conclusion that some tool was necessary to bring prospective users of LabVIEW to an operational level in minimum time. This paper outlines the process involved in creating a tutorial application to enable personnel to apply LabVIEW to their specific projects. Suggestions for furthering the extent to which LabVIEW is used are provided in the areas of data acquisition and process control.

  4. Independent Orbiter Assessment (IOA): Analysis of the communication and tracking subsystem

    NASA Technical Reports Server (NTRS)

    Gardner, J. R.; Robinson, W. M.; Trahan, W. H.; Daley, E. S.; Long, W. C.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Communication and Tracking hardware. The IOA analysis process utilized available Communication and Tracking hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teixeira, Flavia C., E-mail: flavitiz@gmail.com; Almeida, Carlos E. de; Saiful Huq, M.

    Purpose: The goal of this study was to evaluate the safety and quality management program for stereotactic radiosurgery (SRS) treatment processes at three radiotherapy centers in Brazil by using three industrial engineering tools: (1) process mapping, (2) failure modes and effects analysis (FMEA), and (3) fault tree analysis. Methods: The recommendations of Task Group 100 of the American Association of Physicists in Medicine were followed to apply the three tools described above to create a process tree for the SRS procedure at each radiotherapy center, and then FMEA was performed. Failure modes were identified for all process steps and values of the risk priority number (RPN) were calculated from O, S, and D (RPN = O × S × D) values assigned by a professional team responsible for patient care. Results: The treatment planning subprocess presented the highest number of failure modes at all centers. The total numbers of failure modes were 135, 104, and 131 for centers I, II, and III, respectively. The highest RPN value for each center is as follows: center I (204), center II (372), and center III (370). Failure modes with RPN ≥ 100: center I (22), center II (115), and center III (110). Failure modes characterized by S ≥ 7 represented 68% of the failure modes for center III, 62% for center II, and 45% for center I. Failure modes with RPN values ≥ 100 and S ≥ 7, D ≥ 5, and O ≥ 5 were considered high priority in this study. Conclusions: The results of the present study show that the safety risk profiles for the same stereotactic radiotherapy process are different at three radiotherapy centers in Brazil. Although this is the same treatment process, the present study showed that the risk priorities differ, which will lead to implementation of different safety interventions among the centers.
    Therefore, the current practice of applying universal device-centric QA is not adequate to address all possible failures in clinical processes at different radiotherapy centers. Integrated approaches combining device-centric and process-specific quality management programs specific to each radiotherapy center are the key to a safe quality management program.
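
    The composite prioritisation rule this study applies (RPN ≥ 100 together with S ≥ 7, D ≥ 5, and O ≥ 5) amounts to a simple conjunctive filter. A sketch with invented failure modes and scores:

```python
# Conjunctive high-priority filter as described in the abstract;
# the candidate failure modes and their O/S/D scores are hypothetical.

def is_high_priority(o: int, s: int, d: int) -> bool:
    return o * s * d >= 100 and s >= 7 and d >= 5 and o >= 5

candidates = {
    "contour transfer error": (5, 8, 5),   # RPN 200, all factor floors met
    "couch index typo": (3, 9, 6),         # RPN 162, but O < 5
    "cone size mislabeled": (5, 7, 2),     # D < 5
}
high_priority = sorted(name for name, osd in candidates.items()
                       if is_high_priority(*osd))
```

    Requiring minimum S, O, and D values alongside the RPN threshold prevents one extreme factor from pushing an otherwise benign failure mode to the top of the list.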

  6. Fault management for the Space Station Freedom control center

    NASA Technical Reports Server (NTRS)

    Clark, Colin; Jowers, Steven; Mcnenny, Robert; Culbert, Chris; Kirby, Sarah; Lauritsen, Janet

    1992-01-01

    This paper describes model-based reasoning for fault isolation in complex systems using automated digraph analysis. It discusses the use of the digraph representation as the paradigm for modeling physical systems and a method for executing these failure models to provide real-time failure analysis. It also discusses the generality, ease of development and maintenance, complexity management, and amenability to verification and validation of digraph failure models. It specifically describes how a NASA-developed digraph evaluation tool, and an automated process working with that tool, can identify failures in a monitored system when supplied with one or more fault indications. This approach is well suited to commercial applications of real-time failure analysis in complex systems because it is both powerful and cost effective.
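
    The digraph fault-isolation idea, finding the components whose failure would explain every observed indication, can be sketched as a toy reachability search; the model below is invented for illustration, not the Space Station Freedom control center model:

```python
# Toy digraph fault isolation: a candidate cause is any node from which
# every observed fault indication is reachable along failure-propagation
# edges. The failure model below is invented.

def reachable(model, start):
    """All nodes whose failure can be caused by a failure of `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(model.get(node, []))
    return seen

def candidate_causes(model, indications):
    nodes = set(model) | {t for targets in model.values() for t in targets}
    return sorted(n for n in nodes
                  if set(indications) <= reachable(model, n))

failure_model = {            # edge A -> B: a failure of A can fail B
    "power bus": ["pump", "sensor"],
    "pump": ["flow alarm"],
    "sensor": ["temp alarm"],
}
causes = candidate_causes(failure_model, ["flow alarm", "temp alarm"])
```

    With both alarms raised, only the shared upstream node survives the filter, which is exactly the kind of pruning that makes digraph models useful for real-time diagnosis.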

  7. High-throughput sequencing: a failure mode analysis.

    PubMed

    Yang, George S; Stott, Jeffery M; Smailus, Duane; Barber, Sarah A; Balasundaram, Miruna; Marra, Marco A; Holt, Robert A

    2005-01-04

    Basic manufacturing principles are becoming increasingly important in high-throughput sequencing facilities where there is a constant drive to increase quality, increase efficiency, and decrease operating costs. While high-throughput centres report failure rates typically on the order of 10%, the causes of sporadic sequencing failures are seldom analyzed in detail and have not, in the past, been formally reported. Here we report the results of a failure mode analysis of our production sequencing facility based on detailed evaluation of 9,216 ESTs generated from two cDNA libraries. Two categories of failures are described; process-related failures (failures due to equipment or sample handling) and template-related failures (failures that are revealed by close inspection of electropherograms and are likely due to properties of the template DNA sequence itself). Preventative action based on a detailed understanding of failure modes is likely to improve the performance of other production sequencing pipelines.

  8. Microseismic Signature of Magma Failure: Testing Failure Forecast in Heterogeneous Material

    NASA Astrophysics Data System (ADS)

    Vasseur, J.; Lavallee, Y.; Hess, K.; Wassermann, J. M.; Dingwell, D. B.

    2012-12-01

    Volcanoes exhibit a range of seismic precursors prior to eruptions. These signals derive from different processes which, if quantified, may tell us when and how a volcano will erupt: effusively or explosively. This quantification can be performed in the laboratory. Here we investigated the signals associated with the deformation and failure of single-phase silicate liquids compared to multi-phase magmas containing pores and crystals as heterogeneities. For the past decades, magmas have been simplified as viscoelastic fluids with grossly predictable failure, following an analysis of the stress and strain rate conditions in volcanic conduits. Yet it is clear that the way magmas fail is not unique, and evidence increasingly illustrates the role of heterogeneities in the process of magmatic fragmentation. In such multi-phase magmas, failure cannot be predicted using current rheological laws. Microseismicity, as detected in the laboratory by analogous acoustic emissions (AE), can be used to monitor fracture initiation and propagation, and thus provides invaluable information to characterise the process of brittle failure underlying explosive eruptions. Triaxial press experiments on different synthesised and natural glass samples were performed to investigate the acoustic signature of failure. We observed that the failure of single-phase liquids occurs without much strain and is preceded by the constant nucleation, propagation, and coalescence of cracks, as demonstrated by the monitored AE. In contrast, the failure of multi-phase magmas depends on the applied stress and is strain dependent. The path dependence of magma failure is nonetheless accompanied by a supra-exponential acceleration in released AEs. Analysis of the released AEs following the material Failure Forecast Method (FFM) suggests that the predictability of failure is enhanced by the presence of heterogeneities in magmas. We discuss our observations in terms of volcanic scenarios.
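
    A common practical form of the FFM extrapolates the inverse AE rate, which often decays roughly linearly toward zero as failure approaches. A minimal sketch on synthetic data; the linear inverse-rate assumption and the numbers are illustrative, not from the experiments:

```python
# Inverse-rate Failure Forecast Method sketch on synthetic AE data.
# Assumes 1/rate decays linearly in time; real AE records are noisier.

def forecast_failure_time(times, ae_rates):
    """Least-squares line through (t, 1/rate); return its zero crossing."""
    inv = [1.0 / r for r in ae_rates]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(inv) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times, inv))
             / sum((t - t_mean) ** 2 for t in times))
    intercept = y_mean - slope * t_mean
    return -intercept / slope  # time at which 1/rate extrapolates to zero

# Synthetic acceleration with true failure time t_f = 100 s:
times = [10.0, 30.0, 50.0, 70.0, 90.0]
ae_rates = [1.0 / (100.0 - t) for t in times]  # rate blows up at t_f
t_f_estimate = forecast_failure_time(times, ae_rates)
```

    On noisy laboratory data the quality of this extrapolation is exactly what the abstract measures: heterogeneous samples produce a cleaner acceleration, so the zero crossing is better constrained.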

  9. Lifetime evaluation of large format CMOS mixed signal infrared devices

    NASA Astrophysics Data System (ADS)

    Linder, A.; Glines, Eddie

    2015-09-01

    New large-scale foundry processes continue to produce reliable products, and devices built on these processes use industry best practice to screen for failure mechanisms and validate their long lifetimes. Failure-in-time (FIT) analysis, in conjunction with foundry qualification information, can be used to evaluate large format device lifetimes. This analysis is a helpful tool when zero-failure life tests are typical. The reliability of the device is estimated by applying the failure rate to the use conditions. JEDEC publications continue to be the industry-accepted methods.
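
    For a zero-failure life test, a standard chi-squared upper bound on the failure rate, λ ≤ χ²(CL, 2)/(2T), reduces to −ln(1 − CL)/T under an exponential-lifetime assumption; in FIT units (failures per 10⁹ device-hours) it can be sketched as below. The part count, test duration, and confidence level are hypothetical:

```python
# Zero-failure FIT upper-bound sketch (exponential lifetime assumed);
# part counts and hours are invented, not taken from the paper.
import math

def fit_upper_bound(devices: int, hours_per_device: float,
                    confidence: float = 0.60) -> float:
    """Upper-bound failure rate in FIT (failures per 1e9 device-hours)."""
    device_hours = devices * hours_per_device
    lam = -math.log(1.0 - confidence) / device_hours  # failures per hour
    return lam * 1e9

# e.g. 77 parts on a 1000-hour life test with zero observed failures
fit_60 = fit_upper_bound(77, 1000.0)
```

    Note that with zero failures the bound is driven entirely by accumulated device-hours and the chosen confidence level, which is why acceleration factors and large sample sizes matter so much in practice.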

  10. Failure modes and effects analysis for ocular brachytherapy.

    PubMed

    Lee, Yongsook C; Kim, Yongbok; Huynh, Jason Wei-Yeong; Hamilton, Russell J

    The aim of the study was to identify potential failure modes (FMs) having a high risk and to improve our current quality management (QM) program in Collaborative Ocular Melanoma Study (COMS) ocular brachytherapy by undertaking a failure modes and effects analysis (FMEA) and a fault tree analysis (FTA). Process mapping and FMEA were performed for COMS ocular brachytherapy. For all FMs identified in FMEA, risk priority numbers (RPNs) were determined by assigning and multiplying occurrence, severity, and lack of detectability values, each ranging from 1 to 10. FTA was performed for the major process that had the highest ranked FM. Twelve major processes, 121 sub-process steps, 188 potential FMs, and 209 possible causes were identified. For 188 FMs, RPN scores ranged from 1.0 to 236.1. The plaque assembly process had the highest ranked FM. The majority of FMs were attributable to human failure (85.6%), and medical physicist-related failures were the most numerous (58.9% of all causes). After FMEA, additional QM methods were included for the top 10 FMs and 6 FMs with severity values > 9.0. As a result, for these 16 FMs and the 5 major processes involved, quality control steps were increased from 8 (50%) to 15 (93.8%), and major processes having quality assurance steps were increased from 2 to 4. To reduce high risk in current clinical practice, we proposed QM methods. They mainly include a check or verification of procedures/steps and the use of checklists for both ophthalmology and radiation oncology staff, and intraoperative ultrasound-guided plaque positioning for ophthalmology staff. Copyright © 2017 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  11. Functional Fault Model Development Process to Support Design Analysis and Operational Assessment

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Maul, William A.; Hemminger, Joseph A.

    2016-01-01

    A functional fault model (FFM) is an abstract representation of the failure space of a given system. As such, it simulates the propagation of failure effects along paths between the origin of the system failure modes and points within the system capable of observing the failure effects. As a result, FFMs may be used to diagnose the presence of failures in the modeled system. FFMs necessarily contain a significant amount of information about the design, operations, and failure modes and effects. One of the important benefits of FFMs is that they may be qualitative, rather than quantitative and, as a result, may be implemented early in the design process when there is more potential to positively impact the system design. FFMs may therefore be developed and matured throughout the monitored system's design process and may subsequently be used to provide real-time diagnostic assessments that support system operations. This paper provides an overview of a generalized NASA process that is being used to develop and apply FFMs. FFM technology has been evolving for more than 25 years. The FFM development process presented in this paper was refined during NASA's Ares I, Space Launch System, and Ground Systems Development and Operations programs (i.e., from about 2007 to the present). Process refinement took place as new modeling, analysis, and verification tools were created to enhance FFM capabilities. In this paper, standard elements of a model development process (i.e., knowledge acquisition, conceptual design, implementation & verification, and application) are described within the context of FFMs. Further, newer tools and analytical capabilities that may benefit the broader systems engineering process are identified and briefly described. The discussion is intended as a high-level guide for future FFM modelers.
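
    The failure-effect propagation an FFM simulates can be sketched as reachability on a directed graph: each failure mode's diagnosable signature is the set of monitored points its effects can reach, and modes with identical signatures are mutually indistinguishable. The toy model and node names below are invented:

```python
# Toy FFM sketch: propagate failure effects forward through a digraph
# and collect which monitored nodes can observe them. Model is invented.

def effect_signature(model, monitors, failure_mode):
    """Monitored nodes reachable from the failure mode's origin."""
    seen, stack = set(), [failure_mode]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(model.get(node, []))
    return frozenset(seen & monitors)

def ambiguity_groups(model, monitors, failure_modes):
    """Failure modes that produce identical monitor signatures."""
    by_signature = {}
    for fm in failure_modes:
        sig = effect_signature(model, monitors, fm)
        by_signature.setdefault(sig, []).append(fm)
    return [sorted(g) for g in by_signature.values() if len(g) > 1]

model = {
    "valve stuck closed": ["line pressure high"],
    "upstream blockage": ["line pressure high"],
    "line pressure high": ["flow low"],
    "flow low": ["flow sensor F1"],
    "seal leak": ["pressure sensor P2"],
}
monitors = {"flow sensor F1", "pressure sensor P2"}
groups = ambiguity_groups(model, monitors,
                          ["valve stuck closed", "upstream blockage", "seal leak"])
```

    Because this kind of model is purely qualitative, it can be built early in design, which is when surfacing an ambiguity group is cheapest to fix (for example, by adding a sensor that splits it).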

  12. Failure mode and effects analysis using intuitionistic fuzzy hybrid weighted Euclidean distance operator

    NASA Astrophysics Data System (ADS)

    Liu, Hu-Chen; Liu, Long; Li, Ping

    2014-10-01

    Failure mode and effects analysis (FMEA) has shown its effectiveness in examining potential failures in products, processes, designs, or services, and has been extensively used for safety and reliability analysis in a wide range of industries. However, its approach of prioritising failure modes through a crisp risk priority number (RPN) has been criticised as having several shortcomings. The aim of this paper is to develop an efficient and comprehensive risk assessment methodology using an intuitionistic fuzzy hybrid weighted Euclidean distance (IFHWED) operator to overcome these limitations and improve the effectiveness of traditional FMEA. The diversified and uncertain assessments given by FMEA team members are treated as linguistic terms expressed in intuitionistic fuzzy numbers (IFNs). An intuitionistic fuzzy weighted averaging (IFWA) operator is used to aggregate the FMEA team members' individual assessments into a group assessment. The IFHWED operator is applied thereafter to the prioritisation and selection of failure modes. In particular, both subjective and objective weights of risk factors are considered during the risk evaluation process. Finally, a numerical example of risk assessment is given to illustrate the proposed method.
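
    One building block of this approach, the IFWA operator, aggregates intuitionistic fuzzy ratings (μ = membership, ν = non-membership) as (1 − ∏(1 − μⱼ)^wⱼ, ∏ νⱼ^wⱼ). A minimal sketch; the ratings, weights, and the assumed Xu-style form of the operator are illustrative only and do not reproduce the paper's full IFHWED procedure:

```python
# IFWA aggregation sketch (assumed Xu-style operator); ratings and
# weights are invented. This is NOT the paper's full IFHWED method.

def ifwa(ratings, weights):
    """Aggregate (mu, nu) intuitionistic fuzzy numbers; weights sum to 1."""
    one_minus_mu, nu = 1.0, 1.0
    for (m, n), w in zip(ratings, weights):
        one_minus_mu *= (1.0 - m) ** w
        nu *= n ** w
    return 1.0 - one_minus_mu, nu

# Three team members rate one failure mode's riskiness:
ratings = [(0.7, 0.2), (0.6, 0.3), (0.8, 0.1)]
weights = [0.40, 0.35, 0.25]
group_mu, group_nu = ifwa(ratings, weights)
```

    The appeal over a crisp RPN is that the residual hesitancy (1 − μ − ν) survives aggregation, so disagreement among raters is carried forward into the prioritisation step rather than averaged away.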

  13. MO-G-BRE-09: Validating FMEA Against Incident Learning Data: A Study in Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, F; Cao, N; Young, L

    2014-06-15

    Purpose: Though FMEA (Failure Mode and Effects Analysis) is becoming more widely adopted for risk assessment in radiation therapy, to our knowledge it has never been validated against actual incident learning data. The objective of this study was to perform an FMEA of an SBRT (Stereotactic Body Radiation Therapy) treatment planning process and validate it against data recorded within an incident learning system. Methods: FMEA on the SBRT treatment planning process was carried out by a multidisciplinary group including radiation oncologists, medical physicists, and dosimetrists. Potential failure modes were identified through a systematic review of the workflow process. Failure modes were rated for severity, occurrence, and detectability on a scale of 1 to 10, and the RPN (Risk Priority Number) was computed. Failure modes were then compared with historical reports identified as relevant to SBRT planning within a departmental incident learning system that had been active for two years. Differences were identified. Results: FMEA identified 63 failure modes. RPN values for the top 25% of failure modes ranged from 60 to 336. Analysis of the incident learning database identified 33 reported near-miss events related to SBRT planning. FMEA failed to anticipate 13 of these events, among which 3 were registered with severity ratings of severe or critical in the incident learning system. Combining both methods yielded a total of 76 failure modes; when scored for RPN, the 13 events missed by FMEA ranked within the middle half of all failure modes. Conclusion: FMEA, though valuable, is subject to certain limitations, among them a limited ability to anticipate all potential errors in a given process. This FMEA exercise failed to identify a significant number of possible errors (17%). Integration of FMEA with retrospective incident data may render an improved overview of risks within a process.
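
    At its core, the cross-validation in this study is a set comparison between anticipated and observed failure modes. A sketch with invented entries:

```python
# Invented example of comparing FMEA-anticipated failure modes with
# failure modes observed in an incident learning system.

fmea_modes = {"wrong CT dataset", "ITV margin omitted", "stale contour set"}
incident_modes = {"wrong CT dataset", "couch kick not planned",
                  "stale contour set"}

missed_by_fmea = sorted(incident_modes - fmea_modes)  # observed, not anticipated
never_observed = sorted(fmea_modes - incident_modes)  # anticipated, not yet seen
combined = fmea_modes | incident_modes                # merged failure-mode list
```

    The `missed_by_fmea` set is the study's key finding in miniature: prospective analysis alone leaves blind spots that only retrospective incident data can reveal.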

  14. Use of failure mode, effect and criticality analysis to improve safety in the medication administration process.

    PubMed

    Rodriguez-Gonzalez, Carmen Guadalupe; Martin-Barbero, Maria Luisa; Herranz-Alonso, Ana; Durango-Limarquez, Maria Isabel; Hernandez-Sampelayo, Paloma; Sanjurjo-Saez, Maria

    2015-08-01

    To critically evaluate the causes of preventable adverse drug events during the nurse medication administration process in inpatient units with computerized prescription order entry and profiled automated dispensing cabinets in order to prioritize interventions that need to be implemented and to evaluate the impact of specific interventions on the criticality index. This is a failure mode, effects and criticality analysis (FMECA) study. A multidisciplinary consensus committee composed of pharmacists, nurses and doctors evaluated the process of administering medications in a hospital setting in Spain. By analysing the process, all failure modes were identified and criticality was determined by rating severity, frequency and likelihood of failure detection on a scale of 1 to 10, using adapted versions of already published scales. Safety strategies were identified and prioritized. Through consensus, the committee identified eight processes and 40 failure modes, of which 20 were classified as high risk. The sum of the criticality indices was 5254. For the potential high-risk failure modes, 21 different potential causes were found resulting in 24 recommendations. Thirteen recommendations were prioritized and developed over a 24-month period, reducing total criticality from 5254 to 3572 (a 32.0% reduction). The recommendations with a greater impact on criticality were the development of an electronic medication administration record (-582) and the standardization of intravenous drug compounding in the unit (-168). Other improvements, such as barcode medication administration technology (-1033), were scheduled for a longer period of time because of lower feasibility. FMECA is a useful approach that can improve the medication administration process. © 2015 John Wiley & Sons, Ltd.
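    The criticality reduction reported above can be checked directly from the numbers given in the abstract:

```python
# Total criticality index before and after the prioritized interventions,
# as reported in the abstract (sum of criticality indices of all failure modes).
before, after = 5254, 3572
reduction_pct = (before - after) / before * 100
print(f"{reduction_pct:.1f}% reduction")  # matches the reported 32.0%
```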

  15. Using failure mode and effects analysis to improve the safety of neonatal parenteral nutrition.

    PubMed

    Arenas Villafranca, Jose Javier; Gómez Sánchez, Araceli; Nieto Guindo, Miriam; Faus Felipe, Vicente

    2014-07-15

    Failure mode and effects analysis (FMEA) was used to identify potential errors and to enable the implementation of measures to improve the safety of neonatal parenteral nutrition (PN). FMEA was used to analyze the preparation and dispensing of neonatal PN from the perspective of the pharmacy service in a general hospital. A process diagram was drafted, illustrating the different phases of the neonatal PN process. Next, the failures that could occur in each of these phases were compiled and cataloged, and a questionnaire was developed in which respondents were asked to rate the following aspects of each error: incidence, detectability, and severity. The highest scoring failures were considered high risk and identified as priority areas for improvements to be made. The evaluation process detected a total of 82 possible failures. Among the phases with the highest number of possible errors were transcription of the medical order, formulation of the PN, and preparation of material for the formulation. After the classification of these 82 possible failures and of their relative importance, a checklist was developed to achieve greater control in the error-detection process. FMEA demonstrated that use of the checklist reduced the level of risk and improved the detectability of errors. FMEA was useful for detecting medication errors in the PN preparation process and enabling corrective measures to be taken. A checklist was developed to reduce errors in the most critical aspects of the process. Copyright © 2014 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  16. A study of discrete control signal fault conditions in the shuttle DPS

    NASA Technical Reports Server (NTRS)

    Reddi, S. S.; Retter, C. T.

    1976-01-01

    An analysis of the effects of discrete failures on the data processing subsystem is presented. A functional description of each discrete, together with a list of the software modules that use it, is included. A qualitative description of the consequences that may ensue from discrete failures is given, followed by a probabilistic reliability analysis of the data processing subsystem. Based on the investigation conducted, recommendations were made to improve the reliability of the subsystem.

  17. Optimisation of shock absorber process parameters using failure mode and effect analysis and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal

    2013-07-01

    The various process parameters affecting the quality characteristics of the shock absorber were identified using the Ishikawa diagram and failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure, and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting process parameters (flowability, coating thickness, pointage, and temperature). In this paper, the painting and washing process parameters are optimized by the Taguchi method. Although the defects are substantially reduced by the Taguchi method, the genetic algorithm technique is then applied to the Taguchi-optimized parameters in order to approach zero defects during the processes.
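    The second-stage refinement described above can be sketched with a simple genetic algorithm; the parameter bounds and the defect-rate model below are hypothetical stand-ins for the paper's process data, not values from the study:

```python
import random

random.seed(0)

# Hypothetical search space: two painting-process parameters with bounds,
# standing in for the region suggested by a Taguchi-style screening step.
BOUNDS = {"temperature": (20.0, 60.0), "coating_thickness": (10.0, 50.0)}

def defects(params):
    # Hypothetical defect-rate surface with its minimum at (40, 30).
    t, c = params["temperature"], params["coating_thickness"]
    return (t - 40.0) ** 2 + 0.5 * (c - 30.0) ** 2

def random_individual():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def crossover(a, b):
    # Uniform crossover: each parameter inherited from either parent.
    return {k: random.choice((a[k], b[k])) for k in BOUNDS}

def mutate(ind, rate=0.2):
    # Gaussian perturbation, clamped to the parameter bounds.
    for k, (lo, hi) in BOUNDS.items():
        if random.random() < rate:
            ind[k] = min(hi, max(lo, ind[k] + random.gauss(0.0, 2.0)))
    return ind

population = [random_individual() for _ in range(30)]
for _ in range(50):
    population.sort(key=defects)            # lower defect rate is better
    parents = population[:10]               # elitist truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = min(population, key=defects)
print(best, defects(best))
```

    The elitist selection (unmodified parents carried over each generation) guarantees the best defect rate never worsens, which is why the GA can only improve on the Taguchi starting point.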

  18. 40 CFR 68.67 - Process hazard analysis.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) CHEMICAL ACCIDENT PREVENTION PROVISIONS Program 3 Prevention Program § 68.67 Process hazard analysis. (a... instrumentation with alarms, and detection hardware such as hydrocarbon sensors.); (4) Consequences of failure of...

  19. 40 CFR 68.67 - Process hazard analysis.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) CHEMICAL ACCIDENT PREVENTION PROVISIONS Program 3 Prevention Program § 68.67 Process hazard analysis. (a... instrumentation with alarms, and detection hardware such as hydrocarbon sensors.); (4) Consequences of failure of...

  20. Uncertainty analysis as essential step in the establishment of the dynamic Design Space of primary drying during freeze-drying.

    PubMed

    Mortier, Séverine Thérèse F C; Van Bockstal, Pieter-Jan; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas

    2016-06-01

    Large molecules, such as biopharmaceuticals, are considered the key driver of growth for the pharmaceutical industry. Freeze-drying is the preferred way to stabilise these products when needed. However, it is an expensive, inefficient, time- and energy-consuming process. During freeze-drying, there are only two main process variables to be set, i.e. the shelf temperature and the chamber pressure, preferably in a dynamic way. This manuscript focuses on the essential use of uncertainty analysis for the determination and experimental verification of the dynamic primary drying Design Space for pharmaceutical freeze-drying. Traditionally, the chamber pressure and shelf temperature are kept constant during primary drying, leading to less optimal process conditions. In this paper it is demonstrated how a mechanistic model of the primary drying step gives the opportunity to determine the optimal dynamic values for both process variables during processing, resulting in a dynamic Design Space with a well-known risk of failure. This allows running the primary drying process step as time-efficiently as possible, thereby guaranteeing that the temperature at the sublimation front does not exceed the collapse temperature. The Design Space is the multidimensional combination and interaction of input variables and process parameters leading to the expected product specifications with a controlled (i.e., high) probability. Therefore, inclusion of parameter uncertainty is an essential part of the definition of the Design Space, although it is often neglected. To quantitatively assess the inherent uncertainty on the parameters of the mechanistic model, an uncertainty analysis was performed to establish the borders of the dynamic Design Space, i.e. a time-varying shelf temperature and chamber pressure, associated with a specific risk of failure. A risk of failure acceptance level of 0.01%, i.e. a 'zero-failure' situation, results in an increased primary drying process time compared to the deterministic dynamic Design Space; however, the risk of failure is under control. Experimental verification revealed that only a risk of failure acceptance level of 0.01% yielded a guaranteed zero-defect quality end-product. The computed process settings with a risk of failure acceptance level of 0.01% resulted in a decrease of more than half of the primary drying time in comparison with a regular, conservative cycle with fixed settings. Copyright © 2016. Published by Elsevier B.V.
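    The role of uncertainty analysis described in this record can be illustrated with a toy model; the collapse temperature, the lumped uncertain offset, and its distribution below are hypothetical, not taken from the paper:

```python
import random

random.seed(1)

# Toy stand-in for the mechanistic model: the sublimation-front temperature
# is the shelf temperature plus an uncertain offset (all model-parameter
# uncertainty lumped into one normal term). Numbers are hypothetical.
T_COLLAPSE = -30.0                 # degC; the front must stay below this
N = 100_000
offsets = [random.gauss(-5.0, 1.0) for _ in range(N)]  # sampled uncertainty

def risk_of_failure(t_shelf):
    # Fraction of parameter samples for which the front exceeds collapse.
    return sum(1 for o in offsets if t_shelf + o > T_COLLAPSE) / N

# Deterministic setting (mean offset only) vs. the 0.01%-risk setting:
t_deterministic = T_COLLAPSE + 5.0              # ignores uncertainty
q = sorted(offsets)[int(0.9999 * N)]            # ~99.99th percentile offset
t_safe = T_COLLAPSE - q                         # shelf T at 0.01% risk

print(f"deterministic shelf T: {t_deterministic:.2f} C, "
      f"risk {100 * risk_of_failure(t_deterministic):.1f}%")
print(f"0.01%-risk shelf T:    {t_safe:.2f} C, "
      f"risk {100 * risk_of_failure(t_safe):.3f}%")
```

    The sketch reproduces the qualitative conclusion of the abstract: the risk-aware setting is more conservative (a lower shelf temperature, hence longer drying) than the deterministic one, but its probability of exceeding the collapse temperature is controlled.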

  1. Abduction of Toe-excavation Induced Failure Process from LEM and FDM for a Dip Slope with Rock Anchorage in Taiwan

    NASA Astrophysics Data System (ADS)

    Huang, W.-S.; Lin, M.-L.; Liu, H.-C.; Lin, H.-H.

    2012-04-01

    On April 25, 2010, without rainfall or earthquake triggering, a massive landslide (200,000 m³) covered a 200 m stretch of Taiwan's National Freeway No. 3, killing 4 people, burying three cars and destroying a bridge. The failure appears to be a dip-slope type failure that occurred on a rock-anchored cut slope. The strike of the Tertiary sedimentary strata is northeast-southwest, dipping 15° toward the southeast. Based on the investigations of the Taiwan Geotechnical Society, three possible factors contributed to the failure mechanism: (1) toe excavation during construction in 1998 daylighted the sliding layer and induced strength reduction within it; it also caused anchor loads to increase rapidly toward their ultimate capacity; (2) although the excavated area was soon stabilized with rock anchors and backfill, weathering and groundwater infiltration reduced the strength of the overlying rock mass; (3) possible corrosion and aging of the ground anchors deteriorated their load capacity. Considering that the strength of the sliding layer had been reduced from peak to residual by the disturbance of excavation, limit equilibrium method (LEM) analysis was first used in the back analysis. The results showed that the stability of the slope approached the critical state (F.S. ≈ 1). The efficiency reduction of the rock anchors and the strength reduction of the overlying stratum (sandstone) were considered in the subsequent analysis, whose results showed an unstable condition (F.S. < 1). This research also utilized laboratory test results, the geological strength index (GSI) and the finite difference method (FDM, FLAC 5.0) to examine the failure process under the interaction of toe-excavation disturbance, rock-mass weathering, groundwater infiltration and anchor-efficiency reduction on the stability of the slope. The analysis indicated that the incremental anchor loads show a tendency similar to the monitoring records during the toe-excavation stages, confirming that the strength of the sliding layer was significantly influenced by the toe excavation. The numerical model, calibrated against monitoring records from the excavation stage, was then used to examine the failure process after backfilling; the results showed how the different factors interact in the failure process. Keywords: dip slope failure, rock anchor, LEM, FDM, GSI, back analysis

  2. Recognising and referring children exposed to domestic abuse: a multi-professional, proactive systems-based evaluation using a modified Failure Mode and Effects Analysis (FMEA).

    PubMed

    Ashley, Laura; Armitage, Gerry; Taylor, Julie

    2017-03-01

    Failure Modes and Effects Analysis (FMEA) is a prospective quality assurance methodology increasingly used in healthcare, which identifies potential vulnerabilities in complex, high-risk processes and generates remedial actions. We aimed, for the first time, to apply FMEA in a social care context to evaluate the process for recognising and referring children exposed to domestic abuse within one Midlands city safeguarding area in England. A multidisciplinary, multi-agency team of 10 front-line professionals undertook the FMEA, using a modified methodology, over seven group meetings. The FMEA included mapping out the process under evaluation to identify its component steps, identifying failure modes (potential errors) and possible causes for each step and generating corrective actions. In this article, we report the output from the FMEA, including illustrative examples of the failure modes and corrective actions generated. We also present an analysis of feedback from the FMEA team and provide future recommendations for the use of FMEA in appraising social care processes and practice. Although challenging, the FMEA was unequivocally valuable for team members and generated a significant number of corrective actions locally for the safeguarding board to consider in its response to children exposed to domestic abuse. © 2016 John Wiley & Sons Ltd.

  3. Practical Implementation of Failure Mode and Effects Analysis for Safety and Efficiency in Stereotactic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younge, Kelly Cooper, E-mail: kyounge@med.umich.edu; Wang, Yizhen; Thompson, John

    2015-04-01

    Purpose: To improve the safety and efficiency of a new stereotactic radiosurgery program with the application of failure mode and effects analysis (FMEA) performed by a multidisciplinary team of health care professionals. Methods and Materials: Representatives included physicists, therapists, dosimetrists, oncologists, and administrators. A detailed process tree was created from an initial high-level process tree to facilitate the identification of possible failure modes. Group members were asked to determine failure modes that they considered to be the highest risk before scoring failure modes. Risk priority numbers (RPNs) were determined by each group member individually and then averaged. Results: A total of 99 failure modes were identified. The 5 failure modes with an RPN above 150 were further analyzed to attempt to reduce these RPNs. Only 1 of the initial items that the group presumed to be high-risk (magnetic resonance imaging laterality reversed) was ranked in these top 5 items. New process controls were put in place to reduce the severity, occurrence, and detectability scores for all of the top 5 failure modes. Conclusions: FMEA is a valuable team activity that can assist in the creation or restructuring of a quality assurance program with the aim of improved safety, quality, and efficiency. Performing the FMEA helped group members to see how they fit into the bigger picture of the program, and it served to reduce biases and preconceived notions about which elements of the program were the riskiest.

  4. Failure analysis of aluminum alloy components

    NASA Technical Reports Server (NTRS)

    Johari, O.; Corvin, I.; Staschke, J.

    1973-01-01

    Analysis of six service failures in aluminum alloy components that failed in aerospace applications is reported. Identification of fracture surface features from fatigue and overload modes was straightforward, though the specimens were not always in the clean, smear-free condition most suitable for failure analysis. The presence of corrosion products and of chemically attacked or mechanically rubbed areas hindered precise determination of the cause of crack initiation, which was then indirectly inferred from the scanning electron fractography results. In five failures the crack propagation was by fatigue, though in each case the fatigue crack initiated from a different cause. Some of these causes could be eliminated in future components by better process control. In one failure, the cause was determined to be impact during a crash; the features of impact fracture were distinguished from overload fractures by direct comparison of the received specimens with laboratory-generated failures.

  5. Development of a parallel FE simulator for modeling the whole trans-scale failure process of rock from meso- to engineering-scale

    NASA Astrophysics Data System (ADS)

    Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao

    2017-01-01

    Multi-scale, high-resolution modeling of the rock failure process is a powerful means in modern rock mechanics to reveal complex failure mechanisms and to evaluate engineering risks. However, multi-scale continuous modeling of rock, from deformation and damage to failure, places high demands on the design, implementation and computational capacity of the numerical software system. This study develops a parallel finite element procedure, a parallel rock failure process analysis (RFPA) simulator, capable of modeling the whole trans-scale failure process of rock. Based on the statistical meso-damage mechanical method, the RFPA simulator can construct heterogeneous rock models with multiple mechanical properties and represent the trans-scale propagation of cracks, with the stress and strain fields solved for the damage evolution analysis of each representative volume element by the parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Windows-Linux interactive platform. A numerical model is built to test the parallel performance of the FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, a field-scale net fracture spacing example and an engineering-scale rock slope example. The simulation results indicate that relatively high speedup and computational efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In the laboratory-scale simulation, well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, are reproduced. In the field-scale simulation, the formation of net fracture spacing, from initiation and propagation to saturation, is revealed completely. In the engineering-scale simulation, the whole progressive failure process of the rock slope is well modeled. It is shown that the parallel FE simulator developed in this study is an efficient tool for modeling the whole trans-scale failure process of rock from meso- to engineering-scale.

  6. [Failure mode and effects analysis to improve quality in clinical trials].

    PubMed

    Mañes-Sevilla, M; Marzal-Alfaro, M B; Romero Jiménez, R; Herranz-Alonso, A; Sanchez Fresneda, M N; Benedi Gonzalez, J; Sanjurjo-Sáez, M

    The failure mode and effects analysis (FMEA) has been used as a tool for risk management and quality improvement. The objective of this study is to identify the weaknesses in processes in the clinical trials area of a Pharmacy Department (PD) with substantial research activity, in order to improve the safety of the usual procedures. A multidisciplinary team was created to analyse each of the critical points, identified as possible failure modes, in the conduct of clinical trials in the PD. For each failure mode, the possible cause and effect were identified, criticality was calculated using the risk priority number, and possible corrective actions were discussed. Six sub-processes were defined in the conduct of clinical trials in the PD. The FMEA identified 67 failure modes, with the dispensing and prescription/validation sub-processes being the most likely to generate errors. All the improvement actions established in the FMEA were implemented in the clinical trials area. The FMEA is a useful tool in proactive risk management because it allows us to identify where mistakes are made, analyse their causes, prioritize them, and adopt risk-reducing solutions. The FMEA improves process safety and quality in the PD. Copyright © 2018 SECA. Published by Elsevier España, S.L.U. All rights reserved.

  7. Failure Analysis of Cracked FS-85 Tubing and ASTAR-811C End Caps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ME Petrichek

    2006-02-09

    Failure analyses were performed on cracked FS-85 tubing and ASTAR-811C end caps, which had been fabricated as components of biaxial creep specimens meant to support materials testing for the NR Space program. During the failure analyses of the cracked FS-85 tubing, it was determined that the failure could potentially be due to two effects: possible copper contamination from the EDM (electro-discharge machined) recast layer and/or an insufficient solution anneal. To prevent similar failures in the future, a more formal analysis should be done after each processing step to ensure the quality of the material before further processing. During machining of the ASTAR-811C rod to form end caps for biaxial creep specimens, linear defects were observed along the center portion of the end caps. These defects were found only in material that was processed from the top portion of the ingot. The linear defects were attributed to a probable residual ingot pipe that was not removed from the ingot. During the subsequent processing of the ingot to rod, the processing temperatures were not high enough to allow self-healing of the ingot's residual pipe defect. To prevent this from occurring in the future, it is necessary to ensure that complete removal of the as-melted ingot pipe is verified by suitable non-destructive evaluation (NDE).

  8. The Utility of Failure Modes and Effects Analysis of Consultations in a Tertiary, Academic, Medical Center.

    PubMed

    Niv, Yaron; Itskoviz, David; Cohen, Michal; Hendel, Hagit; Bar-Giora, Yonit; Berkov, Evgeny; Weisbord, Irit; Leviron, Yifat; Isasschar, Assaf; Ganor, Arian

    Failure modes and effects analysis (FMEA) is a tool used to identify potential risks in health care processes. We used the FMEA tool to improve the consultation process in an academic medical center. A team of 10 staff members (5 physicians, 2 quality experts, 2 organizational consultants, and 1 nurse) was established. The consultation process steps, from ordering to delivery, were mapped out. Failure modes were assessed for likelihood of occurrence, detection, and severity, and a risk priority number (RPN) was calculated. An interventional plan was designed according to the highest RPNs. Thereafter, we compared the percentage of completed computer-based documented consultations before and after the intervention. The team identified 3 main categories of failure modes that reached the highest RPNs: initiation of a consultation by a junior staff physician without senior approval, failure to document the consultation in the computerized patient registry, and requesting a consultation by telephone. An interventional plan was designed, including meetings to update knowledge of the consultation request process, stressing the importance of approval by a senior physician, training sessions for closing requests in the patient file, and reporting of telephone requests. The number of electronically documented consultation results and recommendations increased significantly (75%) after the intervention. FMEA is an important and efficient tool for improving the consultation process in an academic medical center.

  9. Modelling of Safety Instrumented Systems by using Bernoulli trials: towards the notion of odds on for SIS failures analysis

    NASA Astrophysics Data System (ADS)

    Cauffriez, Laurent

    2017-01-01

    This paper deals with the modeling of the random failure process of a Safety Instrumented System (SIS). It aims to identify the expected number of failures for a SIS during its lifecycle. Because a SIS is tested periodically, Bernoulli trials suggest themselves for characterizing its random failure process and for verifying whether the experimentally obtained PFD (Probability of Failing Dangerously) agrees with the theoretical one. Moreover, the notion of "odds on" found in Bernoulli theory allows engineers and scientists to determine easily the ratio between "outcomes with success: failure of the SIS" and "outcomes without success: no failure of the SIS", and to confirm that SIS failures occur sporadically. A stochastic P-temporised Petri net is proposed and serves as a reference model for describing the failure process of a 1oo1 SIS architecture. Simulations of this stochastic Petri net demonstrate that, during its lifecycle, the SIS is rarely in a state in which it cannot perform its mission. Experimental results are compared to Bernoulli trials in order to validate the power of Bernoulli trials for modeling the failure process of a SIS. The determination of the expected number of failures for a SIS during its lifecycle opens interesting research perspectives for engineers and scientists by complementing the notion of PFD.
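    The Bernoulli-trial view of periodic proof testing described above can be sketched as follows; the PFD value, number of proof tests, and lifecycle length are illustrative, not taken from the paper:

```python
import random

random.seed(42)

# Each periodic proof test of the SIS is treated as a Bernoulli trial that
# finds the system failed with probability PFD. Numbers are hypothetical
# (a SIL 2-range PFD, annual proof tests over a 15-year lifecycle).
PFD = 5e-3          # probability of finding the SIS failed at a proof test
TESTS = 15          # proof tests over the lifecycle

expected_failures = PFD * TESTS        # mean of a binomial(TESTS, PFD)
odds_on = PFD / (1.0 - PFD)            # "failure : no failure" odds per test

# Monte Carlo over many simulated lifecycles to check the expectation.
runs = 100_000
total = sum(sum(random.random() < PFD for _ in range(TESTS))
            for _ in range(runs))

print(f"expected failures per lifecycle: {expected_failures:.3f}")
print(f"odds on a failure at one test:   1 : {1 / odds_on:.0f}")
print(f"simulated mean failures:         {total / runs:.3f}")
```

    The simulated mean converges to the binomial expectation, and the long odds (roughly 1 : 199 per test for this illustrative PFD) reflect the abstract's observation that SIS failures occur only sporadically.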

  10. Failure mode and effects analysis drastically reduced potential risks in clinical trial conduct.

    PubMed

    Lee, Howard; Lee, Heechan; Baik, Jungmi; Kim, Hyunjung; Kim, Rachel

    2017-01-01

    Failure mode and effects analysis (FMEA) is a risk management tool used to proactively identify and assess the causes and effects of potential failures in a system, thereby preventing them from happening. The objective of this study was to evaluate the effectiveness of FMEA applied to an academic clinical trial center in a tertiary care setting. A multidisciplinary FMEA focus group at the Seoul National University Hospital Clinical Trials Center selected 6 core clinical trial processes, for which potential failure modes were identified and their risk priority number (RPN) assessed. Remedial action plans for high-risk failure modes (RPN >160) were devised, and a follow-up RPN scoring was conducted a year later. A total of 114 failure modes were identified, with RPN scores ranging from 3 to 378, driven mainly by the severity score. Fourteen failure modes were high risk, 11 of which were addressed by remedial actions. Rescoring showed a dramatic improvement, attributed to reductions in the occurrence and detection scores of >3 and >2 points, respectively. FMEA is a powerful tool for improving quality in clinical trials. The Seoul National University Hospital Clinical Trials Center is expanding its FMEA capability to other core clinical trial processes.

  11. Risk analysis of analytical validations by probabilistic modification of FMEA.

    PubMed

    Barends, D M; Oldenhof, M T; Vredenbregt, M J; Nauta, M J

    2012-05-01

    Risk analysis is a valuable addition to the validation of an analytical chemistry process, enabling detection not only of technical risks but also of risks related to human failure. Failure Mode and Effect Analysis (FMEA) can be applied, using categorical risk scoring of the occurrence, detection and severity of failure modes and calculating the Risk Priority Number (RPN) to select failure modes for correction. We propose a probabilistic modification of FMEA, replacing the categorical scoring of occurrence and detection by their estimated relative frequencies while maintaining the categorical scoring of severity. In an example, the results of a traditional FMEA of a Near Infrared (NIR) analytical procedure used for the screening of suspected counterfeit tablets are re-interpreted by this probabilistic modification. Using this probabilistic modification of FMEA, the frequency of occurrence of undetected failure modes can be estimated quantitatively for each individual failure mode, for a set of failure modes, and for the full analytical procedure. Copyright © 2012 Elsevier B.V. All rights reserved.
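    The probabilistic modification described in this record replaces the categorical occurrence and detection scores with estimated probabilities; a minimal sketch, with hypothetical failure modes and numbers not taken from the paper:

```python
# Each failure mode carries an estimated occurrence probability per
# analytical run and an estimated detection probability; severity stays
# categorical (1-10). The failure modes and values are hypothetical.
failure_modes = {
    # name: (P(occurrence per run), P(detection), categorical severity)
    "wrong reference spectrum loaded": (0.002, 0.95, 8),
    "sample mislabelled":              (0.001, 0.60, 9),
    "probe not cleaned between runs":  (0.010, 0.90, 5),
}

# Frequency of an occurrence that goes undetected, per failure mode.
undetected = {name: p_occ * (1.0 - p_det)
              for name, (p_occ, p_det, _) in failure_modes.items()}

# For the full procedure: probability that at least one failure mode
# occurs undetected in a run (assuming independence between modes).
p_any = 1.0
for p in undetected.values():
    p_any *= (1.0 - p)
p_any = 1.0 - p_any

for name, p in sorted(undetected.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: P(undetected) = {p:.2e}")
print(f"P(any undetected failure per run) = {p_any:.2e}")
```

    Unlike an RPN, these quantities are directly interpretable as frequencies, which is the advantage the abstract claims for the probabilistic scoring.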

  12. Independent Orbiter Assessment (IOA): Analysis of the DPS subsystem

    NASA Technical Reports Server (NTRS)

    Lowery, H. J.; Haufler, W. A.; Pietz, K. C.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis/Critical Items List (FMEA/CIL) are presented. The IOA approach features a top-down analysis of the hardware to independently determine failure modes, criticality, and potential critical items. The independent analysis results corresponding to the Orbiter Data Processing System (DPS) hardware are documented. The DPS hardware is required for performing critical functions of data acquisition, data manipulation, data display, and data transfer throughout the Orbiter. Specifically, the DPS hardware consists of the following components: Multiplexer/Demultiplexer (MDM); General Purpose Computer (GPC); Multifunction CRT Display System (MCDS); Data Buses and Data Bus Couplers (DBC); Data Bus Isolation Amplifiers (DBIA); Mass Memory Unit (MMU); and Engine Interface Unit (EIU). The IOA analysis process utilized available DPS hardware drawings and schematics to define hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects, and criticality was assigned based upon the severity of the effect of each failure mode. Due to the extensive redundancy built into the DPS, the number of critical items is small; those identified resulted from premature operation and erroneous output of the GPCs.

  13. The Shuttle processing contractors (SPC) reliability program at the Kennedy Space Center - The real world

    NASA Astrophysics Data System (ADS)

    McCrea, Terry

    The Shuttle Processing Contract (SPC) workforce consists of Lockheed Space Operations Co. as prime contractor, with Grumman, Thiokol Corporation, and Johnson Controls World Services as subcontractors. During the design phase, reliability engineering is instrumental in influencing the development of systems that meet the Shuttle fail-safe program requirements. Reliability engineers accomplish this objective by performing FMEA (failure modes and effects analysis) to identify potential single failure points. When technology, time, or resources do not permit a redesign to eliminate a single failure point, the single failure point information is formatted into a change request and presented to senior management of SPC and NASA for risk acceptance. In parallel with the FMEA, safety engineering conducts a hazard analysis to assure that potential hazards to personnel are assessed. The combined effort (FMEA and hazard analysis) is published as a system assurance analysis. Special ground rules and techniques are developed to perform and present the analysis. The reliability program at KSC is vigorously pursued, and has been extremely successful. The ground support equipment and facilities used to launch and land the Space Shuttle maintain an excellent reliability record.

  14. Space Shuttle Main Engine structural analysis and data reduction/evaluation. Volume 1: Aft Skirt analysis

    NASA Technical Reports Server (NTRS)

    Berry, David M.; Stansberry, Mark

    1989-01-01

    Using the ANSYS finite element program, a global model of the aft skirt and a detailed nonlinear model of the failure region were made. The analysis confirmed the area of failure in both the STA-2B and STA-3 tests as the forging heat-affected zone (HAZ) at the aft ring centerline. The highest hoop strain in the HAZ occurs in this area. However, the analysis does not predict failure as defined by ultimate elongation of the material equal to 3.5 percent total strain. The analysis correlates well with the strain gage data from both the Wyle influence test of the original-design aft skirt and the STA-3 test of the redesigned aft skirt. It is suggested that the sensitivity of the failure-area material strength and stress/strain state to material properties, and therefore to small manufacturing or processing variables, is the most likely cause of failure below the expected material ultimate properties.

  15. Application of Failure Mode and Effect Analysis (FMEA) and cause and effect analysis in conjunction with ISO 22000 to a snails (Helix aspersa) processing plant; A case study.

    PubMed

    Arvanitoyannis, Ioannis S; Varzakas, Theodoros H

    2009-08-01

    Failure Mode and Effect Analysis (FMEA) has been applied for the risk assessment of snails manufacturing. A tentative approach of FMEA application to the snails industry was attempted in conjunction with ISO 22000. Preliminary Hazard Analysis was used to analyze and predict the occurring failure modes in a food chain system (snails processing plant), based on the functions, characteristics, and/or interactions of the ingredients or the processes upon which the system depends. Critical Control Points have been identified and implemented in the cause and effect diagram (also known as the Ishikawa, tree, or fishbone diagram). In this work a comparison of the ISO 22000 analysis with HACCP is carried out over snails processing and packaging. However, the main emphasis was put on the quantification of risk assessment by determining the RPN per identified processing hazard. Sterilization of tins, bioaccumulation of heavy metals, packaging of shells, and poisonous mushrooms were the processes identified as the ones with the highest RPN (280, 240, 147, and 144, respectively) and corrective actions were undertaken. Following the application of corrective actions, a second calculation of RPN values was carried out, leading to considerably lower values (below the upper acceptable limit of 130). It is noteworthy that the application of the Ishikawa (cause and effect, or tree) diagram led to converging results, thus corroborating the validity of conclusions derived from risk assessment and FMEA. Therefore, the incorporation of FMEA analysis within the ISO 22000 system of a snails processing industry is considered imperative.
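
    The RPN arithmetic this abstract relies on can be sketched in a few lines. This is a rough illustration only: the hazard names and the occurrence/severity/detection rankings below are hypothetical, while the acceptance limit of 130 follows the figure quoted above.

```python
# Minimal FMEA risk-prioritization sketch using the Risk Priority Number (RPN):
# RPN = Occurrence x Severity x Detection, each ranked on a 1-10 scale.
# Hazards and rankings below are hypothetical; the acceptance limit of 130
# is the upper acceptable value quoted in the abstract.

RPN_LIMIT = 130

def rpn(occurrence, severity, detection):
    """Risk Priority Number: product of the three 1-10 rankings."""
    for score in (occurrence, severity, detection):
        if not 1 <= score <= 10:
            raise ValueError("FMEA rankings must lie between 1 and 10")
    return occurrence * severity * detection

# (hazard, occurrence, severity, detection) -- illustrative values only
hazards = [
    ("tin sterilization failure",    7, 8, 5),
    ("heavy-metal bioaccumulation",  6, 8, 5),
    ("shell fragments in packaging", 7, 7, 3),
]

for name, o, s, d in hazards:
    score = rpn(o, s, d)
    verdict = "corrective action required" if score > RPN_LIMIT else "acceptable"
    print(f"{name}: RPN={score} ({verdict})")
```

    After a corrective action, the three factors are re-scored and the RPN recomputed; that re-scoring step is how studies of this kind verify that all hazards have fallen below the acceptance limit.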

  16. Application of Failure Mode and Effect Analysis (FMEA), cause and effect analysis, and Pareto diagram in conjunction with HACCP to a corn curl manufacturing plant.

    PubMed

    Varzakas, Theodoros H; Arvanitoyannis, Ioannis S

    2007-01-01

    The Failure Mode and Effect Analysis (FMEA) model has been applied for the risk assessment of corn curl manufacturing. A tentative approach of FMEA application to the snacks industry was attempted in an effort to exclude the presence of GMOs in the final product. This is of crucial importance both from the ethics and the legislation (Regulations EC 1829/2003; EC 1830/2003; Directive EC 18/2001) points of view. The Preliminary Hazard Analysis and the Fault Tree Analysis were used to analyze and predict the occurring failure modes in a food chain system (corn curls processing plant), based on the functions, characteristics, and/or interactions of the ingredients or the processes upon which the system depends. Critical Control Points have been identified and implemented in the cause and effect diagram (also known as the Ishikawa, tree, or fishbone diagram). Finally, Pareto diagrams were employed toward optimizing the GMO detection potential of FMEA.
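
    The Pareto step mentioned above amounts to ranking failure modes by RPN and accumulating their share of total risk, so that the dominant few stand out. A minimal sketch, with hypothetical failure modes and RPN values:

```python
# Pareto analysis over FMEA results: sort failure modes by descending RPN and
# report each mode's cumulative share of the total risk. All names and RPN
# values below are hypothetical.

modes = {
    "GMO cross-contamination": 250,
    "metal fragments":          90,
    "moisture uptake":          60,
    "label misprint":           20,
}

total = sum(modes.values())
cumulative = 0
for name, score in sorted(modes.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += score
    print(f"{name}: RPN={score}, cumulative share={100 * cumulative / total:.0f}%")
```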

  17. Low-thrust mission risk analysis, with application to a 1980 rendezvous with the comet Encke

    NASA Technical Reports Server (NTRS)

    Yen, C. L.; Smith, D. B.

    1973-01-01

    A computerized failure process simulation procedure is used to evaluate the risk in a solar electric space mission. The procedure uses currently available thrust-subsystem reliability data and performs approximate simulations of the thrust subsystem burn operation, the system failure processes, and the retargeting operations. The method is applied to assess the risks in carrying out a 1980 rendezvous mission to the comet Encke. Analysis of the results and evaluation of the effects of various risk factors on the mission show that system component failure rates are the limiting factors in attaining a high mission reliability. It is also shown that a well-designed trajectory and system operation mode can be used effectively to partially compensate for unreliable thruster performance.

  18. Failure Modes and Effects Analysis of bilateral same-day cataract surgery

    PubMed Central

    Shorstein, Neal H.; Lucido, Carol; Carolan, James; Liu, Liyan; Slean, Geraldine; Herrinton, Lisa J.

    2017-01-01

    PURPOSE To systematically analyze potential process failures related to bilateral same-day cataract surgery toward the goal of improving patient safety. SETTING Twenty-one Kaiser Permanente surgery centers, Northern California, USA. DESIGN Retrospective cohort study. METHODS Quality experts performed a Failure Modes and Effects Analysis (FMEA) that included an evaluation of sterile processing, pharmaceuticals, perioperative clinic and surgical center visits, and biometry. Potential failures in human factors and communication (modes) were identified. Rates of endophthalmitis, toxic anterior segment syndrome (TASS), and unintended intraocular lens (IOL) implantation were assessed in eyes having bilateral same-day surgery from 2010 through 2014. RESULTS The study comprised 4754 eyes. The analysis identified 15 significant potential failure modes. These included lapses in instrument processing, compounding errors of intracameral antibiotic that could lead to endophthalmitis or TASS, and ambiguous documentation of IOL selection by surgeons, which could lead to unintended IOL implantation. Of the study sample, 1 eye developed endophthalmitis, 1 eye had unintended IOL implantation (rates, 2 per 10 000; 95% confidence intervals [CI] 0.1–12.0 per 10 000), and no eyes developed TASS (upper 95% CI, 8 per 10 000). Recommendations included improving oversight of cleaning and sterilization practices, separating lots of compounded drugs for each eye, and enhancing IOL verification procedures. CONCLUSIONS Potential failure modes and recommended actions in bilateral same-day cataract surgery were determined using an FMEA. These findings might help improve the reliability and safety of bilateral same-day cataract surgery based on current evidence and standards. PMID:28410711

  19. Modelling Coastal Cliff Recession Based on the GIM-DDD Method

    NASA Astrophysics Data System (ADS)

    Gong, Bin; Wang, Shanyong; Sloan, Scott William; Sheng, Daichao; Tang, Chun'an

    2018-04-01

    The unpredictable and instantaneous collapse behaviour of coastal rocky cliffs may cause damage that extends significantly beyond the area of failure. Gravitational movements that occur during coastal cliff recession involve two major stages: the small deformation stage and the large displacement stage. In this paper, a method of simulating the entire progressive failure process of coastal rocky cliffs is developed based on the gravity increase method (GIM), the rock failure process analysis method and the discontinuous deformation analysis method, and it is referred to as the GIM-DDD method. The small deformation stage, which includes crack initiation, propagation and coalescence processes, and the large displacement stage, which includes block translation and rotation processes during the rocky cliff collapse, are modelled using the GIM-DDD method. In addition, acoustic emissions, stress field variations, crack propagation and failure mode characteristics are further analysed to provide insights that can be used to predict, prevent and minimize potential economic losses and casualties. The calculation and analytical results are consistent with previous studies, which indicate that the developed method provides an effective and reliable approach for performing rocky cliff stability evaluations and coastal cliff recession analyses and has considerable potential for improving the safety and protection of seaside cliff areas.

  20. SCADA alarms processing for wind turbine component failure detection

    NASA Astrophysics Data System (ADS)

    Gonzalez, E.; Reder, M.; Melero, J. J.

    2016-09-01

    Wind turbine failure and downtime can often compromise the profitability of a wind farm due to their high impact on operation and maintenance (O&M) costs. Early detection of failures can facilitate the changeover from corrective maintenance towards a predictive approach. This paper presents a cost-effective methodology that combines various alarm analysis techniques, using data from the Supervisory Control and Data Acquisition (SCADA) system, in order to detect component failures. The approach categorises the alarms according to a reviewed taxonomy, turning overwhelming data into valuable information for assessing component status. Then, different alarm analysis techniques are applied for two purposes: the evaluation of the SCADA alarm system's capability to detect failures, and the investigation of faults in some components being followed by failure occurrences in others. Various case studies are presented and discussed. The study highlights the relationship between faulty behaviour in different components, and between failures and adverse environmental conditions.

  1. Failure mode and effects analysis: an empirical comparison of failure mode scoring procedures.

    PubMed

    Ashley, Laura; Armitage, Gerry

    2010-12-01

    To empirically compare 2 different commonly used failure mode and effects analysis (FMEA) scoring procedures with respect to their resultant failure mode scores and prioritization: a mathematical procedure, where scores are assigned independently by FMEA team members and averaged, and a consensus procedure, where scores are agreed on by the FMEA team via discussion. A multidisciplinary team undertook a Healthcare FMEA of chemotherapy administration. This included mapping the chemotherapy process, identifying and scoring failure modes (potential errors) for each process step, and generating remedial strategies to counteract them. Failure modes were scored using both an independent mathematical procedure and a team consensus procedure. Almost three-fifths of the 30 failure modes generated were scored differently by the 2 procedures, and in just over one-third of cases the score discrepancy was substantial. Using the Healthcare FMEA prioritization cutoff score, almost twice as many failure modes were prioritized by the consensus procedure as by the mathematical procedure. This is the first study to empirically demonstrate that different FMEA scoring procedures can score and prioritize failure modes differently. It found considerable variability in individual team members' opinions on scores, which highlights the subjective and qualitative nature of failure mode scoring. A consensus scoring procedure may be most appropriate for FMEA as it allows variability in individuals' scores and rationales to become apparent and to be discussed and resolved by the team. It may also yield team learning and communication benefits unlikely to result from a mathematical procedure.
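
    The difference between the two procedures is easy to see in a small sketch. All failure modes, scores, and the prioritization cutoff below are hypothetical, not taken from the study.

```python
# Sketch of the two FMEA scoring procedures compared in the study: a
# mathematical procedure (independent member scores averaged) versus a
# consensus procedure (a single score agreed on in discussion).
# All failure modes, scores, and the cutoff here are hypothetical.

from statistics import mean

CUTOFF = 8  # illustrative prioritization cutoff

def mathematical_score(individual_scores):
    """Average of scores assigned independently by each team member."""
    return mean(individual_scores)

# Independent member scores vs. the score agreed on after discussion.
failure_modes = {
    "wrong infusion rate": {"individual": [4, 6, 9], "consensus": 9},
    "mislabeled drug bag": {"individual": [8, 8, 9], "consensus": 8},
}

for mode, scores in failure_modes.items():
    avg = mathematical_score(scores["individual"])
    prioritized = {"mathematical": avg >= CUTOFF,
                   "consensus": scores["consensus"] >= CUTOFF}
    print(f"{mode}: mean={avg:.1f}, consensus={scores['consensus']}, "
          f"prioritized={prioritized}")
```

    Note how the first failure mode is prioritized under the consensus score but not under the averaged one: averaging smooths away exactly the disagreement that a consensus discussion would surface and resolve.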

  2. Dynamic properties of ceramic materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grady, D.E.

    1995-02-01

    The present study offers new data and analysis on the transient shock strength and equation-of-state properties of ceramics. Various dynamic data on nine high-strength ceramics are provided, with wave profile measurements, obtained through velocity interferometry techniques, as the principal observable. Compressive failure in the shock wave front, with emphasis on brittle versus ductile mechanisms of deformation, is examined in some detail. Extensive spall strength data are provided and related to the theoretical spall strength, and to energy-based theories of the spall process. Failure waves, as a mechanism of deformation in the transient shock process, are examined. Strength and equation-of-state analysis of shock data on silicon carbide, boron carbide, tungsten carbide, silicon dioxide and aluminum nitride is presented with particular emphasis on phase transition properties for the latter two. Wave profile measurements on selected ceramics are investigated for evidence of rate-sensitive elastic precursor decay in the shock front failure process.

  3. Software dependability in the Tandem GUARDIAN system

    NASA Technical Reports Server (NTRS)

    Lee, Inhwan; Iyer, Ravishankar K.

    1995-01-01

    Based on extensive field failure data for Tandem's GUARDIAN operating system, this paper discusses the evaluation of the dependability of operational software. Software faults considered are major defects that result in processor failures and invoke backup processes to take over. The paper categorizes the underlying causes of software failures and evaluates the effectiveness of the process pair technique in tolerating software faults. A model to describe the impact of software faults on the reliability of an overall system is proposed. The model is used to evaluate the significance of key factors that determine software dependability and to identify areas for improvement. An analysis of the data shows that about 77% of processor failures that are initially considered due to software are confirmed as software problems. The analysis shows that the use of process pairs to provide checkpointing and restart (originally intended for tolerating hardware faults) allows the system to tolerate about 75% of reported software faults that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events) being different from the original execution, is a major reason for the measured software fault tolerance. Over two-thirds (72%) of measured software failures are recurrences of previously reported faults. Modeling, based on the data, shows that, in addition to reducing the number of software faults, software dependability can be enhanced by reducing the recurrence rate.

  4. Failure Analysis for Composition of Web Services Represented as Labeled Transition Systems

    NASA Astrophysics Data System (ADS)

    Nadkarni, Dinanath; Basu, Samik; Honavar, Vasant; Lutz, Robyn

    The Web service composition problem involves the creation of a choreographer that provides the interaction between a set of component services to realize a goal service. Several methods have been proposed and developed to address this problem. In this paper, we consider those scenarios where the composition process may fail due to incomplete specification of goal service requirements or due to the fact that the user is unaware of the functionality provided by the existing component services. In such cases, it is desirable to have a composition algorithm that can provide feedback to the user regarding the cause of failure in the composition process. Such feedback will help guide the user to re-formulate the goal service and iterate the composition process. We propose a failure analysis technique for composition algorithms that views Web service behavior as multiple sequences of input/output events. Our technique identifies the possible cause of composition failure and suggests possible recovery options to the user. We discuss our technique using a simple e-Library Web service in the context of the MoSCoE Web service composition framework.

  5. Failure Mode and Effect Analysis for Delivery of Lung Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perks, Julian R., E-mail: julian.perks@ucdmc.ucdavis.edu; Stanic, Sinisa; Stern, Robin L.

    2012-07-15

    Purpose: To improve the quality and safety of our practice of stereotactic body radiation therapy (SBRT), we analyzed the process following the failure mode and effects analysis (FMEA) method. Methods: The FMEA was performed by a multidisciplinary team. For each step in the SBRT delivery process, a potential failure occurrence was derived and three factors were assessed: the probability of each occurrence, the severity if the event occurs, and the probability of detection by the treatment team. A rank of 1 to 10 was assigned to each factor, and then the multiplied ranks yielded the relative risks (risk priority numbers). The failure modes with the highest risk priority numbers were then considered to implement process improvement measures. Results: A total of 28 occurrences were derived, of which nine events scored with significantly high risk priority numbers. The risk priority numbers of the highest ranked events ranged from 20 to 80. These included transcription errors of the stereotactic coordinates and machine failures. Conclusion: Several areas of our SBRT delivery were reconsidered in terms of process improvement, and safety measures, including treatment checklists and a surgical time-out, were added for our practice of gantry-based image-guided SBRT. This study serves as a guide for other users of SBRT to perform FMEA of their own practice.

  6. Independent Orbiter Assessment (IOA): Assessment of the data processing system FMEA/CIL

    NASA Technical Reports Server (NTRS)

    Lowery, H. J.; Haufler, W. A.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Data Processing System (DPS) hardware, generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the NASA FMEA/CIL baseline with proposed Post 51-L updates included. A resolution of each discrepancy from the comparison is provided through additional analysis as required. The results of that comparison is documented for the Orbiter DPS hardware.

  7. 40 CFR Appendix D to Subpart B of... - SAE J2810 Standard for Recovery Only Equipment for HFC-134a Refrigerant

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) refrigerant to be returned to a refrigerant reclamation facility that will process it to the appropriate ARI... and Assembly Processes (Process FMEA) and Effects Analysis for Machinery (Machinery FMEA). SAE... Manufacturing and Assembly Processes (Process FMEA), and Potential Failure Mode and Effects Analysis for...

  8. 40 CFR Appendix D to Subpart B of... - SAE J2810 Standard for Recovery Only Equipment for HFC-134a Refrigerant

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) refrigerant to be returned to a refrigerant reclamation facility that will process it to the appropriate ARI... and Assembly Processes (Process FMEA) and Effects Analysis for Machinery (Machinery FMEA). SAE... Manufacturing and Assembly Processes (Process FMEA), and Potential Failure Mode and Effects Analysis for...

  9. 40 CFR Appendix D to Subpart B of... - SAE J2810 Standard for Recovery Only Equipment for HFC-134a Refrigerant

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) refrigerant to be returned to a refrigerant reclamation facility that will process it to the appropriate ARI... and Assembly Processes (Process FMEA) and Effects Analysis for Machinery (Machinery FMEA). SAE... Manufacturing and Assembly Processes (Process FMEA), and Potential Failure Mode and Effects Analysis for...

  10. 40 CFR Appendix D to Subpart B of... - SAE J2810 Standard for Recovery Only Equipment for HFC-134a Refrigerant

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) refrigerant to be returned to a refrigerant reclamation facility that will process it to the appropriate ARI... and Assembly Processes (Process FMEA) and Effects Analysis for Machinery (Machinery FMEA). SAE... Manufacturing and Assembly Processes (Process FMEA), and Potential Failure Mode and Effects Analysis for...

  11. 40 CFR Appendix D to Subpart B of... - SAE J2810 Standard for Recovery Only Equipment for HFC-134a Refrigerant

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) refrigerant to be returned to a refrigerant reclamation facility that will process it to the appropriate ARI... and Assembly Processes (Process FMEA) and Effects Analysis for Machinery (Machinery FMEA). SAE... Manufacturing and Assembly Processes (Process FMEA), and Potential Failure Mode and Effects Analysis for...

  12. [Retrieval and failure analysis of surgical implants in Brazil: the need for proper regulation].

    PubMed

    Azevedo, Cesar R de Farias; Hippert, Eduardo

    2002-01-01

    This paper summarizes several cases of metallurgical failure analysis of surgical implants conducted at the Laboratory of Failure Analysis, Instituto de Pesquisas Tecnológicas (IPT), in Brazil. Failures with two stainless steel femoral compression plates, one stainless steel femoral nail plate, one Ti-6Al-4V alloy maxillary reconstruction plate, and five Nitinol wires were investigated. The results showed that the implants were not in accordance with ISO standards and presented evidence of corrosion-assisted fracture. Furthermore, some of the implants presented manufacturing/processing defects which also contributed to their premature failure. Implantation of materials that are not biocompatible may cause several types of adverse effects in the human body and lead to premature implant failure. A review of prevailing health legislation is needed in Brazil, along with the adoption of regulatory mechanisms to assure the quality of surgical implants on the market, providing for compulsory procedures in the reporting and investigation of surgical implants which have failed in service.

  13. Statistics-related and reliability-physics-related failure processes in electronics devices and products

    NASA Astrophysics Data System (ADS)

    Suhir, E.

    2014-05-01

    The well known and widely used experimental reliability "passport" of a mass manufactured electronic or photonic product — the bathtub curve — reflects the combined contribution of the statistics-related and reliability-physics (physics-of-failure)-related processes. As time progresses, the first process results in a decreasing failure rate, while the second process, associated with material aging and degradation, leads to an increasing failure rate. An attempt has been made in this analysis to assess the level of the reliability-physics-related aging process from the available bathtub curve (diagram). It is assumed that the products of interest underwent burn-in testing and therefore the obtained bathtub curve does not contain the infant-mortality portion. It has also been assumed that the two random processes in question are statistically independent, and that the failure rate of the physical process can be obtained by deducting the theoretically assessed statistical failure rate from the bathtub curve ordinates. In the numerical example carried out, the Rayleigh distribution was used for the statistical failure rate, for the sake of a relatively simple illustration. The developed methodology can be used in reliability-physics evaluations when there is a need to better understand the roles of the statistics-related and reliability-physics-related irreversible random processes in reliability evaluations. Future work should include investigations of how powerful and flexible methods and approaches of statistical mechanics can be effectively employed, in addition to reliability-physics techniques, to model the operational reliability of electronic and photonic products.
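
    The subtraction described above can be sketched numerically. Under the stated assumptions the two processes are independent, so their failure (hazard) rates add, and the physics-related rate is the bathtub ordinate minus the statistical rate; following the paper's example, the statistical part is modelled with a Rayleigh hazard. All numbers below are made up for illustration.

```python
# Decompose a measured bathtub-curve failure rate into statistical and
# reliability-physics parts, assuming the two processes are independent
# (hazard rates of independent processes add). The statistical part is
# modelled with a Rayleigh-distribution hazard, as in the paper's numerical
# example; the ordinates and sigma below are hypothetical.

def rayleigh_hazard(t, sigma):
    """Instantaneous failure (hazard) rate of a Rayleigh distribution."""
    return t / sigma**2

def physics_rate(bathtub_rate, t, sigma):
    """Physics-related rate = measured bathtub ordinate minus statistical rate."""
    residual = bathtub_rate - rayleigh_hazard(t, sigma)
    return max(residual, 0.0)  # a failure rate cannot be negative

# Hypothetical bathtub ordinates (failures per hour) at times t (hours):
bathtub = [(1.0, 0.020), (2.0, 0.024), (3.0, 0.030)]
for t, rate in bathtub:
    print(f"t={t} h: physics-related rate ~ {physics_rate(rate, t, sigma=20.0):.4f}")
```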

  14. Failure mode and effects analysis drastically reduced potential risks in clinical trial conduct

    PubMed Central

    Baik, Jungmi; Kim, Hyunjung; Kim, Rachel

    2017-01-01

    Background Failure mode and effects analysis (FMEA) is a risk management tool to proactively identify and assess the causes and effects of potential failures in a system, thereby preventing them from happening. The objective of this study was to evaluate the effectiveness of FMEA applied to an academic clinical trial center in a tertiary care setting. Methods A multidisciplinary FMEA focus group at the Seoul National University Hospital Clinical Trials Center selected 6 core clinical trial processes, for which potential failure modes were identified and their risk priority number (RPN) was assessed. Remedial action plans for high-risk failure modes (RPN >160) were devised and a follow-up RPN scoring was conducted a year later. Results A total of 114 failure modes were identified, with RPN scores ranging from 3 to 378, driven mainly by the severity score. Fourteen failure modes were of high risk, 11 of which were addressed by remedial actions. Rescoring showed a dramatic improvement, attributed to reductions in the occurrence and detection scores of >3 and >2 points, respectively. Conclusions FMEA is a powerful tool to improve quality in clinical trials. The Seoul National University Hospital Clinical Trials Center is expanding its FMEA capability to other core clinical trial processes. PMID:29089745

  15. Failure Analysis of Network Based Accessible Pedestrian Signals in Closed-Loop Operation

    DOT National Transportation Integrated Search

    2011-03-01

    The potential failure modes of a network based accessible pedestrian system were analyzed to determine the limitations and benefits of closed-loop operation. The vulnerabilities of the system are assessed using the industry standard process known as ...

  16. Using failure mode and effects analysis to plan implementation of smart i.v. pump technology.

    PubMed

    Wetterneck, Tosha B; Skibinski, Kathleen A; Roberts, Tanita L; Kleppin, Susan M; Schroeder, Mark E; Enloe, Myra; Rough, Steven S; Hundt, Ann Schoofs; Carayon, Pascale

    2006-08-15

    Failure mode and effects analysis (FMEA) was used to evaluate a smart i.v. pump as it was implemented into a redesigned medication-use process. A multidisciplinary team conducted a FMEA to guide the implementation of a smart i.v. pump that was designed to prevent pump programming errors. The smart i.v. pump was equipped with a dose-error reduction system that included a pre-defined drug library in which dosage limits were set for each medication. Monitoring for potential failures and errors occurred for three months postimplementation of FMEA. Specific measures were used to determine the success of the actions that were implemented as a result of the FMEA. The FMEA process at the hospital identified key failure modes in the medication process with the use of the old and new pumps, and actions were taken to avoid errors and adverse events. I.V. pump software and hardware design changes were also recommended. Thirteen of the 18 failure modes reported in practice after pump implementation had been identified by the team. A beneficial outcome of FMEA was the development of a multidisciplinary team that provided the infrastructure for safe technology implementation and effective event investigation after implementation. With the continual updating of i.v. pump software and hardware after implementation, FMEA can be an important starting place for safe technology choice and implementation and can produce site experts to follow technology and process changes over time. FMEA was useful in identifying potential problems in the medication-use process with the implementation of new smart i.v. pumps. Monitoring for system failures and errors after implementation remains necessary.

  17. Application of ISO 22000 and Failure Mode and Effect Analysis (FMEA) for industrial processing of salmon: a case study.

    PubMed

    Arvanitoyannis, Ioannis S; Varzakas, Theodoros H

    2008-05-01

    The Failure Mode and Effect Analysis (FMEA) model was applied for risk assessment of salmon manufacturing. A tentative approach of FMEA application to the salmon industry was attempted in conjunction with ISO 22000. Preliminary Hazard Analysis was used to analyze and predict the occurring failure modes in a food chain system (salmon processing plant), based on the functions, characteristics, and/or interactions of the ingredients or the processes upon which the system depends. Critical Control Points were identified and implemented in the cause and effect diagram (also known as the Ishikawa, tree, or fishbone diagram). In this work, a comparison of the ISO 22000 analysis with HACCP is carried out over salmon processing and packaging. However, the main emphasis was put on the quantification of risk assessment by determining the RPN per identified processing hazard. Fish receiving, casing/marking, blood removal, evisceration, filet-making, cooling/freezing, and distribution were the processes identified as the ones with the highest RPN (252, 240, 210, 210, 210, 210, and 200, respectively) and corrective actions were undertaken. After the application of corrective actions, a second calculation of RPN values was carried out, resulting in substantially lower values (below the upper acceptable limit of 130). It is noteworthy that the application of the Ishikawa (cause and effect, or tree) diagram led to converging results, thus corroborating the validity of conclusions derived from risk assessment and FMEA. Therefore, the incorporation of FMEA analysis within the ISO 22000 system of a salmon processing industry is anticipated to prove advantageous to industrialists, state food inspectors, and consumers.

  18. Independent Orbiter Assessment (IOA): Assessment of the EPD and C/remote manipulator system FMEA/CIL

    NASA Technical Reports Server (NTRS)

    Robinson, W. W.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Electrical Power Distribution and Control (EPD and C)/Remote Manipulator System (RMS) hardware, generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA analysis of the EPD and C/RMS hardware initially generated 345 failure mode worksheets and identified 117 Potential Critical Items (PCIs) before starting the assessment process. These analysis results were compared to the proposed NASA Post 51-L baseline of 132 FMEAs and 66 CIL items.

  19. A Spatial Risk Analysis of Oil Refineries within the United States

    DTIC Science & Technology

    2012-03-01

    regulator and consumer. This is especially true within the energy sector, which is composed of electrical power, oil, and gas infrastructure [10... Naphtali, "Analysis of Electrical Power and Oil and Gas Pipeline Failures," in International Federation for Information Processing, E. Goetz and S... 61-67, September 1999. [5] J. Simonoff, C. Restrepo, R. Zimmerman, and Z. Naphtali, "Analysis of Electrical Power and Oil and Gas Pipeline Failures

  20. X-framework: Space system failure analysis framework

    NASA Astrophysics Data System (ADS)

    Newman, John Steven

    Space program and space systems failures result in financial losses in the multi-hundred million dollar range every year. In addition to financial loss, space system failures may also represent the loss of opportunity, loss of critical scientific, commercial and/or national defense capabilities, as well as loss of public confidence. The need exists to improve learning and expand the scope of lessons documented and offered to the space industry project team. One of the barriers to incorporating lessons learned include the way in which space system failures are documented. Multiple classes of space system failure information are identified, ranging from "sound bite" summaries in space insurance compendia, to articles in journals, lengthy data-oriented (what happened) reports, and in some rare cases, reports that treat not only the what, but also the why. In addition there are periodically published "corporate crisis" reports, typically issued after multiple or highly visible failures that explore management roles in the failure, often within a politically oriented context. Given the general lack of consistency, it is clear that a good multi-level space system/program failure framework with analytical and predictive capability is needed. This research effort set out to develop such a model. The X-Framework (x-fw) is proposed as an innovative forensic failure analysis approach, providing a multi-level understanding of the space system failure event beginning with the proximate cause, extending to the directly related work or operational processes and upward through successive management layers. The x-fw focus is on capability and control at the process level and examines: (1) management accountability and control, (2) resource and requirement allocation, and (3) planning, analysis, and risk management at each level of management. 
The x-fw model provides an innovative failure analysis approach for acquiring a multi-level perspective on the direct and indirect causation of failures and for generating better, more consistent reports. Through this approach, failures can be more fully understood, existing programs can be evaluated, and future failures avoided. The x-fw development involved a review of the historical failure analysis and prevention literature, coupled with examination of numerous failure case studies. Analytical approaches included use of a relational failure "knowledge base" for classification and sorting of x-fw elements and attributes for each case. In addition, a novel "management mapping" technique was developed as a means of displaying an integrated snapshot of indirect causes within the management chain. Further research opportunities will extend the depth of knowledge available for many of the component level cases. In addition, the x-fw has the potential to expand the scope of space sector lessons learned, and contribute to knowledge management and organizational learning.

  1. Fault tree analysis of most common rolling bearing tribological failures

    NASA Astrophysics Data System (ADS)

    Vencl, Aleksandar; Gašić, Vlada; Stojanović, Blaža

    2017-02-01

    Wear as a tribological process has a major influence on the reliability and life of rolling bearings. Field examinations of bearing failures due to wear indicate possible causes and point to the necessary measurements for wear reduction or elimination. Wear itself is a very complex process initiated by the action of different mechanisms, and can be manifested by different wear types which are often related. However, the dominant type of wear can be approximately determined. The paper presents the classification of the most common bearing damages according to the dominant wear type, i.e. abrasive wear, adhesive wear, surface fatigue wear, erosive wear, fretting wear and corrosive wear. The wear types are correlated with the terms used in the ISO 15243 standard. Each wear type is illustrated with an appropriate photograph, and for each wear type, an appropriate description of causes and manifestations is presented. Possible causes of rolling bearing failure are used for the fault tree analysis (FTA), which was performed to determine the root causes of bearing failures. The constructed fault tree diagram for rolling bearing failure can be a useful tool for maintenance engineers.
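A core output of fault tree analysis like that described above is the set of minimal cut sets: the smallest combinations of basic events that trigger the top event. For a small tree they can be found by brute force. The sketch below uses a hypothetical bearing-failure tree with invented event names, not the tree constructed in the paper.

```python
# Brute-force minimal cut set search for a small, hypothetical fault tree.
from itertools import combinations

BASIC = ["contaminated_lubricant", "lubricant_starvation",
         "misalignment", "excessive_load"]

def top_event(on):
    """Hypothetical top event 'bearing failure': abrasive wear (contamination)
    OR adhesive wear (starvation AND load) OR fatigue (misalignment AND load)."""
    return (on["contaminated_lubricant"]
            or (on["lubricant_starvation"] and on["excessive_load"])
            or (on["misalignment"] and on["excessive_load"]))

def minimal_cut_sets():
    cuts = []
    # Iterating by increasing size and skipping supersets of known cuts
    # guarantees every appended combination is minimal.
    for r in range(1, len(BASIC) + 1):
        for combo in combinations(BASIC, r):
            on = {e: e in combo for e in BASIC}
            if top_event(on) and not any(set(c) <= set(combo) for c in cuts):
                cuts.append(combo)
    return cuts

print(minimal_cut_sets())
```

For this illustrative tree the search finds one single-event cut set (contaminated lubricant) and two two-event cut sets, which is exactly the prioritization information a maintenance engineer would read off the diagram.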

  2. Safety analysis of occupational exposure of healthcare workers to residual contaminations of cytotoxic drugs using FMECA security approach.

    PubMed

    Le, Laetitia Minh Mai; Reitter, Delphine; He, Sophie; Bonle, Franck Té; Launois, Amélie; Martinez, Diane; Prognon, Patrice; Caudron, Eric

    2017-12-01

    Handling cytotoxic drugs is associated with chemical contamination of workplace surfaces. The potential mutagenic, teratogenic and oncogenic properties of those drugs create a risk of occupational exposure for healthcare workers, from reception of starting materials to the preparation and administration of cytotoxic therapies. The Security Failure Mode Effects and Criticality Analysis (FMECA) was used as a proactive method to assess the risks involved in the chemotherapy compounding process. FMECA was carried out by a multidisciplinary team from 2011 to 2016. Potential failure modes of the process were identified based on the Risk Priority Number (RPN) that prioritizes corrective actions. Twenty-five potential failure modes were identified. Based on RPN results, the corrective actions plan was revised annually to reduce the risk of exposure and improve practices. Since 2011, 16 specific measures were implemented successively. In six years, a cumulative RPN reduction of 626 was observed, with a decrease from 912 to 286 (-69%) despite an increase of cytotoxic compounding activity of around 23.2%. In order to anticipate and prevent occupational exposure, FMECA is a valuable tool to identify, prioritize and eliminate potential failure modes for operators involved in the cytotoxic drug preparation process before the failures occur. Copyright © 2017 Elsevier B.V. All rights reserved.
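The RPN-based prioritization used in FMECA studies like this one follows a common pattern: each failure mode is scored for severity (S), occurrence (O), and detection (D) on a 1-10 scale, and corrective actions target the largest products. A minimal sketch with invented failure modes and scores (not those of the study):

```python
# Hedged sketch of FMECA-style Risk Priority Number (RPN) ranking.
# Failure modes and scores are illustrative, not from the cited study.

def rpn(severity, occurrence, detection):
    """RPN = S x O x D, each score on a 1-10 scale."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("scores must lie in 1..10")
    return severity * occurrence * detection

failure_modes = [
    ("spill during reconstitution", 6, 4, 5),
    ("glove breach while compounding", 7, 3, 6),
    ("surface contamination undetected", 5, 5, 8),
]

# Rank failure modes by descending RPN to prioritize corrective actions.
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for name, s, o, d in ranked:
    print(f"{name}: RPN={rpn(s, o, d)}")
```

Re-scoring after each corrective action and summing the drop in RPN across modes gives the kind of cumulative reduction the study reports.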

  3. Capturing a failure of an ASIC in-situ, using infrared radiometry and image processing software

    NASA Technical Reports Server (NTRS)

    Ruiz, Ronald P.

    2003-01-01

    Failures in electronic devices can sometimes be tricky to locate, especially if they are buried inside radiation-shielded containers designed to work in outer space. Such was the case with a malfunctioning ASIC (Application Specific Integrated Circuit) that was drawing excessive power at a specific temperature during temperature cycle testing. To analyze the failure, infrared radiometry (thermography) was used in combination with image processing software to locate precisely where the power was being dissipated at the moment the failure took place. The IR imaging software was used to make the image of the target and background appear as unity. As testing proceeded and the failure mode was reached, temperature changes revealed the precise location of the fault. The results gave the design engineers the information they needed to fix the problem. This paper describes the techniques and equipment used to accomplish this failure analysis.

  4. The use of failure mode and effect analysis in a radiation oncology setting: the Cancer Treatment Centers of America experience.

    PubMed

    Denny, Diane S; Allen, Debra K; Worthington, Nicole; Gupta, Digant

    2014-01-01

    Delivering radiation therapy in an oncology setting is a high-risk process where system failures are more likely to occur because of increasing utilization, complexity, and sophistication of the equipment and related processes. Healthcare failure mode and effect analysis (FMEA) is a method used to proactively detect risks to the patient in a particular healthcare process and correct potential errors before adverse events occur. FMEA is a systematic, multidisciplinary team-based approach to error prevention and enhancing patient safety. We describe our experience of using FMEA as a prospective risk-management technique in radiation oncology at a national network of oncology hospitals in the United States, capitalizing not only on the use of a team-based tool but also creating momentum across a network of collaborative facilities seeking to learn from and share best practices with each other. The major steps of our analysis across 4 sites and collectively were: choosing the process and subprocesses to be studied, assembling a multidisciplinary team at each site responsible for conducting the hazard analysis, and developing and implementing actions related to our findings. We identified 5 areas of performance improvement for which risk-reducing actions were successfully implemented across our enterprise. © 2012 National Association for Healthcare Quality.

  5. Application of ICH Q9 Quality Risk Management Tools for Advanced Development of Hot Melt Coated Multiparticulate Systems.

    PubMed

    Stocker, Elena; Becker, Karin; Hate, Siddhi; Hohl, Roland; Schiemenz, Wolfgang; Sacher, Stephan; Zimmer, Andreas; Salar-Behzadi, Sharareh

    2017-01-01

    This study aimed to apply quality risk management based on the International Conference on Harmonisation guideline Q9 for the early development stage of hot melt coated multiparticulate systems for oral administration. N-acetylcysteine crystals were coated with a formulation composed of tripalmitin and polysorbate 65. The critical quality attributes (CQAs) were initially prioritized using failure mode and effects analysis. The CQAs of the coated material were defined as particle size, taste-masking efficiency, and immediate release profile. The hot melt coating process was characterized via a flowchart, based on the identified potential critical process parameters (CPPs) and their impact on the CQAs. These CPPs were prioritized using a process failure mode, effects, and criticality analysis and their critical impact on the CQAs was experimentally confirmed using a statistical design of experiments. Spray rate, atomization air pressure, and air flow rate were identified as CPPs. Coating amount and content of polysorbate 65 in the coating formulation were identified as critical material attributes. A hazard and critical control points analysis was applied to define control strategies at the critical process points. A fault tree analysis evaluated causes for potential process failures. We successfully demonstrated that a standardized quality risk management approach optimizes the product development sustainability and supports the regulatory aspects. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  6. Independent Orbiter Assessment (IOA): Analysis of the atmospheric revitalization pressure control subsystem

    NASA Technical Reports Server (NTRS)

    Saiidi, M. J.; Duffy, R. E.; Mclaughlin, T. D.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis/Critical Items List (FMEA/CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results corresponding to the Orbiter Atmospheric Revitalization and Pressure Control Subsystem (ARPCS) are documented. The ARPCS hardware was categorized into the following subdivisions: (1) Atmospheric Make-up and Control (including the Auxiliary Oxygen Assembly, Oxygen Assembly, and Nitrogen Assembly); and (2) Atmospheric Vent and Control (including the Positive Relief Vent Assembly, Negative Relief Vent Assembly, and Cabin Vent Assembly). The IOA analysis process utilized available ARPCS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  7. Independent Orbiter Assessment (IOA): Analysis of the mechanical actuation subsystem

    NASA Technical Reports Server (NTRS)

    Bacher, J. L.; Montgomery, A. D.; Bradway, M. W.; Slaughter, W. T.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Mechanical Actuation System (MAS) hardware. Specifically, the MAS hardware consists of the following components: Air Data Probe (ADP); Elevon Seal Panel (ESP); External Tank Umbilical (ETU); Ku-Band Deploy (KBD); Payload Bay Doors (PBD); Payload Bay Radiators (PBR); Personnel Hatches (PH); Vent Door Mechanism (VDM); and Startracker Door Mechanism (SDM). The IOA analysis process utilized available MAS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  8. Application of ISO22000, failure mode and effect analysis (FMEA), cause and effect diagrams and Pareto in conjunction with HACCP and risk assessment for processing of pastry products.

    PubMed

    Varzakas, Theodoros H

    2011-09-01

    The Failure Mode and Effect Analysis (FMEA) model has been applied for the risk assessment of pastry processing. A tentative approach of FMEA application to the pastry industry was attempted in conjunction with ISO22000. Preliminary Hazard Analysis was used to analyze and predict the occurring failure modes in a food chain system (pastry processing plant), based on the functions, characteristics, and/or interactions of the ingredients or the processes, upon which the system depends. Critical Control Points have been identified and implemented in the cause and effect diagram (also known as Ishikawa, tree diagram, and fishbone diagram). In this work a comparison of ISO22000 analysis with HACCP is carried out over pastry processing and packaging. However, the main emphasis was put on the quantification of risk assessment by determining the Risk Priority Number (RPN) per identified processing hazard. Storage of raw materials and storage of final products at -18°C followed by freezing were the processes identified as the ones with the highest RPN (225, 225, and 144, respectively) and corrective actions were undertaken. Following the application of corrective actions, a second calculation of RPN values was carried out leading to considerably lower values (below the upper acceptable limit of 130). It is noteworthy that the application of Ishikawa (Cause and Effect or Tree diagram) led to converging results thus corroborating the validity of conclusions derived from risk assessment and FMEA. Therefore, the incorporation of FMEA analysis within the ISO22000 system of a pastry processing industry is considered imperative.

  9. Large-scale data analysis of power grid resilience across multiple US service regions

    NASA Astrophysics Data System (ADS)

    Ji, Chuanyi; Wei, Yun; Mei, Henry; Calzada, Jorge; Carey, Matthew; Church, Steve; Hayes, Timothy; Nugent, Brian; Stella, Gregory; Wallace, Matthew; White, Joe; Wilcox, Robert

    2016-05-01

    Severe weather events frequently result in large-scale power failures, affecting millions of people for extended durations. However, the lack of comprehensive, detailed failure and recovery data has impeded large-scale resilience studies. Here, we analyse data from four major service regions representing Upstate New York during Super Storm Sandy and daily operations. Using non-stationary spatiotemporal random processes that relate infrastructural failures to recoveries and cost, our data analysis shows that local power failures have a disproportionally large non-local impact on people (that is, the top 20% of failures interrupted 84% of services to customers). A large number (89%) of small failures, represented by the bottom 34% of customers and commonplace devices, resulted in 56% of the total cost of 28 million customer interruption hours. Our study shows that extreme weather does not cause, but rather exacerbates, existing vulnerabilities, which are obscured in daily operations.

  10. Failure modes and effects analysis (FMEA) for Gamma Knife radiosurgery.

    PubMed

    Xu, Andy Yuanguang; Bhatnagar, Jagdish; Bednarz, Greg; Flickinger, John; Arai, Yoshio; Vacsulka, Jonet; Feng, Wenzheng; Monaco, Edward; Niranjan, Ajay; Lunsford, L Dade; Huq, M Saiful

    2017-11-01

    Gamma Knife radiosurgery is a highly precise and accurate treatment technique for treating brain diseases with low risk of serious error that nevertheless could potentially be reduced. We applied the AAPM Task Group 100 recommended failure modes and effects analysis (FMEA) tool to develop a risk-based quality management program for Gamma Knife radiosurgery. A team consisting of medical physicists, radiation oncologists, neurosurgeons, radiation safety officers, nurses, operating room technologists, and schedulers at our institution and an external physicist expert on Gamma Knife was formed for the FMEA study. A process tree and a failure mode table were created for the Gamma Knife radiosurgery procedures using the Leksell Gamma Knife Perfexion and 4C units. Three scores for the probability of occurrence (O), the severity (S), and the probability of no detection for failure mode (D) were assigned to each failure mode by 8 professionals on a scale from 1 to 10. An overall risk priority number (RPN) for each failure mode was then calculated from the averaged O, S, and D scores. The coefficient of variation for each O, S, or D score was also calculated. The failure modes identified were prioritized in terms of both the RPN scores and the severity scores. The established process tree for Gamma Knife radiosurgery consists of 10 subprocesses and 53 steps, including a subprocess for frame placement and 11 steps that are directly related to the frame-based nature of the Gamma Knife radiosurgery. Out of the 86 failure modes identified, 40 Gamma Knife specific failure modes were caused by the potential for inappropriate use of the radiosurgery head frame, the imaging fiducial boxes, the Gamma Knife helmets and plugs, the skull definition tools as well as other features of the GammaPlan treatment planning system. 
The other 46 failure modes are associated with the registration, imaging, image transfer, and contouring processes that are common for all external beam radiation therapy techniques. The failure modes with the highest hazard scores are related to imperfect frame adaptor attachment, bad fiducial box assembly, unsecured plugs/inserts, overlooked target areas, and undetected machine mechanical failure during the morning QA process. The implementation of the FMEA approach for Gamma Knife radiosurgery enabled deeper understanding of the overall process among all professionals involved in the care of the patient and helped identify potential weaknesses in the overall process. The results of the present study give us a basis for the development of a risk based quality management program for Gamma Knife radiosurgery. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
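The multi-rater scoring procedure described above, in which O, S, and D scores from several professionals are averaged, an RPN is formed from the means, and a coefficient of variation flags poor rater agreement, can be sketched as follows. The eight raters' scores below are invented for illustration.

```python
# Sketch of multi-rater FMEA scoring with averaged O/S/D and a coefficient
# of variation (CV) per score. All numbers are hypothetical.
from statistics import mean, pstdev

def cv(scores):
    """Coefficient of variation: population standard deviation over mean."""
    return pstdev(scores) / mean(scores)

# Eight raters score one failure mode for occurrence, severity, detection (1-10).
occurrence = [2, 3, 2, 4, 3, 2, 3, 2]
severity   = [8, 9, 8, 7, 9, 8, 8, 9]
detection  = [5, 6, 4, 7, 5, 6, 5, 6]

# Overall RPN from the averaged scores; CV indicates rater agreement per metric.
rpn = mean(occurrence) * mean(severity) * mean(detection)
print(round(rpn, 1), round(cv(occurrence), 2), round(cv(severity), 2))
```

A high CV on any one metric suggests the team should rediscuss that failure mode before trusting its RPN ranking.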

  11. Independent Orbiter Assessment (IOA): Analysis of the active thermal control subsystem

    NASA Technical Reports Server (NTRS)

    Sinclair, S. K.; Parkman, W. E.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results corresponding to the Orbiter Active Thermal Control Subsystem (ATCS) are documented. The major purpose of the ATCS is to remove the heat generated during normal Shuttle operations from the Orbiter systems and subsystems. The four major components of the ATCS contributing to the heat removal are: Freon Coolant Loops; Radiator and Flow Control Assembly; Flash Evaporator System; and Ammonia Boiler System. In order to perform the analysis, the IOA process utilized available ATCS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 310 failure modes analyzed, 101 were determined to be PCIs.

  12. Independent Orbiter Assessment (IOA): Analysis of the hydraulics/water spray boiler subsystem

    NASA Technical Reports Server (NTRS)

    Duval, J. D.; Davidson, W. R.; Parkman, William E.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results for the Orbiter Hydraulics/Water Spray Boiler Subsystem. The hydraulic system provides hydraulic power to gimbal the main engines, actuate the main engine propellant control valves, move the aerodynamic flight control surfaces, lower the landing gear, apply wheel brakes, steer the nosewheel, and dampen the external tank (ET) separation. Each hydraulic system has an associated water spray boiler which is used to cool the hydraulic fluid and APU lubricating oil. The IOA analysis process utilized available HYD/WSB hardware drawings, schematics and documents for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 430 failure modes analyzed, 166 were determined to be PCIs.

  13. Independent Orbiter Assessment (IOA): Analysis of the remote manipulator system

    NASA Technical Reports Server (NTRS)

    Tangorra, F.; Grasmeder, R. F.; Montgomery, A. D.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Remote Manipulator System (RMS) are documented. The RMS hardware and software are primarily required for deploying and/or retrieving up to five payloads during a single mission, capturing and retrieving free-flying payloads, and performing Manipulator Foot Restraint operations. Specifically, the RMS hardware consists of the following components: end effector; displays and controls; manipulator controller interface unit; arm based electronics; and the arm. The IOA analysis process utilized available RMS hardware drawings, schematics and documents for defining hardware assemblies, components and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 574 failure modes analyzed, 413 were determined to be PCIs.

  14. Failure mode and effect analysis in blood transfusion: a proactive tool to reduce risks.

    PubMed

    Lu, Yao; Teng, Fang; Zhou, Jie; Wen, Aiqing; Bi, Yutian

    2013-12-01

    The aim of blood transfusion risk management is to improve the quality of blood products and to assure patient safety. We utilize failure mode and effect analysis (FMEA), a tool employed for evaluating risks and identifying preventive measures to reduce the risks in blood transfusion. The failure modes and effects occurring throughout the whole process of blood transfusion were studied. Each failure mode was evaluated using three scores: severity of effect (S), likelihood of occurrence (O), and probability of detection (D). Risk priority numbers (RPNs) were calculated by multiplying the S, O, and D scores. The plan-do-check-act cycle was also used for continuous improvement. Analysis has showed that failure modes with the highest RPNs, and therefore the greatest risk, were insufficient preoperative assessment of the blood product requirement (RPN, 245), preparation time before infusion of more than 30 minutes (RPN, 240), blood transfusion reaction occurring during the transfusion process (RPN, 224), blood plasma abuse (RPN, 180), and insufficient and/or incorrect clinical information on request form (RPN, 126). After implementation of preventative measures and reassessment, a reduction in RPN was detected with each risk. The failure mode with the second highest RPN, namely, preparation time before infusion of more than 30 minutes, was shown in detail to prove the efficiency of this tool. FMEA evaluation model is a useful tool in proactively analyzing and reducing the risks associated with the blood transfusion procedure. © 2013 American Association of Blood Banks.

  15. SU-F-T-245: The Investigation of Failure Mode and Effects Analysis and PDCA for the Radiotherapy Risk Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, J; Wang, J; P, J

    2016-06-15

    Purpose: To optimize the clinical processes of radiotherapy and to reduce radiotherapy risks by implementing the risk management tools of failure mode and effects analysis (FMEA) and PDCA (plan-do-check-act). Methods: A multidisciplinary QA (Quality Assurance) team consisting of oncologists, physicists, dosimetrists, therapists and an administrator was established in our department, and an entire workflow QA process management using FMEA and PDCA tools was implemented for the whole treatment process. After the primary process tree was created, the failure modes and Risk Priority Numbers (RPNs) were determined by each member, and the RPNs were then averaged after team discussion. Results: 3 of 9 failure modes with RPN above 100 in practice were identified in the first PDCA cycle and further analyzed: patient registration error, prescription error, and treating the wrong patient. New process controls reduced the occurrence or detectability scores of the top 3 failure modes. Two important corrective actions reduced the highest RPNs from 300 to 50, and the error rate of radiotherapy decreased remarkably. Conclusion: FMEA and PDCA are helpful in identifying potential problems in the radiotherapy process, and their use improved the safety, quality and efficiency of radiation therapy in our department. The implementation of the FMEA approach may improve understanding of the overall process of radiotherapy while identifying potential flaws in the whole process. Furthermore, repeating the PDCA cycle can bring us closer to the goal: safer and more accurate radiotherapy.

  16. MO-D-213-02: Quality Improvement Through a Failure Mode and Effects Analysis of Pediatric External Beam Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, J; Lukose, R; Bronson, J

    2015-06-15

    Purpose: To conduct a failure mode and effects analysis (FMEA) as per AAPM Task Group 100 on clinical processes associated with teletherapy, and to develop mitigations for processes with identified high risk. Methods: A FMEA was conducted on clinical processes relating to teletherapy treatment plan development and delivery. Nine major processes were identified for analysis. These steps included CT simulation, data transfer, image registration and segmentation, treatment planning, plan approval and preparation, and initial and subsequent treatments. Process tree mapping was utilized to identify the steps contained within each process. Failure modes (FM) were identified and evaluated with a scale of 1–10 based upon three metrics: the severity of the effect, the probability of occurrence, and the detectability of the cause. The analyzed metrics were scored as follows: severity – no harm = 1, lethal = 10; probability – not likely = 1, certainty = 10; detectability – always detected = 1, undetectable = 10. The three metrics were combined multiplicatively to determine the risk priority number (RPN), which defined the overall score for each FM and the order in which process modifications should be deployed. Results: Eighty-nine procedural steps were identified with 186 FM accompanied by 193 failure effects with 213 potential causes. Eighty-one of the FM were scored with an RPN > 10, and mitigations were developed for these FM. The initial treatment had the most FM (16) requiring mitigation development, followed closely by treatment planning, segmentation, and plan preparation with fourteen each. The maximum RPN was 400 and involved target delineation. Conclusion: The FMEA process proved extremely useful in identifying previously unforeseen risks. New methods were developed and implemented for risk mitigation and error prevention. Similar to findings reported for adult patients, the process leading to the initial treatment has an associated high risk.

  17. Reliability growth modeling analysis of the space shuttle main engines based upon the Weibull process

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1990-01-01

    The Weibull process, identified as the inhomogeneous Poisson process with the Weibull intensity function, is used to model the reliability growth assessment of the space shuttle main engine test and flight failure data. Additional tables of percentage-point probabilities for several different values of the confidence coefficient have been generated for setting (1 − alpha)100-percent two-sided confidence interval estimates on the mean time between failures. The tabled data pertain to two cases: (1) time-terminated testing, and (2) failure-terminated testing. The critical values of the three test statistics, namely Cramer-von Mises, Kolmogorov-Smirnov, and chi-square, were calculated and tabled for use in the goodness of fit tests for the engine reliability data. Numerical results are presented for five different groupings of the engine data that reflect the actual response to the failures.
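The Weibull-intensity (power-law) process referred to above is usually fitted by maximum likelihood, and for a failure-terminated test the estimators have closed forms. The sketch below uses the standard power-law (Crow-AMSAA) estimators with invented cumulative failure times, not the shuttle engine data; a fitted beta below 1 indicates reliability growth (failures arriving more slowly over time).

```python
# Power-law NHPP (Crow-AMSAA) MLE sketch for a failure-terminated test.
# Intensity: u(t) = lam * beta * t**(beta - 1). Failure times are invented.
import math

def crow_amsaa_mle(times):
    """MLE for a failure-terminated test. `times` are cumulative failure
    times in increasing order; the last failure ends the test at T."""
    T, n = times[-1], len(times)
    beta = n / sum(math.log(T / t) for t in times)   # the ln(T/T) term is 0
    lam = n / T**beta
    mtbf = 1.0 / (lam * beta * T**(beta - 1))        # instantaneous MTBF at T
    return beta, lam, mtbf

times = [50.0, 150.0, 320.0, 600.0, 1000.0]
beta, lam, mtbf = crow_amsaa_mle(times)
print(round(beta, 3), round(mtbf, 1))
```

Confidence intervals on the MTBF, as tabulated in the record above, would then be formed by scaling these point estimates with the tabled percentage-point factors.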

  18. Evaluating the operational risks of biomedical waste using failure mode and effects analysis.

    PubMed

    Chen, Ying-Chu; Tsai, Pei-Yi

    2017-06-01

    The potential problems and risks of biomedical waste generation have become increasingly apparent in recent years. This study applied a failure mode and effects analysis to evaluate the operational problems and risks of biomedical waste. The microbiological contamination of biomedical waste seldom receives the attention of researchers. In this study, the biomedical waste lifecycle was divided into seven processes: Production, classification, packaging, sterilisation, weighing, storage, and transportation. Twenty main failure modes were identified in these phases and risks were assessed based on their risk priority numbers. The failure modes in the production phase accounted for the highest proportion of the risk priority number score (27.7%). In the packaging phase, the failure mode 'sharp articles not placed in solid containers' had the highest risk priority number score, mainly owing to its high severity rating. The sterilisation process is the main difference in the treatment of infectious and non-infectious biomedical waste. The failure modes in the sterilisation phase were mainly owing to human factors (mostly related to operators). This study increases the understanding of the potential problems and risks associated with biomedical waste, thereby increasing awareness of how to improve the management of biomedical waste to better protect workers, the public, and the environment.

  19. Composite Structural Analysis of Flat-Back Shaped Blade for Multi-MW Class Wind Turbine

    NASA Astrophysics Data System (ADS)

    Kim, Soo-Hyun; Bang, Hyung-Joon; Shin, Hyung-Ki; Jang, Moon-Seok

    2014-06-01

    This paper provides an overview of failure mode estimation based on 3D structural finite element (FE) analysis of the flat-back shaped wind turbine blade. Buckling stability, fiber failure (FF), and inter-fiber failure (IFF) analyses were performed to account for delamination or matrix failure of composite materials and to predict the realistic behavior of the entire blade region. Puck's fracture criteria were used for IFF evaluation. Blade design loads applicable to multi-megawatt (MW) wind turbine systems were calculated according to the Germanischer Lloyd (GL) guideline and the International Electrotechnical Commission (IEC) 61400-1 standard, under Class IIA wind conditions. After the post-processing of final load results, a number of principal load cases were selected and converted into applied forces at each section along the blade's radius of the FE model. Nonlinear static analyses were performed for laminate failure, FF, and IFF checks. For buckling stability, linear eigenvalue analysis was performed. As a result, we were able to estimate the failure mode and locate the major weak point.

  20. Reliability Centred Maintenance (RCM) Analysis of Laser Machine in Filling Lithos at PT X

    NASA Astrophysics Data System (ADS)

    Suryono, M. A. E.; Rosyidi, C. N.

    2018-03-01

    PT. X uses automated machines that operate for sixteen hours per day; the machines must therefore be maintained to preserve their availability. The aim of this research is to determine maintenance tasks according to the causes of component failure using Reliability Centred Maintenance (RCM) and to determine the optimal inspection frequency for the machines in the filling lithos process. In this research, RCM is used as an analysis tool to identify the critical component and find optimal inspection frequencies that maximise the machine's reliability. From the analysis, we found that the critical machine in the filling lithos process is the laser machine in Line 2. We then determined the causes of the machine's failure. The Lastube component has the highest Risk Priority Number (RPN) among components such as the power supply, lens, chiller, laser siren, encoder, conveyor, and mirror galvo. Most of the components have operational consequences; the others have hidden failure consequences and safety consequences. Time-directed life-renewal tasks, failure finding tasks, and servicing tasks can be used to address these consequences. The results of the data analysis show that inspection must be performed once a month for the laser machine, in the form of preventive maintenance, to lower downtime.

  1. Risk assessment for enterprise resource planning (ERP) system implementations: a fault tree analysis approach

    NASA Astrophysics Data System (ADS)

    Zeng, Yajun; Skibniewski, Miroslaw J.

    2013-08-01

    Enterprise resource planning (ERP) system implementations are often characterised by large capital outlays, long implementation durations, and a high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches, which focus mostly on meeting project budget and schedule objectives, the proposed approach addresses the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system usage failure and to quantify the impact of critical component failures or critical risk events in the implementation process.
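    The quantification step of a fault tree analysis can be sketched as follows: basic-event probabilities propagate upward through OR and AND gates, assuming independent events. The gate structure and probabilities below are hypothetical illustrations, not the ERP model from the paper:

```python
# Hedged sketch of fault-tree top-event quantification.
# Assumes independent basic events; the events, probabilities, and
# gate structure are hypothetical, not taken from the paper.

def p_or(*probs):
    """OR gate: output fails if any input fails (1 - product of survivals)."""
    survive = 1.0
    for q in probs:
        survive *= (1.0 - q)
    return 1.0 - survive

def p_and(*probs):
    """AND gate: output fails only if all inputs fail."""
    fail = 1.0
    for q in probs:
        fail *= q
    return fail

# Hypothetical basic events contributing to "ERP usage failure":
p_bad_data      = 0.05   # poor data migration
p_no_training   = 0.10   # inadequate user training
p_vendor_defect = 0.02   # unresolved software defect

# Top event: usage fails if data is bad, OR users are untrained
# AND a software defect persists.
p_top = p_or(p_bad_data, p_and(p_no_training, p_vendor_defect))
print(round(p_top, 4))
```

Cut-set importance can then be read off by comparing each branch's contribution to the top-event probability, which is how such an analysis quantifies the impact of critical risk events.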

  2. SU-E-T-420: Failure Effects Mode Analysis for Trigeminal Neuralgia Frameless Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howe, J

    2015-06-15

    Purpose: Functional radiosurgery has been used successfully in the treatment of trigeminal neuralgia (TN) but presents significant challenges to ensuring the high prescription dose is delivered accurately. A review of existing practice should help direct the focus of quality improvement for this treatment regimen. Method: Failure modes and effects analysis was used to identify the processes in preparing radiosurgery treatment for TN. The map was developed by a multidisciplinary team including a neurosurgeon, radiation oncologist, physicist, and therapist. Potential failure modes were identified for each step in the process map, as well as potential causes and end effects. A risk priority number was assigned to each cause. Results: The process map identified 66 individual steps (see attached supporting document). Corrective actions were developed for areas with a high risk priority number. Wrong-site treatment is at higher risk for trigeminal neuralgia treatment owing to the lack of site-specific pathologic imaging on MR and CT; additional site-specific checks were implemented to minimize the risk of wrong-site treatment. Failed collision checks resulted from an insufficient collision model in the treatment planning system, and a plan template was developed to address this problem. Conclusion: Failure modes and effects analysis is an effective tool for developing quality improvement in high-risk radiotherapy procedures such as functional radiosurgery.

  3. Application of Six Sigma methodology to a diagnostic imaging process.

    PubMed

    Taner, Mehmet Tolga; Sezen, Bulent; Atwat, Kamal M

    2012-01-01

    This paper aims to apply the Six Sigma methodology to improve workflow by eliminating the causes of failure in the medical imaging department of a private Turkish hospital. The define, measure, analyse, improve and control (DMAIC) improvement cycle, workflow charts, fishbone diagrams and Pareto charts were employed, together with rigorous data collection in the department. The identification of root causes of repeat sessions and delays was followed by failure mode and effect analysis, hazard analysis and decision tree analysis. The most frequent causes of failure were malfunction of the RIS/PACS system and improper positioning of patients. Subsequent to extensive training of professionals, the sigma level was increased from 3.5 to 4.2. The data were collected over only four months. Six Sigma's data measurement and process improvement methodology is an impetus for health care organisations to rethink their workflow and reduce malpractice. It involves measuring, recording and reporting data on a regular basis, which enables the administration to monitor workflow continuously. The improvements in the workflow under study, made by determining the failures and potential risks associated with radiologic care, will have a positive impact on society in terms of patient safety. By eliminating repeat examinations, the risk of exposure to additional radiation was also minimised. This paper supports the need to apply Six Sigma and presents an evaluation of the process in an imaging department.
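    The reported improvement from sigma level 3.5 to 4.2 can be put in perspective with the conventional sigma-to-DPMO (defects per million opportunities) conversion, which assumes the customary 1.5-sigma long-term shift. This is a generic Six Sigma convention, not a calculation taken from the paper:

```python
# Sketch of the conventional sigma-level/DPMO relationship, assuming
# the standard 1.5-sigma long-term shift used in Six Sigma practice.
from statistics import NormalDist

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities implied by a sigma level."""
    return (1.0 - NormalDist().cdf(sigma_level - shift)) * 1_000_000

before = dpmo(3.5)   # ~22,750 DPMO before the intervention
after = dpmo(4.2)    # ~3,467 DPMO afterwards
print(round(before), round(after))
```

Under this convention, the 0.7-sigma gain corresponds to roughly an order-of-magnitude reduction in defect rate, which is why relatively small sigma-level movements are treated as substantial process improvements.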

  4. Use of failure mode effect analysis (FMEA) to improve medication management process.

    PubMed

    Jain, Khushboo

    2017-03-13

    Purpose Medication management is a complex process at high risk of error, with life-threatening consequences. The focus should be on devising strategies to avoid errors and make the process self-reliable by ensuring prevention of errors and/or error detection at subsequent stages. The purpose of this paper is to use failure mode effect analysis (FMEA), a systematic proactive tool, to identify the likelihood of and causes for the process to fail at various steps, and to prioritise them to devise risk reduction strategies to improve patient safety. Design/methodology/approach The study was designed as an observational analytical study of the medication management process in the inpatient area of a multi-speciality hospital in Gurgaon, Haryana, India. A team was formed to study the complex process of medication management in the hospital. The FMEA tool was used. Corrective actions were developed based on the prioritised failure modes, then implemented and monitored. Findings The percentage distribution of medication errors observed by the team was highest for transcription errors (37 per cent), followed by administration errors (29 per cent), indicating the need to identify the causes and effects of their occurrence. In all, 11 failure modes were identified, of which the major five were prioritised based on the risk priority number (RPN). The process was repeated after corrective actions were taken, which resulted in about 40 per cent (average) and around 60 per cent reduction in the RPN of the prioritised failure modes. Research limitations/implications FMEA is a time-consuming process and requires a multidisciplinary team with a good understanding of the process being analysed. FMEA only helps in identifying the possibilities of a process to fail; it does not eliminate them, and additional efforts are required to develop action plans and implement them.
    Frank discussion and agreement among the team members are required not only for successfully conducting FMEA but also for implementing the corrective actions. Practical implications FMEA is an effective proactive risk-assessment tool and a continuous process which can be carried out in phases. The corrective actions taken resulted in a reduction in RPN and are subject to further evaluation and adoption by others, depending on the facility type. Originality/value The application of the tool helped the hospital identify failures in the medication management process, prioritising and correcting them, leading to improvement.

  5. Failure mode and effect analysis: improving intensive care unit risk management processes.

    PubMed

    Askari, Roohollah; Shafii, Milad; Rafiei, Sima; Abolhassani, Mohammad Sadegh; Salarikhah, Elaheh

    2017-04-18

    Purpose Failure modes and effects analysis (FMEA) is a practical tool to evaluate risks, discover failures in a proactive manner and propose corrective actions to reduce or eliminate potential risks. The purpose of this paper is to apply the FMEA technique to examine the hazards associated with the process of service delivery in the intensive care unit (ICU) of a tertiary hospital in Yazd, Iran. Design/methodology/approach This was a before-after study conducted between March 2013 and December 2014. By forming an FMEA team, all potential hazards associated with ICU services - their frequency and severity - were identified. A risk priority number was then calculated for each activity as an indicator of high-priority areas needing special attention and resource allocation. Findings Eight failure modes with the highest priority scores, including endotracheal tube defect, wrong placement of endotracheal tube, EVD interface, aspiration failure during suctioning, chest tube failure, tissue injury and deep vein thrombosis, were selected for improvement. Findings affirmed that improvement strategies were generally satisfactory and significantly decreased total failures. Practical implications Application of FMEA in ICUs proved effective in proactively decreasing the risk of failures and corrected the control measures up to acceptable levels in all eight areas of function. Originality/value Using a prospective risk assessment approach such as FMEA could be beneficial in dealing with potential failures by proposing preventive actions in a proactive manner. The method could be used as a tool for continuous quality improvement in healthcare, as it identifies both systemic and human errors and offers practical advice for dealing effectively with them.

  6. A quality risk management model approach for cell therapy manufacturing.

    PubMed

    Lopez, Fabio; Di Bartolo, Chiara; Piazza, Tommaso; Passannanti, Antonino; Gerlach, Jörg C; Gridelli, Bruno; Triolo, Fabio

    2010-12-01

    International regulatory authorities view risk management as an essential production need for the development of innovative, somatic cell-based therapies in regenerative medicine. The available risk management guidelines, however, provide little guidance on specific risk analysis approaches and procedures applicable in clinical cell therapy manufacturing. This raises a number of problems. Cell manufacturing is a poorly automated process, prone to operator-introduced variations, and affected by heterogeneity of the processed organs/tissues and lot-dependent variability of reagent (e.g., collagenase) efficiency. In this study, the principal challenges faced in a cell-based product manufacturing context (i.e., high dependence on human intervention and absence of reference standards for acceptable risk levels) are identified and addressed, and a risk management model approach applicable to manufacturing of cells for clinical use is described for the first time. The use of the heuristic and pseudo-quantitative failure mode and effect analysis/failure mode and critical effect analysis risk analysis technique associated with direct estimation of severity, occurrence, and detection is, in this specific context, as effective as, but more efficient than, the analytic hierarchy process. Moreover, a severity/occurrence matrix and Pareto analysis can be successfully adopted to identify priority failure modes on which to act to mitigate risks. The application of this approach to clinical cell therapy manufacturing in regenerative medicine is also discussed. © 2010 Society for Risk Analysis.
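    The severity/occurrence matrix the abstract mentions can be sketched as a simple grid classification that sorts failure modes into action zones. The failure modes, ratings, and zone boundaries below are hypothetical placeholders, not the study's data:

```python
# Hedged sketch of a severity/occurrence matrix for risk prioritisation.
# The modes, 1-5 ratings, and zone thresholds are hypothetical.

modes = {                       # mode: (severity 1-5, occurrence 1-5)
    "operator contamination":   (5, 2),
    "collagenase lot variance": (3, 4),
    "label mix-up":             (5, 1),
    "incubator drift":          (2, 3),
}

def zone(sev, occ):
    """Classify a cell of the severity/occurrence matrix."""
    score = sev * occ
    if score >= 10:
        return "act"        # mitigate before release
    if score >= 5:
        return "monitor"    # track, review periodically
    return "accept"         # tolerable as-is

# Pareto-style listing: highest-scoring modes first.
for name, (s, o) in sorted(modes.items(),
                           key=lambda kv: kv[1][0] * kv[1][1],
                           reverse=True):
    print(f"{s * o:3d} {zone(s, o):8s} {name}")
```

A Pareto analysis then reads off the top of this list: the few modes in the "act" zone typically account for most of the aggregate risk, so mitigation effort is concentrated there.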

  7. Ares-I-X Vehicle Preliminary Range Safety Malfunction Turn Analysis

    NASA Technical Reports Server (NTRS)

    Beaty, James R.; Starr, Brett R.; Gowan, John W., Jr.

    2008-01-01

    Ares-I-X is the designation given to the flight test version of the Ares-I rocket (also known as the Crew Launch Vehicle - CLV) being developed by NASA. As part of the preliminary flight plan approval process for the test vehicle, a range safety malfunction turn analysis was performed to support the launch area risk assessment and vehicle destruct criteria development processes. Several vehicle failure scenarios were identified which could cause the vehicle trajectory to deviate from its normal flight path, and the effects of these failures were evaluated with an Ares-I-X 6 degrees-of-freedom (6-DOF) digital simulation, using the Program to Optimize Simulated Trajectories Version 2 (POST2) simulation framework. The Ares-I-X simulation analysis provides output files containing vehicle state information, which are used by other risk assessment and vehicle debris trajectory simulation tools to determine the risk to personnel and facilities in the vicinity of the launch area at Kennedy Space Center (KSC), and to develop the vehicle destruct criteria used by the flight test range safety officer. The simulation analysis approach used for this study is described, including descriptions of the failure modes which were considered and the underlying assumptions and ground rules of the study, and preliminary results are presented, determined by analysis of the trajectory deviation of the failure cases, compared with the expected vehicle trajectory.

  8. Failure mode and effects analysis: too little for too much?

    PubMed

    Dean Franklin, Bryony; Shebl, Nada Atef; Barber, Nick

    2012-07-01

    Failure mode and effects analysis (FMEA) is a structured prospective risk assessment method that is widely used within healthcare. FMEA involves a multidisciplinary team mapping out a high-risk process of care, identifying the failures that can occur, and then characterising each of these in terms of probability of occurrence, severity of effects and detectability, to give a risk priority number used to identify the failures most in need of attention. One might assume that such a widely used tool would have an established evidence base. This paper considers whether or not this is the case, examining the evidence for the reliability and validity of its outputs, the mathematical principles behind the calculation of a risk priority number, and variation in how it is used in practice. We also consider the likely advantages of this approach, together with the disadvantages in terms of the healthcare professionals' time involved. We conclude that although FMEA is popular and many published studies have reported its use within healthcare, there is little evidence to support its use for the quantitative prioritisation of process failures. It lacks both reliability and validity, and is very time consuming. We would not recommend its use as a quantitative technique to prioritise, promote or study patient safety interventions. However, the stage of FMEA involving multidisciplinary process mapping seems valuable, and work is now needed to identify the best way of converting this into plans for action.
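    The paper's concern about the mathematical principles behind the RPN can be made concrete: multiplying ordinal scales lets very different risk profiles collapse to the same number. The ratings below are hypothetical illustrations of that point:

```python
# Illustration of a known weakness of the RPN: it multiplies ordinal
# scales, so dissimilar risk profiles can yield identical scores.
# The ratings are hypothetical.

def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

# A frequent, easily detected nuisance ...
minor = rpn(2, 10, 5)
# ... scores the same as a rare, hard-to-detect catastrophic failure.
catastrophic = rpn(10, 1, 10)

print(minor, catastrophic, minor == catastrophic)
```

Both profiles score 100, yet they plainly do not deserve equal priority, which is one reason the authors question RPN-based quantitative prioritisation.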

  9. Introspective Reasoning Models for Multistrategy Case-Based and Explanation

    DTIC Science & Technology

    1997-03-10

    symptoms and diseases to causal principles about diseases and first-principle analysis grounded in basic science. Based on research in process... the symptoms of the failure to conclusion that the process which posts learning goals a causal explanation of the failure. Secondly, the learner... the vernacular, a "jones" is a drug habit accompanied the faucet for water. Therefore, the story can end with by withdrawal symptoms. The verb "to jones

  10. Environmental isolation task

    NASA Technical Reports Server (NTRS)

    Coulbert, C. D.

    1982-01-01

    The failure-analysis process was organized into a more specific set of long-term degradation steps so that material property change can be differentiated from module damage and module failure. Increasing module performance and life are discussed. A polymeric aging computer model is discussed. Early detection of polymer surface reactions due to aging is reported.

  11. All-inkjet-printed thin-film transistors: manufacturing process reliability by root cause analysis.

    PubMed

    Sowade, Enrico; Ramon, Eloi; Mitra, Kalyan Yoti; Martínez-Domingo, Carme; Pedró, Marta; Pallarès, Jofre; Loffredo, Fausta; Villani, Fulvia; Gomes, Henrique L; Terés, Lluís; Baumann, Reinhard R

    2016-09-21

    We report on a detailed electrical investigation of all-inkjet-printed thin-film transistor (TFT) arrays, focusing on TFT failures and their origins. The TFT arrays were manufactured on flexible polymer substrates under ambient conditions, without the need for a cleanroom environment or inert atmosphere, and at a maximum temperature of 150 °C. Alternative manufacturing processes for electronic devices, such as inkjet printing, suffer from lower accuracy compared to traditional microelectronic manufacturing methods. Furthermore, printing methods usually do not allow the manufacturing of electronic devices with high yield (a high number of functional devices); in general, the manufacturing yield is much lower than that of established conventional methods based on lithography. Thus, this contribution focuses on a comprehensive analysis of defective TFTs printed by inkjet technology. Based on root cause analysis, we present the defects by developing failure categories and discuss the reasons for the defects. This procedure identifies failure origins and allows optimization of the manufacturing process, finally resulting in a yield improvement.

  12. Application of failure mode and effects analysis to treatment planning in scanned proton beam radiotherapy

    PubMed Central

    2013-01-01

    Background A multidisciplinary and multi-institutional working group applied the Failure Mode and Effects Analysis (FMEA) approach to the actively scanned proton beam radiotherapy process implemented at CNAO (Centro Nazionale di Adroterapia Oncologica), aiming at preventing accidental exposures to the patient. Methods FMEA was applied to the treatment planning stage and consisted of three steps: i) identification of the involved sub-processes; ii) identification and ranking of the potential failure modes, together with their causes and effects, using the risk priority number (RPN) scoring system; iii) identification of additional safety measures to be proposed for process quality and safety improvement. The RPN upper threshold for little concern of risk was set at 125. Results Thirty-four sub-processes were identified, twenty-two of which were judged to be potentially prone to one or more failure modes. A total of forty-four failure modes were recognized, 52% of them characterized by an RPN score of 80 or higher. The threshold of 125 for RPN was exceeded in only five cases. The most critical sub-process appeared to be the delineation and correction of artefacts in planning CT data. Failures associated with that sub-process were inaccurate delineation of the artefacts and incorrect proton stopping power assignment to body regions. Other significant failure modes consisted of an outdated representation of the patient anatomy, and an improper selection of beam direction, of the physical beam model, or of the dose calculation grid. The main effect of these failures was a wrong dose distribution (i.e. deviating from the planned one) delivered to the patient. Additional strategies for risk mitigation, easily and immediately applicable, consisted of systematically collecting information about any known implanted prosthesis directly from each patient and enforcing a short time interval between CT scan and treatment start.
    Moreover, (i) the investigation of dedicated CT image reconstruction algorithms, (ii) further evaluation of treatment plan robustness and (iii) implementation of independent methods for dose calculation (such as Monte Carlo simulations) may represent novel solutions to increase patient safety. Conclusions FMEA is a useful tool for prospective evaluation of patient safety in proton beam radiotherapy. The application of this method to the treatment planning stage led to the identification of strategies for risk mitigation in addition to the safety measures already adopted in clinical practice. PMID:23705626

  13. Independent Orbiter Assessment (IOA): Analysis of the guidance, navigation, and control subsystem

    NASA Technical Reports Server (NTRS)

    Trahan, W. H.; Odonnell, R. A.; Pietz, K. C.; Hiott, J. M.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results corresponding to the Orbiter Guidance, Navigation, and Control (GNC) Subsystem hardware are documented. The function of the GNC hardware is to respond to guidance, navigation, and control software commands to effect vehicle control and to provide sensor and controller data to GNC software. Some of the GNC hardware for which failure modes analysis was performed includes: hand controllers; Rudder Pedal Transducer Assembly (RPTA); Speed Brake Thrust Controller (SBTC); Inertial Measurement Unit (IMU); Star Tracker (ST); Crew Optical Alignment Site (COAS); Air Data Transducer Assembly (ADTA); Rate Gyro Assemblies; Accelerometer Assembly (AA); Aerosurface Servo Amplifier (ASA); and Ascent Thrust Vector Control (ATVC). The IOA analysis process utilized available GNC hardware drawings, workbooks, specifications, schematics, and systems briefs for defining hardware assemblies, components, and circuits. Each hardware item was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  14. Independent Orbiter Assessment (IOA): Analysis of the manned maneuvering unit

    NASA Technical Reports Server (NTRS)

    Bailey, P. S.

    1986-01-01

    Results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Manned Maneuvering Unit (MMU) hardware. The MMU is a propulsive backpack, operated through separate hand controllers that input the pilot's translational and rotational maneuvering commands to the control electronics and then to the thrusters. The IOA analysis process utilized available MMU hardware drawings and schematics for defining hardware subsystems, assemblies, components, and hardware items. Final levels of detail were evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the worst-case severity of the effect for each identified failure mode. The IOA analysis of the MMU found that the majority of the PCIs identified result from the loss of either the propulsion or control functions, or from an inability to perform an immediate or future mission. The five most severe criticalities identified all result from failures imposed on the MMU hand controllers, which have no redundancy within the MMU.

  15. Effect of Microscopic Damage Events on Static and Ballistic Impact Strength of Triaxial Braid Composites

    NASA Technical Reports Server (NTRS)

    Littell, Justin D.; Binienda, Wieslaw K.; Arnold, William A.; Roberts, Gary D.; Goldberg, Robert K.

    2008-01-01

    In previous work, the ballistic impact resistance of triaxial braided carbon/epoxy composites made with large flat tows (12k and 24k) was examined by impacting 2 × 2 × 0.125 in. composite panels with gelatin projectiles. Several high strength, intermediate modulus carbon fibers were used in combination with both untoughened and toughened matrix materials. A wide range of penetration thresholds were measured for the various fiber/matrix combinations. However, there was no clear relationship between the penetration threshold and the properties of the constituents. During some of these experiments high speed cameras were used to view the failure process, and full-field strain measurements were made to determine the strain at the onset of failure. However, these experiments provided only limited insight into the microscopic failure processes responsible for the wide range of impact resistance observed. In order to investigate potential microscopic failure processes in more detail, quasi-static tests were performed in tension, compression, and shear. Full-field strain measurement techniques were used to identify local regions of high strain resulting from microscopic failures. Microscopic failure events near the specimen surface, such as splitting of fiber bundles in surface plies, were easily identified. Subsurface damage, such as fiber fracture or fiber bundle splitting, could be identified by its effect on in-plane surface strains. Subsurface delamination could be detected as an out-of-plane deflection at the surface. Using this data, failure criteria could be established at the fiber tow level for use in analysis. An analytical formulation was developed to allow the microscopic failure criteria to be used in place of macroscopic properties as input to simulations performed using the commercial explicit finite element code, LS-DYNA.
The test methods developed to investigate microscopic failure will be presented along with methods for determining local failure criteria that can be used in analysis. Results of simulations performed using LS-DYNA will be presented to illustrate the capabilities and limitations for simulating failure during quasi-static deformation and during ballistic impact of large unit cell size triaxial braid composites.

  16. NASA's Evolutionary Xenon Thruster (NEXT) Power Processing Unit (PPU) Capacitor Failure Root Cause Analysis

    NASA Technical Reports Server (NTRS)

    Soeder, James F.; Pinero, Luis; Schneidegger, Robert; Dunning, John; Birchenough, Art

    2012-01-01

    NASA's Evolutionary Xenon Thruster (NEXT) project is developing an advanced ion propulsion system for future NASA solar system exploration missions. A critical element of the propulsion system is the Power Processing Unit (PPU), which supplies regulated power to the key components of the thruster. The PPU contains six different power supplies: the beam, discharge, discharge heater, neutralizer, neutralizer heater, and accelerator supplies. The beam supply is the largest and processes more than 93% of the power. The NEXT PPU had been operated for more than 200 hours and experienced a series of three capacitor failures in the beam supply. The capacitors are in the same, nominally non-critical location: the input filter capacitor of a full-wave switching inverter. The three failures occurred after about 20, 30, and 135 hours of operation. This paper provides background on the NEXT PPU and the capacitor failures. It discusses the failure investigation approach, the beam supply power switching topology and its operating modes, capacitor characteristics, and circuit testing. Finally, it identifies the root cause of the failures as the unusual confluence of the circuit switching frequency, the physical layout of the power circuits, and the characteristics of the capacitor.

  17. NASA's Evolutionary Xenon Thruster (NEXT) Power Processing Unit (PPU) Capacitor Failure Root Cause Analysis

    NASA Technical Reports Server (NTRS)

    Soeder, James F.; Scheidegger, Robert J.; Pinero, Luis R.; Birchenough, Arthur J.; Dunning, John W.

    2012-01-01

    NASA's Evolutionary Xenon Thruster (NEXT) project is developing an advanced ion propulsion system for future NASA solar system exploration missions. A critical element of the propulsion system is the Power Processing Unit (PPU), which supplies regulated power to the key components of the thruster. The PPU contains six different power supplies: the beam, discharge, discharge heater, neutralizer, neutralizer heater, and accelerator supplies. The beam supply is the largest and processes more than 93% of the power. The NEXT PPU had been operated for more than 200 hr and experienced a series of three capacitor failures in the beam supply. The capacitors are in the same, nominally non-critical location: the input filter capacitor of a full-wave switching inverter. The three failures occurred after about 20, 30, and 135 hr of operation. This paper provides background on the NEXT PPU and the capacitor failures. It discusses the failure investigation approach, the beam supply power switching topology and its operating modes, capacitor characteristics, and circuit testing. Finally, it identifies the root cause of the failures as the unusual confluence of the circuit switching frequency, the physical layout of the power circuits, and the characteristics of the capacitor.

  18. Independent Orbiter Assessment (IOA): Analysis of the electrical power generation/fuel cell powerplant subsystem

    NASA Technical Reports Server (NTRS)

    Brown, K. L.; Bertsch, P. J.

    1986-01-01

    Results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Generation (EPG)/Fuel Cell Powerplant (FCP) hardware. The EPG/FCP hardware is required for performing functions of electrical power generation and product water distribution in the Orbiter. Specifically, the EPG/FCP hardware consists of the following divisions: (1) Power Section Assembly (PSA); (2) Reactant Control Subsystem (RCS); (3) Thermal Control Subsystem (TCS); and (4) Water Removal Subsystem (WRS). The IOA analysis process utilized available EPG/FCP hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  19. Independent Orbiter Assessment (IOA): Analysis of the orbital maneuvering system

    NASA Technical Reports Server (NTRS)

    Prust, C. D.; Paul, D. J.; Burkemper, V. J.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbital Maneuvering System (OMS) hardware are documented. The OMS provides the thrust to perform orbit insertion, orbit circularization, orbit transfer, rendezvous, and deorbit. The OMS is housed in two independent pods located one on each side of the tail and consists of the following subsystems: Helium Pressurization; Propellant Storage and Distribution; Orbital Maneuvering Engine; and Electrical Power Distribution and Control. The IOA analysis process utilized available OMS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  20. Conversion of Questionnaire Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powell, Danny H; Elwood Jr, Robert H

    During the survey, respondents are asked to provide qualitative answers (well, adequate, needs improvement) on how well material control and accountability (MC&A) functions are being performed. These responses can be used to develop failure probabilities for basic events performed during routine operation of the MC&A systems. The failure frequencies for individual events may be used to estimate total system effectiveness using a fault tree in a probabilistic risk analysis (PRA). Numeric risk values are required for the PRA fault tree calculations that are performed to evaluate system effectiveness. So, the performance ratings in the questionnaire must be converted to relative risk values for all of the basic MC&A tasks performed in the facility. If a specific material protection, control, and accountability (MPC&A) task is being performed at the 'perfect' level, the task is considered to have a near zero risk of failure. If the task is performed at a less than perfect level, the deficiency in performance represents some risk of failure for the event. As the degree of deficiency in performance increases, the risk of failure increases. If a task that should be performed is not being performed, that task is in a state of failure. The failure probabilities of all basic events contribute to the total system risk. Conversion of questionnaire MPC&A system performance data to numeric values is a separate function from the process of completing the questionnaire. When specific questions in the questionnaire are answered, the focus is on correctly assessing and reporting, in an adjectival manner, the actual performance of the related MC&A function. Prior to conversion, consideration should not be given to the numeric value that will be assigned during the conversion process. In the conversion process, adjectival responses to questions on system performance are quantified based on a log normal scale typically used in human error analysis (see A.D. Swain and H.E. Guttmann, 'Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications,' NUREG/CR-1278). This conversion produces the basic event risk of failure values required for the fault tree calculations. The fault tree is a deductive logic structure that corresponds to the operational nuclear MC&A system at a nuclear facility. The conventional Delphi process is a time-honored approach commonly used in the risk assessment field to extract numerical values for the failure rates of actions or activities when statistically significant data is absent.
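The conversion step described above can be sketched as a simple lookup on a logarithmic scale. This is only an illustrative assumption: the rating labels and probability values below are hypothetical, not the scale actually used in the survey or in NUREG/CR-1278.

```python
# Illustrative mapping of adjectival MC&A performance ratings to basic-event
# failure probabilities on a log scale (values are invented for this sketch).
RATING_TO_FAILURE_PROB = {
    "perfect":           1e-4,  # near-zero risk of failure
    "well":              1e-3,
    "adequate":          1e-2,
    "needs improvement": 1e-1,
    "not performed":     1.0,   # the task is in a state of failure
}

def convert_responses(responses):
    """Map each task's adjectival rating to a numeric failure probability."""
    return {task: RATING_TO_FAILURE_PROB[rating.lower()]
            for task, rating in responses.items()}

# The resulting probabilities feed the basic events of the PRA fault tree.
probs = convert_responses({"inventory": "Well", "seal check": "Adequate"})
```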

  1. [Applying healthcare failure mode and effect analysis to improve the surgical specimen transportation process and rejection rate].

    PubMed

    Hu, Pao-Hsueh; Hu, Hsiao-Chen; Huang, Hui-Ju; Chao, Hui-Lin; Lei, Ei-Fang

    2014-04-01

    Because surgical pathology specimens are crucial to the diagnosis and treatment of disease, it is critical that they be collected and transported safely and securely. Due to recent near-miss events in our department, we used healthcare failure mode and effect analysis to identify 14 potential perils in the specimen collection and transportation process. Improvement and prevention strategies were developed accordingly to improve quality of care. Using healthcare failure mode and effect analysis (HFMEA) may improve the surgical specimen transportation process and reduce the rate of surgical specimen rejection. Interventions included: revising standard operating procedures for surgical pathology specimen collection and transportation; creating educational videos and posters; revising methods of specimen verification; and creating an online, real-time management system for specimen tracking and rejection. Implementation of the new surgical specimen transportation process effectively eliminated the 14 identified potential perils. In addition, the specimen rejection rate fell from 0.86% to 0.03%. This project improved the specimen transportation process, enhanced interdisciplinary cooperation, and strengthened the patient-centered healthcare system. The creation and implementation of an online information system significantly facilitates specimen tracking, hospital cost reductions, and patient safety improvements. The success in our department is currently being replicated across all departments in our hospital that transport specimens. Our experience and strategy may be applied to inter-hospital specimen transportation in the future.

  2. Mitochondrial proteome remodelling in pressure overload-induced heart failure: the role of mitochondrial oxidative stress

    PubMed Central

    Dai, Dao-Fu; Hsieh, Edward J.; Liu, Yonggang; Chen, Tony; Beyer, Richard P.; Chin, Michael T.; MacCoss, Michael J.; Rabinovitch, Peter S.

    2012-01-01

    Aims: We investigate the role of mitochondrial oxidative stress in mitochondrial proteome remodelling using mouse models of heart failure induced by pressure overload. Methods and results: We demonstrate that mice overexpressing catalase targeted to mitochondria (mCAT) attenuate pressure overload-induced heart failure. An improved method of label-free unbiased analysis of the mitochondrial proteome was applied to the mouse model of heart failure induced by transverse aortic constriction (TAC). A total of 425 mitochondrial proteins were compared between wild-type and mCAT mice receiving TAC or sham surgery. The changes in the mitochondrial proteome in heart failure included a decreased abundance of proteins involved in fatty acid metabolism and an increased abundance of proteins in glycolysis, apoptosis, the mitochondrial unfolded protein response and proteolysis, transcriptional and translational control, and developmental processes, as well as responses to stimuli. Overexpression of mCAT better preserved proteins involved in fatty acid metabolism and attenuated the increases in apoptotic and proteolytic enzymes. Interestingly, gene ontology analysis also showed that monosaccharide metabolic processes and protein folding/proteolysis were overrepresented only in mCAT, but not in wild-type, mice in response to TAC. Conclusion: This is the first study to demonstrate that scavenging mitochondrial reactive oxygen species (ROS) by mCAT not only attenuates most of the mitochondrial proteome changes in heart failure, but also induces a subset of unique alterations. These changes represent processes that are adaptive to the increased work and metabolic requirements of pressure overload, but which are normally inhibited by overproduction of mitochondrial ROS. PMID:22012956

  3. Clinical risk analysis with failure mode and effect analysis (FMEA) model in a dialysis unit.

    PubMed

    Bonfant, Giovanna; Belfanti, Pietro; Paternoster, Giuseppe; Gabrielli, Danila; Gaiter, Alberto M; Manes, Massimo; Molino, Andrea; Pellu, Valentina; Ponzetti, Clemente; Farina, Massimo; Nebiolo, Pier E

    2010-01-01

    The aim of clinical risk management is to improve the quality of care provided by health care organizations and to assure patients' safety. Failure mode and effect analysis (FMEA) is a tool employed for clinical risk reduction. We applied FMEA to chronic hemodialysis outpatients. FMEA steps: (i) process study: we recorded phases and activities. (ii) Hazard analysis: we listed activity-related failure modes and their effects; described control measures; assigned severity, occurrence and detection scores for each failure mode; and calculated the risk priority number (RPN) for each failure mode by multiplying the 3 scores. The total RPN is calculated by summing the individual failure mode RPNs. (iii) Planning: we prioritized the RPNs on a priority matrix, taking into account the 3 scores, analyzed the causes of the failure modes, made recommendations and planned new control measures. (iv) Monitoring: after failure mode elimination or reduction, we compared the resulting RPN with the previous one. Our failure modes with the highest RPNs stemmed from communication and organization problems. Two tools were created to improve information flow: "dialysis agenda" software and nursing datasheets. We scheduled nephrological examinations, and we changed both the medical and the nursing organization. The total RPN decreased from 892 to 815 (8.6%) after reorganization. Employing FMEA, we worked on a few critical activities and reduced patients' clinical risk. A priority matrix also takes into account the weight of the control measures: we believe this evaluation is quick, owing to simple priority selection, and that it shortens action times.
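The scoring arithmetic in steps (ii)-(iv) can be sketched in a few lines. The failure modes and scores below are invented for illustration; they are not the dialysis unit's actual data.

```python
def rpn(severity, occurrence, detection):
    """Risk priority number: the product of the three scores."""
    return severity * occurrence * detection

# Hypothetical failure modes with (severity, occurrence, detection) scores.
failure_modes = {
    "wrong dialysate prescription": (8, 3, 4),
    "missed nephrological exam":    (5, 6, 5),
    "mislabelled blood sample":     (7, 2, 3),
}

rpns = {fm: rpn(*scores) for fm, scores in failure_modes.items()}
total_rpn = sum(rpns.values())                       # overall process risk indicator
priority = sorted(rpns, key=rpns.get, reverse=True)  # highest RPN first
```

After an intervention, the same calculation is repeated and the new total RPN is compared with the old one, as in step (iv).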

  4. Riding the Right Wavelet: Quantifying Scale Transitions in Fractured Rocks

    NASA Astrophysics Data System (ADS)

    Rizzo, Roberto E.; Healy, David; Farrell, Natalie J.; Heap, Michael J.

    2017-12-01

    The mechanics of brittle failure is a well-described multiscale process that involves a rapid transition from distributed microcracks to localization along a single macroscopic rupture plane. However, considerable uncertainty exists regarding both the length scale at which this transition occurs and the underlying causes that prompt this shift from a distributed to a localized assemblage of cracks or fractures. For the first time, we used an image analysis tool, based on a two-dimensional continuous wavelet analysis, developed to investigate orientation changes at different scales in images of fracture patterns in faulted materials. We detected the abrupt change in the fracture pattern from distributed tensile microcracks to localized shear failure in a fracture network produced by triaxial deformation of a sandstone core plug. The presented method will contribute to our ability to unravel the physical processes underlying catastrophic rock failure, including the nucleation of earthquakes, landslides, and volcanic eruptions.

  5. Software analysis handbook: Software complexity analysis and software reliability estimation and prediction

    NASA Technical Reports Server (NTRS)

    Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron

    1994-01-01

    This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.

  6. Electrical failure debug using interlayer profiling method

    NASA Astrophysics Data System (ADS)

    Yang, Thomas; Shen, Yang; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh

    2017-03-01

    It is well known that as technology nodes shrink, the number of design rules increases, design structures become more regular, and the number of manufacturing process steps grows as well. Normal inspection tools can only monitor hard failures on a single layer. Electrical failures caused by interlayer misalignment can only be detected through testing. This paper presents a working flow that uses pattern analysis interlayer profiling techniques to convert multi-layer physical information into grouped, linked parameter values. Combining this data analysis flow with an electrical model allows us to find critical regions in a layout for yield learning.

  7. Human factors process failure modes and effects analysis (HF PFMEA) software tool

    NASA Technical Reports Server (NTRS)

    Chandler, Faith T. (Inventor); Relvini, Kristine M. (Inventor); Shedd, Nathaneal P. (Inventor); Valentino, William D. (Inventor); Philippart, Monica F. (Inventor); Bessette, Colette I. (Inventor)

    2011-01-01

    Methods, computer-readable media, and systems for automatically performing Human Factors Process Failure Modes and Effects Analysis for a process are provided. At least one task involved in a process is identified, where the task includes at least one human activity. The human activity is described using at least one verb. A human error potentially resulting from the human activity is automatically identified; the human error is related to the verb used in describing the task. The likelihood of occurrence, detection, and correction of the human error is identified, along with the severity of its effect; together these determine the risk of potential harm. The risk of potential harm is compared with a risk threshold to identify the appropriateness of corrective measures.
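A minimal sketch of the kind of threshold comparison described above follows. The function name, the way detection and correction probabilities are combined, and the threshold value are all assumptions for illustration, not the patented method.

```python
# Hypothetical risk check: an error contributes harm only if it occurs and
# then escapes both detection and correction; the result is weighted by the
# severity of the error's effect and compared with a risk threshold.
def needs_corrective_measures(p_occur, p_detect, p_correct, severity,
                              risk_threshold=2.0):
    p_harm = p_occur * (1 - p_detect * p_correct)
    return p_harm * severity > risk_threshold
```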

  8. Systems Theoretic Process Analysis Applied to an Offshore Supply Vessel Dynamic Positioning System

    DTIC Science & Technology

    2016-06-01

    Identifies additional safety issues that were either not identified or inadequately mitigated through the use of Fault Tree Analysis and Failure Modes and Effects Analysis; the report surveys these techniques and includes a Fault Tree Analysis comparison.

  9. A cascading failure analysis tool for post processing TRANSCARE simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    This is a MATLAB-based tool to post-process simulation results from the EPRI software TRANSCARE for massive cascading failure analysis following severe disturbances. There are a few key modules available in this tool: 1. automatically create a contingency list for TRANSCARE simulations, including substation outages above a certain kV threshold, N-k (1, 2 or 3) generator outages, and branch outages; 2. read in and analyze a CKO file of PCG definition, an initiating event list, and a CDN file; 3. post-process all the simulation results saved in a CDN file and perform critical event corridor analysis; 4. provide a summary of TRANSCARE simulations; 5. identify the most frequently occurring event corridors in the system; and 6. rank the contingencies using a user-defined security index to quantify consequences in terms of total load loss, total number of cascades, etc.

  10. Failure mode and effects analysis outputs: are they valid?

    PubMed

    Shebl, Nada Atef; Franklin, Bryony Dean; Barber, Nick

    2012-06-10

    Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: Face validity: by comparing the FMEA participants' mapped processes with observational work. Content validity: by presenting the FMEA findings to other healthcare professionals. Criterion validity: by comparing the FMEA findings with data reported on the trust's incident report database. Construct validity: by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number. Face validity was positive as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. 
As for FMEA's methodology for scoring failures, there were discrepancies between the teams' estimates and similar incidents reported on the trust's incident database. Furthermore, the concept of multiplying ordinal scales to prioritise failures is mathematically flawed. Until FMEA's validity is further explored, healthcare organisations should not solely depend on their FMEA results to prioritise patient safety issues.
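The mathematical objection above can be made concrete with a few lines of code and made-up scores: two very different risk profiles collapse to the same RPN, and an order-preserving relabelling of the severity scale (which should be harmless for genuinely ordinal data) reverses a ranking.

```python
def rpn(s, o, d):
    """RPN as conventionally computed: product of ordinal 1-10 scores."""
    return s * o * d

# A catastrophic-but-rare failure and a trivial-but-common one get equal RPNs.
same_rpn = rpn(9, 2, 2) == rpn(2, 9, 2) == 36

# Relabel severity 1..10 as 1, 2, 4, ..., 512 (the order is preserved).
relabel = {i: 2 ** (i - 1) for i in range(1, 11)}
a = (6, 4, 4)   # RPN 96: ranked above b on the original scale
b = (9, 2, 5)   # RPN 90
rank_reversed = rpn(relabel[a[0]], a[1], a[2]) < rpn(relabel[b[0]], b[1], b[2])
```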

  11. The struggling student: a thematic analysis from the self-regulated learning perspective.

    PubMed

    Patel, Rakesh; Tarrant, Carolyn; Bonas, Sheila; Yates, Janet; Sandars, John

    2015-04-01

    Students who engage in self-regulated learning (SRL) are more likely to achieve academic success compared with students who have deficits in SRL and tend to struggle with academic performance. Understanding how poor SRL affects the response to failure at assessment will inform the development of better remediation. Semi-structured interviews were conducted with 55 students who had failed the final re-sit assessment at two medical schools in the UK to explore their use of SRL processes. A thematic analysis approach was used to identify the factors, from an SRL perspective, that prevented students from appropriately and adaptively overcoming failure, and confined them to a cycle of recurrent failure. Struggling students did not utilise key SRL processes, which caused them to make inappropriate choices of learning strategies for written and clinical formats of assessment, and to use maladaptive strategies for coping with failure. Their normalisation of the experience and external attribution of failure represented barriers to their taking up of formal support and seeking informal help from peers. This study identified that struggling students had problems with SRL, which caused them to enter a cycle of failure as a result of their limited attempts to access formal and informal support. Implications for how medical schools can create a culture that supports the seeking of help and the development of SRL, and improves remediation for struggling students, are discussed. © 2015 John Wiley & Sons Ltd.

  12. ADM guidance-Ceramics: guidance to the use of fractography in failure analysis of brittle materials.

    PubMed

    Scherrer, Susanne S; Lohbauer, Ulrich; Della Bona, Alvaro; Vichi, Alessandro; Tholey, Michael J; Kelly, J Robert; van Noort, Richard; Cesar, Paulo Francisco

    2017-06-01

    To provide background information and guidance on the accurate use of fractography, a powerful tool for failure analysis of dental ceramic structures. An extended palette of qualitative and quantitative fractography is provided for both in vivo and in vitro fracture surface analyses. As visual support, this guidance document provides micrographs of typical critical ceramic processing flaws, differentiating between pre- versus post-sintering cracks, grinding-damage-related failures, occlusal contact wear origins, and failures due to surface degradation. The documentation emphasizes good labeling of crack features, precise indication of the direction of crack propagation (dcp), identification of the fracture origin, and the use of fractographic photomontages of critical flaws or flaw labeling on strength data graphics. A compilation of recommendations for specific applications of fractography in dentistry is also provided. This guidance document will contribute to a more accurate use of fractography and help researchers to better identify, describe and understand the causes of failure, in both clinical and laboratory-scale situations. If adequately performed at a large scale, fractography will assist in optimizing the methods of processing and designing restorative materials and components. Clinical failures may be better understood, and consequently reduced, by sending out the correct message regarding the fracture origin in clinical trials. Copyright © 2017 The Academy of Dental Materials. All rights reserved.

  13. Intergranular degradation assessment via random grain boundary network analysis

    DOEpatents

    Kumar, Mukul; Schwartz, Adam J.; King, Wayne E.

    2002-01-01

    A method is disclosed for determining the resistance of polycrystalline materials to intergranular degradation or failure (IGDF) by analyzing the random grain boundary network connectivity (RGBNC) microstructure. Analysis of the disruption of the RGBNC microstructure may be used to assess the effectiveness of materials processing in increasing IGDF resistance. Comparison of the RGBNC microstructures of materials exposed to extreme operating conditions with those of unexposed materials may be used to diagnose and predict the possible onset of material failure due to intergranular degradation.

  14. Decomposition-Based Failure Mode Identification Method for Risk-Free Design of Large Systems

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Stone, Robert B.; Roberts, Rory A.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When designing products, it is crucial to assure failure- and risk-free operation in the intended operating environment. Failures are typically studied and eliminated as much as possible during the early stages of design. The few failures that go undetected result in unacceptable damage and losses in high-risk applications where public safety is of concern. Published NASA and NTSB accident reports point to a variety of components identified as sources of failures in the reported cases. In previous work, data from these reports were processed and placed in matrix form for all the system components and failure modes encountered, and then manipulated using matrix methods to determine similarities between the different components and failure modes. In this paper, these matrices are represented in the form of a linear combination of failure modes, mathematically formed using Principal Components Analysis (PCA) decomposition. The PCA decomposition results in a low-dimensionality representation of all failure modes and components of interest, represented in a transformed coordinate system. Such a representation opens the way for efficient pattern analysis and prediction of the failure modes with the highest potential risks on the final product, rather than making decisions based on the large space of component and failure mode data. The mathematics of the proposed method are explained first using a simple example problem. The method is then applied to component failure data gathered from helicopter accident reports to demonstrate its potential.
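The decomposition step can be sketched with PCA computed via the singular value decomposition. The component names, failure-mode counts, and matrix entries below are invented for illustration and are not the accident-report data.

```python
import numpy as np

# Rows = components, columns = failure modes, entries = occurrence counts.
X = np.array([
    [4.0, 0.0, 1.0],   # "gearbox"        (hypothetical)
    [3.0, 1.0, 0.0],   # "rotor hub"      (hypothetical)
    [0.0, 5.0, 2.0],   # "hydraulic pump" (hypothetical)
    [1.0, 4.0, 3.0],   # "actuator"       (hypothetical)
])

Xc = X - X.mean(axis=0)                  # centre each failure-mode column
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                           # components in the PC coordinate system
explained = s**2 / np.sum(s**2)          # variance fraction per principal component

k = 2                                    # keep a low-dimensional representation
low_dim = scores[:, :k]
```

Similar components (similar failure-mode profiles) end up close together in the `low_dim` coordinates, which is the basis for the pattern analysis the abstract describes.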

  15. A Big Data Analysis Approach for Rail Failure Risk Assessment.

    PubMed

    Jamshidi, Ali; Faghih-Roohi, Shahrzad; Hajizadeh, Siamak; Núñez, Alfredo; Babuska, Robert; Dollevoet, Rolf; Li, Zili; De Schutter, Bart

    2017-08-01

    Railway infrastructure monitoring is a vital task to ensure rail transportation safety. A rail failure could have a considerable impact not only on train delays and maintenance costs, but also on passenger safety. In this article, the aim is to assess the risk of a rail failure by analyzing a type of rail surface defect called squats, which are detected automatically among the huge number of records from video cameras. We propose an image processing approach for the automatic detection of squats, especially the severe types that are prone to rail breaks. We measure the visual length of the squats and use it to model the failure risk. For the assessment of the rail failure risk, we estimate the probability of rail failure based on the growth of squats. Moreover, we perform severity and crack growth analyses to consider the impact of rail traffic loads on defects in three different growth scenarios. The failure risk estimations are provided for several samples of squats with different crack growth lengths on a busy rail track of the Dutch railway network. The results illustrate the practicality and efficiency of the proposed approach. © 2017 The Authors Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.
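As an illustration of the modelling idea only (not the authors' actual model), a failure probability can be tied to the measured squat length, with the length growing under traffic load. The logistic form, the growth rule, and every parameter below are assumptions.

```python
import math

# Hypothetical logistic link between measured squat length and failure
# probability; l50 is the length (mm) at which P(failure) = 0.5.
def failure_probability(squat_length_mm, l50=30.0, k=0.2):
    return 1.0 / (1.0 + math.exp(-k * (squat_length_mm - l50)))

# Hypothetical crack-growth scenario: length grows linearly with the
# accumulated traffic load in million gross tonnes (MGT).
def grow(length_mm, mgt, rate_mm_per_mgt=0.5):
    return length_mm + rate_mm_per_mgt * mgt
```

Different growth scenarios would then correspond to different `rate_mm_per_mgt` values, with the failure probability re-evaluated at each projected length.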

  16. Application of failure mode and effect analysis in an assisted reproduction technology laboratory.

    PubMed

    Intra, Giulia; Alteri, Alessandra; Corti, Laura; Rabellotti, Elisa; Papaleo, Enrico; Restelli, Liliana; Biondo, Stefania; Garancini, Maria Paola; Candiani, Massimo; Viganò, Paola

    2016-08-01

    Assisted reproduction technology laboratories have a very high degree of complexity. Mismatches of gametes or embryos can occur, with catastrophic consequences for patients. To minimize the risk of error, a multi-institutional working group applied failure mode and effects analysis (FMEA) to each critical activity/step as a method of risk assessment. This analysis led to the identification of the potential failure modes, together with their causes and effects, using the risk priority number (RPN) scoring system. In total, 11 individual steps and 68 different potential failure modes were identified. The highest-ranked failure modes, with an RPN score of 25, encompassed 17 failures and pertained to "patient mismatch" and "biological sample mismatch". The maximum reduction in risk, with the RPN reduced from 25 to 5, was mostly related to the introduction of witnessing. The RPNs of the critical failure modes in sample processing were improved by 50% by focusing on staff training. Three indicators of FMEA success, based on technical skill, competence and traceability, were evaluated after FMEA implementation. Witnessing by a second human operator should be introduced in the laboratory to avoid sample mix-ups. These findings confirm that FMEA can effectively reduce errors in assisted reproduction technology laboratories. Copyright © 2016 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  17. Trends in non-stationary signal processing techniques applied to vibration analysis of wind turbine drive train - A contemporary survey

    NASA Astrophysics Data System (ADS)

    Uma Maheswari, R.; Umamaheswari, R.

    2017-02-01

    A Condition Monitoring System (CMS) offers substantial potential economic benefits and enables prognostic maintenance in wind turbine-generator failure prevention. Vibration monitoring and analysis is a powerful tool in drive train CMS that enables the early detection of impending failure/damage. In variable speed drives such as wind turbine-generator drive trains, the acquired vibration signal is non-stationary and non-linear. Traditional stationary signal processing techniques are inefficient at diagnosing machine faults under time-varying conditions. Current research in CMS for drive trains focuses on developing and improving non-linear, non-stationary feature extraction and fault classification algorithms to improve fault detection/prediction sensitivity and selectivity, thereby reducing misdetection and false alarm rates. Stationary signal processing algorithms employed in vibration analysis have been reviewed extensively in the literature. In this paper, an attempt is made to review the recent research advances in non-linear, non-stationary signal processing algorithms particularly suited to variable speed wind turbines.

  18. Typical uses of NASTRAN in a petrochemical industry

    NASA Technical Reports Server (NTRS)

    Winter, J. R.

    1978-01-01

    NASTRAN was principally used to perform failure analysis of, and to redesign, process equipment. It was also employed in the evaluation of vendor designs and of proposed design modifications to existing process equipment. Stress analyses of forced draft fans, distillation trays, metal stacks, jacketed pipes, heat exchangers, large centrifugal fans, and agitator support structures are described.

  19. Spatio-temporal changes in river bank mass failures in the Lockyer Valley, Queensland, Australia

    NASA Astrophysics Data System (ADS)

    Thompson, Chris; Croke, Jacky; Grove, James; Khanal, Giri

    2013-06-01

    Wet-flow river bank failure processes are poorly understood relative to the more commonly studied processes of fluvial entrainment and gravity-induced mass failures. Using high resolution topographic data (LiDAR) and near-coincident aerial photography, this study documents the downstream distribution of river bank mass failures which occurred as a result of a catastrophic flood in the Lockyer Valley in January 2011. In addition, this distribution is compared with wet-flow mass failure features from previous large floods. The downstream analysis of these two temporal data sets indicated that the failures occur across a range of river lengths, catchment areas, bank heights and angles, and do not appear to be scale-dependent or spatially restricted to certain downstream zones. The downstream trends of each bank failure distribution show limited spatial overlap, with only 17% of wet flows common to both distributions. The modification of these features during the catastrophic flood of January 2011 also indicated that such features tend to form at some 'optimum' shape and show limited evidence of subsequent enlargement, even when flow and energy conditions within the banks and channel were high. Elevation changes indicate that such features show evidence of infilling during subsequent floods. The preservation of these features in the landscape for a period of at least 150 years suggests that the seepage processes dominant in their initial formation appear to have a limited role in their continuing enlargement over time. No evidence of gully extension or headwall retreat is evident. It is estimated that at least 12 inundation events would be required to fill these failures, based on the average net elevation change recorded for the 2011 event. Existing conceptual models of downstream bank erosion process zones may need to consider a wider array of mass failure processes to accommodate wet-flow failures.

  20. Failure analysis of single-bolted joint for lightweight composite laminates and metal plate

    NASA Astrophysics Data System (ADS)

    Li, Linjie; Qu, Junli; Liu, Xiangdong

    2018-01-01

    A three-dimensional progressive damage model was developed in ANSYS to predict the damage accumulation of a single bolted joint in composite laminates under in-plane tensile loading. First, we describe the formulation and algorithm of this model. Second, we calculate the failure loads of the joint in fibre-reinforced epoxy laminated composite plates and compare them with experimental results, which validates that our model can appropriately simulate the ultimate tensile strength of the joints and the whole failure process of the structure. Finally, the model is applied to study the failure process of a lightweight composite material (USN125). The study also has great potential to provide a strong basis for bolted joint design in composite laminates, as well as a simple tool for comparing different laminate geometries and bolt arrangements.

  1. 3-Dimensional Root Cause Diagnosis via Co-analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Ziming; Lan, Zhiling; Yu, Li

    2012-01-01

    With the growth of system size and complexity, reliability has become a major concern for large-scale systems. Upon the occurrence of a failure, system administrators typically trace the events in Reliability, Availability, and Serviceability (RAS) logs for root cause diagnosis. However, the RAS log contains only limited diagnosis information. Moreover, manual processing is time-consuming, error-prone, and not scalable. To address the problem, in this paper we present an automated root cause diagnosis mechanism for large-scale HPC systems. Our mechanism examines multiple logs to provide a 3-D fine-grained root cause analysis. Here, 3-D means that our analysis will pinpoint the failure layer, the time, and the location of the event that causes the problem. We evaluate our mechanism by means of real logs collected from a production IBM Blue Gene/P system at Oak Ridge National Laboratory. It successfully identifies failure layer information for 219 failures during a 23-month period. Furthermore, it effectively identifies the triggering events with time and location information, even when the triggering events occur hundreds of hours before the resulting failures.

  2. A failure modes and effects analysis study for gynecologic high-dose-rate brachytherapy.

    PubMed

    Mayadev, Jyoti; Dieterich, Sonja; Harse, Rick; Lentz, Susan; Mathai, Mathew; Boddu, Sunita; Kern, Marianne; Courquin, Jean; Stern, Robin L

    2015-01-01

    To improve the quality of our gynecologic brachytherapy practice and reduce reportable events, we performed a process analysis following the failure modes and effects analysis (FMEA) method. The FMEA included a multidisciplinary team specifically targeting the tandem and ring brachytherapy procedure. The treatment process was divided into six subprocesses and failure modes (FMs). A scoring guideline was developed based on published FMEA studies and assigned through team consensus. FMs were ranked according to overall and severity scores. FMs with scores greater than 5% of the highest risk priority number (RPN) score were selected for in-depth analysis. The efficiency of each existing quality assurance measure to detect each FM was analyzed. We identified 170 FMs, and 99 were scored. RPN scores ranged from 1 to 192. Of the 13 highest-ranking FMs with RPN scores >80, half had severity scores of 8 or 9, with no mode having a severity of 10. Of these FMs, the originating process steps were simulation (5), treatment planning (5), treatment delivery (2), and insertion (1). Our high-ranking FMs focused on communication and the potential for applicator movement. Evaluation of the efficiency and comprehensiveness of our quality assurance program showed coverage of all but three of the top 49 FMs ranked by RPN. This is the first reported FMEA process for a comprehensive gynecologic brachytherapy procedure overview. We were able to identify FMs that could potentially and severely impact the patient's treatment. We continue to adjust our quality assurance program based on the results of our FMEA analysis. Published by Elsevier Inc.
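
    The RPN scoring used above is the standard FMEA product of three ordinal scores (Occurrence × Severity × Detection). A minimal Python sketch, with hypothetical failure modes and scores (not the study's data), showing how modes would be ranked and a >80 threshold applied:

```python
# Hedged sketch of FMEA Risk Priority Number ranking.
# Failure modes and their O, S, D scores below are illustrative only.

def rpn(occurrence, severity, detection):
    """RPN is the product of three ordinal scores (here each on a 1-10 scale)."""
    return occurrence * severity * detection

failure_modes = [
    ("applicator movement after imaging", 4, 8, 6),
    ("wrong source position transferred", 2, 9, 5),
    ("delayed treatment delivery", 5, 3, 2),
]

ranked = sorted(
    ((name, rpn(o, s, d)) for name, o, s, d in failure_modes),
    key=lambda x: x[1], reverse=True,
)
# Modes above the (hypothetical) threshold get in-depth analysis.
high_risk = [(name, score) for name, score in ranked if score > 80]
for name, score in ranked:
    print(f"RPN {score:3d}  {name}")
```

Because O, S and D are ordinal, the product only rank-orders the modes; it carries no absolute meaning (a point the Life Cost-Based FMEA record later in this list elaborates).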

  3. Incidence of patient safety events and process-related human failures during intra-hospital transportation of patients: retrospective exploration from the institutional incident reporting system.

    PubMed

    Yang, Shu-Hui; Jerng, Jih-Shuin; Chen, Li-Chin; Li, Yu-Tsu; Huang, Hsiao-Fang; Wu, Chao-Ling; Chan, Jing-Yuan; Huang, Szu-Fen; Liang, Huey-Wen; Sun, Jui-Sheng

    2017-11-03

    Intra-hospital transportation (IHT) might compromise patient safety because of different care settings and higher demands on human operation. Reports regarding the incidence of IHT-related patient safety events and human failures remain limited. To perform a retrospective analysis of IHT-related events, human failures and unsafe acts. A hospital-wide process for IHT and the database from the incident reporting system in a medical centre in Taiwan. All eligible IHT-related patient safety events between January 2010 and December 2015 were included. Incidence rate of IHT-related patient safety events, human failure modes, and types of unsafe acts. There were 206 patient safety events in 2 009 013 IHT sessions (102.5 per 1 000 000 sessions). Most events (n=148, 71.8%) did not involve patient harm, and process events (n=146, 70.9%) were most common. Events at the location of arrival (n=101, 49.0%) were most frequent; this location accounted for 61.0% and 44.2% of events with patient harm and those without harm, respectively (p<0.001). Of the events with human failures (n=186), the most common related process step was the preparation of the transportation team (n=91, 48.9%). Contributing unsafe acts included perceptual errors (n=14, 7.5%), decision errors (n=56, 30.1%), skill-based errors (n=48, 25.8%), and non-compliance (n=68, 36.6%). Multivariate analysis showed that human failure found in the arrival and hand-off sub-process (OR 4.84, p<0.001) was associated with increased patient harm, whereas the presence of omission (OR 0.12, p<0.001) was associated with less patient harm. This study shows a need to reduce human failures to prevent patient harm during intra-hospital transportation. We suggest that the transportation team pay specific attention to the sub-process at the location of arrival and prevent errors other than omissions. Long-term monitoring of IHT-related events is also warranted.
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
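
    The headline incidence rate in this record is a simple proportion; a one-line check of the arithmetic reported above:

```python
# Incidence rate from the abstract: events per million IHT sessions.
events = 206
sessions = 2_009_013
rate_per_million = events / sessions * 1_000_000
print(f"{rate_per_million:.1f} per 1,000,000 sessions")  # 102.5 per 1,000,000 sessions
```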

  4. Inter-computer communication architecture for a mixed redundancy distributed system

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.; Adams, Stuart J.

    1987-01-01

    The triply redundant intercomputer (IC) network for the Advanced Information Processing System (AIPS), an architecture developed to serve as the core avionics system for a broad range of aerospace vehicles, is discussed. The AIPS intercomputer network provides a high-speed, Byzantine-fault-resilient communication service between processing sites, even in the presence of arbitrary failures of simplex and duplex processing sites on the IC network. The IC network contention poll has evolved from the Laning Poll. An analysis of the failure modes and effects and a simulation of the AIPS contention poll demonstrate the robustness of the system.
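
    Fault masking in a triplex network of this kind ultimately rests on majority voting across redundant channels. A toy Python sketch of the voting step only; full Byzantine resilience additionally requires agreement protocols (e.g. interactive consistency exchanges), which this does not show:

```python
from collections import Counter

def majority_vote(values):
    """Return the value reported by at least two of three redundant channels,
    or None if no majority exists (a detected, uncorrectable disagreement)."""
    value, count = Counter(values).most_common(1)[0]
    return value if count >= 2 else None

# One faulty channel emitting an arbitrary value is out-voted by the two good ones.
print(majority_vote([0x5A, 0x5A, 0xFF]))  # prints 90
```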

  5. Parts, Materials, and Processes Experience Summary. Volume 1: Catalog of ALERT and Other Information on Basic Design, Reliability, Quality and Applications Programs

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The ALERT program, a system for communicating common problems with parts, materials, and processes, is condensed and catalogued. Expanded information on selected topics is provided by relating the problem area (failure) to the cause, the investigations and findings, the suggestions for avoidance (inspections, screening tests, proper part applications), and failure analysis procedures. The basic objective of ALERT is the avoidance of the recurrence of parts, materials, and processes problems, thus improving the reliability of equipment produced for and used by the government.

  6. Comprehensive protocol of traceability during IVF: the result of a multicentre failure mode and effect analysis.

    PubMed

    Rienzi, L; Bariani, F; Dalla Zorza, M; Albani, E; Benini, F; Chamayou, S; Minasi, M G; Parmegiani, L; Restelli, L; Vizziello, G; Costa, A Nanni

    2017-08-01

    Can traceability of gametes and embryos be ensured during IVF? The use of a simple and comprehensive traceability system that includes the most susceptible phases of the IVF process minimizes the risk of mismatches. Mismatches in IVF are very rare but unfortunately possible, with dramatic consequences for both patients and health care professionals. Traceability is thus a fundamental aspect of the treatment. A clear process of patient and cell identification involving witnessing protocols has to be in place in every unit. To identify potential failures in the traceability process and to develop strategies to mitigate the risk of mismatches, failure mode and effects analysis (FMEA) has previously been used effectively. The FMEA approach is, however, a subjective analysis, strictly related to specific protocols, and thus the results are not always widely applicable. To reduce subjectivity and to obtain a widespread comprehensive protocol of traceability, a multicentre, centrally coordinated FMEA was performed. Seven representative Italian centres (three public and four private) were selected. The study had a duration of 21 months (from April 2015 to December 2016) and was centrally coordinated by a team of experts: a risk analysis specialist, an expert embryologist and a specialist in human factors. Principal investigators of each centre were first instructed about proactive risk assessment and FMEA methodology. A multidisciplinary team to perform the FMEA analysis was then formed in each centre. After mapping the traceability process, each team identified the possible causes of mistakes in their protocol. A risk priority number (RPN) for each identified potential failure mode was calculated. The results of the FMEA analyses were centrally investigated and consistent corrective measures suggested. The teams performed new FMEA analyses after the recommended implementations.
In each centre, this study involved the laboratory director, the person responsible for Quality Control & Quality Assurance, embryologist(s), gynaecologist(s), nurse(s) and administration. The FMEA analyses were performed according to Joint Commission International methodology. The FMEA teams identified seven main process phases: oocyte collection, sperm collection, gamete processing, insemination, embryo culture, embryo transfer and gamete/embryo cryopreservation. A mean of 19.3 (SD ± 5.8) associated process steps and 41.9 (SD ± 12.4) possible failure modes were recognized per centre. An RPN ≥15 was calculated in a mean of 6.4 steps (range 2-12, SD ± 3.60). A total of 293 failure modes were centrally analysed, 45 of which were considered at medium/high risk. After implementation of the corrective measures and re-evaluation, a significant reduction in the RPNs in all centres (RPN <15 for all steps) was observed. A simple and comprehensive traceability system was designed as the result of the seven FMEA analyses. The validity of FMEA is in general questionable due to the subjectivity of the judgements. The design of this study has, however, minimized this risk by introducing external experts for the analysis of the FMEA results. Specific situations such as sperm/oocyte donation, import/export and pre-implantation genetic testing were not taken into consideration. Finally, this study is limited to the analysis of failure modes that may lead to mismatches; other possible procedural mistakes are not accounted for. Every single IVF centre should have a clear and reliable protocol for identification of patients and traceability of cells during manipulation. The results of this study can support IVF groups in better recognizing critical steps in their protocols, understanding the identification and witnessing process, and in turn enhancing safety by introducing validated corrective measures.
This study was designed by the Italian Society of Embryology Reproduction and Research (SIERR) and funded by the Italian National Transplant Centre (CNT) of the Italian National Institute of Health (ISS). The authors have no conflicts of interest. N/A. © The Author 2017. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  7. A systems engineering approach to automated failure cause diagnosis in space power systems

    NASA Technical Reports Server (NTRS)

    Dolce, James L.; Faymon, Karl A.

    1987-01-01

    Automatic failure-cause diagnosis is a key element in the autonomous operation of space power systems such as the Space Station's. A rule-based diagnostic system has been developed for determining the cause of degraded performance. The knowledge required for such diagnosis is elicited from the systems engineering process by using traditional failure analysis techniques. Symptoms, failures, causes, and detector information are represented with structured data, and diagnostic procedural knowledge is represented with rules. Detected symptoms instantiate failure modes and possible causes consistent with currently held beliefs about the likelihood of the cause. A diagnosis concludes with an explanation of the observed symptoms in terms of a chain of possible causes and subcauses.
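
    The symptom-to-cause matching such a rule-based diagnoser performs can be sketched in a few lines. The rules, symptoms and causes below are hypothetical illustrations, not the NASA system's knowledge base:

```python
# Toy rule base: each rule maps a set of observed symptoms to a candidate cause.
rules = [
    ({"bus_voltage_low", "battery_temp_high"}, "battery cell degradation"),
    ({"bus_voltage_low"}, "solar array output shortfall"),
]

def diagnose(symptoms):
    """Return candidate causes whose symptom patterns are fully observed,
    most specific (largest matching pattern) first."""
    matches = [(pattern, cause) for pattern, cause in rules
               if pattern <= symptoms]  # subset test: all rule symptoms seen
    matches.sort(key=lambda m: len(m[0]), reverse=True)
    return [cause for _, cause in matches]

print(diagnose({"bus_voltage_low", "battery_temp_high"}))
```

A real diagnoser would additionally weight candidates by prior belief in each cause and chain rules into cause/subcause explanations, as the abstract describes.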

  8. Enhancing the Possibility of Success by Measuring the Probability of Failure in an Educational Program.

    ERIC Educational Resources Information Center

    Brookhart, Susan M.; And Others

    1997-01-01

    Process Analysis is described as a method for identifying and measuring the probability of events that could cause the failure of a program, resulting in a cause-and-effect tree structure of events. The method is illustrated through the evaluation of a pilot instructional program at an elementary school. (SLD)
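
    Once the cause-and-effect tree assigns probabilities to the events that could sink the program, they can be combined upward. A minimal sketch, assuming the simplest case of independent cause events feeding an OR node (the probabilities are hypothetical):

```python
from math import prod

def p_any(probs):
    """Probability that at least one of several independent cause events occurs:
    1 - product of the individual survival probabilities."""
    return 1 - prod(1 - p for p in probs)

# Three hypothetical independent failure paths in the event tree.
print(round(p_any([0.10, 0.05, 0.02]), 4))  # prints 0.1621
```

AND nodes (all events required) would multiply the probabilities directly; real trees mix both node types.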

  9. A Monte Carlo Risk Analysis of Life Cycle Cost Prediction.

    DTIC Science & Technology

    1975-09-01

    ... process which occurs with each FLU failure. With this in mind there is no alternative other than the binomial distribution. ... With all of ... Weibull distribution of failures as selected by user. For each failure of the ith FLU, the model then samples from the binomial distribution to determine ... which is sampled from the binomial. Neither of the two conditions for normality are met, i.e., that RTS be close to .5 and the number of samples close ...
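
    The snippet above describes sampling failure times (exponential or Weibull) and then, for each failure, a binomial/Bernoulli draw on whether the unit is repaired at the station (RTS) or returned to depot. A hedged sketch of that per-failure draw, with made-up MTBF and RTS parameters rather than the report's values:

```python
import random

random.seed(1)  # fixed seed so the sketch is repeatable

def simulate_failures(mission_hours, mtbf, p_rts):
    """Sample exponential inter-failure times over the mission, then make a
    Bernoulli(p_rts) draw per failure: repair-this-station vs. depot return."""
    t, rts, depot = 0.0, 0, 0
    while True:
        t += random.expovariate(1.0 / mtbf)
        if t > mission_hours:
            return rts, depot
        if random.random() < p_rts:
            rts += 1
        else:
            depot += 1

rts, depot = simulate_failures(mission_hours=10_000, mtbf=500, p_rts=0.7)
print(rts, depot)
```

In a full life-cycle-cost model each branch would accrue a different cost, and many replications would be averaged Monte Carlo style.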

  10. Investigation of improving MEMS-type VOA reliability

    NASA Astrophysics Data System (ADS)

    Hong, Seok K.; Lee, Yeong G.; Park, Moo Y.

    2003-12-01

    MEMS technologies have been applied to many areas, such as optical communications, gyroscopes, bio-medical components and so on. In the optical communication field, MEMS technologies are essential, especially in multi-dimensional optical switches and Variable Optical Attenuators (VOAs). This paper describes the process for the development of MEMS-type VOAs with good optical performance and improved reliability. Generally, MEMS VOAs have been fabricated by a silicon micro-machining process, precise fibre alignment and a sophisticated packaging process. Because a VOA is composed of many structures with various materials, it is difficult to make the devices reliable. We have developed MEMS-type VOAs with many failure mode considerations (FMEA: Failure Mode Effect Analysis) in the initial design step, predicted critical failure factors and revised the design, and confirmed the reliability by preliminary testing. The predicted failure factors were moisture, the bonding strength of the wire between the MEMS chip and the TO-CAN, and instability of supplied signals. Statistical quality control tools (ANOVA, t-test and so on) were used to control these potential failure factors and produce optimum manufacturing conditions. To sum up, we have successfully developed reliable MEMS-type VOAs with good optical performance by controlling potential failure factors and using statistical quality control tools. As a result, the developed VOAs passed international reliability standards (Telcordia GR-1221-CORE).

  12. Reliability analysis of structures under periodic proof tests in service

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1976-01-01

    A reliability analysis of structures subjected to random service loads and periodic proof tests treats gust loads and maneuver loads as random processes. Crack initiation, crack propagation, and strength degradation are treated as the fatigue process. The time to fatigue crack initiation and ultimate strength are random variables. Residual strength decreases during crack propagation, so that failure rate increases with time. When a structure fails under periodic proof testing, a new structure is built and proof-tested. The probability of structural failure in service is derived from treatment of all the random variables, strength degradations, service loads, proof tests, and the renewal of failed structures. Some numerical examples are worked out.

  13. Integration of Value Stream Map and Healthcare Failure Mode and Effect Analysis into Six Sigma Methodology to Improve Process of Surgical Specimen Handling.

    PubMed

    Hung, Sheng-Hui; Wang, Pa-Chun; Lin, Hung-Chun; Chen, Hung-Ying; Su, Chao-Ton

    2015-01-01

    Specimen handling is a critical patient safety issue. Problematic handling processes, such as misidentification (of patients, surgical site, or specimen counts), specimen loss, or improper specimen preparation, can lead to serious patient harm and lawsuits. Value stream map (VSM) is a tool used to find non-value-added work, enhance quality, and reduce the cost of the studied process. On the other hand, healthcare failure mode and effect analysis (HFMEA) is now frequently employed to avoid possible medication errors in healthcare processes. Both have a goal similar to that of the Six Sigma methodology for process improvement. This study proposes a model that integrates VSM and HFMEA into the define, measure, analyze, improve, and control (DMAIC) framework of Six Sigma. A Six Sigma project for improving the process of surgical specimen handling in a hospital was conducted to demonstrate the effectiveness of the proposed model.

  14. Plasma process control with optical emission spectroscopy

    NASA Astrophysics Data System (ADS)

    Ward, P. P.

    Plasma processes for cleaning, etching and desmear of electronic components and printed wiring boards (PWBs) are difficult to predict and control. Non-uniformity of most plasma processes and sensitivity to environmental changes make it difficult to maintain process stability from day to day. To assure plasma process performance, weight-loss coupons or post-plasma destructive testing must be used. The problem with these techniques is that they are not real-time methods and do not allow for immediate diagnosis and process correction. These methods often require scrapping some fraction of a batch to ensure the integrity of the rest. Since these methods verify a successful cycle with post-plasma diagnostics, poor test results often determine that a batch is substandard and the resulting parts unusable. Both of these methods are a costly part of the overall fabrication cost. A more efficient method of testing would allow for constant monitoring of plasma conditions and process control. Process failures should be detected before the parts being treated are damaged. Real-time monitoring would allow for instantaneous corrections. Multiple-site monitoring would allow for process mapping within one system or simultaneous monitoring of multiple systems. Optical emission spectroscopy conducted external to the plasma apparatus would allow for this sort of multifunctional analysis without perturbing the glow discharge. In this paper, optical emission spectroscopy for non-intrusive, in situ process control will be explored. A discussion of this technique as it applies to process control, failure analysis and endpoint determination will be conducted. Methods for identifying process failures and for determining the progress and end of etch back and desmear processes will be discussed.

  15. Local-global analysis of crack growth in continuously reinforced ceramic matrix composites

    NASA Technical Reports Server (NTRS)

    Ballarini, Roberto; Ahmed, Shamim

    1989-01-01

    This paper describes the development of a mathematical model for predicting the strength and micromechanical failure characteristics of continuously reinforced ceramic matrix composites. The local-global analysis models the vicinity of a propagating crack tip as a local heterogeneous region (LHR) consisting of spring-like representation of the matrix, fibers and interfaces. Parametric studies are conducted to investigate the effects of LHR size, component properties, and interface conditions on the strength and sequence of the failure processes in the unidirectional composite system.

  16. Interference fits and stress-corrosion failure. [aircraft parts fatigue life analysis

    NASA Technical Reports Server (NTRS)

    Hanagud, S.; Carter, A. E.

    1976-01-01

    It is pointed out that any proper design of interference-fit fasteners, interference-fit bushings, or stress-coining processes should consider both stress-corrosion susceptibility and fatigue-life improvement together. Investigations leading to such a methodology are discussed. A service failure analysis of actual aircraft parts is considered, along with the stress-corrosion susceptibility of cold-worked interference-fit bushings. The optimum design of the amount of interference is considered, giving attention to stress formulas and aspects of design methodology.

  17. All-inkjet-printed thin-film transistors: manufacturing process reliability by root cause analysis

    PubMed Central

    Sowade, Enrico; Ramon, Eloi; Mitra, Kalyan Yoti; Martínez-Domingo, Carme; Pedró, Marta; Pallarès, Jofre; Loffredo, Fausta; Villani, Fulvia; Gomes, Henrique L.; Terés, Lluís; Baumann, Reinhard R.

    2016-01-01

    We report on the detailed electrical investigation of all-inkjet-printed thin-film transistor (TFT) arrays, focusing on TFT failures and their origins. The TFT arrays were manufactured on flexible polymer substrates under ambient conditions, without the need for a cleanroom environment or inert atmosphere, and at a maximum temperature of 150 °C. Alternative manufacturing processes for electronic devices such as inkjet printing suffer from lower accuracy compared to traditional microelectronic manufacturing methods. Furthermore, printing methods usually do not allow the manufacturing of electronic devices with high yield (a high number of functional devices). In general, the manufacturing yield is much lower compared to the established conventional manufacturing methods based on lithography. Thus, the focus of this contribution is set on a comprehensive analysis of defective TFTs printed by inkjet technology. Based on root cause analysis, we present the defects by developing failure categories and discuss the reasons for the defects. This procedure identifies failure origins and allows the optimization of the manufacturing process, finally resulting in a yield improvement. PMID:27649784

  18. Independent Orbiter Assessment (IOA): Analysis of the electrical power distribution and control/electrical power generation subsystem

    NASA Technical Reports Server (NTRS)

    Patton, Jeff A.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C)/Electrical Power Generation (EPG) hardware. The EPD and C/EPG hardware is required for performing critical functions of cryogenic reactant storage, electrical power generation and product water distribution in the Orbiter. Specifically, the EPD and C/EPG hardware consists of the following components: Power Section Assembly (PSA); Reactant Control Subsystem (RCS); Thermal Control Subsystem (TCS); Water Removal Subsystem (WRS); and Power Reactant Storage and Distribution System (PRSDS). The IOA analysis process utilized available EPD and C/EPG hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  19. Simulation Assisted Risk Assessment Applied to Launch Vehicle Conceptual Design

    NASA Technical Reports Server (NTRS)

    Mathias, Donovan L.; Go, Susie; Gee, Ken; Lawrence, Scott

    2008-01-01

    A simulation-based risk assessment approach is presented and is applied to the analysis of abort during the ascent phase of a space exploration mission. The approach utilizes groupings of launch vehicle failures, referred to as failure bins, which are mapped to corresponding failure environments. Physical models are used to characterize the failure environments in terms of the risk due to blast overpressure, resulting debris field, and the thermal radiation due to a fireball. The resulting risk to the crew is dynamically modeled by combining the likelihood of each failure, the severity of the failure environments as a function of initiator and time of the failure, the robustness of the crew module, and the warning time available due to early detection. The approach is shown to support the launch vehicle design process by characterizing the risk drivers and identifying regions where failure detection would significantly reduce the risk to the crew.
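
    The aggregation the abstract describes amounts to summing, over failure bins, the bin's likelihood times the conditional probability that its failure environment defeats the abort. A hedged sketch with entirely hypothetical bins and numbers:

```python
# Hypothetical failure bins: (probability of bin occurring,
# P(abort fails | bin's failure environment, warning time)).
bins = {
    "explosive overpressure": (1e-3, 0.30),
    "debris field":           (5e-4, 0.10),
    "fireball radiation":     (2e-4, 0.05),
}

# Total risk to the crew is the probability-weighted sum over the bins.
p_loc = sum(p_bin * p_fail for p_bin, p_fail in bins.values())
print(f"P(loss of crew) = {p_loc:.2e}")  # prints P(loss of crew) = 3.60e-04
```

In the actual approach the conditional terms come from physical models of blast, debris and thermal environments and depend on the failure time and warning time; here they are fixed constants for illustration.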

  20. A Study of Specific Fracture Energy at Percussion Drilling

    NASA Astrophysics Data System (ADS)

    Shadrina, A.; Kabanova, T.; Krets, V.; Saruev, L.

    2014-08-01

    The paper presents experimental studies of rock failure produced by percussion drilling. Quantitative and qualitative analyses were carried out to estimate critical values of rock failure depending on the hammer pre-impact velocity, type of drill bit, cylindrical hammer parameters (weight, length, diameter), and turn angle of the drill bit. The data obtained in this work were compared with results obtained by other researchers. The particle-size distribution in granite-cutting sludge was also analyzed. A statistical approach (Spearman's rank-order correlation, multiple regression analysis with dummy variables, the Kruskal-Wallis nonparametric test) was used to analyze the drilling process. The experimental data will be useful for specialists engaged in the simulation and illustration of rock failure.

  1. Life Cost Based FMEA Manual: A Step by Step Guide to Carrying Out a Cost-based Failure Modes and Effects Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhee, Seung; Spencer, Cherrill; /Stanford U. /SLAC

    2009-01-23

    Failure occurs when one or more of the intended functions of a product are no longer fulfilled to the customer's satisfaction. The most critical product failures are those that escape design reviews and in-house quality inspection and are found by the customer. The product may work for a while until its performance degrades to an unacceptable level, or it may not have worked even before the customer took possession of the product. The end results of failures that may lead to unsafe conditions or major losses of the main function are rated high in severity. Failure Modes and Effects Analysis (FMEA) is a tool widely used in the automotive, aerospace, and electronics industries to identify, prioritize, and eliminate known potential failures, problems, and errors from systems under design, before the product is released (Stamatis, 1997). Several industrial FMEA standards, such as those published by the Society of Automotive Engineers, US Department of Defense, and the Automotive Industry Action Group, employ the Risk Priority Number (RPN) to measure risk and severity of failures. The RPN is a product of 3 indices: Occurrence (O), Severity (S), and Detection (D). In a traditional FMEA process, design engineers typically analyze the 'root cause' and 'end-effects' of potential failures in a sub-system or component and assign penalty points through the O, S, D values to each failure. The analysis is organized around categories called failure modes, which link the causes and effects of failures. A few actions are taken upon completing the FMEA worksheet. The RPN column generally identifies the high-risk areas. The idea of performing FMEA is to eliminate or reduce known and potential failures before they reach the customers. Thus, a plan of action must be in place for the next task. Not all failures can be resolved during the product development cycle; thus, prioritization of actions must be made within the design group.
One definition of detection difficulty (D) is how well the organization controls the development process. Another definition relates to the detectability of a particular failure in the product when it is in the hands of the customer. The former asks, 'What is the chance of catching the problem before we give it to the customer?' The latter asks, 'What is the chance of the customer catching the problem before the problem results in a catastrophic failure?' (Palady, 1995) These differing definitions confuse FMEA users when one tries to determine detection difficulty. Are we trying to measure how easy it is to detect where a failure has occurred or when it has occurred? Or are we trying to measure how easy or difficult it is to prevent failures? Ordinal scale variables are used to rank-order items such as hotels, restaurants, and movies (note that a 4-star hotel is not necessarily twice as good as a 2-star hotel). Ordinal values preserve rank in a group of items, but the distance between the values cannot be measured since a distance function does not exist. Thus, the product or sum of ordinal variables loses its rank since each parameter has a different scale. Since the RPN is a product of 3 independent ordinal variables, it can indicate that some failure types are 'worse' than others, but it gives no quantitative indication of their relative effects.
Thus, failure cost can be estimated in its simplest form as: Expected Failure Cost = Σ(i=1 to n) p_i·c_i, where p_i is the probability of a particular failure occurring, c_i is the monetary cost associated with that failure, and n is the total number of failure scenarios. FMEA is most effective when all concerned disciplines of the product development team contribute to it. However, FMEA is a long process; it can become tedious and will not be effective if too many people participate. An ideal team has 3 to 4 people, drawn from the design, manufacturing, and service departments if possible. Depending on how complex the system is, the entire process can take anywhere from one to four weeks of full-time work. It is therefore important to agree on the time commitment before starting the analysis; otherwise, anxious managers might stop the procedure before it is completed.
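As a rough sketch, the two risk measures discussed in this record can be put side by side in a few lines of Python; the scenario probabilities, costs, and O/S/D ratings below are invented for illustration, not data from the text:

```python
# Life Cost-Based FMEA: expected failure cost as a sum of
# probability-weighted monetary costs (hypothetical scenario data).
def expected_failure_cost(scenarios):
    """scenarios: list of (probability, cost) pairs, one per failure scenario."""
    return sum(p * c for p, c in scenarios)

# Traditional RPN for comparison: a product of three ordinal indices,
# which (as the record notes) carries no quantitative meaning.
def rpn(occurrence, severity, detection):
    return occurrence * severity * detection

scenarios = [(0.5, 200.0), (0.25, 400.0), (0.125, 800.0)]
print(expected_failure_cost(scenarios))  # 100 + 100 + 100 = 300.0
print(rpn(3, 8, 5))  # 120
```

The cost-based figure is directly comparable across failure modes in monetary units, while the RPN only rank-orders them.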

  2. Independent Orbiter Assessment (IOA): Analysis of the body flap subsystem

    NASA Technical Reports Server (NTRS)

    Wilson, R. E.; Riccio, J. R.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Body Flap (BF) subsystem hardware are documented. The BF is a large aerosurface located at the trailing edge of the lower aft fuselage of the Orbiter. The proper function of the BF is essential during the dynamic flight phases of ascent and entry. During the ascent phase of flight, the BF trails in a fixed position. For entry, the BF provides elevon load relief, trim control, and acts as a heat shield for the main engines. Specifically, the BF hardware comprises the following components: Power Drive Unit (PDU), rotary actuators, and torque tubes. The IOA analysis process utilized available BF hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 35 failure modes analyzed, 19 were determined to be PCIs.

  3. Semiparametric regression analysis of failure time data with dependent interval censoring.

    PubMed

    Chen, Chyong-Mei; Shen, Pao-Sheng

    2017-09-20

    Interval-censored failure-time data arise when subjects are examined or observed periodically, such that the failure time of interest is not observed exactly but is only known to be bracketed between two adjacent observation times. The commonly used approaches assume that the examination times and the failure time are independent or conditionally independent given covariates. In many practical applications, patients who are already in poor health or have a weak immune system before treatment usually tend to visit physicians more often after treatment than those with better health or immune systems. In this situation, the visiting rate is positively correlated with the risk of failure due to health status, which results in dependent interval-censored data. While some measurable factors affecting health status, such as age, gender, and physical symptoms, can be included in the covariates, some health-related latent variables cannot be observed or measured. To deal with dependent interval censoring involving an unobserved latent variable, we characterize the visiting/examination process as a recurrent event process and propose a joint frailty model to account for the association between the failure time and the visiting process. A shared gamma frailty is incorporated into the Cox model and the proportional intensity model for the failure time and visiting process, respectively, in a multiplicative way. We propose a semiparametric maximum likelihood approach for estimating model parameters and show the asymptotic properties, including consistency and weak convergence. Extensive simulation studies are conducted, and a data set of bladder cancer is analyzed for illustrative purposes. Copyright © 2017 John Wiley & Sons, Ltd.

  4. Failure mode and effects analysis: a comparison of two common risk prioritisation methods.

    PubMed

    McElroy, Lisa M; Khorzad, Rebeca; Nannicelli, Anna P; Brown, Alexandra R; Ladner, Daniela P; Holl, Jane L

    2016-05-01

    Failure mode and effects analysis (FMEA) is a method of risk assessment increasingly used in healthcare over the past decade. The traditional method, however, can require substantial time and training resources. The goal of this study is to compare a simplified scoring method with the traditional scoring method to determine the degree of congruence in identifying high-risk failures. An FMEA of the operating room (OR) to intensive care unit (ICU) handoff was conducted. Failures were scored and ranked using both the traditional risk priority number (RPN) and criticality-based method, and a simplified method, which designates failures as 'high', 'medium' or 'low' risk. The degree of congruence was determined by first identifying those failures determined to be critical by the traditional method (RPN≥300), and then calculating the per cent congruence with those failures designated critical by the simplified methods (high risk). In total, 79 process failures among 37 individual steps in the OR to ICU handoff process were identified. The traditional method yielded Criticality Indices (CIs) ranging from 18 to 72 and RPNs ranging from 80 to 504. The simplified method ranked 11 failures as 'low risk', 30 as medium risk and 22 as high risk. The traditional method yielded 24 failures with an RPN ≥300, of which 22 were identified as high risk by the simplified method (92% agreement). The top 20% of CI (≥60) included 12 failures, of which six were designated as high risk by the simplified method (50% agreement). These results suggest that the simplified method of scoring and ranking failures identified by an FMEA can be a useful tool for healthcare organisations with limited access to FMEA expertise. However, the simplified method does not result in the same degree of discrimination in the ranking of failures offered by the traditional method. Published by the BMJ Publishing Group Limited. 
For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
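The per cent congruence calculation described in this record (failures critical under the traditional RPN ≥ 300 cutoff, checked against the simplified 'high' designation) can be sketched as follows; the failure scores and risk labels below are invented, not the study's data:

```python
# Per cent congruence between the traditional (RPN >= cutoff) ranking
# and a simplified high/medium/low designation, as in the comparison
# described above. Failure data are invented for illustration only.
def percent_congruence(failures, rpn_cutoff=300):
    critical = [f for f in failures if f["rpn"] >= rpn_cutoff]
    if not critical:
        return 0.0
    agree = sum(1 for f in critical if f["simple"] == "high")
    return 100.0 * agree / len(critical)

failures = [
    {"rpn": 504, "simple": "high"},
    {"rpn": 360, "simple": "high"},
    {"rpn": 320, "simple": "medium"},  # a disagreement between methods
    {"rpn": 80,  "simple": "low"},
]
print(percent_congruence(failures))  # 2 of 3 critical failures agree
```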

  5. Independent Orbiter Assessment (IOA): FMEA/CIL assessment

    NASA Technical Reports Server (NTRS)

    Saiidi, Mo J.; Swain, L. J.; Compton, J. M.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. Direction was given by the Orbiter and GFE Projects Office to perform the hardware analysis and assessment using the instructions and ground rules defined in NSTS 22206. The IOA analysis features a top-down approach to determine hardware failure modes, criticality, and potential critical items. To preserve independence, the analysis was accomplished without reliance upon the results contained within the NASA and prime contractor FMEA/CIL documentation. The assessment process compares the independently derived failure modes and criticality assignments to the proposed NASA Post 51-L FMEA/CIL documentation. When possible, assessment issues are discussed and resolved with the NASA subsystem managers. The assessment results for each subsystem are summarized. The most important Orbiter assessment finding was the previously unknown stuck autopilot push-button criticality 1/1 failure mode, having a worst case effect of loss of crew/vehicle when a microwave landing system is not active.

  6. MO-E-9A-01: Risk Based Quality Management: TG100 In Action

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huq, M; Palta, J; Dunscombe, P

    2014-06-15

    One of the goals of quality management in radiation therapy is to gain high confidence that patients will receive the prescribed treatment correctly. To accomplish these goals, professional societies such as the American Association of Physicists in Medicine (AAPM) have published many quality assurance (QA), quality control (QC), and quality management (QM) guidance documents. In general, the recommendations provided in these documents have emphasized performing device-specific QA at the expense of process flow and protection of the patient against catastrophic errors. Analyses of radiation therapy incidents find that they are more often caused by flaws in the overall therapy process, from initial consult through final treatment, than by isolated hardware or computer failures detectable by traditional physics QA. This challenge is shared by many intrinsically hazardous industries. Risk assessment tools and analysis techniques have been developed to define, identify, and eliminate known and/or potential failures, problems, or errors from a system, process, and/or service before they reach the customer. These include, but are not limited to, process mapping, failure modes and effects analysis (FMEA), fault tree analysis (FTA), and establishment of a quality management program that best avoids the faults and risks identified in the overall process. These tools can be readily adapted to radiation therapy practices because of their simplicity and effectiveness in providing efficient ways to enhance the safety and quality of treatment processes. Task Group 100 (TG100) of the AAPM has developed a risk-based quality management program that uses these tools. This session will be devoted to a discussion of these tools and how they can be used in a given radiotherapy clinic to develop a risk-based QM program. Learning Objectives: Learn how to design a process map for a radiotherapy process. Learn how to perform an FMEA analysis for a given process. Learn what fault tree analysis is all about. Learn how to design a quality management program based upon the information obtained from process mapping, FMEA, and FTA.

  7. Operation reliability analysis of independent power plants of gas-transmission system distant production facilities

    NASA Astrophysics Data System (ADS)

    Piskunov, Maksim V.; Voytkov, Ivan S.; Vysokomornaya, Olga V.; Vysokomorny, Vladimir S.

    2015-01-01

    A new approach was developed to analyze the causes of failure in the operation of independent power supply sources (mini-CHP plants) at remote linear facilities of the gas-transmission system in the eastern part of Russia. The conditions that trigger the ceiling working-fluid temperature at the condenser outlet were determined using mathematical simulation of unsteady heat and mass transfer processes in the condenser of the mini-CHP plants; under these conditions, the failure probability of the independent power supply sources increases. The influence of environmental factors (in particular, ambient temperature) and of the plant's output electric capacity on mini-CHP plant operating reliability was analyzed. Values of mean time to failure and failure density for power plants operating in different regions of Eastern Siberia and the Russian Far East were obtained using numerical simulation of heat and mass transfer during working-fluid condensation.

  8. Analysis and design of ion-implanted bubble memory devices

    NASA Astrophysics Data System (ADS)

    Wullert, J. R., II; Kryder, M. H.

    1987-04-01

    4-μm period ion-implanted contiguous disk bubble memory circuits, designed and fabricated at AT&T Bell Laboratories, Murray Hill, NJ, have been investigated. Quasistatic testing has provided information about both the operational bias field ranges and the exact failure modes. A variety of major loop layouts were investigated and two turns found to severely limit bias field margins are discussed. The generation process, using a hairpin nucleator, was tested and several interesting failure modes were uncovered. Propagation on four different minor loop paths was observed and each was found to have characteristic failure modes. The transfer processes, both into and out of the minor loops, were investigated at higher frequencies to avoid local heating due to long transfer pulses at low frequencies. Again specific failure modes were identified. Overall bias margins for the chip were 9% at 50 Oe drive field and were limited by transfer-in.

  9. Utility of Failure Mode and Effect Analysis to Improve Safety in Suctioning by Orotracheal Tube.

    PubMed

    Vázquez-Valencia, Agustín; Santiago-Sáez, Andrés; Perea-Pérez, Bernardo; Labajo-González, Elena; Albarrán-Juan, Maria Elena

    2017-02-01

    The objective of the study was to use the Failure Mode and Effect Analysis (FMEA) tool to analyze the technique of secretion suctioning on patients with an endotracheal tube admitted to an intensive care unit. Brainstorming was carried out within the service to determine the most frequent potential errors in the process. The FMEA was then applied through its stages, prioritizing risk in accordance with the risk prioritization number (RPN) and selecting improvement actions for failure modes with an RPN above 300. We obtained 32 failure modes, of which 13 surpassed an RPN of 300; 21 improvement actions were then proposed for those failure modes. FMEA allows us to ascertain possible failures and propose improvement actions for those exceeding the chosen RPN threshold. Copyright © 2016 American Society of PeriAnesthesia Nurses. Published by Elsevier Inc. All rights reserved.

  10. The integration methods of fuzzy fault mode and effect analysis and fault tree analysis for risk analysis of yogurt production

    NASA Astrophysics Data System (ADS)

    Aprilia, Ayu Rizky; Santoso, Imam; Ekasari, Dhita Murita

    2017-05-01

    Yogurt is a milk-based product with beneficial effects for health. The process for the production of yogurt is very susceptible to failure because it involves bacteria and fermentation. For an industry, the risks may cause harm and have a negative impact. For a product to be successful and profitable, the risks that may occur during the production process must be analyzed. Risk analysis can identify the risks in detail, prevent them, and determine their handling, so that the risks can be minimized. Therefore, this study analyzes the risks of the production process with a case study in CV.XYZ. The methods used in this research are Fuzzy Failure Mode and Effect Analysis (fuzzy FMEA) and Fault Tree Analysis (FTA). The results showed that there are 6 risks across equipment, raw material, and process variables. These include the critical risk of a non-aseptic process, specifically starter yogurt damaged by contamination with fungus or other bacteria, and inadequate sanitation of equipment. The quantitative FTA showed that the highest probability is that of a non-aseptic process, with a risk of 3.902%. The recommendations for improvement include establishing SOPs (Standard Operating Procedures) covering the process, workers, and environment, controlling the yogurt starter, and improving production planning and equipment sanitation using hot-water immersion.

  11. Innovative approach to improving the care of acute decompensated heart failure.

    PubMed

    Merhaut, Shawn; Trupp, Robin

    2011-06-01

    The care of patients presenting to hospitals with acute decompensated heart failure remains a challenging and multifaceted dilemma across the continuum of care. The combination of improved survival rates for and rising incidence of heart failure has created both a clinical and economic burden for hospitals of epidemic proportion. With limited clinical resources, hospitals are expected to provide efficient, comprehensive, and quality care to a population laden with multiple comorbidities and social constraints. Further, this care must be provided in the setting of a volatile economic climate heavily affected by prolonged length of stays, high readmission rates, and changing healthcare policy. Although problems continue to mount, solutions remain scarce. In an effort to help hospitals identify gaps in care, control costs, streamline processes, and ultimately improve outcomes for these patients, the Society of Chest Pain Centers launched Heart Failure Accreditation in July 2009. Rooted in process improvement science, the Society's approach includes utilization of a tiered Accreditation tool to identify best practices, facilitate an internal gap analysis, and generate opportunities for improvement. In contrast to other organizations that require compliance with predetermined specifications, the Society's Heart Failure Accreditation focuses on the overall process including the continuum of care from emergency medical services, emergency department care, inpatient management, transition from hospital to home, and community outreach. As partners in the process, the Society strives to build relationships with facilities and share best practices with the ultimate goal to improve outcomes for heart failure patients.

  12. International Space Station Powered Bolt Nut Anomaly and Failure Analysis Summary

    NASA Technical Reports Server (NTRS)

    Sievers, Daniel E.; Warden, Harry K.

    2010-01-01

    A key mechanism used in the on-orbit assembly of the International Space Station (ISS) pressurized elements is the Common Berthing Mechanism. The mechanism that effects the structural connection of the Common Berthing Mechanism halves is the Powered Bolt Assembly. There are sixteen Powered Bolt Assemblies per Common Berthing Mechanism. The Common Berthing Mechanism has a bolt which engages a self aligning Powered Bolt Nut (PBN) on the mating interface (Figure 1). The Powered Bolt Assemblies are preloaded to approximately 84.5 kN (19000 lb) prior to pressurization of the CBM. The PBNs mentioned below, manufactured in 2009, will be used on ISS future missions. An on orbit functional failure of this hardware would be unacceptable and in some instances catastrophic due to the failure of modules to mate and seal the atmosphere, risking loss of crew and ISS functions. The manufacturing processes that create the PBNs need to be strictly controlled. Functional (torque vs. tension) acceptance test failures will be the result of processes not being strictly followed. Without the proper knowledge of thread tolerances, fabrication techniques, and dry film lubricant application processes, PBNs will be, and have been manufactured improperly. The knowledge gained from acceptance test failures and the resolution of those failures, thread fabrication techniques and thread dry film lubrication processes can be applied to many aerospace mechanisms to enhance their performance. Test data and manufactured PBN thread geometry will be discussed for both failed and successfully accepted PBNs.

  14. Development of Decision Making in School-Aged Children and Adolescents: Evidence from Heart Rate and Skin Conductance Analysis

    ERIC Educational Resources Information Center

    Crone, Eveline A.; van der Molen, Maurits W.

    2007-01-01

    Age differences in decision making indicate that children fail to anticipate outcomes of their decisions. Using heart rate and skin conductance analyses, we tested whether developmental changes in decision making are associated with (a) a failure to process outcomes of decisions, or (b) a failure to anticipate future outcomes of decisions.…

  15. The analysis of the pilot's cognitive and decision processes

    NASA Technical Reports Server (NTRS)

    Curry, R. E.

    1975-01-01

    Articles are presented on pilot performance in zero-visibility precision approach, failure detection by pilots during automatic landing, experiments in pilot decision-making during simulated low visibility approaches, a multinomial maximum likelihood program, and a random search algorithm for laboratory computers. Other topics discussed include detection of system failures in multi-axis tasks and changes in pilot workload during an instrument landing.

  16. The role of failure modes and effects analysis in showing the benefits of automation in the blood bank.

    PubMed

    Han, Tae Hee; Kim, Moon Jung; Kim, Shinyoung; Kim, Hyun Ok; Lee, Mi Ae; Choi, Ji Seon; Hur, Mina; St John, Andrew

    2013-05-01

    Failure modes and effects analysis (FMEA) is a risk management tool used by the manufacturing industry but now being applied in laboratories. Teams from six South Korean blood banks used this tool to map their manual and automated blood grouping processes and determine the risk priority numbers (RPNs) as a total measure of error risk. The RPNs determined by each of the teams consistently showed that the use of automation dramatically reduced the RPN compared to manual processes. In addition, FMEA showed where the major risks occur in each of the manual processes and where attention should be prioritized to improve the process. Despite no previous experience with FMEA, the teams found the technique relatively easy to use and the subjectivity associated with assigning risk numbers did not affect the validity of the data. FMEA should become a routine technique for improving processes in laboratories. © 2012 American Association of Blood Banks.

  17. Statistical analysis of field data for aircraft warranties

    NASA Astrophysics Data System (ADS)

    Lakey, Mary J.

    Air Force and Navy maintenance data collection systems were researched to determine their scientific applicability to the warranty process. New and unique algorithms were developed to extract failure distributions, which were then used to characterize how selected families of equipment typically fail. Families of similar equipment were identified in terms of function, technology, and failure patterns. Statistical analyses and applications such as goodness-of-fit tests, maximum likelihood estimation, and derivation of confidence intervals for the probability density function parameters were applied to characterize the distributions and their failure patterns. Statistical and reliability theory, with relevance to equipment design and operational failures, were also determining factors in characterizing the failure patterns of the equipment families. Inferences about the families with relevance to warranty needs were then made.
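As a minimal sketch of the kind of maximum likelihood fitting and confidence-interval derivation mentioned above, assuming the simplest exponential time-to-failure model (the MLE has a closed form; the failure times below are invented, not the study's data):

```python
import math

# MLE for an exponential time-to-failure model: lambda_hat = n / sum(t_i).
# A normal-approximation confidence interval for lambda uses the asymptotic
# standard error se = lambda_hat / sqrt(n). Failure times are invented.
def fit_exponential(times, z=1.96):
    n = len(times)
    lam = n / sum(times)
    se = lam / math.sqrt(n)
    return lam, (lam - z * se, lam + z * se)

times = [120.0, 340.0, 95.0, 410.0, 260.0, 180.0]  # hypothetical hours to failure
lam, (lo, hi) = fit_exponential(times)
print(f"lambda = {lam:.5f} per hour, 95% CI ({lo:.5f}, {hi:.5f})")
```

A goodness-of-fit test would then check whether the exponential assumption is tenable before making warranty inferences from the fit.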

  18. Probabilistic Design Analysis (PDA) Approach to Determine the Probability of Cross-System Failures for a Space Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Shih, Ann T.; Lo, Yunnhon; Ward, Natalie C.

    2010-01-01

    Quantifying the probability of significant launch vehicle failure scenarios for a given design, while still in the design process, is critical to mission success and to the safety of the astronauts. Probabilistic risk assessment (PRA) is chosen from many system safety and reliability tools to verify the loss of mission (LOM) and loss of crew (LOC) requirements set by the NASA Program Office. To support the integrated vehicle PRA, probabilistic design analysis (PDA) models are developed by using vehicle design and operation data to better quantify failure probabilities and to better understand the characteristics of a failure and its outcome. This PDA approach uses a physics-based model to describe the system behavior and response for a given failure scenario. Each driving parameter in the model is treated as a random variable with a distribution function. Monte Carlo simulation is used to perform probabilistic calculations to statistically obtain the failure probability. Sensitivity analyses are performed to show how input parameters affect the predicted failure probability, providing insight for potential design improvements to mitigate the risk. The paper discusses the application of the PDA approach in determining the probability of failure for two scenarios from the NASA Ares I project.
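The Monte Carlo step of the PDA approach can be sketched generically; the placeholder 'physics model' and parameter distributions below are illustrative assumptions, not the Ares I models:

```python
import random

# Monte Carlo estimate of failure probability: sample each driving
# parameter from its distribution, evaluate the (placeholder) physics
# model, and count cases where the response exceeds a failure threshold.
def mc_failure_probability(model, samplers, threshold, n=100_000, seed=1):
    rng = random.Random(seed)  # seeded for reproducibility
    failures = sum(
        1 for _ in range(n)
        if model(*(s(rng) for s in samplers)) > threshold
    )
    return failures / n

# Placeholder model: structural load as the product of a pressure and
# an area factor, each with normally distributed uncertainty (invented).
load = lambda p, a: p * a
samplers = [
    lambda rng: rng.gauss(100.0, 10.0),  # pressure
    lambda rng: rng.gauss(1.0, 0.05),    # area factor
]
p_fail = mc_failure_probability(load, samplers, threshold=130.0)
print(f"estimated P(failure) = {p_fail:.4f}")
```

A sensitivity analysis in this setting would re-run the estimate while widening or shifting one sampler at a time to see which parameter drives the failure probability.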

  19. Failure probability analysis of optical grid

    NASA Astrophysics Data System (ADS)

    Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2008-11-01

    Optical grid, an integrated computing environment based on optical networks, is expected to be an efficient infrastructure for supporting advanced data-intensive grid applications. In optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. As optical-network-based distributed computing systems are extensively applied to data processing, application failure probability has become an important indicator of application quality and an important aspect that operators consider. This paper presents a task-based method for analyzing application failure probability in optical grid. The failure probability of the entire application can then be quantified, and the performance of different backup strategies in reducing application failure probability can be compared, so that the different requirements of different clients can be satisfied. In optical grid, when an application based on a DAG (directed acyclic graph) is executed under different backup strategies, the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm can guarantee the failure probability requirement and improve network resource utilization, realizing a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in optical grid.
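Assuming independent task failures, the application-level failure probability discussed above reduces to one minus the product of per-task success probabilities. The task failure probabilities below, and the assumption that a backup resource is as reliable as the primary, are illustrative only:

```python
# Application failure probability for tasks that must all succeed,
# assuming independent failures. With a backup, a task fails only if
# both the primary and the (equally reliable) backup resource fail.
def app_failure_probability(task_fail_probs, backed_up=()):
    p_success = 1.0
    for i, p in enumerate(task_fail_probs):
        p_task_fail = p * p if i in backed_up else p
        p_success *= (1.0 - p_task_fail)
    return 1.0 - p_success

tasks = [0.01, 0.02, 0.005]                          # invented per-task probabilities
print(app_failure_probability(tasks))                # no backup strategy
print(app_failure_probability(tasks, backed_up={1})) # back up the riskiest task
```

Comparing the two figures shows how a backup strategy trades extra resource usage for a lower application failure probability, which is the compromise the scheduling algorithm negotiates.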

  20. Using pattern analysis methods to do fast detection of manufacturing pattern failures

    NASA Astrophysics Data System (ADS)

    Zhao, Evan; Wang, Jessie; Sun, Mason; Wang, Jeff; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh; Ding, Hua

    2016-03-01

    At advanced technology nodes, logic design has become extremely complex and grows more challenging as pattern geometry sizes decrease. The small sizes of layout patterns are becoming very sensitive to process variations. Meanwhile, the high pressure of yield ramp is always there due to time-to-market competition. The company that achieves patterning maturity earlier than others will have a great advantage and a better chance to realize maximum profit margins. For debugging silicon failures, DFT diagnostics can identify which nets or cells caused the yield loss, but a long time and many resources are normally needed to identify which failures are due to one common layout pattern or structure. This paper presents a new yield diagnostic flow, based on preliminary EFA results, to show how pattern analysis can more efficiently detect pattern-related systematic defects. Increased visibility into design-pattern-related failures also allows more precise yield loss estimation.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheong, S-K; Kim, J

    Purpose: The aim of this study is the application of a Failure Modes and Effects Analysis (FMEA) to assess the risks for patients undergoing Low Dose Rate (LDR) Prostate Brachytherapy treatment. Methods: FMEA was applied to identify all the sub-processes involved in the stages of patient identification, source handling, treatment preparation, treatment delivery, and post-treatment. These processes characterize the radiation treatment associated with LDR Prostate Brachytherapy. The potential failure modes, together with their causes and effects, were identified and ranked in order of importance. Three indexes were assigned to each failure mode: the occurrence rating (O), the severity rating (S), and the detection rating (D). A ten-point scale was used to score each category, with ten indicating the most severe, most frequent, and least detectable failure mode, respectively. The risk probability number (RPN) was calculated as the product of the three attributes: RPN = O × S × D. The analysis was carried out by a working group (WG) at UPMC. Results: A total of 56 failure modes were identified, including 32 modes before treatment, 13 modes during treatment, and 11 modes after treatment. In addition to the protocols already adopted in clinical practice, prioritized risk management will be implemented for the high-risk procedures on the basis of RPN score. Conclusion: The effectiveness of the FMEA method was established. The FMEA methodology provides a structured and detailed assessment method for risk analysis of the LDR Prostate Brachytherapy procedure and can be applied to other radiation treatment modes.

  2. Reliability analysis of different structure parameters of PCBA under drop impact

    NASA Astrophysics Data System (ADS)

    Liu, P. S.; Fan, G. M.; Liu, Y. H.

    2018-03-01

    The finite element model of the PCBA is established using the analysis software ABAQUS. First, the Input-G method and fatigue life under drop impact are introduced, and the mechanism of solder joint failure during drop is analysed. The main cause of solder joint failure is that the PCB assembly suffers repeated tensile and compressive stress during the drop impact. Finally, the equivalent stress and peel stress of different solder joints and board-level components under different impact accelerations are analysed. The results show that the reliability of tin-silver-copper solder joints is better than that of tin-lead solder joints, and that the expected fatigue life of the solder joints decreases as the impact pulse amplitude increases.

  3. Damage and Failure Analysis of AZ31 Alloy Sheet in Warm Stamping Processes

    NASA Astrophysics Data System (ADS)

    Zhao, P. J.; Chen, Z. H.; Dong, C. F.

    2016-07-01

    In this study, a combined experimental-numerical investigation of the failure of AZ31 Mg alloy sheet in the warm stamping process was carried out, based on a modified GTN damage model that integrates the Yld2000 anisotropic yield criterion. The constitutive equations of the material were implemented in a VUMAT subroutine for the ABAQUS/Explicit solver and applied to the formability analysis of a mobile phone shell. The morphology near the crack area was observed using SEM, and the anisotropic damage evolution at various temperatures was simulated. The distributions of plastic strain, damage evolution, thickness, and fracture initiation obtained from the FE simulation were analyzed. The corresponding forming limit diagrams were worked out, and comparison with the experimental data showed good agreement.

  4. Mesoscale analysis of failure in quasi-brittle materials: comparison between lattice model and acoustic emission data.

    PubMed

    Grégoire, David; Verdon, Laura; Lefort, Vincent; Grassl, Peter; Saliba, Jacqueline; Regoin, Jean-Pierre; Loukili, Ahmed; Pijaudier-Cabot, Gilles

    2015-10-25

    The purpose of this paper is to analyse the development and the evolution of the fracture process zone during fracture and damage in quasi-brittle materials. A model taking into account the material details at the mesoscale is used to describe the failure process at the scale of the heterogeneities. This model is used to compute histograms of the relative distances between damaged points. These numerical results are compared with experimental data, where the damage evolution is monitored using acoustic emissions. Histograms of the relative distances between damage events in the numerical calculations and acoustic events in the experiments exhibit good agreement. It is shown that the mesoscale model provides relevant information from the point of view of both global responses and the local failure process. © 2015 The Authors. International Journal for Numerical and Analytical Methods in Geomechanics published by John Wiley & Sons Ltd.

  5. Composite laminate failure parameter optimization through four-point flexure experimentation and analysis

    DOE PAGES

    Nelson, Stacy; English, Shawn; Briggs, Timothy

    2016-05-06

    Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as of methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be defined through estimation. The objectives of this paper are to demonstrate the potential value of sensitivity and uncertainty quantification techniques in the failure analysis of loaded composite structures; the proposed methods are applied to the simulation of the four-point flexural characterization of a carbon fiber composite material. Utilizing a recently implemented, phenomenological orthotropic material model capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. A parameter study is then completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior, as well as to determine the value of an unknown material property. This process demonstrates the ability to formulate accurate predictions in the absence of a rigorous material characterization effort. Finally, the presented results indicate that a sensitivity analysis and parameter study can be used to streamline the material definition process, as the described flexural characterization was used for model validation.
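    As a rough illustration of the parameter screening described above, the sketch below perturbs each input of a toy model by ±10% and ranks the inputs by their effect on the output; the beam-deflection model and all parameter values are invented placeholders, not the paper's composite damage model:

```python
# One-at-a-time sensitivity screening sketch: perturb each parameter
# by +/-10% and rank parameters by the normalized swing of a scalar
# model output. The toy beam-deflection model and baseline values are
# illustrative assumptions only.

def model(E=70e9, thickness=0.004, span=0.3, load=500.0, width=0.025):
    """Midspan deflection of a simply supported rectangular beam."""
    inertia = width * thickness ** 3 / 12.0
    return load * span ** 3 / (48.0 * E * inertia)

baseline = model()
params = {"E": 70e9, "thickness": 0.004, "span": 0.3,
          "load": 500.0, "width": 0.025}

sensitivity = {}
for name, value in params.items():
    hi = model(**{**params, name: value * 1.1})
    lo = model(**{**params, name: value * 0.9})
    sensitivity[name] = abs(hi - lo) / baseline  # normalized output swing

for name, s in sorted(sensitivity.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {s:.3f}")
```

    Parameters with a small swing can be frozen at nominal values, so the expensive characterization effort concentrates on the few that actually drive the simulated response.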

  6. Robust detection, isolation and accommodation for sensor failures

    NASA Technical Reports Server (NTRS)

    Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.

    1986-01-01

    The objective is to extend the recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and to estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and the process inputs and outputs, are used to generate these innovations. Thresholds used for failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed. It represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by the introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping into the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies, which used thresholds that were selected empirically. Comparison of the two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method over previous techniques.

  7. Basic failure mechanisms in advanced composites

    NASA Technical Reports Server (NTRS)

    Mullin, J. V.; Mazzio, V. F.; Mehan, R. L.

    1972-01-01

    Failure mechanisms in carbon-epoxy composites are identified as a basis for more reliable prediction of the performance of these materials. The approach involves both the study of local fracture events in model specimens containing small groups of filaments and fractographic examination of high fiber content engineering composites. Emphasis is placed on the correlation of model specimen observations with gross fracture modes. The effects of fiber surface treatment, resin modification and fiber content are studied and acoustic emission methods are applied. Some effort is devoted to analysis of the failure process in composite/metal specimens.

  8. Tools for developing a quality management program: proactive tools (process mapping, value stream mapping, fault tree analysis, and failure mode and effects analysis).

    PubMed

    Rath, Frank

    2008-01-01

    This article examines the concepts of quality management (QM) and quality assurance (QA), as well as the current state of QM and QA practices in radiotherapy. A systematic approach incorporating a series of industrial engineering-based tools is proposed, which can be applied in health care organizations proactively to improve process outcomes, reduce risk and/or improve patient safety, improve throughput, and reduce cost. This tool set includes process mapping and process flowcharting, failure modes and effects analysis (FMEA), value stream mapping, and fault tree analysis (FTA). Many health care organizations do not have experience in applying these tools and therefore do not understand how and when to use them. As a result, there are many misconceptions about how to use these tools, and they are often incorrectly applied. This article describes these industrial engineering-based tools, how to use them, when they should be used (and not used), and the intended purposes for their use. In addition, the strengths and weaknesses of each of these tools are described, and examples are given to demonstrate their application in health care settings.

  9. Mechanics and complications of reverse shoulder arthroplasty: morse taper failure analysis and prospective rectification

    NASA Astrophysics Data System (ADS)

    Hoskin, HLD; Furie, E.; Collins, W.; Ganey, TM; Schlatterer, DR

    2017-05-01

    Since Sir John Charnley began his monumental hip arthroplasty work in 1958, clinical researchers have been incrementally improving longevity and functionality of total joint systems, although implant failure occurs on occasion. The purpose of this study is to report the fracture of the humeral tray Morse taper of a reverse total shoulder system (RTSS), which to date has not been reported with metallurgic analysis for any RTSS. There was no reported antecedent fall, motor vehicle collision, or other traumatic event prior to implant fracture in this case. Analysis was performed on the retrieved failed implant by Scanning Electron Microscopy (SEM) and Energy Dispersive Spectroscopy (EDS) in an attempt to determine the failure mechanism, as well as to offer improvements for future implants. At the time of revision surgery, all explants were retained from the left shoulder of a 61-year-old male who had undergone a non-complicated RTSS 4 years prior. The explants, particularly the cracked humeral tray, were processed as required for SEM and EDS. Analysis was performed on the failure sites in order to determine the chemical composition of the different parts of the implant, discover the chemical composition of the filler metal used during the electron beam welding process, and detect any foreign elements that could suggest corrosion or other evidence of failure etiology. Gross visual inspection of all explants revealed that implant failure was a result of dissociation of the taper from the humeral tray at the weld, leaving the Morse taper embedded in the humeral stem while the tray floated freely in the patient's shoulder. SEM further confirmed the jagged edges noted grossly at the weld fracture site, both suggesting failure due to torsional forces. EDS detected elevated levels of carbon and oxygen at the fracture site on the taper only and not on the humeral tray.
In order to determine the origin of the high levels of C and O, it was considered that in titanium alloys, C and O are used as stabilizers that help raise the temperature at which titanium can be cast. Since the presence of stabilizers reduces ductility and fatigue strength, all interstitial elements are removed after casting. Considering this, the presence of C and O suggests that not all of the interstitials were removed during the manufacturing process, resulting in decreased fatigue strength. Further destructive analytical testing would verify weld quality and failure mode. RTSSs are quite successful in select patients not amenable to traditional shoulder arthroplasty options. This case report highlights how an implant may function well for several years and then suddenly fail without warning. SEM and EDS analysis suggest that residual C and O in the taper lowered the metal implant’s integrity, leading to torsional cracking at the weld junction of the humeral tray and the taper. The elevated levels of C and O measured at fracture sites on both the tray and the taper suggest poor quality filler metal or failure to remove all interstitial elements after casting. In both cases, the results would be decreased fatigue strength and overall toughness, leading to mechanical failure. A manufacturer’s recall of all implants soon followed the reporting of this implant failure; subsequently, the metal materials were changed from Ti6Al4V to both titanium alloy and cobalt-chrome alloy (Co-Cr-Mo). Time will tell if the alterations were sufficient.

  10. Failure mode and effects analysis outputs: are they valid?

    PubMed Central

    2012-01-01

    Background Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Methods Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted:
    - Face validity: by comparing the FMEA participants' mapped processes with observational work.
    - Content validity: by presenting the FMEA findings to other healthcare professionals.
    - Criterion validity: by comparing the FMEA findings with data reported on the trust's incident report database.
    - Construct validity: by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number.
    Results Face validity was positive as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. Conclusion There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures.
As for FMEA’s methodology for scoring failures, there were discrepancies between the teams’ estimates and similar incidents reported on the trust’s incident database. Furthermore, the concept of multiplying ordinal scales to prioritise failures is mathematically flawed. Until FMEA’s validity is further explored, healthcare organisations should not solely depend on their FMEA results to prioritise patient safety issues. PMID:22682433
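    The ordinal-scale objection above can be illustrated with a small sketch (all scores are hypothetical, not the study's data): two failure modes with very different risk profiles can receive identical RPNs, and the same one-point change in an ordinal rating shifts the RPN by different amounts depending on the other factors.

```python
# Illustrates why multiplying ordinal scales (severity, probability,
# detectability) to form an RPN is mathematically questionable.
# All scores below are hypothetical.

def rpn(severity, probability, detectability):
    return severity * probability * detectability

# A rare but catastrophic failure vs. a frequent but minor one:
catastrophic = rpn(severity=10, probability=1, detectability=6)
minor        = rpn(severity=2,  probability=5, detectability=6)
print(catastrophic == minor)  # identical RPN despite very different risks

# The same one-point ordinal step changes the RPN by different amounts,
# even though ordinal ranks carry no ratio information:
print(rpn(5, 5, 5) - rpn(4, 5, 5))
print(rpn(5, 2, 2) - rpn(4, 2, 2))
```

    Multiplication treats the ratings as ratio-scale quantities, which ordinal ranks are not; this is the breach of scale properties the study refers to.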

  11. Independent Orbiter Assessment (IOA): Analysis of the electrical power distribution and control/remote manipulator system subsystem

    NASA Technical Reports Server (NTRS)

    Robinson, W. W.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the Electrical Power Distribution and Control (EPD and C)/Remote Manipulator System (RMS) hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained in the NASA FMEA/CIL documentation. This report documents the results of the independent analysis of the EPD and C/RMS (both port and starboard) hardware. The EPD and C/RMS subsystem hardware provides the electrical power and power control circuitry required to safely deploy, operate, control, and stow or guillotine and jettison two (one port and one starboard) RMSs. The EPD and C/RMS subsystem is subdivided into the following five functional divisions: Remote Manipulator Arm; Manipulator Deploy Control; Manipulator Latch Control; Manipulator Arm Shoulder Jettison; and Retention Arm Jettison. The IOA analysis process utilized available EPD and C/RMS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based on the severity of the effect for each failure mode.

  12. Fatigue failure of regenerator screens in a high frequency Stirling engine

    NASA Technical Reports Server (NTRS)

    Hull, David R.; Alger, Donald L.; Moore, Thomas J.; Scheuermann, Coulson M.

    1988-01-01

    Failure of Stirling Space Power Demonstrator Engine (SPDE) regenerator screens was investigated. After several hours of operation the SPDE was shut down for inspection and, on removing the regenerator screens, debris of unknown origin was discovered along with considerable cracking of the screens in localized areas. Metallurgical analysis of the debris determined it to be cracked-off, deformed pieces of the 41-micron-thick Type 304 stainless steel wire screen. Scanning electron microscopy of the cracked screens revealed failures occurring at wire crossovers and fatigue striations on the fracture surfaces of the wires. Thus, the screen failure can be characterized as a fatigue failure of the wires. The crossovers were found to have a 30 percent reduction in wire thickness and a highly worked microstructure resulting from the manufacturing process of the wire screens. It was later found that the reduction in wire thickness occurred because the screen fabricator had subjected the screen to a light cold-roll process after weaving. Installation of this screen left a clearance in the regenerator allowing the screens to move. The combined effects of the reduction in wire thickness, stress concentration (caused by screen movement), and the highly worked microstructure at the wire crossovers led to the fatigue failure of the screens.

  13. [Application of root cause analysis in healthcare].

    PubMed

    Hsu, Tsung-Fu

    2007-12-01

    The main purpose of this study was to explore various aspects of root cause analysis (RCA), including its definition, rationale concept, main objective, implementation procedures, most common analysis methodology (fault tree analysis, FTA), and advantages and methodologic limitations in regard to healthcare. Several adverse events that occurred at a certain hospital were also analyzed by the author using FTA as part of this study. RCA is a process employed to identify basic and contributing causal factors underlying performance variations associated with adverse events. The rationale concept of RCA offers a systemic approach to improving patient safety that does not assign blame or liability to individuals. The four-step process involved in conducting an RCA includes: RCA preparation, proximate cause identification, root cause identification, and recommendation generation and implementation. FTA is a logical, structured process that can help identify potential causes of system failure before actual failures occur. Some advantages and significant methodologic limitations of RCA were discussed. Finally, we emphasized that errors stem principally from faults attributable to system design, practice guidelines, work conditions, and other human factors, which induce health professionals to make negligence or mistakes with regard to healthcare. We must explore the root causes of medical errors to eliminate potential RCA system failure factors. Also, a systemic approach is needed to resolve medical errors and move beyond a current culture centered on assigning fault to individuals. In constructing a real environment of patient-centered safety healthcare, we can help encourage clients to accept state-of-the-art healthcare services.

  14. Reliability of pathogen control in direct potable reuse: Performance evaluation and QMRA of a full-scale 1 MGD advanced treatment train.

    PubMed

    Pecson, Brian M; Triolo, Sarah C; Olivieri, Simon; Chen, Elise C; Pisarenko, Aleksey N; Yang, Chao-Chun; Olivieri, Adam; Haas, Charles N; Trussell, R Shane; Trussell, R Rhodes

    2017-10-01

    To safely progress toward direct potable reuse (DPR), it is essential to ensure that DPR systems can provide public health protection equivalent to or greater than that of conventional drinking water sources. This study collected data over a one-year period from a full-scale DPR demonstration facility, and used both performance distribution functions (PDFs) and quantitative microbial risk assessment (QMRA) to define and evaluate the reliability of the advanced water treatment facility (AWTF). The AWTF's ability to control enterovirus, Giardia, and Cryptosporidium was characterized using online monitoring of surrogates in a treatment train consisting of ozone, biological activated carbon, microfiltration, reverse osmosis, and ultraviolet light with an advanced oxidation process. This process train was selected to improve reliability by providing redundancy, defined as the provision of treatment beyond the minimum needed to meet regulatory requirements. The PDFs demonstrated treatment that consistently exceeded the 12/10/10-log thresholds for virus, Giardia, and Cryptosporidium, as currently required for potable reuse in California (via groundwater recharge and surface water augmentation). Because no critical process failures impacted pathogen removal performance during the yearlong testing, hypothetical failures were incorporated into the analysis to understand the benefit of treatment redundancy on performance. Each unit process was modeled with a single failure per year lasting four different failure durations: 15 min, 60 min, 8 h, and 24 h. QMRA was used to quantify the impact of failures on pathogen risk. The median annual risk of infection for Cryptosporidium was 4.9 × 10⁻¹¹ in the absence of failures, and reached a maximum of 1.1 × 10⁻⁵ assuming one 24-h failure per process per year. With the inclusion of free chlorine disinfection as part of the treatment process, enterovirus had a median annual infection risk of 1.5 × 10⁻¹⁴ (no failures) and a maximum annual value of 2.1 × 10⁻⁵ (assuming one 24-h failure per year). Even with conservative failure assumptions, pathogen risk from this treatment train remains below the risk targets for both the U.S. (10⁻⁴ infections/person/year) and the WHO (approximately 10⁻³ infections/person/year, equivalent to 10⁻⁶ DALY/person/year), demonstrating the value of a failure prevention strategy based on treatment redundancy. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
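    A highly simplified version of the failure-scenario QMRA described above might look like the sketch below. The exponential dose-response model is one common choice rather than necessarily the study's, and every numeric parameter is a hypothetical placeholder:

```python
import math

# Simplified QMRA sketch: annual infection risk for a treatment train
# whose log-removal value (LRV) drops during a single failure event per
# year. All numbers below are illustrative, not the study's parameters.

def daily_infection_risk(source_conc, log_removal, ingested_L=2.0, r=0.09):
    """Exponential dose-response: P = 1 - exp(-r * dose)."""
    dose = source_conc * 10.0 ** (-log_removal) * ingested_L
    return 1.0 - math.exp(-r * dose)

def annual_risk(source_conc, normal_lrv, failure_lrv, failure_hours):
    """Combine normal days with one failure event of the given duration."""
    failure_days = failure_hours / 24.0
    p_normal = daily_infection_risk(source_conc, normal_lrv)
    p_failed = daily_infection_risk(source_conc, failure_lrv)
    # Probability of at least one infection over the year:
    survive = ((1 - p_normal) ** (365 - failure_days)
               * (1 - p_failed) ** failure_days)
    return 1.0 - survive

# Hypothetical Cryptosporidium scenario: 1 oocyst/L in raw water,
# 12-log removal normally, degraded to 8-log during a 24-hour failure.
print(annual_risk(source_conc=1.0, normal_lrv=12, failure_lrv=8,
                  failure_hours=24))
```

    Even in this toy version, the annual risk is dominated by the short failure window, which is why redundancy beyond the regulatory minimum buys so much reliability.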

  15. An observational study of the frequency, severity, and etiology of failures in postoperative care after major elective general surgery.

    PubMed

    Symons, Nicholas R A; Almoudaris, Alex M; Nagpal, Kamal; Vincent, Charles A; Moorthy, Krishna

    2013-01-01

    To investigate the nature of process failures in postoperative care, to assess their frequency and preventability, and to explore their relationship to adverse events. Adverse events are common and are frequently caused by failures in the process of care. These processes are often evaluated independently using clinical audit. There is little understanding of process failures in terms of their overall frequency, relative risk, and cumulative effect on the surgical patient. Patients were observed daily from the first postoperative day until discharge by an independent surgeon. Field notes on the circumstances surrounding any nonroutine or atypical event were recorded. Field notes were assessed by 2 surgeons to identify failures in the process of care. Preventability, the degree of harm caused to the patient, and the underlying etiology of process failures were evaluated by 2 independent surgeons. Fifty patients undergoing major elective general surgery were observed for a total of 659 days of postoperative care. A total of 256 process failures were identified, of which 85% were preventable and 51% directly led to patient harm. Process failures occurred in all aspects of care, the most frequent being medication prescribing and administration, management of lines, tubes, and drains, and pain control interventions. Process failures accounted for 57% of all preventable adverse events. Communication failures and delays were the main etiologies, leading to 54% of process failures. Process failures are common in postoperative care, are highly preventable, and frequently cause harm to patients. Interventions to prevent process failures will improve the reliability of surgical postoperative care and have the potential to reduce hospital stay.

  16. The Application of Failure Modes and Effects Analysis Methodology to Intrathecal Drug Delivery for Pain Management

    PubMed Central

    Patel, Teresa; Fisher, Stanley P.

    2016-01-01

    Objective This study aimed to utilize failure modes and effects analysis (FMEA) to transform clinical insights into a risk mitigation plan for intrathecal (IT) drug delivery in pain management. Methods The FMEA methodology, which has been used for quality improvement, was adapted to assess risks (i.e., failure modes) associated with IT therapy. Ten experienced pain physicians scored 37 failure modes in the following categories: patient selection for therapy initiation (efficacy and safety concerns), patient safety during IT therapy, and product selection for IT therapy. Participants assigned severity, probability, and detection scores for each failure mode, from which a risk priority number (RPN) was calculated. Failure modes with the highest RPNs (i.e., most problematic) were discussed, and strategies were proposed to mitigate risks. Results Strategic discussions focused on 17 failure modes with the most severe outcomes, the highest probabilities of occurrence, and the most challenging detection. The topic of the highest‐ranked failure mode (RPN = 144) was manufactured monotherapy versus compounded combination products. Addressing failure modes associated with appropriate patient and product selection was predicted to be clinically important for the success of IT therapy. Conclusions The methodology of FMEA offers a systematic approach to prioritizing risks in a complex environment such as IT therapy. Unmet needs and information gaps are highlighted through the process. Risk mitigation and strategic planning to prevent and manage critical failure modes can contribute to therapeutic success. PMID:27477689

  17. Probabilistic framework for product design optimization and risk management

    NASA Astrophysics Data System (ADS)

    Keski-Rahkonen, J. K.

    2018-05-01

    Probabilistic methods have gradually gained ground within engineering practice, but it is still the industry standard to use deterministic safety-margin approaches to dimension components and qualitative methods to manage product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, on how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes because of the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
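    A minimal Monte Carlo load-resistance sketch follows; the normal distributions and their parameters are assumptions chosen for illustration, not values from the paper:

```python
import random

# Monte Carlo estimate of failure probability for a load-resistance
# model: failure occurs when a random load L exceeds a random
# resistance R. Distribution choices and parameters are illustrative.

def failure_probability(n_samples=200_000, seed=42):
    rng = random.Random(seed)  # fixed seed for reproducibility
    failures = 0
    for _ in range(n_samples):
        load = rng.gauss(mu=300.0, sigma=40.0)        # e.g. stress, MPa
        resistance = rng.gauss(mu=500.0, sigma=50.0)  # e.g. strength, MPa
        if load > resistance:
            failures += 1
    return failures / n_samples

p_fail = failure_probability()
print(f"Estimated failure probability: {p_fail:.4f}")
```

    The estimate can be checked analytically for this Gaussian case: R − L is normal with mean 200 and standard deviation √(40² + 50²) ≈ 64, so the true failure probability is about 9 × 10⁻⁴, which the simulation should approach as the sample count grows.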

  18. Evaluation of the Effect of the Volume Throughput and Maximum Flux of Low-Surface-Tension Fluids on Bacterial Penetration of 0.2 Micron-Rated Filters during Process-Specific Filter Validation Testing.

    PubMed

    Folmsbee, Martha

    2015-01-01

    Approximately 97% of filter validation tests result in the demonstration of absolute retention of the test bacteria, and thus sterile filter validation failure is rare. However, while Brevundimonas diminuta (B. diminuta) penetration of sterilizing-grade filters is rarely detected, the observation that some fluids (such as vaccines and liposomal fluids) may lead to an increased incidence of bacterial penetration of sterilizing-grade filters by B. diminuta has been reported. The goal of the following analysis was to identify important drivers of filter validation failure in these rare cases. The identification of these drivers will hopefully serve the purpose of assisting in the design of commercial sterile filtration processes with a low risk of filter validation failure for vaccine, liposomal, and related fluids. Filter validation data for low-surface-tension fluids was collected and evaluated with regard to the effect of bacterial load (CFU/cm²), bacterial load rate (CFU/min/cm²), volume throughput (mL/cm²), and maximum filter flux (mL/min/cm²) on bacterial penetration. The data set (∼1162 individual filtrations) included all instances of process-specific filter validation failures performed at Pall Corporation, including those using other filter media, but did not include all successful retentive filter validation bacterial challenges. It was neither practical nor necessary to include all filter validation successes worldwide (Pall Corporation) to achieve the goals of this analysis. The percentage of failed filtration events for the selected total master data set was 27% (310/1162). Because it is heavily weighted with penetration events, this percentage is considerably higher than the actual rate of failed filter validations, but, as such, facilitated a close examination of the conditions that lead to filter validation failure.
In agreement with our previous reports, two of the significant drivers of bacterial penetration identified were the total bacterial load and the bacterial load rate. In addition to these parameters, another three possible drivers of failure were also identified: volume throughput, maximum filter flux, and pressure. Of the data for which volume throughput information was available, 24% (249/1038) of the filtrations resulted in penetration. However, for the volume throughput range of 680-2260 mL/cm², only 9 out of 205 bacterial challenges (∼4%) resulted in penetration. Of the data for which flux information was available, 22% (212/946) resulted in bacterial penetration. However, in the maximum filter flux range from 7 to 18 mL/min/cm², only one out of 121 filtrations (0.6%) resulted in penetration. A slight increase in filter failure was observed in filter bacterial challenges with a differential pressure greater than 30 psid. When designing a commercial process for the sterile filtration of a low-surface-tension fluid (or any other potentially high-risk fluid), targeting the volume throughput range of 680-2260 mL/cm² or flux range of 7-18 mL/min/cm², and maintaining the differential pressure below 30 psid, could significantly decrease the risk of validation filter failure. However, it is important to keep in mind that these are general trends described in this study and some test fluids may not conform to the general trends described here. Ultimately, it is important to evaluate both filterability and bacterial retention of the test fluid under proposed process conditions prior to finalizing the manufacturing process to ensure successful process-specific filter validation of low-surface-tension fluids. An overwhelming majority of process-specific filter validation (qualification) tests result in the demonstration of absolute retention of test bacteria by sterilizing-grade membrane filters. As such, process-specific filter validation failure is rare.
However, while bacterial penetration of sterilizing-grade filters during process-specific filter validation is rarely detected, some fluids (such as vaccines and liposomal fluids) have been associated with an increased incidence of bacterial penetration. The goal of the following analysis was to identify important drivers of process-specific filter validation failure. The identification of these drivers will possibly serve to assist in the design of commercial sterile filtration processes with a low risk of filter validation failure. Filter validation data for low-surface-tension fluids was collected and evaluated with regard to bacterial concentration and rates, as well as filtered fluid volume and rate (Pall Corporation). The master data set (∼1160 individual filtrations) included all recorded instances of process-specific filter validation failures but did not include all successful filter validation bacterial challenge tests. This allowed for a close examination of the conditions that lead to process-specific filter validation failure. As previously reported, two significant drivers of bacterial penetration were identified: the total bacterial load (the total number of bacteria per filter) and the bacterial load rate (the rate at which bacteria were applied to the filter). In addition to these parameters, another three possible drivers of failure were also identified: volumetric throughput, filter flux, and pressure. When designing a commercial process for the sterile filtration of a low-surface-tension fluid (or any other penetrative-risk fluid), targeting the identified bacterial challenge loads, volume throughput, and corresponding flux rates could decrease, and possibly eliminate, the risk of validation filter failure. However, it is important to keep in mind that these are general trends described in this study and some test fluids may not conform to the general trends described here. 
Ultimately, it is important to evaluate both filterability and bacterial retention of the test fluid under proposed process conditions prior to finalizing the manufacturing process to ensure successful filter validation of low-surface-tension fluids. © PDA, Inc. 2015.
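The low-risk operating ranges identified above lend themselves to a simple screening check. The sketch below encodes them as a hypothetical Python helper; the function name and interface are illustrative, not from the study, and only the thresholds are taken from the abstract:

```python
# Screen a proposed sterile-filtration process against the low-risk operating
# ranges reported above. The thresholds come from the abstract; the function
# itself is a hypothetical illustration, not part of the published analysis.

def screen_filtration_process(throughput_ml_cm2, max_flux_ml_min_cm2, dp_psid):
    """Return warnings for parameters outside the reported low-risk ranges."""
    warnings = []
    if not 680 <= throughput_ml_cm2 <= 2260:
        warnings.append("volume throughput outside 680-2260 mL/cm^2 range")
    if not 7 <= max_flux_ml_min_cm2 <= 18:
        warnings.append("maximum filter flux outside 7-18 mL/min/cm^2 range")
    if dp_psid > 30:
        warnings.append("differential pressure above 30 psid")
    return warnings

print(screen_filtration_process(1500, 12, 25))  # within all ranges -> []
print(screen_filtration_process(3000, 25, 40))  # all three parameters flagged
```

As the abstract stresses, such a check only captures general trends; fluid-specific filterability and retention testing is still required.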

  19. Towards Prognostics of Power MOSFETs: Accelerated Aging and Precursors of Failure

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Saxena, Abhinav; Wysocki, Philip; Saha, Sankalita; Goebel, Kai

    2010-01-01

    This paper presents research results dealing with power MOSFETs (metal oxide semiconductor field effect transistor) within the prognostics and health management of electronics. Experimental results are presented for the identification of the on-resistance as a precursor to failure of devices with die-attach degradation as a failure mechanism. Devices are aged under power cycling in order to trigger die-attach damage. In situ measurements of key electrical and thermal parameters are collected throughout the aging process and further used for analysis and computation of the on-resistance parameter. Experimental results show that the devices experience die-attach damage and that the on-resistance captures the degradation process in such a way that it could be used for the development of prognostics algorithms (data-driven or physics-based).

  20. SU-E-T-627: Failure Modes and Effect Analysis for Monthly Quality Assurance of Linear Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, J; Xiao, Y; Wang, J

    2014-06-15

    Purpose: To develop and implement a failure mode and effect analysis (FMEA) on the routine monthly quality assurance (QA) tests (physical tests) of a linear accelerator. Methods: A systematic FMEA was performed for the monthly QA procedures. A detailed process tree of monthly QA was created and potential failure modes were defined. Each failure mode may have several influencing factors. For each factor, a risk probability number (RPN) was calculated as the product of the probability of occurrence (O), the severity of effect (S), and the detectability of the failure (D). RPN scores range from 1 to 1000, with higher scores indicating stronger correlation between a given influencing factor and a failure mode. Five medical physicists in our institution discussed and defined the O, S, and D values. Results: 15 possible failure modes were identified; the RPN scores of all influencing factors of these 15 failure modes ranged from 8 to 150, and a checklist of the FMEA in monthly QA was drawn up. The system showed consistent and accurate response to erroneous conditions. Conclusion: Influencing factors with RPN greater than 50 were considered highly correlated factors of a given out-of-tolerance monthly QA test. FMEA is a fast and flexible tool to develop and implement a quality management (QM) framework for monthly QA, which improved the QA efficiency of our team. The FMEA work may incorporate more quantification and monitoring functions in future.
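The RPN arithmetic described in this record can be sketched directly. The 1-10 scales, the 1-1000 range, and the RPN > 50 screening threshold are taken from the abstract; the failure-mode names and O, S, D values below are invented for illustration:

```python
# Minimal RPN bookkeeping for an FMEA checklist: RPN = O x S x D on 1-10
# scales (range 1-1000), with influencing factors whose RPN exceeds 50
# treated as highly correlated with an out-of-tolerance monthly QA test.
# The factor names and scores are illustrative, not from the study.

def rpn(occurrence, severity, detectability):
    for v in (occurrence, severity, detectability):
        assert 1 <= v <= 10, "O, S, D are scored on a 1-10 scale"
    return occurrence * severity * detectability

factors = {
    "output calibration drift": (5, 6, 5),  # (O, S, D), illustrative
    "laser misalignment":       (2, 4, 3),
    "couch index error":        (3, 7, 4),
}

high_priority = {name: rpn(*osd) for name, osd in factors.items()
                 if rpn(*osd) > 50}
print(high_priority)  # {'output calibration drift': 150, 'couch index error': 84}
```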

  1. Application of failure mode and effect analysis in managing catheter-related blood stream infection in intensive care unit

    PubMed Central

    Li, Xixi; He, Mei; Wang, Haiyan

    2017-01-01

    Abstract In this study, failure mode and effect analysis (FMEA), a proactive tool, was applied to reduce errors associated with the process that begins with assessment of the patient and ends with treatment of complications. The aim of this study was to assess whether FMEA implementation would significantly reduce the incidence of catheter-related bloodstream infections (CRBSIs) in the intensive care unit. An FMEA team of 15 medical staff from different departments was recruited and trained; its main responsibility was to analyze and score all possible failure modes of central venous catheterization. Failure modes with a risk priority number (RPN) ≥100 (the top 10 RPN scores) were deemed high-priority risks requiring immediate corrective action. After the modifications were implemented, the resulting RPNs were compared with the previous ones. A centralized nursing care system was designed. A total of 25 failure modes were identified. The high-priority risks were "Unqualified medical device sterilization" (RPN, 337), "leukopenia, very low immunity" (RPN, 222), and "Poor hand hygiene Basic diseases" (RPN, 160). The corrective measures taken decreased the RPNs, especially for the high-priority risks; the maximum reduction was approximately 80%, observed for the failure mode "Not creating the maximal barrier for patient." The average incidence of CRBSIs was reduced from 5.19% to 1.45%, with 3 months of zero infection rate. FMEA can effectively reduce the incidence of CRBSIs, improve the safety of central venous catheterization, decrease overall medical expenses, and improve nursing quality. PMID:29390515
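The prioritisation and re-scoring loop described above can be sketched as follows. The three high-priority RPNs are quoted from the abstract; the fourth failure mode and all post-intervention scores are invented for illustration:

```python
# Sketch of the FMEA prioritisation step: failure modes with RPN >= 100 are
# flagged for immediate corrective action, and the percent RPN reduction is
# computed after the intervention. Only the three high-priority "before"
# scores are from the abstract; everything else is illustrative.

before = {
    "Unqualified medical device sterilization": 337,
    "leukopenia, very low immunity": 222,
    "Poor hand hygiene Basic diseases": 160,
    "Dressing not changed on schedule": 72,   # invented low-priority mode
}
after = {                                     # invented post-intervention scores
    "Unqualified medical device sterilization": 90,
    "leukopenia, very low immunity": 120,
    "Poor hand hygiene Basic diseases": 60,
    "Dressing not changed on schedule": 48,
}

high_priority = [mode for mode, r in before.items() if r >= 100]
reduction_pct = {mode: round(100 * (before[mode] - after[mode]) / before[mode])
                 for mode in before}
print(high_priority)
print(reduction_pct)
```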

  2. Application of failure mode and effects analysis (FMEA) to pretreatment phases in tomotherapy.

    PubMed

    Broggi, Sara; Cantone, Marie Claire; Chiara, Anna; Di Muzio, Nadia; Longobardi, Barbara; Mangili, Paola; Veronese, Ivan

    2013-09-06

    The aim of this paper was the application of the failure mode and effects analysis (FMEA) approach to assess the risks for patients undergoing radiotherapy treatments performed by means of a helical tomotherapy unit. FMEA was applied to the preplanning imaging, volume determination, and treatment planning stages of the tomotherapy process and consisted of three steps: 1) identification of the involved subprocesses; 2) identification and ranking of the potential failure modes, together with their causes and effects, using the risk probability number (RPN) scoring system; and 3) identification of additional safety measures to be proposed for process quality and safety improvement. The RPN threshold below which a risk was considered of little concern was set at 125. A total of 74 failure modes were identified: 38 in the stage of preplanning imaging and volume determination, and 36 in the stage of planning. The threshold of 125 for RPN was exceeded in four cases: one in the phase of preplanning imaging and volume determination, and three in the stage of planning. The most critical failures were related to (i) the wrong or missing definition and contouring of the overlapping regions, (ii) the wrong assignment of the overlap priority to each anatomical structure, (iii) the wrong choice of the computed tomography calibration curve for dose calculation, and (iv) the wrong, or omitted, choice of the number of fractions in the planning station. On the basis of these findings, in addition to the safety strategies already adopted in clinical practice, novel solutions were proposed to mitigate the risk of these failures and to increase patient safety.

  3. Composite laminate free-edge reinforcement with U-shaped caps. I - Stress analysis. II - Theoretical-experimental correlation

    NASA Technical Reports Server (NTRS)

    Howard, W. E.; Gossard, Terry, Jr.; Jones, Robert M.

    1989-01-01

    The present generalized plane-strain FEM analysis for predicting the reduction in interlaminar normal stress when a U-shaped cap is bonded to the edge of a composite laminate gives attention to the highly variable transverse stresses near the free edge, to cap length and thickness, and to a gap under the cap arising from the manufacturing process. The load-transfer mechanism between cap and laminate is found to be strain compatibility rather than shear lag. In the second part of this work, three-dimensional composite material failure criteria are used in a progressive laminate failure analysis to predict the failure loads of laminates with different edge-cap designs; symmetric 11-layer graphite-epoxy laminates with a one-layer Kevlar-epoxy cap are shown to carry 130-140 percent greater loading than uncapped laminates under static tensile and tension-tension fatigue loading.

  4. Reliability analysis of forty-five strain-gage systems mounted on the first fan stage of a YF-100 engine

    NASA Technical Reports Server (NTRS)

    Holanda, R.; Frause, L. M.

    1977-01-01

    The reliability of 45 state-of-the-art strain gage systems under full scale engine testing was investigated. The flame spray process was used to install 23 systems on the first fan rotor of a YF-100 engine; the others were epoxy cemented. A total of 56 percent of the systems failed in 11 hours of engine operation. Flame spray system failures were primarily due to high gage resistance, probably caused by high stress levels. Epoxy system failures were principally erosion failures, but only on the concave side of the blade. Lead-wire failures between the blade-to-disk jump and the control room could not be analyzed.

  5. Dynamics of functional failures and recovery in complex road networks

    NASA Astrophysics Data System (ADS)

    Zhan, Xianyuan; Ukkusuri, Satish V.; Rao, P. Suresh C.

    2017-11-01

    We propose a new framework for modeling the evolution of functional failures and recoveries in complex networks, with traffic congestion on road networks as the case study. Differently from conventional approaches, we transform the evolution of functional states into an equivalent dynamic structural process: dual-vertex splitting and coalescing embedded within the original network structure. The proposed model successfully explains traffic congestion and recovery patterns at the city scale based on high-resolution data from two megacities. Numerical analysis shows that certain network structural attributes can amplify or suppress cascading functional failures. Our approach represents a new general framework to model functional failures and recoveries in flow-based networks and allows understanding of the interplay between structure and function for flow-induced failure propagation and recovery.

  6. Methods for improved forewarning of critical events across multiple data channels

    DOEpatents

    Hively, Lee M [Philadelphia, TN

    2007-04-24

    This disclosed invention concerns improvements in the forewarning of critical events via phase-space dissimilarity analysis of data from mechanical devices, electrical devices, biomedical processes, and other physical processes. First, a single channel of process-indicative data is selected that can be used in place of multiple data channels without sacrificing consistent forewarning of critical events. Second, the method discards data of inadequate quality via statistical analysis of the raw data, because analysis of poor-quality data yields inferior results. Third, two separate filtering operations are used in sequence to remove both high-frequency and low-frequency artifacts using a zero-phase quadratic filter. Fourth, the method constructs phase-space dissimilarity measures (PSDM) by combining multi-channel time-serial data into a multi-channel time-delay phase-space reconstruction. Fifth, the method uses a composite measure of dissimilarity (C.sub.i) to provide a forewarning of failure and an indicator of failure onset.
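A rough numpy sketch in the spirit of the phase-space dissimilarity idea above: time-delay embedding of a single channel, coarse symbolic binning of the reconstructed states, and a chi-squared distance between the state distributions of a baseline window and a test window. The embedding parameters, bin counts, and test signals are illustrative choices, not those of the patent:

```python
import numpy as np

def state_histogram(x, delay=2, dim=3, bins=6):
    """Time-delay embed a 1-D series and return its normalized state histogram."""
    n = len(x) - (dim - 1) * delay
    states = np.stack([x[i * delay : i * delay + n] for i in range(dim)], axis=1)
    edges = np.linspace(x.min(), x.max() + 1e-12, bins + 1)
    symbols = np.clip(np.digitize(states, edges) - 1, 0, bins - 1)
    codes = (symbols * bins ** np.arange(dim)).sum(axis=1)  # one code per state
    hist = np.bincount(codes, minlength=bins ** dim).astype(float)
    return hist / hist.sum()

def dissimilarity(p, q):
    """Chi-squared distance between two state distributions."""
    denom = p + q
    mask = denom > 0
    return float(np.sum((p[mask] - q[mask]) ** 2 / denom[mask]))

rng = np.random.default_rng(0)
t = np.arange(4000)
signal = np.sin(0.07 * t) + 0.1 * rng.standard_normal(t.size)        # nominal dynamics
degraded = np.sin(0.07 * t) ** 3 + 0.1 * rng.standard_normal(t.size)  # distorted dynamics

p_base = state_histogram(signal[:2000])
d_base = dissimilarity(p_base, state_histogram(signal[2000:]))   # same dynamics: small
d_fail = dissimilarity(p_base, state_histogram(degraded[:2000]))  # changed dynamics: larger
print(d_base, d_fail)
```

A forewarning indicator in this spirit would track such a dissimilarity over successive windows and flag a sustained rise above the baseline scatter.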

  7. The report of Task Group 100 of the AAPM: Application of risk analysis methods to radiation therapy quality management

    PubMed Central

    Huq, M. Saiful; Fraass, Benedick A.; Dunscombe, Peter B.; Gibbons, John P.; Mundt, Arno J.; Mutic, Sasa; Palta, Jatinder R.; Rath, Frank; Thomadsen, Bruce R.; Williamson, Jeffrey F.; Yorke, Ellen D.

    2016-01-01

    The increasing complexity of modern radiation therapy planning and delivery challenges traditional prescriptive quality management (QM) methods, such as many of those included in guidelines published by organizations such as the AAPM, ASTRO, ACR, ESTRO, and IAEA. These prescriptive guidelines have traditionally focused on monitoring all aspects of the functional performance of radiotherapy (RT) equipment by comparing parameters against tolerances set at strict but achievable values. Many errors that occur in radiation oncology are not due to failures in devices and software; rather they are failures in workflow and process. A systematic understanding of the likelihood and clinical impact of possible failures throughout a course of radiotherapy is needed to direct limited QM resources efficiently to produce maximum safety and quality of patient care. Task Group 100 of the AAPM has taken a broad view of these issues and has developed a framework for designing QM activities, based on estimates of the probability of identified failures and their clinical outcome through the RT planning and delivery process. The Task Group has chosen a specific radiotherapy process required for “intensity modulated radiation therapy (IMRT)” as a case study. The goal of this work is to apply modern risk-based analysis techniques to this complex RT process in order to demonstrate to the RT community that such techniques may help identify more effective and efficient ways to enhance the safety and quality of our treatment processes. The task group generated by consensus an example quality management program strategy for the IMRT process performed at the institution of one of the authors. This report describes the methodology and nomenclature developed, presents the process maps, FMEAs, fault trees, and QM programs developed, and makes suggestions on how this information could be used in the clinic. 
The development and implementation of risk-assessment techniques will make radiation therapy safer and more efficient. PMID:27370140

  8. The report of Task Group 100 of the AAPM: Application of risk analysis methods to radiation therapy quality management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huq, M. Saiful, E-mail: HUQS@UPMC.EDU

    The increasing complexity of modern radiation therapy planning and delivery challenges traditional prescriptive quality management (QM) methods, such as many of those included in guidelines published by organizations such as the AAPM, ASTRO, ACR, ESTRO, and IAEA. These prescriptive guidelines have traditionally focused on monitoring all aspects of the functional performance of radiotherapy (RT) equipment by comparing parameters against tolerances set at strict but achievable values. Many errors that occur in radiation oncology are not due to failures in devices and software; rather they are failures in workflow and process. A systematic understanding of the likelihood and clinical impact of possible failures throughout a course of radiotherapy is needed to direct limited QM resources efficiently to produce maximum safety and quality of patient care. Task Group 100 of the AAPM has taken a broad view of these issues and has developed a framework for designing QM activities, based on estimates of the probability of identified failures and their clinical outcome through the RT planning and delivery process. The Task Group has chosen a specific radiotherapy process required for “intensity modulated radiation therapy (IMRT)” as a case study. The goal of this work is to apply modern risk-based analysis techniques to this complex RT process in order to demonstrate to the RT community that such techniques may help identify more effective and efficient ways to enhance the safety and quality of our treatment processes. The task group generated by consensus an example quality management program strategy for the IMRT process performed at the institution of one of the authors. This report describes the methodology and nomenclature developed, presents the process maps, FMEAs, fault trees, and QM programs developed, and makes suggestions on how this information could be used in the clinic. 
The development and implementation of risk-assessment techniques will make radiation therapy safer and more efficient.

  9. The report of Task Group 100 of the AAPM: Application of risk analysis methods to radiation therapy quality management.

    PubMed

    Huq, M Saiful; Fraass, Benedick A; Dunscombe, Peter B; Gibbons, John P; Ibbott, Geoffrey S; Mundt, Arno J; Mutic, Sasa; Palta, Jatinder R; Rath, Frank; Thomadsen, Bruce R; Williamson, Jeffrey F; Yorke, Ellen D

    2016-07-01

    The increasing complexity of modern radiation therapy planning and delivery challenges traditional prescriptive quality management (QM) methods, such as many of those included in guidelines published by organizations such as the AAPM, ASTRO, ACR, ESTRO, and IAEA. These prescriptive guidelines have traditionally focused on monitoring all aspects of the functional performance of radiotherapy (RT) equipment by comparing parameters against tolerances set at strict but achievable values. Many errors that occur in radiation oncology are not due to failures in devices and software; rather they are failures in workflow and process. A systematic understanding of the likelihood and clinical impact of possible failures throughout a course of radiotherapy is needed to direct limited QM resources efficiently to produce maximum safety and quality of patient care. Task Group 100 of the AAPM has taken a broad view of these issues and has developed a framework for designing QM activities, based on estimates of the probability of identified failures and their clinical outcome through the RT planning and delivery process. The Task Group has chosen a specific radiotherapy process required for "intensity modulated radiation therapy (IMRT)" as a case study. The goal of this work is to apply modern risk-based analysis techniques to this complex RT process in order to demonstrate to the RT community that such techniques may help identify more effective and efficient ways to enhance the safety and quality of our treatment processes. The task group generated by consensus an example quality management program strategy for the IMRT process performed at the institution of one of the authors. This report describes the methodology and nomenclature developed, presents the process maps, FMEAs, fault trees, and QM programs developed, and makes suggestions on how this information could be used in the clinic. 
The development and implementation of risk-assessment techniques will make radiation therapy safer and more efficient.

  10. Identification of priorities for medication safety in neonatal intensive care.

    PubMed

    Kunac, Desireé L; Reith, David M

    2005-01-01

    Although neonates are reported to be at greater risk of medication error than infants and older children, little is known about the causes and characteristics of error in this patient group. Failure mode and effects analysis (FMEA) is a technique used in industry to evaluate system safety and identify potential hazards in advance. The aim of this study was to identify and prioritize potential failures in the neonatal intensive care unit (NICU) medication use process through application of FMEA. Using the FMEA framework and a systems-based approach, an eight-member multidisciplinary panel worked as a team to create a flow diagram of the neonatal unit medication use process. Then by brainstorming, the panel identified all potential failures, their causes and their effects at each step in the process. Each panel member independently rated failures based on occurrence, severity and likelihood of detection to allow calculation of a risk priority score (RPS). The panel identified 72 failures, with 193 associated causes and effects. Vulnerabilities were found to be distributed across the entire process, but multiple failures and associated causes were possible when prescribing the medication and when preparing the drug for administration. The top ranking issue was a perceived lack of awareness of medication safety issues (RPS score 273), due to a lack of medication safety training. The next highest ranking issues were found to occur at the administration stage. Common potential failures related to errors in the dose, timing of administration, infusion pump settings and route of administration. Perceived causes were multiple, but were largely associated with unsafe systems for medication preparation and storage in the unit, variable staff skill level and lack of computerised technology. 
Interventions to decrease medication-related adverse events in the NICU should aim to increase staff awareness of medication safety issues and focus on medication administration processes.

  11. Spectral Characteristics of Continuous Acoustic Emission (AE) Data from Laboratory Rock Deformation Experiments

    NASA Astrophysics Data System (ADS)

    Flynn, J. William; Goodfellow, Sebastian; Reyes-Montes, Juan; Nasseri, Farzine; Young, R. Paul

    2016-04-01

    Continuous acoustic emission (AE) data recorded during rock deformation tests facilitate the monitoring of fracture initiation and propagation under applied stress changes. Changes in the frequency and energy content of AE waveforms have previously been observed and associated with microcrack coalescence and the induction or mobilisation of large fractures, which are naturally associated with larger-amplitude AE events and lower-frequency components. The shift from high to low dominant frequency components during the late stages of a deformation experiment, as the rate of AE events increases and the sample approaches failure, indicates a transition from the micro-cracking to the macro-cracking regime, in which the large cracks generated result in material failure. The objective of this study is to extract information on the fracturing process from the acoustic records around sample failure, where the rapid occurrence of AE events does not allow identification of individual AE events and phase arrivals, so standard AE event processing techniques are not suitable. Instead, the observed changes in the frequency content of the continuous record can be used to characterise and investigate the fracture process at the stage of microcrack coalescence and sample failure. Analysing and characterising these changes requires a detailed non-linear, non-stationary time-frequency analysis of the continuous waveform data. Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA) are two techniques used in this paper to analyse the acoustic records; they provide a high-resolution temporal frequency distribution of the data. We present the results of our analysis of continuous AE data recorded during a laboratory triaxial deformation experiment using the combined EMD and HSA method.
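The Hilbert-spectral step described above can be sketched with numpy alone: build the analytic signal in the frequency domain (zero the negative frequencies, double the positive ones), then take the instantaneous frequency as the derivative of the unwrapped phase. The synthetic tone below stands in for an AE record; it is not laboratory data:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (equivalent to the standard Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1 : n // 2] = 2.0
    else:
        h[1 : (n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) from the unwrapped analytic phase."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2 * np.pi)

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
tone = np.sin(2 * np.pi * 80 * t)  # synthetic 80 Hz stand-in for an AE record
f_inst = instantaneous_frequency(tone, fs)
print(round(float(np.median(f_inst)), 1))  # → 80.0
```

In an EMD+HSA workflow this step is applied to each intrinsic mode function, so that a drop in the dominant instantaneous frequency over time can be tracked as the sample approaches failure.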

  12. Fractography: determining the sites of fracture initiation.

    PubMed

    Mecholsky, J J

    1995-03-01

    Fractography is the analysis of fracture surfaces. Here, it refers to quantitative fracture surface analysis (FSA) in the context of applying the principles of fracture mechanics to the topography observed on the fracture surface of brittle materials. The application of FSA is based on the principle that encoded on the fracture surface of brittle materials is the entire history of the fracture process. It is our task to develop the skills and knowledge to decode this information. There are several motivating factors for applying our knowledge of FSA. The first and foremost is that there is specific, quantitative information to be obtained from the fracture surface. This information includes the identification of the size and location of the fracture initiating crack or defect, the stress state at failure, the existence, or not, of local or global residual stress, the existence, or not, of stress corrosion and a knowledge of local processing anomalies which affect the fracture process. The second motivating factor is that the information is free. Once a material is tested to failure, the encoded information becomes available. If we decide to observe the features produced during fracture then we are rewarded with much information. If we decide to ignore the fracture surface, then we are left to guess and/or reason as to the cause of the failure without the benefit of all of the possible information available. This paper addresses the application of quantitative fracture surface analysis to basic research, material and product development, and "trouble-shooting" of in-service failures. First, the basic principles involved will be presented. Next, the methodology necessary to apply the principles will be presented. Finally, a summary of the presentation will be made showing the applicability to design and reliability.

  13. Lithographic chip identification: meeting the failure analysis challenge

    NASA Astrophysics Data System (ADS)

    Perkins, Lynn; Riddell, Kevin G.; Flack, Warren W.

    1992-06-01

    This paper describes a novel method using stepper photolithography to uniquely identify individual chips for permanent traceability. A commercially available 1X stepper is used to mark chips with an identifier or 'serial number' which can be encoded with relevant information for the integrated circuit manufacturer. The permanent identification of individual chips can improve current methods of quality control, failure analysis, and inventory control. The need for this technology is escalating as manufacturers seek to provide six-sigma quality control for their products and trace fabrication problems to their source. This need is especially acute for parts that fail after packaging and are returned to the manufacturer for analysis. Using this approach, failure analysis data can be tied back to a particular batch, wafer, or even a position within a wafer. Process control can be enhanced by identifying the root cause of chip failures. Chip identification also addresses manufacturers' concerns about increasing incidences of chip theft. Since chips currently carry no identification other than the manufacturer's name and part number, recovery efforts are hampered by the inability to determine the sales history of a specific packaged chip. A definitive identifier or serial number for each chip would address this concern. The results of chip identification (patent pending) are easily viewed through a low-power microscope. Batch number, wafer number, exposure step, and chip location within the exposure step can be recorded, as can dates and other items of interest. The chip identification procedure and processing requirements are described, experimental testing and results are presented, and potential applications are discussed.

  14. Preventing medical errors by designing benign failures.

    PubMed

    Grout, John R

    2003-07-01

    One way to successfully reduce medical errors is to design health care systems that are more resistant to the tendencies of human beings to err. One interdisciplinary approach entails creating design changes, mitigating human errors, and making human error irrelevant to outcomes. This approach is intended to facilitate the creation of benign failures, which have elsewhere been called mistake-proofing devices and forcing functions. USING FAULT TREES TO DESIGN FORCING FUNCTIONS: A fault tree is a graphical tool used to understand the relationships that either directly cause or contribute to the cause of a particular failure. A careful analysis of a fault tree enables the analyst to anticipate how the process will behave after a change. EXAMPLE OF AN APPLICATION: A scenario in which a patient is scalded while bathing can serve as an example of how multiple fault trees can be used to design forcing functions. The first fault tree shows the undesirable event: patient scalded while bathing. The second fault tree has a benign event: no water. Adding a scald valve changes the outcome from the undesirable event ("patient scalded while bathing") to the benign event ("no water"). Analysis of fault trees does not ensure or guarantee that the changes necessary to eliminate error actually occur. Most mistake-proofing is used to prevent simple errors and to create well-defended processes, but complex errors can also result. The use of mistake-proofing or forcing functions can be thought of as changing the logic of a process: errors that formerly caused undesirable failures are converted into the causes of benign failures. The use of fault trees can provide a variety of insights into the design of forcing functions that will improve patient safety.
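The scald example above can be expressed as a toy fault-tree evaluation: events are booleans, gates are AND/OR, and fitting the scald valve converts the undesired top event into the benign one. The event names and tree shape are a simplification of the scenario, not a published model:

```python
# Toy boolean fault-tree for the bathing-scald scenario: basic events feed
# AND/OR gates, and the scald valve acts as a forcing function that turns the
# undesired top event ("patient scalded") into the benign event ("no water").

def AND(*branches): return all(branches)
def OR(*branches): return any(branches)

def scalded(heater_setpoint_high, mixing_valve_stuck, patient_exposed,
            scald_valve_fitted):
    water_too_hot = OR(heater_setpoint_high, mixing_valve_stuck)
    hot_water_reaches_tub = AND(water_too_hot, not scald_valve_fitted)
    return AND(hot_water_reaches_tub, patient_exposed)

def no_water(heater_setpoint_high, mixing_valve_stuck, scald_valve_fitted):
    # the valve shuts off flow whenever the water is too hot
    return AND(OR(heater_setpoint_high, mixing_valve_stuck), scald_valve_fitted)

# Without the valve: hot water plus exposure produces the undesired failure.
print(scalded(True, False, True, scald_valve_fitted=False))  # True
# With the valve: the same causes now produce the benign failure instead.
print(scalded(True, False, True, scald_valve_fitted=True))   # False
print(no_water(True, False, scald_valve_fitted=True))        # True
```

The design change rewires the tree so the same basic-event causes terminate in the benign top event, which is exactly the "benign failure" logic the record describes.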

  15. An improved method for risk evaluation in failure modes and effects analysis of CNC lathe

    NASA Astrophysics Data System (ADS)

    Rachieru, N.; Belu, N.; Anghel, D. C.

    2015-11-01

    Failure mode and effects analysis (FMEA) is one of the most popular reliability analysis tools for identifying, assessing, and eliminating potential failure modes in a wide range of industries. In general, failure modes in FMEA are evaluated and ranked through the risk priority number (RPN), which is obtained by multiplying crisp values of the risk factors: the occurrence (O), severity (S), and detection (D) of each failure mode. The crisp RPN method, however, has been criticized for several deficiencies. In this paper, linguistic variables, expressed as Gaussian, trapezoidal, or triangular fuzzy numbers, are used to assess the ratings and weights of the risk factors S, O, and D. A new risk assessment system based on fuzzy set theory and fuzzy rule-base theory is applied to assess and rank the risks associated with failure modes that could appear in the functioning of a Turn 55 Lathe CNC. Two case studies are presented to demonstrate the methodology. A parallel is drawn between the RPNs obtained by the traditional method and by fuzzy logic. The results show that the proposed approach can reduce duplicated RPNs and give a more accurate, reasonable risk assessment. As a result, the stability of the product and process can be assured.
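The fuzzy-RPN idea can be sketched with triangular fuzzy numbers: a linguistic scale, the standard vertex approximation for the product of triangular numbers, and centroid defuzzification for ranking. The scale and the lathe failure modes below are invented for illustration, not taken from the paper:

```python
# Fuzzy RPN sketch: S, O, D are triangular fuzzy numbers (a, b, c), multiplied
# with the usual vertex approximation and defuzzified by the centroid
# (a + b + c) / 3. The linguistic scale and failure modes are illustrative.

scale = {
    "low":    (1, 2, 4),
    "medium": (3, 5, 7),
    "high":   (6, 8, 10),
}

def tfn_mul(x, y):
    """Vertex approximation of the product of two triangular fuzzy numbers."""
    return (x[0] * y[0], x[1] * y[1], x[2] * y[2])

def fuzzy_rpn(s, o, d):
    product = tfn_mul(tfn_mul(scale[s], scale[o]), scale[d])
    return sum(product) / 3  # centroid defuzzification

modes = {
    "spindle bearing wear":  ("high", "medium", "medium"),
    "tool holder looseness": ("medium", "medium", "low"),
    "coolant pump failure":  ("low", "high", "low"),
}
ranking = sorted(modes, key=lambda m: fuzzy_rpn(*modes[m]), reverse=True)
print(ranking)  # ['spindle bearing wear', 'tool holder looseness', 'coolant pump failure']
```

Because the defuzzified scores spread over a continuum rather than a grid of integer products, ties between distinct (O, S, D) combinations become much rarer, which is the duplication problem the record describes.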

  16. Fractography, NDE, and fracture mechanics applications in failure analysis studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morin, C.R.; Shipley, R.J.; Wilkinson, J.A.

    1994-10-01

    While identification of the precise mode of a failure can lead logically to the underlying cause, a thorough failure investigation requires much more than just the identification of a specific metallurgical mechanism, for example, fatigue, creep, stress corrosion cracking, etc. Failures involving fracture provide good illustrations of this concept. An initial step in characterizing fracture surfaces is often the identification of an origin or origins. However, the analysis should not stop there. If the origin is associated with a discontinuity, the manner in which it was formed must also be addressed. The stresses that would have existed at the origin must be determined and compared with material properties to determine whether or not a crack should have initiated and propagated during normal operation. Many critical components are inspected throughout their lives by nondestructive methods. When a crack progresses to failure, its nondetection at earlier inspections must also be understood. Careful study of the fracture surface combined with crack growth analysis based on fracture mechanics can provide an estimate of the crack length at the times of previous inspections. An important issue often overlooked in such studies is how processing of parts during manufacture or rework affects the probability of detection of such cracks. The ultimate goal is to understand thoroughly the progression of the failure, to understand the root cause(s), and to design appropriate corrective action(s) to minimize recurrence.

  17. MeDICi Software Superglue for Data Analysis Pipelines

    ScienceCinema

    Ian Gorton

    2017-12-09

    The Middleware for Data-Intensive Computing (MeDICi) Integration Framework is an integrated middleware platform developed to solve data analysis and processing needs of scientists across many domains. MeDICi is scalable, easily modified, and robust to multiple languages, protocols, and hardware platforms, and in use today by PNNL scientists for bioinformatics, power grid failure analysis, and text analysis.

  18. Experimental micromechanical approach to failure process in CFRP cross-ply laminates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeda, N.; Ogihara, S.; Kobayashi, A.

    The microscopic failure process of three different types of cross-ply laminates, (0/90{sub n}/0) (n = 4, 8, 12), was investigated at room temperature and 80 °C. Progressive damage parameters, the transverse crack density and the delamination ratio, were measured. A simple modified shear-lag analysis including the thermal residual strains was conducted to predict the transverse crack density as a function of laminate strain, considering the constraint effect as well as the strength distribution of the transverse layer. The analysis was also extended to the system containing delamination to predict the delamination length. A prediction was also presented for the transverse crack density including the effect of the delamination growth. The prediction showed good agreement with the experimental results.

  19. Accounting for Epistemic Uncertainty in Mission Supportability Assessment: A Necessary Step in Understanding Risk and Logistics Requirements

    NASA Technical Reports Server (NTRS)

    Owens, Andrew; De Weck, Olivier L.; Stromgren, Chel; Goodliff, Kandyce; Cirillo, William

    2017-01-01

    Future crewed missions to Mars present a maintenance logistics challenge that is unprecedented in human spaceflight. Mission endurance – defined as the time between resupply opportunities – will be significantly longer than previous missions, and therefore logistics planning horizons are longer and the impact of uncertainty is magnified. Maintenance logistics forecasting typically assumes that component failure rates are deterministically known and uses them to represent aleatory uncertainty, or uncertainty that is inherent to the process being examined. However, failure rates cannot be directly measured; rather, they are estimated based on similarity to other components or statistical analysis of observed failures. As a result, epistemic uncertainty – that is, uncertainty in knowledge of the process – exists in failure rate estimates that must be accounted for. Analyses that neglect epistemic uncertainty tend to significantly underestimate risk. Epistemic uncertainty can be reduced via operational experience; for example, the International Space Station (ISS) failure rate estimates are refined using a Bayesian update process. However, design changes may re-introduce epistemic uncertainty. Thus, there is a tradeoff between changing a design to reduce failure rates and operating a fixed design to reduce uncertainty. This paper examines the impact of epistemic uncertainty on maintenance logistics requirements for future Mars missions, using data from the ISS Environmental Control and Life Support System (ECLS) as a baseline for a case study. Sensitivity analyses are performed to investigate the impact of variations in failure rate estimates and epistemic uncertainty on spares mass. The results of these analyses and their implications for future system design and mission planning are discussed.
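The Bayesian refinement of failure-rate estimates mentioned for the ISS can be sketched with a conjugate Gamma-Poisson model, in which accumulated operating hours and observed failures shrink the epistemic uncertainty; the prior parameters and observation counts below are illustrative assumptions.

```python
# Conjugate Gamma-Poisson update of a failure-rate estimate.
# Prior: lambda ~ Gamma(alpha, beta), beta a rate parameter in hours.
# Observing k failures over t operating hours gives the posterior
# Gamma(alpha + k, beta + t).

def update_rate(alpha, beta, k, t):
    return alpha + k, beta + t

def mean_rate(alpha, beta):
    return alpha / beta

def cv(alpha, beta):
    # Coefficient of variation of a Gamma(alpha, beta) is 1/sqrt(alpha):
    # it shrinks as evidence accumulates, i.e. operational experience
    # reduces the epistemic uncertainty in the rate estimate.
    return alpha ** -0.5

# Assumed prior: mean rate 1e-4 failures/hour with large uncertainty.
a, b = 0.5, 5000.0
print("prior mean:", mean_rate(a, b), "CV:", cv(a, b))

# Assumed observation: 3 failures over 50,000 hours of operation.
a, b = update_rate(a, b, k=3, t=50_000.0)
print("posterior mean:", mean_rate(a, b), "CV:", cv(a, b))
```

A design change would reset part of this accumulated evidence, which is the trade-off the paper describes between reducing failure rates and reducing uncertainty.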

  20. Testing of containers made of glass-fiber reinforced plastic with the aid of acoustic emission analysis

    NASA Technical Reports Server (NTRS)

    Wolitz, K.; Brockmann, W.; Fischer, T.

    1979-01-01

    Acoustic emission analysis as a quasi-nondestructive test method makes it possible to differentiate clearly, in judging the total behavior of fiber-reinforced plastic composites, between critical failure modes (in the case of unidirectional composites, fiber fractures) and non-critical failure modes (delamination processes or matrix fractures). A particular advantage is that, for varying pressure demands on the composites, the emitted acoustic pulses can be analyzed with regard to their amplitude distribution. In addition, definite indications as to how the damage occurred can be obtained from the time curves of the emitted acoustic pulses as well as from the particular frequency spectrum. Distinct analogies can be drawn between the various analytical methods with respect to whether the failure modes can be classified as critical or non-critical.

  1. Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant

    NASA Astrophysics Data System (ADS)

    Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.

    2015-12-01

    This paper deals with the Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. This system was modeled using a Markov birth-death process with the assumption that the failure and repair rates of each subsystem follow an exponential distribution. The first-order Chapman-Kolmogorov differential equations are developed with the use of the mnemonic rule, and these equations are solved with the Runge-Kutta fourth-order method. The long-run availability, reliability, and mean time between failures are computed for various choices of failure and repair rates of the subsystems. The findings of the paper are discussed with the plant personnel so that they may adopt and practice suitable maintenance policies/strategies to enhance the performance of the urea synthesis system of the fertilizer plant.
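The birth-death formulation can be sketched for a single repairable subsystem, integrating the Chapman-Kolmogorov equations with classical fourth-order Runge-Kutta and comparing against the analytical long-run availability μ/(λ+μ); the rates below are illustrative, not the plant's.

```python
# Two-state Markov model (state 0 = up, state 1 = down) with failure
# rate LAM and repair rate MU, both per hour (assumed values).
LAM, MU = 0.01, 0.5

def deriv(p):
    # Chapman-Kolmogorov equations: dp0/dt = -LAM*p0 + MU*p1, etc.
    p0, p1 = p
    return (-LAM * p0 + MU * p1, LAM * p0 - MU * p1)

def rk4_step(p, h):
    # Classical fourth-order Runge-Kutta step.
    k1 = deriv(p)
    k2 = deriv(tuple(pi + h / 2 * ki for pi, ki in zip(p, k1)))
    k3 = deriv(tuple(pi + h / 2 * ki for pi, ki in zip(p, k2)))
    k4 = deriv(tuple(pi + h * ki for pi, ki in zip(p, k3)))
    return tuple(pi + h / 6 * (a + 2 * b + 2 * c + d)
                 for pi, a, b, c, d in zip(p, k1, k2, k3, k4))

p = (1.0, 0.0)                # start in the working state
for _ in range(10_000):       # integrate out to t = 1000 hours
    p = rk4_step(p, h=0.1)

steady = MU / (LAM + MU)      # analytical long-run availability
print(f"simulated availability {p[0]:.6f} vs analytical {steady:.6f}")
```

For the multi-subsystem plant model the state space is larger, but the structure is the same: a generator matrix of failure/repair rates integrated to its stationary distribution.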

  2. A Mixed Methods Analysis of the Effect of Google Docs Environment on EFL Learners' Writing Performance and Causal Attributions for Success and Failure

    ERIC Educational Resources Information Center

    Seyyedrezaie, Zari Sadat; Ghonsooly, Behzad; Shahriari, Hesamoddin; Fatemi, Hazar Hosseini

    2016-01-01

    This study investigated the effect of the writing process in the Google Docs environment on Iranian EFL learners' writing performance. It also examined students' perceptions of the effects of Google Docs and their perceived causes of success or failure in writing performance. In this regard, 48 EFL students were chosen based on their IELTS writing…

  3. Statistical Models and Inference Procedures for Structural and Materials Reliability

    DTIC Science & Technology

    1990-12-01

    as an official Department of the Army position, policy, or decision, unless so designated by other documentation. ...Some general stress-strength models were also developed and applied to the failure of systems subject to cyclic loading. Involved in the failure of...process control ideas and sequential design and analysis methods. Finally, smooth nonparametric quantile function estimators were studied. All of

  4. Independent Orbiter Assessment (IOA): Analysis of the landing/deceleration subsystem

    NASA Technical Reports Server (NTRS)

    Compton, J. M.; Beaird, H. G.; Weissinger, W. D.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Landing/Deceleration Subsystem hardware. The Landing/Deceleration Subsystem enables the Orbiter to perform a safe landing, providing landing-gear deployment, steering and braking control throughout the landing rollout to wheel-stop, and ground-handling capability during the ground-processing phase of the flight cycle. Specifically, the Landing/Deceleration hardware consists of the following components: Nose Landing Gear (NLG); Main Landing Gear (MLG); Brake and Antiskid (B and AS); Electrical Power Distribution and Controls (EPD and C); Nose Wheel Steering (NWS); and Hydraulics Actuators. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Due to the lack of redundancy in the Landing/Deceleration Subsystem, there is a high number of critical items.

  5. Resilience of antagonistic networks with regard to the effects of initial failures and degree-degree correlations

    NASA Astrophysics Data System (ADS)

    Watanabe, Shunsuke; Kabashima, Yoshiyuki

    2016-09-01

    In this study we investigate the resilience of duplex networked layers α and β coupled with antagonistic interlinks, each layer of which inhibits its counterpart at the microscopic level, changing the following factors: whether the influence of the initial failures in α remains [quenched (case Q)] or not [free (case F)]; the effect of intralayer degree-degree correlations in each layer and interlayer degree-degree correlations; and the type of the initial failures, such as random failures or targeted attacks (TAs). We illustrate that the percolation processes repeat in both cases Q and F, although only in case F are nodes that initially failed reactivated. To analytically evaluate the resilience of each layer, we develop a methodology based on the cavity method for deriving the size of a giant component (GC). Strong hysteresis, which is ignored in the standard cavity analysis, is observed in the repetition of the percolation processes, particularly in case F. To handle this, we heuristically modify interlayer messages for macroscopic analysis, the utility of which is verified by numerical experiments. The percolation transition in each layer is continuous in both cases Q and F. We also analyze the influences of degree-degree correlations on the robustness of layer α, in particular for the case of TAs. The analysis indicates that the critical fraction of initial failures that makes the GC size in layer α vanish depends only on its intralayer degree-degree correlations. Although our model is defined in a somewhat abstract manner, it may have relevance to ecological systems that are composed of endangered species (layer α) and invaders (layer β), the former of which are damaged by the latter whereas the latter are exterminated in the areas where the former are active.
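As a cruder numerical complement to the cavity-method analysis, the giant-component size after initial failures can be measured directly by simulation. A minimal single-layer sketch (ignoring the antagonistic interlayer coupling) using union-find on a random graph:

```python
import random

def giant_component(n, edges, failed):
    """Size of the largest connected component after removing `failed` nodes."""
    parent = list(range(n))

    def find(x):
        # Path-halving union-find lookup.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    alive = [i for i in range(n) if i not in failed]
    for u, v in edges:
        if u not in failed and v not in failed:
            parent[find(u)] = find(v)

    sizes = {}
    for i in alive:
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values(), default=0)

# Erdos-Renyi-like random graph with mean degree ~4 (illustrative).
random.seed(1)
n, mean_degree = 2000, 4
edges = [(random.randrange(n), random.randrange(n))
         for _ in range(n * mean_degree // 2)]

for frac in (0.0, 0.5, 0.9):
    failed = set(random.sample(range(n), int(frac * n)))
    print(f"failure fraction {frac:.1f}: GC size {giant_component(n, edges, failed)}")
```

Sweeping the failure fraction locates the percolation threshold numerically, which is the quantity the cavity method computes analytically in each layer.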

  6. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

    A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is approximately 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
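The BPT distribution is the inverse Gaussian with mean μ and shape parameter μ/α², so its density and hazard are straightforward to evaluate; a minimal sketch (the survival function is obtained here by numerical integration, and the values of μ and α are chosen only for illustration):

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian passage-time (inverse Gaussian) density with mean mu
    and aperiodicity (coefficient of variation) alpha."""
    return (math.sqrt(mu / (2 * math.pi * alpha**2 * t**3))
            * math.exp(-(t - mu)**2 / (2 * alpha**2 * mu * t)))

def bpt_hazard(t, mu, alpha, dt=0.01):
    """Instantaneous failure rate of survivors, f(t)/S(t); the CDF is
    computed by a simple midpoint rule over [0, t]."""
    n = max(2, int(t / dt))
    xs = [t * (i + 0.5) / n for i in range(n)]
    cdf = sum(bpt_pdf(x, mu, alpha) for x in xs) * (t / n)
    return bpt_pdf(t, mu, alpha) / (1.0 - cdf)

mu, alpha = 100.0, 0.5   # e.g. a 100-year mean recurrence, generic alpha
rate = 1.0 / mu
for t in (50.0, 100.0, 200.0):
    print(f"t = {t:5.0f}: hazard = {bpt_hazard(t, mu, alpha):.5f}"
          f"  (mean rate {rate:.3f})")
```

Evaluating the hazard at t = μ reproduces the abstract's observation that it settles near 2/μ for times beyond the mean recurrence interval.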

  7. WE-H-BRC-01: Failure Mode and Effects Analysis of Skin Electronic Brachytherapy Using Esteya Unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibanez-Rosello, B; Bautista-Ballesteros, J; Bonaque, J

    Purpose: A failure mode and effect analysis (FMEA) of the skin lesion treatment process using the Esteya™ device (Elekta Brachytherapy, Veenendaal, The Netherlands) was performed, with the aim of increasing the quality of the treatment and reducing the likelihood of unwanted events. Methods: A multidisciplinary team with experience in the treatment process met to establish the process map, which outlines the flow of the various stages for patients undergoing skin treatment. Potential failure modes (FM) were identified, and the values of severity (S), frequency of occurrence (O), and lack of detectability (D) of the proposed FM were scored individually, each on a scale of 1 to 10, following the TG-100 guidelines of the AAPM. These failure modes were ranked according to their risk priority number (RPN) and S scores. The efficiency of existing quality management tools was analyzed through a reassessment of the O and D made by consensus. Results: 149 FM were identified, 43 of which had RPN ≥ 100 and 30 had S ≥ 7. After introduction of the quality management tools, only 3 FM had RPN ≥ 100 and 22 FM had RPN ≥ 50. These 22 FM were thoroughly analyzed and new tools for quality management were proposed. The most common cause of the highest-RPN FM was associated with the heavy patient workload and with maintaining continuous and accurate applicator-patient skin contact during the treatment. To address the latter, regular quality control and a setup review by a second individual before each treatment session were proposed. Conclusion: FMEA revealed some potential FM that were not predicted during the initial implementation of the quality management tools. This exercise was useful in identifying the need for periodic updates of the FMEA process, as new potential failures can be identified.
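The RPN ranking used in these FMEA studies is a simple product of the three scores; a minimal sketch with invented failure modes and scores (not the study's 149 FM), flagging items by the RPN ≥ 100 and S ≥ 7 thresholds:

```python
# RPN = S * O * D, each scored 1-10 per TG-100; rank and flag FMs.
# The failure modes and scores below are illustrative, not the study's.
failure_modes = [
    # (name, severity S, occurrence O, lack-of-detectability D)
    ("applicator-skin contact loss", 8, 5, 6),
    ("wrong prescription entered",   9, 2, 4),
    ("delayed processing",           4, 6, 3),
    ("identification error",         7, 3, 5),
]

RPN_THRESHOLD, SEVERITY_THRESHOLD = 100, 7

def rank(fms):
    """Return (RPN, name, S) tuples sorted from highest to lowest RPN."""
    scored = [(s * o * d, name, s) for name, s, o, d in fms]
    return sorted(scored, reverse=True)

for rpn, name, s in rank(failure_modes):
    flag = " <- review" if rpn >= RPN_THRESHOLD or s >= SEVERITY_THRESHOLD else ""
    print(f"{name:32s} RPN={rpn:3d} S={s}{flag}")
```

Re-scoring O and D after introducing a quality management tool and re-running the ranking is exactly the reassessment step the abstract describes.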

  8. 8 CFR 208.10 - Failure to appear at an interview before an asylum officer or failure to follow requirements for...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... asylum officer or failure to follow requirements for fingerprint processing. 208.10 Section 208.10 Aliens... asylum officer or failure to follow requirements for fingerprint processing. Failure to appear for a... right to an interview. Failure to comply with fingerprint processing requirements without good cause may...

  9. 8 CFR 208.10 - Failure to appear at an interview before an asylum officer or failure to follow requirements for...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... asylum officer or failure to follow requirements for fingerprint processing. 208.10 Section 208.10 Aliens... asylum officer or failure to follow requirements for fingerprint processing. Failure to appear for a... right to an interview. Failure to comply with fingerprint processing requirements without good cause may...

  10. 8 CFR 208.10 - Failure to appear at an interview before an asylum officer or failure to follow requirements for...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... asylum officer or failure to follow requirements for fingerprint processing. 208.10 Section 208.10 Aliens... asylum officer or failure to follow requirements for fingerprint processing. Failure to appear for a... right to an interview. Failure to comply with fingerprint processing requirements without good cause may...

  11. 8 CFR 208.10 - Failure to appear at an interview before an asylum officer or failure to follow requirements for...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... asylum officer or failure to follow requirements for fingerprint processing. 208.10 Section 208.10 Aliens... asylum officer or failure to follow requirements for fingerprint processing. Failure to appear for a... right to an interview. Failure to comply with fingerprint processing requirements without good cause may...

  12. 8 CFR 208.10 - Failure to appear at an interview before an asylum officer or failure to follow requirements for...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... asylum officer or failure to follow requirements for fingerprint processing. 208.10 Section 208.10 Aliens... asylum officer or failure to follow requirements for fingerprint processing. Failure to appear for a... right to an interview. Failure to comply with fingerprint processing requirements without good cause may...

  13. Application of Failure Mode and Effects Analysis to Intraoperative Radiation Therapy Using Mobile Electron Linear Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciocca, Mario, E-mail: mario.ciocca@cnao.it; Cantone, Marie-Claire; Veronese, Ivan

    2012-02-01

    Purpose: Failure mode and effects analysis (FMEA) represents a prospective approach for risk assessment. A multidisciplinary working group of the Italian Association for Medical Physics applied FMEA to electron beam intraoperative radiation therapy (IORT) delivered using mobile linear accelerators, aiming at preventing accidental exposures to the patient. Methods and Materials: FMEA was applied to the IORT process, for the stages of treatment delivery and verification, and consisted of three steps: 1) identification of the involved subprocesses; 2) identification and ranking of the potential failure modes, together with their causes and effects, using the risk probability number (RPN) scoring system, based on the product of three parameters (severity, frequency of occurrence, and detectability, each ranging from 1 to 10); 3) identification of additional safety measures to be proposed for process quality and safety improvement. The RPN upper threshold for little concern of risk was set at 125. Results: Twenty-four subprocesses were identified. Ten potential failure modes were found and scored, in terms of RPN, in the range of 42-216. The most critical failure modes consisted of internal shield misalignment, wrong Monitor Unit calculation, and incorrect data entry at the treatment console. Potential causes of failure included shield displacement; human errors, such as underestimation of CTV extension, mainly because of lack of adequate training and time pressures; failure in the communication between operators; and machine malfunctioning. The main effects of failure were represented by CTV underdose, wrong dose distribution and/or delivery, and unintended normal tissue irradiation. As additional safety measures, the utilization of a dedicated staff for IORT, double-checking of MU calculation and data entry, and implementation of in vivo dosimetry were suggested.
Conclusions: FMEA appeared to be a useful tool for prospective evaluation of patient safety in radiotherapy. The application of this method to IORT led to the identification of three safety measures for risk mitigation.

  14. Failure mode and effects analysis of skin electronic brachytherapy using Esteya® unit

    PubMed Central

    Bautista-Ballesteros, Juan Antonio; Bonaque, Jorge; Celada, Francisco; Lliso, Françoise; Carmona, Vicente; Gimeno-Olmos, Jose; Ouhib, Zoubir; Rosello, Joan; Perez-Calatayud, Jose

    2016-01-01

    Purpose: Esteya® (Nucletron, an Elekta company, Elekta AB, Stockholm, Sweden) is an electronic brachytherapy device used for skin cancer lesion treatment. In order to establish an adequate level of treatment quality, a risk analysis of the Esteya treatment process was performed, following the methodology proposed by the TG-100 guidelines of the American Association of Physicists in Medicine (AAPM). Material and methods: A multidisciplinary team familiar with the treatment process was formed. This team developed a process map (PM) outlining the stages through which a patient passes when subjected to the Esteya treatment. They identified potential failure modes (FM), and each individual FM was assessed for severity (S), frequency of occurrence (O), and lack of detection (D). A list of existing quality management tools was developed and the FMs were consensually reevaluated. Finally, the FMs were ranked according to their risk priority number (RPN) and their S. Results: 146 FMs were identified, 106 of which had RPN ≥ 50 and 30 had S ≥ 7. After introducing the quality management tools, only 21 FMs had RPN ≥ 50. The importance of ensuring contact between the applicator and the surface of the patient's skin was emphasized, so the setup was reviewed by a second individual before each treatment session, with periodic quality control to ensure stability of the applicator pressure. Some of the essential quality management tools already being implemented in the installation are simple templates for reproducible positioning of skin applicators, which help with marking the treatment area and positioning the X-ray tube. Conclusions: New quality management tools have been established as a result of the application of failure mode and effects analysis (FMEA) to the treatment process. However, periodic updating of the FMEA process is necessary, since clinical experience has suggested that further potential failure modes may arise. PMID:28115958

  15. 40 CFR Appendix B to Subpart G of... - Substitutes Subject to Use Restrictions and Unacceptable Substitutes

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... inches) and right-hand thread direction for CO2 refrigerant service containers.3 Manufacturers should... Failure Mode and Effect Analysis in Manufacturing and Assembly Process [Process FMEA] on the MVAC as... submitted to demonstrate it can be used safely in this end-use. CFC-11, CFC-12, R-502 Industrial Process...

  16. 40 CFR Appendix B to Subpart G of... - Substitutes Subject to Use Restrictions and Unacceptable Substitutes

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... inches) and right-hand thread direction for CO2 refrigerant service containers.3 Manufacturers should... Failure Mode and Effect Analysis in Manufacturing and Assembly Process [Process FMEA] on the MVAC as... submitted to demonstrate it can be used safely in this end-use. CFC-11, CFC-12, R-502 Industrial Process...

  17. High Speed Dynamics in Brittle Materials

    NASA Astrophysics Data System (ADS)

    Hiermaier, Stefan

    2015-06-01

    Brittle Materials under High Speed and Shock loading provide a continuous challenge in experimental physics, analysis and numerical modelling, and consequently for engineering design. The dependence of damage and fracture processes on material-inherent length and time scales, the influence of defects, rate-dependent material properties and inertia effects on different scales make their understanding a true multi-scale problem. In addition, it is not uncommon that materials show a transition from ductile to brittle behavior when the loading rate is increased. A particular case is spallation, a brittle tensile failure induced by the interaction of stress waves leading to a sudden change from compressive to tensile loading states that can be invoked in various materials. This contribution highlights typical phenomena occurring when brittle materials are exposed to high loading rates in applications such as blast and impact on protective structures, or meteorite impact on geological materials. A short review on experimental methods that are used for dynamic characterization of brittle materials will be given. A close interaction of experimental analysis and numerical simulation has turned out to be very helpful in analyzing experimental results. For this purpose, adequate numerical methods are required. Cohesive zone models are one possible method for the analysis of brittle failure as long as some degree of tension is present. Their recent successful application for meso-mechanical simulations of concrete in Hopkinson-type spallation tests provides new insight into the dynamic failure process. Failure under compressive loading is a particular challenge for numerical simulations as it involves crushing of material which in turn influences stress states in other parts of a structure. On a continuum scale, it can be modeled using more or less complex plasticity models combined with failure surfaces, as will be demonstrated for ceramics. 
Models which take microstructural cracking directly into account may provide a more physics-based approach for compressive failure in the future.

  18. Modelling river bank retreat by combining fluvial erosion, seepage and mass failure

    NASA Astrophysics Data System (ADS)

    Dapporto, S.; Rinaldi, M.

    2003-04-01

    Streambank erosion processes contribute significantly to the sediment yielded from a river system and represent an important issue in the contexts of soil degradation and river management. Bank retreat is controlled by a complex interaction of hydrologic, geotechnical, and hydraulic processes. The capability of modelling these different components allows for a full reconstruction and comprehension of the causes and rates of bank erosion. River bank retreat during a single flow event has been modelled by combining simulation of fluvial erosion, seepage, and mass failures. The study site, along the Sieve River (Central Italy), has been subject to extensive research, including monitoring of pore water pressures for a period of 4 years. The simulation reconstructs the observed changes fairly faithfully, and is used to: a) test the potential and discuss the advantages and limitations of this type of methodology for modelling bank retreat; b) quantify the contribution and mutual role of the different processes determining bank retreat. The hydrograph of the event is divided into a series of time steps. Modelling of the riverbank retreat includes, for each step, the following components: a) fluvial erosion and consequent changes in bank geometry; b) finite element seepage analysis; c) stability analysis by the limit equilibrium method. Direct fluvial shear erosion is computed using empirically derived relationships expressing lateral erosion rate as a function of the excess of shear stress over the critical entrainment value for the different materials along the bank profile. The lateral erosion rate has been calibrated on the basis of the total bank retreat measured by digital terrestrial photogrammetry. Finite element seepage analysis is then conducted to reconstruct the saturated and unsaturated flow within the bank and the pore water pressure distribution for each time step.
The safety factor for mass failures is then computed, using the pore water pressure distribution obtained by the seepage analysis, and the geometry of the upper bank is modified in case of failure.
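The excess-shear-stress relation commonly used for the fluvial erosion component of such models can be sketched as follows; the erodibility coefficient, critical shear stress, and hydrograph values are illustrative assumptions, not the Sieve River calibration.

```python
# Lateral fluvial erosion via an excess shear stress relation:
#   erosion rate = kd * max(tau - tau_c, 0)
# Parameter values below are assumed for illustration only.
KD = 1e-6      # erodibility coefficient, m^3/(N*s) (assumed)
TAU_C = 2.0    # critical shear stress for entrainment, Pa (assumed)

def erosion_rate(tau):
    """Lateral erosion rate (m/s) for a boundary shear stress tau (Pa)."""
    return KD * max(tau - TAU_C, 0.0)

def bank_retreat(hydrograph, dt):
    """Total retreat (m) over a flow event, given the shear stress at
    each time step of the discretized hydrograph."""
    return sum(erosion_rate(tau) * dt for tau in hydrograph)

# Assumed hydrograph: boundary shear stress (Pa) at hourly time steps.
taus = [1.0, 3.0, 6.0, 8.0, 5.0, 2.5, 1.5]
print(f"total retreat over the event: {bank_retreat(taus, dt=3600.0):.4f} m")
```

In the full model, each step's computed retreat updates the bank geometry before the seepage and stability analyses are rerun for the next step.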

  19. Microcracking, microcrack-induced delamination, and longitudinal splitting of advanced composite structures

    NASA Technical Reports Server (NTRS)

    Nairn, John A.

    1992-01-01

    A combined analytical and experimental study was conducted to analyze microcracking, microcrack-induced delamination, and longitudinal splitting in polymer matrix composites. Strain energy release rates, calculated by a variational analysis, were used in a failure criterion to predict microcracking. Predictions and test results were compared for static, fatigue, and cyclic thermal loading. The longitudinal splitting analysis accounted for the effects of fiber bridging. Test data are analyzed and compared for longitudinal splitting and delamination under mixed-mode loading. This study emphasizes the importance of using fracture mechanics analyses to understand the complex failure processes that govern composite strength and life.

  20. Risk factors for eye bank preparation failure of Descemet membrane endothelial keratoplasty tissue.

    PubMed

    Vianna, Lucas M M; Stoeger, Christopher G; Galloway, Joshua D; Terry, Mark; Cope, Leslie; Belfort, Rubens; Jun, Albert S

    2015-05-01

    To assess the results of a single eye bank preparing a high volume of Descemet membrane endothelial keratoplasty (DMEK) tissues using multiple technicians, to provide an overview of the experience, and to identify possible risk factors for DMEK preparation failure. Cross-sectional study. Setting: Lions VisionGift and Wilmer Eye Institute at Johns Hopkins Hospital. All 563 corneal tissues processed by technicians at Lions VisionGift for DMEK between October 2011 and May 2014 inclusive. Tissues were divided into 2 groups: DMEK preparation success and DMEK preparation failure. We compared donor characteristics, including past medical history. The overall tissue preparation failure rate was 5.2%. Univariate analysis showed that diabetes mellitus (P = .000028) and its duration (P = .023), hypertension (P = .021), and hyperlipidemia or obesity (P = .0004) were more common in the failure group. Multivariate analysis showed that diabetes mellitus (P = .0001) and hyperlipidemia or obesity (P = .0142) were more common in the failure group. Elimination of tissues from donors either with diabetes or with hyperlipidemia or obesity reduced the failure rate from 5.2% to 2.2%. Trends toward lower failure rates with increased technician experience also were found. Our work showed that tissues from donors with diabetes mellitus (especially with longer disease duration) and hyperlipidemia or obesity were associated with higher failure rates in DMEK preparation. Elimination of tissues from donors either with diabetes mellitus or with hyperlipidemia or obesity reduced the failure rate. In addition, our data may provide useful initial guidelines and benchmark values for eye banks seeking to establish and maintain DMEK programs. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Joint scale-change models for recurrent events and failure time.

    PubMed

    Xu, Gongjun; Chiou, Sy Han; Huang, Chiung-Yu; Wang, Mei-Cheng; Yan, Jun

    2017-01-01

    Recurrent event data arise frequently in various fields such as biomedical sciences, public health, engineering, and social sciences. In many instances, the observation of the recurrent event process can be stopped by the occurrence of a correlated failure event, such as treatment failure and death. In this article, we propose a joint scale-change model for the recurrent event process and the failure time, where a shared frailty variable is used to model the association between the two types of outcomes. In contrast to the popular Cox-type joint modeling approaches, the regression parameters in the proposed joint scale-change model have marginal interpretations. The proposed approach is robust in the sense that no parametric assumption is imposed on the distribution of the unobserved frailty and that we do not need the strong Poisson-type assumption for the recurrent event process. We establish consistency and asymptotic normality of the proposed semiparametric estimators under suitable regularity conditions. To estimate the corresponding variances of the estimators, we develop a computationally efficient resampling-based procedure. Simulation studies and an analysis of hospitalization data from the Danish Psychiatric Central Register illustrate the performance of the proposed method.

  2. Modeling Cognitive Strategies during Complex Task Performing Process

    ERIC Educational Resources Information Center

    Mazman, Sacide Guzin; Altun, Arif

    2012-01-01

    The purpose of this study is to examine individuals' computer-based complex task performing processes and strategies, in order to determine the reasons for failure, using the cognitive task analysis method and cued retrospective think-aloud with eye movement data. The study group was five senior students from Computer Education and Instructional Technologies…

  3. [Beliefs about the adversary, political violence and peace processes].

    PubMed

    Borja, Henry; Barreto, Idaly; Alzate, Mónica; Sabucedo, José Manuel; López López, Wilson

    2009-11-01

    The aim of this study is to test, in a real political context, whether or not a change in the beliefs which were fueling the political violence in question is required during the advent of a peace process. Two hypotheses are considered: a) if these beliefs are not modified, there will be difficulties in reaching an atmosphere of trust between the two parties and the process will fail; and b) if this happens, the groups will develop more extreme beliefs against the opponent. The results obtained through a textual analysis support both hypotheses. During the failure of the peace process, neither the strategy of the delegitimization of the opponent nor the identities in conflict were modified. Consequently, when the process failed, responsibility for this failure was attributed to the opponent, and, at the same time, delegitimization of the opponent intensified.

  4. Fatigue failure of regenerator screens in a high frequency Stirling engine

    NASA Technical Reports Server (NTRS)

    Hull, David R.; Alger, Donald L.; Moore, Thomas J.; Scheuermann, Coulson M.

    1987-01-01

    Failure of Stirling Space Power Demonstrator Engine (SPDE) regenerator screens was investigated. After several hours of operation the SPDE was shut down for inspection; on removing the regenerator screens, debris of unknown origin was discovered, along with considerable cracking of the screens in localized areas. Metallurgical analysis determined the debris to be cracked-off, deformed pieces of the 41-micron-thick Type 304 stainless steel wire screen. Scanning electron microscopy of the cracked screens revealed failures occurring at wire crossovers and fatigue striations on the fracture surfaces of the wires; the screen failure can thus be characterized as fatigue failure of the wires. The crossovers were found to contain a 30 percent reduction in wire thickness and a highly worked microstructure resulting from the manufacturing process of the wire screens. It was later found that the reduction in wire thickness occurred because the screen fabricator had subjected the screen to a light cold-roll process after weaving. Installation of this screen left a clearance in the regenerator that allowed the screens to move. The combined effects of the reduction in wire thickness, stress concentration (caused by screen movement), and the highly worked microstructure at the wire crossovers led to the fatigue failure of the screens.

  5. Comprehensive risk assessment method of catastrophic accident based on complex network properties

    NASA Astrophysics Data System (ADS)

    Cui, Zhen; Pang, Jun; Shen, Xiaohong

    2017-09-01

    On the macro level, the structural properties of the network, together with the electrical characteristics of its micro-level components, determine the risk of cascading failures. Because cascading failures are a dynamically developing process, both their direct risk and their potential risk should be considered. In this paper, the direct and potential risks of failures are considered together on the basis of uncertain risk analysis theory and connection number theory; uncertain correlation is quantified by node degree and node clustering coefficient, and a comprehensive risk indicator of failure is then established. The proposed method was validated by simulation on a network modeled after an actual power grid, confirming its soundness.
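    The two structural quantities the abstract relies on, node degree and node clustering coefficient, are easy to compute directly. The sketch below uses a tiny hypothetical graph (not the paper's power grid) stored as an adjacency dict; the clustering coefficient of a node is the fraction of possible links among its neighbours that actually exist.

```python
# Toy undirected graph as an adjacency dict (hypothetical example)
graph = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"A", "C"},
}

def degree(g, v):
    """Number of edges incident to v."""
    return len(g[v])

def clustering(g, v):
    """Fraction of possible links among v's neighbours that exist."""
    nbrs = g[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    # each neighbour-neighbour edge is seen twice in this double loop
    links = sum(1 for u in nbrs for w in g[u] if w in nbrs) / 2
    return 2.0 * links / (k * (k - 1))

for v in sorted(graph):
    print(v, degree(graph, v), round(clustering(graph, v), 2))
```

    Nodes with high degree but low clustering tend to bridge otherwise separate regions, which is why such indicators are natural inputs to a cascading-failure risk score.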

  6. Application of Quality Management Tools for Evaluating the Failure Frequency of Cutter-Loader and Plough Mining Systems

    NASA Astrophysics Data System (ADS)

    Biały, Witold

    2017-06-01

    Failure frequency in the mining process, with a focus on the mining machines, is presented and illustrated by the example of two coal mines. Two mining systems were subjected to analysis: a cutter-loader and a plough system. In order to reduce the costs generated by failures, maintenance teams should regularly make sure that the machines are used and operated in a rational and effective way. Such activities will allow downtimes to be reduced and, in consequence, will increase the effectiveness of a mining plant. The evaluation of mining machines' failure frequency contained in this study is based on one of the traditional quality management tools, the Pareto chart.

  7. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define system; Identify human-machine; List human actions; Identify potential errors; Identify factors that affect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.

  8. Physical explosion analysis in heat exchanger network design

    NASA Astrophysics Data System (ADS)

    Pasha, M.; Zaini, D.; Shariff, A. M.

    2016-06-01

    The failure of shell-and-tube heat exchangers is extensively experienced by the chemical process industries. Such a failure can cause a loss of production for a long duration. Moreover, loss of containment through a heat exchanger could potentially lead to a credible event such as fire, explosion, or toxic release. There is a need to analyse the possible worst-case effects originating from loss of containment of the heat exchanger at the early design stage. Physical explosion analysis during heat exchanger network design is presented in this work. The Baker and Prugh explosion models are deployed for assessing the explosion effect. Microsoft Excel was integrated with a process design simulator through object linking and embedding (OLE) automation for this analysis, with Aspen HYSYS V8.0 used as the simulation platform. A typical heat exchanger network of a steam reforming and shift conversion process is presented as a case study. The analysis shows that the overpressure generated by the physical explosion of each heat exchanger can be estimated more precisely using the Prugh model. The present work could assist the design engineer in identifying the critical heat exchanger in the network at the preliminary design stage.

  9. Emotions and encounters with healthcare professionals as predictors for the self-estimated ability to return to work: a cross-sectional study of people with heart failure.

    PubMed

    Nordgren, Lena; Söderlund, Anne

    2016-11-09

    To live with heart failure means that life is delimited. Still, people with heart failure can have a desire to stay active in working life as long as possible. Although a number of factors affect sick leave and rehabilitation processes, little is known about sick leave and vocational rehabilitation concerning people with heart failure. This study aimed to identify emotions and encounters with healthcare professionals as possible predictors for the self-estimated ability to return to work in people on sick leave due to heart failure. A population-based cross-sectional study design was used. The study was conducted in Sweden. Data were collected in 2012 from 3 different sources: 2 official registries and 1 postal questionnaire. A total of 590 individuals were included. Descriptive statistics, correlation analysis and linear multiple regression analysis were used. 3 variables, feeling strengthened in the situation (β=-0.21, p=0.02), feeling happy (β=-0.24, p=0.02) and receiving encouragement about work (β=-0.32, p≤0.001), were identified as possible predictive factors for the self-estimated ability to return to work. To feel strengthened, happy and to receive encouragement about work can affect the return to work process for people on sick leave due to heart failure. In order to develop and implement rehabilitation programmes to meet these needs, more research is needed. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  10. Application of failure mode and effects analysis (FMEA) to pretreatment phases in tomotherapy

    PubMed Central

    Broggi, Sara; Cantone, Marie Claire; Chiara, Anna; Muzio, Nadia Di; Longobardi, Barbara; Mangili, Paola

    2013-01-01

    The aim of this paper was the application of the failure mode and effects analysis (FMEA) approach to assess the risks for patients undergoing radiotherapy treatments performed by means of a helical tomotherapy unit. FMEA was applied to the preplanning imaging, volume determination, and treatment planning stages of the tomotherapy process and consisted of three steps: 1) identification of the involved subprocesses; 2) identification and ranking of the potential failure modes, together with their causes and effects, using the risk probability number (RPN) scoring system; and 3) identification of additional safety measures to be proposed for process quality and safety improvement. RPN upper threshold for little concern of risk was set at 125. A total of 74 failure modes were identified: 38 in the stage of preplanning imaging and volume determination, and 36 in the stage of planning. The threshold of 125 for RPN was exceeded in four cases: one case only in the phase of preplanning imaging and volume determination, and three cases in the stage of planning. The most critical failures appeared related to (i) the wrong or missing definition and contouring of the overlapping regions, (ii) the wrong assignment of the overlap priority to each anatomical structure, (iii) the wrong choice of the computed tomography calibration curve for dose calculation, and (iv) the wrong (or not performed) choice of the number of fractions in the planning station. On the basis of these findings, in addition to the safety strategies already adopted in the clinical practice, novel solutions have been proposed for mitigating the risk of these failures and to increase patient safety. PACS number: 87.55.Qr PMID:24036868
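    The RPN scoring and the 125-point concern threshold used in this study are mechanically simple, as the sketch below shows. The failure modes and their occurrence/severity/detectability scores here are made-up placeholders, not values from the paper.

```python
THRESHOLD = 125  # upper limit for "little concern of risk", as in the study

# Hypothetical failure modes: (occurrence, severity, detectability), 1-10 scales
failure_modes = {
    "wrong CT calibration curve": (4, 9, 5),
    "missing overlap contour": (3, 8, 4),
    "typo in patient ID": (2, 9, 2),
}

def rpn(occurrence, severity, detectability):
    """Risk priority number: product of the three 1-10 scores."""
    return occurrence * severity * detectability

flagged = {name: rpn(*scores) for name, scores in failure_modes.items()
           if rpn(*scores) > THRESHOLD}
for name, score in sorted(flagged.items(), key=lambda kv: -kv[1]):
    print(f"{name}: RPN={score} exceeds {THRESHOLD}")
```

    Only modes whose RPN exceeds the threshold proceed to detailed analysis and extra safety measures, mirroring the paper's four flagged failures.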

  11. Application of failure mode and effect analysis in managing catheter-related blood stream infection in intensive care unit.

    PubMed

    Li, Xixi; He, Mei; Wang, Haiyan

    2017-12-01

    In this study, failure mode and effect analysis (FMEA), a proactive tool, was applied to reduce errors associated with the process that begins with assessment of the patient and ends with treatment of complications. The aim of this study is to assess whether FMEA implementation significantly reduces the incidence of catheter-related bloodstream infections (CRBSIs) in the intensive care unit. An FMEA team of 15 medical staff from different departments was recruited and trained; its main responsibility was to analyze and score all possible processes of central venous catheterization failures. Failure modes with a risk priority number (RPN) ≥100 (the top 10 RPN scores) were deemed high-priority risks, meaning that they needed immediate corrective action. After modifications were implemented, the resulting RPNs were compared with the previous ones, and a centralized nursing care system was designed. A total of 25 failure modes were identified. High-priority risks were "Unqualified medical device sterilization" (RPN, 337), "Leukopenia, very low immunity" (RPN, 222), and "Poor hand hygiene, basic diseases" (RPN, 160). The corrective measures taken produced a decrease in the RPNs, especially for the high-priority risks; the maximum reduction was approximately 80%, observed for the failure mode "Not creating the maximal barrier for patient." The average incidence of CRBSIs was reduced from 5.19% to 1.45%, with 3 months of a 0% infection rate. FMEA can effectively reduce the incidence of CRBSIs, improve the security of central venous catheterization technology, decrease overall medical expenses, and improve nursing quality. Copyright © 2017 The Authors. Published by Wolters Kluwer Health, Inc. All rights reserved.

  12. CRYOGENIC UPPER STAGE SYSTEM SAFETY

    NASA Technical Reports Server (NTRS)

    Smith, R. Kenneth; French, James V.; LaRue, Peter F.; Taylor, James L.; Pollard, Kathy (Technical Monitor)

    2005-01-01

    NASA's Exploration Initiative will require development of many new systems or systems of systems. As one specific example, safe, affordable, and reliable upper stage systems to place cargo and crew in stable low Earth orbit are urgently required. In this paper, we examine the failure history of previous upper stages with liquid oxygen (LOX)/liquid hydrogen (LH2) propulsion systems. Launch data from 1964 until mid-year 2005 are analyzed and presented. This data analysis covers upper stage systems from the Ariane, Centaur, H-IIA, Saturn, and Atlas, in addition to other vehicles. Upper stage propulsion system elements have the highest impact on reliability. This paper discusses failure occurrence in all operational phases (i.e., initial burn, coast, and restarts) and trends in failure rates over time. In an effort to understand the likelihood of future failures in flight, we present timelines of engine system failures relative to initial flight histories. Some evidence suggests that propulsion system failures resulting from design problems occur shortly after initial development of the propulsion system, whereas failures due to manufacturing or assembly processing errors may occur during any phase of the system build process. This paper also explores the detectability of historical failures. Observations from this review are used to ascertain the potential for increased upper stage reliability given investments in integrated system health management. Based on a clear understanding of the failure and success history of previous efforts by multiple space hardware development groups, the paper investigates potential improvements that can be realized through application of system safety principles.

  13. Surgical Adverse Events, Risk Management, and Malpractice Outcome: Morbidity and Mortality Review Is Not Enough

    PubMed Central

    Morris, John A.; Carrillo, Ysela; Jenkins, Judith M.; Smith, Philip W.; Bledsoe, Sandy; Pichert, James; White, Andrew

    2003-01-01

    Objective To review all admissions (age > 13) to three surgical patient care centers at a single academic medical center between January 1, 1995, and December 6, 1999, for significant surgical adverse events. Summary Background Data Little data exist on the interrelationships between surgical adverse events, risk management, malpractice claims, and resulting indemnity payments to plaintiffs. The authors hypothesized that examination of this process would identify performance improvement opportunities overlooked by standard medical peer review; the risk of litigation would be constant across the three homogeneous patient care centers; and the risk management process would exceed the performance improvement process. Methods Data collected included patient demographics (age, gender, and employment status), hospital financials (hospital charges, costs, and financial class), and outcome. Outcome categories were medical (disability: <1 month, 1–6 months, permanent/death), legal (no legal action, settlement, summary judgment), financial (indemnity payments, legal fees, write-offs), and cause and effect analysis. Cause and effect analysis attempts to identify system failures contributing to adverse outcomes. This was determined by two independent analysts using the 17 Harvard criteria and subdividing these into subsystem causative factors. Results The study group consisted of 130 patients with surgical adverse events resulting in total liabilities of $8.2 million. The incidence of adverse events per 1,000 admissions across the three patient care centers was similar, but indemnity payments per 1,000 admissions varied (cardiothoracic = $30, women’s health = $90, trauma = $520). Patient demographics were not predictive of high-risk subgroups for adverse events or litigation. In terms of medical outcome, 51 patients had permanent disability or death, accounting for 98% of the indemnity payments. 
In terms of legal outcome, 103 patients received no indemnity payments, 15 patients received indemnity payments, four suits remain open, and in eight cases charges were written off ($0.121 million). To date, no cases have been adjudicated in court. Cause and effect analysis identified 390 system failures contributing to the adverse events (mean 3.0 failures per adverse event); there were 4.7 failures per adverse event in the 15 indemnity cases. Five categories of causes accounted for 75% of the failures (patient management, n = 104; communication, n = 89; administration, n = 33; documentation, n = 32; behavior, n = 23). The current medical review process would have identified 104 of 390 systems failures (37%). Conclusions This study demonstrates no rational link between the tort system and the reduction of adverse events. Sixty-three percent of contributing causes to adverse events were undetected by current medical review processes. Adverse events occur at the interface between different systems or disciplines and result from multiple failures. Indemnity costs per hospital day vary dramatically by patient care center (range $3.60–97.60 a day). The regionalization of healthcare is in jeopardy from the burden of high indemnity payments. PMID:12796581

  14. Fault Tree Analysis: Its Implications for Use in Education.

    ERIC Educational Resources Information Center

    Barker, Bruce O.

    This study introduces the concept of Fault Tree Analysis as a systems tool and examines the implications of Fault Tree Analysis (FTA) as a technique for isolating failure modes in educational systems. A definition of FTA and discussion of its history, as it relates to education, are provided. The step by step process for implementation and use of…
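    The core computation behind fault tree analysis is combining basic-event probabilities through AND and OR gates. The following sketch evaluates a small hypothetical tree, assuming independent basic events; the tree structure and probabilities are illustrative, not drawn from the study.

```python
from math import prod

def p_and(probs):
    """AND gate: all inputs must fail (independence assumed)."""
    return prod(probs)

def p_or(probs):
    """OR gate: at least one input fails (independence assumed)."""
    return 1.0 - prod(1.0 - p for p in probs)

# Hypothetical tree: the top event occurs if (A OR B) AND C all line up
p_a, p_b, p_c = 0.05, 0.02, 0.10
p_top = p_and([p_or([p_a, p_b]), p_c])
print(f"top-event probability: {p_top:.4f}")
```

    Working down from a failed top event to the basic events whose combinations can cause it is what lets FTA isolate specific failure modes in a system, educational or otherwise.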

  15. "If at first you don't succeed": using failure to improve teaching.

    PubMed

    Pinsky, L E; Irby, D M

    1997-11-01

    The authors surveyed a group of distinguished clinical teachers regarding episodes of failure that had subsequently led to improvements in their teaching. Specifically, they examined how these teachers had used reflection on failed approaches as a tool for experiential learning. The respondents believed that failures were as important as successes in learning to be a good teacher. Using qualitative content analysis of the respondents' comments, the authors identified eight common types of failure associated with each of the three phases of teaching: planning, teaching, and reflection. Common failures associated with the planning stage were misjudging learners, lack of preparation, presenting too much content, lack of purpose, and difficulties with audiovisuals. The primary failure associated with actual teaching was inflexibly using a single teaching method. In the reflection phase, respondents said they most often realized that they had made one of two common errors: selecting the wrong teaching strategy or incorrectly implementing a sound strategy. For each identified failure, the respondents made recommendations for improvement. The deliberative process that had guided planning, teaching, and reflecting had helped all of them transform past failures into successes.

  16. Material failure modelling in metals at high strain rates

    NASA Astrophysics Data System (ADS)

    Panov, Vili

    2005-07-01

    Plate impact tests were conducted on OFHC Cu using a single-stage gas gun. Stress-time histories were recorded using stress gauges backed by PMMA blocks behind the target plates. After testing, microstructural observations of the softly recovered, spalled OFHC Cu specimens were carried out and the evolution of damage was examined. To account for the physical mechanisms of failure, the concept that thermal activation drives material separation during fracture was adopted as the basic mechanism for the development of this material failure model. With this basic assumption, the proposed model is compatible with the Mechanical Threshold Stress (MTS) model, and it was therefore incorporated into the MTS material model in DYNA3D. To analyse the proposed criterion, a series of FE simulations was performed for OFHC Cu. The numerical results clearly demonstrate the ability of the model to predict the spall process and the experimentally observed tensile damage and failure. It is possible to simulate high strain rate deformation processes and dynamic failure in tension over a wide range of temperatures. The proposed cumulative criterion, introduced in the DYNA3D code, reproduces the "pull-back" stresses of the free surface caused by the creation of internal spalling, and enables numerical analysis of spalling over a wide range of impact velocities.

  17. Foresight begins with FMEA. Delivering accurate risk assessments.

    PubMed

    Passey, R D

    1999-03-01

    If sufficient factors are taken into account and two- or three-stage analysis is employed, failure mode and effect analysis represents an excellent technique for delivering accurate risk assessments for products and processes, and for relating them to legal liability. This article describes a format that facilitates easy interpretation.

  18. 10 CFR 70.62 - Safety program and integrated safety analysis.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... conclusion of each failure investigation of an item relied on for safety or management measure. (b) Process... methodology being used. (3) Requirements for existing licensees. Individuals holding an NRC license on...

  19. Managing Risk to Ensure a Successful Cassini/Huygens Saturn Orbit Insertion (SOI)

    NASA Technical Reports Server (NTRS)

    Witkowski, Mona M.; Huh, Shin M.; Burt, John B.; Webster, Julie L.

    2004-01-01

    I. Design: a) S/C designed to be largely single fault tolerant; b) Operate in flight-demonstrated envelope, with margin; c) Strict compliance with requirements & flight rules. II. Test: a) Baseline, fault & stress testing using flight system testbeds (H/W & S/W); b) In-flight checkout & demos to remove first-time events. III. Failure Analysis: a) Critical-event-driven fault tree analysis; b) Risk mitigation & development of contingencies. IV. Residual Risks: a) Accepted pre-launch waivers to Single Point Failures; b) Unavoidable risks (e.g. natural disaster). V. Mission Assurance: a) Strict process for characterization of variances (ISAs, PFRs & Waivers); b) Full-time Mission Assurance Manager reports to Program Manager: 1) Independent assessment of compliance with institutional standards; 2) Oversight & risk assessment of ISAs, PFRs & Waivers etc.; 3) Risk Management Process facilitator.

  20. The use of failure mode and effects analysis to construct an effective disposal and prevention mechanism for infectious hospital waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Chao Chung, E-mail: ho919@pchome.com.tw; Liao, Ching-Jong

    Highlights: > This study is based on a real case in a regional teaching hospital in Taiwan. > We use failure mode and effects analysis (FMEA) as the evaluation method. > We successfully identify the risk factors of infectious waste disposal. > We propose plans for the detection of exceptional cases of infectious waste. - Abstract: In recent times, the quality of medical care has been continuously improving in medical institutions wherein patient-centred care has been emphasized. Failure mode and effects analysis (FMEA) has also been promoted as a method of basic risk management and as part of total quality management (TQM) for improving the quality of medical care and preventing mistakes. Therefore, a study was conducted using FMEA to evaluate the potential risk causes in the process of infectious medical waste disposal, devise standard procedures concerning the waste, and propose feasible plans for facilitating the detection of exceptional cases of infectious waste. The analysis revealed the following results regarding medical institutions: (a) FMEA can be used to identify the risk factors of infectious waste disposal. (b) During the infectious waste disposal process, six items were scored over 100 in the assessment of uncontrolled risks: erroneous discarding of infectious waste by patients and their families, erroneous discarding by nursing staff, erroneous discarding by medical staff, cleaning drivers pierced by sharp articles, cleaning staff pierced by sharp articles, and unmarked output units. Therefore, the study concluded that it was necessary to (1) provide education and training about waste classification to the medical staff, patients and their families, nursing staff, and cleaning staff; (2) clarify the signs of caution; and (3) evaluate the failure mode and strengthen the effects.

  1. TH-EF-BRC-03: Fault Tree Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomadsen, B.

    2016-06-15

    This Hands-on Workshop will be focused on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100's risk analysis (process mapping, failure modes and effects analysis, and fault tree analysis) will each be introduced with a 5-minute refresher presentation, and each presentation will be followed by a 30-minute small-group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis and Fault Tree Analysis. To gain familiarity with these three techniques in a small group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.

  2. Probabilistic Analysis of a Composite Crew Module

    NASA Technical Reports Server (NTRS)

    Mason, Brian H.; Krishnamurthy, Thiagarajan

    2011-01-01

    An approach for conducting reliability-based analysis (RBA) of a Composite Crew Module (CCM) is presented. The goal is to identify and quantify the benefits of probabilistic design methods for the CCM and future space vehicles. The coarse finite element model from a previous NASA Engineering and Safety Center (NESC) project is used as the baseline deterministic analysis model to evaluate the performance of the CCM using a strength-based failure index. The first step in the probabilistic analysis process is the determination of the uncertainty distributions for key parameters in the model. Analytical data from water landing simulations are used to develop an uncertainty distribution, but such data were unavailable for other load cases. The uncertainty distributions for the other load scale factors and the strength allowables are generated based on assumed coefficients of variation. Probability of first-ply failure is estimated using three methods: the first order reliability method (FORM), Monte Carlo simulation, and conditional sampling. Results for the three methods were consistent. The reliability is shown to be driven by first ply failure in one region of the CCM at the high altitude abort load set. The final predicted probability of failure is on the order of 10^-11 due to the conservative nature of the factors of safety on the deterministic loads.
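    A minimal Monte Carlo sketch of the probability-of-failure estimation described above follows. The limit state and distributions here are invented for illustration (a normally distributed load scale factor against a normally distributed strength allowable), and the toy failure probability is made large enough to estimate by plain sampling; a probability near 10^-11 like the paper's would require FORM or the conditional-sampling variance reduction the abstract mentions.

```python
import random

random.seed(1)

def failure_index(load_factor, strength):
    """Toy strength-based index: the structure fails when it exceeds 1."""
    return load_factor / strength

N = 100_000
failures = 0
for _ in range(N):
    load = random.gauss(1.0, 0.15)      # assumed load scale factor
    strength = random.gauss(1.6, 0.10)  # assumed strength allowable
    if failure_index(load, strength) > 1.0:
        failures += 1

print(f"estimated P(failure) ≈ {failures / N:.5f}")
```

    The estimate converges at a rate of 1/sqrt(N), which is exactly why rare-event probabilities need the smarter sampling schemes the study compares.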

  3. Medication management strategies used by older adults with heart failure: A systems-based analysis.

    PubMed

    Mickelson, Robin S; Holden, Richard J

    2017-09-01

    Older adults with heart failure use strategies to cope with the constraining barriers impeding medication management. Strategies are behavioral adaptations that allow goal achievement despite these constraining conditions. When strategies do not exist, or are ineffective or maladaptive, medication performance and health outcomes are at risk. While constraints to medication adherence are described in the literature, the strategies used by patients to manage medications are less well described or understood. Guided by cognitive engineering concepts, the aim of this study was to describe and analyze the strategies used by older adults with heart failure to achieve their medication management goals. This mixed methods study employed an empirical strategies analysis method to elicit medication management strategies used by older adults with heart failure. Observation and interview data collected from 61 older adults with heart failure and 31 caregivers were analyzed using qualitative content analysis to derive categories, patterns, and themes within and across cases. The data yielded thematic sub-categories describing planned and ad hoc methods of strategic adaptation. Stable strategies proactively adjusted the medication management process, the environment, or the patients themselves. Patients applied situational strategies (planned or ad hoc) to irregular or unexpected situations. Medication non-adherence was a strategy employed when life goals conflicted with medication adherence. The health system was a source of constraints without providing commensurate strategies. Patients strived to control their medication system and achieve goals using adaptive strategies. Future patient self-management research can benefit from methods and theories used to study professional work, such as strategies analysis.

  4. Material and morphology parameter sensitivity analysis in particulate composite materials

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyu; Oskay, Caglar

    2017-12-01

    This manuscript presents a novel parameter sensitivity analysis framework for damage and failure modeling of particulate composite materials subjected to dynamic loading. The proposed framework employs global sensitivity analysis to study the variance in the failure response as a function of model parameters. In view of the computational complexity of performing thousands of detailed microstructural simulations to characterize sensitivities, Gaussian process (GP) surrogate modeling is incorporated into the framework. In order to capture the discontinuity in response surfaces, the GP models are integrated with a support vector machine classification algorithm that identifies the discontinuities within response surfaces. The proposed framework is employed to quantify variability and sensitivities in the failure response of polymer bonded particulate energetic materials under dynamic loads to material properties and morphological parameters that define the material microstructure. Particular emphasis is placed on the identification of sensitivity to interfaces between the polymer binder and the energetic particles. The proposed framework has been demonstrated to identify the most consequential material and morphological parameters under vibrational and impact loads.

  5. Final report of coordination and cooperation with the European Union on embankment failure analysis

    USDA-ARS?s Scientific Manuscript database

    There has been an emphasis in the European Union (EU) community on the investigation of extreme flood processes and the uncertainties related to these processes. Over a 3-year period, the EU and the U.S. dam safety community (1) coordinated their efforts and collected information needed to integrate...

  6. Failure mode effect analysis and fault tree analysis as a combined methodology in risk management

    NASA Astrophysics Data System (ADS)

    Wessiani, N. A.; Yoshio, F.

    2018-04-01

    Many studies have reported the implementation of failure mode effect analysis (FMEA) or fault tree analysis (FTA) as a method in risk management. However, most such studies choose only one of these two methods for their risk management methodology, whereas combining them reduces the drawbacks each method has when implemented separately. This paper aims to combine FMEA and FTA into a single methodology for assessing risk. A case study in a metal company illustrates how this methodology can be implemented: the combined methodology is used to assess the internal risks that occur in the production process, and those internal risks are then mitigated based on their level of risk.

  7. Development of NASA's Accident Precursor Analysis Process Through Application on the Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Maggio, Gaspare; Groen, Frank; Hamlin, Teri; Youngblood, Robert

    2010-01-01

    Accident Precursor Analysis (APA) serves as the bridge between existing risk modeling activities, which are often based on historical or generic failure statistics, and system anomalies, which provide crucial information about the failure mechanisms that are actually operative in the system. APA does more than simply track experience: it systematically evaluates experience, looking for under-appreciated risks that may warrant changes to design or operational practice. This paper presents the pilot application of the NASA APA process to Space Shuttle Orbiter systems. In this effort, the working sessions conducted at Johnson Space Center (JSC) piloted the APA process developed by Information Systems Laboratories (ISL) over the last two years under the auspices of NASA's Office of Safety & Mission Assurance, with the assistance of the Safety & Mission Assurance (S&MA) Shuttle & Exploration Analysis Branch. This process is built around facilitated working sessions involving diverse system experts. One important aspect of this particular APA process is its focus on understanding the physical mechanism responsible for an operational anomaly, followed by evaluation of the risk significance of the observed anomaly as well as consideration of generalizations of the underlying mechanism to other contexts. Model completeness will probably always be an issue, but this process tries to leverage operating experience to the extent possible in order to address completeness issues before a catastrophe occurs.

  8. Pitfalls and Precautions When Using Predicted Failure Data for Quantitative Analysis of Safety Risk for Human Rated Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hatfield, Glen S.; Hark, Frank; Stott, James

    2016-01-01

    Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account risks attributable to manufacturing, assembly, and process controls. These sources often dominate component-level reliability or the probability of failure. While the consequences of failure are often understood in assessing risk, using predicted values in a risk model to estimate the probability of occurrence will likely underestimate the risk. Managers and decision makers often use the probability of occurrence in determining whether to accept the risk or require a design modification. Due to the absence of system-level test and operational data inherent in aerospace applications, the actual risk threshold for acceptance may not be appropriately characterized for decision-making purposes. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful for arriving at a more realistic quantification of risk prior to acceptance by a program.
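    The gap the paper warns about can be made concrete with a toy calculation (all rates invented, not taken from MIL-HDBK-217F): a series-system failure probability computed from handbook-style component rates, versus the same calculation with a hypothetical multiplier standing in for the manufacturing, assembly, and process risks the handbook does not capture.

```python
import math

handbook_rates = [2e-6, 5e-6, 1e-6]  # failures/hour, handbook-style predictions
mission_hours = 500.0

def p_fail(lambdas, t):
    # Series system of independent exponential components:
    # P(fail by t) = 1 - exp(-sum(lambda_i) * t).
    return 1.0 - math.exp(-sum(lambdas) * t)

predicted = p_fail(handbook_rates, mission_hours)
# Hypothetical 5x factor standing in for manufacturing/assembly/process risks
# that component-level handbook predictions omit.
adjusted = p_fail([5.0 * r for r in handbook_rates], mission_hours)
print(round(predicted, 5), round(adjusted, 5))
```

    The point of the sketch is only the direction of the error: any unmodeled multiplier pushes the true probability of occurrence above the handbook prediction, so accepting risk on the predicted value alone is optimistic.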

  9. Identifying the latent failures underpinning medication administration errors: an exploratory study.

    PubMed

    Lawton, Rebecca; Carruthers, Sam; Gardner, Peter; Wright, John; McEachan, Rosie R C

    2012-08-01

    The primary aim of this article was to identify the latent failures that are perceived to underpin medication errors. The study was conducted within three medical wards in a hospital in the United Kingdom. The study employed a cross-sectional qualitative design. Interviews were conducted with 12 nurses and eight managers. Interviews were transcribed and subject to thematic content analysis. A two-step inter-rater comparison tested the reliability of the themes. Ten latent failures were identified based on the analysis of the interviews. These were ward climate, local working environment, workload, human resources, team communication, routine procedures, bed management, written policies and procedures, supervision and leadership, and training. The discussion focuses on ward climate, the most prevalent theme, which is conceptualized here as interacting with failures in the nine other organizational structures and processes. This study is the first of its kind to identify the latent failures perceived to underpin medication errors in a systematic way. The findings can be used as a platform for researchers to test the impact of organization-level patient safety interventions and to design proactive error management tools and incident reporting systems in hospitals. © Health Research and Educational Trust.

  10. Stress redistribution and damage in interconnects caused by electromigration

    NASA Astrophysics Data System (ADS)

    Chiras, Stefanie Ruth

    Electromigration has long been recognized as a phenomenon that induces mass redistribution in metals which, when constrained, can lead to the creation of stress. Since the development of the integrated circuit, electromigration in interconnects (the metal lines that carry current between devices in integrated circuits) has become a reliability concern. The primary failure mechanism in interconnects is usually voiding, which causes electrical resistance increases in the circuit. In some cases, however, another failure mode occurs: fracture of the surrounding dielectric driven by electromigration-induced compressive stresses within the interconnect. It is this failure mechanism that is the focus of this thesis. To study dielectric fracture, both residual processing stresses and the development of electromigration-induced stress in isolated, constrained interconnects were measured. The high-resolution measurements were made using two types of piezospectroscopy, complemented by finite element analysis (FEA). Both procedures directly measured stress in the underlying or neighboring substrate and used FEA to determine interconnect stresses. These interconnect stresses were related to the resulting circuit failure mode through post-test scanning electron microscopy and resistance measurements taken during electromigration testing. The results provide qualitative evidence of electromigration-driven passivation fracture, and quantitative analysis of the theoretical model of the failure, the "immortal" interconnect concept.

  11. Failure Analysis Results and Corrective Actions Implemented for the Extravehicular Mobility Unit 3011 Water in the Helmet Mishap

    NASA Technical Reports Server (NTRS)

    Steele, John; Metselaar, Carol; Peyton, Barbara; Rector, Tony; Rossato, Robert; Macias, Brian; Weigel, Dana; Holder, Don

    2015-01-01

    Water entered the Extravehicular Mobility Unit (EMU) helmet during extravehicular activity (EVA) no. 23 aboard the International Space Station on July 16, 2013, resulting in the termination of the EVA approximately 1 hour after it began. It was estimated that 1.5 liters of water had migrated up the ventilation loop into the helmet, adversely impacting the astronaut's hearing, vision, and verbal communication. Subsequent on-board testing and ground-based test, tear-down, and evaluation of the affected EMU hardware components determined that the proximate cause of the mishap was blockage of all water separator drum holes with a mixture of silica and silicates. The blockages caused a failure of the water separator degassing function, which resulted in EMU cooling water spilling into the ventilation loop, migrating around the circulating fan, and ultimately pushing into the helmet. The root cause of the failure was determined to be ground-processing shortcomings of the Airlock Cooling Loop Recovery (ALCLR) Ion Filter Beds, which led to various levels of contaminants being introduced into the filters before they left the ground. Those contaminants were thereafter introduced into the EMU hardware on-orbit during ALCLR scrubbing operations. This paper summarizes the failure analysis results along with identified process, hardware, and operational corrective actions that were implemented as a result of findings from this investigation.

  12. Failure Analysis Results and Corrective Actions Implemented for the EMU 3011 Water in the Helmet Mishap

    NASA Technical Reports Server (NTRS)

    Steele, John; Metselaar, Carol; Peyton, Barbara; Rector, Tony; Rossato, Robert; Macias, Brian; Weigel, Dana; Holder, Don

    2015-01-01

    During EVA (extravehicular activity) No. 23 aboard the ISS (International Space Station) on 07/16/2013, water entered the EMU (Extravehicular Mobility Unit) helmet, resulting in the termination of the EVA approximately 1 hour after it began. It was estimated that 1.5 L of water had migrated up the ventilation loop into the helmet, adversely impacting the astronaut's hearing, vision, and verbal communication. Subsequent on-board testing and ground-based TT&E (test, tear-down, and evaluation) of the affected EMU hardware components led to the determination that the proximate cause of the mishap was blockage of all water separator drum holes with a mixture of silica and silicates. The blockages caused a failure of the water separator function, which resulted in EMU cooling water spilling into the ventilation loop, around the circulating fan, and ultimately pushing into the helmet. The root cause of the failure was determined to be ground-processing shortcomings of the ALCLR (Airlock Cooling Loop Recovery) Ion Filter Beds, which led to various levels of contaminants being introduced into the filters before they left the ground. Those contaminants were thereafter introduced into the EMU hardware on-orbit during ALCLR scrubbing operations. This paper summarizes the failure analysis results along with identified process, hardware, and operational corrective actions that were implemented as a result of findings from this investigation.

  13. Methods for improved forewarning of condition changes in monitoring physical processes

    DOEpatents

    Hively, Lee M.

    2013-04-09

    This invention teaches further improvements in methods for forewarning of critical events via phase-space dissimilarity analysis of data from biomedical equipment, mechanical devices, and other physical processes. One improvement involves objective determination of a forewarning threshold (U_FW), together with a failure-onset threshold (U_FAIL) corresponding to a normalized value of a composite measure (C) of dissimilarity; and providing a visual or audible indication to a human observer of failure forewarning and/or failure onset. Another improvement relates to symbolization of the data according to the binary numbers representing the slope between adjacent data points. Another improvement relates to adding measures of dissimilarity based on state-to-state dynamical changes of the system. Still another improvement relates to using Shannon entropy as the measure of condition change in lieu of a connected or unconnected phase space.
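    The slope-symbolization improvement can be sketched in a few lines (illustrative signals, not the patented algorithm): encode each adjacent-point slope as a binary symbol, then use the Shannon entropy of the symbol distribution as a simple condition measure whose change can be tracked over time.

```python
import math

def symbolize(series):
    # Binary slope symbols: 1 if the signal rises between adjacent samples, else 0.
    return [1 if b > a else 0 for a, b in zip(series, series[1:])]

def shannon_entropy(symbols):
    # Shannon entropy (bits) of the empirical symbol distribution.
    n = len(symbols)
    probs = [symbols.count(s) / n for s in set(symbols)]
    return -sum(p * math.log2(p) for p in probs)

steady = [0, 1, 0, 1, 0, 1, 0, 1]    # regular alternation: mixed slope symbols
drifting = [0, 1, 2, 3, 4, 5, 6, 7]  # monotone trend: all slopes positive
print(shannon_entropy(symbolize(steady)), shannon_entropy(symbolize(drifting)))
```

    A shift in this entropy between data windows is the kind of condition-change indicator the patent describes comparing against forewarning and failure-onset thresholds.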

  14. Failure detection in high-performance clusters and computers using chaotic map computations

    DOEpatents

    Rao, Nageswara S.

    2015-09-01

    A programmable medium includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable medium includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures, and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure, and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
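    The chaotic-map idea rests on sensitive dependence on initial conditions: identical healthy processors iterating the same map from the same seed stay in lockstep, while any small arithmetic fault diverges exponentially and is easy to flag. A minimal sketch with a logistic map (our stand-in, not the patented implementation):

```python
def logistic_trajectory(x0, steps, r=3.9):
    # Iterate the logistic map x -> r*x*(1-x), chaotic for r = 3.9.
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

reference = logistic_trajectory(0.4, 50)
healthy = logistic_trajectory(0.4, 50)
# A tiny arithmetic fault (here a 1e-12 perturbation) is amplified
# exponentially by the chaotic map until it is clearly detectable.
faulty = logistic_trajectory(0.4 + 1e-12, 50)

def diverged(a, b, tol=1e-6):
    return any(abs(x - y) > tol for x, y in zip(a, b))

print(diverged(reference, healthy), diverged(reference, faulty))
```

    In a cluster setting, each node would compute its own trajectory and a comparator would flag any node whose trajectory departs from the consensus.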

  15. Automatically Detecting Failures in Natural Language Processing Tools for Online Community Text.

    PubMed

    Park, Albert; Hartzler, Andrea L; Huh, Jina; McDonald, David W; Pratt, Wanda

    2015-08-31

    The prevalence and value of patient-generated health text are increasing, but processing such text remains problematic. Although existing biomedical natural language processing (NLP) tools are appealing, most were developed to process clinician- or researcher-generated text, such as clinical notes or journal articles. In addition to being constructed for different types of text, other challenges of using existing NLP tools include constantly changing technologies, source vocabularies, and characteristics of text. These continuously evolving challenges warrant low-cost, systematic assessment. However, the primarily accepted evaluation method in NLP, manual annotation, requires tremendous effort and time. The primary objective of this study is to explore an alternative approach: using low-cost, automated methods to detect failures (eg, incorrect boundaries, missed terms, mismapped concepts) when processing patient-generated text with existing biomedical NLP tools. We first characterize common failures that NLP tools can make in processing online community text. We then demonstrate the feasibility of our automated approach in detecting these common failures using one of the most popular biomedical NLP tools, MetaMap. Using 9657 posts from an online cancer community, we explored our automated failure detection approach in two steps: (1) to characterize the failure types, we first manually reviewed MetaMap's commonly occurring failures, grouped the inaccurate mappings into failure types, and then identified causes of the failures through iterative rounds of manual review using open coding, and (2) to automatically detect these failure types, we then explored combinations of existing NLP techniques and dictionary-based matching for each failure cause. Finally, we manually evaluated the automatically detected failures.
From our manual review, we characterized three types of failure: (1) boundary failures, (2) missed term failures, and (3) word ambiguity failures. Within these three failure types, we discovered 12 causes of inaccurate concept mappings. Our automated methods detected almost half of MetaMap's 383,572 mappings as problematic. Word sense ambiguity failure was the most widely occurring, comprising 82.22% of failures. Boundary failure was the second most frequent, amounting to 15.90% of failures, while missed term failures were the least common, making up 1.88% of failures. The automated failure detection achieved precision, recall, accuracy, and F1 score of 83.00%, 92.57%, 88.17%, and 87.52%, respectively. We illustrate the challenges of processing patient-generated online health community text and characterize failures of NLP tools on this patient-generated health text, demonstrating the feasibility of our low-cost approach to automatically detect those failures. Our approach shows the potential for scalable and effective solutions to automatically assess the constantly evolving NLP tools and source vocabularies used to process patient-generated text.
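As a quick consistency check on the reported evaluation, the F1 score should equal the harmonic mean of the reported precision and recall, and it does:

```python
# F1 = harmonic mean of precision and recall; values from the abstract.
precision, recall = 0.8300, 0.9257
f1 = 2 * precision * recall / (precision + recall)
print(round(100 * f1, 2))  # matches the reported F1 of 87.52%
```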

  16. Failure Mode and Effect Analysis (FMEA) may enhance implementation of clinical practice guidelines: An experience from the Middle East.

    PubMed

    Babiker, Amir; Amer, Yasser S; Osman, Mohamed E; Al-Eyadhy, Ayman; Fatani, Solafa; Mohamed, Sarar; Alnemri, Abdulrahman; Titi, Maher A; Shaikh, Farheen; Alswat, Khalid A; Wahabi, Hayfaa A; Al-Ansary, Lubna A

    2018-02-01

    Implementation of clinical practice guidelines (CPGs) has been shown to reduce variation in practice and improve health care quality and patients' safety. There is limited experience of CPG implementation (CPGI) in the Middle East. The CPG program in our institution was launched in 2009. The Quality Management department conducted a Failure Mode and Effect Analysis (FMEA) for further improvement of CPGI. This is a prospective study of a qualitative/quantitative design. Our FMEA included (1) process review: recording the steps and activities of CPGI; (2) hazard analysis: recording activity-related failure modes and their effects, identifying required actions, assigning severity, occurrence, and detection scores for each failure mode, and calculating the risk priority number (RPN) using an online interactive FMEA tool; (3) planning: RPNs were prioritized, and recommendations and further plans for new interventions were identified; and (4) monitoring: after reduction or elimination of the failure modes, the calculated RPNs will be compared with those from a subsequent analysis in the post-implementation phase. The data were scrutinized from feedback of quality team members using an FMEA framework to enhance the implementation of 29 adapted CPGs. The identified potential common failure modes with the highest RPNs (≥ 80) included awareness/training activities, accessibility of CPGs, few advocates among clinical champions, and CPG auditing. Actions included (1) organizing regular awareness activities, (2) making printed and electronic copies of CPGs accessible, (3) encouraging senior practitioners to get involved in CPGI, and (4) enhancing CPG auditing as part of the quality sustainability plan. In our experience, FMEA could be a useful tool to enhance CPGI. It helped us identify potential barriers and prepare relevant solutions. © 2017 John Wiley & Sons, Ltd.

  17. Putting Integrated Systems Health Management Capabilities to Work: Development of an Advanced Caution and Warning System for Next-Generation Crewed Spacecraft Missions

    NASA Technical Reports Server (NTRS)

    Mccann, Robert S.; Spirkovska, Lilly; Smith, Irene

    2013-01-01

    Integrated System Health Management (ISHM) technologies have advanced to the point where they can provide significant automated assistance with real-time fault detection, diagnosis, guided troubleshooting, and failure consequence assessment. To exploit these capabilities in actual operational environments, however, ISHM information must be integrated into operational concepts and associated information displays in ways that enable human operators to process and understand the ISHM system information rapidly and effectively. In this paper, we explore these design issues in the context of an advanced caution and warning system (ACAWS) for next-generation crewed spacecraft missions. User interface concepts for depicting failure diagnoses, failure effects, redundancy loss, "what-if" failure analysis scenarios, and resolution of ambiguity groups are discussed and illustrated.

  18. TH-EF-BRC-04: Quality Management Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yorke, E.

    2016-06-15

    This Hands-on Workshop will be focused on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100’s risk analysis: Process mapping, Failure-Modes and Effects Analysis and fault-tree analysis will be introduced with a 5 minute refresher presentation and each presentation will be followed by a 30 minute small group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis and Fault Tree Analysis. To gain familiarity with these three techniques in a small group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.

  19. TH-EF-BRC-00: TG-100 Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2016-06-15

    This Hands-on Workshop will be focused on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100’s risk analysis: Process mapping, Failure-Modes and Effects Analysis and fault-tree analysis will be introduced with a 5 minute refresher presentation and each presentation will be followed by a 30 minute small group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis and Fault Tree Analysis. To gain familiarity with these three techniques in a small group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.

  20. TH-EF-BRC-02: FMEA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huq, M.

    2016-06-15

    This Hands-on Workshop will be focused on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100’s risk analysis: Process mapping, Failure-Modes and Effects Analysis and fault-tree analysis will be introduced with a 5 minute refresher presentation and each presentation will be followed by a 30 minute small group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis and Fault Tree Analysis. To gain familiarity with these three techniques in a small group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunscombe, P.

    This Hands-on Workshop will be focused on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100’s risk analysis: Process mapping, Failure-Modes and Effects Analysis and fault-tree analysis will be introduced with a 5 minute refresher presentation and each presentation will be followed by a 30 minute small group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis and Fault Tree Analysis. To gain familiarity with these three techniques in a small group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.

  2. Qualification of computerized monitoring systems in a cell therapy facility compliant with the good manufacturing practices.

    PubMed

    Del Mazo-Barbara, Anna; Mirabel, Clémentine; Nieto, Valentín; Reyes, Blanca; García-López, Joan; Oliver-Vila, Irene; Vives, Joaquim

    2016-09-01

    Computerized systems (CS) are essential in the development and manufacture of cell-based medicines and must comply with good manufacturing practice, thus pushing academic developers to implement methods that are typically found within pharmaceutical industry environments. Qualitative and quantitative risk analyses were performed by Ishikawa and Failure Mode and Effects Analysis, respectively. A process for qualification of a CS that keeps track of environmental conditions was designed and executed. The simplicity of the Ishikawa analysis permitted identification of critical parameters that were subsequently quantified by Failure Mode and Effects Analysis, resulting in a list of tests included in the qualification protocols. The approach presented here contributes to simplifying and streamlining the qualification of CS in compliance with pharmaceutical quality standards.

  3. Submarine slope failures along the convergent continental margin of the Middle America Trench

    NASA Astrophysics Data System (ADS)

    Harders, Rieka; Ranero, César R.; Weinrebe, Wilhelm; Behrmann, Jan H.

    2011-06-01

    We present the first comprehensive study of mass wasting processes in the continental slope of a convergent margin of a subduction zone where tectonic processes are dominated by subduction erosion. We have used multibeam bathymetry along ~1300 km of the Middle America Trench of the Central America Subduction Zone and deep-towed side-scan sonar data. We found abundant evidence of large-scale slope failures that were mostly previously unmapped. The features are classified into a variety of slope failure types, creating an inventory of 147 slope failure structures. Their type distribution and abundance define a segmentation of the continental slope in six sectors. The segmentation in slope stability processes does not appear to be related to slope preconditioning due to changes in physical properties of sediment, presence/absence of gas hydrates, or apparent changes in the hydrogeological system. The segmentation appears to be better explained by changes in slope preconditioning due to variations in tectonic processes. The region is an optimal setting to study how tectonic processes related to variations in intensity of subduction erosion and changes in relief of the underthrusting plate affect mass wasting processes of the continental slope. The largest slope failures occur offshore Costa Rica. There, subducting ridges and seamounts produce failures with up to hundreds of meters high headwalls, with detachment planes that penetrate deep into the continental margin, in some cases reaching the plate boundary. Offshore northern Costa Rica a smooth oceanic seafloor underthrusts the least disturbed continental slope. Offshore Nicaragua, the ocean plate is ornamented with smaller seamounts and horst and graben topography of variable intensity. Here mass wasting structures are numerous and comparatively smaller, but when combined, they affect a large part of the margin segment.
Farther north, offshore El Salvador and Guatemala, the downgoing plate has no large seamounts but well-defined horst and graben topography. Off El Salvador slope failure is least developed and mainly occurs in the uppermost continental slope at canyon walls. Off Guatemala mass wasting is abundant and possibly related to normal faulting across the slope. Collapse in the wake of subducting ocean plate topography is a likely failure trigger of slumps. Rapid oversteepening above subducting relief may trigger translational slides in the middle Nicaraguan and upper Costa Rican slopes. Earthquake shaking may be a trigger, but we interpret that the slope failure rate is lower than the recurrence rate of large earthquakes in the region. Generally, our analysis indicates that the importance of mass wasting processes in the evolution of margins dominated by subduction erosion, and their role in sediment dynamics, may have been previously underestimated.

  4. Robustness surfaces of complex networks

    NASA Astrophysics Data System (ADS)

    Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis

    2014-09-01

    Although the robustness of complex networks has been extensively studied in the last decade, a unifying framework able to embrace all the proposed metrics is still lacking. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution for the two aforementioned problems by defining the R*-value and introducing the concept of robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). We first normalize the initial robustness of a network to 1. Second, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several percentages of failures and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared.
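    The PCA step can be sketched with a toy metric matrix (all numbers invented, and the variable names are ours, not the paper's): rows are failure percentages, columns are robustness metrics normalized so the intact network scores 1, and the first principal component supplies an R*-like value per row.

```python
import numpy as np

rng = np.random.default_rng(1)
failure_pcts = np.linspace(0.05, 0.5, 10)
# Three hypothetical robustness metrics that all degrade (noisily) with the
# failure percentage, each normalized to 1 for the intact network.
metrics = np.clip(
    1.0 - np.outer(failure_pcts, [1.0, 1.5, 0.8])
    + 0.02 * rng.standard_normal((10, 3)),
    0.0, 1.0,
)

# PCA via SVD of the centered metric matrix.
centered = metrics - metrics.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
r_star = centered @ vt[0]               # scores on the first principal component
explained = float(s[0] ** 2 / (s ** 2).sum())
print(r_star.shape, round(explained, 3))
```

    Because the metrics co-vary with the failure level, the first component captures nearly all the variance, which is what justifies collapsing several metrics into one R*-like value; repeating this over failure realizations yields the surface.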

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarrack, A.G.

    The purpose of this report is to document fault tree analyses which have been completed for the Defense Waste Processing Facility (DWPF) safety analysis. Logic models for equipment failures and human error combinations that could lead to flammable gas explosions in various process tanks, or failure of critical support systems were developed for internal initiating events and for earthquakes. These fault trees provide frequency estimates for support systems failures and accidents that could lead to radioactive and hazardous chemical releases both on-site and off-site. Top event frequency results from these fault trees will be used in further APET analyses to calculate accident risk associated with DWPF facility operations. This report lists and explains important underlying assumptions, provides references for failure data sources, and briefly describes the fault tree method used. Specific commitments from DWPF to provide new procedural/administrative controls or system design changes are listed in the "Facility Commitments" section. The purpose of the "Assumptions" section is to clarify the basis for fault tree modeling, and is not necessarily a list of items required to be protected by Technical Safety Requirements (TSRs).

  6. Finite element assisted prediction of ductile fracture in sheet bulging

    NASA Astrophysics Data System (ADS)

    Donald, Bryan J. Mac; Lorza, Ruben Lostado; Yoshihara, Shoichiro

    2017-10-01

    With growing demand for energy efficiency, there is much focus on reducing oil consumption and utilising alternative fuels. One contributor to a solution in this area is to produce lighter vehicles that are more fuel efficient and/or allow the use of alternative fuel sources (e.g. electric-powered automobiles). Near-net-shape manufacturing processes such as hydroforming have great potential to reduce structural weight while still maintaining structural strength and performance. Finite element analysis techniques have proved invaluable in optimizing such hydroforming processes; however, the majority of such studies have used simple predictors of failure, usually yield criteria such as von Mises stress. There is clearly potential to obtain more optimal solutions using more advanced predictors of failure. This paper compared the von Mises stress failure criterion with Oyane's ductile fracture criterion in the sheet hydroforming of magnesium alloys. It was found that the results obtained from the models using Oyane's ductile fracture criterion were more realistic than those obtained from the models using von Mises stress as the failure criterion.
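    The baseline criterion the paper argues is too simple on its own is easy to state: flag failure when the von Mises equivalent stress exceeds the yield strength. A minimal sketch with illustrative numbers (the yield value is a rough order of magnitude for an AZ31-type magnesium sheet, not from the paper):

```python
import math

def von_mises(s11, s22, s33, s12, s23, s13):
    # Equivalent (von Mises) stress from the six stress components.
    return math.sqrt(
        0.5 * ((s11 - s22) ** 2 + (s22 - s33) ** 2 + (s33 - s11) ** 2)
        + 3.0 * (s12 ** 2 + s23 ** 2 + s13 ** 2)
    )

yield_strength = 160.0  # MPa; illustrative order of magnitude only
for stress_state in [(100.0, 0, 0, 0, 0, 0), (200.0, 0, 0, 0, 0, 0)]:
    sigma = von_mises(*stress_state)
    print(round(sigma, 1), "fails" if sigma > yield_strength else "safe")
```

    A ductile fracture criterion such as Oyane's instead integrates damage along the strain path, which is why it can outperform this single-snapshot yield check in forming simulations.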

  7. Robustness surfaces of complex networks.

    PubMed

    Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis

    2014-09-02

    Although the robustness of complex networks has been extensively studied in the last decade, a unifying framework able to embrace all the proposed metrics is still lacking. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution for the two aforementioned problems by defining the R*-value and introducing the concept of the robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). First, we normalize the initial robustness of a network to 1. Secondly, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several failure percentages and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared.
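
    The R*-value construction described above (combining several dimensioned robustness metrics via PCA) can be sketched with plain numpy; the metric curves below are toy stand-ins, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical robustness metrics (columns) measured at increasing
# percentages of failed nodes (rows): e.g. largest component size,
# global efficiency, mean degree -- all scaled so the intact network is 1.0.
p_fail = np.linspace(0.0, 0.9, 10)
metrics = np.column_stack([
    1.0 - p_fail,                    # largest connected component (toy model)
    (1.0 - p_fail) ** 2,             # global efficiency (toy model)
    1.0 - 0.8 * p_fail,              # mean degree (toy model)
]) + rng.normal(0, 0.01, (10, 3))    # measurement noise

# PCA: project each metric vector onto the first principal component,
# which acts as the single most informative robustness summary (R*-value).
X = metrics - metrics.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
r_star = metrics @ vt[0]             # one R* value per failure percentage

# A robustness surface stacks r_star curves from many failure realizations.
print(r_star.shape)
```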

  8. Progressive Failure Analysis Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Sleight, David W.

    1999-01-01

    A progressive failure analysis method has been developed for predicting the failure of laminated composite structures under geometrically nonlinear deformations. The progressive failure analysis uses C(exp 1) shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms, and several options are available to degrade the material properties after failures. The progressive failure analysis method is implemented in the COMET finite element analysis code and can predict the damage and response of laminated composite structures from initial loading to final failure. The different failure criteria and material degradation methods are compared and assessed by performing analyses of several laminated composite structures. Results from the progressive failure method indicate good correlation with the existing test data, except in structural applications where interlaminar stresses are important, since these may cause failure mechanisms such as debonding or delamination.

  9. Independent Orbiter Assessment (IOA): Assessment of the backup flight system FMEA/CIL

    NASA Technical Reports Server (NTRS)

    Prust, E. E.; Ewell, J. J., Jr.; Hinsdale, L. W.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Backup Flight System (BFS) hardware, generating draft failure modes and Potential Critical Items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the proposed NASA Post 51-L FMEA/CIL baseline. A resolution of each discrepancy from the comparison is provided through additional analysis as required. This report documents the results of that comparison for the Orbiter BFS hardware. The IOA product for the BFS analysis consisted of 29 failure mode worksheets that resulted in 21 Potential Critical Items (PCI) being identified. This product was originally compared with the proposed NASA BFS baseline and subsequently compared with the applicable Data Processing System (DPS), Electrical Power Distribution and Control (EPD and C), and Displays and Controls NASA CIL items. The comparisons determined if there were any results which had been found by the IOA but were not in the NASA baseline. The original assessment determined there were numerous failure modes and potential critical items in the IOA analysis that were not contained in the NASA BFS baseline. Conversely, the NASA baseline contained three FMEAs (IMU, ADTA, and Air Data Probe) for CIL items that were not identified in the IOA product.

  10. PDA survey of quality risk management practices in the pharmaceutical, devices, & biotechnology industries.

    PubMed

    Ahmed, Ruhi; Baseman, Harold; Ferreira, Jorge; Genova, Thomas; Harclerode, William; Hartman, Jeffery; Kim, Samuel; Londeree, Nanette; Long, Michael; Miele, William; Ramjit, Timothy; Raschiatore, Marlene; Tomonto, Charles

    2008-01-01

    In July 2006 the Parenteral Drug Association's Risk Management Task Force for Aseptic Processes conducted an electronic survey of PDA members to determine current industry practices regarding implementation of Quality Risk Management in their organizations. This electronic survey was open and publicly available via the PDA website and targeted professionals in our industry who are involved in initiating, implementing, or reviewing risk management programs or decisions in their organizations. One hundred twenty-nine members participated and their demographics are presented in the sidebar "Correspondents Profile". Among the major findings are: * The "Aseptic Processing/Filling" operation is the functional area identified as having the greatest need for risk assessment and quality risk management. * The most widely used methodology in industry to identify risk is Failure Mode and Effects Analysis (FMEA). This tool was most widely applied in assessing change control and for adverse event, complaint, or failure investigations. * Despite the fact that personnel training was identified as the strategy most used for controlling/minimizing risk, the largest contributors to sterility failure in operations are still "Personnel". * Most companies still rely on "Manufacturing Controls" to mitigate risk and deemed the utilization of Process Analytical Technology (PAT) least important in this aspect. * A majority of correspondents verified that they did not periodically assess their risk management programs. * A majority of the correspondents desired to see case studies or examples of risk analysis implementation (as applicable to aseptic processing) in future PDA technical reports on risk management.

  11. 49 CFR 510.12 - Remedies for failure to comply with compulsory process.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 6 2010-10-01 2010-10-01 false Remedies for failure to comply with compulsory process. 510.12 Section 510.12 Transportation Other Regulations Relating to Transportation (Continued... § 510.12 Remedies for failure to comply with compulsory process. Any failure to comply with compulsory...

  12. Application of Six Sigma methodology to a cataract surgery unit.

    PubMed

    Taner, Mehmet Tolga

    2013-01-01

    The article's aim is to focus on the application of Six Sigma to minimise intraoperative and post-operative complication rates in a Turkish public hospital cataract surgery unit. Implementing define-measure-analyse-improve-control (DMAIC) involves process mapping, fishbone diagrams and rigorous data collection. Failure mode and effect analysis (FMEA), Pareto diagrams, control charts and process capability analysis are applied to redress cataract surgery failure root causes. Insufficient skills of assistant surgeons and technicians, low quality of the IOLs used, wrong IOL placement, unsystematic sterilisation of surgery rooms and devices, and an unprioritised network system are found to be the critical drivers of intraoperative and post-operative complications. The sigma level was increased from 2.60 to 3.75 subsequent to extensive training of assistant surgeons, ophthalmologists and technicians, better quality IOLs, systematic sterilisation and air-filtering, and the implementation of a more sophisticated network system. This article shows that Six Sigma measurement and process improvement can become the impetus for cataract unit staff to rethink their process and reduce malpractices. Measuring, recording and reporting data regularly helps them to continuously monitor their overall process and deliver safer treatments. This is the first Six Sigma ophthalmology study in Turkey.
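
    Sigma levels like the 2.60 and 3.75 quoted above are conventionally derived from a defect rate via the standard normal quantile plus the customary 1.5-sigma shift. A small stdlib-only sketch with made-up complication counts:

```python
from statistics import NormalDist

def sigma_level(defects, opportunities):
    """Short-term sigma level from a defect rate, using the conventional
    1.5-sigma shift. Illustrative only; the figures below are hypothetical."""
    dpmo = 1e6 * defects / opportunities
    yield_frac = 1.0 - dpmo / 1e6
    return NormalDist().inv_cdf(yield_frac) + 1.5

# e.g. 45 complicated surgeries out of 1000 before improvement,
# 8 out of 1000 afterwards (invented numbers, not the study's data)
before = sigma_level(45, 1000)
after = sigma_level(8, 1000)
print(f"{before:.2f} -> {after:.2f}")
```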

  13. Failure Behavior of Granite Affected by Confinement and Water Pressure and Its Influence on the Seepage Behavior by Laboratory Experiments.

    PubMed

    Cheng, Cheng; Li, Xiao; Li, Shouding; Zheng, Bo

    2017-07-14

    The failure behavior of granite is paramount for the host rock stability of geological repositories for high-level waste (HLW) disposal. Failure behavior also affects the seepage behavior related to the transport of radionuclides. Few published studies have given a consistent analysis of how confinement and water pressure affect the failure behavior, which in turn influences the seepage behavior of the rock during the damage process. Based on a series of laboratory experiments on NRG01 granite samples cored from the Alxa area, a candidate area for China's HLW disposal, this paper presents some detailed observations and analyses for a better understanding of the failure mechanism and seepage behavior of the samples under different confinements and water pressures. The main findings of this study are as follows: (1) Strength reduction was found for the granite under water pressure. In addition, the complete axial stress-strain curves show a more obvious yielding process in the pre-peak region and a more gradual stress drop in the post-peak region; (2) A shear fracturing pattern is more likely to form in the granite samples under the effect of water pressure, even at much lower confinements, than predicted from the conventional triaxial compressive results; (3) Four stages of the inflow rate curves are identified, and the seepage behaviors are found to depend on the failure behavior affected by the confinement and water pressure.

  14. Evaluation of a Progressive Failure Analysis Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Sleight, David W.; Knight, Norman F., Jr.; Wang, John T.

    1997-01-01

    A progressive failure analysis methodology has been developed for predicting the nonlinear response and failure of laminated composite structures. The progressive failure analysis uses C(exp 1) plate and shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms. The progressive failure analysis model is implemented into a general purpose finite element code and can predict the damage and response of laminated composite structures from initial loading to final failure.

  15. An Example of Concurrent Engineering

    NASA Technical Reports Server (NTRS)

    Rowe, Sidney; Whitten, David; Cloyd, Richard; Coppens, Chris; Rodriguez, Pedro

    1998-01-01

    The Collaborative Engineering Design and Analysis Room (CEDAR) facility allows on-the-spot design review capability for any project during all phases of development. The required disciplines assemble in this facility to work on any problems (analysis, manufacturing, inspection, etc.) associated with a particular design. A small, highly focused team of specialists can meet in this room to better expedite the process of developing a solution to an engineering task within the framework of the constraints that are unique to each discipline. This facility provides the engineering tools and translators to develop a concept within the confines of the room or with remote team members who can access the team's data from other locations. The CEDAR area is also envisioned as an excellent venue for failure investigation meetings, where the computer capabilities can be used in conjunction with the Smart Board display to develop failure trees, brainstorm failure modes, and evaluate possible solutions.

  16. [Development of Hospital Equipment Maintenance Information System].

    PubMed

    Zhou, Zhixin

    2015-11-01

    A hospital equipment maintenance information system plays an important role in improving medical treatment quality and efficiency. From a requirements analysis of hospital equipment maintenance, the system function diagram is drawn. Based on an analysis of the input and output data, tables and reports connected with the equipment maintenance process, the relationships between entities and attributes are identified, the E-R diagram is drawn, and the relational database tables are established. The software development should meet the actual process requirements of maintenance and have a friendly user interface and flexible operation. The software can analyze failure causes by statistical analysis.

  17. Fundamental analysis of the failure of polymer-based fiber reinforced composites

    NASA Technical Reports Server (NTRS)

    Kanninen, M. F.; Rybicki, E. F.; Griffith, W. I.; Broek, D.

    1975-01-01

    A mathematical model predicting the strength of unidirectional fiber reinforced composites containing known flaws and with linear elastic-brittle material behavior was developed. The approach was to embed a local heterogeneous region surrounding the crack tip into an anisotropic elastic continuum. This (1) permits an explicit analysis of the micromechanical processes involved in the fracture, and (2) remains simple enough to be useful in practical computations. Computations for arbitrary flaw size and orientation under arbitrary applied loads were performed. The mechanical properties were those of graphite epoxy. With the rupture properties arbitrarily varied to test the capabilities of the model to reflect real fracture modes, it was shown that fiber breakage, matrix crazing, crack bridging, matrix-fiber debonding, and axial splitting can all occur during a period of (gradually) increasing load prior to catastrophic failure. The calculations also reveal the sequential nature of the stable crack growth process preceding fracture.

  18. Microembossing of ultrafine grained Al: microstructural analysis and finite element modelling

    NASA Astrophysics Data System (ADS)

    Qiao, Xiao Guang; Bah, Mamadou T.; Zhang, Jiuwen; Gao, Nong; Moktadir, Zakaria; Kraft, Michael; Starink, Marco J.

    2010-10-01

    Ultra-fine-grained (UFG) Al-1050 processed by equal channel angular pressing and UFG Al-Mg-Cu-Mn processed by high-pressure torsion (HPT) were embossed at both room temperature and 300 °C, with the aim of producing micro-channels. The behaviour of the Al alloys during the embossing process was analysed using finite element modelling. The cold embossing of both Al alloys is characterized by partial pattern transfer, a large embossing force, channels with oblique sidewalls and a large failure rate of the mould. The hot embossing is characterized by straight channel sidewalls, fully transferred patterns and reduced loads, which decrease the failure rate of the mould. Hot embossing of UFG Al-Mg-Cu-Mn produced by HPT shows potential for fabricating microelectromechanical system components with micro-channels.

  19. The Effect of Positive Group Psychotherapy and Motivational Interviewing on Smoking Cessation: A Qualitative Descriptive Study.

    PubMed

    Lee, Eun Jin

    The purpose of this study was to describe the process and evaluate the effect of positive group psychotherapy and motivational interviewing as an intervention for smoking cessation. A qualitative descriptive study was conducted at a university in South Korea. Positive group psychotherapy and motivational interviewing sessions were attended by 36 smokers for 1 hour once a week, for a total of 6 hours. A recorded exit interview was conducted after the intervention. The resulting transcripts were analyzed with content analysis and thematic analysis. Among the 36 study participants, the importance of stopping smoking was rated higher in the success group (defined as those who ceased smoking for at least 3 months; 8.6 ± 0.4, n = 10) than in the failure group (those who did not cease smoking for at least 3 months; 7.75 ± 0.3, n = 26; p < .01). The confidence to stop smoking was rated higher by the successes (8.4 ± 0.3) than by the failures (5.5 ± 0.4; p < .01). More successes wanted to stop smoking for the sake of their loved ones (60%) and health (50%), whereas more failures wanted to stop smoking to save money (45.5%). Failures had more cross-addictions than successes (three to four addictions: 31.5% vs. 20%). When participants were asked to find 10 personality merits, 78% of the successes and 47% of the failures found their 10 merits. The therapeutic process was described as "sharing the smoking cessation process with others," "detailed guidance for stress management and smoking cessation," and "compliments about efforts for smoking cessation." The importance of and confidence in smoking cessation were predictors of successful cessation for 3-6 months. Motivational interviewing increased motivation, whereas positive group psychotherapy increased positive thoughts and confidence.

  20. Reliability analysis of a wastewater treatment plant using fault tree analysis and Monte Carlo simulation.

    PubMed

    Taheriyoun, Masoud; Moradinejad, Saber

    2015-01-01

    The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment failures, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed for system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the problem of reliability was studied for the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator mistakes, physical damage, and design problems. The analytical methods are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves the insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
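
    The combination of fault tree logic with Monte Carlo simulation described above can be sketched in a few lines; the gate structure and basic-event probabilities below are invented for illustration and are not taken from the Tehran West Town model:

```python
import random

random.seed(42)

# Hypothetical basic-event probabilities for an effluent-BOD-violation
# fault tree (values invented for illustration).
P = {
    "operator_error":    0.05,
    "aeration_failure":  0.02,
    "clarifier_fault":   0.03,
    "design_deficiency": 0.01,
}

def top_event(draw):
    """Toy gate logic: violation if operator error occurs together with
    any mechanical fault (AND over an OR), OR if a design deficiency
    exists on its own."""
    mech = draw["aeration_failure"] or draw["clarifier_fault"]
    return (draw["operator_error"] and mech) or draw["design_deficiency"]

N = 200_000
hits = 0
for _ in range(N):
    draw = {event: random.random() < p for event, p in P.items()}
    hits += top_event(draw)

print(f"estimated top-event probability ~ {hits / N:.4f}")
```

    The same tree evaluated analytically via minimal cut sets gives about 0.0124 per year for these toy numbers, which the simulation should approach as N grows.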

  1. FMEA team performance in health care: A qualitative analysis of team member perceptions.

    PubMed

    Wetterneck, Tosha B; Hundt, Ann Schoofs; Carayon, Pascale

    2009-06-01

    Objective: Failure mode and effects analysis (FMEA) is a commonly used prospective risk assessment approach in health care. Failure mode and effects analyses are time consuming and resource intensive, and team performance is crucial for FMEA success. We evaluate FMEA team members' perceptions of FMEA team performance to provide recommendations to improve the FMEA process in health care organizations. Methods: Structured interviews and survey questionnaires were administered to team members of 2 FMEA teams at a Midwest hospital to evaluate team member perceptions of FMEA team performance and factors influencing team performance. Interview transcripts underwent content analysis, and descriptive statistics were performed on questionnaire results to identify and quantify FMEA team performance. Theme-based nodes were categorized using the input-process-outcome model for team performance. Results: Twenty-eight interviews and questionnaires were completed by 24 team members. Four persons participated on both teams. There were significant differences between the 2 teams regarding perceptions of team functioning and overall team effectiveness that are explained by differences in team inputs and process (e.g., leadership/facilitation, team objectives, attendance of process owners). Conclusions: Evaluation of team members' perceptions of team functioning produced useful insights that can be used to model future team functioning. Guidelines for FMEA team success are provided.

  2. Factors Influencing Progressive Failure Analysis Predictions for Laminated Composite Structure

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.

    2008-01-01

    Progressive failure material modeling methods used for structural analysis, including failure initiation and material degradation, are presented. Different failure initiation criteria and material degradation models are described that define progressive failure formulations. These progressive failure formulations are implemented in a user-defined material model for use with a nonlinear finite element analysis tool. The failure initiation criteria include the maximum stress criteria, maximum strain criteria, the Tsai-Wu failure polynomial, and the Hashin criteria. The material degradation model is based on the ply-discounting approach, in which the local material constitutive coefficients are degraded. Applications and extensions of the progressive failure analysis material model address two-dimensional plate and shell finite elements and three-dimensional solid finite elements. Implementation details are described in the present paper. Parametric studies for laminated composite structures are discussed to illustrate the features of the progressive failure modeling methods that have been implemented and to demonstrate their influence on progressive failure analysis predictions.
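
    A minimal sketch of the ply-discounting idea described above, pairing a maximum stress failure check with degradation of the local constitutive coefficients (the allowables, stiffnesses, and degradation factor are all hypothetical):

```python
import numpy as np

# Simplified ply-level progressive failure sketch (ply-discounting).
XT, YT, S = 1500.0, 40.0, 70.0           # MPa: fiber, matrix, shear allowables

def max_stress_failed(s1, s2, s12):
    """Maximum stress criterion: report which failure modes are active."""
    return {"fiber": abs(s1) > XT, "matrix": abs(s2) > YT, "shear": abs(s12) > S}

def discount(props, failed):
    """Degrade the constitutive coefficients for failed modes."""
    E1, E2, G12 = props
    if failed["fiber"]:
        E1 *= 1e-6                        # near-total loss of fiber stiffness
    if failed["matrix"] or failed["shear"]:
        E2 *= 1e-6                        # matrix-dominated moduli discounted
        G12 *= 1e-6
    return (E1, E2, G12)

props = (140e3, 10e3, 5e3)                # MPa: E1, E2, G12 (hypothetical ply)
state = max_stress_failed(s1=800.0, s2=55.0, s12=30.0)
props = discount(props, state)
print(state, props)
```

    In an actual progressive failure analysis this check-and-discount step would run per ply, per integration point, inside the nonlinear equilibrium iterations, with the load reapplied after each degradation.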

  3. Knowledge representation and user interface concepts to support mixed-initiative diagnosis

    NASA Technical Reports Server (NTRS)

    Sobelman, Beverly H.; Holtzblatt, Lester J.

    1989-01-01

    The Remote Maintenance Monitoring System (RMMS) provides automated support for the maintenance and repair of ModComp computer systems used in the Launch Processing System (LPS) at Kennedy Space Center. RMMS supports manual and automated diagnosis of intermittent hardware failures, providing an efficient means for accessing and analyzing the data generated by catastrophic failure recovery procedures. This paper describes the design and functionality of the user interface for interactive analysis of memory dump data, relating it to the underlying declarative representation of memory dumps.

  4. Parts, materials, and processes experience summary, volume 2. [design, engineering, and quality control

    NASA Technical Reports Server (NTRS)

    1973-01-01

    This summary provides the general engineering community with the accumulated experience from ALERT reports issued by NASA and the Government-Industry Data Exchange Program, and related experience gained by Government and industry. It provides expanded information on selected topics by relating the problem area (failure) to the cause, the investigation and findings, the suggestions for avoidance (inspections, screening tests, proper part applications, requirements for manufacturers' plant facilities, etc.), and failure analysis procedures. Diodes, integrated circuits, and transistors are covered in this volume.

  5. 8 CFR 1208.10 - Failure to appear at a scheduled hearing before an immigration judge; failure to follow...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... information processing. 1208.10 Section 1208.10 Aliens and Nationality EXECUTIVE OFFICE FOR IMMIGRATION REVIEW... to follow requirements for biometrics and other biographical information processing. Failure to... application and the entry of an order of deportation or removal in absentia. Failure to comply with processing...

  6. Failure Analysis in Platelet Molded Composite Systems

    NASA Astrophysics Data System (ADS)

    Kravchenko, Sergii G.

    Long-fiber discontinuous composite systems in the form of chopped prepreg tapes provide an advanced, structural grade molding compound allowing for the fabrication of complex three-dimensional components. An understanding of the process-structure-property relationship is essential for the application of prepreg platelet molded components, especially because of their possibly irregular, disordered, heterogeneous morphology. Herein, a structure-property relationship was analyzed in composite systems of many platelets. Regular and irregular morphologies were considered. Platelet-based systems with a more ordered morphology possess superior mechanical performance. While regular morphologies allow for a careful inspection of failure mechanisms derived from the morphological characteristics, irregular morphologies are representative of the composite architectures resulting from uncontrolled deposition and molding with chopped prepregs. Progressive failure analysis (PFA) was used to study the damaged deformation up to ultimate failure in a platelet-based composite system. Computational damage mechanics approaches were utilized to conduct the PFA. The developed computational models provided an understanding of how the composite structure details, namely the platelet geometry and the system morphology (geometrical arrangement and orientation distribution of platelets), define the effective mechanical properties of a platelet-molded composite system: its stiffness, strength, and variability in properties.

  7. Identification and classification of failure modes in laminated composites by using a multivariate statistical analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Baccar, D.; Söffker, D.

    2017-11-01

    Acoustic emission (AE) is a suitable method for monitoring the health of composite structures in real time. However, AE-based failure mode identification and classification are still complex to apply because AE waves are generally released simultaneously from all AE-emitting damage sources. Hence, the use of advanced signal processing techniques in combination with pattern recognition approaches is required. In this paper, AE signals generated from a laminated carbon fiber reinforced polymer (CFRP) subjected to an indentation test are examined and analyzed. A new pattern recognition approach involving a number of processing steps able to be implemented in real time is developed. Unlike common classification approaches, here only CWT coefficients are extracted as relevant features. First, the Continuous Wavelet Transform (CWT) is applied to the AE signals. Then, a dimensionality reduction step using Principal Component Analysis (PCA) is carried out on the coefficient matrices. The PCA-based feature distribution is analyzed using Kernel Density Estimation (KDE), allowing the determination of a specific pattern for each fault-specific AE signal. Moreover, the waveform and frequency content of the AE signals are examined in depth and compared with fundamental assumptions reported in this field. A correlation between the identified patterns and failure modes is achieved. The introduced method improves damage classification and can be used as a non-destructive evaluation tool.
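
    The CWT-then-PCA feature pipeline described above can be sketched with numpy alone, using a hand-rolled Ricker wavelet in place of a signal processing library; the synthetic AE "hits" below stand in for real sensor data:

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet, a common choice for burst-like AE hits."""
    t = np.arange(points) - (points - 1) / 2.0
    x = t / a
    return (1 - x**2) * np.exp(-x**2 / 2)

def cwt(signal, widths):
    """Continuous wavelet transform by direct convolution (self-contained)."""
    return np.vstack([np.convolve(signal, ricker(10 * w, w), mode="same")
                      for w in widths])

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512)
# Two synthetic AE hit types with different dominant frequencies, standing
# in for two failure modes (e.g. matrix cracking vs. fiber breakage).
hits = [np.sin(2 * np.pi * f * t) * np.exp(-30 * t) + rng.normal(0, 0.05, 512)
        for f in (40, 40, 120, 120)]

# Feature extraction: flattened CWT coefficient maps, then PCA to 2-D.
F = np.vstack([cwt(h, widths=[2, 4, 8, 16]).ravel() for h in hits])
F -= F.mean(axis=0)
_, _, vt = np.linalg.svd(F, full_matrices=False)
scores = F @ vt[:2].T                    # 2-D embedding, one row per AE signal
print(scores.shape)
```

    In the paper's pipeline a KDE would then be fitted to these PCA scores to delineate a density pattern per failure mode; here the two hit types already separate in the embedding.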

  8. SU-F-R-20: Image Texture Features Correlate with Time to Local Failure in Lung SBRT Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, M; Abazeed, M; Woody, N

    Purpose: To explore possible correlations between CT image-based texture and histogram features and time-to-local-failure in early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT). Methods and Materials: From an IRB-approved lung SBRT registry for patients treated between 2009-2013 we selected 48 (20 male, 28 female) patients with local failure. Median patient age was 72.3±10.3 years. Mean time to local failure was 15 ± 7.1 months. Physician-contoured gross tumor volumes (GTV) on the planning CT images were processed and 3D gray-level co-occurrence matrix (GLCM) based texture and histogram features were calculated in Matlab. Data were exported to R and a multiple linear regression model was used to examine the relationship between texture features and time-to-local-failure. Results: Multiple linear regression revealed that entropy (p=0.0233, multiple R2=0.60) from the GLCM-based texture analysis and the standard deviation (p=0.0194, multiple R2=0.60) from the histogram-based features were statistically significantly correlated with time-to-local-failure. Conclusion: Image-based texture analysis can be used to predict certain aspects of treatment outcomes of NSCLC patients treated with SBRT. We found that entropy and standard deviation calculated for the GTV on the CT images displayed a statistically significant correlation with time-to-local-failure in lung SBRT patients.
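
    GLCM entropy, the texture feature found predictive above, measures the disorder of gray-level co-occurrence probabilities. A numpy-only 2D sketch (the study computed 3D GLCMs in Matlab; this simplified version is for illustration only):

```python
import numpy as np

def glcm_entropy(img, levels=8, offset=(0, 1)):
    """Entropy of a gray-level co-occurrence matrix for one pixel offset.
    Minimal 2D sketch of the kind of texture feature the study used."""
    img = np.asarray(img)
    # quantize intensities into `levels` gray bins
    q = np.minimum((img * levels / (img.max() + 1e-12)).astype(int), levels - 1)
    dy, dx = offset
    a = q[:q.shape[0] - dy, :q.shape[1] - dx].ravel()
    b = q[dy:, dx:].ravel()
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1)           # count co-occurring gray-level pairs
    p = glcm / glcm.sum()
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())

rng = np.random.default_rng(0)
uniform = np.ones((32, 32))              # homogeneous patch
noisy = rng.random((32, 32))             # heterogeneous patch
print(glcm_entropy(uniform) < glcm_entropy(noisy))  # heterogeneity raises entropy
```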

  9. Failure analysis in the identification of synergies between cleaning monitoring methods.

    PubMed

    Whiteley, Greg S; Derry, Chris; Glasbey, Trevor

    2015-02-01

    The 4 monitoring methods used to manage the quality assurance of cleaning outcomes within health care settings are visual inspection, microbial recovery, fluorescent marker assessment, and rapid ATP bioluminometry. These methods each generate different types of information, presenting a challenge to the successful integration of monitoring results. A systematic approach to safety and quality control can be used to interrogate the known qualities of cleaning monitoring methods and provide a prospective management tool for infection control professionals. We investigated the use of failure mode and effects analysis (FMEA) for measuring failure risk arising through each cleaning monitoring method. FMEA uses existing data in a structured risk assessment tool that identifies weaknesses in products or processes. Our FMEA approach used the literature and a small experienced team to construct a series of analyses to investigate the cleaning monitoring methods in a way that minimized identified failure risks. FMEA applied to each of the cleaning monitoring methods revealed failure modes for each. The combined use of cleaning monitoring methods in sequence is preferable to their use in isolation. When these 4 cleaning monitoring methods are used in combination in a logical sequence, the failure modes noted for any 1 can be complemented by the strengths of the alternatives, thereby circumventing the risk of failure of any individual cleaning monitoring method. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  10. The role of tectonic damage and brittle rock fracture in the development of large rock slope failures

    NASA Astrophysics Data System (ADS)

    Brideau, Marc-André; Yan, Ming; Stead, Doug

    2009-01-01

    Rock slope failures are frequently controlled by a complex combination of discontinuities that facilitate kinematic release. These discontinuities are often associated with discrete folds, faults, and shear zones, and/or related tectonic damage. The authors, through detailed case studies, illustrate the importance of considering the influence of tectonic structures not only on three-dimensional kinematic release but also in the reduction of rock mass properties due to induced damage. The case studies selected reflect a wide range of rock mass conditions. In addition to active rock slope failures they include two major historic failures: the Hope Slide, which occurred in British Columbia in 1965, and the Randa rockslides, which occurred in Switzerland in 1991. Detailed engineering geological mapping combined with rock testing, GIS data analysis and, for selected cases, numerical modelling has shown that specific rock slope failure mechanisms may be conveniently related to rock mass classifications such as the Geological Strength Index (GSI). The importance of brittle intact rock fracture in association with pre-existing rock mass damage is emphasized through a consideration of the processes involved in the progressive, time-dependent development not only of through-going failure surfaces but also of lateral and rear-release mechanisms. Preliminary modelling data are presented to illustrate the importance of intact rock fracture and step-path failure mechanisms, and the results are discussed with reference to selected field observations. The authors emphasize the importance of considering all forms of pre-existing rock mass damage when assessing potential or operative failure mechanisms. It is suggested that a rock slope rock mass damage assessment can provide an improved understanding of the potential failure mode, the likely hazard presented, and appropriate methods of both analysis and remedial treatment.

  11. Minimizing treatment planning errors in proton therapy using failure mode and effects analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Yuanshui, E-mail: yuanshui.zheng@okc.procure.com; Johnson, Randall; Larson, Gary

Purpose: Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. Methods: The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. Results: In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. Conclusions: The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. The application of the FMEA framework and the implementation of an ongoing error tracking system at their clinic have proven to be useful in error reduction in proton treatment planning, thus improving the effectiveness and safety of proton therapy.
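The RPN bookkeeping described above is easy to sketch. Below is a minimal illustration (with hypothetical failure modes and scores, not the authors' data) of computing RPN = occurrence × severity × detectability on 1-10 scales, and of re-rating a mode once an error-tracking database revises its observed frequency:

```python
# Illustrative RPN sketch: hypothetical planning failure modes, each scored
# for occurrence (O), severity (S), and detectability (D, higher = harder
# to detect). RPN = O * S * D.
def rpn(o, s, d):
    return o * s * d

modes = {
    "wrong CT series fused": {"O": 3, "S": 8, "D": 6},
    "dose grid too coarse":  {"O": 4, "S": 5, "D": 3},
}

before = {m: rpn(v["O"], v["S"], v["D"]) for m, v in modes.items()}

# Tracked errors show fusion mistakes are rarer than first estimated,
# so occurrence is lowered and the RPN recomputed:
modes["wrong CT series fused"]["O"] = 1
after = {m: rpn(v["O"], v["S"], v["D"]) for m, v in modes.items()}

print(before)  # fusion RPN starts at 3*8*6
print(after)   # fusion RPN drops after occurrence is revised down
```

This mirrors the feedback loop in the abstract: the scores drive the quality management program, and the error database keeps the occurrence ratings honest.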

  12. Minimizing treatment planning errors in proton therapy using failure mode and effects analysis.

    PubMed

    Zheng, Yuanshui; Johnson, Randall; Larson, Gary

    2016-06-01

    Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. 
The application of the FMEA framework and the implementation of an ongoing error tracking system at their clinic have proven to be useful in error reduction in proton treatment planning, thus improving the effectiveness and safety of proton therapy.

  13. Recruiting patients to clinical trials: lessons from studies of growth hormone treatment in renal failure.

    PubMed

    Postlethwaite, R J; Reynolds, J M; Wood, A J; Evans, J H; Lewis, M A; Eminson, D M

    1995-07-01

Issues raised by the recruitment of children to trials of growth hormone treatment for short stature in chronic renal failure are reported. The information needs of parents and children are discussed; the latter should take account of the children's developmental level and anticipated involvement in decision making. When the incidence of certain side effects is low and probably unquantifiable, there are particular problems: failure to include these in information sheets may compromise informed consent, but inclusion will, at least for some families, make an already difficult decision even more complicated. A process of recruitment is described which attempts to protect against bias and which balances the requirement to impart neutral information with appropriate clinical involvement in the decision to enter the study. Other functions of the recruitment process are identified. Analysis of understanding and decision making demonstrates that good understanding is neither necessary nor sufficient for ease of decision making. The recruitment process was time consuming and needs planning and funding in future studies. Many of these issues are of general importance for trials of treatment in children.

  14. A statistical-based material and process guidelines for design of carbon nanotube field-effect transistors in gigascale integrated circuits.

    PubMed

    Ghavami, Behnam; Raji, Mohsen; Pedram, Hossein

    2011-08-26

Carbon nanotube field-effect transistors (CNFETs) show great promise as building blocks of future integrated circuits. However, synthesizing single-walled carbon nanotubes (CNTs) with accurate chirality and exact positioning control has been widely acknowledged as an exceedingly complex task. Indeed, density and chirality variations in CNT growth can compromise the reliability of CNFET-based circuits. In this paper, we present a novel statistical compact model to estimate the failure probability of CNFETs to provide some material and process guidelines for the design of CNFETs in gigascale integrated circuits. We use measured CNT spacing distributions within the framework of detailed failure analysis to demonstrate that both the CNT density and the ratio of metallic to semiconducting CNTs play dominant roles in defining the failure probability of CNFETs. Moreover, it is argued that the large-scale integration of these devices within an integrated circuit will be feasible only if a specific range of CNT density with an acceptable ratio of semiconducting to metallic CNTs can be adjusted in a typical synthesis process.
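As a rough illustration of why CNT density and the metallic fraction dominate device yield, consider a deliberately simplified model (an assumption for illustration, not the paper's compact model): the number of CNTs under a gate is Poisson-distributed, and a device fails open if no semiconducting CNT lands under it:

```python
import math

# Simplified, illustrative failure model: CNT count under a gate of width W
# is Poisson with mean density * W, and a fraction p_metallic of grown CNTs
# is metallic. The device is counted as failed if no semiconducting CNT
# lands under the gate (open failure). By Poisson thinning, the number of
# semiconducting CNTs is Poisson with mean density * W * (1 - p_metallic),
# so P(none) = exp(-that mean). Metallic-CNT shorts are ignored for brevity.
def open_failure_prob(density_per_um, gate_width_um, p_metallic):
    lam_semi = density_per_um * gate_width_um * (1.0 - p_metallic)
    return math.exp(-lam_semi)

# Denser growth (at the same metallic fraction) sharply reduces failures:
p_sparse = open_failure_prob(density_per_um=5, gate_width_um=0.5, p_metallic=1/3)
p_dense = open_failure_prob(density_per_um=20, gate_width_um=0.5, p_metallic=1/3)
print(f"sparse: {p_sparse:.3f}, dense: {p_dense:.5f}")
```

The numbers (5-20 CNTs/μm, a one-third metallic fraction) are generic textbook-style values chosen for the sketch, not measurements from the study.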

  15. Poster - Thur Eve - 38: Review of couch parameters using an FMEA.

    PubMed

    Larouche, R; Doucet, R; Rémy, E; Filion, A; Poirier, L

    2012-07-01

To improve patient safety during positioning, we undertook a systematic review of the processes used by our center to obtain couch positions. We used a Failure Mode and Effects Analysis (FMEA) framework, and fifteen different possible failures were identified and rated. The three major failures were 1) Loss of planned couch position and bias from the previous day's couch position, 2) DICOM origin or isocenter is different between two plans (imaging or treatment), and 3) Patient shift in opposite direction than intended. The main effect of these failures was to cause an override of couch parameters. Based on these results, we modified our processes, introduced new QA and software checks and developed new tolerance tables so as to improve system robustness and increase our success rate at catching failures before they can affect the patient. It has been a year since we made these modifications. Based on our results, we have reduced the number of overrides at our center from a maximum of 20.5% to a maximum of 6.3%, with an average of 4% of daily treatments. Our results suggest that FMEA is an effective tool in improving treatment quality that could be used in other centers. © 2012 American Association of Physicists in Medicine.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Stacy; English, Shawn; Briggs, Timothy

Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary that can be used to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be defined through estimation. The objectives of this paper deal with demonstrating the potential value of sensitivity and uncertainty quantification techniques during the failure analysis of loaded composite structures; and the proposed methods are applied to the simulation of the four-point flexural characterization of a carbon fiber composite material. Utilizing a recently implemented, phenomenological orthotropic material model that is capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. Then, a parameter study is completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior as well as to determine the value of an unknown material property. This process demonstrates the ability to formulate accurate predictions in the absence of a rigorous material characterization effort. Finally, the presented results indicate that a sensitivity analysis and parameter study can be used to streamline the material definition process as the described flexural characterization was used for model validation.

  17. Material wear and failure mode analysis of breakfast cereal extruder barrels and screw elements

    NASA Astrophysics Data System (ADS)

    Mastio, Michael Joseph, Jr.

    2005-11-01

Nearly seventy-five years ago, the single screw extruder was introduced as a means to produce metal products. Shortly after that, the extruder found its way into the plastics industry. Today much of the world's polymer industry utilizes extruders to produce items such as soda bottles, PVC piping, and toy figurines. Given the significant economic advantages of extruders over conventional batch flow systems, extruders have also migrated into the food industry. Food applications include the meat, pet food, and cereal industries, to name just a few. Cereal manufacturers utilize extruders to produce various forms of Ready-to-Eat (RTE) cereals. These cereals are made from grains such as rice, oats, wheat, and corn. The food industry has been incorrectly viewed as an extruder application requiring only minimal energy control and performance capability. This misconception has resulted in very little research in the area of material wear and failure mode analysis of breakfast cereal extruders. Breakfast cereal extruder barrels and individual screw elements are subjected to the extreme pressures and temperatures required to shear and cook the cereal ingredients, resulting in excessive material wear and catastrophic failure of these components. Therefore, this project focuses on the material wear and failure mode analysis of breakfast cereal extruder barrels and screw elements, modeled as a Discrete Time Markov Chain (DTMC) process in which historical data is used to predict future failures. Such predictive analysis will yield cost savings opportunities by providing insight into extruder maintenance scheduling and interchangeability of screw elements. In this DTMC wear analysis, four states of wear are defined and a probability transition matrix is determined based upon 24,041 hours of operational data. This probability transition matrix is used to predict when an extruder component will move to the next state of wear and/or failure.
This information can be used to determine maintenance schedules and screw element interchangeability.
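The DTMC idea above can be sketched in a few lines. The four wear states and the transition probabilities below are hypothetical placeholders, not the matrix fitted to the 24,041 hours of operational data:

```python
# Minimal DTMC wear sketch: four wear states, with state 3 ("failed")
# absorbing. Each step is one inspection interval; P[i][j] is the
# probability of moving from state i to state j in one interval.
P = [
    [0.90, 0.08, 0.02, 0.00],  # new
    [0.00, 0.85, 0.12, 0.03],  # light wear
    [0.00, 0.00, 0.80, 0.20],  # heavy wear
    [0.00, 0.00, 0.00, 1.00],  # failed (absorbing)
]

def step(dist, P):
    """Propagate a state-occupancy distribution one interval forward."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0, 0.0]  # component starts in the "new" state
for _ in range(10):          # ten inspection intervals
    dist = step(dist, P)
print(f"P(failed within 10 intervals) = {dist[3]:.3f}")
```

Running the distribution forward like this is what lets a maintainer pick a replacement interval: schedule service before the accumulated failure probability crosses an acceptable threshold.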

  18. Failure mode and effects analysis to reduce risk of anticoagulation levels above the target range during concurrent antimicrobial therapy.

    PubMed

    Daniels, Lisa M; Barreto, Jason N; Kuth, John C; Anderson, Jeremy R; Zhang, Beilei; Majka, Andrew J; Morgenthaler, Timothy I; Tosh, Pritish K

    2015-07-15

    A failure mode and effects analysis (FMEA) was conducted to analyze the clinical and operational processes leading to above-target International Normalized Ratios (INRs) in warfarin-treated patients receiving concurrent antimicrobial therapy. The INRs of patients on long-term warfarin therapy who received a course of trimethoprim-sulfamethoxazole, metronidazole, fluconazole, miconazole, or voriconazole (highly potentiating antimicrobials, or HPAs) between September 1 and December 31, 2011, were compared with patients on long-term warfarin therapy who did not receive any antimicrobial during the same period. A multidisciplinary team of physicians, pharmacists, and a systems analyst was then formed to complete a step-by-step outline of the processes involved in warfarin management and concomitant HPA therapy, followed by an FMEA. Patients taking trimethoprim-sulfamethoxazole, metronidazole, or fluconazole demonstrated a significantly increased risk of having an INR of >4.5. The FMEA identified 134 failure modes. The most common failure modes were as follows: (1) electronic medical records did not identify all patients receiving warfarin, (2) HPA prescribers were unaware of recommended warfarin therapy when HPAs were prescribed, (3) HPA prescribers were unaware that a patient was taking warfarin and that the drug interaction is significant, and (4) warfarin managers were unaware that an HPA had been prescribed for a patient. An FMEA determined that the risk of adverse events caused by concomitantly administering warfarin and HPAs can be decreased by preemptively identifying patients receiving warfarin, having a care process in place, alerting providers about the patient's risk status, and notifying providers at the anticoagulation clinic. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  19. Nondestructive SEM for surface and subsurface wafer imaging

    NASA Technical Reports Server (NTRS)

    Propst, Roy H.; Bagnell, C. Robert; Cole, Edward I., Jr.; Davies, Brian G.; Dibianca, Frank A.; Johnson, Darryl G.; Oxford, William V.; Smith, Craig A.

    1987-01-01

    The scanning electron microscope (SEM) is considered as a tool for both failure analysis as well as device characterization. A survey is made of various operational SEM modes and their applicability to image processing methods on semiconductor devices.

  20. Failure mode and effect analysis-based quality assurance for dynamic MLC tracking systems

    PubMed Central

    Sawant, Amit; Dieterich, Sonja; Svatos, Michelle; Keall, Paul

    2010-01-01

Purpose: To develop and implement a failure mode and effect analysis (FMEA)-based commissioning and quality assurance framework for dynamic multileaf collimator (DMLC) tumor tracking systems. Methods: A systematic failure mode and effect analysis was performed for a prototype real-time tumor tracking system that uses implanted electromagnetic transponders for tumor position monitoring and a DMLC for real-time beam adaptation. A detailed process tree of DMLC tracking delivery was created and potential tracking-specific failure modes were identified. For each failure mode, a risk probability number (RPN) was calculated from the product of the probability of occurrence, the severity of effect, and the detectability of the failure. Based on the insights obtained from the FMEA, commissioning and QA procedures were developed to check (i) the accuracy of coordinate system transformation, (ii) system latency, (iii) spatial and dosimetric delivery accuracy, (iv) delivery efficiency, and (v) accuracy and consistency of system response to error conditions. The frequency of testing for each failure mode was determined from the RPN value. Results: Failure modes with RPN ≥ 125 were recommended to be tested monthly. Failure modes with RPN < 125 were assigned to be tested during comprehensive evaluations, e.g., during commissioning, annual quality assurance, and after major software/hardware upgrades. System latency was determined to be ∼193 ms. The system showed consistent and accurate response to erroneous conditions. Tracking accuracy was within 3%–3 mm gamma (100% pass rate) for sinusoidal as well as a wide variety of patient-derived respiratory motions. The total time taken for monthly QA was ∼35 min, while that taken for comprehensive testing was ∼3.5 h. Conclusions: FMEA proved to be a powerful and flexible tool to develop and implement a quality management (QM) framework for DMLC tracking.
The authors conclude that the use of FMEA-based QM ensures efficient allocation of clinical resources because the most critical failure modes receive the most attention. It is expected that the set of guidelines proposed here will serve as a living document that is updated with the accumulation of progressively more intrainstitutional and interinstitutional experience with DMLC tracking. PMID:21302802
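The RPN-threshold scheduling rule in the abstract can be expressed directly. The failure modes and scores below are hypothetical; only the RPN ≥ 125 monthly cutoff is taken from the study:

```python
# Sketch of RPN-based QA scheduling: each failure mode is scored for
# occurrence, severity, and detectability (1-10 scales), and the product
# decides whether it is tested monthly or deferred to comprehensive
# evaluations (commissioning, annual QA, major upgrades).
MONTHLY_THRESHOLD = 125

modes = {
    "coordinate transform error": (4, 9, 5),  # RPN 180
    "excess system latency":      (3, 7, 6),  # RPN 126
    "leaf position drift":        (2, 6, 5),  # RPN 60
}

def qa_frequency(o, s, d, threshold=MONTHLY_THRESHOLD):
    return "monthly" if o * s * d >= threshold else "comprehensive"

for name, scores in modes.items():
    print(f"{name}: {qa_frequency(*scores)}")
```

This is the resource-allocation argument of the conclusion in miniature: the highest-risk modes get the most frequent attention, and everything else is batched into the ∼3.5 h comprehensive test.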

  1. Fatigue Behavior of Computer-Aided Design/Computer-Assisted Manufacture Ceramic Abutments as a Function of Design and Ceramics Processing.

    PubMed

    Kelly, J Robert; Rungruanganunt, Patchnee

    2016-01-01

Zirconia is being widely used, at times apparently by simply copying a metal design into ceramic. Structurally, ceramics are sensitive to both design and processing (fabrication) details. The aim of this work was to examine four computer-aided design/computer-assisted manufacture (CAD/CAM) abutments using a modified International Standards Organization (ISO) implant fatigue protocol to determine performance as a function of design and processing. Two full zirconia and two hybrid (Ti-based) abutments (n = 12 each) were tested wet at 15 Hz at a variety of loads to failure. Failure probability distributions were examined at each load, and when found to be the same, data from all loads were combined for lifetime analysis from accelerated to clinical conditions. Two distinctly different failure modes were found for both full zirconia and Ti-based abutments. One of these for zirconia has been reported clinically in the literature, and one for the Ti-based abutments has been reported anecdotally. The ISO protocol modification in this study forced failures in the abutments; no implant bodies failed. Extrapolated cycles for 10% failure at 70 N were: full zirconia, Atlantis 2 × 10^7 and Straumann 3 × 10^7; and Ti-based, Glidewell 1 × 10^6 and Nobel 1 × 10^21. Under accelerated conditions (200 N), performance differed significantly: Straumann clearly outperformed Astra (t test, P = .013), and the Glidewell Ti-base abutment also outperformed Atlantis zirconia at 200 N (Nobel ran-out; t test, P = .035). The modified ISO protocol in this study produced failures that were seen clinically. The manufacture matters; differences in design and fabrication that influence performance cannot be discerned clinically.

  2. Deformation processes in forging ceramics

    NASA Technical Reports Server (NTRS)

    Cannon, R. M.; Rhodes, W. H.

    1972-01-01

The deformation processes involved in the forging of refractory ceramic oxides were investigated. A combination of mechanical testing and forging was utilized to investigate both the flow and fracture processes involved. An additional hemisphere forging was done, which failed prematurely. Analysis and comparison with available fracture data for Al2O3 indicated possible causes of the failure. Examination of previous forgings indicated an increase in grain boundary cavitation with increasing strain.

  3. Ground Vehicle Condition Based Maintenance

    DTIC Science & Technology

    2010-10-04

Diagnostic Process Map. FMEAs developed: • Diesel Engine • Transmission • Alternators. Analysis: • Identify failure modes • Derive design factors and... S&T Initiatives • TARDEC P&D Process Map • Component Testing • ARL CBM Research • AMSAA SDC & Terrain Modeling... CBM+ Overview... RCM and CBM are core processes for CBM+ System Development • Army Regulation 750-1, 20 Sep 2007, p. 79 - Reliability Centered Maintenance (RCM

  4. Critical laboratory value notification: a failure mode effects and criticality analysis.

    PubMed

    Saxena, Sunita; Kempf, Raymond; Wilcox, Susan; Shulman, Ira A; Wong, Louise; Cunningham, Glenn; Vega, Elaine; Hall, Stephanie

    2005-09-01

    The Failure Mode Effects and Criticality Analysis (FMECA) was applied to improve the timeliness of reporting and the timeliness of receipt by the responsible licensed caregiver of critical laboratory values (CLVs) for outpatients and non-critical care inpatients. Through a risk prioritization process, the most important areas for improvement, including contacting the provider, assisting the provider in contacting the patient, and educating the provider in follow-up options available during off hours, were identified. A variety of systemic improvements were made; for example, the CLV notification process was centralized in the customer service center, with databases to help providers select options and make arrangements for follow-up care and an electronic abstract form to document the CLV notification process. Review of documentation and appropriateness of CLV follow-up care was integrated into the quality monitoring process to detect any variations or problems. The average CLV notification time for the month steadily declined during an eight-month period. Compliance was 100% for the "read-back" requirement and documentation in patient's health record. This proactive risk assessment project successfully modified the CLV notification program from a high- to a low-risk process, identified activities to further improve the process, and helped ensure compliance with a variety of requirements.

  5. "Chance favors only the prepared mind": preparing minds to systematically reduce hazards in the testing process in primary care.

    PubMed

    Singh, Ranjit; Hickner, John; Mold, Jim; Singh, Gurdev

    2014-03-01

    Testing plays a vital role in primary care. Failures in the process are common and can be harmful. As the great 19th century microbiologist Louis Pasteur put it "chance favors only the prepared mind." Our objective is to prepare minds in primary care practices to improve safety in the testing process. Various principles from safety science can be applied. A prospective methodology that uses an anonymous practice survey based on concepts from failure modes and effects analysis is proposed. Responses are used to rank perceived hazards in the testing process, leading to prioritization of areas for intervention. Secondary data analysis (using data from a study of medication safety) was used to explore the value of this approach in the context of assessing the testing process. At 3 primary care practice sites, a total of 61 staff members completed 4 survey items examining the testing process. Comparison across practices shows that each has a distinct profile of hazards, which would lead each on a different path toward improvement. The proposed approach treats each practice as a unique complex adaptive system aiming to help it thrive by inculcating trust, mutual respect, and collaboration. Implications for patient safety research and practice are discussed.

  6. Space Propulsion Hazards Analysis Manual (SPHAM). Volume 2. Appendices

    DTIC Science & Technology

    1988-10-01

Approved for public release... Volume I, Chapter 2 - Requirements and the Hazards Analysis Process... Volume I, Chapter 3 - Accident Scenarios... list of the hazardous materials that are discussed; 3) description of the failure scenarios; 4) type of post-accident environment that is discussed

  7. PNNL Data-Intensive Computing for a Smarter Energy Grid

    ScienceCinema

    Carol Imhoff; Zhenyu (Henry) Huang; Daniel Chavarria

    2017-12-09

The Middleware for Data-Intensive Computing (MeDICi) Integration Framework, an integrated platform to solve data analysis and processing needs, supports PNNL research on the U.S. electric power grid. MeDICi is enabling development of visualizations of grid operations and vulnerabilities, with the goal of near real-time analysis to aid operators in preventing and mitigating grid failures.

  8. Prediction of line failure fault based on weighted fuzzy dynamic clustering and improved relational analysis

    NASA Astrophysics Data System (ADS)

    Meng, Xiaocheng; Che, Renfei; Gao, Shi; He, Juntao

    2018-04-01

With the advent of the big data age, power system research has entered a new stage. At present, the main application of big data in power systems is early-warning analysis for power equipment: by collecting relevant historical fault data, system security is improved by predicting the warning level and failure rate of different kinds of equipment under certain relational factors. In this paper, a method of line failure rate warning is proposed. First, fuzzy dynamic clustering is carried out based on the collected historical information. To account for the imbalance between attributes, each attribute is weighted by its coefficient of variation, and weighted fuzzy clustering is then used to process the data more effectively. Next, by analyzing the basic idea and properties of the relational analysis model, the gray relational model is improved by combining a slope-based measure with the Deng model, and the incremental components of the two sequences are also incorporated to obtain the gray relational degree between samples. The failure rate is predicted according to a weighting principle. Finally, the procedure is illustrated with an example, and the validity and superiority of the proposed method are verified.
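The weighting and relational steps can be sketched as follows. This is a minimal illustration using coefficient-of-variation weights and the classical Deng relational coefficient (the paper's slope-based refinement is omitted), with made-up line-monitoring data:

```python
# Coefficient-of-variation weighting plus Deng's gray relational degree.
# Attributes with more relative spread (higher std/mean) get more weight,
# addressing the attribute imbalance mentioned in the abstract.
def cv_weights(samples):
    """Weight each attribute column by its coefficient of variation."""
    n_attr = len(samples[0])
    cvs = []
    for k in range(n_attr):
        col = [row[k] for row in samples]
        mean = sum(col) / len(col)
        std = (sum((x - mean) ** 2 for x in col) / len(col)) ** 0.5
        cvs.append(std / mean)
    total = sum(cvs)
    return [c / total for c in cvs]

def gray_relational_degree(reference, sample, weights, rho=0.5):
    """Deng's relational coefficient per attribute, combined by weight."""
    deltas = [abs(r - s) for r, s in zip(reference, sample)]
    dmin, dmax = min(deltas), max(deltas)
    coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in deltas]
    return sum(w * c for w, c in zip(weights, coeffs))

history = [[0.8, 0.35, 0.5], [0.6, 0.4, 0.7], [0.9, 0.2, 0.35]]  # past records
w = cv_weights(history)
current = [0.85, 0.25, 0.45]                                      # line under study
scores = [gray_relational_degree(current, h, w) for h in history]
print(scores)  # higher score = historically more similar operating state
```

Note this simplified version takes the min/max deltas per sample; the full Deng model takes them globally over all compared sequences, which matters when ranking many candidates at once.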

  9. Thermomechanical Controls on the Success and Failure of Continental Rift Systems

    NASA Astrophysics Data System (ADS)

    Brune, S.

    2017-12-01

Studies of long-term continental rift evolution are often biased towards rifts that succeed in breaking the continent like the North Atlantic, South China Sea, or South Atlantic rifts. However, there are many prominent rift systems on Earth where activity stopped before the formation of a new ocean basin, such as the North Sea, the West and Central African Rifts, or the West Antarctic Rift System. The factors controlling the success and failure of rifts can be divided in two groups: (1) Intrinsic processes - for instance frictional weakening, lithospheric thinning, shear heating or the strain-dependent growth of rift strength by replacing weak crust with strong mantle. (2) External processes - such as a change of plate divergence rate, the waning of a far-field driving force, or the arrival of a mantle plume. Here I use numerical and analytical modeling to investigate the role of these processes for the success and failure of rift systems. These models show that a change of plate divergence rate under constant force extension is controlled by the non-linearity of lithospheric materials. For successful rifts, a strong increase in divergence velocity can be expected to take place within a few million years, a prediction that agrees with independent plate tectonic reconstructions of major Mesozoic and Cenozoic ocean-forming rift systems. Another model prediction is that oblique rifting is mechanically favored over orthogonal rifting, which means that simultaneous deformation within neighboring rift systems of different obliquity and otherwise identical properties will lead to success and failure of the more and less oblique rift, respectively. This can be exemplified by the Cretaceous activity within the Equatorial Atlantic and the West African Rifts that led to the formation of a highly oblique oceanic spreading center and the failure of the West African Rift System.
While in nature the circumstances of rift success or failure may be manifold, simplified numerical and analytical models allow the isolated analysis of various contributing factors and to define a characteristic time scale for each process.

  10. Failure Analysis of Nonvolatile Residue (NVR) Analyzer Model SP-1000

    NASA Technical Reports Server (NTRS)

    Potter, Joseph C.

    2011-01-01

National Aeronautics and Space Administration (NASA) subcontractor Wiltech contacted the NASA Electrical Lab (NE-L) and requested a failure analysis of a Solvent Purity Meter, model SP-1000, produced by the VerTis Instrument Company. The meter, used to measure the contaminate in a solvent to determine the relative contamination on spacecraft flight hardware and ground servicing equipment, had been inoperable and in storage for an unknown amount of time. NE-L was asked to troubleshoot the unit and make a determination on what may be required to make the unit operational. Through the use of general troubleshooting processes and the review of a unit in service at the time of analysis, the unit was found to be repairable but would need the replacement of multiple components.

  11. How Analysis Informs Regulation: Success and Failure of ...

    EPA Pesticide Factsheets

How Analysis Informs Regulation: Success and Failure of Evolving Approaches to Polyfluoroalkyl Acid Contamination. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of the EPA mission to protect human health and the environment. HEASD research supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of the EPA strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces methods, measurements, and models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.

  12. A large landslide in volcanic rock: failure processes, geometry and propagation

    NASA Astrophysics Data System (ADS)

    Putu Krishna Wijaya, I.; Zangerl, Christian; Straka, Wolfgang; Mergili, Martin; Prasad Pudasaini, Shiva; Arifianti, Yukni

    2017-04-01

    The Jemblung landslide in Banjarnegara, Indonesia was one of the most destructive landslides in the country since 2006. This landslide caused at least 90 deaths while more than 1300 people were evacuated to safer areas. Concerning the failure mechanisms and type of material, the event can be characterized as a complex landslide (earth slide to earth flow). It originated in volcaniclastic soil/rock, i.e. andesites and lapilli-tuffs of varying degrees of weathering that lie above tuffaceous sandstones, conglomerates, as well as an alternation of shale and brown coal layers. Unmanned aerial vehicle (UAV) data from a secondary database are processed by using photogrammetric software to obtain an overview of the landslide geometry before and after the failure event. Stratigraphic field data and geoelectrical measurements are compared and correlated to build a geological-geometrical model and to estimate the volume of the landslide. Petrographical and XRD analysis are conducted to explain the mineral composition of parent rock and its weathering products. Rainfall as well as seismologic data are collected to study potential trigger and failure mechanisms. The geological-geometrical model of the landslide, digital terrain models of the process area and geotechnical soil properties are combined to model the initial sliding process by applying limit-equilibrium software products. Furthermore, the landslide propagation is simulated with the novel, GIS-based, two-phase mass flow modelling tool r.avaflow in order to improve the understanding of the dynamics of the Jemblung landslide.

  13. Factors Determining the Success and Failure of eHealth Interventions: Systematic Review of the Literature.

    PubMed

    Granja, Conceição; Janssen, Wouter; Johansen, Monika Alise

    2018-05-01

    eHealth has an enormous potential to improve healthcare cost, effectiveness, and quality of care. However, there seems to be a gap between the foreseen benefits of research and clinical reality. Our objective was to systematically review the factors influencing the outcome of eHealth interventions in terms of success and failure. We searched the PubMed database for original peer-reviewed studies on implemented eHealth tools that reported on the factors for the success or failure, or both, of the intervention. We conducted the systematic review by following the patient, intervention, comparison, and outcome framework, with 2 of the authors independently reviewing the abstract and full text of the articles. We collected data using standardized forms that reflected the categorization model used in the qualitative analysis of the outcomes reported in the included articles. Among the 903 identified articles, a total of 221 studies complied with the inclusion criteria. The studies were heterogeneous by country, type of eHealth intervention, method of implementation, and reporting perspectives. The article frequency analysis did not show a significant discrepancy between the number of reports on failure (392/844, 46.5%) and on success (452/844, 53.6%). The qualitative analysis identified 27 categories that represented the factors for success or failure of eHealth interventions. A quantitative analysis of the results revealed the category quality of healthcare (n=55) as the most mentioned as contributing to the success of eHealth interventions, and the category costs (n=42) as the most mentioned as contributing to failure. For the category with the highest unique article frequency, workflow (n=51), we conducted a full-text review. 
The analysis of the 23 articles that met the inclusion criteria identified 6 barriers related to workflow: workload (n=12), role definition (n=7), undermining of face-to-face communication (n=6), workflow disruption (n=6), alignment with clinical processes (n=2), and staff turnover (n=1). The reviewed literature suggested that, to increase the likelihood of success of eHealth interventions, future research must ensure a positive impact on the quality of care, with particular attention given to improved diagnosis, clinical management, and patient-centered care. There is a critical need to perform in-depth studies of the workflow(s) that the intervention will support and to understand the clinical processes involved. ©Conceição Granja, Wouter Janssen, Monika Alise Johansen. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 01.05.2018.

  14. Factors Determining the Success and Failure of eHealth Interventions: Systematic Review of the Literature

    PubMed Central

    Janssen, Wouter; Johansen, Monika Alise

    2018-01-01

    Background eHealth has an enormous potential to improve healthcare cost, effectiveness, and quality of care. However, there seems to be a gap between the foreseen benefits of research and clinical reality. Objective Our objective was to systematically review the factors influencing the outcome of eHealth interventions in terms of success and failure. Methods We searched the PubMed database for original peer-reviewed studies on implemented eHealth tools that reported on the factors for the success or failure, or both, of the intervention. We conducted the systematic review by following the patient, intervention, comparison, and outcome framework, with 2 of the authors independently reviewing the abstract and full text of the articles. We collected data using standardized forms that reflected the categorization model used in the qualitative analysis of the outcomes reported in the included articles. Results Among the 903 identified articles, a total of 221 studies complied with the inclusion criteria. The studies were heterogeneous by country, type of eHealth intervention, method of implementation, and reporting perspectives. The article frequency analysis did not show a significant discrepancy between the number of reports on failure (392/844, 46.5%) and on success (452/844, 53.6%). The qualitative analysis identified 27 categories that represented the factors for success or failure of eHealth interventions. A quantitative analysis of the results revealed the category quality of healthcare (n=55) as the most mentioned as contributing to the success of eHealth interventions, and the category costs (n=42) as the most mentioned as contributing to failure. For the category with the highest unique article frequency, workflow (n=51), we conducted a full-text review. 
The analysis of the 23 articles that met the inclusion criteria identified 6 barriers related to workflow: workload (n=12), role definition (n=7), undermining of face-to-face communication (n=6), workflow disruption (n=6), alignment with clinical processes (n=2), and staff turnover (n=1). Conclusions The reviewed literature suggested that, to increase the likelihood of success of eHealth interventions, future research must ensure a positive impact on the quality of care, with particular attention given to improved diagnosis, clinical management, and patient-centered care. There is a critical need to perform in-depth studies of the workflow(s) that the intervention will support and to understand the clinical processes involved. PMID:29716883

  15. Fatigue crack growth in an aluminum alloy-fractographic study

    NASA Astrophysics Data System (ADS)

    Salam, I.; Muhammad, W.; Ejaz, N.

    2016-08-01

    A two-fold approach was adopted to understand the fatigue crack growth process in an aluminum alloy: fatigue crack growth testing of samples and analysis of the fractured surfaces. Fatigue crack growth tests were conducted on middle tension M(T) samples prepared from an aluminum alloy cylinder. The tests were conducted under constant amplitude loading at an R ratio of 0.1. The applied stress was 20, 30, and 40 percent of the yield stress of the material. The fatigue crack growth data were recorded. After fatigue testing, the samples were subjected to detailed scanning electron microscopic (SEM) analysis. The resulting fracture surfaces were subjected to qualitative and quantitative fractographic examinations. Quantitative fracture analysis included an estimation of the crack growth rate (CGR) in different regions. The effect of the microstructural features on fatigue crack growth was examined. It was observed that in stage II (the crack growth region), the failure mode changes from intergranular to transgranular as the stress level increases. In the region of intergranular failure, localized brittle failure was observed and fatigue striations were difficult to reveal. However, in the region of transgranular failure, the crack path is independent of the microstructural features. In this region, a localized ductile failure mode was observed and well-defined fatigue striations were present in the wake of the fatigue crack. The effect of the interaction of the growing fatigue crack with microstructural features was not substantial. The final fracture (stage III) was ductile in all cases.
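The abstract above estimates crack growth rates in different regions; stage-II growth of this kind is conventionally summarized by the Paris law, da/dN = C·(ΔK)^m, which the abstract does not name explicitly. The sketch below fits C and m by log-log least squares; the (ΔK, da/dN) data points are hypothetical, for illustration only.

```python
# Hedged sketch: fitting the Paris law da/dN = C * (dK)**m by log-log
# least squares. The data pairs below are invented, not from the paper.
import math

data = [(5.0, 1.2e-8), (8.0, 6.3e-8), (12.0, 2.9e-7), (18.0, 1.4e-6)]  # (dK, da/dN)

xs = [math.log(dk) for dk, _ in data]
ys = [math.log(rate) for _, rate in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n

# Slope of log(da/dN) vs log(dK) is the Paris exponent m
m = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
C = math.exp(ybar - m * xbar)

def growth_rate(dk):
    """Fitted crack growth rate at stress-intensity range dk."""
    return C * dk ** m
```

For typical metals the fitted exponent m falls roughly between 2 and 4, so a value in that range is a quick sanity check on the regression.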

  16. Planned Educational Change in Search of a Research Tradition

    ERIC Educational Resources Information Center

    Heathers, Glenn

    1974-01-01

    Attempts at change in education have been largely unsuccessful, producing only superficial change because of inadequate needs analysis, unsound implementation plans, insufficient training in the use of innovations, and failure to involve practitioners in the decision-making process. (HMD)

  17. Advantage of the modified Lunn-McNeil technique over Kalbfleisch-Prentice technique in competing risks

    NASA Astrophysics Data System (ADS)

    Lukman, Iing; Ibrahim, Noor A.; Daud, Isa B.; Maarof, Fauziah; Hassan, Mohd N.

    2002-03-01

    Survival analysis algorithms are often applied in the data mining process. Cox regression is one of the survival analysis tools that has been used in many areas; for example, it can be used to analyze the failure times of crashed aircraft. Another survival analysis tool is competing risks, where more than one cause of failure acts simultaneously. Lunn and McNeil analyzed competing risks in the survival model using Cox regression with censored data. The modified Lunn-McNeil technique is a simplification of the Lunn-McNeil technique. The Kalbfleisch-Prentice technique involves fitting models separately for each type of failure, treating other failure types as censored. To compare the two techniques (the modified Lunn-McNeil and the Kalbfleisch-Prentice), a simulation study was performed. Samples with various sizes and censoring percentages were generated and fitted using both techniques. The study was conducted by comparing the inference of the models using root mean square error (RMSE), power tests, and Schoenfeld residual analysis. The power tests in this study were the likelihood ratio test, the Rao score test, and the Wald statistic. The Schoenfeld residual analysis was conducted to check the proportionality of the model through its covariates. The estimated parameters were computed for the cause-specific hazard situation. Results showed that the modified Lunn-McNeil technique was better than the Kalbfleisch-Prentice technique based on the RMSE measurement and Schoenfeld residual analysis. However, the Kalbfleisch-Prentice technique was better than the modified Lunn-McNeil technique based on the power test measurements.
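The Kalbfleisch-Prentice idea described above (fit each failure type separately, treating the other types as censored) reduces, in the simplest exponential-hazard case, to counting events of one cause over the total time at risk. This sketch illustrates that cause-specific estimate with made-up data; the actual paper fits full Cox regression models, which this toy calculation does not attempt.

```python
# Hedged sketch of a cause-specific hazard estimate in the
# Kalbfleisch-Prentice spirit: to estimate the hazard of cause k,
# failures from any other cause are treated as censoring. For a
# constant (exponential) hazard the MLE is events / total time at risk.
# The (time, cause) data below are hypothetical; cause 0 = censored.

data = [(2.0, 1), (3.5, 2), (1.2, 1), (4.0, 0), (2.8, 2), (5.1, 1)]

def cause_specific_hazard(data, cause):
    """MLE of a constant cause-specific hazard, other causes censored."""
    events = sum(1 for t, c in data if c == cause)
    time_at_risk = sum(t for t, c in data)  # every subject contributes exposure
    return events / time_at_risk

lam1 = cause_specific_hazard(data, 1)  # hazard of cause 1
lam2 = cause_specific_hazard(data, 2)  # hazard of cause 2
```

Note that both estimates share the same denominator (total exposure), which is exactly what "treating other failure types as censored" means operationally.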

  18. Failure mode and effects analysis: A community practice perspective.

    PubMed

    Schuller, Bradley W; Burns, Angi; Ceilley, Elizabeth A; King, Alan; LeTourneau, Joan; Markovic, Alexander; Sterkel, Lynda; Taplin, Brigid; Wanner, Jennifer; Albert, Jeffrey M

    2017-11-01

    To report our early experiences with failure mode and effects analysis (FMEA) in a community practice setting. The FMEA facilitator received extensive training at the AAPM Summer School. Early efforts focused on department education and emphasized the need for process evaluation in the context of high-profile radiation therapy accidents. A multidisciplinary team was assembled with representation from each of the major department disciplines. Stereotactic radiosurgery (SRS) was identified as the most appropriate treatment technique for the first FMEA evaluation, as it is largely self-contained and has the potential to produce high-impact failure modes. Process mapping was completed using breakout sessions, and then compiled into a simple electronic format. Weekly sessions were used to complete the FMEA evaluation. Risk priority number (RPN) values > 100 or severity scores of 9 or 10 were considered high risk. The overall time commitment was also tracked. The final SRS process map contained 15 major process steps and 183 subprocess steps. Splitting the process map into individual assignments was a successful strategy for our group. The process map was designed to contain enough detail that another radiation oncology team would be able to perform our procedures. Continuous facilitator involvement helped maintain consistent scoring during FMEA. Practice changes were made in response to the highest RPN scores, and the resulting new RPN scores were below our high-risk threshold. The estimated person-hour equivalent for project completion was 258 hours. This report provides important details on the initial steps we took to complete our first FMEA, providing guidance for community practices seeking to incorporate this process into their quality assurance (QA) program. Determining the feasibility of implementing complex QA processes in different practice settings will take on increasing significance as the field of radiation oncology transitions to the new TG-100 QA paradigm. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
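The risk-ranking rule this record uses (flag a failure mode when RPN = O × S × D exceeds 100, or when severity is 9 or 10) is simple enough to sketch directly. The failure modes and scores below are invented placeholders, not data from the study.

```python
# Hedged sketch of the FMEA screening rule described above:
# RPN = occurrence (O) x severity (S) x detectability (D), each scored 1-10;
# flag when RPN > 100 or S >= 9. All entries are illustrative, not the paper's.

failure_modes = [
    # (description, O, S, D) -- hypothetical examples
    ("wrong isocenter localized", 3, 9, 4),
    ("stale imaging used for planning", 2, 7, 5),
    ("collimator size transcribed incorrectly", 4, 8, 2),
]

def rpn(o, s, d):
    """Risk priority number: product of the three 1-10 scores."""
    return o * s * d

high_risk = [(name, rpn(o, s, d))
             for name, o, s, d in failure_modes
             if rpn(o, s, d) > 100 or s >= 9]

for name, score in sorted(high_risk, key=lambda t: -t[1]):
    print(f"{name}: RPN={score}")
```

The severity-only trigger matters: a rare but catastrophic failure mode (low O, S = 9-10) can carry an RPN under 100 and would otherwise escape review.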

  19. Andreas Acrivos Dissertation Award: Onset of Dynamic Wetting Failure - The Mechanics of High-Speed Fluid Displacement

    NASA Astrophysics Data System (ADS)

    Vandre, Eric

    2014-11-01

    Dynamic wetting is crucial to processes where a liquid displaces another fluid along a solid surface, such as the deposition of a coating liquid onto a moving substrate. Dynamic wetting fails when process speed exceeds some critical value, leading to incomplete fluid displacement and transient phenomena that impact a variety of applications, such as microfluidic devices, oil-recovery systems, and splashing droplets. Liquid coating processes are particularly sensitive to wetting failure, which can induce air entrainment and other catastrophic coating defects. Despite the industrial incentives for careful control of wetting behavior, the hydrodynamic factors that influence the transition to wetting failure remain poorly understood from empirical and theoretical perspectives. This work investigates the fundamentals of wetting failure in a variety of systems that are relevant to industrial coating flows. A hydrodynamic model is developed where an advancing fluid displaces a receding fluid along a smooth, moving substrate. Numerical solutions predict the onset of wetting failure at a critical substrate speed, which coincides with a turning point in the steady-state solution path for a given set of system parameters. Flow-field analysis reveals a physical mechanism where wetting failure results when capillary forces can no longer support the pressure gradients necessary to steadily displace the receding fluid. Novel experimental systems are used to measure the substrate speeds and meniscus shapes associated with the onset of air entrainment during wetting failure. Using high-speed visualization techniques, air entrainment is identified by the elongation of triangular air films with system-dependent size. Air films become unstable to thickness perturbations and ultimately rupture, leading to the entrainment of air bubbles. 
Meniscus confinement in a narrow gap between the substrate and a stationary plate is shown to delay air entrainment to higher speeds for a variety of water/glycerol solutions. In addition, liquid pressurization (relative to ambient air) further postpones air entrainment when the meniscus is located near a sharp corner along the plate. Recorded critical speeds compare well to predictions from the model, supporting the hydrodynamic mechanism for the onset of wetting failure. Lastly, the industrial practice of curtain coating is investigated using the hydrodynamic model. Due to the complexity of this system, a new computational approach is developed combining a finite element method and lubrication theory in order to improve the efficiency of the numerical analysis. Results show that the onset of wetting failure varies strongly with the operating conditions of this system. In addition, stresses from the air flow dramatically affect the steady wetting behavior of curtain coating. Ultimately, these findings emphasize the important role of two-fluid displacement mechanics in high-speed wetting systems.

  20. On possibilities of using global monitoring in effective prevention of tailings storage facilities failures.

    PubMed

    Stefaniak, Katarzyna; Wróżyńska, Magdalena

    2018-02-01

    Protection of common natural goods is one of the greatest challenges man faces every day. Extracting and processing natural resources such as mineral deposits contributes to the transformation of the natural environment. A number of activities designed to maintain balance are undertaken in accordance with the concept of integrated order. One of them is the use of comprehensive systems for monitoring tailings storage facilities. Despite such monitoring, failures still occur. The quantitative aspect of the failures illustrates both the scale of the problem and the consequences of tailings storage facility failures. The paper presents the vast possibilities provided by global monitoring for the effective prevention of these failures. Particular attention is drawn to the potential of multidirectional monitoring, including technical and environmental monitoring, using the example of one of the world's biggest hydrotechnical constructions: the Żelazny Most Tailings Storage Facility (TSF), Poland. Analysis of monitoring data allows preventive action to be taken against construction failures of facility dams, which can have devastating effects on human life and the natural environment.

  1. Safety evaluation of driver cognitive failures and driving errors on right-turn filtering movement at signalized road intersections based on Fuzzy Cellular Automata (FCA) model.

    PubMed

    Chai, Chen; Wong, Yiik Diew; Wang, Xuesong

    2017-07-01

    This paper proposes a simulation-based approach to estimating the safety impact of driver cognitive failures and driving errors. Fuzzy logic, which handles linguistic terms and uncertainty, is incorporated into a cellular automata model to simulate the decision-making process of right-turn filtering movement at signalized intersections. Simulation experiments are conducted to estimate the relationships of cognitive failures and driving errors with safety performance. Simulation results show that different types of cognitive failures have varied relationships with driving errors and safety performance. For right-turn filtering movement, cognitive failures are more likely to result in driving errors with a denser conflicting traffic stream. Moreover, different driving errors are found to have different safety impacts. The study provides a novel approach to linguistically assess cognitions and replicate the decision-making procedures of the individual driver. Compared to crash analysis, the proposed FCA model allows quantitative estimation of particular cognitive failures, and of the impact of cognitions on driving errors and safety performance. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Materials, processes, and environmental engineering network

    NASA Technical Reports Server (NTRS)

    White, Margo M.

    1993-01-01

    The Materials, Processes, and Environmental Engineering Network (MPEEN) was developed as a central holding facility for materials testing information generated by the Materials and Processes Laboratory. It contains information from other NASA centers and outside agencies, and also includes the NASA Environmental Information System (NEIS) and Failure Analysis Information System (FAIS) data. Environmental replacement materials information is a newly developed focus of MPEEN. This database is the NASA Environmental Information System, NEIS, which is accessible through MPEEN. Environmental concerns are addressed regarding materials identified by the NASA Operational Environment Team, NOET, to be hazardous to the environment. An environmental replacement technology database is contained within NEIS. Environmental concerns about materials are identified by NOET, and control or replacement strategies are formed. This database also contains the usage and performance characteristics of these hazardous materials. In addition to addressing environmental concerns, MPEEN contains one of the largest materials databases in the world. Over 600 users access this network on a daily basis. There is information available on failure analysis, metals and nonmetals testing, materials properties, standard and commercial parts, foreign alloy cross-reference, Long Duration Exposure Facility (LDEF) data, and Materials and Processes Selection List data.

  3. Incident Learning and Failure-Mode-and-Effects-Analysis Guided Safety Initiatives in Radiation Medicine

    PubMed Central

    Kapur, Ajay; Goode, Gina; Riehl, Catherine; Zuvic, Petrina; Joseph, Sherin; Adair, Nilda; Interrante, Michael; Bloom, Beatrice; Lee, Lucille; Sharma, Rajiv; Sharma, Anurag; Antone, Jeffrey; Riegel, Adam; Vijeh, Lili; Zhang, Honglai; Cao, Yijian; Morgenstern, Carol; Montchal, Elaine; Cox, Brett; Potters, Louis

    2013-01-01

    By combining incident learning and process failure-mode-and-effects-analysis (FMEA) in a structure-process-outcome framework we have created a risk profile for our radiation medicine practice and implemented evidence-based risk-mitigation initiatives focused on patient safety. Based on reactive reviews of incidents reported in our departmental incident-reporting system and proactive FMEA, high safety-risk procedures in our paperless radiation medicine process and latent risk factors were identified. Six initiatives aimed at the mitigation of associated severity, likelihood-of-occurrence, and detectability risks were implemented. These were the standardization of care pathways and toxicity grading, pre-treatment-planning peer review, a policy to thwart delay-rushed processes, an electronic whiteboard to enhance coordination, and the use of six sigma metrics to monitor operational efficiencies. The effectiveness of these initiatives over a 3-year period was assessed using process and outcome specific metrics within the framework of the department structure. There has been a 47% increase in incident-reporting, with no increase in adverse events. Care pathways have been used with greater than 97% clinical compliance rate. The implementation of peer review prior to treatment-planning and use of the whiteboard have provided opportunities for proactive detection and correction of errors. There has been a twofold drop in the occurrence of high-risk procedural delays. Patient treatment start delays are routinely enforced on cases that would have historically been rushed. Z-scores for high-risk procedures have steadily improved from 1.78 to 2.35. The initiatives resulted in sustained reductions of failure-mode risks as measured by a set of evidence-based metrics over a 3-year period. These augment or incorporate many of the published recommendations for patient safety in radiation medicine by translating them to clinical practice. PMID:24380074

  4. An Accident Precursor Analysis Process Tailored for NASA Space Systems

    NASA Technical Reports Server (NTRS)

    Groen, Frank; Stamatelatos, Michael; Dezfuli, Homayoon; Maggio, Gaspare

    2010-01-01

    Accident Precursor Analysis (APA) serves as the bridge between existing risk modeling activities, which are often based on historical or generic failure statistics, and system anomalies, which provide crucial information about the failure mechanisms that are actually operative in the system and which may differ in frequency or type from those in the various models. These discrepancies between the models (perceived risk) and the system (actual risk) provide the leading indication of an underappreciated risk. This paper presents an APA process developed specifically for NASA Earth-to-Orbit space systems. The purpose of the process is to identify and characterize potential sources of system risk as evidenced by anomalous events which, although not necessarily presenting an immediate safety impact, may indicate that an unknown or insufficiently understood risk-significant condition exists in the system. Such anomalous events are considered accident precursors because they signal the potential for severe consequences that may occur in the future, due to causes that are discernible from their occurrence today. Their early identification allows them to be integrated into the overall system risk model used to inform decisions relating to safety.

  5. Analysis of particulates on tape lift samples

    NASA Astrophysics Data System (ADS)

    Moision, Robert M.; Chaney, John A.; Panetta, Chris J.; Liu, De-Ling

    2014-09-01

    Particle counts on tape lift samples taken from a hardware surface exceeded threshold requirements in six successive tests despite repeated cleaning of the surface. Subsequent analysis of the particle size distributions of the failed tests revealed that the handling and processing of the tape lift samples may have played a role in the test failures. In order to explore plausible causes for the observed size distribution anomalies, scanning electron microscopy (SEM), energy dispersive X-ray spectroscopy (EDX), and time-of-flight secondary ion mass spectrometry (ToF-SIMS) were employed to perform chemical analysis on collected particulates. SEM/EDX identified Na and S containing particles on the hardware samples in a size range identified as being responsible for the test failures. ToF-SIMS was employed to further examine the Na and S containing particulates and identified the molecular signature of sodium alkylbenzene sulfonates, a common surfactant used in industrial detergent. The root cause investigation suggests that the tape lift test failures originated from detergent residue left behind on the glass slides used to mount and transport the tape following sampling and not from the hardware surface.

  6. Computer-aided operations engineering with integrated models of systems and operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Ryan, Dan; Fleming, Land

    1994-01-01

    CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.

  7. Failure Behavior of Granite Affected by Confinement and Water Pressure and Its Influence on the Seepage Behavior by Laboratory Experiments

    PubMed Central

    Cheng, Cheng; Li, Xiao; Li, Shouding; Zheng, Bo

    2017-01-01

    Failure behavior of granite material is paramount for host rock stability of geological repositories for high-level waste (HLW) disposal. Failure behavior also affects the seepage behavior related to transportation of radionuclide. Few of the published studies gave a consistent analysis on how confinement and water pressure affect the failure behavior, which in turn influences the seepage behavior of the rock during the damage process. Based on a series of laboratory experiments on NRG01 granite samples cored from Alxa area, a candidate area for China’s HLW disposal, this paper presents some detailed observations and analyses for a better understanding on the failure mechanism and seepage behavior of the samples under different confinements and water pressure. The main findings of this study are as follows: (1) Strength reduction properties were found for the granite under water pressure. Besides, the complete axial stress–strain curves show more obvious yielding process in the pre-peak region and a more gradual stress drop in the post-peak region; (2) Shear fracturing pattern is more likely to form in the granite samples with the effect of water pressure, even under much lower confinements, than the predictions from the conventional triaxial compressive results; (3) Four stages of inflow rate curves are divided and the seepage behaviors are found to depend on the failure behavior affected by the confinement and water pressure. PMID:28773157

  8. Estimation of submarine mass failure probability from a sequence of deposits with age dates

    USGS Publications Warehouse

    Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.

    2013-01-01

    The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
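A key point in the record above is that the likelihood-based estimate of the mean return time, unlike a simple arithmetic mean of inter-event intervals, accounts for the open intervals before the first and after the last dated deposit. For the exponential (Poisson-process) case that the study found best at one site, this reduces to a short calculation; the ages and record span below are hypothetical, and age-dating uncertainty, which the paper also handles, is ignored here.

```python
# Hedged sketch of an exponential-renewal mean-return-time estimate that
# counts the open intervals before the first and after the last dated
# deposit as censored exposure. Deposit ages are invented, not the IODP data.

deposit_ages_ka = [5.0, 12.5, 18.0, 31.0]   # event ages (thousands of years)
record_span_ka = (0.0, 40.0)                # observed window: present .. base of record

# Closed inter-event intervals between successive dated deposits
closed = [b - a for a, b in zip(deposit_ages_ka, deposit_ages_ka[1:])]

# Open (censored) intervals at both ends of the record
open_before = deposit_ages_ka[0] - record_span_ka[0]
open_after = record_span_ka[1] - deposit_ages_ka[-1]

# Exponential MLE: complete intervals divided by total observed exposure
lam = len(closed) / (sum(closed) + open_before + open_after)
mean_return_ka = 1.0 / lam
```

With these numbers the naive mean of the closed intervals is about 8.7 ka, while including the open intervals lengthens the estimate to about 13.3 ka, which is the direction of the effect the abstract describes.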

  9. An experimental evaluation of software redundancy as a strategy for improving reliability

    NASA Technical Reports Server (NTRS)

    Eckhardt, Dave E., Jr.; Caglayan, Alper K.; Knight, John C.; Lee, Larry D.; Mcallister, David F.; Vouk, Mladen A.; Kelly, John P. J.

    1990-01-01

    The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is suggested by the success of hardware redundancy for tolerating hardware failures. Although, as generally accepted, the independence of hardware failures resulting from physical wearout can lead to substantial increases in reliability for redundant hardware structures, a similar conclusion is not immediate for software. The degree to which design faults are manifested as independent failures determines the effectiveness of redundancy as a method for improving software reliability. Interest in multi-version software centers on whether it provides an adequate measure of increased reliability to warrant its use in critical applications. The effectiveness of multi-version software is studied by comparing estimates of the failure probabilities of these systems with the failure probabilities of single versions. The estimates are obtained under a model of dependent failures and compared with estimates obtained when failures are assumed to be independent. The experimental results are based on twenty versions of an aerospace application developed and certified by sixty programmers from four universities. Descriptions of the application, development and certification processes, and operational evaluation are given together with an analysis of the twenty versions.
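The dependence question at the center of this study can be made concrete with a small calculation: under the independence assumption, a 2-out-of-3 majority voter fails only when at least two versions fail on the same input, whereas perfectly correlated failures leave the system no better than a single version. The per-version failure probability below is an arbitrary illustrative value, not an estimate from the experiment.

```python
# Hedged sketch of why failure independence matters for multi-version
# software. Under independence, a 2-of-3 voter fails when >= 2 versions
# fail together; under perfect correlation, redundancy buys nothing.
# The single-version failure probability p is hypothetical.

p = 0.01  # assumed per-version failure probability on a random input

# Independent failures: exactly two fail, or all three fail
p_voter_independent = 3 * p**2 * (1 - p) + p**3

# Perfectly correlated failures: all versions fail on the same inputs
p_voter_correlated = p

print(f"independent: {p_voter_independent:.6f}, correlated: {p_voter_correlated}")
```

The gap between these two numbers (roughly 0.0003 versus 0.01 here) is the reliability gain that the experiment's dependent-failure model tests: the measured benefit lands somewhere between the two extremes depending on how often versions fail on the same inputs.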

  10. Philosophy of ATHEANA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bley, D.C.; Cooper, S.E.; Forester, J.A.

    ATHEANA, a second-generation Human Reliability Analysis (HRA) method, integrates advances in psychology with the engineering, human factors, and Probabilistic Risk Analysis (PRA) disciplines to provide an HRA quantification process and PRA modeling interface that can accommodate and represent human performance in real nuclear power plant events. The method uses the characteristics of serious accidents identified through retrospective analysis of serious operational events to set priorities in a search process for significant human failure events, unsafe acts, and error-forcing context (unfavorable plant conditions combined with negative performance-shaping factors). ATHEANA has been tested in a demonstration project at an operating pressurized water reactor.

  11. MO-G-BRE-05: Clinical Process Improvement and Billing in Radiation Oncology: A Case Study of Applying FMEA for CPT Code 77336 (continuing Medical Physics Consultation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spirydovich, S; Huq, M

    2014-06-15

    Purpose: The improvement of quality in healthcare can be assessed by Failure Mode and Effects Analysis (FMEA). In radiation oncology, FMEA, as applied to billing CPT code 77336, can improve both charge capture and, most importantly, the quality of the performed services. Methods: We created an FMEA table for the process performed under CPT code 77336. For a given process step, each member of the assembled team (physicist, dosimetrist, and therapist) independently assigned numerical values for probability of occurrence (O, 1–10), severity (S, 1–10), and probability of detection (D, 1–10) for every failure mode cause and effect combination. The risk priority number, RPN, was then calculated as the product of O, S and D, from which an average RPN was calculated for each combination mentioned above. A fault tree diagram, with each process sorted into 6 categories, was created with linked RPNs. For processes with high RPN, recommended actions were assigned. Two separate record-and-verify systems (Lantis and EMR-based ARIA) were considered. Results: We identified 9 potential failure modes and 19 corresponding potential causes of these failure modes, all resulting in an unjustified 77336 charge and compromised quality of care. In Lantis, the range of RPN was 24.5–110.8, and of S values 2–10. The highest-ranking RPN of 110.8 came from the failure mode described as "end-of-treatment check not done before the completion of treatment", and the highest S value of 10 (RPN=105) from "overrides not checked". For the same failure modes, within the ARIA electronic environment with its additional controls, RPN values were significantly lower (44.3 for the missing end-of-treatment check and 20.0 for overrides not checked). Conclusion: Our work has shown that missed charge capture also corresponded to some services not being performed. The absence of such necessary services may result in sub-optimal quality of care rendered to patients.
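
    The scoring scheme in this abstract (per-rater O, S, D values on 1–10 scales, multiplied into an RPN and averaged across the team) can be sketched as follows. The data structure and the example mode names are hypothetical, not taken from the study:

```python
def risk_priority(failure_modes):
    """Rank failure modes by the average risk priority number,
    RPN = O * S * D, computed per rater and then averaged."""
    ranked = []
    for name, scores in failure_modes.items():
        # scores: one (O, S, D) tuple per team member, each value on a 1-10 scale
        rpns = [o * s * d for o, s, d in scores]
        ranked.append((name, sum(rpns) / len(rpns)))
    # highest average RPN first
    return sorted(ranked, key=lambda item: item[1], reverse=True)
```

    For example, a mode scored (5, 6, 4) by one rater and (6, 5, 4) by another averages to an RPN of 120 and would outrank a mode whose per-rater RPNs average 105.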

  12. Failure mechanism of the polymer infiltration of carbon nanotube forests

    NASA Astrophysics Data System (ADS)

    Buchheim, Jakob; Park, Hyung Gyu

    2016-11-01

    Polymer melt infiltration is one of the feasible methods for manufacturing filter membranes out of carbon nanotubes (CNTs) on large scales. In practice, however, the process suffers from low yields, and the mechanism behind this failure is rather poorly understood. Here, we investigate a failure mechanism of polymer melt infiltration of vertically aligned (VA-) CNTs. In penetrating the VA-CNT interstices, polymer melts exert a capillarity-induced attractive force laterally on CNTs at the moving meniscus, leading to locally agglomerated macroscale bunches. Such a large configurational change can deform and distort individual CNTs so much as to cause buckling or breakdown of the alignment. In view of membrane manufacturing, this irreversible distortion of nanotubes is detrimental, as it could block the transport path of the membranes. The failure mechanism of the polymer melt infiltration is largely attributed to steric hindrance and an energy penalty of confined polymer chains. Euler beam theory and scaling analysis affirm that CNTs with low aspect ratio, thick walls and sparse distribution can maintain their vertical alignment. Our results can enrich a mechanistic understanding of the polymer melt infiltration process and offer guidelines to the facile large-scale manufacturing of the CNT-polymer filter membranes.

  13. Correction of engineering servicing regularity of transport-technological machines in the operational process

    NASA Astrophysics Data System (ADS)

    Makarova, A. N.; Makarov, E. I.; Zakharov, N. S.

    2018-03-01

    In the article, the issue of correcting engineering servicing regularity on the basis of actual dependability data of cars in operation is considered. The purpose of the conducted research is to increase the dependability of transport-technological machines by correcting engineering servicing regularity. The subject of the research is the mechanism by which engineering servicing regularity influences the reliability measure. Based on an analysis of previous research, a method of nonparametric estimation of the car failure measure from actual time-to-failure data was chosen. The possibility of describing the dependence of the failure measure on engineering servicing regularity with various mathematical models is considered, and it is shown that the exponential model is the most appropriate for that purpose. The obtained results can be used as a stand-alone method of correcting engineering servicing regularity for specific operational conditions, as well as for improving the technical-economic and economic-stochastic methods. Thus, on the basis of the conducted research, a method for correcting the engineering servicing regularity of transport-technological machines in the operational process was developed. The use of this method will allow the number of failures to be decreased.

  14. Robustness surfaces of complex networks

    PubMed Central

    Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis

    2014-01-01

    Although the robustness of complex networks has been extensively studied in the last decade, a unifying framework able to embrace all the proposed metrics is still lacking. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution to both problems by defining the R*-value and introducing the concept of the robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). We first normalize the initial robustness of a network to 1. Secondly, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several percentages of failures and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared. PMID:25178402
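
    As a rough sketch of the PCA step, one can weight robustness metrics by their loadings on the first principal component of the metric matrix. This is an assumption-laden stand-in, not the published R*-value algorithm; all names below are illustrative:

```python
import numpy as np

def robustness_weights(metric_matrix):
    """Weight robustness metrics by their loadings on the first
    principal component. Rows of metric_matrix are failure
    realizations; columns are robustness metrics."""
    X = metric_matrix - metric_matrix.mean(axis=0)
    # covariance across metrics; the leading eigenvector gives the weights
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    w = np.abs(eigvecs[:, -1])               # leading component, sign-free
    return w / w.sum()                       # normalize weights to sum to 1
```

    A metric that varies strongly across failure realizations receives a larger weight, which matches the intuition of picking the "most informative" metric per scenario.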

  15. Numerical investigation of contact stresses for fretting fatigue damage initiation

    NASA Astrophysics Data System (ADS)

    Bhatti, N. A.; Abdel Wahab, M.

    2017-05-01

    The fretting fatigue phenomenon occurs due to the interaction between contacting bodies under the application of cyclic and normal loads. In addition to environmental conditions and material properties, the response at the contact interface depends strongly on the combination of applied loads. A high stress concentration is present at the contact interface, which can start the damage nucleation process. At the culmination of the nucleation process, several micro-cracks are initiated, ultimately leading to structural failure. In this study, the effect of the ratio of tangential to normal load on contact stresses, slip amplitude and damage initiation is studied using finite element analysis. The results are evaluated using the Ruiz parameter, as it involves the slip amplitude, which is an important factor in fretting fatigue conditions. It is observed that the tangential-to-normal load ratio influences the stick zone size and damage initiation life. Furthermore, it is observed that tensile stress is the most important factor driving damage initiation to failure in cases where failure occurs predominantly in a mode I manner.
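
    The Ruiz parameter mentioned above combines contact stresses with slip amplitude. The abstract does not give the formula, so the textbook forms are assumed here; both common variants are trivial to compute pointwise along the contact:

```python
def ruiz_parameters(sigma_t, tau, delta):
    """Textbook Ruiz fretting parameters at a contact point (assumed
    forms, not taken from this paper).
    F1 = tau * delta weights frictional energy dissipation by slip;
    F2 = sigma_t * tau * delta additionally weights by the tangential
    tensile stress that drives mode I crack opening."""
    f1 = tau * delta
    f2 = sigma_t * tau * delta
    return f1, f2
```

    The point with the maximum F2 value is commonly taken as the predicted crack initiation site, consistent with the abstract's observation that tensile stress dominates mode I initiation.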

  16. Comprehensive Analysis of Gene Expression Profiles of Sepsis-Induced Multiorgan Failure Identified Its Valuable Biomarkers.

    PubMed

    Wang, Yumei; Yin, Xiaoling; Yang, Fang

    2018-02-01

    Sepsis is an inflammation-related disease, and severe sepsis induces multiorgan dysfunction, which is the most common cause of death of patients in noncoronary intensive care units. Novel therapeutic strategies have proven to have little impact on the mortality of severe sepsis, and unfortunately, its mechanisms still remain poorly understood. In this study, we analyzed gene expression profiles of severe sepsis with failure of the lung, kidney, and liver to identify potential biomarkers. We first downloaded the gene expression profiles from the Gene Expression Omnibus and performed preprocessing of the raw microarray data sets and identification of differentially expressed genes (DEGs) with the R programming software; then, significantly enriched functions of DEGs in lung, kidney, and liver failure sepsis samples were obtained from the Database for Annotation, Visualization, and Integrated Discovery; finally, a protein-protein interaction network was constructed for the DEGs based on the STRING database, and network modules were obtained through the MCODE cluster method. As a result, lung failure sepsis has the highest number of DEGs at 859, whereas the numbers of DEGs in kidney and liver failure sepsis samples are 178 and 175, respectively. In addition, 17 overlapping genes were obtained among the three lists of DEGs. Biological processes related to immune and inflammatory response were found to be significantly enriched in the DEGs. Network and module analysis identified four gene clusters in which all or most genes were upregulated. The expression changes of Icam1 and Socs3 were further validated through quantitative PCR analysis. This study should shed light on the development of sepsis and provide potential therapeutic targets for sepsis-induced multiorgan failure.

  17. Determining Component Probability using Problem Report Data for Ground Systems used in Manned Space Flight

    NASA Technical Reports Server (NTRS)

    Monaghan, Mark W.; Gillespie, Amanda M.

    2013-01-01

    During the shuttle era, NASA utilized a failure reporting system called Problem Reporting and Corrective Action (PRACA); its purpose was to identify and track system non-conformance. Over the years, the PRACA system evolved from a relatively nominal way to identify system problems into a very complex tracking and report-generating database. The PRACA system became the primary method to categorize any and all anomalies, from corrosion to catastrophic failure. The systems documented in the PRACA system range from flight hardware to ground or facility support equipment. While the PRACA system is complex, it does record all the failure modes, times of occurrence, lengths of system delay, parts repaired or replaced, and corrective actions performed. The difficulty lies in mining the data and then utilizing it to estimate component, Line Replaceable Unit (LRU), and system reliability metrics. In this paper, we identify a methodology to categorize qualitative data from the ground system PRACA database for common ground or facility support equipment. Then, using a heuristic developed for reviewing the PRACA data, we determine which reports identify a credible failure. These data are then used to determine inter-arrival times to estimate a reliability metric for a repairable component or LRU. This analysis is used to determine failure modes of the equipment, determine the probability of each component failure mode, and support various differing quantitative techniques for repairable system analysis. The result is an effective and concise reliability estimate for components used in manned space flight operations. The advantage is that the components or LRUs are evaluated in the same environment and conditions that occur during the launch process.
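
    The inter-arrival-time step can be sketched as below. The credibility heuristic in the paper is specific to PRACA and is not published in the abstract, so the predicate here is a placeholder, and the report structure is an assumption:

```python
def mtbf_from_reports(reports, is_credible):
    """Estimate mean time between failures from a problem-report log.
    reports: list of (time, description) tuples; is_credible: predicate
    that screens out non-failure anomalies (stand-in for the paper's
    screening heuristic)."""
    times = sorted(t for t, desc in reports if is_credible(desc))
    # inter-arrival gaps between successive credible failures
    gaps = [b - a for a, b in zip(times, times[1:])]
    return sum(gaps) / len(gaps)
```

    For a log in which only entries containing "failure" are credible, the estimate is simply the mean gap between those entries; richer repairable-system models (e.g., power-law processes) would start from the same inter-arrival times.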

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rusu, I; Thomas, T; Roeske, J

    Purpose: To identify areas of improvement in our liver stereotactic body radiation therapy (SBRT) program, using failure mode and effects analysis (FMEA). Methods: A multidisciplinary group consisting of one physician, three physicists, one dosimetrist and two therapists was formed. A process map covering 10 major stages of the liver SBRT program, from initial diagnosis to post-treatment follow-up, was generated. A total of 102 failure modes, together with their causes and effects, were identified. The occurrence (O), severity (S) and lack of detectability (D) were independently scored. The ranking was done using the risk probability number (RPN), defined as the product of the average O, S and D numbers for each mode. The scores were normalized to remove inter-observer variability while preserving individual ranking order. Further, a correlation analysis of the overall agreement on the rank order of all failure modes resulted in positive values for successive pairs of evaluators. The failure modes with the highest RPN values were considered for further investigation. Results: The average normalized RPN value for all modes was 39, with a range of 9 to 103. The FMEA analysis resulted in the identification of the top 10 critical failure modes as: incorrect CT-MR registration, MR scan not performed in treatment position, patient movement between CBCT acquisition and treatment, daily IGRT QA not verified, incorrect or incomplete ITV delineation, OAR contours not verified, inaccurate normal liver effective dose (Veff) calculation, failure of bolus tracking for 4D CT scan, setup instructions not followed for treatment, and plan evaluation metrics missed. Conclusion: The application of FMEA to our liver SBRT program led to the identification and possible improvement of areas affecting patient safety.

  19. Elucidating the mechanical effects of pore water pressure increase on the stability of unsaturated soil slopes

    NASA Astrophysics Data System (ADS)

    Buscarnera, G.

    2012-12-01

    The increase of pore water pressure due to rain infiltration can be a dominant factor in the activation of slope failures. This paper shows an application of the theory of material stability to the triggering analysis of this important class of natural hazards. The goal is to identify the mechanisms through which the process of suction removal promotes the initiation of mechanical instabilities. The interplay between the increase in pore water pressure and failure mechanisms is investigated at the material point level. In order to account for multiple failure mechanisms, the second-order work criterion is used and different stability indices are devised. The paper shows that the theory of material stability can assess the risk of shear failure and static liquefaction in both saturated and unsaturated contexts. It is shown that the combined use of an enhanced definition of second-order work for unsaturated porous media and a hydro-mechanical constitutive framework makes it possible to retrieve bifurcation conditions for water-infiltration processes in unsaturated deposits. This finding discloses the importance of the coupling terms that incorporate the interaction between the solid skeleton and the pore fluids. Consequently, these theoretical results suggest that some material properties not directly associated with shearing resistance (e.g., the potential for wetting compaction) can play an important role in the initiation of slope failures. According to the proposed interpretation, the process of pore pressure increase can be understood as a trigger of uncontrolled strains, which at the material point level are reflected by the onset of bifurcation conditions.
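
    At the material point level, the classical (drained, single-phase) form of the second-order work criterion reduces to checking the sign of d²W = Σᵢ dσᵢ dεᵢ over admissible loading directions; the enhanced unsaturated form in the paper adds coupling terms not sketched here. A minimal illustration with hypothetical names:

```python
def second_order_work(d_stress, d_strain):
    """Second-order work increment d2W = sum_i dsigma_i * depsilon_i
    for paired stress and strain increment components (vector form)."""
    return sum(ds * de for ds, de in zip(d_stress, d_strain))

def is_unstable(d_stress, d_strain, tol=0.0):
    """Material instability (bifurcation) is signaled when the
    second-order work is non-positive for some loading direction."""
    return second_order_work(d_stress, d_strain) <= tol
```

    In a triggering analysis one would scan candidate loading directions (here, different increment pairs) and flag the slope material as potentially unstable as soon as one direction yields d²W ≤ 0.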

  20. Markov and semi-Markov processes as a failure rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabski, Franciszek

    2016-06-08

    In this paper, the reliability function is defined by a stochastic failure rate process with nonnegative and right-continuous trajectories. Equations for the conditional reliability functions of an object, under the assumption that the failure rate is a semi-Markov process with an at most countable state space, are derived. A corresponding theorem is presented. The linear systems of equations for the appropriate Laplace transforms allow the reliability functions to be found for the alternating, Poisson and Furry-Yule failure rate processes.
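
    The underlying definition, R(t) = E[exp(-∫₀ᵗ λ(s) ds)], can be illustrated in the simplest degenerate case: a deterministic rate that alternates between two levels. This is a non-random stand-in for the alternating failure-rate process in the paper, with illustrative names throughout:

```python
import math

def reliability_alternating(t, lam_low, lam_high, period):
    """Reliability R(t) = exp(-integral of lambda(s) ds from 0 to t)
    for a deterministic rate that alternates between lam_low and
    lam_high every `period` time units (a simplified sketch)."""
    full_cycles, rem = divmod(t, 2 * period)
    # each full cycle contributes one low and one high segment
    integral = full_cycles * period * (lam_low + lam_high)
    # partial cycle: low segment first, then high segment
    integral += lam_low * min(rem, period) + lam_high * max(rem - period, 0.0)
    return math.exp(-integral)
```

    In the paper the switching times are themselves random, so R(t) is the expectation of this exponential over trajectories of the semi-Markov rate process; the Laplace-transform equations mentioned above make that expectation computable.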

  1. Ampoule Failure System

    NASA Technical Reports Server (NTRS)

    Watring, Dale A. (Inventor); Johnson, Martin L. (Inventor)

    1996-01-01

    An ampoule failure system for use in material processing furnaces comprising a containment cartridge and an ampoule failure sensor. The containment cartridge contains an ampoule of toxic material therein and is positioned within a furnace for processing. An ampoule failure probe is positioned in the containment cartridge adjacent the ampoule for detecting a potential harmful release of toxic material therefrom during processing. The failure probe is spaced a predetermined distance from the ampoule and is chemically chosen so as to undergo a timely chemical reaction with the toxic material upon the harmful release thereof. The ampoule failure system further comprises a data acquisition system which is positioned externally of the furnace and is electrically connected to the ampoule failure probe so as to form a communicating electrical circuit. The data acquisition system includes an automatic shutdown device for shutting down the furnace upon the harmful release of toxic material. It also includes a resistance measuring device for measuring the resistance of the failure probe during processing. The chemical reaction causes a step increase in resistance of the failure probe whereupon the automatic shutdown device will responsively shut down the furnace.

  2. Fidelity Failures in Brief Strategic Family Therapy for Adolescent Drug Abuse: A Clinical Analysis.

    PubMed

    Lebensohn-Chialvo, Florencia; Rohrbaugh, Michael J; Hasler, Brant P

    2018-04-30

    As evidence-based family treatments for adolescent substance use and conduct problems gain traction, cutting edge research moves beyond randomized efficacy trials to address questions such as how these treatments work and how best to disseminate them to community settings. A key factor in effective dissemination is treatment fidelity, which refers to implementing an intervention in a manner consistent with an established manual. While most fidelity research is quantitative, this study offers a qualitative clinical analysis of fidelity failures in a large, multisite effectiveness trial of Brief Strategic Family Therapy (BSFT) for adolescent drug abuse, where BSFT developers trained community therapists to administer this intervention in their own agencies. Using case notes and video recordings of therapy sessions, an independent expert panel first rated 103 cases on quantitative fidelity scales grounded in the BSFT manual and the broader structural-strategic framework that informs BSFT intervention. Because fidelity was generally low, the panel reviewed all cases qualitatively to identify emergent types or categories of fidelity failure. Ten categories of failures emerged, characterized by therapist omissions (e.g., failure to engage key family members, failure to think in threes) and commissions (e.g., off-model, nonsystemic formulations/interventions). Of these, "failure to think in threes" appeared basic and particularly problematic, reflecting the central place of this idea in structural theory and therapy. Although subject to possible bias, our observations highlight likely stumbling blocks in exporting a complex family treatment like BSFT to community settings. These findings also underscore the importance of treatment fidelity in family therapy research. © 2018 Family Process Institute.

  3. Application of Fault Management Theory to the Quantitative Selection of a Launch Vehicle Abort Trigger Suite

    NASA Technical Reports Server (NTRS)

    Lo, Yunnhon; Johnson, Stephen B.; Breckenridge, Jonathan T.

    2014-01-01

    The theory of System Health Management (SHM) and of its operational subset Fault Management (FM) states that FM is implemented as a "meta" control loop, known as an FM Control Loop (FMCL). The FMCL detects that all or part of a system is now failed, or in the future will fail (that is, cannot be controlled within acceptable limits to achieve its objectives), and takes a control action (a response) to return the system to a controllable state. In terms of control theory, the effectiveness of each FMCL is estimated based on its ability to correctly estimate the system state, and on the speed of its response to the current or impending failure effects. This paper describes how this theory has been successfully applied on the National Aeronautics and Space Administration's (NASA) Space Launch System (SLS) Program to quantitatively estimate the effectiveness of proposed abort triggers so as to select the most effective suite to protect the astronauts from catastrophic failure of the SLS. The premise behind this process is to be able to quantitatively provide the value versus risk trade-off for any given abort trigger, allowing decision makers to make more informed decisions. All current and planned crewed launch vehicles have some form of vehicle health management system integrated with an emergency launch abort system to ensure crew safety. While the design can vary, the underlying principle is the same: detect imminent catastrophic vehicle failure, initiate launch abort, and extract the crew to safety. Abort triggers are the detection mechanisms that identify that a catastrophic launch vehicle failure is occurring or is imminent and cause the initiation of a notification to the crew vehicle that the escape system must be activated. While ensuring that the abort triggers provide this function, designers must also ensure that the abort triggers do not signal that a catastrophic failure is imminent when in fact the launch vehicle can successfully achieve orbit. 
That is, the abort triggers must have low false negative rates to be sure that real crew-threatening failures are detected, and also low false positive rates to ensure that the crew does not abort from non-crew-threatening launch vehicle behaviors. The analysis process described in this paper is a compilation of over six years of lessons learned and refinements from experiences developing abort triggers for NASA's Constellation Program (Ares I Project) and the SLS Program, as well as the simultaneous development of SHM/FM theory. The paper will describe the abort analysis concepts and process, developed in conjunction with SLS Safety and Mission Assurance (S&MA) to define a common set of mission phase, failure scenario, and Loss of Mission Environment (LOME) combinations upon which the SLS Loss of Mission (LOM) Probabilistic Risk Assessment (PRA) models are built. This abort analysis also requires strong coordination with the Multi-Purpose Crew Vehicle (MPCV) and SLS Structures and Environments (STE) to formulate a series of abortability tables that encapsulate explosion dynamics over the ascent mission phase. The design and assessment of abort conditions and triggers to estimate their Loss of Crew (LOC) benefits also requires in-depth integration with other groups, including Avionics, Guidance, Navigation, and Control (GN&C), the Crew Office, Mission Operations, and Ground Systems. The outputs of this analysis are a critical input to SLS S&MA's LOC PRA models. The process described here may well be the first full quantitative application of SHM/FM theory to the selection of a sensor suite for any aerospace system.

  4. Anatomy of a bottleneck: diagnosing factors limiting population growth in the Puerto Rican parrot

    USGS Publications Warehouse

    Beissenger, S.R.; Wunderle, J.M.; Meyers, J.M.; Saether, B.-E.; Engen, S.

    2008-01-01

    The relative importance of genetic, demographic, environmental, and catastrophic processes that maintain population bottlenecks has received little consideration. We evaluate the role of these factors in maintaining the Puerto Rican Parrot (Amazona vittata) in a prolonged bottleneck from 1973 through 2000 despite intensive conservation efforts. We first conduct a risk analysis, then examine evidence for the importance of specific processes maintaining the bottleneck using the multiple competing hypotheses approach, and finally integrate these results through a sensitivity analysis of a demographic model using life-stage simulation analysis (LSA) to determine the relative importance of genetic, demographic, environmental, and catastrophic processes on population growth. Annual population growth has been slow and variable (1.0 ± 5.2 parrots per year, or an average λ = 1.05 ± 0.19) from 16 parrots (1973) to a high of 40-42 birds (1997-1998). A risk analysis based on population prediction intervals (PPI) indicates great risk and large uncertainty, with a range of 22-83 birds in the 90% PPI only five years into the future. Four primary factors (reduced hatching success due to inbreeding, failure of adults to nest, nest failure due to nongenetic causes, and reduced survival of adults and juveniles) were responsible for maintaining the bottleneck. Egg hatchability rates were low (70.6% per egg and 76.8% per pair), and hatchability increased after mate changes, suggesting inbreeding effects. Only an average of 34% of the population nested annually, which was well below the percentage of adults that should have reached the age of first breeding (41-56%). This chronic failure to nest appears to have been caused primarily by environmental and/or behavioral factors, and not by nest-site scarcity or a skewed sex ratio. 
Nest failure rates from nongenetic causes (i.e., predation, parasitism, and wet cavities) were low (29%) due to active management (protecting nests and fostering captive young into wild nests), diminishing the importance of nest failure as a limiting factor. Annual survival has been periodically reduced by catastrophes (hurricanes), which have greatly constrained population growth, but survival rates were high under non-catastrophic conditions. Although the importance of factors maintaining the Puerto Rican Parrot bottleneck varied throughout the 30-year period of study, we determined their long-term influence using LSA simulations to correlate variation in demographic rates with variation in population growth (λ). The bottleneck appears to have been maintained primarily by periodic catastrophes (hurricanes) that reduced adult survival, and secondarily by environmental and/or behavioral factors that resulted in a failure of many adults to nest. The influence of inbreeding through reduced hatching success played a much less significant role, even when additional effects of inbreeding on the production and mortality of young were incorporated into the LSA. Management actions needed to speed recovery include (1) continued nest guarding to minimize the effects of nest failure due to nongenetic causes; (2) creating a second population at another location on the island (a process that was recently initiated) to reduce the chance that hurricane strikes will cause extinction; and (3) determining the causes of the low percentage of breeders in the population and ameliorating them, which would have a large impact on population growth.

  5. Team B Intelligence Coups

    ERIC Educational Resources Information Center

    Mitchell, Gordon R.

    2006-01-01

    The 2003 Iraq prewar intelligence failure was not simply a case of the U.S. intelligence community providing flawed data to policy-makers. It also involved subversion of the competitive intelligence analysis process, where unofficial intelligence boutiques "stovepiped" misleading intelligence assessments directly to policy-makers and…

  6. A New Perspective on Teaching Constitutional Law

    ERIC Educational Resources Information Center

    Rosenblum, Robert

    1977-01-01

    The author suggests that a major failure of most law schools and traditional undergraduate constitutional law courses is that they omit an adequate analysis of the political nature of the judicial process. Political influences on a variety of court cases are discussed. (LBH)

  7. a New Method for Fmeca Based on Fuzzy Theory and Expert System

    NASA Astrophysics Data System (ADS)

    Byeon, Yoong-Tae; Kim, Dong-Jin; Kim, Jin-O.

    2008-10-01

    Failure Mode Effects and Criticality Analysis (FMECA) is one of the most widely used methods in modern engineering systems for investigating potential failure modes and their severity for the system. FMECA evaluates the criticality and severity of each failure mode and visualizes the risk-level matrix by assigning those indices to the column and row variables, respectively. Generally, those indices are determined subjectively by experts and operators; however, this process inevitably involves uncertainty. In this paper, a method for eliciting expert opinions that accounts for this uncertainty is proposed to evaluate criticality and severity. In addition, a fuzzy expert system is constructed in order to determine the crisp value of the risk level for each failure mode. Finally, an illustrative example system is analyzed in a case study. The results are worth considering when deciding the proper policies for each component of the system.

  8. Failure Forecasting in Triaxially Stressed Sandstones

    NASA Astrophysics Data System (ADS)

    Crippen, A.; Bell, A. F.; Curtis, A.; Main, I. G.

    2017-12-01

    Precursory signals to fracturing events have been observed to follow power-law accelerations in spatial, temporal, and size distributions leading up to catastrophic failure. In previous studies this behavior was modeled using Voight's relation for a geophysical precursor in order to perform 'hindcasts' by solving for the failure onset time. However, performing this analysis in retrospect creates a bias, as we know an event happened and when it happened, and we can search the data for precursors accordingly. We aim to remove this retrospective bias, thereby allowing us to make failure forecasts in real-time in a rock deformation laboratory. We triaxially compressed water-saturated 100 mm sandstone cores (Pc = 25 MPa, Pp = 5 MPa, strain rate 1.0E-5 s-1) to the point of failure while monitoring strain rate, differential stress, AEs, and continuous waveform data. Here we compare the current 'hindcast' methods on synthetic and our real laboratory data. We then apply these techniques to increasing fractions of the data sets to observe the evolution of the failure forecast time with precursory data. We discuss these results as well as our plan to mitigate false positives and minimize errors for real-time application. Real-time failure forecasting could revolutionize the field of hazard mitigation of brittle failure processes by allowing non-invasive monitoring of civil structures, volcanoes, and possibly fault zones.
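
    The hindcast idea (fitting Voight's accelerating-precursor relation and solving for the failure onset time) has a simple closed form in the commonly assumed α = 2 case, where the inverse precursor rate decays linearly to zero at failure. The sketch below is this generic inverse-rate method, not the authors' code:

```python
def forecast_failure_time(times, rates):
    """Inverse-rate failure forecast (Voight's relation with alpha = 2):
    1/rate falls linearly toward zero at the failure time, so a
    least-squares line through (t, 1/rate) crosses zero at t_f."""
    inv = [1.0 / r for r in rates]
    n = len(times)
    t_mean = sum(times) / n
    i_mean = sum(inv) / n
    slope = (sum((t - t_mean) * (i - i_mean) for t, i in zip(times, inv))
             / sum((t - t_mean) ** 2 for t in times))
    intercept = i_mean - slope * t_mean
    return -intercept / slope   # zero crossing of the fitted line
```

    Applying this to growing fractions of an accelerating event-rate series, as the abstract describes, shows how the forecast converges (or fails to converge) as failure approaches.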

  9. Cycles till failure of silver-zinc cells with competing failure modes - Preliminary data analysis

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.; Leibecki, H. F.; Bozek, J. M.

    1980-01-01

    A data analysis of cycles to failure of silver-zinc electrochemical cells with competing failure modes is presented. The test ran 129 cells through charge-discharge cycles until failure; the preliminary data analysis consisted of a response surface estimate of life. Cells fail through a low-voltage condition or an internal shorting condition; a competing-failure-modes analysis was made using maximum likelihood estimation for the extreme value life distribution. Extensive residual plotting and probability plotting were used to verify data quality and the selection of the model.

  10. Medication safety--reliability of preference cards.

    PubMed

    Dawson, Anthony; Orsini, Michael J; Cooper, Mary R; Wollenburg, Karol

    2005-09-01

    A CLINICAL ANALYSIS of surgeons' preference cards was initiated in one hospital as part of a comprehensive analysis to reduce medication-error risks by standardizing and simplifying the intraoperative medication-use process specific to the sterile field. THE PREFERENCE CARD ANALYSIS involved two subanalyses: a review of the information as it appeared on the cards and a failure mode and effects analysis of the process involved in using and maintaining the cards. THE ANALYSIS FOUND that the preference card system in use at this hospital is outdated. Variations and inconsistencies within the preference card system indicate that the use of preference cards as guides for medication selection for surgical procedures presents an opportunity for medication errors to occur.

  11. Geotechnical Characteristics and Stability Analysis of Rock-Soil Aggregate Slope at the Gushui Hydropower Station, Southwest China

    PubMed Central

    Shi, Chong; Xu, Fu-gang

    2013-01-01

    Two important features of the high slopes at Gushui Hydropower Station are layered accumulations (rock-soil aggregate) and multilevel toppling failures of plate rock masses; the Gendakan slope is selected for a case study in this paper. Geological processes of the layered accumulation of rock and soil particles are driven by the movement of water flow; the main reasons for the toppling failure of plate rock masses are the increasing weight of the upper rock-soil aggregate and mountain erosion by river water. Indoor triaxial compression test results show that the cohesion and friction angle of the rock-soil aggregate decrease with increasing water content; the cohesion and friction angle are 57.7 kPa and 31.3° for natural rock-soil aggregate and 26.1 kPa and 29.1° for saturated rock-soil aggregate, respectively. The deformation and failure mechanism of the rock-soil aggregate slope is a progressive process, and local landslides will occur step by step. Three-dimensional limit equilibrium analysis results show that the minimum safety factor of the Gendakan slope is 0.953 when the rock-soil aggregate is saturated, and a small-scale landslide will occur at the lower slope. PMID:24082854

  12. Geotechnical characteristics and stability analysis of rock-soil aggregate slope at the Gushui Hydropower Station, southwest China.

    PubMed

    Zhou, Jia-wen; Shi, Chong; Xu, Fu-gang

    2013-01-01

    Two important features of the high slopes at Gushui Hydropower Station are layered accumulations (rock-soil aggregate) and multilevel toppling failures of plate rock masses; the Gendakan slope is selected for a case study in this paper. Geological processes of the layered accumulation of rock and soil particles are driven by the movement of water flow; the main reasons for the toppling failure of plate rock masses are the increasing weight of the upper rock-soil aggregate and mountain erosion by river water. Indoor triaxial compression test results show that the cohesion and friction angle of the rock-soil aggregate decrease with increasing water content; the cohesion and friction angle are 57.7 kPa and 31.3° for natural rock-soil aggregate and 26.1 kPa and 29.1° for saturated rock-soil aggregate, respectively. The deformation and failure mechanism of the rock-soil aggregate slope is a progressive process, and local landslides will occur step by step. Three-dimensional limit equilibrium analysis results show that the minimum safety factor of the Gendakan slope is 0.953 when the rock-soil aggregate is saturated, and a small-scale landslide will occur at the lower slope.
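
    The effect of the reported strength loss on a safety factor can be illustrated with a single-block limit equilibrium sketch (far simpler than the paper's three-dimensional analysis); the slope angle, block weight, and slip-surface length below are assumed values, not data from the Gendakan slope.

```python
# Planar-slip factor of safety: resisting forces (cohesion + friction on
# the slip surface) divided by the driving force (downslope weight
# component). Strength parameters are the triaxial values quoted above;
# geometry and weight are illustrative assumptions.
import math

def factor_of_safety(c_kpa, phi_deg, weight_kn, alpha_deg, length_m):
    """Single-block limit equilibrium: resisting / driving forces."""
    alpha = math.radians(alpha_deg)
    normal = weight_kn * math.cos(alpha)      # normal force on slip surface
    resisting = c_kpa * length_m + normal * math.tan(math.radians(phi_deg))
    driving = weight_kn * math.sin(alpha)
    return resisting / driving

natural = factor_of_safety(57.7, 31.3, 2000.0, 35.0, 20.0)
saturated = factor_of_safety(26.1, 29.1, 2000.0, 35.0, 20.0)
print(round(natural, 2), round(saturated, 2))  # saturation lowers the FoS
```

    Even in this toy geometry, saturating the aggregate cuts the safety factor substantially, consistent with the paper's finding that the minimum safety factor drops below unity in the saturated state.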

  13. Effect of Sensors on the Reliability and Control Performance of Power Circuits in the Web of Things (WoT)

    PubMed Central

    Bae, Sungwoo; Kim, Myungchin

    2016-01-01

    In order to realize a true WoT environment, a reliable power circuit is required to ensure interconnections among a range of WoT devices. This paper presents research on sensors and their effects on the reliability and response characteristics of power circuits in WoT devices. The presented research can be used in various power circuit applications, such as energy harvesting interfaces, photovoltaic systems, and battery management systems for the WoT devices. As power circuits rely on the feedback from voltage/current sensors, the system performance is likely to be affected by the sensor failure rates, sensor dynamic characteristics, and their interface circuits. This study investigated how the operational availability of the power circuits is affected by the sensor failure rates by performing a quantitative reliability analysis. In the analysis process, this paper also includes the effects of various reconstruction and estimation techniques used in power processing circuits (e.g., energy harvesting circuits and photovoltaic systems). This paper also reports how the transient control performance of power circuits is affected by sensor interface circuits. With the frequency domain stability analysis and circuit simulation, it was verified that the interface circuit dynamics may affect the transient response characteristics of power circuits. The verification results in this paper showed that the reliability and control performance of the power circuits can be affected by the sensor types, fault tolerant approaches against sensor failures, and the response characteristics of the sensor interfaces. The analysis results were also verified by experiments using a power circuit prototype. PMID:27608020
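
    One way a sensor failure rate enters an operational availability estimate can be sketched as follows; the MTBF/MTTR figures and the 95% estimation coverage are illustrative assumptions, not values from the paper.

```python
# Steady-state availability sketch: A = MTBF / (MTBF + MTTR) for each
# repairable element. Without fault tolerance the sensor and the power
# stage are in series; with a sensorless estimation backup, the circuit
# rides through a fraction ("coverage") of sensor faults. All numbers
# are illustrative assumptions.

def availability(mtbf_h: float, mttr_h: float) -> float:
    return mtbf_h / (mtbf_h + mttr_h)

a_sensor = availability(50_000.0, 24.0)    # voltage/current sensor
a_circuit = availability(200_000.0, 48.0)  # rest of the power stage

# Series structure: both elements must be up.
a_series = a_sensor * a_circuit
# Estimation backup covering, say, 95% of sensor faults:
coverage = 0.95
a_tolerant = a_circuit * (a_sensor + coverage * (1.0 - a_sensor))

print(a_series < a_tolerant)  # True: fault tolerance raises availability
```

    This is the qualitative shape of the paper's result: reconstruction and estimation techniques reduce the contribution of sensor failures to overall unavailability.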

  14. A comparison of two prospective risk analysis methods: Traditional FMEA and a modified healthcare FMEA.

    PubMed

    Rah, Jeong-Eun; Manger, Ryan P; Yock, Adam D; Kim, Gwe-Ya

    2016-12-01

    To examine the abilities of traditional failure mode and effects analysis (FMEA) and modified healthcare FMEA (m-HFMEA) scoring methods by comparing their degree of congruence in identifying high-risk failures. The authors applied the two prospective quality management methods to surface image guided, linac-based radiosurgery (SIG-RS). For the traditional FMEA, decisions on how to improve an operation were based on the risk priority number (RPN). The RPN is the product of three indices: occurrence, severity, and detectability. The m-HFMEA approach utilized two indices, severity and frequency. A risk inventory matrix was divided into four categories: very low, low, high, and very high. For high-risk events, an additional evaluation was performed. Based upon the criticality of the process, it was decided whether additional safety measures were needed and what they should comprise. The two methods were compared independently to determine whether the results and rated risks matched. The authors' results showed an agreement of 85% between the FMEA and m-HFMEA approaches for the top 20 risks of SIG-RS-specific failure modes. The main differences between the two approaches were the distribution of the values and the observation that failure modes (52, 54, 154) with high m-HFMEA scores do not necessarily have high FMEA-RPN scores. In the m-HFMEA analysis, the risk score is determined on the basis of the established HFMEA Decision Tree™, or the failure mode is investigated more thoroughly. m-HFMEA is inductive because it requires the identification of consequences from causes, and semi-quantitative because it allows the prioritization of high risks and mitigation measures. It is therefore a useful prospective risk analysis tool for radiotherapy.
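
    The two scoring schemes can be sketched side by side: the traditional RPN multiplies all three indices, while an m-HFMEA-style lookup uses only severity and frequency, so good detectability cannot rescue a severe, frequent failure mode. The index scales and category thresholds below are assumptions, not the authors' matrix.

```python
# Traditional FMEA risk priority number versus an m-HFMEA-style two-index
# categorization. Scales (1..10 and 1..4) and thresholds are illustrative.

def fmea_rpn(occurrence: int, severity: int, detectability: int) -> int:
    return occurrence * severity * detectability   # each index 1..10

def m_hfmea_category(severity: int, frequency: int) -> str:
    score = severity * frequency                   # each index 1..4
    if score <= 2:
        return "very low"
    if score <= 5:
        return "low"
    if score <= 9:
        return "high"
    return "very high"

# A severe, frequent fault that is easy to detect ranks modestly by RPN
# but lands in the top category when detectability is ignored:
print(fmea_rpn(6, 8, 2))       # 96, far below the 1000 maximum
print(m_hfmea_category(4, 3))  # "very high"
```

    This divergence is exactly the pattern the authors report: failure modes with high m-HFMEA scores do not necessarily have high FMEA-RPN scores.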

  15. [THE FAILURE MODES AND EFFECTS ANALYSIS FACILITATES A SAFE, TIME AND MONEY SAVING OPEN ACCESS COLONOSCOPY SERVICE].

    PubMed

    Gingold-Belfer, Rachel; Niv, Yaron; Horev, Nehama; Gross, Shuli; Sahar, Nadav; Dickman, Ram

    2017-04-01

    Failure modes and effects analysis (FMEA) is used for the identification of potential risks in health care processes. We used a specific FMEA-based form for direct referral for colonoscopy and assessed it for procedure-related perforations. Ten experts in endoscopy evaluated and scored the entire referral process, the modes of preparation for the endoscopic procedure, the endoscopic procedure itself, and the discharge process. We used FMEA to assess the likelihood of occurrence, detection, and severity, and calculated the risk profile number (RPN) for each of the above points. According to the highest RPN results we designed a specific open access referral form and then compared the occurrence of colonic perforations (between 2010 and 2013) in patients who were referred through the open access arm (Group 1) with those who had a prior clinical consultation (non-open access, Group 2). Our experts in endoscopy (5 physicians and 5 nurses) identified 3 categories of failure modes that, on average, reached the highest RPNs. We identified 9,558 colonoscopies in Group 1 and 12,567 in Group 2. Perforations were identified in three patients from the open access group (1:3186, 0.03%) and in 10 from Group 2 (1:1256, 0.07%) (p = 0.024). Direct referral for colonoscopy saved 9,558 pre-procedure consultations and a sum of $850,000. The FMEA tool-based specific referral form facilitates a safe, time- and money-saving open access colonoscopy service. Our form may be adopted by other gastroenterological clinics in Israel.

  16. Finite element analysis of the failure mechanism of gentle slopes in weak disturbed clays

    NASA Astrophysics Data System (ADS)

    Lollino, Piernicola; Mezzina, Giuseppe; Cotecchia, Federica

    2014-05-01

    The Italian south-eastern Apennines are affected by a large number of deep, slow, active landslide processes that interact with urban structures and infrastructures throughout the region, causing damage and economic losses. For most landslide processes in the region, the main predisposing factors for instability are the piezometric regime and the extremely poor mechanical properties of the weak disturbed clays in the lower and central portions of the slopes, which are overlaid in some cases by a stiffer cap layer formed of rocky flysch, i.e. alternations of rock and soil strata. Based on phenomenological approaches, landslide processes are deemed to be triggered within the weaker clay layer and later to develop upward into the stiffer cap, with the shear bands also reaching considerable depths. The paper presents the results of two-dimensional numerical analyses of the failure mechanisms developing in the unstable slopes of the region, carried out by means of the finite element method (Plaxis 2011) applied to slope conditions representative of the region. In particular, the effects of slope inclination, along with the thickness and strength of the material forming the caprock at the top of the slope, on the depth of the sliding surface, the mobilised strengths, the evolution of the landslide process and the predisposing factors of landsliding have been explored through finite element analysis of an ideal case study representative of the typical geomechanical context of the region. The increase of slope inclination is shown to increase the depth of the shear band as well as to extend the landslide scarp upwards, in accordance with the field evidence. Moreover, the numerical results indicate how the increase of caprock thickness tends to confine the development of the shear band to the underlying weaker clay layer, so that the depth of the shear band is observed to reduce, even when the stiffer top stratum becomes involved in the retrogression of the failure process. The numerical results also allow for the investigation of the variation in seepage conditions that combines with the variations in lithostratigraphy in determining the features of the failure mechanism.

  17. Development of GENOA Progressive Failure Parallel Processing Software Systems

    NASA Technical Reports Server (NTRS)

    Abdi, Frank; Minnetyan, Levon

    1999-01-01

    A capability consisting of software development and experimental techniques has been developed and is described. The capability is integrated into GENOA-PFA to model polymer matrix composite (PMC) structures. The capability considers the physics and mechanics of composite materials and structures through integration of hierarchical multilevel macro-scale (lamina, laminate, and structure) and micro-scale (fiber, matrix, and interface) simulation analyses. The modeling involves (1) a ply layering methodology utilizing FEM elements with through-the-thickness representation, (2) simulation of the effects of material defects and conditions (e.g., voids, fiber waviness, and residual stress) on global static and cyclic fatigue strengths, (3) inclusion of material nonlinearities (by updating properties periodically) and geometrical nonlinearities (by Lagrangian updating), (4) simulation of crack initiation and growth to failure under static, cyclic, creep, and impact loads, (5) progressive fracture analysis to determine durability and damage tolerance, (6) identification of the percent contribution of various possible composite failure modes involved in critical damage events, and (7) determination of the sensitivities of failure modes to design parameters (e.g., fiber volume fraction, ply thickness, fiber orientation, and adhesive-bond thickness). GENOA-PFA progressive failure analysis is now ready for use to investigate the effects on structural responses of PMC material degradation from damage induced by static, cyclic (fatigue), creep, and impact loading in 2D/3D PMC structures subjected to hygrothermal environments. Its use will significantly facilitate targeting the design parameter changes that will be most effective in reducing the probability of a given failure mode occurring.

  18. Medical students' personal experience of high-stakes failure: case studies using interpretative phenomenological analysis.

    PubMed

    Patel, R S; Tarrant, C; Bonas, S; Shaw, R L

    2015-05-12

    Failing a high-stakes assessment at medical school is a major event for those who go through the experience. Students who fail at medical school may be more likely to struggle in professional practice, so helping individuals overcome problems and respond appropriately is important. There is little understanding of what factors influence how individuals experience failure or make sense of the failing experience in remediation. The aim of this study was to investigate the complexity surrounding the failure experience from the student's perspective using interpretative phenomenological analysis (IPA). The accounts of three medical students who had failed final re-sit exams were subjected to in-depth analysis using IPA methodology. IPA was used to analyse each transcript case by case, allowing the researcher to make sense of the participant's subjective world. The analysis process allowed the complexity surrounding the failure to be highlighted, alongside a narrative describing how students made sense of the experience. The circumstances surrounding students as they approached assessment and experienced failure at finals were a complex interaction between academic problems, personal problems (specifically finance and relationships), strained relationships with friends, family or faculty, and various mental health problems. Each student experienced multi-dimensional issues, each with their own individual combination of problems, but experienced remediation as a one-dimensional intervention focused only on improving performance in written exams. What these students needed included help with clinical skills, plus social and emotional support. Fear of termination of their course was a barrier to open communication with staff. These students' experience of failure was complex. The experience of remediation is influenced by the way in which students make sense of failing. Generic remediation programmes may fail to meet the needs of students for whom personal, social and mental health issues are part of the picture.

  19. Incorporating seismic observations into 2D conduit flow modeling

    NASA Astrophysics Data System (ADS)

    Collier, L.; Neuberg, J.

    2006-04-01

    Conduit flow modeling aims to understand the conditions of magma at depth, and to provide insight into the physical processes that occur inside the volcano. Low-frequency events, characteristic to many volcanoes, are thought to contain information on the state of magma at depth. Therefore, by incorporating information from low-frequency seismic analysis into conduit flow modeling a greater understanding of magma ascent and its interdependence on magma conditions and physical processes is possible. The 2D conduit flow model developed in this study demonstrates the importance of lateral pressure and parameter variations on overall magma flow dynamics, and the substantial effect bubbles have on magma shear viscosity and on magma ascent. The 2D nature of the conduit flow model developed here allows in depth investigation into processes which occur at, or close to the wall, such as magma cooling and brittle failure of melt. These processes are shown to have a significant effect on magma properties and therefore, on flow dynamics. By incorporating low-frequency seismic information, an advanced conduit flow model is developed including the consequences of brittle failure of melt, namely friction-controlled slip and gas loss. This model focuses on the properties and behaviour of magma at depth within the volcano, and their interaction with the formation of seismic events by brittle failure of melt.

  20. Failure analysis of ceramic clinical cases using qualitative fractography.

    PubMed

    Scherrer, Susanne S; Quinn, Janet B; Quinn, George D; Kelly, J Robert

    2006-01-01

    To educate dental academic staff and clinicians on the application of descriptive (qualitative) fractography for analyses of clinical and laboratory failures of brittle materials such as glass and ceramic. The fracture surface topography of failed glass, glass fiber-reinforced composite, and ceramic restorations (Procera, Cerestore, In-Ceram, porcelain-fused-to-metal) was examined utilizing a scanning electron microscope. Replicas and original failed parts were scrutinized for classic fractographic features such as hackle, wake hackle, twist hackle, arrest lines, and mirrors. Failed surfaces of the veneering porcelain of ceramic and porcelain-fused-to-metal crowns exhibited hackle, wake hackle, twist hackle, arrest lines, and compression curl, which were produced by the interaction of the advancing crack with the microstructure of the material. Fracture surfaces of glass and glass fiber-reinforced composite showed additional features, such as velocity hackle and mirrors. The observed features were good indicators of the local direction of crack propagation and were used to trace the crack back to an initial starting area (the origin). Examples of failure analysis in this study are intended to guide the researcher in using qualitative (descriptive) fractography as a tool for understanding the failure process in brittle restorative materials and also for assessing possible design inadequacies.

  1. Coordination and organization of security software process for power information application environment

    NASA Astrophysics Data System (ADS)

    Wang, Qiang

    2017-09-01

    As an important part of software engineering, the software process decides the success or failure of a software product. The design and development features of a security software process are discussed, as are the necessity and present significance of using such a process. In coordination with the functional software, the process for security software and its testing are discussed in depth. The process includes requirement analysis, design, coding, debugging and testing, submission, and maintenance. For each phase, the paper proposes subprocesses to support software security. As an example, the paper applies the above process to a power information platform.

  2. Independent Orbiter Assessment (IOA): FMEA/CIL assessment

    NASA Technical Reports Server (NTRS)

    Hinsdale, L. W.; Swain, L. J.; Barnes, J. E.

    1988-01-01

    The McDonnell Douglas Astronautics Company (MDAC) was selected to perform an Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL). Direction was given by the Orbiter and GFE Projects Office to perform the hardware analysis and assessment using the instructions and ground rules defined in NSTS 22206. The IOA analysis featured a top-down approach to determine hardware failure modes, criticality, and potential critical items. To preserve independence, the analysis was accomplished without reliance upon the results contained within the NASA and Prime Contractor FMEA/CIL documentation. The assessment process compared the independently derived failure modes and criticality assignments with the proposed NASA post-51-L FMEA/CIL documentation. When possible, assessment issues were discussed and resolved with the NASA subsystem managers. Unresolved issues were elevated to the Orbiter and GFE Projects Office manager, Configuration Control Board (CCB), or Program Requirements Control Board (PRCB) for further resolution. The most important Orbiter assessment finding was the previously unknown stuck autopilot push-button criticality 1/1 failure mode, whose worst-case effect could cause loss of crew/vehicle when the microwave landing system is not active. It is concluded that the NASA and Prime Contractor post-51-L FMEA/CIL documentation assessed by the IOA is technically accurate and complete. All CIL issues were resolved. No FMEA issues remain that have safety implications. Consideration should be given, however, to upgrading NSTS 22206 with definitive ground rules that more clearly spell out the limits of redundancy.

  3. Modeling Geometry and Progressive Failure of Material Interfaces in Plain Weave Composites

    NASA Technical Reports Server (NTRS)

    Hsu, Su-Yuen; Cheng, Ron-Bin

    2010-01-01

    A procedure combining a geometrically nonlinear, explicit-dynamics contact analysis, computer aided design techniques, and elasticity-based mesh adjustment is proposed to efficiently generate realistic finite element models for meso-mechanical analysis of progressive failure in textile composites. In the procedure, the geometry of fiber tows is obtained by imposing a fictitious expansion on the tows. Meshes resulting from the procedure are conformal with the computed tow-tow and tow-matrix interfaces but are incongruent at the interfaces. The mesh interfaces are treated as cohesive contact surfaces not only to resolve the incongruence but also to simulate progressive failure. The method is employed to simulate debonding at the material interfaces in a ceramic-matrix plain weave composite with matrix porosity and in a polymeric matrix plain weave composite without matrix porosity, both subject to uniaxial cyclic loading. The numerical results indicate progression of the interfacial damage during every loading and reverse loading event in a constant strain amplitude cyclic process. However, the composites show different patterns of damage advancement.

  4. A numerical model for predicting crack path and modes of damage in unidirectional metal matrix composites

    NASA Technical Reports Server (NTRS)

    Bakuckas, J. G.; Tan, T. M.; Lau, A. C. W.; Awerbuch, J.

    1993-01-01

    A finite element-based numerical technique has been developed to simulate damage growth in unidirectional composites. This technique incorporates elastic-plastic analysis, micromechanics analysis, failure criteria, and a node splitting and node force relaxation algorithm to create crack surfaces. Any combination of fiber and matrix properties can be used. One of the salient features of this technique is that damage growth can be simulated without pre-specifying a crack path. In addition, multiple damage mechanisms in the forms of matrix cracking, fiber breakage, fiber-matrix debonding and plastic deformation are capable of occurring simultaneously. The prevailing failure mechanism and the damage (crack) growth direction are dictated by the instantaneous near-tip stress and strain fields. Once the failure mechanism and crack direction are determined, the crack is advanced via the node splitting and node force relaxation algorithm. Simulations of the damage growth process in center-slit boron/aluminum and silicon carbide/titanium unidirectional specimens were performed. The simulation results agreed quite well with the experimental observations.

  5. Failure Analysis of Discrete Damaged Tailored Extension-Shear-Coupled Stiffened Composite Panels

    NASA Technical Reports Server (NTRS)

    Baker, Donald J.

    2005-01-01

    The results of an analytical and experimental investigation of the failure of composite stiffener panels with extension-shear coupling are presented. This tailored concept, when used in the cover skins of a tiltrotor aircraft wing, has the potential for increasing the aeroelastic stability margins and improving aircraft productivity. The extension-shear coupling is achieved by using unbalanced 45° plies in the skin. The failure analysis of two tailored panel configurations that have the center stringer and adjacent skin severed is presented. Finite element analysis of the damaged panels was conducted using the STAGS (STructural Analysis of General Shells) general-purpose finite element program, which includes a progressive failure capability for laminated composite structures based on point-stress analysis, traditional failure criteria, and ply discounting for material degradation. The progressive failure analysis predicted the path of the failure and the maximum load capability. There is less than a 12 percent difference between the predicted failure load and the experimental failure load, and a good match of panel stiffness and strength between the progressive failure analysis and the experimental results. The results indicate that the tailored concept would be feasible to use in the wing skin of a tiltrotor aircraft.

  6. Science, practice, and human errors in controlling Clostridium botulinum in heat-preserved food in hermetic containers.

    PubMed

    Pflug, Irving J

    2010-05-01

    The incidence of botulism in canned food in the last century is reviewed along with the background science; a few conclusions are reached based on analysis of published data. There are two primary aspects to botulism control: the design of an adequate process and the delivery of the adequate process to containers of food. The probability that the designed process will not be adequate to control Clostridium botulinum is very small, probably less than 1.0 x 10(-6) based on containers of food, whereas the failure of the operator of the processing equipment to deliver the specified process to containers of food may be of the order of 1 in 40 to 1 in 100, based on processing units (retort loads). In the commercial food canning industry, failure to deliver the process will probably be of the order of 1.0 x 10(-4) to 1.0 x 10(-6) when U.S. Food and Drug Administration (FDA) regulations are followed. Botulism incidents have occurred in food canning plants that have not followed the FDA regulations. It is possible but very rare for botulism to result from postprocessing contamination. It may thus be concluded that botulism incidents in canned food are primarily the result of human failure in the delivery of the designed or specified process to containers of food, which in turn results in the survival, outgrowth, and toxin production of C. botulinum spores. Therefore, efforts in C. botulinum control should be concentrated on reducing human errors in the delivery of the specified process to containers of food.
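
    The arithmetic behind the conclusion above can be sketched in a back-of-envelope comparison. The two probabilities are taken from the abstract; treating a delivery failure as compromising every container in the load is our simplifying assumption.

```python
# Compare the two failure sources: inadequate process design
# (< 1.0 x 10(-6) per container) versus failure to deliver the specified
# process (~1.0 x 10(-4) per retort load at the well-run end of the quoted
# range). If a delivery failure compromises the whole load, the
# per-container hazard from delivery errors is roughly the per-load rate.
p_design_per_container = 1.0e-6
p_delivery_per_load = 1.0e-4

ratio = p_delivery_per_load / p_design_per_container
print(round(ratio))  # delivery error dominates design inadequacy ~100-fold
```

    Even using the most favorable quoted delivery figure, human error in process delivery dominates process-design inadequacy by about two orders of magnitude, which is why the author directs control efforts there.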

  7. Safety and reliability analysis in a polyvinyl chloride batch process using dynamic simulator-case study: Loss of containment incident.

    PubMed

    Rizal, Datu; Tani, Shinichi; Nishiyama, Kimitoshi; Suzuki, Kazuhiko

    2006-10-11

    In this paper, a novel methodology for batch plant safety and reliability analysis is proposed using a dynamic simulator. A batch process involves several safety objects (e.g. sensors, controllers, valves) that are activated during the operational stage. The performance of the safety objects is evaluated by dynamic simulation and a fault propagation model is generated. Using the fault propagation model, an improved fault tree analysis (FTA) method using switching signal mode (SSM) is developed for estimating the probability of failures. Time-dependent failures can be treated as unavailability of safety objects, which can cause accidents in a plant. Finally, the rank of each safety object is formulated as a performance index (PI) and can be estimated using importance measures. The PI gives the prioritization of safety objects that should be investigated in a safety improvement program for the plant. The output of this method can be used for an optimal policy of safety object improvement and maintenance. The dynamic simulator was constructed using Visual Modeler (VM, the plant simulator developed by Omega Simulation Corp., Japan). A case study focuses on a loss of containment (LOC) incident in a polyvinyl chloride (PVC) batch process, which consumes the hazardous material vinyl chloride monomer (VCM).
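
    The basic fault tree arithmetic underlying such an FTA can be sketched with AND/OR gate probability combinations for independent events; the event probabilities and tree structure below are illustrative, not values from the PVC case study.

```python
# Fault tree gate arithmetic for independent basic events: AND gates take
# the product of probabilities; OR gates take 1 minus the product of the
# complements. The toy tree and numbers are illustrative assumptions.

def and_gate(*probs):
    out = 1.0
    for p in probs:
        out *= p
    return out

def or_gate(*probs):
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

p_sensor = 0.01     # level sensor unavailable
p_valve = 0.005     # relief valve stuck
p_operator = 0.02   # operator misses the alarm

# Top event (loss of containment) if the valve fails AND either the
# sensor or the operator fails:
p_top = and_gate(p_valve, or_gate(p_sensor, p_operator))
print(round(p_top, 6))  # 0.000149
```

    Importance measures such as those behind the performance index can then be read off the same tree, e.g. by recomputing the top-event probability with one basic event forced to 0 or 1.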

  8. Analysis of Long Bone and Vertebral Failure Patterns.

    DTIC Science & Technology

    1982-09-30

    processes further supported the findings of the scanning electron microscopy studies. In the impacted animals, the cartilage surface was eroded... cartilage matrix. In the six years post-impaction group, the articular cartilage had converted to fibrocartilage instead of normal hyaline cartilage. The... columns of four rhesus monkeys have been collected and are being processed for study with light microscopy and scanning electron microscopy. The baboon

  9. Progressive Damage and Failure Analysis of Composite Laminates

    NASA Astrophysics Data System (ADS)

    Joseph, Ashith P. K.

    Composite materials are widely used in various industries for making structural parts due to their higher strength-to-weight ratio, better fatigue life, corrosion resistance, and material property tailorability. To fully exploit the capability of composites, it is necessary to know the load carrying capacity of the parts made from them. Unlike metals, composites are orthotropic in nature and fail in a complex manner under various loading conditions, which makes them hard to analyze. The lack of reliable and efficient failure analysis tools for composites has led industries to rely more on coupon and component level testing to estimate the design space. Due to the complex failure mechanisms, composite materials require a very large number of coupon level tests to fully characterize their behavior. This makes the entire testing process very time consuming and costly. The alternative is to use virtual testing tools which can predict the complex failure mechanisms accurately. This reduces the cost to the associated computational expense, yielding significant savings. Some of the most desired features in a virtual testing tool are: (1) accurate representation of failure mechanisms, in that the failure progression predicted by the virtual tool must match that observed in experiments, and a tool has to be assessed based on the mechanisms it can capture; (2) computational efficiency, since the greatest advantages of virtual tools are the savings in time and money; and (3) applicability to a wide range of problems, because structural parts are subjected to a variety of loading conditions, including static, dynamic and fatigue conditions, and a good virtual testing tool should make good predictions for all of them. The aim of this PhD thesis is to develop a computational tool which can model the progressive failure of composite laminates under different quasi-static loading conditions. The analysis tool is validated by comparing the simulations against experiments for a selected number of quasi-static loading cases.

  10. Failure analysis of the fractured wires in sternal perichronal loops.

    PubMed

    Chao, Jesús; Voces, Roberto; Peña, Carmen

    2011-10-01

    We report the failure analysis of sternal wires in two cases in which a perichronal fixation technique was used to close the sternotomy. Various characteristics of the retrieved wires were compared to those of unused wires of the same grade from the same manufacturer and with surgical wire specifications. In both cases, the wire fracture was unbranched and transgranular and proceeded by a high-cycle fatigue process, apparently in the absence of corrosion. However, stress analysis indicates that the effective stress produced during strong coughing is lower than the yield strength. Our findings suggest that, in order to reduce the risk of sternal dehiscence, the diameter of the wire used should be increased. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Collaborative analysis of wheat endosperm compressive material properties

    USDA-ARS?s Scientific Manuscript database

    The objective measurement of cereal endosperm texture, for wheat (Triticum L.) in particular, is relevant to the milling, processing and utilization of grain. The objective of this study was to evaluate the inter-laboratory results of compression failure testing of wheat endosperm specimens of defi...

  12. Influence of Processing Conditions on the Mechanical Behavior and Morphology of Injection Molded Poly(lactic-co-glycolic acid) 85:15

    PubMed Central

    Fancello, Eduardo Alberto

    2017-01-01

    Two groups of PLGA specimens with different geometries (notched and unnotched) were injection molded under two melting temperatures and flow rates. The mechanical properties, morphology at the fracture surface, and residual stresses were evaluated for both processing conditions. The morphology of the fractured surfaces showed brittle, smooth fracture features for the majority of the specimens. Fracture images of the notched specimens suggest that the failure mechanisms at the surface differ from those at the core. Polarized light techniques indicated birefringence in all specimens, especially those molded at the lower temperature, which suggests residual stress due to rapid solidification. DSC analysis confirmed the existence of residual stress in all PLGA specimens. The specimens molded using the lower injection temperature and the low flow rate presented lower loss tangent values according to DMA and higher residual stress as shown by DSC, and the photoelastic analysis showed extensive birefringence. PMID:28848605

  13. A knowledge acquisition process to analyse operational problems in solid waste management facilities.

    PubMed

    Dokas, Ioannis M; Panagiotakopoulos, Demetrios C

    2006-08-01

    The available expertise on managing and operating solid waste management (SWM) facilities varies among countries and among types of facilities. Few experts are willing to record their experience, and few researchers systematically investigate the chains of events that could trigger operational failures in a facility; expertise acquisition and dissemination in SWM are neither popular nor easy, despite the great need for them. This paper presents a knowledge acquisition process aimed at capturing, codifying and expanding reliable expertise and propagating it to non-experts. The knowledge engineer (KE), the person performing the acquisition, must identify the events (or causes) that could trigger a failure, determine whether a specific event could trigger more than one failure, and establish how the various events are related among themselves and how they are linked to specific operational problems. The proposed process, which utilizes logic diagrams (fault trees) widely used in system safety and reliability analyses, was used for the analysis of 24 common landfill operational problems. The acquired knowledge led to the development of a web-based expert system (Landfill Operation Management Advisor, http://loma.civil.duth.gr), which estimates the likelihood of operational problems, provides advice and suggests solutions.
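
    The fault-tree logic behind such an analysis can be sketched in a few lines. Assuming independent basic events, an AND gate multiplies probabilities and an OR gate takes the complement of none occurring; the top event, event names, and probabilities below are hypothetical illustrations, not taken from the LOMA system.

```python
# Minimal fault-tree evaluation under an independence assumption.
# Event names and probabilities are hypothetical, for illustration only.

def and_gate(probs):
    # All inputs must occur: product of independent probabilities.
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    # At least one input occurs: complement of none occurring.
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical top event: "leachate overflow".
p_heavy_rain = 0.10
p_pump_failure = 0.05
p_no_inspection = 0.20

# Overflow requires heavy rain AND (pump failure OR missed inspection).
p_top = and_gate([p_heavy_rain, or_gate([p_pump_failure, p_no_inspection])])
print(round(p_top, 4))  # prints 0.024
```

In a real fault tree the gates are nested to arbitrary depth; the same two functions compose directly.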

  14. Mechanics of rainfall-induced flow failure in unsaturated shallow slopes (Invited)

    NASA Astrophysics Data System (ADS)

    Buscarnera, G.

    2013-12-01

    The increase in pore water pressure due to rain infiltration can be a dominant factor in the activation of slope instabilities. This work shows an application of the theory of material stability to the triggering analysis of this important class of natural hazards. The goal is to identify the mechanisms through which the process of rain infiltration promotes instabilities of the flow type in soil covers. The interplay between the increase in pore water pressure and failure mechanisms is investigated at the material point level. To account for multiple failure mechanisms, the second-order energy input is linked to controllability theory and used to define different types of stability indices, each associated with a specific mode of slope failure. It is shown that the theory can be used to assess both shear failure and static liquefaction in saturated and unsaturated soil covers. In particular, it is shown that these instability modes are regulated by the hydro-mechanical characteristics of the soil covers, as well as by their mutual coupling. This finding discloses the importance of the constitutive functions that simulate the interaction between the response of the solid skeleton and the fluid-retention characteristics of the soil. As a consequence, the results suggest that even material properties that are not directly associated with the shearing resistance (e.g., the potential for wetting compaction) may play a role in the initiation of catastrophic slope failures. According to the proposed interpretation, the process of pore pressure increase can be seen as the trigger of uncontrolled strains, which can anticipate the onset of frictional failure and promote a solid-to-fluid transition.
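
    The second-order energy criterion described above can be illustrated numerically. A minimal sketch, assuming a Hill-type condition in vector form: an incremental step is stable when the second-order work dσ · dε is positive. The stress and strain increments below are illustrative values, not data from the study.

```python
import numpy as np

# Hill-type material stability check (sketch): an incremental
# stress-strain pair is stable when d2W = dsigma . deps > 0.
# All numerical values are illustrative.

def second_order_work(d_sigma, d_eps):
    return float(np.dot(d_sigma, d_eps))

def is_stable(d_sigma, d_eps, tol=0.0):
    # Negative or vanishing second-order work flags a potential
    # instability mode (e.g. static liquefaction under wetting).
    return second_order_work(d_sigma, d_eps) > tol

stable_step = is_stable(np.array([1.0, 0.5]), np.array([0.002, 0.001]))
unstable_step = is_stable(np.array([-1.0, 0.5]), np.array([0.002, 0.001]))
print(stable_step, unstable_step)  # prints True False
```

A full implementation would evaluate this for each control condition (drained, undrained, suction-controlled), giving one stability index per failure mode as in the abstract.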

  15. Analysis of fault-tolerant neurocontrol architectures

    NASA Technical Reports Server (NTRS)

    Troudet, T.; Merrill, W.

    1992-01-01

    The fault-tolerance of analog parallel distributed implementations of a multivariable aircraft neurocontroller is analyzed by simulating weight and neuron failures in a simplified scheme of analog processing based on the functional architecture of the ETANN chip (Electrically Trainable Artificial Neural Network). The neural information processing is found to be only partially distributed throughout the set of weights of the neurocontroller synthesized with the backpropagation algorithm. Although the degree of distribution of the neural processing, and consequently the fault-tolerance of the neurocontroller, could be enhanced using Locally Distributed Weight and Neuron Approaches, a satisfactory level of fault-tolerance could only be obtained by retraining the degraded VLSI neurocontroller. The possibility of maintaining neurocontrol performance and stability in the presence of single weight or neuron failures was demonstrated through an automated retraining procedure of the neurocontroller based on a pre-programmed choice and sequence of the training parameters.
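
    The weight-failure simulation idea can be sketched on a toy network. The following is an illustrative Python sketch, not the ETANN architecture or the actual neurocontroller: each first-layer weight is forced to zero in turn (a stuck-at-zero failure) and the worst-case output deviation from the healthy network is recorded.

```python
import numpy as np

# Simulate single-weight failures in a tiny feedforward net and measure
# output deviation. Network sizes, weights, and the stuck-at-zero failure
# model are illustrative assumptions, not the paper's neurocontroller.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input -> hidden weights
W2 = rng.normal(size=(3, 2))   # hidden -> output weights
x = rng.normal(size=4)         # one example input vector

def forward(W1, W2, x):
    h = np.tanh(x @ W1)
    return np.tanh(h @ W2)

baseline = forward(W1, W2, x)

# Zero each first-layer weight in turn and record the worst-case
# deviation of the degraded network's output from the baseline.
worst = 0.0
for i in range(W1.shape[0]):
    for j in range(W1.shape[1]):
        W1f = W1.copy()
        W1f[i, j] = 0.0          # stuck-at-zero weight failure
        dev = np.max(np.abs(forward(W1f, W2, x) - baseline))
        worst = max(worst, dev)

print(f"worst-case output deviation: {worst:.3f}")
```

A small `worst` would indicate well-distributed processing; a large one indicates that the information is concentrated in a few weights, which is the situation the paper addresses by retraining.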

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Aiman; Laguna, Ignacio; Sato, Kento

    Future high-performance computing systems may face frequent failures with their rapid increase in scale and complexity. Resilience to faults has become a major challenge for large-scale applications running on supercomputers, which demands fault tolerance support for prevalent MPI applications. Among failure scenarios, process failures are one of the most severe issues as they usually lead to termination of applications. However, the widely used MPI implementations do not provide mechanisms for fault tolerance. We propose FTA-MPI (Fault Tolerance Assistant MPI), a programming model that provides support for failure detection, failure notification and recovery. Specifically, FTA-MPI exploits a try/catch model that enables failure localization and transparent recovery of process failures in MPI applications. We demonstrate FTA-MPI with synthetic applications and a molecular dynamics code CoMD, and show that FTA-MPI provides high programmability for users and enables convenient and flexible recovery of process failures.
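
    The try/catch recovery model can be illustrated in miniature. FTA-MPI itself targets C/MPI programs; the Python sketch below only mirrors the pattern, with `ProcessFailure` and the flaky worker as hypothetical stand-ins for a failed MPI rank.

```python
# Sketch of the try/catch recovery pattern behind FTA-MPI. The names and
# failure model here are hypothetical stand-ins; a real implementation
# would detect rank failures and respawn or shrink the communicator.

class ProcessFailure(Exception):
    """Raised when a (simulated) process fails mid-computation."""

def flaky_worker(data, _state={"fail_once": True}):
    # Mutable default used deliberately as persistent state for the demo:
    # the worker fails on its first call and succeeds afterwards.
    if _state["fail_once"]:
        _state["fail_once"] = False
        raise ProcessFailure("rank 3 terminated")
    return sum(data)

def run_with_recovery(data, max_retries=3):
    for _attempt in range(max_retries):
        try:
            return flaky_worker(data)      # normal execution path
        except ProcessFailure:
            # Failure notification caught: localize and recover
            # (here simply retry the work unit).
            continue
    raise RuntimeError("unrecoverable failure")

print(run_with_recovery([1, 2, 3]))  # prints 6 after one simulated failure
```

The point of the pattern is that the recovery policy lives at the call site, so the application decides per region whether to retry, redistribute, or abort.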

  17. SU-E-T-635: Process Mapping of Eye Plaque Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huynh, J; Kim, Y

    Purpose: To apply a risk-based assessment and analysis technique (AAPM TG 100) to eye plaque brachytherapy treatment of ocular melanoma. Methods: The roles and responsibilities of the personnel involved in eye plaque brachytherapy are defined for the retinal specialist, radiation oncologist, nurse, and medical physicist. The entire procedure was examined carefully: major processes were identified first, and then the details of each major process were followed. Results: Seventy-one total potential failure modes were identified across eight major processes (with the number of detailed modes in parentheses): patient consultation (2), pretreatment tumor localization (11), treatment planning (13), seed ordering and calibration (10), eye plaque assembly (10), implantation (11), removal (11), and deconstruction (3). Half of the total modes (36) are related to the physicist, even though the physicist is not involved in steps such as the actual suturing and removal of the plaque. Conclusion: Failure modes can arise not only from physicist-related procedures such as treatment planning and source activity calibration, but also from more clinical procedures performed by other medical staff. Improving the accuracy of communication in non-physicist-related clinical procedures could be an approach to preventing human errors, while a more rigorous physics double-check would reduce errors in physicist-related procedures. Eventually, based on this detailed process map, failure mode and effects analysis (FMEA) will identify the top tiers of modes by ranking all possible modes with a risk priority number (RPN). For those high-risk modes, fault tree analysis (FTA) will provide possible preventive action plans.

  18. Identification and assessment of common errors in the admission process of patients in Isfahan Fertility and Infertility Center based on "failure modes and effects analysis".

    PubMed

    Dehghan, Ashraf; Abumasoudi, Rouhollah Sheikh; Ehsanpour, Soheila

    2016-01-01

    Infertility and errors in the process of its treatment have a negative impact on infertile couples. The present study aimed to identify and assess common errors in the admission process by applying the approach of "failure modes and effects analysis" (FMEA). In this descriptive cross-sectional study, the admission process of the fertility and infertility center of Isfahan was selected for error evaluation based on the team members' decision. First, the admission process was charted through observations and interviews with employees, holding multiple panels, and using the FMEA worksheet, which has been used in many studies worldwide, including in Iran. Its validity was evaluated through content and face validity, and its reliability through review and confirmation of the obtained information by the FMEA team. Possible errors and their causes were then determined along with three indicators (severity of effect, probability of occurrence, and probability of detection), and corrective actions were proposed. Data were analyzed using the risk priority number (RPN), which is calculated by multiplying the severity of effect, probability of occurrence, and probability of detection. Twenty-five errors with RPN ≥ 125 were detected in the admission process, of which six had high priority in terms of severity and occurrence probability and were identified as high-risk errors. The team-oriented FMEA method could be useful for assessing errors and also for reducing the probability of their occurrence.
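
    The RPN ranking used in the study is straightforward to reproduce. In the sketch below the failure-mode names and severity/occurrence/detection scores are invented for illustration; only the formula RPN = severity × occurrence × detection and the ≥ 125 threshold follow the study.

```python
# FMEA risk-priority sketch: RPN = severity * occurrence * detection,
# with errors at RPN >= 125 flagged as high priority. The admission
# failure modes and their scores below are hypothetical examples.

def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

failure_modes = {
    "wrong patient file retrieved": (8, 4, 5),
    "insurance data mistyped":      (4, 6, 3),
    "appointment not registered":   (7, 5, 4),
}

# Keep only the modes at or above the study's RPN threshold of 125.
high_risk = {name: rpn(*sod) for name, sod in failure_modes.items()
             if rpn(*sod) >= 125}
print(high_risk)  # two of the three hypothetical modes are flagged
```

Each score is typically rated on a 1-10 scale, so RPN ranges from 1 to 1000; the threshold is a team decision, not a universal constant.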

  19. Fault growth and acoustic emissions in confined granite

    USGS Publications Warehouse

    Lockner, David A.; Byerlee, James D.

    1992-01-01

    The failure process in a brittle granite was studied by using acoustic emission techniques to obtain three dimensional locations of the microfracturing events. During a creep experiment the nucleation of faulting coincided with the onset of tertiary creep, but the development of the fault could not be followed because the failure occurred catastrophically. A technique has been developed that enables the failure process to be stabilized by controlling the axial stress to maintain a constant acoustic emission rate. As a result the post-failure stress-strain curve has been followed quasi-statically, extending to hours the fault growth process that normally would occur violently in a fraction of a second. The results from the rate-controlled experiments show that the fault plane nucleated at a point on the sample surface after the stress-strain curve reached its peak. Before nucleation, the microcrack growth was distributed throughout the sample. The fault plane then grew outward from the nucleation site and was accompanied by a gradual drop in stress. Acoustic emission locations showed that the fault propagated as a fracture front (process zone) with dimensions of 1 to 3 cm. As the fracture front passed by a given fixed point on the fault plane, the subsequent acoustic emission would drop. When growth was allowed to progress until the fault bisected the sample, the stress dropped to the frictional strength. These observations are in accord with the behavior predicted by Rudnicki and Rice's bifurcation analysis but conflict with experiments used to infer that shear localization would occur in brittle rock while the material is still hardening.

  20. Fracture analysis of tube boiler for physical explosion accident.

    PubMed

    Kim, Eui Soo

    2017-09-01

    Material and failure analysis techniques are key tools in forensic science for determining causation in explosion and bursting accidents that result from material and process defects in a product. Boiler ruptures caused by weld defects, corrosion, overheating, and material degradation have devastating power. If a weak section of a boiler burner fractures under internal pressure, the saturated water and vapor vaporize suddenly, expanding to many thousands of times their original volume. Such a failure can lead to a fatal disaster. In order to prevent a boiler explosion, it is critical to introduce systematic investigation and prevention measures in advance. In this research, the cause of a boiler failure is investigated through forensic engineering methods. Specifically, the failure mechanism is identified by fractography using scanning electron microscopy (SEM) and optical microscopy (OM), together with mechanical characterization. This paper presents a failure analysis of the weld joints of the water tank of a household boiler burner. Visual inspection was performed to establish the characteristics of the fracture in the as-received material. Micro-structural changes such as grain growth and carbide coarsening were also examined by optical microscope. Detailed studies of the fracture surfaces were made to trace the crack propagation in the weld joint of the boiler burner. It was concluded that the rupture may have been caused by overheating induced by insufficient water in the boiler, accelerated by the increase in metal temperature. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Categorizing accident sequences in the external radiotherapy for risk analysis

    PubMed Central

    2013-01-01

    Purpose This study identifies accident sequences from past accidents in order to support the application of risk analysis to external radiotherapy. Materials and Methods This study reviews 59 accidental cases in two retrospective safety analyses that extensively collected incidents in external radiotherapy. The two accident analysis reports are investigated to identify accident sequences, including initiating events, failures of safety measures, and consequences. This study classifies the accidents by treatment stage and source of error for initiating events, by type of failure in the safety measures, and by type of undesirable consequence and the number of affected patients. The accident sequences are then grouped into categories on the basis of the similarity of their progression; the cases fall into 14 groups of accident sequences. Results The result indicates that risk analysis needs to pay attention not only to the planning stage, but also to the calibration stage, which is carried out prior to the main treatment process. It also shows that human error is the largest contributor to initiating events as well as to failures of safety measures. This study also illustrates an event tree analysis for an accident sequence initiated in calibration. Conclusion This study is expected to provide insights into accident sequences for prospective risk analysis through the review of experience. PMID:23865005
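
    The event tree analysis mentioned for the calibration-initiated sequence can be sketched as a product of branch probabilities: an accident reaches the patient only if the initiating event occurs and every safety measure along the path fails. All numbers below are hypothetical.

```python
# Event-tree sketch for an accident sequence initiated at calibration.
# All probabilities are hypothetical illustrations, not data from the study.

def sequence_probability(p_initiator, barrier_failure_probs):
    # Multiply the initiating-event probability by each barrier's
    # failure probability along the accident path.
    p = p_initiator
    for q in barrier_failure_probs:
        p *= q
    return p

p_calibration_error = 1e-3     # initiating event: wrong beam calibration
p_physics_check_misses = 0.05  # safety measure 1 fails to catch it
p_weekly_qa_misses = 0.10      # safety measure 2 fails to catch it

p_accident = sequence_probability(
    p_calibration_error,
    [p_physics_check_misses, p_weekly_qa_misses],
)
# The error reaches treatment only if both independent checks fail.
print(p_accident)
```

Enumerating every success/failure branch of each barrier yields the full event tree; the sequence above is the single all-barriers-fail path.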

  2. Use-related risk analysis for medical devices based on improved FMEA.

    PubMed

    Liu, Long; Shuai, Ma; Wang, Zhu; Li, Ping

    2012-01-01

    In order to effectively analyze and control the use-related risk of medical devices, quantitative methodologies must be applied. Failure Mode and Effects Analysis (FMEA) is a proactive technique for error detection and risk reduction. In this article, an improved FMEA based on fuzzy mathematics and grey relational theory is developed to better carry out use-related risk analysis for medical devices. As an example, the analysis process using this improved FMEA method is described for a particular medical device (a C-arm X-ray machine).
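
    The grey relational step of such an improved FMEA can be sketched as follows: each failure mode's severity/occurrence/detection scores are compared with a worst-case reference series, and modes closer to the reference receive a higher relational grade, i.e. a higher risk rank. The failure modes and scores below are hypothetical, and the fuzzy weighting of the original method is omitted.

```python
import numpy as np

# Grey relational ranking of failure modes (simplified sketch of the idea;
# modes, scores, and the worst-case reference are illustrative).

def grey_grades(scores, reference, zeta=0.5):
    delta = np.abs(scores - reference)        # deviation sequences
    dmin, dmax = delta.min(), delta.max()     # global extrema over all modes
    coeff = (dmin + zeta * dmax) / (delta + zeta * dmax)
    return coeff.mean(axis=1)                 # one relational grade per mode

# Rows: (severity, occurrence, detection) on a 1-10 scale (hypothetical).
scores = np.array([
    [9.0, 5.0, 6.0],   # "exposure switch mislabeled"
    [7.0, 3.0, 4.0],   # "display hard to read"
])
worst_case = np.array([10.0, 10.0, 10.0])

grades = grey_grades(scores, worst_case)
# A higher grade means the mode lies closer to the worst case, i.e. riskier.
riskiest = int(np.argmax(grades))
print(riskiest)  # prints 0
```

Compared with a plain RPN product, the relational grade avoids ties between very different score patterns and can incorporate per-factor weights.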

  3. User-Defined Material Model for Progressive Failure Analysis

    NASA Technical Reports Server (NTRS)

    Knight, Norman F. Jr.; Reeder, James R. (Technical Monitor)

    2006-01-01

    An overview of different types of composite material system architectures and a brief review of progressive failure material modeling methods used for structural analysis including failure initiation and material degradation are presented. Different failure initiation criteria and material degradation models are described that define progressive failure formulations. These progressive failure formulations are implemented in a user-defined material model (or UMAT) for use with the ABAQUS/Standard nonlinear finite element analysis tool. The failure initiation criteria include the maximum stress criteria, maximum strain criteria, the Tsai-Wu failure polynomial, and the Hashin criteria. The material degradation model is based on the ply-discounting approach where the local material constitutive coefficients are degraded. Applications and extensions of the progressive failure analysis material model address two-dimensional plate and shell finite elements and three-dimensional solid finite elements. Implementation details and use of the UMAT subroutine are described in the present paper. Parametric studies for composite structures are discussed to illustrate the features of the progressive failure modeling methods that have been implemented.
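
    One ingredient of such a UMAT, a maximum-stress failure check followed by ply discounting, can be sketched outside the finite element code. The strength allowables, stresses, and degradation factor below are illustrative, not values from the paper.

```python
# Sketch of one progressive-failure ingredient: a maximum-stress check
# followed by ply discounting. All material values are illustrative.

# Ply strength allowables (MPa): fiber tension/compression (XT/XC),
# transverse tension/compression (YT/YC), in-plane shear (S).
XT, XC, YT, YC, S = 1500.0, 1200.0, 50.0, 200.0, 70.0

def max_stress_failed(s11, s22, s12):
    # Maximum-stress criterion: failure when any ply stress component
    # exceeds its corresponding allowable.
    return (s11 > XT or -s11 > XC or
            s22 > YT or -s22 > YC or
            abs(s12) > S)

def discount_ply(stiffness, factor=0.1):
    # Ply discounting: degrade the local constitutive coefficients
    # of the failed ply by a fixed factor.
    return {k: v * factor for k, v in stiffness.items()}

ply = {"E11": 140e3, "E22": 10e3, "G12": 5e3}  # moduli in MPa

# Transverse overload (s22 > YT) triggers failure and discounting.
if max_stress_failed(s11=900.0, s22=60.0, s12=30.0):
    ply = discount_ply(ply)
print(ply["E22"])  # E22 degraded from 10 GPa toward 1 GPa
```

In the UMAT setting this check runs at every integration point and load increment, and the degraded stiffness feeds back into the next equilibrium iteration.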

  4. An efficient scan diagnosis methodology according to scan failure mode for yield enhancement

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Tae; Seo, Nam-Sik; Oh, Ghil-Geun; Kim, Dae-Gue; Lee, Kyu-Taek; Choi, Chi-Young; Kim, InSoo; Min, Hyoung Bok

    2008-12-01

    Yield has always been a driving consideration in modern semiconductor fabrication. Statistically, the largest portion of wafer yield loss comes from defective scan failures. This paper presents efficient failure analysis methods, based on scan diagnosis, for initial yield ramp-up and ongoing products. Our analysis shows that more than 60% of the scan failure dies fall into the category of shift-mode failures in very deep submicron (VDSM) devices. However, localizing a scan shift-mode failure is much harder than localizing a capture-mode failure, because it is caused by a malfunction of the scan chain itself. Addressing this challenge, we propose the most suitable analysis method for each scan failure mode (capture/shift) for yield enhancement. For the capture failure mode, this paper describes a method that integrates the scan diagnosis flow with backside probing technology to obtain more accurate candidates. We also describe several unique techniques, such as a bulk back-grinding solution, efficient backside probing, and a signal analysis method. Lastly, we introduce a blocked-chain analysis algorithm for efficient analysis of the shift failure mode. The combination of the two methods contributes to yield enhancement. We confirm the failure candidates with physical failure analysis (PFA); the direct feedback from visualizing the defects is useful for mass-producing devices in a shorter time. Experimental data on mass products show that our method yields an average reduction of 13.7% in defective SCAN & SRAM-BIST failure rates and an 18.2% improvement in wafer yield.

  5. Operations analysis (study 2.1). Contingency analysis. [of failure modes anticipated during space shuttle upper stage planning

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Future operational concepts for the space transportation system were studied in terms of space shuttle upper stage failure contingencies possible during deployment, retrieval, or space servicing of automated satellite programs. Problems anticipated during mission planning were isolated using a modified 'fault tree' technique normally used in safety analyses. A comprehensive space servicing hazard analysis is presented which classifies possible failure modes under the categories of catastrophic collision, failure to rendezvous and dock, servicing failure, and failure to undock. The failure contingencies defined are to be taken into account during design of the upper stage.

  6. Proceedings of the 21st Project Integration Meeting

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Progress made by the Flat Plate Solar Array Project during the period April 1982 to January 1983 is described. Reports on polysilicon refining, thin film solar cell and module technology development, central station electric utility activities, silicon sheet growth and characteristics, advanced photovoltaic materials, cell and processes research, module technology, environmental isolation, engineering sciences, module performance and failure analysis and project analysis and integration are included.

  7. Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development

    PubMed Central

    Honig, Shanee; Oron-Gilad, Tal

    2018-01-01

    While substantial effort has been invested in making robots more reliable, experience demonstrates that robots operating in unstructured environments are often challenged by frequent failures. Despite this, robots have not yet reached a level of design that allows effective management of faulty or unexpected behavior by untrained users. To understand why this may be the case, an in-depth literature review was conducted to explore when people perceive and resolve robot failures, how robots communicate failure, how failures influence people's perceptions and feelings toward robots, and how these effects can be mitigated. Fifty-two studies were identified relating to communicating failures and their causes, the influence of failures on human-robot interaction (HRI), and mitigating failures. Since little research has been done on these topics within the HRI community, insights from the fields of human-computer interaction (HCI), human factors engineering, cognitive engineering and experimental psychology are presented and discussed. Based on the literature, we developed a model of information processing for robotic failures (Robot Failure Human Information Processing, RF-HIP) that guides the discussion of our findings. The model describes the way people perceive, process, and act on failures in human-robot interaction. The model includes three main parts: (1) communicating failures, (2) perception and comprehension of failures, and (3) solving failures. Each part contains several stages, all influenced by contextual considerations and mitigation strategies. Several gaps in the literature have become evident as a result of this evaluation. More focus has been given to technical failures than interaction failures. Few studies focused on human errors, on communicating failures, or on the cognitive, psychological, and social determinants that impact the design of mitigation strategies.
By providing the stages of human information processing, RF-HIP can be used as a tool to promote the development of user-centered failure-handling strategies for HRIs.

  8. Product Quality Improvement Using FMEA for Electric Parking Brake (EPB)

    NASA Astrophysics Data System (ADS)

    Dumitrescu, C. D.; Gruber, G. C.; Tişcă, I. A.

    2016-08-01

    One of the most frequently used methods to improve product quality is the complex FMEA (Failure Modes and Effects Analysis). Various FMEA variants are known in the literature, depending on the mode of application and on the targets; among them are the Process Failure Modes and Effects Analysis and the Failure Mode, Effects and Criticality Analysis (FMECA). Whichever variant the work team adopts, the goal of the method is the same: to optimize product design activities in research and design, the implementation of manufacturing processes, and the exploitation of the product by its beneficiaries. According to a market survey conducted among parts suppliers to vehicle manufacturers, the FMEA method is used by 75% of them. One purpose of its application is to detect any errors that remain after research and product development are considered complete; another is to initiate appropriate measures to avoid mistakes. Achieving these two goals allows errors to be avoided already in the design phase of the product, thereby avoiding additional costs in later stages of product manufacturing. The application of the FMEA method uses standardized forms; with their help, the initial assemblies of the product structure are established, in which all components are initially regarded as error-free. This work applies the FMEA method to optimize the quality of the components of the electric parking brake (EPB). This component, attached to the wheel-brake system, replaces the conventional mechanical parking brake in automobiles while ensuring comfort, functionality and durability and saving space in the passenger compartment.
    The paper describes the levels addressed in applying FMEA, the working arrangements at the four distinct levels of analysis, and how the risk priority number (RPN) is determined; it also presents the analysis of the risk factors and the measures established by the authors to reduce or completely eliminate the risks of this complex product.

  9. Reliability-based management of buried pipelines considering external corrosion defects

    NASA Astrophysics Data System (ADS)

    Miran, Seyedeh Azadeh

    Corrosion is one of the main deterioration mechanisms that degrade the integrity of energy pipelines, which transfer corrosive fluids or gases and interact with a corrosive environment. Corrosion defects are usually detected by periodic inspections using in-line inspection (ILI) methods. In order to ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed strategy is applied to a cathodically protected buried steel pipeline located in Mexico. First, a time-dependent power-law formulation is adopted to probabilistically characterize the growth of the maximum depth and length of the external corrosion defects. The dependency between defect depth and length is considered in the model development, and the generation of corrosion defects over time is characterized by a homogeneous Poisson process. The growth models' unknown parameters are evaluated from the ILI data through Bayesian updating with the Markov Chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and are able to consider defects newly generated since the last inspection. Results of this part of the study show that both the depth and length growth models predict damage quantities reasonably well, and a strong correlation between defect depth and length is found. Next, time-dependent system failure probabilities are evaluated using the developed corrosion growth models considering the prevailing uncertainties, where three failure modes, namely small leak, large leak and rupture, are considered.
    Performance of the pipeline is evaluated through the failure probability per km (called a sub-system), where each sub-system is considered a series system of the detected and newly generated defects within it. A sensitivity analysis is also performed to determine the parameters in the growth models to which the reliability of the studied pipeline is most sensitive. The reliability analysis results suggest that newly generated defects should be considered in calculating the failure probability, especially for predicting the long-term performance of the pipeline, and that the impact of the statistical uncertainty in the model parameters is significant and should be considered in the reliability analysis. Finally, with the evaluated time-dependent failure probabilities, a life-cycle cost analysis is conducted to determine the optimal inspection interval of the studied pipeline. The expected total life-cycle cost consists of the construction cost and the expected costs of inspection, repair, and failure. A repair is conducted when the failure probability for any of the described failure modes exceeds a pre-defined probability threshold after an inspection. Moreover, this study investigates the impact of the repair threshold values and the unit costs of inspection and failure on the expected total life-cycle cost and the optimal inspection interval through a parametric study. The analysis suggests that a smaller inspection interval leads to higher inspection costs but can lower the failure cost, and that the repair cost is less significant compared with the inspection and failure costs.
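
    The probabilistic ingredients described above (power-law growth with uncertain parameters and a depth-based limit state) can be combined in a small Monte Carlo sketch. The parameter distributions, wall thickness, and the 80%-of-wall leak criterion are illustrative assumptions, not values from the studied pipeline.

```python
import numpy as np

# Monte Carlo sketch of time-dependent failure probability: defect depth
# grows by a power law d(t) = a * (t - t0)^b with uncertain parameters,
# and a "small leak" is declared when depth exceeds 80% of the wall
# thickness. All distributions and thresholds are illustrative.

rng = np.random.default_rng(42)
n = 100_000
wall = 10.0                      # wall thickness, mm (assumed)
t0 = 0.0                         # defect initiation time, years

a = rng.lognormal(mean=0.0, sigma=0.3, size=n)   # growth coefficient
b = rng.normal(loc=0.9, scale=0.1, size=n)       # growth exponent

def leak_probability(t):
    # Fraction of Monte Carlo samples whose depth exceeds the limit state.
    depth = a * np.maximum(t - t0, 0.0) ** b
    return float(np.mean(depth > 0.8 * wall))

p10, p30 = leak_probability(10.0), leak_probability(30.0)
print(p10, p30)   # failure probability rises with elapsed time
```

The full analysis would add defect length, multiple limit states (small leak, large leak, rupture), Poisson-generated new defects, and a series-system combination per km, but the growth-plus-limit-state core is the same.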

  10. Improved Monitoring of Semi-Continuous Anaerobic Digestion of Sugarcane Waste: Effects of Increasing Organic Loading Rate on Methanogenic Community Dynamics

    PubMed Central

    Leite, Athaydes Francisco; Janke, Leandro; Lv, Zuopeng; Harms, Hauke; Richnow, Hans-Hermann; Nikolausz, Marcell

    2015-01-01

    The anaerobic digestion of filter cake and its co-digestion with bagasse were investigated, along with the effect of a gradual increase of the organic loading rate (OLR) from start-up to overload. Understanding the influence of environmental and technical parameters on the development of particular methanogenic pathways in the biogas process was an important aim for the prediction and prevention of process failure. The rapid accumulation of volatile organic acids at high OLRs of 3.0 to 4.0 gvs·L−1·day−1 indicated strong process inhibition. The methanogenic community dynamics of the reactors were monitored by the stable isotope composition of the biogas and by molecular biological analysis. A potential shift toward aceticlastic methanogenesis was observed along with the OLR increase under stable reactor operating conditions. Reactor overloading and process failure were indicated by a tendency to return to a predominance of hydrogenotrophic methanogenesis, with rising abundances of the orders Methanobacteriales and Methanomicrobiales and a drop in the abundance of the genus Methanosarcina. PMID:26404240

  11. Post-Challenger evaluation of space shuttle risk assessment and management

    NASA Technical Reports Server (NTRS)

    1988-01-01

As the shock of the Space Shuttle Challenger accident began to subside, NASA initiated a wide range of actions designed to ensure greater safety in various aspects of the Shuttle system and an improved focus on safety throughout the National Space Transportation System (NSTS) Program. Certain specific features of the NASA safety process are examined: the Critical Items List (CIL) and the NASA review of the Shuttle primary and backup units whose failure might result in loss of life, the Shuttle vehicle, or the mission; the failure modes and effects analyses (FMEA); and the hazard analyses and their review. The concept of modern risk management, including the essential element of objective risk assessment, is described and contrasted with NASA's safety process in general terms. The discussion, findings, and recommendations regarding particular aspects of the NASA STS safety assurance process are reported; the 11 subsections each deal with a different aspect of the process. The main lessons learned by SCRHAAC in the course of the audit are summarized.

  12. Ground-based LiDAR application to characterize sea cliff instability processes along a densely populated coastline in Southern Italy

    NASA Astrophysics Data System (ADS)

    Esposito, Giuseppe; Semaan, Fouad; Salvini, Riccardo; Troise, Claudia; Somma, Renato; Matano, Fabio; Sacchi, Marco

    2017-04-01

Sea cliff retreat along the coastline of the Campi Flegrei volcanic area (Southern Italy) is becoming a threat to public and private structures because of the massive urbanization that has occurred in the last few decades. In this area, the geological features of the outcropping rocks are among the most important factors conditioning sea cliff retreat. The pyroclastic deposits, formed by pumice, scoria, ash, and lapilli, are arranged in weakly to moderately welded layers of variable thickness, making them highly erodible and prone to landslide processes. Available methods to evaluate topographic changes and retreat rates of sea cliffs include a variety of geomatic techniques, such as terrestrial and aerial photogrammetry and LiDAR (Light Detection And Ranging). With these techniques it is possible to obtain high-resolution topography of sea cliffs and to perform multi-temporal change detection analysis. In this contribution, we present an application of Terrestrial Laser Scanning (TLS, or ground-based LiDAR) aimed at identifying and quantifying the instability processes acting along the Torrefumo coastal cliff in the Campi Flegrei area. Specifically, we acquired a series of 3D point clouds in 2013 and 2016 and compared them through a cloud-to-cloud distance computation. A statistical analysis was then applied to the change detection results. In this way, an inventory of the cliff failures that occurred along the Torrefumo cliff in the 2013-2016 time span was created, and the spatial and volumetric distribution of these failures was evaluated. The volumetric analysis shows that large collapses occurred rarely, whereas the spatial analysis shows that the majority of failures occurred in the middle and upper parts of the cliff face. Results also show that both rock fall and surficial erosion processes contribute to cliff retreat, acting in turn according to the geological properties of the pyroclastic deposits involved.
The presented TLS approach proves to be a cost- and time-efficient method for characterizing the geomorphic changes affecting sea cliff surfaces over short time periods (i.e. monthly or yearly). The accuracy of the acquired data allows a full range of failures to be located and quantified with a level of detail not reachable using traditional techniques. Results obtained in this research will be used in future applications to assess the hazard conditions affecting structures built close to the cliff top.
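The cloud-to-cloud distance computation used in such change detection can be sketched as a nearest-neighbour search. The toy version below is brute force over a handful of points; real TLS pipelines process millions of points with spatial indexing (e.g. k-d trees), and the threshold value is a placeholder.

```python
import math

def cloud_to_cloud_distances(reference_cloud, compared_cloud):
    """For each 3D point in the later scan, return the distance to its
    nearest neighbour in the earlier (reference) scan. Large distances
    flag surface change, e.g. material lost in a cliff failure."""
    return [min(math.dist(p, q) for q in reference_cloud)
            for p in compared_cloud]

def changed_points(reference_cloud, compared_cloud, threshold):
    """Points in the later scan farther than a noise threshold from the
    reference surface; these are candidate change (failure) areas."""
    dists = cloud_to_cloud_distances(reference_cloud, compared_cloud)
    return [p for p, d in zip(compared_cloud, dists) if d > threshold]
```

Clustering the flagged points and estimating the volume of each cluster would then yield the kind of failure inventory and volumetric distribution described above.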

  13. Modeling and Hazard Analysis Using STPA

    NASA Astrophysics Data System (ADS)

    Ishimatsu, Takuto; Leveson, Nancy; Thomas, John; Katahira, Masa; Miyamoto, Yuko; Nakao, Haruka

    2010-09-01

A joint research project between MIT and JAXA/JAMSS is investigating the application of a new hazard analysis to the system and software in the HTV. Traditional hazard analysis focuses on component failures, but software does not fail in this way. Software most often contributes to accidents by commanding the spacecraft into an unsafe state (e.g., turning off the descent engines prematurely) or by not issuing required commands. That makes the standard hazard analysis techniques of limited usefulness for software-intensive systems, which describes most spacecraft built today. STPA is a new hazard analysis technique based on systems theory rather than reliability theory. It treats safety as a control problem rather than a failure problem. The goal of STPA, which is to create a set of scenarios that can lead to a hazard, is the same as that of FTA, but STPA includes a broader set of potential scenarios, including those in which no failures occur but problems arise from unsafe and unintended interactions among the system components. STPA also provides more guidance to the analyst than traditional fault tree analysis: functional control diagrams are used to guide the analysis. In addition, JAXA uses a model-based systems engineering development environment (created originally by Leveson and called SpecTRM) which also assists in the hazard analysis. One of the advantages of STPA is that it can be applied early in the system engineering and development process, in a safety-driven design process where hazard analysis drives the design decisions rather than waiting until reviews identify problems that are then costly or difficult to fix. It can also be applied in an after-the-fact analysis and hazard assessment, which is what we did in this case study. This paper describes the experimental application of STPA to the JAXA HTV in order to determine the feasibility and usefulness of the new hazard analysis technique.
Because the HTV was originally developed using fault tree analysis and following the NASA standards for safety-critical systems, the results of our experimental application of STPA can be compared with these more traditional safety engineering approaches in terms of the problems identified and the resources required.

  14. Snow fracture: From micro-cracking to global failure

    NASA Astrophysics Data System (ADS)

    Capelli, Achille; Reiweger, Ingrid; Schweizer, Jürg

    2017-04-01

Slab avalanches are caused by a crack forming and propagating in a weak layer within the snow cover, which eventually causes the detachment of the overlying cohesive slab. The gradual damage process leading to the nucleation of the initial failure is still not entirely understood. We therefore studied the damage process preceding snow failure by analyzing the acoustic emissions (AE) generated by bond failure or micro-cracking; the AE allow the ongoing progressive failure to be studied in a non-destructive way. We performed fully load-controlled failure experiments on snow samples containing a weak layer and recorded the generated AE. The size and frequency of the generated AE increased before failure, revealing an acceleration of the damage process, with larger and more frequent damage events and/or microscopic cracks. The AE energy was power-law distributed, and the exponent (b-value) decreased approaching failure. The waiting times followed an exponential distribution whose coefficient λ increased before failure. The decrease of the b-value and the increase of λ correspond to a change in the event distribution statistics, indicating a transition from homogeneously distributed, uncorrelated damage producing mostly small AE to localized damage causing larger, correlated events, which leads to brittle failure. We observed brittle failure in the fast experiments and more ductile behaviour in the slow experiments. This rate dependence was also reflected in the AE signature. In the slow experiments the b-value and λ were almost constant, and the energy rate increased only moderately, indicating that the damage process was in a stable state, suggesting that the damage and healing processes were balanced. On a shorter time scale, however, the AE parameters varied, indicating that the damage process was not steady but consisted of a sum of small bursts.
We assume that the bursts may have been generated by cascades of correlated micro-cracks caused by the localization of stresses at a small scale. The healing process may then have prevented the self-organization of this small-scale damage and, therefore, the total failure of the sample.
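The b-value and λ statistics above can be estimated from an AE catalogue with standard maximum-likelihood estimators. The sketch below assumes a pure power law above a completeness threshold and exponentially distributed waiting times, which is a simplification of the actual analysis; the function names are our own.

```python
import math

def power_law_exponent(energies, e_min):
    """MLE (Hill-type estimator) for the exponent b of p(E) ~ E**(-b),
    using only events with energy E >= e_min (completeness threshold)."""
    logs = [math.log(e / e_min) for e in energies if e >= e_min]
    return 1.0 + len(logs) / sum(logs)

def waiting_time_rate(event_times):
    """MLE for the rate lambda of exponentially distributed waiting times:
    the reciprocal of the mean inter-event time."""
    waits = [b - a for a, b in zip(event_times, event_times[1:])]
    return len(waits) / sum(waits)
```

Evaluating both estimators over successive time windows and observing a falling b-value together with a rising λ would mirror the approach to failure reported above.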

  15. Frequency Analysis of Failure Scenarios from Shale Gas Development.

    PubMed

    Abualfaraj, Noura; Gurian, Patrick L; Olson, Mira S

    2018-04-29

    This study identified and prioritized potential failure scenarios for natural gas drilling operations through an elicitation of people who work in the industry. A list of twelve failure scenarios of concern was developed focusing on specific events that may occur during the shale gas extraction process involving an operational failure or a violation of regulations. Participants prioritized the twelve scenarios based on their potential impact on the health and welfare of the general public, potential impact on worker safety, how well safety guidelines protect against their occurrence, and how frequently they occur. Illegal dumping of flowback water, while rated as the least frequently occurring scenario, was considered the scenario least protected by safety controls and the one of most concern to the general public. In terms of worker safety, the highest concern came from improper or inadequate use of personal protective equipment (PPE). While safety guidelines appear to be highly protective regarding PPE usage, inadequate PPE is the most directly witnessed failure scenario. Spills of flowback water due to equipment failure are of concern both with regards to the welfare of the general public and worker safety as they occur more frequently than any other scenario examined in this study.

  16. Frequency Analysis of Failure Scenarios from Shale Gas Development

    PubMed Central

    Abualfaraj, Noura; Olson, Mira S.

    2018-01-01

    This study identified and prioritized potential failure scenarios for natural gas drilling operations through an elicitation of people who work in the industry. A list of twelve failure scenarios of concern was developed focusing on specific events that may occur during the shale gas extraction process involving an operational failure or a violation of regulations. Participants prioritized the twelve scenarios based on their potential impact on the health and welfare of the general public, potential impact on worker safety, how well safety guidelines protect against their occurrence, and how frequently they occur. Illegal dumping of flowback water, while rated as the least frequently occurring scenario, was considered the scenario least protected by safety controls and the one of most concern to the general public. In terms of worker safety, the highest concern came from improper or inadequate use of personal protective equipment (PPE). While safety guidelines appear to be highly protective regarding PPE usage, inadequate PPE is the most directly witnessed failure scenario. Spills of flowback water due to equipment failure are of concern both with regards to the welfare of the general public and worker safety as they occur more frequently than any other scenario examined in this study. PMID:29710821

  17. Comprehensive, Quantitative Risk Assessment of CO2 Geologic Sequestration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lepinski, James

    2013-09-30

A Quantitative Failure Modes and Effects Analysis (QFMEA) was developed to conduct comprehensive, quantitative risk assessments on CO2 capture, transportation, and sequestration or use in deep saline aquifers, enhanced oil recovery operations, or enhanced coal bed methane operations. The model identifies and characterizes potential risks; identifies the likely failure modes, causes, effects and methods of detection; lists possible risk prevention and risk mitigation steps; estimates potential damage recovery costs, mitigation costs and cost savings resulting from mitigation; and ranks (prioritizes) risks according to the probability of failure, the severity of failure, the difficulty of early failure detection and the potential for fatalities. The QFMEA model generates the information needed for effective project risk management. Diverse project information can be integrated into a concise, common format that allows comprehensive, quantitative analysis by a cross-functional team of experts to determine: What can possibly go wrong? How much will damage recovery cost? How can it be prevented or mitigated? What is the cost saving or benefit of prevention or mitigation? Which risks should be given highest priority for resolution? The QFMEA model can be tailored to specific projects and is applicable to new projects as well as mature projects. The model can be revised and updated as new information becomes available. It accepts input from multiple sources, such as literature searches, site characterization, field data, computer simulations, analogues, process influence diagrams, probability density functions, financial analysis models, cost factors, and heuristic best practices manuals, and converts the information into a standardized format in an Excel spreadsheet. Process influence diagrams, geologic models, financial models, cost factors and an insurance schedule were developed to support the QFMEA model.
Comprehensive, quantitative risk assessments were conducted on three sites using the QFMEA model: (1) SACROC Northern Platform CO2-EOR Site in the Permian Basin, Scurry County, TX, (2) Pump Canyon CO2-ECBM Site in the San Juan Basin, San Juan County, NM, and (3) Farnsworth Unit CO2-EOR Site in the Anadarko Basin, Ochiltree County, TX. The sites were sufficiently different from each other to test the robustness of the QFMEA model.
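The ranking step of an FMEA-style model can be illustrated with the classic risk priority number (RPN). Note that the QFMEA model described above uses its own scales (including fatality potential) and cost terms, so this is only a generic sketch, and the failure modes and scores below are invented for illustration.

```python
def risk_priority_number(probability, severity, detectability):
    """Classic FMEA RPN on 1-10 scales; higher means riskier.
    detectability: 10 = very hard to detect the failure early."""
    return probability * severity * detectability

# Invented example failure modes for a CO2 storage project (not from the study).
failure_modes = {
    "wellbore leakage": (3, 9, 7),
    "pipeline rupture": (2, 8, 4),
    "induced seismicity": (1, 6, 8),
}

# Rank failure modes from highest to lowest priority.
ranked = sorted(failure_modes,
                key=lambda m: risk_priority_number(*failure_modes[m]),
                reverse=True)
```

A spreadsheet implementation like the one described in the record would carry the same multiplication per row and sort on the resulting column.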

  18. Effects of Gas Pressure on the Failure Characteristics of Coal

    NASA Astrophysics Data System (ADS)

    Xie, Guangxiang; Yin, Zhiqiang; Wang, Lei; Hu, Zuxiang; Zhu, Chuanqi

    2017-07-01

Several experiments were conducted using self-developed visual gas-solid coupling mechanics equipment. The raw coal specimens were stored in a container filled with gas (99% CH4) under different initial gas pressure conditions (0.0, 0.5, 1.0, and 1.5 MPa) for 24 h prior to testing. The specimens were then tested in a rock-testing machine, and the mechanical properties, surface deformation, and failure modes were recorded using strain gauges, an acoustic emission (AE) system, and a camera. An analysis of the fractal character of the fragments and of the dissipated energy was performed to understand the changes observed in the stress-strain and crack propagation behaviour of the gas-containing coal specimens. The results demonstrate that increased gas pressure leads to a reduction in the uniaxial compressive strength (UCS) of gas-containing coal and in the critical dilatancy stress. The AE, surface deformation, and fractal analysis results show that the failure mode changes with the gas state; notably, a higher initial gas pressure causes more severe cracking and failure of the gas-containing coal samples. The dissipated-energy characteristics of the failure process of a gas-containing coal sample are analysed using a combination of fractal theory and energy principles. Based on theoretical analyses and fracture-mechanics calculations, the stress intensity factor at crack tips increases as the gas pressure increases; this is the main cause of the reduction in the UCS and critical dilatancy stress and explains the influence of gas on coal failure. More severe failure occurs in gas-containing coal under high gas pressure and low exterior load.

  19. Failure of flight feathers under uniaxial compression.

    PubMed

    Schelestow, Kristina; Troncoso, Omar P; Torres, Fernando G

    2017-09-01

Flight feathers are lightweight engineering structures. They have a central shaft divided into two parts: the calamus and the rachis. The rachis is a thin-walled conical shell filled with foam, while the calamus is a hollow tube-like structure. Because bending loads are produced during birds' flight, the resistance of feathers to bending has been reported in different studies. However, the analysis of bent feathers has shown that compression can induce failure by buckling. Here, we have studied the compression of feathers in order to assess the failure mechanisms involved. Axial compression tests were carried out on the rachis and the calamus of dove and pelican feathers. The failure mechanisms and folding structures that resulted from the compression tests were observed in images obtained by scanning electron microscopy (SEM). The rachis and calamus fail through structural instability. In the case of the calamus, this instability leads to a progressive folding process. In contrast, the rachis undergoes a typical Euler column-type buckling failure. The study of failed specimens showed that delamination buckling, cell collapse, and cell densification are the primary failure mechanisms of the rachis structure. The role of the foam is also discussed with regard to the mechanical response of the samples and the energy dissipated during the compression tests. Critical stress values were calculated using delamination buckling models and were found to be in very good agreement with the measured experimental values. Failure analysis and mechanical testing have confirmed that flight feathers are complex thin-walled structures with mechanical adaptations that allow them to fulfil their functions. Copyright © 2017 Elsevier B.V. All rights reserved.
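The Euler column-type buckling reported for the rachis can be illustrated with the textbook critical-stress formula; the function below is a generic sketch, not the delamination buckling model the paper actually uses, and any input values are placeholders rather than feather measurements.

```python
import math

def euler_critical_stress(youngs_modulus, length, radius_of_gyration, k=1.0):
    """Critical buckling stress of an ideal slender column:
        sigma_cr = pi**2 * E / (k * L / r)**2
    where k is the effective-length factor (1.0 for pinned-pinned ends)
    and L / r is the slenderness ratio."""
    slenderness = k * length / radius_of_gyration
    return math.pi ** 2 * youngs_modulus / slenderness ** 2
```

Thin-walled shell effects such as the delamination buckling and cell collapse observed in the rachis require more elaborate models than this ideal-column estimate, which is why the paper turns to delamination buckling theory for its critical stresses.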

  20. Risk Analysis using Corrosion Rate Parameter on Gas Transmission Pipeline

    NASA Astrophysics Data System (ADS)

    Sasikirono, B.; Kim, S. J.; Haryadi, G. D.; Huda, A.

    2017-05-01

In the oil and gas industry, pipelines are a major component of the transmission and distribution of oil and gas. The distribution process often routes pipelines through various types of environmental conditions, so a pipeline should operate safely and not harm the surrounding environment. Corrosion is still a major cause of failure in components of production facilities; in pipeline systems, corrosion can cause wall failures and damage to the pipeline, so the system requires care and periodic inspection. Every production facility in an industry carries a level of risk determined by the likelihood and the consequences of damage. The purpose of this research is to analyse the risk level of a 20-inch natural gas transmission pipeline using semi-quantitative risk-based inspection according to API 581, which combines the likelihood of failure with the consequences of failure of an equipment component; the result is then used to plan the next inspections. Nine pipeline components were observed, including straight inlet pipes, tee connections, and straight outlet pipes. The risk assessment of the nine pipeline components is presented in a risk matrix; the components were found to be at medium risk levels. The failure mechanism considered in this research is thinning. From the calculated corrosion rates, the remaining life of each pipeline component was obtained; the results vary for each component. The final step is planning the inspection of the pipeline components using external NDT methods.
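The thinning-based remaining-life estimate can be sketched as below. This assumes a constant corrosion rate, which is the usual simplification; the thickness values are illustrative and not taken from the study, and API 581 adds further factors (damage factors, consequence categories) on top of this arithmetic.

```python
def corrosion_rate(initial_mm, measured_mm, service_years):
    """Average wall-loss rate (mm/year) between installation and inspection."""
    return (initial_mm - measured_mm) / service_years

def remaining_life(measured_mm, minimum_allowable_mm, rate_mm_per_year):
    """Years until the wall reaches the minimum allowable thickness,
    assuming linear thinning at the historical rate."""
    if rate_mm_per_year <= 0:
        raise ValueError("corrosion rate must be positive")
    return (measured_mm - minimum_allowable_mm) / rate_mm_per_year

# Illustrative example: 12 mm nominal wall, 10 mm measured after 8 years
# in service, 6 mm minimum allowable thickness.
rate = corrosion_rate(12.0, 10.0, 8.0)
years_left = remaining_life(10.0, 6.0, rate)
```

The next inspection would then be scheduled well inside `years_left`, with the margin set by the target risk level of the component.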

  1. Failure Mode, Effects, and Criticality Analysis (FMECA)

    DTIC Science & Technology

    1993-04-01

Preliminary Failure Modes, Effects and Criticality Analysis (FMECA) of the Brayton Isotope Power System Ground Demonstration System, Report No. TID 27301...No. TID/SNA - 3015, Aerojet Nuclear Systems Co., Sacramento, California: 1970. 95. Taylor, J.R. A Formalization of Failure Mode Analysis of Control...Roskilde, Denmark: 1973. 96. Taylor, J.R. A Semi-Automatic Method for Qualitative Failure Mode Analysis. Report No. RISO-M-1707. Available from a

  2. Comprehension and retrieval of failure cases in airborne observatories

    NASA Technical Reports Server (NTRS)

    Alvarado, Sergio J.; Mock, Kenrick J.

    1995-01-01

    This paper describes research dealing with the computational problem of analyzing and repairing failures of electronic and mechanical systems of telescopes in NASA's airborne observatories, such as KAO (Kuiper Airborne Observatory) and SOFIA (Stratospheric Observatory for Infrared Astronomy). The research has resulted in the development of an experimental system that acquires knowledge of failure analysis from input text, and answers questions regarding failure detection and correction. The system's design builds upon previous work on text comprehension and question answering, including: knowledge representation for conceptual analysis of failure descriptions, strategies for mapping natural language into conceptual representations, case-based reasoning strategies for memory organization and indexing, and strategies for memory search and retrieval. These techniques have been combined into a model that accounts for: (a) how to build a knowledge base of system failures and repair procedures from descriptions that appear in telescope-operators' logbooks and FMEA (failure modes and effects analysis) manuals; and (b) how to use that knowledge base to search and retrieve answers to questions about causes and effects of failures, as well as diagnosis and repair procedures. This model has been implemented in FANSYS (Failure ANalysis SYStem), a prototype text comprehension and question answering program for failure analysis.

  3. Comprehension and retrieval of failure cases in airborne observatories

    NASA Astrophysics Data System (ADS)

    Alvarado, Sergio J.; Mock, Kenrick J.

    1995-05-01

    This paper describes research dealing with the computational problem of analyzing and repairing failures of electronic and mechanical systems of telescopes in NASA's airborne observatories, such as KAO (Kuiper Airborne Observatory) and SOFIA (Stratospheric Observatory for Infrared Astronomy). The research has resulted in the development of an experimental system that acquires knowledge of failure analysis from input text, and answers questions regarding failure detection and correction. The system's design builds upon previous work on text comprehension and question answering, including: knowledge representation for conceptual analysis of failure descriptions, strategies for mapping natural language into conceptual representations, case-based reasoning strategies for memory organization and indexing, and strategies for memory search and retrieval. These techniques have been combined into a model that accounts for: (a) how to build a knowledge base of system failures and repair procedures from descriptions that appear in telescope-operators' logbooks and FMEA (failure modes and effects analysis) manuals; and (b) how to use that knowledge base to search and retrieve answers to questions about causes and effects of failures, as well as diagnosis and repair procedures. This model has been implemented in FANSYS (Failure ANalysis SYStem), a prototype text comprehension and question answering program for failure analysis.

  4. Leveraging electronic health record documentation for Failure Mode and Effects Analysis team identification

    PubMed Central

    Carson, Matthew B; Lee, Young Ji; Benacka, Corrine; Mutharasan, R. Kannan; Ahmad, Faraz S; Kansal, Preeti; Yancy, Clyde W; Anderson, Allen S; Soulakis, Nicholas D

    2017-01-01

    Objective: Using Failure Mode and Effects Analysis (FMEA) as an example quality improvement approach, our objective was to evaluate whether secondary use of orders, forms, and notes recorded by the electronic health record (EHR) during daily practice can enhance the accuracy of process maps used to guide improvement. We examined discrepancies between expected and observed activities and individuals involved in a high-risk process and devised diagnostic measures for understanding discrepancies that may be used to inform quality improvement planning. Methods: Inpatient cardiology unit staff developed a process map of discharge from the unit. We matched activities and providers identified on the process map to EHR data. Using four diagnostic measures, we analyzed discrepancies between expectation and observation. Results: EHR data showed that 35% of activities were completed by unexpected providers, including providers from 12 categories not identified as part of the discharge workflow. The EHR also revealed sub-components of process activities not identified on the process map. Additional information from the EHR was used to revise the process map and show differences between expectation and observation. Conclusion: Findings suggest EHR data may reveal gaps in process maps used for quality improvement and identify characteristics about workflow activities that can identify perspectives for inclusion in an FMEA. Organizations with access to EHR data may be able to leverage clinical documentation to enhance process maps used for quality improvement. While focused on FMEA protocols, findings from this study may be applicable to other quality activities that require process maps. PMID:27589944
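The discrepancy measurement between the process map and the EHR observations can be sketched as a simple set comparison. The provider categories and records below are invented, and the actual study used four diagnostic measures rather than this single fraction.

```python
def unexpected_provider_fraction(expected_providers, observed_records):
    """Fraction of observed (activity, provider) records completed by a
    provider category absent from the process map."""
    unexpected = [prov for _, prov in observed_records
                  if prov not in expected_providers]
    return len(unexpected) / len(observed_records)

# Invented process-map provider categories and EHR-derived records.
expected = {"nurse", "cardiologist"}
observed = [("discharge orders", "cardiologist"),
            ("medication reconciliation", "pharmacist"),
            ("patient education", "nurse"),
            ("transport", "case manager")]
```

In the study, this kind of matching revealed that 35% of activities were completed by unexpected providers; the unexpected categories themselves then become candidates for inclusion in the FMEA team.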

  5. 14 CFR 417.309 - Flight safety system analysis.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...

  6. 14 CFR 417.309 - Flight safety system analysis.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...

  7. 14 CFR 417.309 - Flight safety system analysis.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...

  8. 14 CFR 417.309 - Flight safety system analysis.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...

  9. 14 CFR 417.309 - Flight safety system analysis.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...

  10. On the selection of significant variables in a model for the deteriorating process of facades

    NASA Astrophysics Data System (ADS)

    Serrat, C.; Gibert, V.; Casas, J. R.; Rapinski, J.

    2017-10-01

In previous works the authors of this paper introduced a predictive system that uses survival analysis techniques to study time-to-failure in the facades of a building stock. The approach is population based, in order to obtain information on the evolution of the stock across time and to help the manager in the decision-making process on global maintenance strategies. For the decision making it is crucial to determine the covariates, such as materials, morphology and characteristics of the facade, orientation, or environmental conditions, that play a significant role in the progression of different failures. The proposed platform also incorporates an open source GIS plugin that includes survival and test modules that allow the investigator to model the time until a lesion appears, taking into account the variables collected during the inspection process. The aim of this paper is twofold: a) to briefly introduce the predictive system, as well as the inspection and analysis methodologies, and b) to introduce and illustrate the modeling strategy for the deteriorating process of an urban front. The illustration focuses on the city of L’Hospitalet de Llobregat (Barcelona, Spain), in which more than 14,000 facades have been inspected and analyzed.
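The time-to-failure modelling behind such a predictive system rests on survival estimators. A minimal Kaplan-Meier sketch, handling right-censored inspections (facades still intact at their last inspection) and using invented data, might look like this:

```python
from collections import Counter

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.
    times: follow-up time for each facade (e.g. years since construction);
    events: 1 if the failure (lesion) was observed at that time,
            0 if the facade was censored (still intact at last inspection).
    Returns (time, estimated survival probability) at each failure time."""
    deaths = Counter(t for t, e in zip(times, events) if e)
    censored = Counter(t for t, e in zip(times, events) if not e)
    at_risk = len(times)
    survival, curve = 1.0, []
    for t in sorted(set(times)):
        d = deaths.get(t, 0)
        if d:
            survival *= 1.0 - d / at_risk
            curve.append((t, survival))
        at_risk -= d + censored.get(t, 0)
    return curve
```

Covariate effects (orientation, materials, environment) would be handled with a regression model such as Cox proportional hazards rather than this univariate estimator; the sketch only shows the baseline time-to-lesion curve.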

  11. [Use of bivariate survival curves for analyzing mortality of heart failure and sudden death in dilated cardiomyopathy].

    PubMed

    Gregori, Dario; Rosato, Rosalba; Zecchin, Massimo; Di Lenarda, Andrea

    2005-01-01

This paper discusses the use of bivariate survival curve estimators within the competing risks framework. Competing risks models are used for the analysis of medical data with more than one cause of death; the case of dilated cardiomyopathy is explored. Bivariate survival curves plot the joint mortality processes. This different graphical representation of survival analysis is the major contribution of the methodology to competing risks analysis.

  12. Reliability culture at La Silla Paranal Observatory

    NASA Astrophysics Data System (ADS)

    Gonzalez, Sergio

    2010-07-01

The Maintenance Department at the La Silla - Paranal Observatory has been an important base for keeping the operations of the observatory at a good level of reliability and availability. Several strategies have been implemented and improved in order to meet these requirements and keep the systems and equipment working properly when required. One of the latest improvements has been the introduction of the concept of reliability, which involves much more than simply speaking about reliability concepts: it involves the use of technologies, data collection, data analysis, decision making, committees focused on analysing failure modes and how they can be eliminated, aligning results with the requirements of our internal partners, and establishing steps to achieve success. Some of these steps have already been implemented: data collection, use of technologies, data analysis, development of prioritization tools, committees dedicated to analysing data, and people dedicated to reliability analysis. This has allowed us to optimize our processes, identify where we can improve, avoid functional failures, and reduce failures in several systems and subsystems; all this has had a positive impact on the results of our Observatory. These tools are part of the reliability culture that allows our system to operate with a high level of reliability and availability.

  13. Failure analysis of fuel cell electrodes using three-dimensional multi-length scale X-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Pokhrel, A.; El Hannach, M.; Orfino, F. P.; Dutta, M.; Kjeang, E.

    2016-10-01

    X-ray computed tomography (XCT), a non-destructive technique, is proposed for three-dimensional, multi-length scale characterization of complex failure modes in fuel cell electrodes. Comparative tomography data sets are acquired for a conditioned beginning of life (BOL) and a degraded end of life (EOL) membrane electrode assembly subjected to cathode degradation by voltage cycling. Micro length scale analysis shows a five-fold increase in crack size and 57% thickness reduction in the EOL cathode catalyst layer, indicating widespread action of carbon corrosion. Complementary nano length scale analysis shows a significant reduction in porosity, increased pore size, and dramatically reduced effective diffusivity within the remaining porous structure of the catalyst layer at EOL. Collapsing of the structure is evident from the combination of thinning and reduced porosity, as uniquely determined by the multi-length scale approach. Additionally, a novel image processing based technique developed for nano scale segregation of pore, ionomer, and Pt/C dominated voxels shows an increase in ionomer volume fraction, Pt/C agglomerates, and severe carbon corrosion at the catalyst layer/membrane interface at EOL. In summary, XCT based multi-length scale analysis enables detailed information needed for comprehensive understanding of the complex failure modes observed in fuel cell electrodes.

  14. Generalization of the slip line field theory for temperature sensitive visco-plastic materials

    NASA Astrophysics Data System (ADS)

    Paesold, Martin; Peters, Max; Regenauer-Lieb, Klaus; Veveakis, Manolis; Bassom, Andrew

    2015-04-01

    Geological processes can be a combination of various effects such as heat production or consumption, chemical reactions, or fluid flow. These individual effects are coupled to each other via feedbacks, and the mathematical analysis becomes challenging due to these interdependencies. Here, we concentrate solely on thermo-mechanical coupling, and a main result of this work is that the coupling can depend on material parameters and boundary conditions, being more or less pronounced depending on these parameters. The transitions from weak to strong coupling can be studied in the context of a bifurcation analysis. Classically, material instabilities in solids are approached as material bifurcations of a rate-independent, isothermal, elasto-plastic solid. However, previous research has shown that temperature and deformation rate are important factors and are fully coupled with the mechanical deformation. Early experiments in steel revealed a distinct pattern of localized heat dissipation and plastic deformation known as heat lines. Further, earth materials, soils, rocks, and ceramics are known to be greatly influenced by temperature, with strain localization being strongly affected by thermal loading. In this work, we provide a theoretical framework for the evolution of plastic deformation in such coupled systems, with a two-pronged approach to the prediction of localized failure. First, slip line field theory is employed to predict the geometry of the failure patterns, and second, failure criteria are derived from an energy bifurcation analysis. The bifurcation analysis is concerned with the local energy balance of a material and compares the effects of heat diffusion terms and heat production terms, where the heat production is due to mechanical processes. Commonly, heat is produced locally along the slip lines, and if the heat production outweighs diffusion, the material is locally weakened, which eventually leads to failure. The effect of diffusion and heat production is captured by a dimensionless quantity, the Gruntfest number: localized failure occurs only if the Gruntfest number exceeds a critical value. This critical Gruntfest number depends on boundary conditions such as temperature or pressure, and hence gives rise to localization criteria. We find that the results of this approach agree with earlier contributions to the theory of plasticity but offer the advantage of a unified framework, which might prove useful in numerical schemes for visco-plasticity.

  15. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    NASA Technical Reports Server (NTRS)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-01-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions, providing a needed input for estimating the success rate for any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, estimates the probability of failure of components under varying loading and environmental conditions. It performs sensitivity analysis of all the input variables and quantifies their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or similar codes, together with a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface, and walks step by step through the process the interface uses, illustrated with an example.

  16. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    NASA Astrophysics Data System (ADS)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-10-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions, providing a needed input for estimating the success rate for any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, estimates the probability of failure of components under varying loading and environmental conditions. It performs sensitivity analysis of all the input variables and quantifies their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or similar codes, together with a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface, and walks step by step through the process the interface uses, illustrated with an example.

  17. Degradation modeling of mid-power white-light LEDs by using Wiener process.

    PubMed

    Huang, Jianlin; Golubović, Dušan S; Koh, Sau; Yang, Daoguo; Li, Xiupeng; Fan, Xuejun; Zhang, G Q

    2015-07-27

    The IES standard TM-21-11 provides a guideline for lifetime prediction of LED devices. Because it uses average normalized lumen maintenance data and performs non-linear regression for lifetime modeling, it cannot capture the dynamic and random variation of the degradation process of LED devices. In addition, this method cannot capture the failure distribution, although that is much more relevant in reliability analysis. Furthermore, TM-21-11 considers only lumen maintenance for lifetime prediction. Color shift, another important performance characteristic of LED devices, may also degrade significantly during service life even when lumen maintenance has not reached its critical threshold. In this study, a modified Wiener process has been employed to model the degradation of LED devices. With this method, dynamic and random variations, as well as the non-linear degradation behavior of LED devices, can be easily accounted for. With a mild assumption, the parameter estimation accuracy has been improved by including more information in the likelihood function while neglecting the dependency between the random variables. As a consequence, the mean time to failure (MTTF) has been obtained and shows results comparable with IES TM-21-11 predictions, indicating the feasibility of the proposed method. Finally, the cumulative failure distribution was presented for different combinations of lumen maintenance and color shift. The results demonstrate that a joint failure distribution of LED devices can be modeled by simply treating lumen maintenance and color shift as two independent variables.
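
    The Wiener-process view of degradation can be sketched numerically. In this minimal, illustrative Python sketch (all parameters are hypothetical, not the paper's fitted values), degradation is modeled as X(t) = μ·t + σ·B(t), failure is the first crossing of a fixed threshold, and the analytic MTTF (threshold/μ for positive drift) is compared against simulated first-passage times:

    ```python
    import random

    def simulate_wiener_degradation(mu, sigma, threshold, dt=1.0, seed=0, max_steps=100000):
        """Simulate a Wiener degradation path X(t) = mu*t + sigma*B(t) and
        return the first time the path crosses the failure threshold."""
        rng = random.Random(seed)
        x, t = 0.0, 0.0
        for _ in range(max_steps):
            x += mu * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
            t += dt
            if x >= threshold:
                return t
        return None  # no crossing within max_steps

    # Illustrative values: 30% lumen loss as the failure criterion, with a
    # drift of 0.003 per unit time, so the analytic MTTF is 100 time units.
    mu, sigma, threshold = 0.003, 0.01, 0.30
    mttf_analytic = threshold / mu
    times = [simulate_wiener_degradation(mu, sigma, threshold, seed=s) for s in range(200)]
    mttf_empirical = sum(times) / len(times)
    ```

    For a Wiener process with positive drift, the first-passage time follows an inverse Gaussian distribution, which is what makes closed-form MTTF and failure-distribution expressions available in degradation studies of this kind.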

  18. Use of Modal Acoustic Emission to Monitor Damage Progression in Carbon Fiber/Epoxy Tows and Implications for Composite Structures

    NASA Technical Reports Server (NTRS)

    Waller, Jess M.; Saulsberry, Regor L.; Nichols, Charles T.; Wentzel, Daniel J.

    2010-01-01

    This slide presentation reviews the use of modal acoustic emission to monitor damage progression in carbon fiber/epoxy tows. There is a risk of catastrophic failure of composite overwrapped pressure vessels (COPVs) due to burst-before-leak (BBL) stress rupture (SR) of the carbon/epoxy (C/Ep) overwrap. A lack of quantitative nondestructive evaluation (NDE) is causing problems in current and future spacecraft designs. It is therefore important to develop and demonstrate critical NDE that can be implemented during stages of the design process, since the observed rupture can occur with little or no advance warning. A program was therefore required to develop quantitative acoustic emission (AE) procedures specific to C/Ep overwraps, but which also have utility for monitoring damage accumulation in composite structures in general, and to lay the groundwork for establishing critical thresholds for accumulated damage in composite structures, such as COPVs, so that precautionary or preemptive engineering steps can be implemented to minimize or obviate the risk of catastrophic failure. A computed Felicity ratio (FR) coupled with fast Fourier transform (FFT) frequency analysis shows promise as an analytical pass/fail criterion. The FR analysis, waveforms, and FFT analysis are reviewed.
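
    The Felicity ratio has a simple definition: the load at which acoustic emission resumes on reloading, divided by the previous maximum load. A minimal sketch, with hypothetical load values (not data from the COPV program) and an illustrative pass/fail cutoff:

    ```python
    def felicity_ratio(previous_max_load, ae_onset_load):
        """Felicity ratio: load at which acoustic emission resumes on reloading,
        divided by the previous maximum load. FR >= 1 means the Kaiser effect
        holds (no emission below the prior peak); FR < 1 indicates accumulated
        damage, with lower values generally meaning more damage."""
        if previous_max_load <= 0:
            raise ValueError("previous max load must be positive")
        return ae_onset_load / previous_max_load

    # Hypothetical reload cycles for a composite tow: (prev max, AE onset) in kN.
    cycles = [(10.0, 10.2), (12.0, 11.4), (14.0, 12.0)]
    ratios = [felicity_ratio(pm, onset) for pm, onset in cycles]

    # A downward trend in FR across cycles signals progressing damage; here an
    # illustrative criterion flags any cycle with FR below 0.95.
    flagged = [r < 0.95 for r in ratios]
    ```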

  19. Failure mode analysis to predict product reliability.

    NASA Technical Reports Server (NTRS)

    Zemanick, P. P.

    1972-01-01

    The failure mode analysis (FMA) is described as a design tool to predict and improve product reliability. The objectives of the failure mode analysis are presented as they influence component design, configuration selection, the product test program, the quality assurance plan, and engineering analysis priorities. The detailed mechanics of performing a failure mode analysis are discussed, including one suggested format. Some practical difficulties of implementation are indicated, drawn from experience with preparing FMAs on the nuclear rocket engine program.

  20. Touchstone for success

    NASA Astrophysics Data System (ADS)

    Longdon, Norman; Dauphin, J.; Dunn, B. D.; Judd, M. D.; Levadou, F. G.; Zwaal, A.

    1992-04-01

    This booklet is addressed to the users of the Materials and Processes Laboratories of the European Space Research and Technology Centre (ESTEC). The revised edition updates the July 1988 edition featuring the enhancement of existing laboratories and the establishment of a ceramics laboratory. Information on three ESTEC laboratories is presented as well as a look into the future. The three laboratories are the Environmental Effects Laboratory, the Metallic Materials Laboratory, and the Non-metallic Laboratory. The booklet reports on the effects of the space environment on radiation effects (UV and particles), outgassing and contamination, charging-up and discharges, particulate contaminants, atomic oxygen and debris/impacts. Applications of metallic materials to space hardware are covered in the areas of mechanical properties, corrosion/stress corrosion, fracture testing and interpretation, metallurgical processes and failure analysis. Particular applications of non metallic materials to space hardware that are covered are advanced and reinforced polymers, advanced ceramics, thermal properties, manned ambiance, polymer processing, non-destructive tests (NDT), and failure analysis. Future emphasis will be on the measurement of thermo-optical properties for the Infrared Space Observatory (ISO) and other infrared telescopes, support of the Columbus program, Hermes related problems such as 'warm' composites and 'hot' reinforced ceramics for thermal insulation, materials for extravehicular activity (EVA), and NDT.

  1. Transforming information from silicon testing and design characterization into numerical data sets for yield learning

    NASA Astrophysics Data System (ADS)

    Yang, Thomas; Shen, Yang; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh

    2017-03-01

    Silicon testing results are regularly collected for a particular lot of wafers to study yield loss from test result diagnostics. Product engineers analyze the diagnostic results and perform a number of physical failure analyses to detect the systematic defects that cause yield loss for these wafers, feeding the information back to process engineers for process improvements. Most of the time, the systematic defects detected are major issues or only one of several causes of the overall yield loss. This paper presents a working flow that uses design analysis techniques combined with diagnostic methods to systematically transform silicon testing information into physical layout information. When a new set of testing results is received from a new lot of wafers for the same product, we can correlate all the diagnostic results from different periods of time to check which blocks or nets have been highlighted or have stopped occurring in the failure reports, in order to monitor process changes that impact yield. The design characteristic analysis flow is also implemented to find 1) the block connections on a design that have failed electrical test, and 2) frequently used cells that have been highlighted multiple times.

  2. Comparison of Models of Stress Relaxation in Failure Analysis for Connectors under Long-term Storage

    NASA Astrophysics Data System (ADS)

    Zhou, Yilin; Wan, Mengru

    2018-03-01

    Reliability requirements for system equipment under long-term storage are put forward especially for military products, so the connectors in such equipment correspondingly need a long storage life. In this paper, the effects of stress relaxation of the elastic components on the electrical contacts of connectors during long-term storage were studied through failure mechanisms and degradation models. A wire spring connector was taken as an example to discuss a life prediction method for the electrical contacts of connectors based on stress relaxation degradation under long-term storage.

  3. Failure Analysis at the Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Salazar, Victoria L.; Wright, M. Clara

    2010-01-01

    History has shown that failures occur in every engineering endeavor, and what we learn from those failures contributes to the knowledge base to safely complete future missions. The necessity of failure analysis is at its apex at the end of one aged program and at the beginning of a new and untested program. The information that we gain through failure analysis corrects the deficiencies in the current vehicle to make the next generation of vehicles more efficient and safe. The Failure Analysis and Materials Evaluation Branch in the Materials Science Division at the Kennedy Space Center performs metallurgical, mechanical, electrical, and non-metallic materials failure analyses and accident investigations on both flight hardware and ground support equipment for the Space Shuttle, International Space Station, Constellation, and Launch Services Programs. This paper will explore a variety of failure case studies at the Kennedy Space Center and the lessons learned that can be applied in future programs.

  4. Ku-band signal design study. [space shuttle orbiter data processing network

    NASA Technical Reports Server (NTRS)

    Rubin, I.

    1978-01-01

    Analytical tools, methods, and techniques for assessing the design and performance of the space shuttle orbiter data processing system (DPS) are provided. The computer data processing network is evaluated in the key areas of queueing behavior, synchronization, and network reliability. The structure of the data processing network is described, as well as the system operation principles and the network configuration. The characteristics of the computer systems are indicated. System reliability measures are defined and studied. System and network invulnerability measures are computed. Communication path and network failure analysis techniques are included.

  5. Seismic characteristics of tensile fracture growth induced by hydraulic fracturing

    NASA Astrophysics Data System (ADS)

    Eaton, D. W. S.; Van der Baan, M.; Boroumand, N.

    2014-12-01

    Hydraulic fracturing is a process of injecting high-pressure slurry into a rock mass to enhance its permeability. Variants of this process are used for unconventional oil and gas development, engineered geothermal systems and block-cave mining; similar processes occur within volcanic systems. Opening of hydraulic fractures is well documented by mineback trials and tiltmeter monitoring and is a physical requirement to accommodate the volume of injected fluid. Numerous microseismic monitoring investigations acquired in the audio-frequency band are interpreted to show a prevalence of shear-dominated failure mechanisms surrounding the tensile fracture. Moreover, the radiated seismic energy in the audio-frequency band appears to be a minuscule fraction (<< 1%) of the net injected energy, i.e., the integral of the product of fluid pressure and injection rate. We use a simple penny-shaped crack model as a predictive framework to describe seismic characteristics of tensile opening during hydraulic fracturing. This model provides a useful scaling relation that links seismic moment to effective fluid pressure within the crack. Based on downhole recordings corrected for attenuation, a significant fraction of observed microseismic events are characterized by S/P amplitude ratio < 5. Despite the relatively small aperture of the monitoring arrays, which precludes both full moment-tensor analysis and definitive identification of nodal planes or axes, this ratio provides a strong indication that observed microseismic source mechanisms have a component of tensile failure. In addition, we find some instances of periodic spectral notches that can be explained by an opening/closing failure mechanism, in which fracture propagation outpaces fluid velocity within the crack. Finally, aseismic growth of tensile fractures may be indicative of a scenario in which injected energy is consumed to create new fracture surfaces. 
Taken together, our observations and modeling provide evidence that failure mechanisms documented by passive monitoring of hydraulic fractures may contain a significant component of tensile failure, including fracture opening and closing, although creation of extensive new fracture surfaces may be a seismically inefficient process that radiates at sub-audio frequencies.

  6. A Brownian model for recurrent earthquakes

    USGS Publications Warehouse

    Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.

    2002-01-01

    We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate according to whether the coefficient of variation is less than, equal to, or greater than 1/√2 ≈ 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step perturbations occur. 
Transient effects may be much stronger than would be predicted by the "clock change" method and characteristically decay inversely with elapsed time after the perturbation.
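
    The hazard-rate behavior in property (2) can be illustrated directly, since the Brownian passage-time distribution is the inverse Gaussian. A small Python sketch (mean recurrence and shape values chosen purely for illustration, with coefficient of variation 1/√2):

    ```python
    import math

    def norm_cdf(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def ig_pdf(t, mu, lam):
        """Inverse Gaussian (Brownian passage-time) density, mean mu, shape lam."""
        return math.sqrt(lam / (2.0 * math.pi * t ** 3)) * math.exp(
            -lam * (t - mu) ** 2 / (2.0 * mu ** 2 * t))

    def ig_cdf(t, mu, lam):
        """Inverse Gaussian CDF (standard closed form)."""
        a = math.sqrt(lam / t)
        return (norm_cdf(a * (t / mu - 1.0))
                + math.exp(2.0 * lam / mu) * norm_cdf(-a * (t / mu + 1.0)))

    def hazard(t, mu, lam):
        """Hazard rate f(t) / (1 - F(t))."""
        return ig_pdf(t, mu, lam) / (1.0 - ig_cdf(t, mu, lam))

    # Coefficient of variation of the inverse Gaussian is sqrt(mu/lam), so
    # mu = 1, lam = 2 gives cov = 1/sqrt(2), the boundary case in property (3).
    mu, lam = 1.0, 2.0
    h_early = hazard(0.05, mu, lam)  # just after an event
    h_peak = hazard(1.0, mu, lam)    # near the mean recurrence time
    h_late = hazard(5.0, mu, lam)    # relaxing toward quasi-stationary level
    ```

    The hazard is near zero immediately after an event, peaks near the mean recurrence time, and then relaxes toward a quasi-stationary level, matching the properties listed above.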

  7. Failure Mode Identification Through Clustering Analysis

    NASA Technical Reports Server (NTRS)

    Arunajadai, Srikesh G.; Stone, Robert B.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Research has shown that nearly 80% of costs and problems are created in product development and that cost and quality are essentially designed into products at the conceptual stage. Currently, failure identification procedures (such as FMEA (Failure Modes and Effects Analysis), FMECA (Failure Modes, Effects and Criticality Analysis), and FTA (Fault Tree Analysis)) and design of experiments are used for quality control and for the detection of potential failure modes during the detail design stage or after product launch. Though all of these methods have their own advantages, they do not indicate which predominant failures a designer should focus on while designing a product. This work uses a functional approach to identify failure modes, hypothesizing that similarities exist between different failure modes based on the functionality of the product or component. In this paper, a statistical clustering procedure is proposed to retrieve information on the set of predominant failures that a function experiences. The various stages of the methodology are illustrated using a hypothetical design example.
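
    The functional premise, that failure modes group by what a component does rather than what it is, can be illustrated with a toy tally. The component functions and failure modes below are invented for illustration, and simple counting stands in for the paper's actual statistical clustering procedure:

    ```python
    from collections import Counter, defaultdict

    # Hypothetical component records: (component function, observed failure mode).
    records = [
        ("convert energy", "fatigue"), ("convert energy", "wear"),
        ("convert energy", "fatigue"), ("transfer liquid", "corrosion"),
        ("transfer liquid", "leak"), ("transfer liquid", "corrosion"),
    ]

    # Tally failure modes per function rather than per component.
    by_function = defaultdict(Counter)
    for function, mode in records:
        by_function[function][mode] += 1

    # Predominant failure mode per function: the information a functional
    # failure-mode analysis is meant to surface for the designer.
    predominant = {f: c.most_common(1)[0][0] for f, c in by_function.items()}
    ```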

  8. Peter Hacke | NREL

    Science.gov Websites

    His work at NREL involves photovoltaic (PV) modules, inspections for the root cause of module failures in the field, accelerated lifetime testing, and delamination. His research interests are in modeling of degradation processes of PV modules and integrated analysis of PV degradation data. He also explores accelerated multi-stress and combined-stress testing.

  9. Internal Erosion During Soil PipeFlow: State of Science for Experimental and Numerical Analysis

    EPA Science Inventory

    Many field observations have led to speculation on the role of piping in embankment failures, landslides, and gully erosion. However, there has not been a consensus on the subsurface flow and erosion processes involved, and inconsistent use of terms have exacerbated the problem. ...

  10. Process-based quality management for clinical implementation of adaptive radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noel, Camille E.; Santanam, Lakshmi; Parikh, Parag J.

    Purpose: Intensity-modulated adaptive radiotherapy (ART) has been the focus of considerable research and developmental work due to its potential therapeutic benefits. However, in light of its unique quality assurance (QA) challenges, no one has described a robust framework for its clinical implementation. In fact, recent position papers by ASTRO and AAPM have firmly endorsed pretreatment patient-specific IMRT QA, which limits the feasibility of online ART. The authors aim to address these obstacles by applying failure mode and effects analysis (FMEA) to identify high-priority errors and appropriate risk-mitigation strategies for clinical implementation of intensity-modulated ART. Methods: An experienced team of two clinical medical physicists, one clinical engineer, and one radiation oncologist was assembled to perform a standard FMEA for intensity-modulated ART. A set of 216 potential radiotherapy failures composed by the forthcoming AAPM task group 100 (TG-100) was used as the basis. Of the 216 failures, 127 were identified as most relevant to an ART scheme. Using the associated TG-100 FMEA values as a baseline, the team considered how the likeliness of occurrence (O), outcome severity (S), and likeliness of failure being undetected (D) would change for ART. New risk priority numbers (RPN) were calculated. Failures characterized by RPN ≥ 200 were identified as potentially critical. Results: FMEA revealed that ART RPN increased for 38% (n = 48/127) of potential failures, with 75% (n = 36/48) attributed to failures in the segmentation and treatment planning processes. Forty-three of 127 failures were identified as potentially critical. Risk-mitigation strategies include implementing a suite of quality control and decision support software, specialty QA software/hardware tools, and an increase in specially trained personnel. 
Conclusions: Results of the FMEA-based risk assessment demonstrate that intensity-modulated ART introduces different (but not necessarily more) risks than standard IMRT and may be safely implemented with the proper mitigations.
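
    The FMEA arithmetic used in this record is straightforward: each failure is scored for occurrence (O), severity (S), and detectability (D), the risk priority number is their product, and RPN ≥ 200 flags a failure as potentially critical. A minimal sketch with hypothetical ART failure entries (not items from the TG-100 list):

    ```python
    # Hypothetical failure entries: (description, O, S, D), each on a 1-10 scale.
    failures = [
        ("contour propagation error in segmentation", 4, 9, 7),
        ("wrong plan-of-the-day selected", 2, 10, 6),
        ("couch shift entered incorrectly", 3, 7, 4),
    ]

    def rpn(o, s, d):
        """Risk priority number: occurrence x severity x detectability."""
        for score in (o, s, d):
            if not 1 <= score <= 10:
                raise ValueError("FMEA scores must be on a 1-10 scale")
        return o * s * d

    # Flag failures meeting the RPN >= 200 criticality criterion.
    critical = [(name, rpn(o, s, d)) for name, o, s, d in failures
                if rpn(o, s, d) >= 200]
    ```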

  11. Process-based quality management for clinical implementation of adaptive radiotherapy

    PubMed Central

    Noel, Camille E.; Santanam, Lakshmi; Parikh, Parag J.; Mutic, Sasa

    2014-01-01

    Purpose: Intensity-modulated adaptive radiotherapy (ART) has been the focus of considerable research and developmental work due to its potential therapeutic benefits. However, in light of its unique quality assurance (QA) challenges, no one has described a robust framework for its clinical implementation. In fact, recent position papers by ASTRO and AAPM have firmly endorsed pretreatment patient-specific IMRT QA, which limits the feasibility of online ART. The authors aim to address these obstacles by applying failure mode and effects analysis (FMEA) to identify high-priority errors and appropriate risk-mitigation strategies for clinical implementation of intensity-modulated ART. Methods: An experienced team of two clinical medical physicists, one clinical engineer, and one radiation oncologist was assembled to perform a standard FMEA for intensity-modulated ART. A set of 216 potential radiotherapy failures composed by the forthcoming AAPM task group 100 (TG-100) was used as the basis. Of the 216 failures, 127 were identified as most relevant to an ART scheme. Using the associated TG-100 FMEA values as a baseline, the team considered how the likeliness of occurrence (O), outcome severity (S), and likeliness of failure being undetected (D) would change for ART. New risk priority numbers (RPN) were calculated. Failures characterized by RPN ≥ 200 were identified as potentially critical. Results: FMEA revealed that ART RPN increased for 38% (n = 48/127) of potential failures, with 75% (n = 36/48) attributed to failures in the segmentation and treatment planning processes. Forty-three of 127 failures were identified as potentially critical. Risk-mitigation strategies include implementing a suite of quality control and decision support software, specialty QA software/hardware tools, and an increase in specially trained personnel. 
Conclusions: Results of the FMEA-based risk assessment demonstrate that intensity-modulated ART introduces different (but not necessarily more) risks than standard IMRT and may be safely implemented with the proper mitigations. PMID:25086527

  12. Process-based quality management for clinical implementation of adaptive radiotherapy.

    PubMed

    Noel, Camille E; Santanam, Lakshmi; Parikh, Parag J; Mutic, Sasa

    2014-08-01

    Intensity-modulated adaptive radiotherapy (ART) has been the focus of considerable research and developmental work due to its potential therapeutic benefits. However, in light of its unique quality assurance (QA) challenges, no one has described a robust framework for its clinical implementation. In fact, recent position papers by ASTRO and AAPM have firmly endorsed pretreatment patient-specific IMRT QA, which limits the feasibility of online ART. The authors aim to address these obstacles by applying failure mode and effects analysis (FMEA) to identify high-priority errors and appropriate risk-mitigation strategies for clinical implementation of intensity-modulated ART. An experienced team of two clinical medical physicists, one clinical engineer, and one radiation oncologist was assembled to perform a standard FMEA for intensity-modulated ART. A set of 216 potential radiotherapy failures composed by the forthcoming AAPM task group 100 (TG-100) was used as the basis. Of the 216 failures, 127 were identified as most relevant to an ART scheme. Using the associated TG-100 FMEA values as a baseline, the team considered how the likeliness of occurrence (O), outcome severity (S), and likeliness of failure being undetected (D) would change for ART. New risk priority numbers (RPN) were calculated. Failures characterized by RPN ≥ 200 were identified as potentially critical. FMEA revealed that ART RPN increased for 38% (n = 48/127) of potential failures, with 75% (n = 36/48) attributed to failures in the segmentation and treatment planning processes. Forty-three of 127 failures were identified as potentially critical. Risk-mitigation strategies include implementing a suite of quality control and decision support software, specialty QA software/hardware tools, and an increase in specially trained personnel. 
Results of the FMEA-based risk assessment demonstrate that intensity-modulated ART introduces different (but not necessarily more) risks than standard IMRT and may be safely implemented with the proper mitigations.

  13. Using Seismic Signals to Forecast Volcanic Processes

    NASA Astrophysics Data System (ADS)

    Salvage, R.; Neuberg, J. W.

    2012-04-01

    Understanding the seismic signals generated during volcanic unrest allows scientists to more accurately predict and understand active volcanoes, since these signals are intrinsically linked to rock failure at depth (Voight, 1988). In particular, low-frequency long-period signals (LP events) have been related to the movement of fluid and the brittle failure of magma at depth due to high strain rates (Hammer and Neuberg, 2009), which fundamentally relates to surface processes. However, there is currently no physical quantitative model for determining the likelihood of an eruption following precursory seismic signals, or the timing or type of eruption that will ensue (Benson et al., 2010). Since the beginning of its current eruptive phase, accelerating LP swarms (< 10 events per hour) have been a common feature at Soufriere Hills volcano, Montserrat, prior to surface expressions such as dome collapse or eruptions (Miller et al., 1998). The dynamical behaviour of such swarms can be related to accelerated magma ascent rates, since the seismicity is thought to be a consequence of magma deformation as it rises to the surface. In particular, acceleration rates can be used in conjunction with the inverse material failure law, a linear relationship against time (Voight, 1988), to accurately predict the timing of volcanic eruptions. To date, this has only been investigated for retrospective events (Hammer and Neuberg, 2009). The identification of LP swarms on Montserrat and analysis of their dynamical characteristics allow a better understanding of the nature of the seismic signals themselves, as well as their relationship to surface processes such as magma extrusion rates. Acceleration and deceleration rates of seismic swarms provide insights into the plumbing system of the volcano at depth. 
The application of the material failure law to multiple LP swarms of data allows a critical evaluation of the accuracy of the method which further refines current understanding of the relationship between seismic signals and volcanic eruptions. It is hoped that such analysis will assist the development of real time forecasting models.
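The inverse material failure law mentioned above can be sketched numerically: the reciprocal of the event rate is fitted with a straight line against time, and the extrapolated x-intercept (where the inverse rate reaches zero) estimates the failure or eruption time. The synthetic swarm below is illustrative, not data from the study.

```python
import numpy as np

def forecast_failure_time(times, rates):
    """Voight-style inverse-rate forecast (minimal sketch).

    Fits a straight line to 1/rate versus time; the x-intercept,
    where the inverse rate extrapolates to zero, estimates the
    failure (eruption) time.
    """
    inv_rate = 1.0 / np.asarray(rates, dtype=float)
    slope, intercept = np.polyfit(times, inv_rate, 1)
    return -intercept / slope  # time at which 1/rate -> 0

# Synthetic accelerating swarm whose true failure time is t_f = 10.0:
# the inverse rate decays linearly, as the law assumes.
t = np.linspace(0.0, 8.0, 20)
rates = 1.0 / (0.5 * (10.0 - t))
print(forecast_failure_time(t, rates))  # ~10.0
```

In practice the fit is only as good as the assumption of linearly decaying inverse rate, which is one reason the abstract stresses that the method has so far been tested only retrospectively.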

  14. STRESS AND FAILURE ANALYSIS OF RAPIDLY ROTATING ASTEROID (29075) 1950 DA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirabayashi, Masatoshi; Scheeres, Daniel J., E-mail: masatoshi.hirabayashi@colorado.edu

Rozitis et al. recently reported that near-Earth asteroid (29075) 1950 DA, whose bulk density ranges from 1.0 g cm^-3 to 2.4 g cm^-3, is a rubble pile and requires a cohesive strength of at least 44-76 Pa to keep from failing due to its fast spin period. Since their technique for giving failure conditions required the averaged stress over the whole volume, it discarded information about the asteroid's failure mode and internal stress condition. This paper develops a finite element model and revisits the stress and failure analysis of 1950 DA. For the modeling, we do not consider material hardening and softening. Under the assumption of an associated flow rule and uniform material distribution, we identify the deformation process of 1950 DA when its constant cohesion reaches the lowest value that keeps its current shape. The results show that to avoid structural failure the internal core requires a cohesive strength of at least 75-85 Pa. It suggests that for the failure mode of this body, the internal core first fails structurally, followed by the surface region. This implies that if cohesion is constant over the whole volume, the equatorial ridge of 1950 DA results from a material flow going outward along the equatorial plane in the internal core, but not from a landslide as has been hypothesized. This has additional implications for the likely density of the interior of the body.

  15. Semiparametric modeling and estimation of the terminal behavior of recurrent marker processes before failure events.

    PubMed

    Chan, Kwun Chuen Gary; Wang, Mei-Cheng

    2017-01-01

    Recurrent event processes with marker measurements are largely studied with forward time models starting from an initial event. Interestingly, the processes could exhibit important terminal behavior during a time period before occurrence of the failure event. A natural and direct way to study recurrent events prior to a failure event is to align the processes using the failure event as the time origin and to examine the terminal behavior by a backward time model. This paper studies regression models for backward recurrent marker processes by counting time backward from the failure event. A three-level semiparametric regression model is proposed for jointly modeling the time to a failure event, the backward recurrent event process, and the marker observed at the time of each backward recurrent event. The first level is a proportional hazards model for the failure time, the second level is a proportional rate model for the recurrent events occurring before the failure event, and the third level is a proportional mean model for the marker given the occurrence of a recurrent event backward in time. By jointly modeling the three components, estimating equations can be constructed for marked counting processes to estimate the target parameters in the three-level regression models. Large sample properties of the proposed estimators are established. The proposed models and methods are illustrated by a community-based AIDS clinical trial examining the terminal behavior of frequencies and severities of opportunistic infections among HIV-infected individuals in the last six months of life.
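The backward alignment step that the model builds on is simple to state concretely: each subject's recurrent event times are re-expressed as time remaining until the failure event. The helper and the numbers below are illustrative, not from the paper's data.

```python
def backward_align(failure_time, event_times):
    """Re-express recurrent event times in backward time (minimal sketch):
    each event is mapped to its distance *before* the failure event,
    which becomes the new time origin."""
    return sorted(failure_time - t for t in event_times if t <= failure_time)

# Hypothetical subject: failure at day 180, with recurrent events
# (e.g. opportunistic infections) at days 100, 150, and 175.
print(backward_align(180, [100, 150, 175]))  # [5, 30, 80]
```

Once every subject is aligned this way, terminal behavior near the failure event lines up at backward time zero, which is what makes the three-level backward model tractable.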

  16. Graphical Displays Assist In Analysis Of Failures

    NASA Technical Reports Server (NTRS)

    Pack, Ginger; Wadsworth, David; Razavipour, Reza

    1995-01-01

    Failure Environment Analysis Tool (FEAT) computer program enables people to see and better understand effects of failures in system. Uses digraph models to determine what will happen to system if set of failure events occurs and to identify possible causes of selected set of failures. Digraphs or engineering schematics used. Also used in operations to help identify causes of failures after they occur. Written in C language.
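The core of a digraph failure model like FEAT's can be sketched as graph reachability: edges record "failure of A propagates to B," and the effects of a failure set are everything reachable from it. The data structure and component names below are hypothetical, not FEAT's actual representation.

```python
from collections import deque

def failure_effects(digraph, failed):
    """Breadth-first reachability over a failure-propagation digraph
    (minimal sketch): returns every component affected when the
    components in `failed` fail."""
    affected, queue = set(failed), deque(failed)
    while queue:
        node = queue.popleft()
        for downstream in digraph.get(node, ()):
            if downstream not in affected:
                affected.add(downstream)
                queue.append(downstream)
    return affected

# Hypothetical system: a pump failure cascades downstream,
# while the valve is unaffected.
g = {"pump": ["coolant_loop"], "coolant_loop": ["reactor_temp"], "valve": []}
print(sorted(failure_effects(g, {"pump"})))
# ['coolant_loop', 'pump', 'reactor_temp']
```

Running the same search over the reversed digraph gives the complementary use mentioned in the abstract: identifying possible causes of an observed set of failures.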

  17. Detection of wood failure by image processing method: influence of algorithm, adhesive and wood species

    Treesearch

    Lanying Lin; Sheng He; Feng Fu; Xiping Wang

    2015-01-01

    Wood failure percentage (WFP) is an important index for evaluating the bond strength of plywood. Currently, the method used for detecting WFP is visual inspection, which lacks efficiency. In order to improve it, image processing methods are applied to wood failure detection. The present study used thresholding and K-means clustering algorithms in wood failure detection...
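A thresholding approach to WFP, one of the two algorithm families the study compares, can be sketched as follows. The convention that darker pixels correspond to wood failure, and the threshold value, are assumptions for illustration, not parameters from the study.

```python
import numpy as np

def wood_failure_percentage(gray_image, threshold=128):
    """Thresholding sketch of WFP estimation: the share of pixels
    darker than `threshold` (assumed here to be failed-wood regions)
    over the whole bonded area, as a percentage."""
    gray = np.asarray(gray_image)
    return 100.0 * np.count_nonzero(gray < threshold) / gray.size

# Toy 2x2 grayscale "image": two dark (failed-wood) pixels out of four.
print(wood_failure_percentage([[40, 200], [90, 230]]))  # 50.0
```

The K-means variant mentioned in the abstract replaces the fixed threshold with a data-driven split of pixel intensities into failure and non-failure clusters.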

  18. A Framework for Creating a Function-based Design Tool for Failure Mode Identification

    NASA Technical Reports Server (NTRS)

    Arunajadai, Srikesh G.; Stone, Robert B.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Knowledge of potential failure modes during design is critical for prevention of failures. Currently, industries use procedures such as Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis, or Failure Modes, Effects and Criticality Analysis (FMECA), as well as knowledge and experience, to determine potential failure modes. When new products are being developed, there is often a lack of sufficient knowledge of potential failure modes and/or a lack of sufficient experience to identify all failure modes. This gives rise to a situation in which engineers are unable to extract maximum benefit from the above procedures. This work describes a function-based failure identification methodology, which acts as a storehouse of information and experience, providing useful information about the potential failure modes for the design under consideration and enhancing the usefulness of procedures like FMEA. As an example, the method is applied to fifteen products and the benefits are illustrated.
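The "storehouse" idea above amounts to a lookup from product functions to failure modes observed historically for components implementing those functions. The function names and failure modes below are illustrative stand-ins, not entries from the methodology's actual knowledge base.

```python
# Hypothetical function-to-failure-mode knowledge base (illustrative only).
FAILURE_KB = {
    "transfer liquid": ["leak", "blockage", "corrosion"],
    "convert electrical to rotational energy": ["winding burnout", "bearing wear"],
    "store energy": ["overcharge", "thermal runaway"],
}

def likely_failure_modes(product_functions):
    """Union of historical failure modes over a product's functions
    (minimal sketch of a function-based failure lookup)."""
    modes = set()
    for fn in product_functions:
        modes.update(FAILURE_KB.get(fn, []))
    return sorted(modes)

# A hypothetical product decomposed into two functions:
print(likely_failure_modes(["transfer liquid", "store energy"]))
# ['blockage', 'corrosion', 'leak', 'overcharge', 'thermal runaway']
```

The returned list then seeds the failure-mode column of an FMEA worksheet, which is how such a tool complements rather than replaces the standard procedures.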

  19. Fatigue failure of osteocyte cellular processes: implications for the repair of bone.

    PubMed

    Dooley, C; Cafferky, D; Lee, T C; Taylor, D

    2014-01-25

    The physical effects of fatigue failure caused by cyclic strain are important and, for most materials, well understood. However, nothing is known about this mode of failure in living cells. We developed a novel method that allowed us to apply controlled levels of cyclic displacement to networks of osteocytes in bone. We showed that under cyclic loading, fatigue failure takes place in the dendritic processes of osteocytes at cyclic strain levels as low as one tenth of the strain needed for instantaneous rupture. The number of cycles to failure was inversely correlated with the strain level. Further experiments demonstrated that these failures were not artefacts of our methods of sample preparation and testing, and that fatigue failure of cell processes also occurs in vivo. This work is significant as it is the first time it has been possible to conduct fatigue testing on cellular material of any kind. Many types of cells experience repetitive loading, which may cause failure or damage requiring repair. It is clinically important to determine how cyclic strain affects cells and how they respond, in order to gain a deeper understanding of the physiological processes stimulated in this manner. The more we understand about the natural repair process in bone, the more targeted intervention methods can become when that repair process is disrupted. Our results will help to explain how the osteocyte cell network is disrupted in the vicinity of matrix damage, a crucial step in bone remodelling.
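The inverse strain/cycles relationship reported above is often summarized, for engineering materials, by a Basquin-type power law N = C * strain^(-m). Applying that form here is an assumption for illustration, not the paper's model; the data below are synthetic.

```python
import numpy as np

def fit_fatigue_power_law(strains, cycles_to_failure):
    """Fit a Basquin-type relation N = C * strain**(-m) by linear
    regression in log-log space (sketch under an assumed power-law
    form; returns the exponent m and prefactor C)."""
    log_strain = np.log(strains)
    log_cycles = np.log(cycles_to_failure)
    slope, intercept = np.polyfit(log_strain, log_cycles, 1)
    return -slope, np.exp(intercept)

# Synthetic data generated from N = 1000 * strain**-2.
strains = np.array([0.5, 1.0, 2.0])
cycles = 1000.0 * strains ** -2.0
m, C = fit_fatigue_power_law(strains, cycles)
print(round(m, 3), round(C, 1))  # 2.0 1000.0
```

Such a fit captures the qualitative finding, that fewer cycles are needed at higher strain, while leaving the mechanistic question of how cell processes accumulate damage open.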

  20. Investigating Brittle Rock Failure and Associated Seismicity Using Laboratory Experiments and Numerical Simulations

    NASA Astrophysics Data System (ADS)

    Zhao, Qi

    Rock failure is a complex process that involves elastic and plastic deformation, microscopic cracking, macroscopic fracturing, and frictional slipping of fractures. Understanding this complex behaviour has been the focus of a significant amount of research. In this work, the combined finite-discrete element method (FDEM) was first employed to study (1) the influence of rock discontinuities on hydraulic fracturing and associated seismicity and (2) the influence of in-situ stress on seismic behaviour. Simulated seismic events were analyzed using post-processing tools including frequency-magnitude distribution (b-value), spatial fractal dimension (D-value), seismic rate, and fracture clustering. These simulations demonstrated that at the local scale, fractures tended to propagate following the rock mass discontinuities, while at the reservoir scale, they developed in the direction parallel to the maximum in-situ stress. Moreover, the seismic signature (i.e., b-value, D-value, and seismic rate) can help to distinguish different phases of the failure process. The FDEM modelling technique and the developed analysis tools were then coupled with laboratory experiments to further investigate the different phases of the progressive rock failure process. Firstly, a uniaxial compression experiment, monitored using a time-lapse ultrasonic tomography method, was carried out and reproduced by the numerical model. Using this combination of technologies, the entire deformation and failure process was studied at macroscopic and microscopic scales. The results not only illustrated the rock failure and seismic behaviours at different stress levels, but also suggested several precursory behaviours indicating the catastrophic failure of the rock. Secondly, rotary shear experiments were conducted using a newly developed rock physics experimental apparatus (ERDμ-T) that was paired with X-ray micro-computed tomography (μCT).
This combination of technologies has significant advantages over conventional rotary shear experiments, since it allowed direct observation of how two rough surfaces interact and deform without perturbing the experimental conditions. Some intriguing observations were made pertaining to key areas of the study of fault evolution, making possible a more comprehensive interpretation of the frictional sliding behaviour. Lastly, a carefully calibrated FDEM model built on the rotary experiment was used to investigate facets that the experiment could not resolve, for example, the time-continuous stress condition and the seismic activity on the shear surface. The model reproduced the mechanical behaviour observed in the laboratory experiment, shedding light on the understanding of fault evolution.
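The b-value used in the seismic-signature analysis above is commonly estimated by the Aki maximum-likelihood formula, b = log10(e) / mean(M - Mc), where Mc is the completeness magnitude. The synthetic catalogue below is illustrative, not the thesis's data.

```python
import math

def aki_b_value(magnitudes, m_min):
    """Maximum-likelihood b-value (Aki, 1965) for a Gutenberg-Richter
    frequency-magnitude distribution; m_min is the completeness
    magnitude below which events are discarded."""
    mags = [m for m in magnitudes if m >= m_min]
    mean_excess = sum(m - m_min for m in mags) / len(mags)
    return math.log10(math.e) / mean_excess

# Synthetic catalogue whose mean magnitude excess is ~0.4343,
# so the estimate should come out near b = 1.
mags = [1.4343, 1.0, 1.8686]
print(round(aki_b_value(mags, 1.0), 3))  # 1.0
```

Tracking this estimate over successive loading stages is one way such simulations distinguish the phases of progressive failure: the b-value typically drops as large, organized fractures begin to dominate the event population.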

Top