Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis
NASA Astrophysics Data System (ADS)
Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang
2017-07-01
Traditional reliability evaluation of machine center components overlooks failure propagation, so the component reliability model exhibits deviation and the evaluation results are low. To address these problems, a new reliability evaluation method based on cascading failure analysis and failure-influenced-degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure-influenced degrees of the system components are assessed using the adjacency matrix and its transpose, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, with the following results: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure-influenced degree, which provides a theoretical basis for reliability allocation of the machine center system.
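To make the ranking step concrete, the sketch below applies a PageRank-style power iteration to a directed cascading-failure graph. It is a minimal illustration, not the paper's implementation: the component names, propagation edges, and the use of the transposed adjacency matrix to score how strongly a component is influenced are all assumptions.

```python
# Hedged sketch: rank machine-center components by failure-influence degree using
# power-iteration PageRank on a directed cascading-failure graph (hypothetical graph).

def pagerank(adj, damping=0.85, tol=1e-9, max_iter=200):
    """adj[i][j] = 1 if a failure of component i can propagate to component j."""
    n = len(adj)
    out_deg = [sum(row) for row in adj]
    rank = [1.0 / n] * n
    for _ in range(max_iter):
        new = [(1.0 - damping) / n] * n
        for i in range(n):
            share = rank[i] / (out_deg[i] if out_deg[i] else n)
            for j in range(n):
                if out_deg[i] == 0 or adj[i][j]:   # dangling nodes spread rank evenly
                    new[j] += damping * share
        if sum(abs(a - b) for a, b in zip(new, rank)) < tol:
            break
        rank = new
    return new

components = ["spindle", "gearbox", "hydraulic unit", "NC controller"]
adj = [            # hypothetical failure-propagation edges
    [0, 1, 0, 1],  # spindle failure can propagate to gearbox and controller
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]
# Running PageRank on the transposed graph scores how strongly each component is influenced.
adj_T = [list(col) for col in zip(*adj)]
for name, score in sorted(zip(components, pagerank(adj_T)), key=lambda x: -x[1]):
    print(f"{name:>14s}  {score:.3f}")
```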
Stress Analysis of B-52B and B-52H Air-Launching Systems Failure-Critical Structural Components
NASA Technical Reports Server (NTRS)
Ko, William L.
2005-01-01
The operational life analysis of any airborne failure-critical structural component requires the stress-load equation, which relates the applied load to the maximum tangential tensile stress at the critical stress point. The failure-critical structural components identified are the B-52B Pegasus pylon adapter shackles, B-52B Pegasus pylon hooks, B-52H airplane pylon hooks, B-52H airplane front fittings, B-52H airplane rear pylon fitting, and the B-52H airplane pylon lower sway brace. Finite-element stress analysis was performed on the said structural components, and the critical stress point was located and the stress-load equation was established for each failure-critical structural component. The ultimate load, yield load, and proof load needed for operational life analysis were established for each failure-critical structural component.
NASA Astrophysics Data System (ADS)
Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah, Riveli, Nowo
2017-03-01
The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accident that impacts human health and the environment, so risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in the regasification unit of an LNG terminal, where the failure is caused by Boiling Liquid Expanding Vapor Explosion (BLEVE) and jet fire in the LNG storage tank. The failure probability is determined using Fault Tree Analysis (FTA), and the heat radiation generated by these events is also calculated. Fault trees for BLEVE and jet fire on the storage tank have been constructed, giving a failure probability of 5.63 × 10⁻¹⁹ for BLEVE and 9.57 × 10⁻³ for jet fire. The failure probability for jet fire is high enough that it needs to be reduced by customizing the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. After this customization, the failure probability is reduced to 4.22 × 10⁻⁶.
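The arithmetic behind a fault-tree top-event probability is simple once the gate structure is fixed. The sketch below is a hedged illustration with an assumed gate structure and assumed basic-event probabilities, not the paper's actual BLEVE or jet-fire trees.

```python
# Minimal fault-tree sketch: independent basic events combined through OR and AND gates.
from functools import reduce

def p_or(*probs):   # probability that at least one input event occurs
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

def p_and(*probs):  # probability that all input events occur
    return reduce(lambda acc, p: acc * p, probs, 1.0)

# Hypothetical basic events for a jet-fire top event on an LNG storage tank.
p_release  = p_or(1.0e-4, 5.0e-5, 2.0e-5)   # flange leak OR pipe crack OR valve failure
p_ignition = 2.0e-2                          # ignition given a release (assumed)
p_jet_fire = p_and(p_release, p_ignition)    # top event probability
print(f"P(jet fire) = {p_jet_fire:.2e}")
```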
SCADA alarms processing for wind turbine component failure detection
NASA Astrophysics Data System (ADS)
Gonzalez, E.; Reder, M.; Melero, J. J.
2016-09-01
Wind turbine failures and downtime can often compromise the profitability of a wind farm because of their high impact on operation and maintenance (O&M) costs. Early detection of failures can facilitate the changeover from corrective maintenance towards a predictive approach. This paper presents a cost-effective methodology that combines various alarm analysis techniques, using data from the Supervisory Control and Data Acquisition (SCADA) system, in order to detect component failures. The approach categorises the alarms according to a reviewed taxonomy, turning overwhelming data into valuable information to assess component status. Different alarm analysis techniques are then applied for two purposes: evaluating the capability of the SCADA alarm system to detect failures, and investigating the relation between faults in some components and subsequent failures in others. Various case studies are presented and discussed. The study highlights the relationship between faulty behaviour in different components, and between failures and adverse environmental conditions.
Decomposition-Based Failure Mode Identification Method for Risk-Free Design of Large Systems
NASA Technical Reports Server (NTRS)
Tumer, Irem Y.; Stone, Robert B.; Roberts, Rory A.; Clancy, Daniel (Technical Monitor)
2002-01-01
When designing products, it is crucial to assure failure- and risk-free operation in the intended operating environment. Failures are typically studied and eliminated as much as possible during the early stages of design. The few failures that go undetected result in unacceptable damage and losses in high-risk applications where public safety is of concern. Published NASA and NTSB accident reports point to a variety of components identified as sources of failures in the reported cases. In previous work, data from these reports were processed and placed in matrix form for all the system components and failure modes encountered, and then manipulated using matrix methods to determine similarities between the different components and failure modes. In this paper, these matrices are represented as a linear combination of failure modes, mathematically formed using Principal Components Analysis (PCA) decomposition. The PCA decomposition results in a low-dimensional representation of all failure modes and components of interest, expressed in a transformed coordinate system. Such a representation opens the way for efficient pattern analysis and prediction of the failure modes with the highest potential risks to the final product, rather than making decisions based on the large space of component and failure mode data. The mathematics of the proposed method are explained first using a simple example problem. The method is then applied to component failure data gathered from helicopter accident reports to demonstrate its potential.
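The decomposition idea can be sketched in a few lines: a component-by-failure-mode matrix is mean-centered and factored so that components can be compared in a low-dimensional coordinate system. The counts, component names, and failure-mode labels below are hypothetical, not the NTSB data used in the paper.

```python
# Minimal PCA-via-SVD sketch of the component/failure-mode decomposition (toy data).
import numpy as np

failure_modes = ["fatigue", "corrosion", "wear"]
components    = ["rotor blade", "gearbox", "hydraulic pump", "tail rotor"]
counts = np.array([            # rows = components, columns = failure-mode occurrence counts
    [12, 3, 0],
    [10, 4, 1],
    [ 0, 1, 9],
    [ 1, 0, 8],
], dtype=float)

centered = counts - counts.mean(axis=0)          # mean-center before PCA
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
scores = U[:, :2] * S[:2]                        # each component projected onto the first two PCs
for name, (pc1, pc2) in zip(components, scores):
    print(f"{name:>14s}  PC1={pc1:+6.2f}  PC2={pc2:+6.2f}")   # nearby rows -> similar failure behaviour
```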
Development of STS/Centaur failure probabilities liftoff to Centaur separation
NASA Technical Reports Server (NTRS)
Hudson, J. M.
1982-01-01
The results of an analysis to determine STS/Centaur catastrophic vehicle response probabilities for the phases of vehicle flight from STS liftoff to Centaur separation from the Orbiter are presented. The analysis considers only category one component failure modes as contributors to the vehicle response mode probabilities. The relevant component failure modes are grouped into one of fourteen categories of potential vehicle behavior. By assigning failure rates to each component, for each of its failure modes, the STS/Centaur vehicle response probabilities in each phase of flight can be calculated. The results of this study will be used in a DOE analysis to ascertain the hazard from carrying a nuclear payload on the STS.
Space tug propulsion system failure mode, effects and criticality analysis
NASA Technical Reports Server (NTRS)
Boyd, J. W.; Hardison, E. P.; Heard, C. B.; Orourke, J. C.; Osborne, F.; Wakefield, L. T.
1972-01-01
For purposes of the study, the propulsion system was considered as consisting of the following: (1) main engine system, (2) auxiliary propulsion system, (3) pneumatic system, (4) hydrogen feed, fill, drain and vent system, (5) oxygen feed, fill, drain and vent system, and (6) helium reentry purge system. Each component was critically examined to identify possible failure modes and the subsequent effect on mission success. Each space tug mission consists of three phases: launch to separation from shuttle, separation to redocking, and redocking to landing. The analysis considered the results of failure of a component during each phase of the mission. After the failure modes of each component were tabulated, those components whose failure would result in possible or certain loss of mission or inability to return the Tug to ground were identified as critical components and a criticality number determined for each. The criticality number of a component denotes the number of mission failures in one million missions due to the loss of that component. A total of 68 components were identified as critical with criticality numbers ranging from 1 to 2990.
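A criticality number of the kind described above is an expected number of mission losses per one million missions attributable to a component failure mode. The short calculation below is illustrative only; the failure rate, mission duration, and loss probability are assumed values, not figures from the study.

```python
# Illustrative criticality-number calculation (hypothetical rates and factors).
import math

failure_rate_per_hr = 2.0e-6    # assumed component failure rate
mission_hours       = 150.0     # assumed mission duration
p_loss_given_fail   = 0.5       # assumed probability the failure mode causes loss of mission

p_fail_in_mission  = 1.0 - math.exp(-failure_rate_per_hr * mission_hours)
criticality_number = p_fail_in_mission * p_loss_given_fail * 1.0e6
print(round(criticality_number, 1))   # ~150 mission losses per million missions
```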
NASA Technical Reports Server (NTRS)
Williams, R. E.; Kruger, R.
1980-01-01
Estimation procedures are described for measuring component failure rates, for comparing the failure rates of two different groups of components, and for formulating confidence intervals for testing hypotheses (based on failure rates) that the two groups perform similarly or differently. Appendix A contains an example of an analysis in which these methods are applied to investigate the characteristics of two groups of spacecraft components. The estimation procedures are adaptable to system level testing and to monitoring failure characteristics in orbit.
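As a rough illustration of this kind of estimation, the sketch below computes Poisson rate estimates and approximate confidence intervals for two groups of components and compares them with a rate ratio. The counts are made up and the normal approximation stands in for the report's exact procedures.

```python
# Hedged sketch: failure-rate point estimates, approximate 95% CIs, and a two-group comparison.
import math

def rate_ci(failures, exposure_hours, z=1.96):
    """Poisson rate estimate with a normal-approximation confidence interval."""
    lam = failures / exposure_hours
    half = z * math.sqrt(failures) / exposure_hours
    return lam, max(lam - half, 0.0), lam + half

lam_a, lo_a, hi_a = rate_ci(failures=14, exposure_hours=2.0e5)
lam_b, lo_b, hi_b = rate_ci(failures=6,  exposure_hours=2.0e5)
print(f"group A: {lam_a:.2e}/hr  (95% CI {lo_a:.2e} - {hi_a:.2e})")
print(f"group B: {lam_b:.2e}/hr  (95% CI {lo_b:.2e} - {hi_b:.2e})")
print(f"rate ratio A/B: {lam_a / lam_b:.2f}")   # overlapping CIs suggest similar behaviour
```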
Solder Reflow Failures in Electronic Components During Manual Soldering
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander; Greenwell, Chris; Felt, Frederick
2008-01-01
This viewgraph presentation reviews solder reflow failures that occur in electronic components during manual soldering. It discusses the specifics of manual-soldering-induced failures in plastic devices with internal solder joints. The failure analysis revealed that molten solder had squeezed up to the die surface along the die/molding compound interface; because the dice were not protected with glassivation, the solder shorted the gate and source to the drain contact. The failure analysis concluded that the parts failed due to overheating during manual soldering.
Failure analysis of aluminum alloy components
NASA Technical Reports Server (NTRS)
Johari, O.; Corvin, I.; Staschke, J.
1973-01-01
Analysis of six service failures in aluminum alloy components which failed in aerospace applications is reported. Identification of fracture surface features from fatigue and overload modes was straightforward, though the specimens were not always in the clean, smear-free condition most suitable for failure analysis. The presence of corrosion products and of chemically attacked or mechanically rubbed areas hindered precise determination of the cause of crack initiation, which was then inferred indirectly from the scanning electron fractography results. In five failures the crack propagation was by fatigue, though in each case the fatigue crack initiated from a different cause. Some of these causes could be eliminated in future components by better process control. In one failure, the cause was determined to be impact during a crash; the features of impact fracture were distinguished from overload fractures by direct comparison of the received specimens with laboratory-generated failures.
NASA Technical Reports Server (NTRS)
Packard, Michael H.
2002-01-01
Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used for taking test data related to particular failure modes and calculating failure rate distributions of electronic and electromechanical components. How can these individual failure time distributions of structural, electronic and electromechanical component failure modes be effectively combined into a top level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic model of a turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools (Quantitative Risk Assessment Software (QRAS) etc.). Hypothetical PSA results for a number of structural components along with mitigation factors that would restrict the failure mode from propagating to a Loss of Mission (LOM) failure were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to the overall Mission Success (MS) is also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, inclusion of new sensor detection of faults and other upgrades were evaluated in determining overall turbine engine reliability.
A case study in nonconformance and performance trend analysis
NASA Technical Reports Server (NTRS)
Maloy, Joseph E.; Newton, Coy P.
1990-01-01
As part of NASA's effort to develop an agency-wide approach to trend analysis, a pilot nonconformance and performance trending analysis study was conducted on the Space Shuttle auxiliary power unit (APU). The purpose of the study was to (1) demonstrate that nonconformance analysis can be used to identify repeating failures of a specific item (and the associated failure modes and causes) and (2) determine whether performance parameters could be analyzed and monitored to provide an indication of component or system degradation prior to failure. The nonconformance analysis of the APU did identify repeating component failures, which possibly could be reduced if key performance parameters were monitored and analyzed. The performance-trending analysis verified that the characteristics of hardware parameters can be effective in detecting degradation of hardware performance prior to failure.
NASA Technical Reports Server (NTRS)
White, A. L.
1983-01-01
This paper examines the reliability of three architectures for six components. For each architecture, the probabilities of the failure states are given by algebraic formulas involving the component fault rate, the system recovery rate, and the operating time. The dominant failure modes are identified, and the change in reliability is considered with respect to changes in fault rate, recovery rate, and operating time. The major conclusions concern the influence of system architecture on failure modes and parameter requirements. Without this knowledge, a system designer may pick an inappropriate structure.
Independent Orbiter Assessment (IOA): Weibull analysis report
NASA Technical Reports Server (NTRS)
Raffaelli, Gary G.
1987-01-01
The Auxiliary Power Unit (APU) and Hydraulic Power Unit (HPU) Space Shuttle Subsystems were reviewed as candidates for demonstrating the Weibull analysis methodology. Three hardware components were identified as analysis candidates: the turbine wheel, the gearbox, and the gas generator. Detailed review of subsystem level wearout and failure history revealed the lack of actual component failure data. In addition, component wearout data were not readily available or would require a separate data accumulation effort by the vendor. Without adequate component history data being available, the Weibull analysis methodology application to the APU and HPU subsystem group was terminated.
Extended Testability Analysis Tool
NASA Technical Reports Server (NTRS)
Melcher, Kevin; Maul, William A.; Fulton, Christopher
2012-01-01
The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.
Failure mode analysis to predict product reliability.
NASA Technical Reports Server (NTRS)
Zemanick, P. P.
1972-01-01
The failure mode analysis (FMA) is described as a design tool to predict and improve product reliability. The objectives of the failure mode analysis are presented as they influence component design, configuration selection, the product test program, the quality assurance plan, and engineering analysis priorities. The detailed mechanics of performing a failure mode analysis are discussed, including one suggested format. Some practical difficulties of implementation are indicated, drawn from experience with preparing FMAs on the nuclear rocket engine program.
Analysis of failed nuclear plant components
NASA Astrophysics Data System (ADS)
Diercks, D. R.
1993-12-01
Argonne National Laboratory has conducted analyses of failed components from nuclear power-generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (1) intergranular stress-corrosion cracking of core spray injection piping in a boiling water reactor, (2) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (3) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (4) failure of pump seal wear rings by nickel leaching in a boiling water reactor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Sheldon, Frederick T.
Cyber-physical computing infrastructures typically consist of a number of interconnected sites. Their operation critically depends on both cyber components and physical components. Both types of components are subject to attacks of different kinds and frequencies, which must be accounted for in the initial provisioning and subsequent operation of the infrastructure via information security analysis. Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified against the results from game theory analysis and further used to explore larger-scale, real-world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the electric sector failure scenarios and impact analyses of the NESCOR Working Group Study. From the Section 5 electric sector representative failure scenarios, we extracted the four generic failure scenarios and grouped them into three specific threat categories (confidentiality, integrity, and availability) to the system. These specific failure scenarios serve as a demonstration of our simulation. The analysis using our ABGT simulation demonstrates how to model the electric sector functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the cyber-physical infrastructure network with respect to CIA.
DATMAN: A reliability data analysis program using Bayesian updating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, M.; Feltus, M.A.
1996-12-31
Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
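The core Bayesian update used in this style of analysis can be shown in a few lines. The sketch below is not DATMAN itself; it assumes a conjugate gamma prior on a component failure rate (interpreted as pseudo-failures over pseudo-hours) and hypothetical new plant experience.

```python
# Minimal conjugate gamma-Poisson update of a component failure rate (assumed numbers).
prior_failures, prior_hours = 2.0, 1.0e5   # gamma prior ~ "2 failures in 1e5 pseudo-hours"
new_failures,  new_hours    = 3,   4.0e4   # newly acquired operating experience

post_failures = prior_failures + new_failures    # posterior shape
post_hours    = prior_hours + new_hours          # posterior (pseudo-)exposure
print(f"posterior mean failure rate ~ {post_failures / post_hours:.2e} per hour")
```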
Reliability Quantification of Advanced Stirling Convertor (ASC) Components
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward
2010-01-01
The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system- and component-level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test-data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationship among the design variables based on physics, mechanics, material behavior models, interaction of different components, and their respective disciplines such as structures, materials, fluids, thermal, mechanical, electrical, etc. In addition, these models are based on the available test data, which can be updated, and the analysis refined as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationship with other components has demonstrated itself to be a superior technique to conventional reliability approaches based on failure rates derived from similar equipment or simply expert judgment.
Blowout Prevention System Events and Equipment Component Failures : 2016 SafeOCS Annual Report
DOT National Transportation Integrated Search
2017-09-22
The SafeOCS 2016 Annual Report, produced by the Bureau of Transportation Statistics (BTS), summarizes blowout prevention (BOP) equipment failures on marine drilling rigs in the Outer Continental Shelf. It includes an analysis of equipment component f...
Survivorship analysis of failure pattern after revision total hip arthroplasty.
Retpen, J B; Varmarken, J E; Jensen, J S
1989-12-01
Failure, defined as established indication for or performed re-revision of one or both components, was analyzed using survivorship methods in 306 revision total hip arthroplasties. The longevity of revision total hip arthroplasties was inferior to that of previously reported primary total hip arthroplasties. The overall survival curve was two-phased, with a late failure period associated with aseptic loosening of one or both components and an early failure period associated with causes of failure other than loosening. Separate survival curves for aseptic loosening of femoral and acetabular components showed late and almost simultaneous decline, but with a tendency toward a higher rate of failure for the femoral component. No differences in survival could be found between the Stanmore, Lubinus standard, and Lubinus long-stemmed femoral components. A short interval between the index operation and the revision and intraoperative and postoperative complications were risk factors for early failure. Young age was a risk factor for aseptic loosening of the femoral component. Intraoperative fracture of the femoral shaft was not a risk factor for secondary loosening. No difference in survival was found between primary cemented total arthroplasty and primary noncemented hemiarthroplasty.
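The survivorship calculation behind curves like these is typically a Kaplan-Meier estimate. The sketch below uses made-up follow-up data (not the study's 306 revisions) and assumes no tied event times, simply to show how re-revisions and censored follow-ups combine into a survival curve.

```python
# Hedged Kaplan-Meier sketch for revision-arthroplasty survivorship (toy data, no ties).
def kaplan_meier(times, failed):
    """times in years; failed[i] is True for a re-revision, False for censored follow-up."""
    surv, prob = [], 1.0
    at_risk = sorted(zip(times, failed))
    n = len(at_risk)
    for i, (t, event) in enumerate(at_risk):
        if event:
            prob *= 1.0 - 1.0 / (n - i)     # one failure among those still at risk
            surv.append((t, prob))
    return surv

times  = [0.5, 1.2, 2.0, 3.5, 4.0, 5.5, 6.0, 7.5, 8.0, 9.0]
failed = [True, False, True, False, True, False, False, True, False, False]
for t, s in kaplan_meier(times, failed):
    print(f"survival beyond {t:>4.1f} yr: {s:.2f}")
```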
NASA Technical Reports Server (NTRS)
Monaghan, Mark W.; Gillespie, Amanda M.
2013-01-01
During the Shuttle era, NASA utilized a failure reporting system called Problem Reporting and Corrective Action (PRACA); its purpose was to identify and track system non-conformances. Over the years, the PRACA system evolved from a relatively simple way to identify system problems into a very complex tracking and report-generating database. The PRACA system became the primary method to categorize any and all anomalies, from corrosion to catastrophic failure. The systems documented in the PRACA system range from flight hardware to ground or facility support equipment. While the PRACA system is complex, it does contain all the failure modes, times of occurrence, lengths of system delay, parts repaired or replaced, and corrective actions performed. The difficulty is mining these data and then utilizing them to estimate component, Line Replaceable Unit (LRU), and system reliability metrics. In this paper, we identify a methodology to categorize qualitative data from the ground system PRACA database for common ground or facility support equipment. A heuristic developed for reviewing the PRACA data is then used to determine which reports identify a credible failure. These data are then used to determine inter-arrival times for estimating repairable component or LRU reliability. This analysis is used to determine the failure modes of the equipment, estimate the probability of each component failure mode, and support various quantitative techniques for repairable-system analysis. The result is an effective and concise reliability estimate for components used in manned spaceflight operations. The advantage is that the components or LRUs are evaluated in the same environment and conditions that occur during the launch process.
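The inter-arrival-time step is mechanical once credible failures have been screened. The sketch below assumes a short, hypothetical list of already-screened failure dates for one LRU (not real PRACA records) and turns them into inter-arrival times and a crude MTBF estimate that could feed a repairable-system model.

```python
# Minimal sketch: inter-arrival times and MTBF from screened failure dates (hypothetical).
from datetime import date

credible_failures = [date(2009, 3, 2), date(2009, 9, 18), date(2010, 4, 1),
                     date(2010, 11, 23), date(2011, 7, 30)]

inter_arrival_days = [(b - a).days for a, b in zip(credible_failures, credible_failures[1:])]
mtbf_days = sum(inter_arrival_days) / len(inter_arrival_days)
print(inter_arrival_days)                  # input to trend tests / repairable-system models
print(f"estimated MTBF ~ {mtbf_days:.0f} days")
```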
Reliability and availability analysis of a 10 kW@20 K helium refrigerator
NASA Astrophysics Data System (ADS)
Li, J.; Xiong, L. Y.; Liu, L. Q.; Wang, H. R.; Wang, B. M.
2017-02-01
A 10 kW@20 K helium refrigerator has been established in the Technical Institute of Physics and Chemistry, Chinese Academy of Sciences. To evaluate and improve this refrigerator’s reliability and availability, a reliability and availability analysis is performed. According to the mission profile of this refrigerator, a functional analysis is performed. The failure data of the refrigerator components are collected and failure rate distributions are fitted by software Weibull++ V10.0. A Failure Modes, Effects & Criticality Analysis (FMECA) is performed and the critical components with higher risks are pointed out. Software BlockSim V9.0 is used to calculate the reliability and the availability of this refrigerator. The result indicates that compressors, turbine and vacuum pump are the critical components and the key units of this refrigerator. The mitigation actions with respect to design, testing, maintenance and operation are proposed to decrease those major and medium risks.
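For context on the distribution-fitting step, the sketch below estimates Weibull shape and scale parameters for one component's time-to-failure data by simple median-rank regression. The failure times are synthetic and the method is a textbook stand-in, not the Weibull++ procedure used in the study.

```python
# Hedged Weibull-fitting sketch via median-rank regression (synthetic failure times).
import numpy as np

t = np.sort(np.array([1800., 2600., 3400., 4100., 5200., 6500., 8000.]))  # hours to failure
n = len(t)
ranks = np.arange(1, n + 1)
F = (ranks - 0.3) / (n + 0.4)              # Bernard's median-rank approximation

x = np.log(t)
y = np.log(-np.log(1.0 - F))               # linearized Weibull CDF: y = beta*x - beta*ln(eta)
beta, intercept = np.polyfit(x, y, 1)
eta = np.exp(-intercept / beta)
print(f"shape beta ~ {beta:.2f}, scale eta ~ {eta:.0f} h")   # beta > 1 suggests wear-out behaviour
```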
Giardina, M; Castiglia, F; Tomarchio, E
2014-12-01
Failure mode, effects and criticality analysis (FMECA) is a safety technique extensively used in many different industrial fields to identify and prevent potential failures. In the application of traditional FMECA, the risk priority number (RPN) is determined to rank the failure modes; however, the method has been criticised for having several weaknesses. Moreover, it is unable to adequately deal with human errors or negligence. In this paper, a new versatile fuzzy rule-based assessment model is proposed to evaluate the RPN index to rank both component failure and human error. The proposed methodology is applied to potential radiological over-exposure of patients during high-dose-rate brachytherapy treatments. The critical analysis of the results can provide recommendations and suggestions regarding safety provisions for the equipment and procedures required to reduce the occurrence of accidental events.
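For readers unfamiliar with the baseline that the fuzzy model replaces, the sketch below shows the conventional crisp RPN arithmetic on hypothetical brachytherapy-related failure modes; the scores are illustrative and the fuzzy rule base of the paper is only indicated in a comment.

```python
# Illustrative conventional RPN scoring (generic FMECA arithmetic, hypothetical scores).
failure_modes = [
    # (description, severity, occurrence, detection) on 1-10 scales
    ("source positioning error",       9, 3, 4),
    ("treatment-time data entry slip", 8, 4, 3),
    ("applicator connector fault",     7, 2, 5),
]
for desc, s, o, d in sorted(failure_modes, key=lambda m: -(m[1] * m[2] * m[3])):
    print(f"RPN={s * o * d:3d}  {desc}")
# A fuzzy variant replaces the crisp product with rules such as
# "IF severity is high AND detection is poor THEN priority is high".
```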
Ouyang, Min; Tian, Hui; Wang, Zhenghua; Hong, Liu; Mao, Zijun
2017-01-17
This article studies a general type of initiating events in critical infrastructures, called spatially localized failures (SLFs), which are defined as the failure of a set of infrastructure components distributed in a spatially localized area due to damage sustained, while other components outside the area do not directly fail. These failures can be regarded as a special type of intentional attack, such as bomb or explosive assault, or a generalized modeling of the impact of localized natural hazards on large-scale systems. This article introduces three SLFs models: node centered SLFs, district-based SLFs, and circle-shaped SLFs, and proposes a SLFs-induced vulnerability analysis method from three aspects: identification of critical locations, comparisons of infrastructure vulnerability to random failures, topologically localized failures and SLFs, and quantification of infrastructure information value. The proposed SLFs-induced vulnerability analysis method is finally applied to the Chinese railway system and can be also easily adapted to analyze other critical infrastructures for valuable protection suggestions. © 2017 Society for Risk Analysis.
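The circle-shaped SLF model can be illustrated on a toy network: every node within a given radius of an attack centre fails, and vulnerability is read off the surviving largest connected component. The coordinates, links, and radius below are invented for illustration and are not the Chinese railway data.

```python
# Minimal circle-shaped spatially localized failure (SLF) sketch on a toy network.
import math

nodes = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (3, 0), "E": (3, 1), "F": (4, 1)}
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F")]

def largest_component(alive, links):
    adj = {n: set() for n in alive}
    for u, v in links:
        if u in alive and v in alive:
            adj[u].add(v)
            adj[v].add(u)
    best, seen = 0, set()
    for start in alive:
        if start in seen:
            continue
        stack, size = [start], 0
        while stack:                      # depth-first traversal of one component
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            size += 1
            stack.extend(adj[u] - seen)
        best = max(best, size)
    return best

centre, radius = (1.0, 0.5), 1.0
survivors = {n for n, (x, y) in nodes.items()
             if math.hypot(x - centre[0], y - centre[1]) > radius}
print(f"surviving nodes: {sorted(survivors)}")
print(f"largest connected component after SLF: {largest_component(survivors, edges)}")
```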
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilkinson, V.K.; Young, J.M.
1995-07-01
The US Army's Project Manager, Advanced Field Artillery System/Future Armored Resupply Vehicle (PM-AFAS/FARV), is sponsoring the development of technologies that can be applied to the resupply vehicle for the Advanced Field Artillery System. The Engineering Technology Division of the Oak Ridge National Laboratory has proposed adding diagnostics/prognostics systems to four components of the Ammunition Transfer Arm of this vehicle, and a cost-benefit analysis was performed on the diagnostics/prognostics to show the potential savings that may be gained by incorporating these systems onto the vehicle. Possible savings could be in the form of reduced downtime, less unexpected or unnecessary maintenance, fewer regular maintenance checks, and/or lower collateral damage or loss. The diagnostics/prognostics systems are used to (1) help determine component problems, (2) determine the condition of the components, and (3) estimate the remaining life of the monitored components. The four components on the arm that are targeted for diagnostics/prognostics are (1) the electromechanical brakes, (2) the linear actuators, (3) the wheel/roller bearings, and (4) the conveyor drive system. These would be monitored using electrical signature analysis, vibration analysis, or a combination of both. Annual failure rates for the four components were obtained along with specifications for vehicle costs, crews, number of missions, etc. Accident scenarios based on component failures were postulated, and event trees for these scenarios were constructed to estimate the annual loss of the resupply vehicle, crew, or arm, or mission aborts. A levelized cost-benefit analysis was then performed to examine the costs of such failures, both with and without some level of failure reduction due to the diagnostics/prognostics systems. Any savings resulting from using diagnostics/prognostics were calculated.
Reliability Centred Maintenance (RCM) Analysis of Laser Machine in Filling Lithos at PT X
NASA Astrophysics Data System (ADS)
Suryono, M. A. E.; Rosyidi, C. N.
2018-03-01
PT. X uses automated machines that operate sixteen hours per day, so the machines must be maintained to preserve their availability. The aim of this research is to determine maintenance tasks according to the causes of component failure using Reliability Centred Maintenance (RCM) and to determine the optimal inspection frequency for the machine in the filling lithos process. In this research, RCM is used as an analysis tool to determine the critical component and to find optimal inspection frequencies that maximize the machine's reliability. From the analysis, we found that the critical machine in the filling lithos process is the laser machine in Line 2; we then determined the causes of its failure. The lastube component has the highest Risk Priority Number (RPN) among the components, which include the power supply, lens, chiller, laser siren, encoder, conveyor, and mirror galvo. Most of the components have operational consequences, and the others have hidden-failure consequences and safety consequences. Time-directed life-renewal tasks, failure-finding tasks, and servicing tasks can be used to address these consequences. The results of the data analysis show that preventive-maintenance inspection of the laser machine must be performed once a month to lower downtime.
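As a rough illustration of how an inspection frequency can be chosen, the sketch below uses a textbook-style model in which the breakdown rate falls inversely with the number of inspections and total monthly downtime is minimized. All parameter values are assumed and are not taken from the study.

```python
# Hedged textbook-style inspection-frequency sketch (assumed parameters).
breakdowns_at_one_inspection = 3.0   # assumed breakdowns/month with one monthly inspection
repair_hours     = 8.0               # assumed downtime per breakdown
inspection_hours = 1.5               # assumed downtime per inspection

def monthly_downtime(n):
    """Total downtime per month with n inspections, assuming breakdowns scale as 1/n."""
    return (breakdowns_at_one_inspection / n) * repair_hours + n * inspection_hours

best_n = min(range(1, 13), key=monthly_downtime)
print(f"inspect {best_n}x/month -> expected downtime {monthly_downtime(best_n):.1f} h/month")
```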
Independent Orbiter Assessment (IOA): Analysis of the mechanical actuation subsystem
NASA Technical Reports Server (NTRS)
Bacher, J. L.; Montgomery, A. D.; Bradway, M. W.; Slaughter, W. T.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Mechanical Actuation System (MAS) hardware. Specifically, the MAS hardware consists of the following components: Air Data Probe (ADP); Elevon Seal Panel (ESP); External Tank Umbilical (ETU); Ku-Band Deploy (KBD); Payload Bay Doors (PBD); Payload Bay Radiators (PBR); Personnel Hatches (PH); Vent Door Mechanism (VDM); and Startracker Door Mechanism (SDM). The IOA analysis process utilized available MAS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.
Reliability analysis based on the losses from failures.
Todinov, M T
2006-04-01
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis, linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure is a linear combination of the expected losses from failure associated with the separate failure modes scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in case of a system consisting of blocks arranged in series is achieved by determining for each block individually the reliabilities of the components in the block that minimize the sum of the capital, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages to cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different number of failures occurring during a specified time interval (e.g., during one year). 
The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
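The weighted-loss relation stated in the abstract above has a simple numerical form: the expected loss given failure is the sum of mode-specific expected losses weighted by the conditional probabilities that each mutually exclusive mode initiates the failure. The numbers below are hypothetical.

```python
# Worked sketch of expected loss given failure over mutually exclusive failure modes (toy data).
modes = [
    # (conditional probability the mode initiates failure, expected loss given that mode, $)
    (0.60,  20_000.0),   # seal leak
    (0.30,  55_000.0),   # bearing seizure
    (0.10, 180_000.0),   # shaft fracture
]
expected_loss_given_failure = sum(p * loss for p, loss in modes)
print(f"expected loss given failure ~ ${expected_loss_given_failure:,.0f}")
```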
Analysis of Emergency Diesel Generators Failure Incidents in Nuclear Power Plants
NASA Astrophysics Data System (ADS)
Hunt, Ronderio LaDavis
In their early years of operation, emergency diesel generators had a minimal rate of demand failures. Emergency diesel generators are designed to operate as a backup when the main source of electricity has been disrupted. Recently, EDGs (emergency diesel generators) have been failing at NPPs (nuclear power plants) around the United States, causing either station blackouts or loss of onsite and offsite power. These failures were of a specific type called demand failures. This thesis evaluated a problem that raised concern in the nuclear industry: the rate of roughly 1 EDG demand failure per year in 1997 grew to an excessive event of 4 EDG demand failures in a single year in 2011. To determine when the next such excessive event might occur and its possible causes, two analyses were conducted: a statistical analysis and a root cause analysis. The statistical analysis applied an extreme-event probability approach to determine the year in which the next excessive event is expected, as well as the probability of that event occurring. The root cause analysis investigated the potential causes of the excessive event by evaluating the EDG manufacturers, aging, policy changes/maintenance practices, and failure components, and examined the correlation between the demand failure data and historical data. Final results from the statistical analysis showed expectations of an excessive event occurring within a fixed range of probability, with a wider range of probability from the extreme-event probability approach. The root cause analysis of the demand failure data followed historical statistics for the EDG manufacturer, aging, and policy changes/maintenance practices, but indicated a possible cause of the excessive event in the failure components. Conclusions showed that predicting the next excessive demand failure year, its probability, and the next occurrence year of such failures with an acceptable confidence level was difficult, but it is likely that this type of failure will not be a 100-year event. It was noticeable that, as of 2005, the majority of EDG demand failures occurred within the main components. The overall analysis, based on the percentages obtained, indicates that it would be appropriate to state that the excessive event was caused by the overall age (wear and tear) of the emergency diesel generators in nuclear power plants. Future work will be to better determine the return period of the excessive event, once it has occurred a second time, by applying the extreme-event probability approach.
Efficient 3-D finite element failure analysis of compression loaded angle-ply plates with holes
NASA Technical Reports Server (NTRS)
Burns, S. W.; Herakovich, C. T.; Williams, J. G.
1987-01-01
Finite element stress analysis and the tensor polynomial failure criterion predict that failure always initiates at the interface between layers on the hole edge for notched angle-ply laminates loaded in compression. The angular location of initial failure is a function of the fiber orientation in the laminate. The dominant stress components initiating failure are shear stresses. It is shown that approximate symmetry can be used to reduce the computer resources required for the case of uniaxial loading.
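For readers unfamiliar with tensor polynomial criteria, the sketch below evaluates a common Tsai-Wu-type form for a single ply under in-plane stress. The ply strengths, interaction-term estimate, and stress state are assumed for illustration and are not the values used in the paper.

```python
# Hedged sketch of a tensor polynomial (Tsai-Wu type) failure-index evaluation (assumed data).
import math

Xt, Xc = 1500.0, 1200.0     # MPa, fiber-direction tensile / compressive strengths (assumed)
Yt, Yc = 50.0, 200.0        # MPa, transverse strengths (assumed)
S      = 70.0               # MPa, in-plane shear strength (assumed)

F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
F12 = -0.5 * math.sqrt(F11 * F22)          # common estimate of the interaction term

s1, s2, t12 = -400.0, -30.0, 45.0          # assumed ply stresses at the hole edge, MPa
index = F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2 + F66*t12**2 + 2*F12*s1*s2
print(f"failure index = {index:.2f}  (failure predicted when index >= 1)")
```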
Finite Element Creep-Fatigue Analysis of a Welded Furnace Roll for Identifying Failure Root Cause
NASA Astrophysics Data System (ADS)
Yang, Y. P.; Mohr, W. C.
2015-11-01
Creep-fatigue induced failures are often observed in engineering components operating under high temperature and cyclic loading. Understanding the creep-fatigue damage process and identifying the failure root cause are very important for preventing such failures and improving the lifetime of engineering components. Finite element analyses, including a heat transfer analysis and a creep-fatigue analysis, were conducted to model the cyclic thermal and mechanical process of a furnace roll in a continuous hot-dip coating line. Typically, the roll has a short life, <1 year, which has been a long-standing problem. The failure occurred in the weld joining an end bell to a roll shell and resulted in the complete 360° separation of the end bell from the roll shell. The heat transfer analysis was conducted to predict the temperature history of the roll by modeling heat convection from hot air inside the furnace. The creep-fatigue analysis was performed by inputting the predicted temperature history and applying mechanical loads. The analysis results showed that the failure resulted from a creep-fatigue mechanism rather than a creep mechanism. The difference in material properties between the filler metal and the base metal is the root cause of the roll failure, as it induces higher creep strain and stress at the interface between the weld and the HAZ.
NASA Astrophysics Data System (ADS)
Zeng, Yajun; Skibniewski, Miroslaw J.
2013-08-01
Enterprise resource planning (ERP) system implementations are often characterised with large capital outlay, long implementation duration, and high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is the key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches that have been mostly focused on meeting project budget and schedule objectives, the proposed approach intends to address the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system implementation usage failure and quantify the impact of critical component failures or critical risk events in the implementation process.
NASA Technical Reports Server (NTRS)
Vanschalkwyk, Christiaan Mauritz
1991-01-01
Many applications require that a control system be tolerant to the failure of its components. This is especially true for large space-based systems that must work unattended and with long periods between maintenance. Fault tolerance can be obtained by detecting the failure of a control system component, determining which component has failed, and reconfiguring the system so that the failed component is isolated from the controller. Component failure detection experiments conducted on an experimental space structure, the NASA Langley Mini-Mast, are presented. Two methodologies for failure detection and isolation (FDI) exist that do not require the specification of failure modes and are applicable to both actuators and sensors. These methods are known as the Failure Detection Filter and the method of Generalized Parity Relations. The latter method was applied to three different sensor types on the Mini-Mast. Failures were simulated in input-output data that were recorded during operation of the Mini-Mast. Both single and double sensor parity relations were tested, and the effect of several design parameters on the performance of these relations is discussed. The detection of actuator failures is also treated. It is shown that in all cases it is possible to identify the parity relations directly from input-output data. Frequency domain analysis is used to explain the behavior of the parity relations.
Independent Orbiter Assessment (IOA): Analysis of the active thermal control subsystem
NASA Technical Reports Server (NTRS)
Sinclair, S. K.; Parkman, W. E.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results corresponding to the Orbiter Active Thermal Control Subsystem (ATCS) are documented. The major purpose of the ATCS is to remove the heat generated during normal Shuttle operations from the Orbiter systems and subsystems. The four major components of the ATCS contributing to heat removal are: Freon Coolant Loops; Radiator and Flow Control Assembly; Flash Evaporator System; and Ammonia Boiler System. In order to perform the analysis, the IOA process utilized available ATCS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 310 failure modes analyzed, 101 were determined to be PCIs.
Independent Orbiter Assessment (IOA): Analysis of the remote manipulator system
NASA Technical Reports Server (NTRS)
Tangorra, F.; Grasmeder, R. F.; Montgomery, A. D.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Remote Manipulator System (RMS) are documented. The RMS hardware and software are primarily required for deploying and/or retrieving up to five payloads during a single mission, for capturing and retrieving free-flying payloads, and for performing Manipulator Foot Restraint operations. Specifically, the RMS hardware consists of the following components: end effector; displays and controls; manipulator controller interface unit; arm-based electronics; and the arm. The IOA analysis process utilized available RMS hardware drawings, schematics, and documents for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 574 failure modes analyzed, 413 were determined to be PCIs.
Preventing blood transfusion failures: FMEA, an effective assessment method.
Najafpour, Zhila; Hasoumi, Mojtaba; Behzadi, Faranak; Mohamadi, Efat; Jafary, Mohamadreza; Saeedi, Morteza
2017-06-30
Failure Mode and Effect Analysis (FMEA) is a method used to assess the risk of failures and harms to patients during the medical process and to identify the associated clinical issues. The aim of this study was to conduct an assessment of the blood transfusion process in a teaching general hospital, using FMEA as the method. A structured FMEA was conducted in our study, performed in 2014, and corrective actions were implemented and re-evaluated after 6 months. Sixteen 2-h sessions were held to perform the FMEA of the blood transfusion process, comprising five steps: establishing the context, selecting team members, analysing the processes, hazard analysis, and developing a risk reduction protocol for blood transfusion. Failure modes with the highest risk priority numbers (RPNs) were identified. The overall RPN scores ranged from 5 to 100, among which four failure modes were associated with RPNs over 75. The data analysis indicated that the failures with the highest RPNs were: labelling (RPN: 100), transfusion of blood or the component (RPN: 100), patient identification (RPN: 80), and sampling (RPN: 75). The results demonstrated that mis-transfusion of blood or a blood component is the most important error, which can lead to serious morbidity or mortality. Provision of training to the personnel on blood transfusion, raising awareness of hazards and appropriate preventative measures, as well as developing standard safety guidelines are essential, and must be implemented during all steps of blood and blood component transfusion.
A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp
High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable, closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
A simplified fragility analysis of fan type cable stayed bridges
NASA Astrophysics Data System (ADS)
Khan, R. A.; Datta, T. K.; Ahmad, S.
2005-06-01
A simplified fragility analysis of fan-type cable stayed bridges using the Probabilistic Risk Analysis (PRA) procedure is presented for determining their failure probability under random ground motion. Seismic input to the bridge supports is considered to be a risk-consistent response spectrum, which is obtained from a separate analysis. For the response analysis, the bridge deck is modeled as a beam supported on springs at different points. The stiffnesses of the springs are determined by a separate 2D static analysis of the cable-tower-deck system, which provides a coupled stiffness matrix for the spring system. A continuum method of analysis using dynamic stiffness is used to determine the dynamic properties of the bridges. The response of the bridge deck is obtained by the response spectrum method of analysis as applied to a multi-degree-of-freedom system, which duly takes into account the quasi-static component of bridge deck vibration. The fragility analysis includes uncertainties arising from the variation in ground motion, material properties, modeling, method of analysis, ductility factor, and damage concentration effect. The probability of failure of the bridge deck is determined by the First Order Second Moment (FOSM) method of reliability. A three-span, double-plane, symmetrical fan-type cable stayed bridge with a total span of 689 m is used as an illustrative example. The fragility curves for bridge deck failure are obtained under a number of parametric variations. Some of the important conclusions of the study indicate that (i) not only the vertical component but also the horizontal component of ground motion has a considerable effect on the probability of failure; (ii) ground motion with no time lag between support excitations gives a smaller probability of failure than ground motion with a very large time lag between support excitations; and (iii) the probability of failure may increase considerably under soft soil conditions.
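The FOSM step mentioned above reduces, in its simplest normal-variable form, to a reliability index and a corresponding failure probability. The capacity and demand statistics below are assumed purely for illustration and are not the bridge study's values.

```python
# Minimal FOSM sketch: reliability index beta and P(failure) for normal capacity/demand.
import math

mu_R, sigma_R = 250.0, 30.0      # assumed capacity statistics (consistent units)
mu_S, sigma_S = 160.0, 40.0      # assumed seismic demand statistics

beta = (mu_R - mu_S) / math.sqrt(sigma_R**2 + sigma_S**2)
p_failure = 0.5 * (1.0 - math.erf(beta / math.sqrt(2.0)))   # Phi(-beta)
print(f"reliability index beta = {beta:.2f}, P(failure) ~ {p_failure:.2e}")
```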
Failure analysis on optical fiber on swarm flight payload
NASA Astrophysics Data System (ADS)
Bourcier, Frédéric; Fratter, Isabelle; Teyssandier, Florent; Barenes, Magali; Dhenin, Jérémie; Peyriguer, Marie; Petre-Bordenave, Romain
2017-11-01
Failure analysis of optical components is usually carried out on standard testing devices, such as optical/electronic microscopes and spectrometers, using isolated but representative samples. Such analyses are not contactless and not totally non-invasive, so they cannot be used easily on flight models. Furthermore, for late payload or satellite integration/validation phases with tight schedules, it can be necessary to carry out a failure analysis directly on the flight hardware, in a cleanroom.
Structural reliability analysis of laminated CMC components
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Palko, Joseph L.; Gyekenyesi, John P.
1991-01-01
For laminated ceramic matrix composite (CMC) materials to realize their full potential in aerospace applications, design methods and protocols are a necessity. This work focuses on the time-independent failure response of these materials and presents a reliability analysis associated with the initiation of matrix cracking. A public domain computer algorithm is highlighted that was coupled with the laminate analysis of a finite element code and which serves as a design aid to analyze structural components made from laminated CMC materials. Issues relevant to the effect of component size are discussed, and a parameter estimation procedure is presented. The estimation procedure allows three parameters to be calculated from a failure population that has an underlying Weibull distribution.
A Comparison of Functional Models for Use in the Function-Failure Design Method
NASA Technical Reports Server (NTRS)
Stock, Michael E.; Stone, Robert B.; Tumer, Irem Y.
2006-01-01
When failure analysis and prevention, guided by historical design knowledge, are coupled with product design at its conception, shorter design cycles are possible. By decreasing the design time of a product in this manner, design costs are reduced and the product will better suit the customer's needs. Prior work indicates that similar failure modes occur in products (or components) with similar functionality. To capitalize on this finding, a knowledge base of historical failure information linked to functionality is assembled for use by designers. One possible use for this knowledge base is within the Elemental Function-Failure Design Method (EFDM). This design methodology and failure analysis tool begins at conceptual design and keeps the designer cognizant of failures that are likely to occur based on the product's functionality. The EFDM offers potential improvement over current failure analysis methods, such as FMEA, FMECA, and Fault Tree Analysis, because it can be implemented hand in hand with other conceptual design steps and carried throughout a product's design cycle. These other failure analysis methods can only truly be effective after a physical design has been completed. The EFDM, however, is only as good as the knowledge base that it draws from, and therefore it is of utmost importance to develop a knowledge base that will be suitable for use across a wide spectrum of products. One fundamental question that arises in using the EFDM is: at what level of detail should functional descriptions of components be encoded? This paper explores two approaches to populating a knowledge base with actual failure occurrence information from Bell 206 helicopters. Functional models expressed at various levels of detail are investigated to determine the necessary detail for an applicable knowledge base that can be used by designers in both new designs and redesigns. High-level and more detailed functional descriptions are derived for each failed component based on NTSB accident reports. To best record this data, standardized functional and failure mode vocabularies are used. Two separate function-failure knowledge bases are then created and compared. Results indicate that encoding failure data using more detailed functional models allows for a more robust knowledge base. Interestingly, however, when applying the EFDM, high-level descriptions continue to produce useful results when using the knowledge base generated from the detailed functional models.
Sensor Failure Detection of FASSIP System using Principal Component Analysis
NASA Astrophysics Data System (ADS)
Sudarno; Juarsa, Mulya; Santosa, Kussigit; Deswandri; Sunaryo, Geni Rina
2018-02-01
In the nuclear reactor accident at Fukushima Daiichi in Japan, the damage to the core and pressure vessel was caused by the failure of the active cooling system (the diesel generator was inundated by the tsunami). Thus, research on passive cooling systems for nuclear power plants is performed to improve the safety aspects of nuclear reactors. The FASSIP system (Passive System Simulation Facility) is an installation used to study the characteristics of passive cooling systems at nuclear power plants. The accuracy of the sensor measurements of the FASSIP system is essential, because they are the basis for determining the characteristics of a passive cooling system. In this research, a sensor failure detection method for the FASSIP system is developed, so that indications of sensor failures can be detected early. The method used is Principal Component Analysis (PCA) to reduce the dimension of the sensor data, with the Squared Prediction Error (SPE) and Hotelling's T² statistic as criteria for detecting sensor failure indications. The results show that the PCA method is capable of detecting the occurrence of a failure at any sensor.
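The PCA-based detection idea generalizes well beyond this facility. A minimal sketch follows, using synthetic data; the sensor layout, number of retained components, and empirical control limits are assumptions for illustration and are not the FASSIP implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal operation" data: 200 samples from 6 correlated sensors.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 6))
X_train = latent @ mixing + 0.05 * rng.normal(size=(200, 6))

# Fit PCA on mean-centered training data and keep k principal components.
mean = X_train.mean(axis=0)
Xc = X_train - mean
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
P = Vt[:k].T                         # loading matrix
lam = (s[:k] ** 2) / (len(Xc) - 1)   # retained eigenvalues (variances)

def spe_and_t2(x):
    """Squared Prediction Error and Hotelling's T^2 for one sample."""
    xc = x - mean
    scores = xc @ P
    residual = xc - scores @ P.T
    spe = float(residual @ residual)
    t2 = float(np.sum(scores**2 / lam))
    return spe, t2

# Empirical 99th-percentile control limits from training data (a simple
# choice; analytical limits could be used instead).
train_stats = np.array([spe_and_t2(x) for x in X_train])
spe_limit, t2_limit = np.percentile(train_stats, 99, axis=0)

# A test sample with a simulated bias fault injected on sensor index 3.
x_faulty = X_train[0].copy()
x_faulty[3] += 1.0
spe, t2 = spe_and_t2(x_faulty)
print(f"SPE={spe:.3f} (limit {spe_limit:.3f}), T2={t2:.2f} (limit {t2_limit:.2f})")
print("fault flagged:", spe > spe_limit or t2 > t2_limit)
```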
Ferrographic and spectrometer oil analysis from a failed gas turbine engine
NASA Technical Reports Server (NTRS)
Jones, W. R., Jr.
1982-01-01
An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. It was concluded that a severe surge may have caused interference between rotating and stationary compressor parts that either directly or indirectly ignited the titanium components. Several engine oil samples (before and after the failure) were analyzed with a Ferrograph and with plasma, atomic absorption, and emission spectrometers to see if this information would aid in the engine failure diagnosis. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism nor a high level of wear debris was detected in the engine oil sample taken just prior to the test in which the failure occurred. However, low concentrations (0.2 to 0.5 ppm) of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations (2 ppm) were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure. The oil analyses eliminated a lubrication system bearing or shaft seal failure as the cause of the engine failure.
Preliminary Failure Modes and Effects Analysis of the US DCLL Test Blanket Module
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee C. Cadwallader
2010-06-01
This report presents the results of a preliminary failure modes and effects analysis (FMEA) of a small tritium-breeding test blanket module design for the International Thermonuclear Experimental Reactor. The FMEA was quantified with “generic” component failure rate data, and the failure events are binned into postulated initiating event families and frequency categories for safety assessment. An appendix to this report contains repair time data to support an occupational radiation exposure assessment for test blanket module maintenance.
Preliminary Failure Modes and Effects Analysis of the US DCLL Test Blanket Module
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee C. Cadwallader
2007-08-01
This report presents the results of a preliminary failure modes and effects analysis (FMEA) of a small tritium-breeding test blanket module design for the International Thermonuclear Experimental Reactor. The FMEA was quantified with “generic” component failure rate data, and the failure events are binned into postulated initiating event families and frequency categories for safety assessment. An appendix to this report contains repair time data to support an occupational radiation exposure assessment for test blanket module maintenance.
Probabilistic finite elements for fracture and fatigue analysis
NASA Technical Reports Server (NTRS)
Liu, W. K.; Belytschko, T.; Lawrence, M.; Besterfield, G. H.
1989-01-01
The fusion of the probabilistic finite element method (PFEM) and reliability analysis for probabilistic fracture mechanics (PFM) is presented. A comprehensive method for determining the probability of fatigue failure for curved crack growth was developed. The criterion for failure or performance function is stated as: the fatigue life of a component must exceed the service life of the component; otherwise failure will occur. An enriched element that has the near-crack-tip singular strain field embedded in the element is used to formulate the equilibrium equation and solve for the stress intensity factors at the crack-tip. Performance and accuracy of the method is demonstrated on a classical mode 1 fatigue problem.
Advanced Self-Calibrating, Self-Repairing Data Acquisition System
NASA Technical Reports Server (NTRS)
Medelius, Pedro J. (Inventor); Eckhoff, Anthony J. (Inventor); Angel, Lucena R. (Inventor); Perotti, Jose M. (Inventor)
2002-01-01
An improved self-calibrating and self-repairing Data Acquisition System (DAS) is described for use in inaccessible areas, such as onboard spacecraft, that is capable of autonomously performing required system health checks and failure detection. When required, self-repair is implemented utilizing a "spare parts/tool box" system. The available number of spare components primarily depends upon each component's predicted reliability, which may be determined using Mean Time Between Failures (MTBF) analysis. Failing or degrading components are electronically removed and disabled to reduce power consumption, before being electronically replaced with spare components.
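One common way to turn an MTBF estimate into a spare count is a Poisson model of expected failures over the mission; the sketch below is a hypothetical illustration (the MTBF, mission duration, unit count, and confidence target are invented, not values from this patent).

```python
import math

def spares_needed(mtbf_hours, mission_hours, n_units, confidence=0.95):
    """Smallest spare count s such that P(failures <= s) >= confidence,
    assuming failures follow a Poisson process with rate n_units / MTBF."""
    lam = n_units * mission_hours / mtbf_hours   # expected number of failures
    cumulative, k = 0.0, 0
    while True:
        cumulative += math.exp(-lam) * lam**k / math.factorial(k)
        if cumulative >= confidence:
            return k
        k += 1

# Hypothetical example: 8 identical channels, 50,000 h MTBF, 5-year mission.
print(spares_needed(mtbf_hours=50_000, mission_hours=5 * 8760, n_units=8))
```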
Failure modes and effects analysis automation
NASA Technical Reports Server (NTRS)
Kamhieh, Cynthia H.; Cutts, Dannie E.; Purves, R. Byron
1988-01-01
A failure modes and effects analysis (FMEA) assistant was implemented as a knowledge based system and will be used during design of the Space Station to aid engineers in performing the complex task of tracking failures throughout the entire design effort. The three major directions in which automation was pursued were the clerical components of the FMEA process, the knowledge acquisition aspects of FMEA, and the failure propagation/analysis portions of the FMEA task. The system is accessible to design, safety, and reliability engineers at single user workstations and, although not designed to replace conventional FMEA, it is expected to decrease by many man years the time required to perform the analysis.
Mass and Reliability Source (MaRS) Database
NASA Technical Reports Server (NTRS)
Valdenegro, Wladimir
2017-01-01
The Mass and Reliability Source (MaRS) Database consolidates component mass and reliability data for all Orbital Replacement Units (ORUs) on the International Space Station (ISS) into a single database. It was created to help engineers develop a parametric model that relates hardware mass and reliability. MaRS supplies relevant failure data at the lowest possible component level while providing support for risk, reliability, and logistics analysis. Random-failure data is usually linked to the ORU assembly. MaRS uses this data to identify and display the lowest possible component failure level. As seen in Figure 1, the failure point is identified to the lowest level: Component 2.1. This is useful for efficient planning of spare supplies, supporting long-duration crewed missions, allowing quicker trade studies, and streamlining diagnostic processes. MaRS is composed of information from various databases: MADS (operating hours), VMDB (indentured part lists), and ISS PART (failure data). This information is organized in Microsoft Excel and accessed through a program made in Microsoft Access (Figure 2). The focus of the Fall 2017 internship tour was to identify the components that were the root cause of failure from the given random-failure data, develop a taxonomy for the database, and attach material headings to the component list. Secondary objectives included verifying the integrity of the data in MaRS, eliminating any part discrepancies, and generating documentation for future reference. Due to the nature of the random-failure data, data mining had to be done manually, without the assistance of an automated program, to ensure positive identification.
DEPEND - A design environment for prediction and evaluation of system dependability
NASA Technical Reports Server (NTRS)
Goswami, Kumar K.; Iyer, Ravishankar K.
1990-01-01
The development of DEPEND, an integrated simulation environment for the design and dependability analysis of fault-tolerant systems, is described. DEPEND models both hardware and software components at a functional level, and allows automatic failure injection to assess system performance and reliability. It relieves the user of the work needed to inject failures, maintain statistics, and output reports. The automatic failure injection scheme is geared toward evaluating a system under high stress (workload) conditions. The failures that are injected can affect both hardware and software components. To illustrate the capability of the simulator, a distributed system which employs a prediction-based, dynamic load-balancing heuristic is evaluated. Experiments were conducted to determine the impact of failures on system performance and to identify the failures to which the system is especially susceptible.
Failure Mode Identification Through Clustering Analysis
NASA Technical Reports Server (NTRS)
Arunajadai, Srikesh G.; Stone, Robert B.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)
2002-01-01
Research has shown that nearly 80% of the costs and problems are created in product development and that cost and quality are essentially designed into products in the conceptual stage. Currently, failure identification procedures (such as FMEA (Failure Modes and Effects Analysis), FMECA (Failure Modes, Effects and Criticality Analysis) and FTA (Fault Tree Analysis)) and design of experiments are being used for quality control and for the detection of potential failure modes during the detail design stage or post-product launch. Though all of these methods have their own advantages, they do not indicate which predominant failures a designer should focus on while designing a product. This work uses a functional approach to identify failure modes, which hypothesizes that similarities exist between different failure modes based on the functionality of the product/component. In this paper, a statistical clustering procedure is proposed to retrieve information on the set of predominant failures that a function experiences. The various stages of the methodology are illustrated using a hypothetical design example.
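To make the clustering idea concrete, here is a small sketch that groups components by the mix of failure modes they experience; the failure-mode counts, the k-means choice, and the normalization step are all assumptions for illustration, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: rows are components, columns are counts of observed
# failure modes (e.g. fatigue, wear, corrosion, fracture); values are invented.
counts = np.array([
    [9, 1, 0, 2],   # component A
    [8, 2, 1, 1],   # component B
    [0, 7, 6, 0],   # component C
    [1, 6, 8, 1],   # component D
    [2, 1, 1, 9],   # component E
], dtype=float)

# Normalize each row to a failure-mode distribution so clustering reflects
# the mix of failure modes rather than the raw number of failures.
profiles = counts / counts.sum(axis=1, keepdims=True)

def kmeans(X, k, iters=50):
    """Plain k-means; components in one cluster share a failure-mode profile."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

print(kmeans(profiles, k=3))   # cluster label per component
```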
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jadaan, O.M.; Powers, L.M.; Nemeth, N.N.
1995-08-01
A probabilistic design methodology which predicts the fast fracture and time-dependent failure behavior of thermomechanically loaded ceramic components is discussed using the CARES/LIFE integrated design computer program. Slow crack growth (SCG) is assumed to be the mechanism responsible for delayed failure behavior. Inert strength and dynamic fatigue data obtained from testing coupon specimens (O-ring and C-ring specimens) are initially used to calculate the fast fracture and SCG material parameters as a function of temperature using the parameter estimation techniques available with the CARES/LIFE code. Finite element analysis (FEA) is used to compute the stress distributions for the tube as a function of applied pressure. Knowing the stress and temperature distributions and the fast fracture and SCG material parameters, the lifetime for a given tube can be computed. A stress-failure probability-time to failure (SPT) diagram is subsequently constructed for these tubes. Such a diagram can be used by design engineers to estimate the time to failure at a given failure probability level for a component subjected to a given thermomechanical load.
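A stripped-down illustration of the kind of calculation such a workflow automates is shown below, using a two-parameter Weibull strength distribution together with a power-law slow-crack-growth strength transformation; the parameter values and the uniaxial, unit-volume simplification are invented for the sketch and are not the tube data from the paper.

```python
import numpy as np

# Hypothetical Weibull fast-fracture parameters (characteristic strength in MPa).
m, sigma_0 = 10.0, 400.0
# Hypothetical power-law SCG parameters (exponent N and material constant B).
N, B = 20.0, 1.0e4

def fast_fracture_pf(sigma):
    """Probability of failure on initial loading (uniaxial, unit volume)."""
    return 1.0 - np.exp(-(sigma / sigma_0) ** m)

def time_dependent_pf(sigma, t):
    """Pf after sustained stress sigma for time t: the inert strength needed to
    survive is inflated by slow crack growth (power-law static fatigue model)."""
    sigma_eq = (sigma ** N * t / B + sigma ** (N - 2.0)) ** (1.0 / (N - 2.0))
    return 1.0 - np.exp(-(sigma_eq / sigma_0) ** m)

for t in (1.0, 1e3, 1e6):   # hold times in seconds
    print(f"t = {t:>9.0f} s  Pf = {time_dependent_pf(200.0, t):.4f}")
print("fast fracture Pf:", round(float(fast_fracture_pf(200.0)), 4))
```

Sweeping this over stress levels and times gives the raw points of an SPT-style diagram under the stated assumptions.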
Independent Orbiter Assessment (IOA): Analysis of the crew equipment subsystem
NASA Technical Reports Server (NTRS)
Sinclair, Susan; Graham, L.; Richard, Bill; Saxon, H.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results corresponding to the Orbiter crew equipment hardware are documented. The IOA analysis process utilized available crew equipment hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 352 failure modes analyzed, 78 were determined to be PCIs.
Independent Orbiter Assessment (IOA): Analysis of the pyrotechnics subsystem
NASA Technical Reports Server (NTRS)
Robinson, W. W.
1988-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Pyrotechnics hardware. The IOA analysis process utilized available pyrotechnics hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.
NASA Technical Reports Server (NTRS)
Patton, Jeff A.
1986-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C)/Electrical Power Generation (EPG) hardware. The EPD and C/EPG hardware is required for performing critical functions of cryogenic reactant storage, electrical power generation and product water distribution in the Orbiter. Specifically, the EPD and C/EPG hardware consists of the following components: Power Section Assembly (PSA); Reactant Control Subsystem (RCS); Thermal Control Subsystem (TCS); Water Removal Subsystem (WRS); and Power Reactant Storage and Distribution System (PRSDS). The IOA analysis process utilized available EPD and C/EPG hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.
Analysis of whisker-toughened CMC structural components using an interactive reliability model
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Palko, Joseph L.
1992-01-01
Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic Willam-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented into a test-bed software program. This computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.
Ferrographic and spectrometer oil analysis from a failed gas turbine engine
NASA Technical Reports Server (NTRS)
Jones, W. R., Jr.
1983-01-01
An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. It was concluded that a severe surge may have caused interference between rotating and stationary compressor parts that either directly or indirectly ignited the titanium components. Several engine oil samples (before and after the failure) were analyzed with a Ferrograph, and with plasma, atomic absorption, and emission spectrometers to see if this information would aid in the engine failure diagnosis. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism nor a high level of wear debris was detected in the engine oil sample taken just prior to the test in which the failure occurred. However, low concentrations (0.2 to 0.5 ppm) of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations (2 ppm) were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure. The oil analyses eliminated a lubrication system bearing or shaft seal failure as the cause of the engine failure. Previously announced in STAR as N83-12433
RSA prediction of high failure rate for the uncoated Interax TKA confirmed by meta-analysis.
Pijls, Bart G; Nieuwenhuijse, Marc J; Schoones, Jan W; Middeldorp, Saskia; Valstar, Edward R; Nelissen, Rob G H H
2012-04-01
In a previous radiostereometric (RSA) trial, the uncoated, uncemented Interax tibial components showed excessive migration within 2 years compared to HA-coated and cemented tibial components. It was predicted that this type of fixation would have a high failure rate. The purpose of this systematic review and meta-analysis was to investigate whether this RSA prediction was correct. We performed a systematic review and meta-analysis to determine the revision rate for aseptic loosening of the uncoated and cemented Interax tibial components. 3 studies were included, involving 349 Interax total knee arthroplasties (TKAs) for the comparison of uncoated and cemented fixation. There were 30 revisions: 27 uncoated and 3 cemented components. There was a 3-times higher revision rate for the uncoated Interax components than for the cemented Interax components (OR = 3; 95% CI: 1.4-7.2). This meta-analysis confirms the prediction of a previous RSA trial. The uncoated Interax components showed the highest migration and turned out to have the highest revision rate for aseptic loosening. RSA appears to enable efficient detection of an inferior design as early as 2 years postoperatively in a small group of patients.
Service Life Extension of the Propulsion System of Long-Term Manned Orbital Stations
NASA Technical Reports Server (NTRS)
Kamath, Ulhas; Kuznetsov, Sergei; Spencer, Victor
2014-01-01
One of the critical non-replaceable systems of a long-term manned orbital station is the propulsion system. Since the propulsion system operates beginning with the launch of station elements into orbit, its service life determines the service life of the station overall. Weighing almost a million pounds, the International Space Station (ISS) is about four times as large as the Russian space station Mir and about five times as large as the U.S. Skylab. Constructed over a span of more than a decade with the help of over 100 space flights, elements and modules of the ISS provide more research space than any spacecraft ever built. Originally envisaged for a service life of fifteen years, this Earth orbiting laboratory has been in orbit since 1998. Some elements that were launched later in the assembly sequence were not yet built when the first elements were placed in orbit. Hence, some of the early modules that were launched at the inception of the program were already nearing the end of their design life when the ISS was finally ready and operational. To maximize the return on global investments in the ISS, it is essential for the valuable research on ISS to continue as long as the station can be sustained safely in orbit. This paper describes the work performed to extend the service life of the ISS propulsion system. A system comprises many components with varying failure rates. Reliability of a system is the probability that it will perform its intended function under encountered operating conditions, for a specified period of time. As we are interested in finding out how reliable a system would be in the future, reliability expressed as a function of time provides valuable insight. In a hypothetical bathtub-shaped failure rate curve, the failure rate, defined as the number of failures per unit time that a currently healthy component will suffer in a given future time interval, decreases during the infant-mortality period, stays nearly constant during the service life, and increases at the end when the design service life ends and the wear-out phase begins. However, the component failure rates do not remain constant over the entire cycle life. The failure rate depends on various factors such as design complexity, current age of the component, operating conditions, severity of environmental stress factors, etc. Development, qualification and acceptance test processes provide rigorous screening of components to weed out imperfections that might otherwise cause infant mortality failures. If sufficient samples are tested to failure, the failure time versus failure quantity can be analyzed statistically to develop a failure probability distribution function (PDF), a statistical model of the probability of failure versus time. Driven by cost and schedule constraints, however, spacecraft components are generally not tested in large numbers. Uncertainties in failure rate and remaining life estimates increase when fewer units are tested. To account for this, spacecraft operators prefer to limit useful operations to a period shorter than the maximum demonstrated service life of the weakest component. Running each component to its failure to determine the maximum possible service life of a system can become overly expensive and impractical. Spacecraft operators, therefore, specify the required service life and an acceptable factor of safety (FOS). The designers use these requirements to limit the life test duration.
Midway through the design life, when benefits justify additional investments, a supplementary life test may be performed to demonstrate the capability to safely extend the service life of the system. An innovative approach is required to evaluate the entire system without having to go through an elaborate test program of propulsion system elements. Evaluating every component through a brute force test program would be a cost prohibitive and time consuming endeavor. ISS propulsion system components were designed and built decades ago. There are no representative ground test articles for some of the components. A 'test everything' approach would require manufacturing new test articles. The paper outlines some of the techniques used for selective testing, by way of cherry picking candidate components based on failure mode effects analysis, system level impacts, hazard analysis, etc. The type of testing required for extending the service life depends on the design and criticality of the component, failure modes and failure mechanisms, life cycle margin provided by the original certification, operational and environmental stresses encountered, etc. When the specific failure mechanism being considered and the underlying relationship of that mode to the stresses applied in the test can be correlated by supporting analysis, the time and effort required for conducting life extension testing can be significantly reduced. Exposure to corrosive propellants over long periods of time, for instance, leads to specific failure mechanisms in several components used in the propulsion system. Using the Arrhenius model, which is tied to chemically dependent failure mechanisms such as corrosion or chemical reactions, it is possible to subject carefully selected test articles to accelerated life testing. The Arrhenius model reflects the proportional relationship between the time to failure of a component and the exponential of the inverse of the absolute temperature acting on the component. The acceleration factor is used to perform tests at higher stresses that allow direct correlation between the times to failure at a high test temperature and the temperatures to be expected in actual use. As long as the temperatures are such that new failure mechanisms are not introduced, this becomes a very useful method for testing to failure a relatively small sample of items in a much shorter amount of time. In this article, based on the example of the propulsion system of the first ISS module Zarya, theoretical approaches and practical activities for extending the service life of the propulsion system are reviewed with the goal of determining the maximum duration of its safe operation.
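A small sketch of the Arrhenius acceleration-factor arithmetic described above follows; the activation energy, temperatures, and test duration are placeholders, not the Zarya propulsion values.

```python
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K

def arrhenius_af(ea_ev, t_use_c, t_test_c):
    """Acceleration factor AF = exp[(Ea/k) * (1/T_use - 1/T_test)],
    with temperatures converted from Celsius to kelvin."""
    t_use = t_use_c + 273.15
    t_test = t_test_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_use - 1.0 / t_test))

# Hypothetical example: corrosion-like mechanism with Ea = 0.7 eV,
# service at 25 C, accelerated test at 85 C.
af = arrhenius_af(0.7, 25.0, 85.0)
years_equivalent = 6 * af / 12.0   # a 6-month test covers this many service years
print(f"acceleration factor = {af:.1f}, 6-month test ~ {years_equivalent:.1f} years of service")
```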
The Local Wind Pump for Marginal Societies in Indonesia: A Perspective of Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Gunawan, Insan; Taufik, Ahmad
2007-10-01
There are many efforts to reduce the investment cost of well-established hybrid wind pumps applied to rural areas. A recent study on a local wind pump (LWP) for marginal societies in Indonesia (traditional farmers, peasants and tribes) was one of these efforts, reporting a new application area. The objectives of the study were to measure the reliability of the LWP under fluctuating wind intensity and low wind speed, to account for the economic point of view given a prolonged economic crisis and the availability of local components for the LWP, and to sustain the economic productivity (agricultural product) of the society. In the study, a fault tree analysis (FTA) was deployed as one of three methods used for assessing the LWP. In this article, the FTA is thoroughly discussed in order to improve the performance of the LWP applied in the dry land watering system of Mesuji district of Lampung province, Indonesia. In the early stage, all local components of the LWP were classified in terms of their function, yielding four groups of components. All of the sub-components of each group were then subjected to the failure modes of the FTA, namely (1) primary failure modes, (2) secondary failure modes, and (3) common failure modes. In the data processing stage, an available software package, ITEM, was deployed. It was observed that the components attained a relatively long operational life of 1,666 hours. Moreover, to enhance the performance of the LWP, the maintenance schedule, the critical sub-components suffering from failure, and an overhaul priority have been identified in quantitative terms. From the one-year pilot project, it can be concluded that the LWP is a reliable product for the societies, enhancing their economic productivity.
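As a toy illustration of how a fault tree's AND/OR gates combine basic-event probabilities (assuming independent events), consider the sketch below; the tree structure and probabilities are invented and do not represent the LWP study or the ITEM software.

```python
from functools import reduce

def or_gate(*probs):
    """Top-event probability for an OR gate over independent basic events."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

def and_gate(*probs):
    """Top-event probability for an AND gate over independent basic events."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

# Hypothetical wind-pump fault tree:
#   pump stops = rotor failure OR (transmission wear AND missed maintenance)
p_rotor = 1e-3
p_trans_wear = 5e-2
p_no_maint = 2e-1
p_top = or_gate(p_rotor, and_gate(p_trans_wear, p_no_maint))
print(f"top event probability = {p_top:.4f}")
```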
Independent Orbiter Assessment (IOA): Analysis of the body flap subsystem
NASA Technical Reports Server (NTRS)
Wilson, R. E.; Riccio, J. R.
1986-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Body Flap (BF) subsystem hardware are documented. The BF is a large aerosurface located at the trailing edge of the lower aft fuselage of the Orbiter. The proper function of the BF is essential during the dynamic flight phases of ascent and entry. During the ascent phase of flight, the BF trails in a fixed position. For entry, the BF provides elevon load relief, trim control, and acts as a heat shield for the main engines. Specifically, the BF hardware comprises the following components: Power Drive Unit (PDU), rotary actuators, and torque tubes. The IOA analysis process utilized available BF hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 35 failure modes analyzed, 19 were determined to be PCIs.
A model for the progressive failure of laminated composite structural components
NASA Technical Reports Server (NTRS)
Allen, D. H.; Lo, D. C.
1991-01-01
Laminated continuous fiber polymeric composites are capable of sustaining substantial load induced microstructural damage prior to component failure. Because this damage eventually leads to catastrophic failure, it is essential to capture the mechanics of progressive damage in any cogent life prediction model. For the past several years the authors have been developing one solution approach to this problem. In this approach the mechanics of matrix cracking and delamination are accounted for via locally averaged internal variables which account for the kinematics of microcracking. Damage progression is predicted by using phenomenologically based damage evolution laws which depend on the load history. The result is a nonlinear and path dependent constitutive model which has previously been implemented to a finite element computer code for analysis of structural components. Using an appropriate failure model, this algorithm can be used to predict component life. In this paper the model will be utilized to demonstrate the ability to predict the load path dependence of the damage and stresses in plates subjected to fatigue loading.
Independent Orbiter Assessment (IOA): Analysis of the DPS subsystem
NASA Technical Reports Server (NTRS)
Lowery, H. J.; Haufler, W. A.; Pietz, K. C.
1986-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis/Critical Items List (FMEA/CIL) are presented. The IOA approach features a top-down analysis of the hardware to independently determine failure modes, criticality, and potential critical items. The independent analysis results corresponding to the Orbiter Data Processing System (DPS) hardware are documented. The DPS hardware is required for performing critical functions of data acquisition, data manipulation, data display, and data transfer throughout the Orbiter. Specifically, the DPS hardware consists of the following components: Multiplexer/Demultiplexer (MDM); General Purpose Computer (GPC); Multifunction CRT Display System (MCDS); Data Buses and Data Bus Couplers (DBC); Data Bus Isolation Amplifiers (DBIA); Mass Memory Unit (MMU); and Engine Interface Unit (EIU). The IOA analysis process utilized available DPS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Due to the extensive redundancy built into the DPS, the number of critical items is small. Those identified resulted from premature operation and erroneous output of the GPCs.
Independent Orbiter Assessment (IOA): Analysis of the communication and tracking subsystem
NASA Technical Reports Server (NTRS)
Gardner, J. R.; Robinson, W. M.; Trahan, W. H.; Daley, E. S.; Long, W. C.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Communication and Tracking hardware. The IOA analysis process utilized available Communication and Tracking hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.
Parts and Components Reliability Assessment: A Cost Effective Approach
NASA Technical Reports Server (NTRS)
Lee, Lydia
2009-01-01
System reliability assessment is a methodology which incorporates reliability analyses performed at the parts and components level, such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA), and Fault Tree Analysis (FTA), to assess risks, perform design tradeoffs, and therefore ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standards-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published by United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages, when the system design is still in development and hard failure data is not yet available or manufacturers are not contractually obliged by their customers to publish the reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success in an efficient manner, at low cost, and on a tight schedule.
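A minimal sketch of a parts-count style rollup follows: sum the (assumed constant) part failure rates of a series system and convert to mission reliability. The failure rates below are placeholders, not values from any handbook or standard.

```python
import math

# Hypothetical part failure rates in failures per million hours (FPMH).
parts_fpmh = {
    "microcontroller": 0.8,
    "dc_dc_converter": 1.5,
    "pressure_sensor": 2.1,
    "connector_set": 0.3,
}

lambda_system = sum(parts_fpmh.values()) * 1e-6          # failures per hour
mission_hours = 10_000.0
reliability = math.exp(-lambda_system * mission_hours)   # series system, exponential parts
mtbf_hours = 1.0 / lambda_system

print(f"system failure rate = {lambda_system:.2e} /h, MTBF = {mtbf_hours:,.0f} h")
print(f"reliability over {mission_hours:,.0f} h mission = {reliability:.4f}")
```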
Vibration detection of component health and operability
NASA Technical Reports Server (NTRS)
Baird, B. C.
1975-01-01
In order to prevent catastrophic failure and eliminate unnecessary periodic maintenance of the shuttle orbiter program environmental control system components, some means of detecting incipient failure in these components is required. The utilization of vibrational/acoustic phenomena as one of the principal physical parameters on which to base the design of this instrumentation was investigated. Baseline vibration/acoustic data were collected from three aircraft-type fans and two aircraft-type pumps over a frequency range from a few hertz to greater than 3000 kHz. The baseline data included spectrum analysis of the baseband vibration signal, spectrum analysis of the detected high frequency bandpass acoustic signal, and amplitude distribution of the high frequency bandpass acoustic signal. A total of eight bearing defects and two unbalancings were introduced into the five test items. All defects were detected by at least one of a set of vibration/acoustic parameters with a margin of at least 2:1 over the worst case baseline. The design of a portable instrument using this set of vibration/acoustic parameters for detecting incipient failures in environmental control system components is described.
Comparison between four dissimilar solar panel configurations
NASA Astrophysics Data System (ADS)
Suleiman, K.; Ali, U. A.; Yusuf, Ibrahim; Koko, A. D.; Bala, S. I.
2017-12-01
Several studies on photovoltaic systems have focused on how they operate and the energy required to operate them. Little attention is paid to their configurations, modeling of mean time to system failure, availability, cost benefit analysis, and comparisons of parallel and series-parallel designs. In this research work, four system configurations were studied. Configuration I consists of two sub-components arranged in parallel with 24 V each, configuration II consists of four sub-components arranged logically in parallel with 12 V each, configuration III consists of four sub-components arranged in series-parallel with 8 V each, and configuration IV has six sub-components with 6 V each arranged in series-parallel. Comparative analysis was made using the Chapman-Kolmogorov method. Explicit expressions for the mean time to system failure, the steady-state availability, and the cost benefit measures were derived for the comparison. A ranking method was used to determine the optimal configuration of the systems. The results of analytical and numerical solutions for system availability and mean time to system failure were determined, and it was found that configuration I is the optimal configuration.
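A compact sketch of the Chapman-Kolmogorov style calculation for one of the simpler cases (two repairable sub-components in parallel, constant failure and repair rates) is given below; the rates and the single-repair-crew assumption are illustrative, not the paper's panel data.

```python
import numpy as np

lam, mu = 0.02, 0.5   # per-unit failure and repair rates (per hour, assumed)

# States: 0 = both units up, 1 = one unit up, 2 = system failed (both down).
Q = np.array([
    [-2 * lam,        2 * lam,   0.0],
    [      mu, -(mu + lam),      lam],
    [     0.0,          mu,      -mu],   # single repair crew restores one unit
])

# Steady-state availability: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
availability = pi[0] + pi[1]

# Mean time to system failure from the all-up state: treat state 2 as
# absorbing and solve Q_uu m = -1 over the up-states.
Q_uu = Q[:2, :2]
m = np.linalg.solve(Q_uu, -np.ones(2))

print(f"steady-state availability = {availability:.6f}")
print(f"MTSF from the all-up state = {m[0]:.1f} h")
```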
CARES/Life Software for Designing More Reliable Ceramic Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Powers, Lynn M.; Baker, Eric H.
1997-01-01
Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion, and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software eases this task by providing a tool to optimize the design and manufacture of brittle material components using probabilistic reliability analysis techniques. Probabilistic component design involves predicting the probability of failure for a thermomechanically loaded component from specimen rupture data. Typically, these experiments are performed using many simple-geometry flexural or tensile test specimens. A static, dynamic, or cyclic load is applied to each specimen until fracture. Statistical strength and SCG (fatigue) parameters are then determined from these data. Using these parameters and the results obtained from a finite element analysis, the time-dependent reliability for a complex component geometry and loading is then predicted. Appropriate design changes are made until an acceptable probability of failure has been reached.
Computer-aided operations engineering with integrated models of systems and operations
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Ryan, Dan; Fleming, Land
1994-01-01
CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.
Probabilistic structural analysis of aerospace components using NESSUS
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Nagpal, Vinod K.; Chamis, Christos C.
1988-01-01
Probabilistic structural analysis of a Space Shuttle main engine turbopump blade is conducted using the computer code NESSUS (numerical evaluation of stochastic structures under stress). The goal of the analysis is to derive probabilistic characteristics of blade response given probabilistic descriptions of uncertainties in blade geometry, material properties, and temperature and pressure distributions. Probability densities are derived for critical blade responses. Risk assessment and failure life analysis is conducted assuming different failure models.
Detailed analysis and test correlation of a stiffened composite wing panel
NASA Technical Reports Server (NTRS)
Davis, D. Dale, Jr.
1991-01-01
Nonlinear finite element analysis techniques are evaluated by applying them to a realistic aircraft structural component. A wing panel from the V-22 tiltrotor aircraft is chosen because it is a typical modern aircraft structural component for which there is experimental data for comparison of results. From blueprints and drawings supplied by the Bell Helicopter Textron Corporation, a very detailed finite element model containing 2284 9-node Assumed Natural-Coordinate Strain (ANS) elements was generated. A novel solution strategy which accounts for geometric nonlinearity through the use of corotating element reference frames and nonlinear strain-displacement relations is used to analyze this detailed model. Results from linear analyses using the same finite element model are presented in order to illustrate the advantages and costs of the nonlinear analysis as compared with the more traditional linear analysis. Strain predictions from both the linear and nonlinear stress analyses are shown to compare well with experimental data up through the Design Ultimate Load (DUL) of the panel. However, due to the extreme nonlinear response of the panel, the linear analysis was not accurate at loads above the DUL. The nonlinear analysis more accurately predicted the strain at high values of applied load, and even predicted complicated nonlinear response characteristics, such as load reversals, at the observed failure load of the test panel. In order to understand the failure mechanism of the panel, buckling and first ply failure analyses were performed. The buckling load was 17 percent above the observed failure load while first ply failure analyses indicated significant material damage at and below the observed failure load.
Morphological features (defects) in fuel cell membrane electrode assemblies
NASA Astrophysics Data System (ADS)
Kundu, S.; Fowler, M. W.; Simon, L. C.; Grot, S.
Reliability and durability issues in fuel cells are becoming more important as the technology and the industry mature. Although research in this area has increased, systematic failure analyses, such as a failure modes and effects analysis (FMEA), are very limited in the literature. This paper presents a categorization scheme of causes, modes, and effects related to fuel cell degradation and failure, with particular focus on the role of component quality, that can be used in FMEAs for polymer electrolyte membrane (PEM) fuel cells. The work also identifies component defects imparted on catalyst-coated membranes (CCM) by manufacturing and proposes mechanisms by which they can influence overall degradation and reliability. Six major defects have been identified on fresh CCM materials, i.e., cracks, orientation, delamination, electrolyte clusters, platinum clusters, and thickness variations.
NASA Astrophysics Data System (ADS)
Chen, Si; Jiang, Hailong; Cao, Yan; Wang, Yun; Hu, Ziheng; Zhu, Zhenyu; Chai, Yifeng
2016-04-01
Identifying the molecular targets for the beneficial effects of active small-molecule compounds simultaneously is an important and currently unmet challenge. In this study, we first proposed network analysis integrating data from network pharmacology and metabolomics to identify targets of active components in sini decoction (SND) simultaneously against heart failure. To begin with, 48 potential active components in SND against heart failure were predicted by serum pharmacochemistry, text mining and similarity matching. Then, we employed network pharmacology, including text mining and molecular docking, to identify the potential targets of these components. The key enriched processes, pathways and related diseases of these target proteins were analyzed with the STRING database. At last, network analysis was conducted to identify the most probable targets of components in SND. Among the 25 targets predicted by network analysis, tumor necrosis factor α (TNF-α) was first experimentally validated at the molecular and cellular levels. Results indicated that hypaconitine, mesaconitine, higenamine and quercetin in SND can directly bind to TNF-α, reduce the TNF-α-mediated cytotoxicity on L929 cells and exert anti-myocardial cell apoptosis effects. We envisage that network analysis will also be useful in target identification of a bioactive compound.
Chen, Si; Jiang, Hailong; Cao, Yan; Wang, Yun; Hu, Ziheng; Zhu, Zhenyu; Chai, Yifeng
2016-01-01
Identifying the molecular targets for the beneficial effects of active small-molecule compounds simultaneously is an important and currently unmet challenge. In this study, we first proposed network analysis integrating data from network pharmacology and metabolomics to identify targets of active components in sini decoction (SND) simultaneously against heart failure. To begin with, 48 potential active components in SND against heart failure were predicted by serum pharmacochemistry, text mining and similarity matching. Then, we employed network pharmacology, including text mining and molecular docking, to identify the potential targets of these components. The key enriched processes, pathways and related diseases of these target proteins were analyzed with the STRING database. At last, network analysis was conducted to identify the most probable targets of components in SND. Among the 25 targets predicted by network analysis, tumor necrosis factor α (TNF-α) was first experimentally validated at the molecular and cellular levels. Results indicated that hypaconitine, mesaconitine, higenamine and quercetin in SND can directly bind to TNF-α, reduce the TNF-α-mediated cytotoxicity on L929 cells and exert anti-myocardial cell apoptosis effects. We envisage that network analysis will also be useful in target identification of a bioactive compound. PMID:27095146
Comparative analysis on flexibility requirements of typical Cryogenic Transfer lines
NASA Astrophysics Data System (ADS)
Jadon, Mohit; Kumar, Uday; Choukekar, Ketan; Shah, Nitin; Sarkar, Biswanath
2017-04-01
Cryogenic systems and their applications, primarily in large fusion devices, utilize multiple cryogen transfer lines of various sizes and complexities to transfer cryogenic fluids from the plant to the various users/applications. These transfer lines are composed of various critical sections, i.e. tee sections, elbows, flexible components, etc. The mechanical sustainability (under failure circumstances) of these transfer lines is a primary requirement for safe operation of the system and applications. The transfer lines need to be designed for multiple design constraints such as line layout, support locations and space restrictions. The transfer lines are subjected to single loads and multiple load combinations, such as operational loads, seismic loads, loads from a leak in the insulation vacuum, etc. [1]. Analytical calculations and flexibility analysis using professional software were performed for a typical transfer line without any flexible components, and the results were analysed for functional and mechanical load conditions. The failure modes were identified along the critical sections. The same transfer line was then refurbished with flexible components and analysed for failure modes. The flexible components provide additional flexibility to the transfer line system and make it safe. The results obtained from the analytical calculations were compared with those obtained from the flexibility analysis software. The optimization of the flexible components' size and selection was performed, and components were selected to meet the design requirements as per code.
Correlation study between vibrational environmental and failure rates of civil helicopter components
NASA Technical Reports Server (NTRS)
Alaniz, O.
1979-01-01
An investigation of two selected helicopter types, namely the Models 206A/B and 212, is reported. An analysis of the available vibration and reliability data for these two helicopter types resulted in the selection of ten components located in five different areas of the helicopter and consisting primarily of instruments, electrical components, and other noncritical flight hardware. The potential for advanced technology in suppressing vibration in helicopters was assessed. There are still several unknowns concerning both the vibration environment and the reliability of helicopter noncritical flight components. Vibration data for the selected components were either insufficient or inappropriate. The maintenance data examined for the selected components were inappropriate due to variations in failure mode identification, inconsistent reporting, or inaccurate information.
Risk Analysis using Corrosion Rate Parameter on Gas Transmission Pipeline
NASA Astrophysics Data System (ADS)
Sasikirono, B.; Kim, S. J.; Haryadi, G. D.; Huda, A.
2017-05-01
In the oil and gas industry, the pipeline is a major component in the transmission and distribution process of oil and gas. The oil and gas distribution process is sometimes performed through pipelines crossing various types of environmental conditions. Therefore, in the transmission and distribution process of oil and gas, a pipeline should operate safely so that it does not harm the surrounding environment. Corrosion is still a major cause of failure in some components of the equipment in a production facility. In pipeline systems, corrosion can cause failures in the wall and damage to the pipeline. Periodic inspections and checks of the pipeline system are therefore required. Every production facility in an industry has a level of risk for damage, which is a result of the likelihood and consequences of the damage caused. The purpose of this research is to analyze the risk level of a 20-inch natural gas transmission pipeline using semi-quantitative risk-based inspection based on API 581, associated with the likelihood of failure and the consequences of failure of a component of the equipment. The result is then used to determine the next inspection plan. Nine pipeline components were observed, such as straight pipe inlets, connection tees, and straight pipe outlets. The risk assessment of the nine pipeline components is presented in a risk matrix; the components were found to be at medium risk levels. The failure mechanism considered in this research is thinning. Based on the corrosion rate calculation, the remaining age of each pipeline component can be obtained, so the remaining lifetime of the pipeline components is known; the results vary for each component. The next step is planning the inspection of pipeline components by external NDT methods.
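A small sketch of the thinning remaining-life arithmetic used in such assessments follows; the inputs are simplified relative to API 581, and all wall-thickness, rate, and interval values are hypothetical rather than the 20-inch line's data.

```python
def corrosion_rate(t_previous_mm, t_current_mm, years_between):
    """Observed corrosion rate from two wall-thickness measurements."""
    return (t_previous_mm - t_current_mm) / years_between

def remaining_life(t_current_mm, t_minimum_mm, rate_mm_per_year):
    """Years until the wall reaches its minimum required thickness."""
    return (t_current_mm - t_minimum_mm) / rate_mm_per_year

# Hypothetical straight-pipe component of a gas transmission line.
rate = corrosion_rate(t_previous_mm=12.7, t_current_mm=11.9, years_between=8.0)
life = remaining_life(t_current_mm=11.9, t_minimum_mm=9.5, rate_mm_per_year=rate)

# A half-remaining-life inspection interval is one simple scheduling rule.
print(f"corrosion rate = {rate:.3f} mm/yr, remaining life = {life:.1f} yr, "
      f"next inspection in <= {life / 2:.1f} yr")
```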
NASA Technical Reports Server (NTRS)
Phillips, D. T.; Manseur, B.; Foster, J. W.
1982-01-01
Alternate definitions of system failure lead to complex analyses for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
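In the spirit of such a simulation approach (not the GRASP code itself), here is a minimal Monte Carlo sketch that alternates Weibull-distributed failures with lognormal repairs for a single repairable component and estimates its availability; the distributions and parameters are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_availability(horizon_h, shape, scale, repair_mu, repair_sigma, n_runs=2000):
    """Estimate availability of one repairable component over a fixed horizon
    by alternating Weibull time-to-failure and lognormal time-to-repair draws."""
    up_fraction = np.empty(n_runs)
    for i in range(n_runs):
        t, up_time = 0.0, 0.0
        while t < horizon_h:
            ttf = scale * rng.weibull(shape)             # time to next failure
            up_time += min(ttf, horizon_h - t)
            t += ttf
            if t >= horizon_h:
                break
            t += rng.lognormal(repair_mu, repair_sigma)  # time to repair
        up_fraction[i] = up_time / horizon_h
    return up_fraction.mean()

# Hypothetical one-year horizon with assumed failure and repair parameters.
print(simulate_availability(horizon_h=8760.0, shape=1.5, scale=1200.0,
                            repair_mu=3.0, repair_sigma=0.5))
```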
Gutiérrez, Sergio; Greiwe, R Michael; Frankle, Mark A; Siegal, Steven; Lee, William E
2007-01-01
There has been renewed interest in reverse shoulder arthroplasty for the treatment of glenohumeral arthritis with concomitant rotator cuff deficiency. Failure of the prosthesis at the glenoid attachment site remains a concern. The purpose of this study was to examine glenoid component stability with regard to the angle of implantation. This investigation entailed a biomechanical analysis to evaluate forces and micromotion in glenoid components attached to 12 polyurethane blocks at -15 degrees, 0 degrees, and +15 degrees of superior and inferior tilt. The 15 degrees inferior tilt had the most uniform compressive forces and the least amount of tensile forces and micromotion when compared with the 0 degrees and 15 degrees superiorly tilted baseplate. Our results suggest that implantation with an inferior tilt will reduce the incidence of mechanical failure of the glenoid component in a reverse shoulder prosthesis.
Enhanced Component Performance Study. Emergency Diesel Generators 1998–2013
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2014-11-01
This report presents an enhanced performance evaluation of emergency diesel generators (EDGs) at U.S. commercial nuclear power plants. This report evaluates component performance over time using Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES) data from 1998 through 2013 and maintenance unavailability (UA) performance data using Mitigating Systems Performance Index (MSPI) Basis Document data from 2002 through 2013. The objective is to present an analysis of factors that could influence the system and component trends in addition to annual performance trends of failure rates and probabilities. The factors analyzed for the EDG component are the differences in failures between all demands and actual unplanned engineered safety feature (ESF) demands, differences among manufacturers, and differences among EDG ratings. Statistical analyses of these differences are performed, and the results show whether pooling is acceptable across these factors. In addition, engineering analyses were performed with respect to time period and failure mode. The factors analyzed are: sub-component, failure cause, detection method, recovery, manufacturer, and EDG rating.
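For readers unfamiliar with the kind of per-demand failure probability such a study trends, the sketch below computes a point estimate with a Jeffreys (Beta) uncertainty interval from counts of failures and demands, a convention commonly used for this type of component performance data. The counts are hypothetical and are not taken from the ICES data.

    from scipy import stats

    def failure_prob_per_demand(failures, demands, cred=0.90):
        """Jeffreys posterior mean and credible interval for a per-demand failure probability."""
        a, b = failures + 0.5, demands - failures + 0.5   # Jeffreys prior Beta(0.5, 0.5)
        mean = a / (a + b)
        lo, hi = stats.beta.ppf([(1 - cred) / 2, (1 + cred) / 2], a, b)
        return mean, lo, hi

    # Hypothetical counts for one year of EDG fail-to-start data.
    mean, lo, hi = failure_prob_per_demand(failures=3, demands=1200)
    print(f"p(fail to start) ~ {mean:.2e}  (90% interval {lo:.2e} to {hi:.2e})")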
NAC Off-Vehicle Brake Testing Project
2007-05-01
disc pads/rotors and drum shoe assemblies/drums - must use vehicle “OEM” brake/hub-end hardware, or ESA... brake component comparison analysis (primary)* - brake system design analysis - brake system component failure analysis - (*) limited to disc pads... e.g. disc pads/rotors, drum shoe assemblies/drums - not limited to “OEM” brake/hub-end hardware as there is none! - weight transfer, plumbing, ...
Ferrographic and spectrographic analysis of oil sampled before and after failure of a jet engine
NASA Technical Reports Server (NTRS)
Jones, W. R., Jr.
1980-01-01
An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. Several engine oil samples (before and after the failure) were analyzed with a Ferrograph as well as plasma, atomic absorption, and emission spectrometers. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism, nor a high level of wear debris was detected in the oil sample from the engine just prior to the test in which the failure occurred. However, low concentrations of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure.
Failure Modes and Effects Analysis (FMEA) Assistant Tool Feasibility Study
NASA Technical Reports Server (NTRS)
Flores, Melissa; Malin, Jane T.
2013-01-01
An effort to determine the feasibility of a software tool to assist in Failure Modes and Effects Analysis (FMEA) has been completed. This new and unique approach to FMEA uses model based systems engineering concepts to recommend failure modes, causes, and effects to the user after they have made several selections from pick lists about a component's functions and inputs/outputs. Recommendations are made based on a library using common failure modes identified over the course of several major human spaceflight programs. However, the tool could be adapted for use in a wide range of applications from NASA to the energy industry.
Tribology symposium 1995. PD-Volume 72
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masudi, H.
After the keynote presentation by Professor Aaron Cohen of Texas A and M University, entitled Processes Used in Design, the program is divided into five major sessions: Research and Development -- Recent research and development of tribological components; Tribology in Manufacturing -- The impact of tribology on modern manufacturing; Design/Design Representation -- Aspects of design related to tribological systems; Tribo-Chemistry/Tribo-Physics -- Discussion of chemical and physical behavior of substances as related to tribology; and Failure Analysis -- An analysis of failure, failure detection, and failure monitoring as related to manufacturing processes. Papers have been processed separately for inclusion on the data base.
Failure Modes and Effects Analysis (FMEA) Assistant Tool Feasibility Study
NASA Astrophysics Data System (ADS)
Flores, Melissa D.; Malin, Jane T.; Fleming, Land D.
2013-09-01
An effort to determine the feasibility of a software tool to assist in Failure Modes and Effects Analysis (FMEA) has been completed. This new and unique approach to FMEA uses model based systems engineering concepts to recommend failure modes, causes, and effects to the user after they have made several selections from pick lists about a component's functions and inputs/outputs. Recommendations are made based on a library using common failure modes identified over the course of several major human spaceflight programs. However, the tool could be adapted for use in a wide range of applications from NASA to the energy industry.
Guyen, Olivier; Lewallen, David G; Cabanela, Miguel E
2008-07-01
The Osteonics constrained tripolar implant has been one of the most commonly used options to manage recurrent instability after total hip arthroplasty. Mechanical failures were expected and have been reported. The purpose of this retrospective review was to identify the observed modes of failure of this device. Forty-three failed Osteonics constrained tripolar implants were revised at our institution between September 1997 and April 2005. All revisions related to the constrained acetabular component only were considered as failures. All of the devices had been inserted for recurrent or intraoperative instability during revision procedures. Seven different methods of implantation were used. Operative reports and radiographs were reviewed to identify the modes of failure. The average time to failure of the forty-three implants was 28.4 months. A total of five modes of failure were observed: failure at the bone-implant interface (type I), which occurred in eleven hips; failure at the mechanisms holding the constrained liner to the metal shell (type II), in six hips; failure of the retaining mechanism of the bipolar component (type III), in ten hips; dislocation of the prosthetic head at the inner bearing of the bipolar component (type IV), in three hips; and infection (type V), in twelve hips. The mode of failure remained unknown in one hip that had been revised at another institution. The Osteonics constrained tripolar total hip arthroplasty implant is a complex device involving many parts. We showed that failure of this device can occur at most of its interfaces. It would therefore appear logical to limit its application to salvage situations.
Analysis of Gas Turbine Engine Failure Modes.
1974-01-01
failure due to factors external (foreign) to the power plant. Because in practice it is virtually impossible to distinguish accurately between the two, all... [The rest of the extracted record is illegible table residue from Appendix E, "When Discovered": J-79 engine and high-failure components, including the compressor.]
System safety in Stirling engine development
NASA Technical Reports Server (NTRS)
Bankaitis, H.
1981-01-01
The DOE/NASA Stirling Engine Project Office has required that contractors make safety considerations an integral part of all phases of the Stirling engine development program. As an integral part of each engine design subtask, analyses are developed to determine possible modes of failure. The accepted system safety analysis techniques (fault tree, FMEA, hazards analysis, etc.) are applied to varying extents at the system, subsystem, and component levels. The primary objectives are to identify critical failure areas, to enable removal of susceptibility to such failures or their effects from the system, and to minimize risk.
NASA Technical Reports Server (NTRS)
1996-01-01
This Failure Modes and Effects Analysis (FMEA) is for the Advanced Microwave Sounding Unit-A (AMSU-A) instruments that are being designed and manufactured for the Meteorological Satellites Project (METSAT) and the Earth Observing System (EOS) integrated programs. The FMEA analyzes the design of the METSAT and EOS instruments as they currently exist. This FMEA is intended to identify METSAT and EOS failure modes and their effect on spacecraft-instrument and instrument-component interfaces. The prime objective of this FMEA is to identify potential catastrophic and critical failures so that susceptibility to the failures and their effects can be eliminated from the METSAT/EOS instruments.
2017-06-30
along the intermetallic component or at the interface between the two components of the composite. The availability of microscale experimental data in... obtained with the PD model; (c) map of strain energy density; (d) the new quasi-damage index is a predictor of failure. As in the case of FRCs, one... which points are most likely to fail, before actual failure happens. The "quasi-damage index", shown in the formula below, is a point-wise measure
Forensic applications of metallurgy - Failure analysis of metal screw and bolt products
NASA Astrophysics Data System (ADS)
Tiner, Nathan A.
1993-03-01
It is often necessary for engineering consultants in liability lawsuits to consider whether a component has a manufacturing and/or design defect, as judged by industry standards, as well as whether the component was strong enough to resist service loads. Attention is presently given to the principles that must be appealed to in order to clarify these two issues in the cases of metal screw and bolt failures, which are subject to fatigue and brittle fractures and ductile dimple rupture.
Systematic Destruction of Electronic Parts for Aid in Electronic Failure Analysis
NASA Technical Reports Server (NTRS)
Decker, S. E.; Rolin, T. D.; McManus, P. D.
2012-01-01
NASA analyzes electrical, electronic, and electromechanical (EEE) parts used in space vehicles to understand failure modes of these components. Operational amplifiers and transistors are two examples of EEE parts critical to NASA missions that can fail due to electrical overstress (EOS). EOS is the result of voltage or current over time conditions that exceed a component's specification limit. The objective of this study was to provide known voltage pulses over well-defined time intervals to determine the type and extent of damage imparted to the device. The amount of current was not controlled but measured so that pulse energy was determined. The damage was ascertained electrically using curve trace plots and optically using various metallographic techniques. The resulting data can be used to build a database of physical evidence to compare to damaged components removed from flight avionics. The comparison will provide the avionics failure analyst necessary information about voltage and times that caused flight or test failures when no other electrical data is available.
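Since the study characterizes each overstress event by its pulse energy (a known voltage applied over a defined interval, with the resulting current measured), the calculation amounts to integrating instantaneous power over time. The sketch below shows that step on synthetic waveform samples; the voltage, current, and pulse width are assumed values, not data from the study.

    import numpy as np

    # Synthetic (assumed) digitized waveforms for a single overstress pulse.
    t = np.linspace(0.0, 1e-3, 1001)            # time base: 1 ms, seconds
    v = np.where(t < 0.5e-3, 12.0, 0.0)         # 12 V applied during the first 0.5 ms (assumed)
    i = np.where(t < 0.5e-3, 0.8, 0.0)          # 0.8 A measured during the pulse (assumed)

    # Pulse energy is the time integral of instantaneous power p(t) = v(t) * i(t);
    # a trapezoidal sum keeps the sketch independent of any particular library call.
    p = v * i
    energy_joules = float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t)))
    print(f"pulse energy = {energy_joules * 1e3:.2f} mJ")   # ~4.8 mJ for these assumed values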
Model-OA wind turbine generator - Failure modes and effects analysis
NASA Technical Reports Server (NTRS)
Klein, William E.; Lali, Vincent R.
1990-01-01
The results of a failure modes and effects analysis (FMEA) conducted for wind-turbine generators are presented. The FMEA was performed for the functional modes of each system, subsystem, or component. Single-point failures were eliminated for most of the systems; the blade system was the only exception. The qualitative probability of a blade separating was estimated at level D-remote. Many changes were made to the hardware as a result of this analysis. The most significant change was the addition of the safety system. Operational experience and the need to improve machine availability have resulted in subsequent changes to the various systems, which are also reflected in this FMEA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burdick, G.R.; Wilson, J.R.
COMCAN2A and COMCAN are designed to analyze complex systems such as nuclear plants for common causes of failure. A common cause event, or common mode failure, is a secondary cause that could contribute to the failure of more than one component and violates the assumption of independence. Analysis of such events is an integral part of system reliability and safety analysis. A significant common cause event is a secondary cause common to all basic events in one or more minimal cut sets. Minimal cut sets containing events from components sharing a common location or a common link are called common cause candidates. Components share a common location if no barrier insulates any one of them from the secondary cause. A common link is a dependency among components which cannot be removed by a physical barrier (e.g., a common energy source or common maintenance instructions). IBM360; CDC CYBER176, 175; FORTRAN IV (30%) and BAL (70%) (IBM360), FORTRAN IV (97%) and COMPASS (3%) (CDC CYBER176); OS/360 (IBM360) and NOS/BE 1.4 (CDC CYBER176), NOS 1.3 (CDC CYBER175); 140K bytes of memory for COMCAN and 242K (octal) words of memory for COMCAN2A.
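The screening rule described in the record above, namely that a minimal cut set is a common cause candidate if its basic events share a location or a link, can be expressed directly in a few lines. The cut sets, locations, and links below are invented for illustration and do not come from COMCAN.

    # Illustrative common-cause screening in the spirit of COMCAN: a minimal cut
    # set is a candidate if all of its basic events share a location or a link.
    minimal_cut_sets = [{"PUMP-A", "PUMP-B"}, {"VALVE-1", "PUMP-B"}]   # assumed
    location = {"PUMP-A": "room-101", "PUMP-B": "room-101", "VALVE-1": "room-205"}
    links = {"PUMP-A": {"bus-1", "maint-proc-7"},
             "PUMP-B": {"bus-1", "maint-proc-7"},
             "VALVE-1": {"bus-2"}}

    def common_cause_candidates(cut_sets):
        candidates = []
        for cs in cut_sets:
            same_location = len({location[e] for e in cs}) == 1
            shared_links = set.intersection(*(links[e] for e in cs))
            if same_location or shared_links:
                candidates.append((cs, shared_links))
        return candidates

    for cs, shared in common_cause_candidates(minimal_cut_sets):
        print(sorted(cs), "-> shared links:", sorted(shared) or "none (common location only)")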
NASA Astrophysics Data System (ADS)
Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin
2015-03-01
Reliability allocation of computerized numerical control (CNC) lathes is very important in industry. Traditional allocation methods focus only on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. Aiming to solve the problem of CNC lathe reliability allocation, a comprehensive reliability allocation method based on cubic transformed functions of failure modes and effects analysis (FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed function are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as an example to verify the new allocation method. Seven criteria are considered to compare the results of the new method with traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
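The paper's actual cubic transformed function is not given in the abstract, so the sketch below only illustrates the general allocation step: FMEA severity and occurrence scores are passed through an assumed cubic transform, converted to weights, and used to apportion a system failure-rate target so that more critical components receive tighter allocations. Every number, the component names, and the transform itself are placeholders, not the published method.

    # Illustrative allocation step (not the paper's actual transform): convert
    # FMEA severity/occurrence scores into weights via an assumed cubic function,
    # then allocate a system failure-rate target in proportion to 1/weight
    # (components with harsher failure modes receive tighter targets).
    components = {"spindle": (8, 4), "turret": (6, 5), "tailstock": (3, 2)}  # (severity, occurrence), assumed
    system_failure_rate = 1.0e-4   # failures per hour, assumed series-system target

    def cubic_weight(severity, occurrence, s_max=10, o_max=10):
        return ((severity / s_max) * (occurrence / o_max)) ** 3

    weights = {name: cubic_weight(s, o) for name, (s, o) in components.items()}
    inverse = {name: 1.0 / w for name, w in weights.items()}
    total = sum(inverse.values())
    allocation = {name: system_failure_rate * v / total for name, v in inverse.items()}

    for name, lam in allocation.items():
        print(f"{name:10s} allocated failure rate = {lam:.2e} /h")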
Analysis for the Progressive Failure Response of Textile Composite Fuselage Frames
NASA Technical Reports Server (NTRS)
Johnson, Eric R.; Boitnott, Richard L. (Technical Monitor)
2002-01-01
A part of aviation accident mitigation is a crashworthy airframe structure, and an important measure of merit for a crashworthy structure is the amount of kinetic energy that can be absorbed in the crush of the structure. Prediction of the energy absorbed from finite element analyses requires modeling the progressive failure sequence. Progressive failure modes may include material degradation, fracture and crack growth, and buckling and collapse. The design of crashworthy airframe components will benefit from progressive failure analyses that have been validated by tests. The subject of this research is the development of a progressive failure analysis for a textile composite, circumferential fuselage frame subjected to a quasi-static, crash-type load. The test data for the frame are reported, and these data are used to develop and to validate methods for the progressive failure response.
NASA Technical Reports Server (NTRS)
Hatfield, Glen S.; Hark, Frank; Stott, James
2016-01-01
Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account risks attributable to manufacturing, assembly, and process controls. These sources often dominate component-level reliability or risk of failure probability. While the consequence of failure is often understood in assessing risk, using predicted values in a risk model to estimate the probability of occurrence will likely underestimate the risk. Managers and decision makers often use the probability of occurrence in determining whether to accept the risk or require a design modification. Due to the absence of system-level test and operational data inherent in aerospace applications, the actual risk threshold for acceptance may not be appropriately characterized for decision-making purposes. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.
Alwan, Faris M; Baharum, Adam; Hassan, Geehan S
2013-01-01
The reliability of the electrical distribution system is a contemporary research field due to diverse applications of electricity in everyday life and diverse industries; however, few research papers exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kilovolt high-power stations based on the average time between failures. The objective of this paper is to find the optimal fit for the failure data via the time between failures. We determine the parameter estimates for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best-fitting distribution for the time between failures is a three-parameter Dagum distribution with one scale parameter and two shape parameters. Our analysis reveals that the reliability value decreased by 38.2% over each 30-day period. We believe that the current paper is the first to address this issue and its analysis. Thus, the results obtained in this research reflect its originality. We also suggest the practicality of using these results for power systems, for both maintenance of power system models and preventive maintenance models.
Alwan, Faris M.; Baharum, Adam; Hassan, Geehan S.
2013-01-01
The reliability of the electrical distribution system is a contemporary research field due to diverse applications of electricity in everyday life and diverse industries; however, few research papers exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kilovolt high-power stations based on the average time between failures. The objective of this paper is to find the optimal fit for the failure data via the time between failures. We determine the parameter estimates for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best-fitting distribution for the time between failures is a three-parameter Dagum distribution with one scale parameter and two shape parameters. Our analysis reveals that the reliability value decreased by 38.2% over each 30-day period. We believe that the current paper is the first to address this issue and its analysis. Thus, the results obtained in this research reflect its originality. We also suggest the practicality of using these results for power systems, for both maintenance of power system models and preventive maintenance models. PMID:23936346
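To illustrate how a fitted three-parameter Dagum model yields the component reliability values discussed in the two records above, the sketch below implements the Dagum CDF, F(t) = (1 + (t/b)^(-a))^(-p), and evaluates R(t) = 1 - F(t). The parameter values are arbitrary placeholders rather than the estimates reported for the 33/11 kV station.

    import numpy as np

    def dagum_cdf(t, a, b, p):
        """Three-parameter Dagum CDF: shape a, scale b, shape p."""
        return (1.0 + (np.asarray(t, dtype=float) / b) ** (-a)) ** (-p)

    def reliability(t, a, b, p):
        """R(t) = 1 - F(t) for time-between-failures t."""
        return 1.0 - dagum_cdf(t, a, b, p)

    # Placeholder parameters (not the paper's estimates) and a 30/60/90-day check.
    a, b, p = 1.8, 400.0, 0.9          # b in hours, purely illustrative
    for days in (30, 60, 90):
        hours = days * 24
        print(f"R({days} d) = {reliability(hours, a, b, p):.3f}")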
NASA Technical Reports Server (NTRS)
Gyekenyesi, John P.; Nemeth, Noel N.
1987-01-01
The SCARE (Structural Ceramics Analysis and Reliability Evaluation) computer program on statistical fast fracture reliability analysis with quadratic elements for volume distributed imperfections is enhanced to include the use of linear finite elements and the capability of designing against concurrent surface flaw induced ceramic component failure. The SCARE code is presently coupled as a postprocessor to the MSC/NASTRAN general purpose, finite element analysis program. The improved version now includes the Weibull and Batdorf statistical failure theories for both surface and volume flaw based reliability analysis. The program uses the two-parameter Weibull fracture strength cumulative failure probability distribution model with the principle of independent action for poly-axial stress states, and Batdorf's shear-sensitive as well as shear-insensitive statistical theories. The shear-sensitive surface crack configurations include the Griffith crack and Griffith notch geometries, using the total critical coplanar strain energy release rate criterion to predict mixed-mode fracture. Weibull material parameters based on both surface and volume flaw induced fracture can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and grouped fracture data. The statistical fast fracture theories for surface flaw induced failure, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.
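Two ingredients named in the SCARE abstract can be sketched compactly: the two-parameter Weibull cumulative failure probability, Pf = 1 - exp(-(sigma/sigma0)^m), and a least-squares (median-rank) estimate of the Weibull modulus m and characteristic strength sigma0 from grouped rupture data. The modulus-of-rupture values below are invented, and the sketch omits the code's polyaxial, Batdorf, and surface/volume refinements.

    import numpy as np

    def weibull_pf(sigma, m, sigma0):
        """Two-parameter Weibull cumulative failure probability."""
        return 1.0 - np.exp(-(np.asarray(sigma, dtype=float) / sigma0) ** m)

    def fit_weibull_least_squares(strengths):
        """Median-rank linear regression of ln(ln(1/(1-F))) on ln(sigma)."""
        s = np.sort(np.asarray(strengths, dtype=float))
        n = s.size
        f = (np.arange(1, n + 1) - 0.3) / (n + 0.4)       # median-rank estimator
        x, y = np.log(s), np.log(-np.log(1.0 - f))
        m, c = np.polyfit(x, y, 1)                        # slope = Weibull modulus
        return m, np.exp(-c / m)                          # (m, sigma0)

    # Invented modulus-of-rupture data (MPa) standing in for bar-test results.
    mor = [312, 335, 348, 360, 371, 383, 395, 410, 428, 455]
    m, sigma0 = fit_weibull_least_squares(mor)
    print(f"m = {m:.2f}, sigma0 = {sigma0:.1f} MPa, Pf(350 MPa) = {weibull_pf(350, m, sigma0):.3f}")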
Assessment of reliability and safety of a manufacturing system with sequential failures is an important issue in industry, since the reliability and safety of the system depend not only on all failed states of system components, but also on the sequence of occurrences of those...
Evaluation of the split cantilever beam for Mode 3 delamination testing
NASA Technical Reports Server (NTRS)
Martin, Roderick H.
1989-01-01
A test rig for testing a thick split cantilever beam for scissoring delamination (mode 3) fracture toughness was developed. A 3-D finite element analysis was conducted on the test specimen to determine the strain energy release rate, G, distribution along the delamination front. The virtual crack closure technique was used to calculate the G components resulting from interlaminar tension, GI, interlaminar sliding shear, GII, and interlaminar tearing shear, GIII. The finite element analysis showed that at the delamination front no GI component existed, but a GII component was present in addition to a GIII component. Furthermore, near the free edges, the GII component was significantly higher than the GIII component. The GII/GIII ratio was found to increase with delamination length but was insensitive to the beam depth. The presence of GII at the delamination front was verified experimentally by examination of the failure surfaces. At the center of the beam, where the failure was in mode 3, there was significant fiber bridging. However, at the edges of the beam, where the failure was in mode 2, there was no fiber bridging and mode 2 shear hackles were observed. Therefore, it was concluded that the split cantilever beam configuration does not represent a pure mode 3 test. The experimental work showed that the mode 2 fracture toughness, GIIc, must be less than the mode 3 fracture toughness, GIIIc. Therefore, a conservative approach to characterizing mode 3 delamination is to equate GIIIc to GIIc.
Failure Analysis for Composition of Web Services Represented as Labeled Transition Systems
NASA Astrophysics Data System (ADS)
Nadkarni, Dinanath; Basu, Samik; Honavar, Vasant; Lutz, Robyn
The Web service composition problem involves the creation of a choreographer that provides the interaction between a set of component services to realize a goal service. Several methods have been proposed and developed to address this problem. In this paper, we consider those scenarios where the composition process may fail due to incomplete specification of goal service requirements or due to the fact that the user is unaware of the functionality provided by the existing component services. In such cases, it is desirable to have a composition algorithm that can provide feedback to the user regarding the cause of failure in the composition process. Such feedback will help guide the user to re-formulate the goal service and iterate the composition process. We propose a failure analysis technique for composition algorithms that views Web service behavior as multiple sequences of input/output events. Our technique identifies the possible cause of composition failure and suggests possible recovery options to the user. We discuss our technique using a simple e-Library Web service in the context of the MoSCoE Web service composition framework.
NASA Technical Reports Server (NTRS)
Hatfield, Glen S.; Hark, Frank; Stott, James
2016-01-01
Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account system integration risks such as those attributable to manufacturing and assembly. These sources often dominate component-level risk. While the consequence of failure is often understood, using predicted values in a risk model to estimate the probability of occurrence may underestimate the actual risk. Managers and decision makers use the probability of occurrence to influence the determination of whether to accept the risk or require a design modification. The actual risk threshold for acceptance may not be fully understood due to the absence of system-level test data or operational data. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.
Failure Analysis in Platelet Molded Composite Systems
NASA Astrophysics Data System (ADS)
Kravchenko, Sergii G.
Long-fiber discontinuous composite systems in the form of chopped prepreg tapes provide an advanced, structural-grade molding compound allowing for fabrication of complex three-dimensional components. Understanding the process-structure-property relationship is essential for application of prepreg platelet molded components, especially because of their possible irregular, disordered, heterogeneous morphology. Herein, a structure-property relationship was analyzed in composite systems of many platelets. Regular and irregular morphologies were considered. Platelet-based systems with more ordered morphology possess superior mechanical performance. While regular morphologies allow for a careful inspection of failure mechanisms derived from the morphological characteristics, irregular morphologies are representative of the composite architectures resulting from uncontrolled deposition and molding with chopped prepregs. Progressive failure analysis (PFA) was used to study the damaged deformation up to ultimate failure in a platelet-based composite system. Computational damage mechanics approaches were utilized to conduct the PFA. The developed computational models granted understanding of how the composite structure details, meaning the platelet geometry and system morphology (geometrical arrangement and orientation distribution of platelets), define the effective mechanical properties of a platelet-molded composite system, its stiffness, strength, and variability in properties.
NASA Technical Reports Server (NTRS)
Duffy, S. F.; Hu, J.; Hopkins, D. A.
1995-01-01
The article begins by examining the fundamentals of traditional deterministic design philosophy. The initial section outlines the concepts of failure criteria and limit state functions, two traditional notions that are embedded in deterministic design philosophy. This is followed by a discussion regarding safety factors (a possible limit state function) and the common utilization of statistical concepts in deterministic engineering design approaches. Next, the fundamental aspects of a probabilistic failure analysis are explored, and it is shown that the deterministic design concepts mentioned in the initial portion of the article are embedded in probabilistic design methods. For components fabricated from ceramic materials (and other similarly brittle materials) the probabilistic design approach yields the widely used Weibull analysis after suitable assumptions are incorporated. The authors point out that Weibull analysis provides the rare instance where closed form solutions are available for a probabilistic failure analysis. Since numerical methods are usually required to evaluate component reliabilities, a section on Monte Carlo methods is included to introduce the concept. The article concludes with a presentation of the technical aspects that support the numerical method known as fast probability integration (FPI). This includes a discussion of the Hasofer-Lind and Rackwitz-Fiessler approximations.
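Because the article contrasts the closed-form Weibull result with numerical evaluation, the sketch below checks a Monte Carlo estimate of failure probability against the closed-form expression for a single brittle component under a fixed applied stress. The modulus, characteristic strength, and applied stress are assumed values used only to make the comparison concrete.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative component: Weibull-distributed strength, fixed applied stress.
    m, sigma0 = 10.0, 500.0      # Weibull modulus and characteristic strength (MPa), assumed
    applied = 320.0              # applied stress (MPa), assumed

    # Closed-form probability of failure: strength falls below the applied stress.
    pf_closed = 1.0 - np.exp(-(applied / sigma0) ** m)

    # Monte Carlo: sample strengths and count failures (strength < applied stress).
    strengths = sigma0 * rng.weibull(m, size=1_000_000)
    pf_mc = np.mean(strengths < applied)

    print(f"closed form Pf = {pf_closed:.4e}, Monte Carlo Pf = {pf_mc:.4e}")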
Savannah River Site generic data base development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanton, C.H.; Eide, S.A.
This report describes the results of a project to improve the generic component failure data base for the Savannah River Site (SRS). A representative list of components and failure modes for SRS risk models was generated by reviewing existing safety analyses and component failure data bases and from suggestions from SRS safety analysts. Then sources of data or failure rate estimates were identified and reviewed for applicability. A major source of information was the Nuclear Computerized Library for Assessing Reactor Reliability, or NUCLARR. This source includes an extensive collection of failure data and failure rate estimates for commercial nuclear power plants. A recent Idaho National Engineering Laboratory report on failure data from the Idaho Chemical Processing Plant was also reviewed. From these and other recent sources, failure data and failure rate estimates were collected for the components and failure modes of interest. This information was aggregated to obtain a recommended generic failure rate distribution (mean and error factor) for each component failure mode.
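The "mean and error factor" pair recommended for each failure mode corresponds to a lognormal description of the failure rate. The sketch below recovers the underlying lognormal parameters and the 5th/95th percentiles from such a pair, using the common convention that the error factor is the ratio of the 95th percentile to the median; the numbers are invented and the convention is assumed rather than quoted from the SRS report.

    import math

    def lognormal_from_mean_and_ef(mean, error_factor):
        """Recover lognormal (mu, sigma) from a mean and an error factor,
        taking the error factor as the 95th percentile divided by the median."""
        sigma = math.log(error_factor) / 1.645          # 1.645 = standard normal 95th percentile
        mu = math.log(mean) - 0.5 * sigma ** 2
        return mu, sigma

    # Invented generic entry: a pump fails to start, mean 3e-3 per demand, error factor 10.
    ef = 10.0
    mu, sigma = lognormal_from_mean_and_ef(mean=3.0e-3, error_factor=ef)
    median = math.exp(mu)
    print(f"median = {median:.2e}, 5th = {median / ef:.2e}, 95th = {median * ef:.2e}")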
Hainsworth, S V; Fitzpatrick, M E
2007-06-01
Forensic engineering is the application of engineering principles or techniques to the investigation of materials, products, structures or components that fail or do not perform as intended. In particular, forensic engineering can involve providing solutions to forensic problems by the application of engineering science. A criminal aspect may be involved in the investigation but often the problems are related to negligence, breach of contract, or providing information needed in the redesign of a product to eliminate future failures. Forensic engineering may include the investigation of the physical causes of accidents or other sources of claims and litigation (for example, patent disputes). It involves the preparation of technical engineering reports, and may require giving testimony and providing advice to assist in the resolution of disputes affecting life or property. This paper reviews the principal methods available for the analysis of failed components and then gives examples of different component failure modes through selected case studies.
Analysis for the Progressive Failure Response of Textile Composite Fuselage Frames
NASA Technical Reports Server (NTRS)
Johnson, Eric R.; Boitnott, Richard L. (Technical Monitor)
2002-01-01
A part of aviation accident mitigation is a crashworthy airframe structure, and an important measure of merit for a crashworthy structure is the amount of kinetic energy that can be absorbed in the crush of the structure. Prediction of the energy absorbed from finite element analyses requires modeling the progressive failure sequence. Progressive failure modes may include material degradation, fracture and crack growth, and buckling and collapse. The design of crashworthy airframe components will benefit from progressive failure analyses that have been validated by tests. The subject of this research is the development of a progressive failure analysis for textile composite, circumferential fuselage frames subjected to a quasi-static, crash-type load. The test data for these frames are reported, and these data, along with stub column test data, are to be used to develop and to validate methods for the progressive failure response.
NASA Technical Reports Server (NTRS)
Saiidi, M. J.; Duffy, R. E.; Mclaughlin, T. D.
1986-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis/Critical Items List (FMEA/CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results corresponding to the Orbiter Atmospheric Revitalization and Pressure Control Subsystem (ARPCS) are documented. The ARPCS hardware was categorized into the following subdivisions: (1) Atmospheric Make-up and Control (including the Auxiliary Oxygen Assembly, Oxygen Assembly, and Nitrogen Assembly); and (2) Atmospheric Vent and Control (including the Positive Relief Vent Assembly, Negative Relief Vent Assembly, and Cabin Vent Assembly). The IOA analysis process utilized available ARPCS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.
NASA Astrophysics Data System (ADS)
Gan, Luping; Li, Yan-Feng; Zhu, Shun-Peng; Yang, Yuan-Jian; Huang, Hong-Zhong
2014-06-01
Failure mode, effects and criticality analysis (FMECA) and fault tree analysis (FTA) are powerful tools to evaluate the reliability of systems. Although the single failure mode issue can be efficiently addressed by traditional FMECA, multiple failure modes and component correlations in complex systems cannot be effectively evaluated. In addition, correlated variables and parameters are often assumed to be precisely known in quantitative analysis. In fact, due to the lack of information, epistemic uncertainty commonly exists in engineering design. To solve these problems, the advantages of FMECA, FTA, fuzzy theory, and Copula theory are integrated into a unified hybrid method called the fuzzy probability weighted geometric mean (FPWGM) risk priority number (RPN) method. The epistemic uncertainty of risk variables and parameters is characterized by fuzzy numbers to obtain a fuzzy weighted geometric mean (FWGM) RPN for a single failure mode. Multiple failure modes are connected using minimal cut sets (MCS), and Boolean logic is used to combine the fuzzy risk priority number (FRPN) of each MCS. Moreover, Copula theory is applied to analyze the correlation of multiple failure modes in order to derive the failure probabilities of each MCS. Compared to the case where dependency among multiple failure modes is not considered, the Copula modeling approach eliminates the error of reliability analysis. Furthermore, for the purpose of quantitative analysis, probability importance weights derived from the failure probabilities are assigned to the FWGM RPN to reassess the risk priority, which generalizes the definitions of probability weight and FRPN, resulting in a more accurate estimation than that of the traditional models. Finally, a basic fatigue analysis case drawn from turbine and compressor blades in an aeroengine is used to demonstrate the effectiveness and robustness of the presented method. The result provides some important insights on fatigue reliability analysis and risk priority assessment of structural systems under failure correlations.
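The full FPWGM procedure (fuzzy ratings, minimal cut sets, and Copula-based dependence) is well beyond a short example, so the sketch below illustrates only the weighted geometric mean step on triangular fuzzy ratings, followed by a simple centroid defuzzification for ranking. The ratings, the weights, and the componentwise approximation of the fuzzy product are all assumptions made for illustration, not the paper's formulation.

    import numpy as np

    # Triangular fuzzy ratings (low, modal, high) for one failure mode -- assumed values.
    severity   = np.array([6.0, 7.0, 8.0])
    occurrence = np.array([3.0, 4.0, 5.0])
    detection  = np.array([5.0, 6.0, 7.0])
    weights = {"S": 0.4, "O": 0.35, "D": 0.25}     # assumed relative weights, sum to 1

    # Fuzzy weighted geometric mean, approximated componentwise on the (l, m, u) triples.
    fwgm = severity ** weights["S"] * occurrence ** weights["O"] * detection ** weights["D"]

    # Simple centroid defuzzification of the triangular result for ranking.
    crisp_rpn = fwgm.mean()
    print(f"fuzzy WGM RPN = {np.round(fwgm, 2)}, defuzzified = {crisp_rpn:.2f}")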
NASA Technical Reports Server (NTRS)
Wong, J. T.; Andre, W. L.
1981-01-01
A recent result shows that, for a certain class of systems, the interdependency among the elements of such a system, together with the elements, constitutes a mathematical structure: a partially ordered set. It is called a loop-free logic model of the system. On the basis of an intrinsic property of the mathematical structure, a characterization of system component failure in terms of maximal subsets of bad test signals of the system was obtained. Also, as a consequence, information concerning the total number of failed components in the system was deduced. Detailed examples are given to show how to restructure real systems containing loops into loop-free models for which the result is applicable.
Recent advances in computational structural reliability analysis methods
NASA Astrophysics Data System (ADS)
Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.
1993-10-01
The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
Recent advances in computational structural reliability analysis methods
NASA Technical Reports Server (NTRS)
Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.
1993-01-01
The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
PV System Component Fault and Failure Compilation and Analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klise, Geoffrey Taylor; Lavrova, Olga; Gooding, Renee Lynne
This report describes data collection and analysis of solar photovoltaic (PV) equipment events, which consist of faults and failures that occur during the normal operation of a distributed PV system or PV power plant. We present summary statistics from locations where maintenance data is being collected at various intervals, as well as reliability statistics gathered from that data, consisting of fault/failure distributions and repair distributions for a wide range of PV equipment types.
Ultra Reliable Closed Loop Life Support for Long Space Missions
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Ewert, Michael K.
2010-01-01
Spacecraft human life support systems can achieve ultra reliability by providing sufficient spares to replace all failed components. The additional mass of spares for ultra reliability is approximately equal to the original system mass, provided that the original system reliability is not too low. Acceptable reliability can be achieved for the Space Shuttle and Space Station by preventive maintenance and by replacing failed units. However, on-demand maintenance and repair requires a logistics supply chain in place to provide the needed spares. In contrast, a Mars or other long space mission must take along all the needed spares, since resupply is not possible. Long missions must achieve ultra reliability, a very low failure rate per hour, since they will take years rather than weeks and cannot be cut short if a failure occurs. Also, distant missions have a much higher mass launch cost per kilogram than near-Earth missions. Achieving ultra reliable spacecraft life support systems with acceptable mass will require a well-planned and extensive development effort. Analysis must determine the reliability requirement and allocate it to subsystems and components. Ultra reliability requires reducing the intrinsic failure causes, providing spares to replace failed components and having "graceful" failure modes. Technologies, components, and materials must be selected and designed for high reliability. Long duration testing is needed to confirm very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The system must be designed, developed, integrated, and tested with system reliability in mind. Maintenance and reparability of failed units must not add to the probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass should start soon since it must be a long term effort.
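The trade between spares mass and reliability described above can be illustrated with a standard Poisson sparing model: for a component with a constant failure rate over a mission, the probability that n spares suffice is the Poisson cumulative probability of at most n failures. The failure rate, mission length, and confidence target below are assumed values, not figures from the paper.

    import math

    def prob_spares_sufficient(failure_rate_per_hr, mission_hours, n_spares):
        """Poisson model: probability that no more than n_spares failures occur."""
        lam = failure_rate_per_hr * mission_hours
        return sum(math.exp(-lam) * lam ** k / math.factorial(k) for k in range(n_spares + 1))

    def spares_needed(failure_rate_per_hr, mission_hours, target_prob):
        n = 0
        while prob_spares_sufficient(failure_rate_per_hr, mission_hours, n) < target_prob:
            n += 1
        return n

    # Assumed numbers: one life-support component, 2.5-year Mars mission.
    rate, hours = 1.0e-4, 2.5 * 8760
    print("spares for 0.999 confidence:", spares_needed(rate, hours, 0.999))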
Reliability analysis of different structure parameters of PCBA under drop impact
NASA Astrophysics Data System (ADS)
Liu, P. S.; Fan, G. M.; Liu, Y. H.
2018-03-01
The drop-impact model of the PCBA is established in the finite element analysis software ABAQUS. Firstly, the Input-G method and the fatigue life under drop impact are introduced, and the mechanism of solder joint failure during the drop is analysed. The main reason for solder joint failure is that the PCB component suffers repeated tension and compression stress during the drop impact. Finally, the equivalent stress and peel stress of different solder joints and board-level components under different impact accelerations are also analysed. The results show that the reliability of the tin-silver-copper joint is better than that of the tin-lead solder joint, and that the expected fatigue life of the solder joint decreases as the impact pulse amplitude increases.
Fatigue failure of metal components as a factor in civil aircraft accidents
NASA Technical Reports Server (NTRS)
Holshouser, W. L.; Mayner, R. D.
1972-01-01
A review of records maintained by the National Transportation Safety Board showed that 16,054 civil aviation accidents occurred in the United States during the 3-year period ending December 31, 1969. Material failure was an important factor in the cause of 942 of these accidents. Fatigue was identified as the mode of the material failures associated with the cause of 155 accidents, and in many other accidents the records indicated that fatigue failures might have been involved. There were 27 fatal accidents and 157 fatalities in accidents in which fatigue failures of metal components were definitely identified. Fatigue failures associated with accidents occurred most frequently in landing-gear components, followed in order by powerplant, propeller, and structural components in fixed-wing aircraft and tail-rotor and main-rotor components in rotorcraft. In a study of 230 laboratory reports on failed components associated with the cause of accidents, fatigue was identified as the mode of failure in more than 60 percent of the failed components. The most frequently identified cause of fatigue, as well as most other types of material failures, was improper maintenance (including inadequate inspection). Fabrication defects, design deficiencies, defective material, and abnormal service damage also caused many fatigue failures. Four case histories of major accidents are included in the paper as illustrations of some of the factors involved in fatigue failures of aircraft components.
Failure analysis of an aluminum alloy material framework component induced by casting defects
NASA Astrophysics Data System (ADS)
Li, Bo; Hu, Weiye
2017-09-01
Failure analysis of a fractured radome framework component was carried out through visual observations, metallographic examination using an optical microscope, fractography inspections using a scanning electron microscope, and chemical composition analysis. The failed frame was made of cast Al-Si7-Mg0.4 aluminum alloy and had previously undergone vibration performance tests. It was indicated that the fractures were attributed to fatigue cracks induced by casting porosities at the outer surfaces of the frame. Failure analysis was carefully conducted for the semi-penetrating crack appearing on the framework. According to the fractography inspected by scanning electron microscope, numerous casting porosities at the outer surface of the framework played the role of multiple fracture sources under the applied stresses. Optical microstructure observations suggested that the dendrite-shaped casting porosities largely contributed to crack initiation. The groove-shaped structure at the roots of spatial convex bodies on the edges of the casting porosities supplied the preferred paths of crack propagation. Besides, the brittle silicon eutectic particles distributing along grain boundaries induced the intergranular fracture mode in the region of the overload final fracture surface.
Independent Orbiter Assessment (IOA): Analysis of the hydraulics/water spray boiler subsystem
NASA Technical Reports Server (NTRS)
Duval, J. D.; Davidson, W. R.; Parkman, William E.
1986-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results for the Orbiter Hydraulics/Water Spray Boiler Subsystem. The hydraulic system provides hydraulic power to gimbal the main engines, actuate the main engine propellant control valves, move the aerodynamic flight control surfaces, lower the landing gear, apply wheel brakes, steer the nosewheel, and dampen the external tank (ET) separation. Each hydraulic system has an associated water spray boiler which is used to cool the hydraulic fluid and APU lubricating oil. The IOA analysis process utilized available HYD/WSB hardware drawings, schematics and documents for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 430 failure modes analyzed, 166 were determined to be PCIs.
Analysis on IGBT and Diode Failures in Distribution Electronic Power Transformers
NASA Astrophysics Data System (ADS)
Wang, Si-cong; Sang, Zi-xia; Yan, Jiong; Du, Zhi; Huang, Jia-qi; Chen, Zhu
2018-02-01
The fault characteristics of power electronic components are of great importance for a power electronic device, and are of extraordinary importance for those applied in power systems. The topology structures and control method of the Distribution Electronic Power Transformer (D-EPT) are introduced, and an exploration of the fault types and fault characteristics for IGBT and diode failures is presented. The analysis and simulation of the fault characteristics of the different fault types lead to the D-EPT fault location scheme.
Local-global analysis of crack growth in continuously reinforced ceramic matrix composites
NASA Technical Reports Server (NTRS)
Ballarini, Roberto; Ahmed, Shamim
1989-01-01
This paper describes the development of a mathematical model for predicting the strength and micromechanical failure characteristics of continuously reinforced ceramic matrix composites. The local-global analysis models the vicinity of a propagating crack tip as a local heterogeneous region (LHR) consisting of spring-like representation of the matrix, fibers and interfaces. Parametric studies are conducted to investigate the effects of LHR size, component properties, and interface conditions on the strength and sequence of the failure processes in the unidirectional composite system.
Impact analysis of composite aircraft structures
NASA Technical Reports Server (NTRS)
Pifko, Allan B.; Kushner, Alan S.
1993-01-01
The impact analysis of composite aircraft structures is discussed. Topics discussed include: background remarks on aircraft crashworthiness; comments on modeling strategies for crashworthiness simulation; initial study of simulation of progressive failure of an aircraft component constructed of composite material; and research direction in composite characterization for impact analysis.
Probabilistic structural analysis methods for space transportation propulsion systems
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Moore, N.; Anis, C.; Newell, J.; Nagpal, V.; Singhal, S.
1991-01-01
Information on probabilistic structural analysis methods for space propulsion systems is given in viewgraph form. Information is given on deterministic certification methods, probability of failure, component response analysis, stress responses for 2nd stage turbine blades, Space Shuttle Main Engine (SSME) structural durability, and program plans.
NASA Technical Reports Server (NTRS)
Hendricks, Robert C.; Zaretsky, Erwin V.
2001-01-01
Critical component design is based on minimizing product failures that result in loss of life. Potential catastrophic failures are reduced to secondary failures when components are removed for cause or for operating time in the system. Issues of liability and the cost of component removal become of paramount importance. Deterministic design with factors of safety and probabilistic design both address, but lack, the essential characteristics needed for the design of critical components. In deterministic design and fabrication there are heuristic rules and safety factors developed over time for large sets of structural/material components. These factors did not come without cost. Many designs failed, and many rules (codes) have standing committees to oversee their proper usage and enforcement. In probabilistic design, not only are failures a given, the failures are calculated; an element of risk is assumed based on empirical failure data for large classes of component operations. Failure of a class of components can be predicted, yet one cannot predict when a specific component will fail. The analogy is to the life insurance industry, where very careful statistics are bookkept on classes of individuals. For a specific class, life span can be predicted within statistical limits, yet the life span of a specific element of that class cannot be predicted.
NASA Technical Reports Server (NTRS)
Wanthal, Steven; Schaefer, Joseph; Justusson, Brian; Hyder, Imran; Engelstad, Stephen; Rose, Cheryl
2017-01-01
The Advanced Composites Consortium is a US Government/Industry partnership supporting technologies to enable timeline and cost reduction in the development of certified composite aerospace structures. A key component of the consortium's approach is the development and validation of improved progressive damage and failure analysis methods for composite structures. These methods will enable increased use of simulations in design trade studies and detailed design development, and thereby enable more targeted physical test programs to validate designs. To accomplish this goal with confidence, a rigorous verification and validation process was developed. The process was used to evaluate analysis methods and associated implementation requirements to ensure calculation accuracy and to gage predictability for composite failure modes of interest. This paper introduces the verification and validation process developed by the consortium during the Phase I effort of the Advanced Composites Project. Specific structural failure modes of interest are first identified, and a subset of standard composite test articles are proposed to interrogate a progressive damage analysis method's ability to predict each failure mode of interest. Test articles are designed to capture the underlying composite material constitutive response as well as the interaction of failure modes representing typical failure patterns observed in aerospace structures.
Material wear and failure mode analysis of breakfast cereal extruder barrels and screw elements
NASA Astrophysics Data System (ADS)
Mastio, Michael Joseph, Jr.
2005-11-01
Nearly seventy-five years ago, the single screw extruder was introduced as a means to produce metal products. Shortly after that, the extruder found its way into the plastics industry. Today much of the world's polymer industry utilizes extruders to produce items such as soda bottles, PVC piping, and toy figurines. Given the significant economical advantages of extruders over conventional batch flow systems, extruders have also migrated into the food industry. Food applications include the meat, pet food, and cereal industries to name just a few. Cereal manufacturers utilize extruders to produce various forms of Ready-to-Eat (RTE) cereals. These cereals are made from grains such as rice, oats, wheat, and corn. The food industry has been incorrectly viewed as an extruder application requiring only minimal energy control and performance capability. This misconception has resulted in very little research in the area of material wear and failure mode analysis of breakfast cereal extruders. Breakfast cereal extruder barrels and individual screw elements are subjected to the extreme pressures and temperatures required to shear and cook the cereal ingredients, resulting in excessive material wear and catastrophic failure of these components. Therefore, this project focuses on the material wear and failure mode analysis of breakfast cereal extruder barrels and screw elements, modeled as a Discrete Time Markov Chain (DTMC) process in which historical data is used to predict future failures. Such predictive analysis will yield cost savings opportunities by providing insight into extruder maintenance scheduling and interchangeability of screw elements. In this DTMC wear analysis, four states of wear are defined and a probability transition matrix is determined based upon 24,041 hours of operational data. This probability transition matrix is used to predict when an extruder component will move to the next state of wear and/or failure. This information can be used to determine maintenance schedules and screw element interchangeability.
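As a sketch of how a wear-state transition matrix of the kind described above is used, the example below defines an assumed four-state chain (the dissertation's actual matrix is not reproduced), propagates the state distribution over maintenance intervals, and computes the expected number of intervals before a component reaches the failed state.

    import numpy as np

    # Assumed four-state wear chain: 0 = new, 1 = light wear, 2 = heavy wear, 3 = failed.
    # Rows are current states, columns next states; state 3 is absorbing. Values are illustrative.
    P = np.array([[0.90, 0.08, 0.02, 0.00],
                  [0.00, 0.85, 0.12, 0.03],
                  [0.00, 0.00, 0.80, 0.20],
                  [0.00, 0.00, 0.00, 1.00]])

    # Distribution after k maintenance intervals, starting from a new component.
    start = np.array([1.0, 0.0, 0.0, 0.0])
    for k in (5, 10, 20):
        dist = start @ np.linalg.matrix_power(P, k)
        print(f"after {k} intervals: P(failed) = {dist[3]:.3f}")

    # Expected intervals to failure from each transient state: t = (I - Q)^-1 * 1.
    Q = P[:3, :3]
    t = np.linalg.solve(np.eye(3) - Q, np.ones(3))
    print("expected intervals to failure from 'new':", round(t[0], 1))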
NASA Technical Reports Server (NTRS)
Frady, Greg; Nesman, Thomas; Zoladz, Thomas; Szabo, Roland
2010-01-01
For many years, the ability to determine the root cause of component failures was limited by the available analytical tools and data acquisition systems. With this limited capability, many anomalies were resolved by adding material to the design to increase robustness, without any way to determine whether the design solution was satisfactory until a series of expensive test programs was complete. The risk of failure and of multiple design, test, and redesign cycles was high. During the Space Shuttle Program, crack investigations in high-energy-density turbomachines, such as the SSME turbopumps, and in the high-energy flows of the main propulsion system led to the discovery of numerous root-cause failures and anomalies arising from the coexistence of acoustic forcing functions, structural natural modes, and a high-energy excitation such as an edge tone or shedding flow. This work led the technical community to understand many of the primary contributors to extremely high-frequency, high-cycle-fatigue fluid-structure interaction anomalies. These contributors have been identified using advanced analysis tools and verified during component ground tests, systems tests, and flight. Over the years, the structural dynamics and fluid dynamics communities have developed a special sensitivity to fluid-structure interaction problems and have been able to diagnose and solve them quickly enough to meet the budget and schedule deadlines of operational vehicle programs such as the Space Shuttle Program.
Small vulnerable sets determine large network cascades in power grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Yang; Nishikawa, Takashi; Motter, Adilson E.
2017-11-17
The understanding of cascading failures in complex systems has been hindered by the lack of realistic large-scale modeling and analysis that can account for variable system conditions. Using the North American power grid, we identified, quantified, and analyzed the set of network components that are vulnerable to cascading failures under any of multiple conditions. We show that the vulnerable set consists of a small but topologically central portion of the network and that large cascades are disproportionately more likely to be triggered by initial failures close to this set. These results elucidate aspects of the origins and causes of cascading failures relevant for grid design and operation and demonstrate vulnerability analysis methods that are applicable to a wider class of cascade-prone networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weber, E.R.
1983-09-01
The appendixes for the Saguaro Power Plant include the following: receiver configuration selection report; cooperating modes and transitions; failure modes analysis; control system analysis; computer codes and simulation models; procurement package scope descriptions; responsibility matrix; solar system flow diagram component purpose list; thermal storage component and system test plans; solar steam generator tube-to-tubesheet weld analysis; pipeline listing; management control schedule; and system list and definitions.
NASA Technical Reports Server (NTRS)
Brown, K. L.; Bertsch, P. J.
1986-01-01
Results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Generation (EPG)/Fuel Cell Powerplant (FCP) hardware. The EPG/FCP hardware is required for performing functions of electrical power generation and product water distribution in the Orbiter. Specifically, the EPG/FCP hardware consists of the following divisions: (1) Power Section Assembly (PSA); (2) Reactant Control Subsystem (RCS); (3) Thermal Control Subsystem (TCS); and (4) Water Removal Subsystem (WRS). The IOA analysis process utilized available EPG/FCP hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.
Independent Orbiter Assessment (IOA): Analysis of the orbital maneuvering system
NASA Technical Reports Server (NTRS)
Prust, C. D.; Paul, D. J.; Burkemper, V. J.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbital Maneuvering System (OMS) hardware are documented. The OMS provides the thrust to perform orbit insertion, orbit circularization, orbit transfer, rendezvous, and deorbit. The OMS is housed in two independent pods located one on each side of the tail and consists of the following subsystems: Helium Pressurization; Propellant Storage and Distribution; Orbital Maneuvering Engine; and Electrical Power Distribution and Control. The IOA analysis process utilized available OMS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.
Computing Reliabilities Of Ceramic Components Subject To Fracture
NASA Technical Reports Server (NTRS)
Nemeth, N. N.; Gyekenyesi, J. P.; Manderscheid, J. M.
1992-01-01
CARES calculates fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. Program uses results from commercial structural-analysis program (MSC/NASTRAN or ANSYS) to evaluate reliability of component in presence of inherent surface- and/or volume-type flaws. Computes measure of reliability by use of finite-element mathematical model applicable to multiple materials in sense model made function of statistical characterizations of many ceramic materials. Reliability analysis uses element stress, temperature, area, and volume outputs, obtained from two-dimensional shell and three-dimensional solid isoparametric or axisymmetric finite elements. Written in FORTRAN 77.
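As a rough illustration of the kind of fast-fracture calculation CARES automates, the sketch below evaluates a two-parameter Weibull failure probability for volume flaws under a uniaxial stress state. The Weibull modulus, scale parameter, and per-element stresses and volumes are hypothetical; the real code additionally handles multiaxial stress states, surface flaws, and the Batdorf and PIA models.

```python
import numpy as np

m = 10.0          # hypothetical Weibull modulus
sigma_0 = 400.0   # hypothetical Weibull scale parameter, MPa*(mm^3)^(1/m)

# per-element first principal stress (MPa) and volume (mm^3), e.g. from FEA output
stress = np.array([250.0, 310.0, 180.0, 290.0])
volume = np.array([1.2, 0.8, 2.0, 0.5])

# risk of rupture summed over elements, then converted to failure probability
risk_of_rupture = np.sum(volume * (stress / sigma_0) ** m)
failure_probability = 1.0 - np.exp(-risk_of_rupture)
print(f"P_f = {failure_probability:.4f}")
```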
Independent Orbiter Assessment (IOA): Analysis of the guidance, navigation, and control subsystem
NASA Technical Reports Server (NTRS)
Trahan, W. H.; Odonnell, R. A.; Pietz, K. C.; Hiott, J. M.
1986-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results corresponding to the Orbiter Guidance, Navigation, and Control (GNC) Subsystem hardware are documented. The function of the GNC hardware is to respond to guidance, navigation, and control software commands to effect vehicle control and to provide sensor and controller data to GNC software. Some of the GNC hardware for which failure modes analysis was performed includes: hand controllers; Rudder Pedal Transducer Assembly (RPTA); Speed Brake Thrust Controller (SBTC); Inertial Measurement Unit (IMU); Star Tracker (ST); Crew Optical Alignment Sight (COAS); Air Data Transducer Assembly (ADTA); Rate Gyro Assemblies; Accelerometer Assembly (AA); Aerosurface Servo Amplifier (ASA); and Ascent Thrust Vector Control (ATVC). The IOA analysis process utilized available GNC hardware drawings, workbooks, specifications, schematics, and systems briefs for defining hardware assemblies, components, and circuits. Each hardware item was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.
Independent Orbiter Assessment (IOA): Analysis of the manned maneuvering unit
NASA Technical Reports Server (NTRS)
Bailey, P. S.
1986-01-01
Results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Manned Maneuvering Unit (MMU) hardware. The MMU is a propulsive backpack, operated through separate hand controllers that input the pilot's translational and rotational maneuvering commands to the control electronics and then to the thrusters. The IOA analysis process utilized available MMU hardware drawings and schematics for defining hardware subsystems, assemblies, components, and hardware items. Final levels of detail were evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the worst-case severity of the effect for each identified failure mode. The IOA analysis of the MMU found that the majority of the PCIs identified result from the loss of either the propulsion or control functions, or from an inability to perform an immediate or future mission. The five most severe criticalities identified all result from failures imposed on the MMU hand controllers, which have no redundancy within the MMU.
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
A Critical Analysis of the Conventionally Employed Creep Lifing Methods
Abdallah, Zakaria; Gray, Veronica; Whittaker, Mark; Perkins, Karen
2014-01-01
The deformation of structural alloys presents problems for power plants and aerospace applications due to the demand for elevated temperatures for higher efficiencies and reductions in greenhouse gas emissions. The materials used in such applications experience harsh environments which may lead to deformation and failure of critical components. To avoid such catastrophic failures and also increase efficiency, future designs must utilise novel/improved alloy systems with enhanced temperature capability. In recognising this issue, a detailed understanding of creep is essential for the success of these designs by ensuring components do not experience excessive deformation which may ultimately lead to failure. To achieve this, a variety of parametric methods have been developed to quantify creep and creep fracture in high temperature applications. This study reviews a number of well-known traditionally employed creep lifing methods with some more recent approaches also included. The first section of this paper focuses on predicting the long-term creep rupture properties which is an area of interest for the power generation sector. The second section looks at pre-defined strains and the re-production of full creep curves based on available data which is pertinent to the aerospace industry where components are replaced before failure. PMID:28788623
Probabilistic framework for product design optimization and risk management
NASA Astrophysics Data System (ADS)
Keski-Rahkonen, J. K.
2018-05-01
Probabilistic methods have gradually gained ground within engineering practices but currently it is still the industry standard to use deterministic safety margin approaches to dimensioning components and qualitative methods to manage product risks. These methods are suitable for baseline design work but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and furthermore to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process to be applicable in practical engineering work. Recommended process applies Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds on existing literature by introducing a practical framework to use probabilistic models in quantitative risk management and product life cycle costs optimization. The main focus is on mechanical failure modes due to the well-developed methods used to predict these types of failures. However, the same framework can be applied on any type of failure mode as long as predictive models can be developed.
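A minimal sketch of the recommended Monte Carlo load-resistance step might look as follows; the distributions and parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# hypothetical load and resistance distributions (stress-strength interference)
load = rng.normal(loc=300.0, scale=40.0, size=n)                       # applied stress, MPa
resistance = rng.lognormal(mean=np.log(450.0), sigma=0.10, size=n)     # component strength, MPa

# failure occurs whenever the sampled load exceeds the sampled resistance
p_fail = np.mean(load > resistance)
print(f"estimated failure probability: {p_fail:.2e}")
```

The same estimate feeds directly into a life cycle cost comparison: multiplying the failure probability by the cost of a failure event and comparing it against the cost of a design change is the optimization loop the framework describes.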
NESTEM-QRAS: A Tool for Estimating Probability of Failure
NASA Technical Reports Server (NTRS)
Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.
2002-01-01
An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability provides a needed input for estimating the success rate for any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, together with a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface, and walks through the stepwise process the interface uses by means of an example.
Probabilistic analysis on the failure of reactivity control for the PWR
NASA Astrophysics Data System (ADS)
Sony Tjahyani, D. T.; Deswandri; Sunaryo, G. R.
2018-02-01
The fundamental safety functions of a power reactor are to control reactivity, to remove heat from the reactor, and to confine radioactive material. Safety analysis is used to ensure that each parameter is fulfilled during design, and it is performed by both deterministic and probabilistic methods. The analysis of reactivity control is important because its failure affects the other fundamental safety functions. The purpose of this research is to determine the failure probability of reactivity control and its failure contribution for a PWR design. The analysis is carried out by determining the intermediate events that cause the failure of reactivity control. The basic events are then determined deductively using fault tree analysis. The AP1000 is used as the object of research. The component failure and human error probability data used in the analysis are collected from IAEA, Westinghouse, NRC, and other published documents. The results show that there are six intermediate events that can cause the failure of reactivity control. These intermediate events are uncontrolled rod bank withdrawal at low power or full power, malfunction of boron dilution, misalignment of control rod withdrawal, malfunction of improper position of fuel assembly, and ejection of control rod. The failure probability of reactivity control is 1.49E-03 per year. The failure causes affected by human factors are boron dilution, misalignment of control rod withdrawal, and malfunction of improper position for fuel assembly. Based on the assessment, it is concluded that the failure probability of reactivity control on the PWR is still within the IAEA criteria.
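For readers unfamiliar with how basic-event probabilities roll up through a fault tree, the sketch below shows the standard AND/OR gate arithmetic for independent events. The event names and numbers are hypothetical and do not reproduce the AP1000 reactivity-control tree evaluated in the paper.

```python
def or_gate(*p):
    """P(at least one event occurs) = 1 - prod(1 - p_i), for independent events."""
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):
    """P(all events occur) = prod(p_i), for independent events."""
    q = 1.0
    for pi in p:
        q *= pi
    return q

# hypothetical basic-event probabilities (per year)
valve_fails = 1.0e-3
operator_error = 5.0e-3
interlock_fails = 2.0e-4

boron_dilution = and_gate(operator_error, interlock_fails)   # both must occur
top_event = or_gate(valve_fails, boron_dilution)             # either path defeats control
print(f"top event probability: {top_event:.3e} per year")
```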
NASA Technical Reports Server (NTRS)
Ling, Lisa
2014-01-01
For the purpose of performing safety analysis and risk assessment for a probable off-nominal suborbital/orbital atmospheric reentry resulting in vehicle breakup, a synthesis of trajectory propagation coupled with thermal analysis and the evaluation of node failure is required to predict the sequence of events, the timeline, and the progressive demise of spacecraft components. To provide this capability, the Simulation for Prediction of Entry Article Demise (SPEAD) analysis tool was developed. This report discusses the capabilities, modeling, and validation of the SPEAD analysis tool. SPEAD is applicable for Earth or Mars, with the option for 3 or 6 degrees-of-freedom (DOF) trajectory propagation. The atmosphere and aerodynamics data are supplied in tables, for linear interpolation of up to 4 independent variables. The gravitation model can include up to 20 zonal harmonic coefficients. The modeling of a single motor is available and can be adapted to multiple motors. For thermal analysis, the aerodynamic radiative and free-molecular/continuum convective heating, black-body radiative cooling, conductive heat transfer between adjacent nodes, and node ablation are modeled. In a 6-DOF simulation, the local convective heating on a node is a function of Mach, angle-of-attack, and sideslip angle, and is dependent on 1) the location of the node in the spacecraft and its orientation to the flow, modeled by an exposure factor, and 2) the geometries of the spacecraft and the node, modeled by a heating factor and convective area. Node failure is evaluated using criteria based on melting temperature, reference heat load, g-load, or a combination of the above. The failure of a liquid propellant tank is evaluated based on burnout flux from nucleate boiling or excess internal pressure. Following a component failure, updates are made as needed to the spacecraft mass and aerodynamic properties, nodal exposure and heating factors, and nodal convective and conductive areas. This allows the trajectory to be propagated seamlessly in a single run, inclusive of the trajectories of components that have separated from the spacecraft. The node ablation simulates the decreasing mass and convective/reference areas, and variable heating factor. A built-in database provides the thermo-mechanical properties of
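A highly simplified reading of the node-failure test described above is sketched below; the function name, arguments, and threshold values are hypothetical and are not SPEAD's internal interface.

```python
def node_failed(temp_K, heat_load_J_per_m2, melt_temp_K, ref_heat_load_J_per_m2):
    """Declare a node failed when it reaches its melting temperature or when
    its accumulated heat load exceeds a reference value (a simplified stand-in
    for the combined criteria SPEAD supports)."""
    return temp_K >= melt_temp_K or heat_load_J_per_m2 >= ref_heat_load_J_per_m2

# hypothetical node state at one time step
print(node_failed(temp_K=1650.0, heat_load_J_per_m2=2.0e7,
                  melt_temp_K=1700.0, ref_heat_load_J_per_m2=1.5e7))  # True
```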
Transient Reliability Analysis Capability Developed for CARES/Life
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.
2001-01-01
The CARES/Life software developed at the NASA Glenn Research Center provides a general-purpose design tool that predicts the probability of failure of a ceramic component as a function of its time in service. This award-winning software has been widely used by U.S. industry to establish the reliability and life of brittle material structures (e.g., ceramic, intermetallic, and graphite) in a wide variety of 21st century applications. Present capabilities of the NASA CARES/Life code include probabilistic life prediction of ceramic components subjected to fast fracture, slow crack growth (stress corrosion), and cyclic fatigue failure modes. Currently, this code can compute the time-dependent reliability of ceramic structures subjected to simple time-dependent loading. For example, in slow crack growth failure conditions CARES/Life can handle sustained and linearly increasing time-dependent loads, whereas in cyclic fatigue applications various types of repetitive constant-amplitude loads can be accounted for. However, in real applications applied loads are rarely that simple but vary with time in more complex ways, such as during engine startup, shutdown, and dynamic and vibrational loading. In addition, when a given component is subjected to transient environmental and/or thermal conditions, the material properties also vary with time. A methodology has now been developed to allow the CARES/Life computer code to perform reliability analysis of ceramic components undergoing transient thermal and mechanical loading. This means that CARES/Life will be able to analyze finite element models of ceramic components that simulate dynamic engine operating conditions. The methodology developed is generalized to account for material property variation (on strength distribution and fatigue) as a function of temperature. This allows CARES/Life to analyze components undergoing rapid temperature change, in other words, components undergoing thermal shock. In addition, the capability has been developed to perform reliability analysis for components that undergo proof testing involving transient loads. This methodology was developed for environmentally assisted crack growth (crack growth as a function of time and loading), but it will be extended to account for cyclic fatigue (crack growth as a function of load cycles) as well.
RI 1170 advanced strapdown gyro
NASA Technical Reports Server (NTRS)
1973-01-01
The major components of the RI 1170 gyroscope are described. A detailed functional description of the electronics, including block diagrams and photographs of output waveshapes within the loop electronics, is presented. An electronic data flow diagram is included. Those gyro subassemblies that were originally planned and subsequently changed or modified for one reason or another are discussed in detail. Variations to the original design included the capacitive pickoffs, torquer flexleads, magnetic suspension, gas bearings, electronic design, and packaging. The selection of components and changes from the original design and components selected are discussed. Device failures experienced throughout the program are reported, and design corrections to eliminate the failure modes are noted. Major design deficiencies, such as those of the MSE electronics, are described in detail. Modifications made to the gas bearing parts and design improvements to the wheel are noted. Changes to the gas bearing prints are included, as well as a mathematical analysis of the 1170 gas bearing wheel by computer analysis. The mean free-path effects on gas bearing performance are summarized.
Code of Federal Regulations, 2011 CFR
2011-01-01
... undergo analysis and testing that is comparable to that required by this part to demonstrate that the...) Functions, subsystems, and components. When initiated in the event of a launch vehicle failure, a flight...
Code of Federal Regulations, 2013 CFR
2013-01-01
... undergo analysis and testing that is comparable to that required by this part to demonstrate that the...) Functions, subsystems, and components. When initiated in the event of a launch vehicle failure, a flight...
Code of Federal Regulations, 2012 CFR
2012-01-01
... undergo analysis and testing that is comparable to that required by this part to demonstrate that the...) Functions, subsystems, and components. When initiated in the event of a launch vehicle failure, a flight...
Code of Federal Regulations, 2014 CFR
2014-01-01
... undergo analysis and testing that is comparable to that required by this part to demonstrate that the...) Functions, subsystems, and components. When initiated in the event of a launch vehicle failure, a flight...
Phase dependent fracture and damage evolution of polytetrafluoroethylene (PTFE)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, E. N.; Rae, P.; Orler, E. B.
2004-01-01
Compared with other polymers, polytetrafluoroethylene (PTFE) presents several advantages for load-bearing structural components, including higher strength at elevated temperatures and higher toughness at lowered temperatures. Failure-sensitive applications of PTFE include surgical implants, aerospace components, and chemical barriers. Polytetrafluoroethylene is semicrystalline in nature, with its linear chains forming complicated phases near room temperature and ambient pressure. The presence of three unique phases near room temperature implies that failure during standard operating conditions may be strongly dependent on the phase. This paper presents a comprehensive and systematic study of fracture and damage evolution in PTFE to elicit the effects of temperature-induced phase on fracture mechanisms. The fracture behavior of PTFE is observed to undergo transitions from brittle fracture below 19 C, to ductile fracture with crazing and some stable crack growth, to plastic flow above 30 C. The bulk failure properties are correlated to failure mechanisms through fractography and analysis of the crystalline structure.
Failure Diagnosis for the Holdup Tank System via ISFA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Huijuan; Bragg-Sitton, Shannon; Smidts, Carol
This paper discusses the use of the integrated system failure analysis (ISFA) technique for fault diagnosis for the holdup tank system. ISFA is a simulation-based, qualitative and integrated approach used to study fault propagation in systems containing both hardware and software subsystems. The holdup tank system consists of a tank containing a fluid whose level is controlled by an inlet valve and an outlet valve. We introduce the component and functional models of the system, quantify the main parameters and simulate possible failure-propagation paths based on the fault propagation approach, ISFA. The results show that most component failures in the holdup tank system can be identified clearly and that ISFA is viable as a technique for fault diagnosis. Since ISFA is a qualitative technique that can be used in the very early stages of system design, this case study provides indications that it can be used early to study design aspects that relate to robustness and fault tolerance.
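The qualitative flavor of fault propagation through a component model can be illustrated with a small directed graph; the components and influence edges below are hypothetical stand-ins and are not the ISFA model of the holdup tank system.

```python
import networkx as nx

# Directed edges point from a component to the components or functions it influences.
G = nx.DiGraph()
G.add_edges_from([
    ("level_sensor", "controller"),
    ("controller", "inlet_valve"),
    ("controller", "outlet_valve"),
    ("inlet_valve", "tank_level"),
    ("outlet_valve", "tank_level"),
])

failed = "level_sensor"
affected = nx.descendants(G, failed)   # everything reachable downstream of the fault
print(f"failure of {failed} may propagate to: {sorted(affected)}")
```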
NASA Technical Reports Server (NTRS)
Powers, L. M.; Jadaan, O. M.; Gyekenyesi, J. P.
1998-01-01
The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as advanced turbine engine systems. Design lives for such systems can exceed 10,000 hours. The long life requirement necessitates subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this paper is to present a design methodology for predicting the lifetimes of structural components subjected to creep rupture conditions. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep strain distributions (stress relaxation). The creep life of a component is discretized into short time steps, during which the stress and strain distributions are assumed constant. The damage is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. Failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity. The corresponding time will be the creep rupture life for that component. Examples are chosen to demonstrate the Ceramics Analysis and Reliability Evaluation of Structures/CREEP (CARES/CREEP) integrated design program, which is written for the ANSYS finite element package. Depending on the component size and loading conditions, it was found that in real structures one of two competing failure modes (creep or slow crack growth) will dominate. Applications to benchmark problems and engine components are included.
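The time-stepped damage bookkeeping described above can be sketched as a simple life-fraction accumulation. The rupture-time law and all constants below are hypothetical illustrations; CARES/CREEP applies a modified Monkman-Grant criterion to FEA stress and temperature fields rather than the toy correlation used here.

```python
import numpy as np

def rupture_time(stress_mpa):
    # hypothetical power-law creep-rupture correlation, hours
    return 1.0e12 * stress_mpa ** -4.0

dt = 10.0                                          # time step, hours
stress_history = np.linspace(120.0, 90.0, 2000)    # slowly relaxing stress, MPa

damage = 0.0
for i, sigma in enumerate(stress_history):
    damage += dt / rupture_time(sigma)             # damage increment for this step
    if damage >= 1.0:                              # failure when accumulated damage reaches unity
        print(f"predicted creep-rupture life ~ {(i + 1) * dt:.0f} h")
        break
else:
    print(f"no failure within {len(stress_history) * dt:.0f} h; damage = {damage:.2f}")
```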
NASA Technical Reports Server (NTRS)
Gyekenyesi, J. P.; Powers, L. M.; Jadaan, O. M.
1998-01-01
The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as advanced turbine systems. Design lives for such systems can exceed 10,000 hours. The long life requirement necessitates subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this paper is to present a design methodology for predicting the lifetimes of structural components subjected to creep rupture conditions. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep strain distributions (stress relaxation). The creep life of a component is discretized into short time steps, during which the stress and strain distributions are assumed constant. The damage is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. Failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity. The corresponding time will be the creep rupture life for that component. Examples are chosen to demonstrate the CARES/CREEP (Ceramics Analysis and Reliability Evaluation of Structures/CREEP) integrated design program, which is written for the ANSYS finite element package. Depending on the component size and loading conditions, it was found that in real structures one of two competing failure modes (creep or slow crack growth) will dominate. Applications to benchmark problems and engine components are included.
Analyzing and Predicting Effort Associated with Finding and Fixing Software Faults
NASA Technical Reports Server (NTRS)
Hamill, Maggie; Goseva-Popstojanova, Katerina
2016-01-01
Context: Software developers spend a significant amount of time fixing faults. However, not many papers have addressed the actual effort needed to fix software faults. Objective: The objective of this paper is twofold: (1) analysis of the effort needed to fix software faults and how it was affected by several factors and (2) prediction of the level of fix implementation effort based on the information provided in software change requests. Method: The work is based on data related to 1200 failures, extracted from the change tracking system of a large NASA mission. The analysis includes descriptive and inferential statistics. Predictions are made using three supervised machine learning algorithms and three sampling techniques aimed at addressing the imbalanced data problem. Results: Our results show that (1) 83% of the total fix implementation effort was associated with only 20% of failures. (2) Both safety critical failures and post-release failures required three times more effort to fix compared to non-critical and pre-release counterparts, respectively. (3) Failures with fixes spread across multiple components or across multiple types of software artifacts required more effort. The spread across artifacts was more costly than spread across components. (4) Surprisingly, some types of faults associated with later life-cycle activities did not require significant effort. (5) The level of fix implementation effort was predicted with 73% overall accuracy using the original, imbalanced data. Using oversampling techniques improved the overall accuracy up to 77%. More importantly, oversampling significantly improved the prediction of the high level effort, from 31% to around 85%. Conclusions: This paper shows the importance of tying software failures to changes made to fix all associated faults, in one or more software components and/or in one or more software artifacts, and the benefit of studying how the spread of faults and other factors affect the fix implementation effort.
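A hedged sketch of the imbalanced-classification setup is shown below, using synthetic data, a random-forest classifier, and naive random oversampling of the minority class; none of these choices are claimed to match the paper's three algorithms or its specific sampling techniques.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

# Synthetic stand-in data: 1200 change requests, 6 numeric features, and a
# minority "high effort" label; nothing here reproduces the NASA dataset.
rng = np.random.default_rng(1)
X = rng.normal(size=(1200, 6))
y = (rng.random(1200) < 0.15).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# naive random oversampling of the minority class, applied to the training split only
idx_min = np.flatnonzero(y_tr == 1)
X_up, y_up = resample(X_tr[idx_min], y_tr[idx_min],
                      n_samples=int(np.sum(y_tr == 0)), random_state=0)
X_bal = np.vstack([X_tr[y_tr == 0], X_up])
y_bal = np.concatenate([y_tr[y_tr == 0], y_up])

clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
print("recall on the high-effort class:", recall_score(y_te, clf.predict(X_te)))
```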
Pushover analysis of reinforced concrete frames considering shear failure at beam-column joints
NASA Astrophysics Data System (ADS)
Sung, Y. C.; Lin, T. K.; Hsiao, C. C.; Lai, M. C.
2013-09-01
Since most current seismic capacity evaluations of reinforced concrete (RC) frame structures are implemented by either static pushover analysis (PA) or dynamic time history analysis, with diverse settings of the plastic hinges (PHs) on such main structural components as columns, beams and walls, the complex behavior of shear failure at beam-column joints (BCJs) during major earthquakes is commonly neglected. This study proposes new nonlinear PA procedures that consider shear failure at BCJs and seek to assess the actual damage to RC structures. Based on the specifications of FEMA-356, a simplified joint model composed of two nonlinear cross struts placed diagonally over the location of the plastic hinge is established, allowing a sophisticated PA to be performed. To verify the validity of this method, the analytical results for the capacity curves and the failure mechanism derived from three different full-size RC frames are compared with the experimental measurements. By considering shear failure at BCJs, the proposed nonlinear analytical procedures can be used to estimate the structural behavior of RC frames, including seismic capacity and the progressive failure sequence of joints, in a precise and effective manner.
Anger, hostility, and hospitalizations in patients with heart failure.
Keith, Felicia; Krantz, David S; Chen, Rusan; Harris, Kristie M; Ware, Catherine M; Lee, Amy K; Bellini, Paula G; Gottlieb, Stephen S
2017-09-01
Heart failure patients have a high hospitalization rate, and anger and hostility are associated with coronary heart disease morbidity and mortality. Using structural equation modeling, this prospective study assessed the predictive validity of anger and hostility traits for cardiovascular and all-cause rehospitalizations in patients with heart failure. 146 heart failure patients were administered the STAXI and Cook-Medley Hostility Inventory to measure anger, hostility, and their component traits. Hospitalizations were recorded for up to 3 years following baseline. Causes of hospitalizations were categorized as heart failure, total cardiac, noncardiac, and all-cause (sum of cardiac and noncardiac). Measurement models were separately fit for Anger and Hostility, followed by a Confirmatory Factor Analysis to estimate the relationship between the Anger and Hostility constructs. An Anger model consisted of State Anger, Trait Anger, Anger Expression Out, and Anger Expression In, and a Hostility model included Cynicism, Hostile Affect, Aggressive Responding, and Hostile Attribution. The latent construct of Anger did not predict any of the hospitalization outcomes, but Hostility significantly predicted all-cause hospitalizations. Analyses of individual trait components of each of the 2 models indicated that Anger Expression Out predicted all-cause and noncardiac hospitalizations, and Trait Anger predicted noncardiac hospitalizations. None of the individual components of Hostility were related to rehospitalizations or death. The construct of Hostility and several components of Anger are predictive of hospitalizations that were not specific to cardiac causes. Mechanisms common to a variety of health problems, such as self-care and risky health behaviors, may be involved in these associations. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
A comparative critical study between FMEA and FTA risk analysis methods
NASA Astrophysics Data System (ADS)
Cristea, G.; Constantinescu, DM
2017-10-01
An overwhelming number of different risk analysis techniques is in use today, with acronyms such as: FMEA (Failure Modes and Effects Analysis) and its extension FMECA (Failure Mode, Effects, and Criticality Analysis), DRBFM (Design Review by Failure Mode), FTA (Fault Tree Analysis) and its extension ETA (Event Tree Analysis), HAZOP (Hazard & Operability Studies), HACCP (Hazard Analysis and Critical Control Points), and What-if/Checklist. However, the most used analysis techniques in the mechanical and electrical industry are FMEA and FTA. In FMEA, which is an inductive method, information about the consequences and effects of failures is usually collected through interviews with experienced people with different knowledge, i.e., cross-functional groups. The FMEA is used to capture potential failures/risks and their impacts and to prioritize them on a numeric scale called the Risk Priority Number (RPN), which ranges from 1 to 1000. FTA is a deductive method, i.e., a general system state is decomposed into chains of more basic events of components. The logical interrelationship of how such basic events depend on and affect each other is often described analytically in a reliability structure which can be visualized as a tree. Both methods are very time-consuming to apply thoroughly, and for this reason they often are not; as a consequence, possible failure modes may not be identified. To address these shortcomings, it is proposed to use a combination of FTA and FMEA.
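The RPN prioritization mentioned above reduces to multiplying the severity, occurrence, and detection ratings for each failure mode and sorting; the failure modes and the 1-10 ratings in this sketch are hypothetical.

```python
failure_modes = [
    # (description,          severity, occurrence, detection)
    ("connector corrosion",   7,        4,          6),
    ("seal extrusion",        8,        3,          4),
    ("sensor drift",          5,        6,          3),
]

# RPN = S * O * D; higher values are addressed first
ranked = sorted(((s * o * d, name) for name, s, o, d in failure_modes), reverse=True)
for rpn, name in ranked:
    print(f"RPN {rpn:4d}  {name}")
```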
HFE (Human Factors Engineering) Technology for Navy Weapon System Acquisition.
1979-07-01
[Garbled excerpt from a scanned report. The recoverable fragments refer to allocating requirements to electrical components using Failure Modes and Effects Analysis (FMEA) and LOR data together with component design requirements; to SAINT simulation outputs such as histograms, plots, and summaries; and to a table of numeric ratings for Electro Safety, Personnel Relationships, and Electro Circuit Analysis that could not be reconstructed.]
Design of ceramic components with the NASA/CARES computer program
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Manderscheid, Jane M.; Gyekenyesi, John P.
1990-01-01
The ceramics analysis and reliability evaluation of structures (CARES) computer program is described. The primary function of the code is to calculate the fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. These components may be subjected to complex thermomechanical loadings, such as those found in heat engine applications. CARES uses results from MSC/NASTRAN or ANSYS finite-element analysis programs to evaluate how inherent surface- and/or volume-type flaws affect component reliability. CARES utilizes the Batdorf model and the two-parameter Weibull cumulative distribution function to describe the effects of multiaxial stress states on material strength. The principle of independent action (PIA) and the Weibull normal stress averaging models are also included. Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities are estimated from four-point bend bar or uniform uniaxial tensile specimen fracture strength data. Parameter estimation can be performed for single or multiple failure modes by using a least-squares analysis or a maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests, 90 percent confidence intervals on the Weibull parameters, and Kanofsky-Srinivasan 90 percent confidence band values are also provided. Examples are provided to illustrate the various features of CARES.
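A minimal example of the maximum-likelihood parameter-estimation step, using synthetic specimen strengths rather than real four-point bend data, might look as follows; CARES additionally supports least-squares fitting, multiple failure modes, goodness-of-fit tests, and confidence bounds not shown here.

```python
from scipy import stats

# synthetic specimen strengths (MPa) drawn from a known Weibull distribution
strengths = stats.weibull_min.rvs(c=12.0, scale=500.0, size=30, random_state=2)

# two-parameter fit: fix the location at zero, recover modulus and characteristic strength
m_hat, loc, sigma_theta = stats.weibull_min.fit(strengths, floc=0.0)
print(f"Weibull modulus m ~ {m_hat:.1f}, characteristic strength ~ {sigma_theta:.0f} MPa")
```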
Wind Turbine Failures - Tackling current Problems in Failure Data Analysis
NASA Astrophysics Data System (ADS)
Reder, M. D.; Gonzalez, E.; Melero, J. J.
2016-09-01
The wind industry has been growing significantly over the past decades, resulting in a remarkable increase in installed wind power capacity. Turbine technologies are rapidly evolving in terms of complexity and size, and there is an urgent need for cost-effective operation and maintenance (O&M) strategies. Unplanned downtime in particular represents one of the main cost drivers of a modern wind farm. Here, reliability and failure prediction models can enable operators to apply preventive O&M strategies rather than corrective actions. In order to develop these models, the failure rates and downtimes of wind turbine (WT) components have to be understood profoundly. This paper is focused on tackling three of the main issues related to WT failure analyses: the non-uniform treatment of data, the scarcity of available failure analyses, and the lack of investigation into alternative data sources. For this, a modernised form of an existing WT taxonomy is introduced. Additionally, an extensive analysis of historical failure and downtime data of more than 4300 turbines is presented. Finally, the possibility of countering the lack of available failure data by complementing historical databases with Supervisory Control and Data Acquisition (SCADA) alarms is evaluated.
Market reform and universal coverage: avoid market failure.
Enthoven, A
1993-02-01
Determining the marketing mix for hospitals, especially those in transition, will require critical analysis to guard against market failure. Managed competition requires careful planning and awareness of pricing components in a free-market situation. Alain Enthoven, writing for the Jackson Hole Group, proposes establishment of a new national system of sponsor organizations--Health Insurance Purchasing Cooperatives--to function as a collective purchasing agent on behalf of small employers and individuals.
Compound estimation procedures in reliability
NASA Technical Reports Server (NTRS)
Barnes, Ron
1990-01-01
At NASA, components and subsystems of components in the Space Shuttle and Space Station generally go through a number of redesign stages. While data on failures for various design stages are sometimes available, the classical procedures for evaluating reliability only utilize the failure data on the present design stage of the component or subsystem. Often, few or no failures have been recorded on the present design stage. Previously, Bayesian estimators for the reliability of a single component, conditioned on the failure data for the present design, were developed. These new estimators permit NASA to evaluate the reliability even when few or no failures have been recorded; point estimates for the latter case were not possible with the classical procedures. Since different design stages of a component (or subsystem) generally have a good deal in common, the development of new statistical procedures for evaluating reliability that consider the entire failure record for all design stages has great intuitive appeal. A typical subsystem consists of a number of different components, and each component has evolved through a number of redesign stages. The present investigations considered compound estimation procedures and related models. Such models permit the statistical consideration of all design stages of each component and thus incorporate all the available failure data to obtain estimates for the reliability of the present version of the component (or subsystem). A number of models were considered to estimate the reliability of a component conditioned on its total failure history from two design stages. It was determined that reliability estimators for the present design stage, conditioned on the complete failure history for two design stages, have lower risk than the corresponding estimators conditioned only on the most recent design failure data. Several models were explored, and preliminary models involving the bivariate Poisson distribution and the Consael process (a bivariate Poisson process) were developed. Possible shortcomings of the models are noted. An example is given to illustrate the procedures. These investigations are ongoing, with the aim of developing estimators that extend to components (and subsystems) with three or more design stages.
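As a toy illustration of pooling failure data across design stages, the sketch below performs a conjugate gamma-Poisson update of a constant failure rate, down-weighting the earlier stage. This is only a stand-in for the compound and bivariate Poisson models the report actually investigates, and every number is hypothetical.

```python
import math

alpha, beta = 2.0, 400.0        # prior: roughly 2 failures expected per 400 test hours

w = 0.5                         # weight on the earlier, only partially relevant design stage
alpha += w * 3                  # earlier stage: 3 failures ...
beta += w * 600.0               # ... in 600 test hours

alpha += 0                      # present stage: no failures ...
beta += 250.0                   # ... in 250 test hours

rate = alpha / beta             # posterior mean failure rate (per hour)
print(f"posterior mean failure rate: {rate:.4f} per hour")
print(f"point-estimate reliability over a 50 h mission: {math.exp(-rate * 50):.3f}")
```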
Low-thrust mission risk analysis, with application to a 1980 rendezvous with the comet Encke
NASA Technical Reports Server (NTRS)
Yen, C. L.; Smith, D. B.
1973-01-01
A computerized failure process simulation procedure is used to evaluate the risk in a solar electric space mission. The procedure uses currently available thrust-subsystem reliability data and performs approximate simulations of the thrust subsystem burn operation, the system failure processes, and the retargeting operations. The method is applied to assess the risks in carrying out a 1980 rendezvous mission to the comet Encke. Analysis of the results and evaluation of the effects of various risk factors on the mission show that system component failure rates are the limiting factors in attaining high mission reliability. It is also shown that a well-designed trajectory and system operation mode can be used effectively to partially compensate for unreliable thruster performance.
Independent Orbiter Assessment (IOA): Analysis of the elevon subsystem
NASA Technical Reports Server (NTRS)
Wilson, R. E.; Riccio, J. R.
1986-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results for the Orbiter Elevon system hardware. The elevon actuators are located at the trailing edge of the wing surface. The proper function of the elevons is essential during the dynamic flight phases of ascent and entry. In the ascent phase of flight, the elevons are used for relieving high wing loads. For entry, the elevons are used to pitch and roll the vehicle. Specifically, the elevon system hardware comprises the following components: flow cutoff valve; switching valve; electro-hydraulic (EH) servoactuator; secondary delta pressure transducer; bypass valve; power valve; power valve check valve; primary actuator; primary delta pressure transducer; and primary actuator position transducer. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 25 failure modes analyzed, 18 were determined to be PCIs.
NASA Technical Reports Server (NTRS)
Arbet, J. D.; Duffy, R. E.; Barickman, K.; Saiidi, M. J.
1988-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Life Support and Airlock Support Systems (LSS and ALSS) hardware, generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the NASA FMEA/CIL baseline with proposed Post 51-L updates included. The discrepancies were flagged for potential future resolution. This report documents the results of that comparison for the Orbiter LSS and ALSS hardware. The IOA product for the LSS and ALSS analysis consisted of 511 failure mode worksheets that resulted in 140 potential critical items. Comparison was made to the NASA baseline which consisted of 456 FMEAs and 101 CIL items. The IOA analysis identified 39 failure modes, 6 of which were classified as CIL items, for components not covered by the NASA FMEAs. It was recommended that these failure modes be added to the NASA FMEA baseline. The overall assessment produced agreement on all but 301 FMEAs which caused differences in 111 CIL items.
NASA Astrophysics Data System (ADS)
Ahn, Junkeon; Noh, Yeelyong; Park, Sung Ho; Choi, Byung Il; Chang, Daejun
2017-10-01
This study proposes a fuzzy-based FMEA (failure mode and effect analysis) for a hybrid molten carbonate fuel cell and gas turbine system for liquefied hydrogen tankers. An FMEA-based regulatory framework is adopted to analyze the non-conventional propulsion system and to understand the risk picture of the system. Since the participants of the FMEA rely on their subjective and qualitative experiences, the conventional FMEA used for identifying failures that affect system performance inevitably involves inherent uncertainties. A fuzzy-based FMEA is introduced to express such uncertainties appropriately and to provide flexible access to a risk picture for a new system using fuzzy modeling. The hybrid system has 35 components and 70 potential failure modes. Significant failure modes occur in the fuel cell stack and rotary machine. The fuzzy risk priority number is used to validate the crisp risk priority number in the FMEA.
An overview of the mathematical and statistical analysis component of RICIS
NASA Technical Reports Server (NTRS)
Hallum, Cecil R.
1987-01-01
Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.
Socket position determines hip resurfacing 10-year survivorship.
Amstutz, Harlan C; Le Duff, Michel J; Johnson, Alicia J
2012-11-01
Modern metal-on-metal hip resurfacing arthroplasty designs have been used for over a decade. Risk factors for short-term failure include small component size, large femoral head defects, low body mass index, older age, high level of sporting activity, and component design, and it is established there is a surgeon learning curve. Owing to failures with early surgical techniques, we developed a second-generation technique to address those failures. However, it is unclear whether the techniques affected the long-term risk factors. We (1) determined survivorship for hips implanted with the second-generation cementing technique; (2) identified the risk factors for failure in these patients; and (3) determined the effect of the dominant risk factors on the observed modes of failure. We retrospectively reviewed the first 200 hips (178 patients) implanted using our second-generation surgical technique, which consisted of improvements in cleaning and drying the femoral head before and during cement application. There were 129 men and 49 women. Component orientation and contact patch to rim distance were measured. We recorded the following modes of failure: femoral neck fracture, femoral component loosening, acetabular component loosening, wear, dislocation, and sepsis. The minimum followup was 25 months (mean, 106.5 months; range, 25-138 months). Twelve hips were revised. Kaplan-Meier survivorship was 98.0% at 5 years and 94.3% at 10 years. The only variable associated with revision was acetabular component position. Contact patch to rim distance was lower in hips that dislocated, were revised for wear, or were revised for acetabular loosening. The dominant modes of failure were related to component wear or acetabular component loosening. Acetabular component orientation, a factor within the surgeon's control, determines the long-term success of our current hip resurfacing techniques. Current techniques have changed the modes of failure from aseptic femoral failure to wear or loosening of the acetabular component. Level III, prognostic study. See Guidelines for Authors for a complete description of levels of evidence.
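For readers unfamiliar with the survivorship figures quoted above, the sketch below computes a Kaplan-Meier product-limit estimate from a tiny made-up set of follow-up times and revision events; it is not the study's dataset.

```python
# (follow-up in months, event flag): event = 1 means revision, 0 means censored
data = [(25, 0), (40, 1), (60, 0), (72, 1), (90, 0), (106, 0), (120, 1), (138, 0)]

times = sorted({t for t, e in data if e == 1})   # distinct event times
survival = 1.0
for t in times:
    at_risk = sum(1 for ti, _ in data if ti >= t)            # still followed at time t
    events = sum(1 for ti, e in data if ti == t and e == 1)  # revisions at time t
    survival *= (1.0 - events / at_risk)                     # product-limit update
    print(f"t = {t:3d} months: S(t) = {survival:.3f}")
```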
Reliability analysis in interdependent smart grid systems
NASA Astrophysics Data System (ADS)
Peng, Hao; Kan, Zhe; Zhao, Dandan; Han, Jianmin; Lu, Jianfeng; Hu, Zhaolong
2018-06-01
Complex network theory is a useful way to study many real complex systems. In this paper, a reliability analysis model based on complex network theory is introduced for interdependent smart grid systems. We focus on understanding the structure of smart grid systems, studying the underlying network model, their interactions and relationships, and how cascading failures occur in interdependent smart grid systems. We propose a practical model for interdependent smart grid systems using complex network theory. Based on percolation theory, we also study the effect of cascading failures and present a detailed mathematical analysis of failure propagation in such systems. We analyze the reliability of our proposed model under random attacks or failures by calculating the size of the giant functioning components in the interdependent smart grid systems. Our simulation results also show that there exists a threshold for the proportion of faulty nodes, beyond which the smart grid systems collapse. We also determine the critical values for different system parameters. In this way, the reliability analysis model based on complex network theory can be effectively utilized for anti-attack and protection purposes in interdependent smart grid systems.
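The percolation-style calculation described above can be sketched on two coupled random graphs: remove a fraction of nodes, then iterate the condition that a node survives only if it belongs to the giant component of its own network and its dependent partner also survives. The graph model, one-to-one dependency, and parameters below are illustrative assumptions, not the paper's exact formulation.

# Rough percolation-style sketch of cascading failures in two interdependent
# networks (e.g., power grid and communication net); requires networkx.
import random
import networkx as nx

def giant(G, alive):
    """Largest connected component of G restricted to the alive nodes."""
    H = G.subgraph(alive)
    if H.number_of_nodes() == 0:
        return set()
    return max(nx.connected_components(H), key=len)

def cascade(n=1000, k=4, p=0.6, seed=1):
    """Randomly keep a fraction p of nodes, then iterate the mutual
    giant-component condition until the two networks stabilize."""
    random.seed(seed)
    A = nx.erdos_renyi_graph(n, k / n, seed=seed)
    B = nx.erdos_renyi_graph(n, k / n, seed=seed + 1)
    alive = {i for i in range(n) if random.random() < p}
    while True:
        alive_new = giant(A, alive) & giant(B, alive)
        if alive_new == alive:
            return len(alive) / n
        alive = alive_new

for p in (0.3, 0.5, 0.7, 0.9):
    print(f"p={p:.1f}  surviving fraction ~ {cascade(p=p):.3f}")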
NASA Astrophysics Data System (ADS)
Zuo, Ye; Sun, Guangjun; Li, Hongjing
2018-01-01
Under the action of near-fault ground motions, curved bridges are prone to pounding, local damage of bridge components, and even unseating. A multi-scale fine finite element model of a typical three-span curved bridge is established by considering the elastic-plastic behavior of the piers and the pounding effect of adjacent girders. The nonlinear time-history method is used to study the seismic response of the curved bridge equipped with an unseating failure control system under near-fault ground motion. An in-depth analysis is carried out to evaluate the control effect of the proposed unseating failure control system. The results indicate that under near-fault ground motion the seismic response of the curved bridge is strong. The unseating failure control system performs effectively in reducing the pounding force of the adjacent girders and the probability of deck unseating.
The Importance of Engine External's Health
NASA Technical Reports Server (NTRS)
Stoner, Barry L.
2006-01-01
Engine external components include all the fluid-carrying, electron-carrying, and support devices that are needed to operate the propulsion system. These components are varied and include: pumps, valves, actuators, solenoids, sensors, switches, heat exchangers, electrical generators, electrical harnesses, tubes, ducts, clamps, and brackets. The failure of any component to perform its intended function will result in a maintenance action, a dispatch delay, or an engine in-flight shutdown. The life of each component, in addition to its basic functional design, is closely tied to its thermal and dynamic environment. Therefore, to reach a mature design life, the component's thermal and dynamic environment must be understood and controlled, which can only be accomplished by attention to design analysis and testing. The purpose of this paper is to review analysis and test techniques toward achieving good component health.
Interactive Reliability Model for Whisker-toughened Ceramics
NASA Technical Reports Server (NTRS)
Palko, Joseph L.
1993-01-01
Wider use of ceramic matrix composites (CMC) will require the development of advanced structural analysis technologies. The use of an interactive model to predict the time-independent reliability of a component subjected to multiaxial loads is discussed. The deterministic, three-parameter Willam-Warnke failure criterion serves as the theoretical basis for the reliability model. The strength parameters defining the model are assumed to be random variables, thereby transforming the deterministic failure criterion into a probabilistic criterion. The ability of the model to account for multiaxial stress states with the same unified theory is an improvement over existing models. The new model was coupled with a public-domain finite element program through an integrated design program. This allows a design engineer to predict the probability of failure of a component. A simple structural problem is analyzed using the new model, and the results are compared to existing models.
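The core idea above, turning a deterministic failure criterion into a probabilistic one by treating its strength parameters as random variables, can be sketched with a toy Monte Carlo loop. A simple maximum-stress surface stands in for the three-parameter Willam-Warnke criterion, whose form is not reproduced here; all strengths and stresses are assumed values.

# Toy Monte Carlo illustration: a deterministic failure check becomes a
# probability of failure once the strength parameters are random variables.
import random

def fails(principal, s_t, s_c):
    """Failure if any principal stress exceeds the (sampled) tensile strength
    or falls below the (sampled) compressive strength."""
    return any(s > s_t or s < -s_c for s in principal)

def prob_of_failure(principal, n=200_000, seed=0):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        s_t = rng.weibullvariate(400.0, 10.0)   # tensile strength, MPa (assumed)
        s_c = rng.weibullvariate(1200.0, 12.0)  # compressive strength, MPa (assumed)
        failures += fails(principal, s_t, s_c)
    return failures / n

# hypothetical multiaxial stress state at one finite element (MPa)
print(prob_of_failure((320.0, 150.0, -500.0)))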
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Guowei; Sun, Qingping; Zeng, Danielle
In the current work, unidirectional (UD) carbon fiber composite hat-section components with two different layups are studied under dynamic 3-point bending loading. The experiments are performed at various impact velocities, and the effects of impactor velocity and layup on acceleration histories are compared. A macro model is established with LS-Dyna for more detailed study. The simulation results show that delamination plays an important role during the dynamic 3-point bending test. Based on the analysis with a high-speed camera, the sidewall of the hat-section shows significant buckling rather than failure. Without considering the delamination, the current material model cannot capture the post-failure phenomenon correctly. The sidewall delamination is modeled by assuming a larger failure strain together with SLIM parameters, and the simulation results for different impact velocities and layups match the experimental results reasonably well.
The Inclusion of In-Plane Stresses in Delamination Criteria
NASA Technical Reports Server (NTRS)
Fenske, Matthew T.
1999-01-01
A study of delamination failure was conducted with emphasis on delamination criteria. Evidence is presented which supports the inclusion of the in-plane stresses in addition to the interlaminar stress terms in delamination criteria. The delamination is characterized as the failure of a resin-rich region between ply sets. The entire six-component stress state in this resin layer is calculated through a finite element analysis, averaged over a dimension of 1.75 ply thicknesses, and used in a Modified von Mises Delamination Criterion (MVMDC). This criterion builds on previous criteria by including all six stress components in the interply resin layer. The MVMDC shows good correlation to experimental data. The results show that the treatment of delamination as the failure of a finite interply resin layer is a valid method and that the MVMDC, considering the full stress state, accurately indicates delamination for different laminate families.
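To illustrate how all six averaged resin-layer stress components can enter a single scalar failure index, the sketch below uses a plain von Mises equivalent stress against an assumed resin allowable. The exact modification used in the MVMDC is not given in the abstract and is not reproduced here; the numbers are illustrative.

# Hedged sketch: a von Mises-type check on the full six-component stress
# state in the interply resin layer, compared to an assumed allowable.
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    return math.sqrt(0.5 * ((sx - sy)**2 + (sy - sz)**2 + (sz - sx)**2)
                     + 3.0 * (txy**2 + tyz**2 + tzx**2))

# stresses averaged over ~1.75 ply thicknesses in the resin layer (MPa, assumed)
sigma = dict(sx=40.0, sy=25.0, sz=55.0, txy=10.0, tyz=30.0, tzx=5.0)
allowable = 90.0  # assumed resin allowable, MPa
index = von_mises(**sigma) / allowable
print(f"failure index = {index:.2f} -> {'delamination' if index >= 1 else 'no delamination'}")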
Failure investigations of failed valve plug SS410 steel due to cracking
NASA Astrophysics Data System (ADS)
Kalyankar, V. D.; Deshmukh, D. D.
2017-12-01
Premature and sudden in-service failure of a valve plug, used in a power plant, due to crack formation has been investigated. The plug was tempered and heat treated; the crack originated at the centre, developed along the axis, and propagated radially towards the outer surface of the plug. The expected life of the component is 10-15 years, yet it failed just after installation, within 3 months of service. No corrosion products were observed on the crack interface or on the failed surface; hence, corrosion is ruled out as a cause of failure. The plug of the level separator control valve is welded to the stem by plasma-transferred arc welding, and as no crack was observed at the welding zone, failure due to welding residual stresses is also ruled out. The failed component exposes the surface of a crack interface that originated at the centre and propagated radially. Microstructural observation, hardness testing, and visual examination were carried out on specimens prepared from the failed section and the base portion. The microstructure from the cracked interface showed severe carbide formation along the grain boundaries. From the microstructural analysis of the failed sample, it is observed that acicular carbides formed along the grain boundaries due to improper tempering heat treatment.
Kumar, Mohit; Yadav, Shiv Prasad
2012-03-01
This paper addresses fuzzy system reliability analysis using different types of intuitionistic fuzzy numbers. Until now, in the literature, to analyze fuzzy system reliability, it has been assumed that the failure rates of all components of a system follow the same type of fuzzy set or intuitionistic fuzzy set. However, in practical problems, such a situation rarely occurs. Therefore, in the present paper, a new algorithm has been introduced to construct the membership function and non-membership function of the fuzzy reliability of a system whose components follow different types of intuitionistic fuzzy failure rates. Functions of intuitionistic fuzzy numbers are calculated to construct the membership function and non-membership function of fuzzy reliability via non-linear programming techniques. Using the proposed algorithm, membership functions and non-membership functions of the fuzzy reliability of a series system and a parallel system are constructed. Our study generalizes various works in the literature. Numerical examples are given to illustrate the proposed algorithm. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
User's guide to the Reliability Estimation System Testbed (REST)
NASA Technical Reports Server (NTRS)
Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam
1992-01-01
The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.
Failure detection and recovery in the assembly/contingency subsystem
NASA Technical Reports Server (NTRS)
Gantenbein, Rex E.
1993-01-01
The Assembly/Contingency Subsystem (ACS) is the primary communications link on board the Space Station. Any failure in a component of this system or in the external devices through which it communicates with ground-based systems will isolate the Station. The ACS software design includes a failure management capability (ACFM) that provides protocols for failure detection, isolation, and recovery (FDIR). The ACFM design requirements as outlined in the current ACS software requirements specification document are reviewed. The activities carried out in this review include: (1) an informal, but thorough, end-to-end failure mode and effects analysis of the proposed software architecture for the ACFM; and (2) a prototype of the ACFM software, implemented as a C program under the UNIX operating system. The purpose of this review is to evaluate the FDIR protocols specified in the ACS design and the specifications themselves in light of their use in implementing the ACFM. The basis of failure detection in the ACFM is the loss of signal between the ground and the Station, which (under the appropriate circumstances) will initiate recovery to restore communications. This recovery involves the reconfiguration of the ACS to either a backup set of components or to a degraded communications mode. The initiation of recovery depends largely on the criticality of the failure mode, which is defined by tables in the ACFM and can be modified to provide a measure of flexibility in recovery procedures.
NASA Technical Reports Server (NTRS)
Aruljothi, Arunvenkatesh
2016-01-01
The Space Exploration Division of the Safety and Mission Assurances Directorate is responsible for reducing the risk to Human Space Flight Programs by providing system safety, reliability, and risk analysis. The Risk & Reliability Analysis branch plays a part in this by utilizing Probabilistic Risk Assessment (PRA) and Reliability and Maintainability (R&M) tools to identify possible types of failure and effective solutions. A continuous effort of this branch is MaRS, or Mass and Reliability System, a tool that was the focus of this internship. Future long-duration space missions will have to find a balance between the mass and reliability of their spare parts. They will be unable to take spares of everything and will have to determine what is most likely to require maintenance and spares. Currently there is no database that combines mass and reliability data of low-level space-grade components. MaRS aims to be the first database to do this. The data in MaRS will be based on the hardware flown on the International Space Station (ISS). The components on the ISS have a long history and are well documented, making them the perfect source. Currently, MaRS is a functioning Excel workbook database; the backend is complete and only requires optimization. MaRS has been populated with all the assemblies and their components that are used on the ISS; the failures of these components are updated regularly. This project was a continuation of the efforts of previous intern groups. Once complete, R&M engineers working on future space flight missions will be able to quickly access failure and mass data on assemblies and components, allowing them to make important decisions and tradeoffs.
Independent Orbiter Assessment (IOA): Assessment of the auxiliary power unit
NASA Technical Reports Server (NTRS)
Barnes, J. E.
1988-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Auxiliary Power Unit (APU) hardware, generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the NASA FMEA/CIL baseline with proposed Post 51-L updates included. A resolution of each discrepancy from the comparison is provided through additional analysis as required. This report documents the results of that comparison for the Orbiter APU hardware. The IOA product for the APU analysis, covering both APU hardware and APU electrical components, consisted of 344 failure mode worksheets that resulted in 178 potential critical items being identified. A comparison was made of the IOA product to the NASA APU hardware FMEA/CIL baseline which consisted of 184 FMEAs and 57 CIL items. The comparison identified 72 discrepancies.
Risk and Vulnerability Analysis of Satellites Due to MM/SD with PIRAT
NASA Astrophysics Data System (ADS)
Kempf, Scott; Schäfer, Frank; Rudolph, Martin; Welty, Nathan; Donath, Therese; Destefanis, Roberto; Grassi, Lilith; Janovsky, Rolf; Evans, Leanne; Winterboer, Arne
2013-08-01
Until recently, the state-of-the-art assessment of the threat posed to spacecraft by micrometeoroids and space debris was limited to the application of ballistic limit equations to the outer hull of a spacecraft. A probability of no penetration (PNP) is acceptable for assessing the risk and vulnerability of manned space missions; however, for unmanned missions, in which penetrations of the spacecraft exterior do not necessarily constitute satellite or mission failure, such values are overly conservative. The software tool PIRAT (Particle Impact Risk and Vulnerability Analysis Tool) has been developed based on the Schäfer-Ryan-Lambert (SRL) triple-wall ballistic limit equation (BLE), applicable to various satellite components. As a result, it has become possible to assess the individual failure rates of satellite components. This paper demonstrates the modeling of an example satellite, the performance of a PIRAT analysis, and the potential for subsequent design optimizations with respect to micrometeoroid and space debris (MM/SD) impact risk.
The application of probabilistic design theory to high temperature low cycle fatigue
NASA Technical Reports Server (NTRS)
Wirsching, P. H.
1981-01-01
Metal fatigue under stress and thermal cycling is a principal mode of failure in gas turbine engine hot section components such as turbine blades and disks and combustor liners. Designing for fatigue is subject to considerable uncertainty, e.g., scatter in cycles to failure, available fatigue test data and operating environment data, uncertainties in the models used to predict stresses, etc. Methods of analyzing fatigue test data for probabilistic design purposes are summarized. The general strain life as well as homo- and hetero-scedastic models are considered. Modern probabilistic design theory is reviewed and examples are presented which illustrate application to reliability analysis of gas turbine engine components.
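A simple way to see how fatigue scatter enters a probabilistic design calculation is to treat the strain-life curve parameters as medians and apply lognormal scatter to the resulting life. The sketch below is a toy Coffin-Manson/Basquin example under assumed material properties and scatter, not the paper's method or data.

# Illustrative probabilistic strain-life calculation: median life from an
# inverted strain-life curve, lognormal scatter on cycles, Monte Carlo
# probability of failing before a design life. All values are assumed.
import math, random

E, sf, b, ef, c = 200e3, 1200.0, -0.09, 0.6, -0.6   # assumed properties (MPa, -)

def strain_amplitude(N):
    return (sf / E) * (2 * N) ** b + ef * (2 * N) ** c

def median_life(eps_a, lo=1.0, hi=1e9):
    """Invert the strain-life curve by bisection in log space."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if strain_amplitude(mid) > eps_a:
            lo = mid
        else:
            hi = mid
    return mid

def p_failure(eps_a, design_life, cov=0.5, n=100_000, seed=0):
    rng = random.Random(seed)
    Nm = median_life(eps_a)
    sigma = math.sqrt(math.log(1 + cov**2))     # lognormal scatter on cycles
    return sum(Nm * rng.lognormvariate(0.0, sigma) < design_life
               for _ in range(n)) / n

print(p_failure(eps_a=0.004, design_life=2_000))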
Swartz, Erik E; Decoster, Laura C; Norkus, Susan A; Cappaert, Thomas A
2007-01-01
Context: Most research on face mask removal has been performed on unused equipment. Objective: To identify and compare factors that influence the condition of helmet components and their relationship to face mask removal. Design: A cross-sectional, retrospective study. Setting: Five athletic equipment reconditioning/recertification facilities. Participants: 2584 helmets from 46 high school football teams representing 5 geographic regions. Intervention(s): Helmet characteristics (brand, model, hardware components) were recorded. Helmets were mounted and face mask removal was attempted using a cordless screwdriver. The 2004 season profiles and weather histories were obtained for each high school. Main Outcome Measure(s): Success and failure (including reason) for removal of 4 screws from the face mask were noted. Failure rates among regions, teams, reconditioning year, and screw color (type) were compared. Weather histories were compared. We conducted a discriminant analysis to determine if weather variables, region, helmet brand and model, reconditioning year, and screw color could predict successful face mask removal. Metallurgic analysis of screw samples was performed. Results: All screws were successfully removed from 2165 (84%) helmets. At least 1 screw could not be removed from 419 (16%) helmets. Significant differences were found for mean screw failure per helmet among the 5 regions, with the Midwest having the lowest failure rate (0.08 ± 0.38) and the Southern (0.33 ± 0.72), the highest. Differences were found in screw failure rates among the 46 teams (F(1,45) = 9.4, P < .01). Helmets with the longest interval since last reconditioning (3 years) had the highest failure rate, 0.47 ± 0.93. Differences in success rates were found among 4 screw types (χ² = 647, P < .01), with silver screws having the lowest percentage of failures (3.4%). A discriminant analysis (Λ = 0.932, χ²(14, n = 2584) = 175.34, P < .001) revealed screw type to be the strongest predictor of successful removal. Conclusions: Helmets with stainless steel or nickel-plated carbon steel screws reconditioned in the previous year had the most favorable combination of factors for successful screw removal. T-nut spinning at the side screw locations was the most common reason and location for failure. PMID:17597938
Radl, Roman; Hungerford, Marc; Materna, Wilfried; Rehak, Peter; Windhager, Reinhard
2005-02-01
Several authors have found poorer outcome after hip replacement for osteonecrosis than after hip replacement for arthrosis. In a retrospective study we evaluated the performance of an uncemented femoral component in patients with osteonecrosis and arthrosis of the hip. 31 patients operated for osteonecrosis and 49 patients operated for osteoarthrosis were included. The median follow-up time was 6.1 (2-11) years for the patients with osteonecrosis, and 5.9 (4-8) for the arthrosis patients. Migration analysis performed by the Einzel-Bild-Roentgen Analysis (EBRA) technique revealed a median stem migration of 1.5 (range, -8.8 to 0) mm in the patients with osteonecrosis, but only 0.6 (range, -2.8 to 0.7) mm in the patients with arthrosis (p < 0.001). Survivorship analysis with stem revision as the endpoint for failure was 74% (95% CI: 55-94) in the osteonecrosis group and 98% (95% CI: 94-100) in the arthrosis group (p = 0.01). We suggest that the higher failure rate and stem migration of uncemented total hip replacement in the patients with osteonecrosis is a consequence of the disease. On the basis of these findings, we recommend close monitoring of the patients with osteonecrosis, which should include migration measurements.
Independent Orbiter Assessment (IOA): Analysis of the nose wheel steering subsystem
NASA Technical Reports Server (NTRS)
Mediavilla, Anthony Scott
1986-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Nose Wheel Steering (NWS) hardware are documented. The NWS hardware provides primary directional control for the Orbiter vehicle during landing rollout. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. The original NWS design was envisioned as a backup system to differential braking for directional control of the Orbiter during landing rollout. No real effort was made to design the NWS system as fail operational. The brakes have much redundancy built into their design but the poor brake/tire performance has forced the NSTS to upgrade NWS to the primary mode of directional control during rollout. As a result, a large percentage of the NWS system components have become Potential Critical Items (PCI).
Safety Guided Design of Crew Return Vehicle in Concept Design Phase Using STAMP/STPA
NASA Astrophysics Data System (ADS)
Nakao, H.; Katahira, M.; Miyamoto, Y.; Leveson, N.
2012-01-01
In the concept development and design phase of a new space system, such as a Crew Vehicle, designers tend to focus on how to implement new technology. Designers also consider the difficulty of using the new technology and trade off several system design candidates. Then they choose an optimal design from the candidates. Safety should be a key aspect driving optimal concept design. However, in past concept design activities, safety analyses such as FTA have not been used to drive the design, because such techniques focus on component failures, and component failures cannot be considered in the concept design phase. The solution to these problems is to apply a new hazard analysis technique, called STAMP/STPA. STAMP/STPA defines safety as a control problem rather than a failure problem and identifies hazardous scenarios and their causes. Defining control flow is essential in the concept design phase. Therefore, STAMP/STPA can be a useful tool to assess the safety of system candidates and to be part of the rationale for choosing a design as the baseline of the system. In this paper, we describe a case study of safety-guided concept design using STPA, the new hazard analysis technique, and model-based specification techniques on a Crew Return Vehicle design, and we evaluate the benefits of using STAMP/STPA in the concept development phase.
NASA Technical Reports Server (NTRS)
Lovejoy, Andrew E.; Jegley, Dawn C. (Technical Monitor)
2007-01-01
Structures often comprise smaller substructures that are connected to each other or attached to the ground by a set of finite connections. Under static loading one or more of these connections may exceed allowable limits and be deemed to fail. Of particular interest is the structural response when a connection is severed (failed) while the structure is under static load. A transient failure analysis procedure was developed by which it is possible to examine the dynamic effects that result from introducing a discrete failure while a structure is under static load. The failure is introduced by replacing a connection load history by a time-dependent load set that removes the connection load at the time of failure. The subsequent transient response is examined to determine the importance of the dynamic effects by comparing the structural response with the appropriate allowables. Additionally, this procedure utilizes a standard finite element transient analysis that is readily available in most commercial software, permitting the study of dynamic failures without the need to purchase software specifically for this purpose. The procedure is developed and explained, demonstrated on a simple cantilever box example, and finally demonstrated on a real-world example, the American Airlines Flight 587 (AA587) vertical tail plane (VTP).
Anomaly Monitoring Method for Key Components of Satellite
Fan, Linjun; Xiao, Weidong; Tang, Jun
2014-01-01
This paper presents a fault diagnosis method for key components of satellites, called the Anomaly Monitoring Method (AMM), which is made up of state estimation based on Multivariate State Estimation Techniques (MSET) and anomaly detection based on the Sequential Probability Ratio Test (SPRT). On the basis of failure analysis of lithium-ion batteries (LIBs), we divided the failures of LIBs into internal failure, external failure, and thermal runaway, and selected the electrolyte resistance (Re) and the charge transfer resistance (Rct) as the key parameters for state estimation. Then, from the actual in-orbit telemetry data of the key parameters of LIBs, we obtained the actual residual value (RX) and the healthy residual value (RL) of LIBs based on the state estimation of MSET, and through these residual values (RX and RL) we detected anomaly states based on the anomaly detection of SPRT. Lastly, we conducted an example of AMM for LIBs, and, according to the results, we validated the feasibility and effectiveness of AMM by comparing it with the results of the threshold detection method (TDM). PMID:24587703
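The SPRT step above can be sketched as a running log-likelihood ratio on residuals with Wald's thresholds. The sketch assumes Gaussian residuals with known variance and an illustrative mean-shift alternative; the residual stream is simulated rather than taken from telemetry.

# Minimal sequential probability ratio test (SPRT) on residuals.
import math, random

def sprt(residuals, sigma=1.0, M=2.0, alpha=0.01, beta=0.01):
    """Return 'anomaly', 'healthy' or 'undecided' for a residual stream,
    testing H0: mean 0 against H1: mean M*sigma."""
    A = math.log((1 - beta) / alpha)     # upper (accept H1) threshold
    B = math.log(beta / (1 - alpha))     # lower (accept H0) threshold
    llr = 0.0
    for r in residuals:
        llr += (M * sigma * r - 0.5 * (M * sigma) ** 2) / sigma ** 2
        if llr >= A:
            return "anomaly"
        if llr <= B:
            return "healthy"
    return "undecided"

rng = random.Random(0)
healthy = [rng.gauss(0.0, 1.0) for _ in range(50)]
drifted = [rng.gauss(2.5, 1.0) for _ in range(50)]   # e.g. a rising resistance residual
print(sprt(healthy), sprt(drifted))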
NASA Technical Reports Server (NTRS)
1997-01-01
The NASA Lewis Research Center Structures Division is an international leader and pioneer in developing new structural analysis, life prediction, and failure analysis related to rotating machinery and more specifically to hot section components in air-breathing aircraft engines and spacecraft propulsion systems. The research consists of both deterministic and probabilistic methodology. Studies include, but are not limited to, high-cycle and low-cycle fatigue as well as material creep. Studies of structural failure are at both the micro- and macrolevels. Nondestructive evaluation methods related to structural reliability are developed, applied, and evaluated. Materials from which structural components are made, studied, and tested are monolithics and metal-matrix, polymer-matrix, and ceramic-matrix composites. Aeroelastic models are developed and used to determine the cyclic loading and life of fan and turbine blades. Life models are developed and tested for bearings, seals, and other mechanical components, such as magnetic suspensions. Results of these studies are published in NASA technical papers and reference publications as well as in technical society journal articles. The results of the work of the Structures Division and the bibliography of its publications for calendar year 1995 are presented.
Structures Division 1994 Annual Report
NASA Technical Reports Server (NTRS)
1996-01-01
The NASA Lewis Research Center Structures Division is an international leader and pioneer in developing new structural analysis, life prediction, and failure analysis related to rotating machinery and more specifically to hot section components in air-breathing aircraft engines and spacecraft propulsion systems. The research consists of both deterministic and probabilistic methodology. Studies include, but are not limited to, high-cycle and low-cycle fatigue as well as material creep. Studies of structural failure are at both the micro- and macrolevels. Nondestructive evaluation methods related to structural reliability are developed, applied, and evaluated. Materials from which structural components are made, studied, and tested are monolithics and metal-matrix, polymer-matrix, and ceramic-matrix composites. Aeroelastic models are developed and used to determine the cyclic loading and life of fan and turbine blades. Life models are developed and tested for bearings, seals, and other mechanical components, such as magnetic suspensions. Results of these studies are published in NASA technical papers and reference publications as well as in technical society journal articles. The results of the work of the Structures Division and the bibliography of its publications for calendar year 1994 are presented.
Parametric Testing of Launch Vehicle FDDR Models
NASA Technical Reports Server (NTRS)
Schumann, Johann; Bajwa, Anupa; Berg, Peter; Thirumalainambi, Rajkumar
2011-01-01
For the safe operation of a complex system like a (manned) launch vehicle, real-time information about the state of the system and potential faults is extremely important. The on-board FDDR (Failure Detection, Diagnostics, and Response) system is a software system to detect and identify failures, provide real-time diagnostics, and initiate fault recovery and mitigation. The ERIS (Evaluation of Rocket Integrated Subsystems) failure simulation is a unified Matlab/Simulink model of the Ares I Launch Vehicle with modular, hierarchical subsystems and components. With this model, the nominal flight performance characteristics can be studied. Additionally, failures can be injected to see their effects on vehicle state and on vehicle behavior. A comprehensive test and analysis of such a complicated model is virtually impossible. In this paper, we will describe how parametric testing (PT) can be used to support testing and analysis of the ERIS failure simulation. PT uses a combination of Monte Carlo techniques with n-factor combinatorial exploration to generate a small, yet comprehensive set of parameters for the test runs. For the analysis of the high-dimensional simulation data, we are using multivariate clustering to automatically find structure in this high-dimensional data space. Our tools can generate detailed HTML reports that facilitate the analysis.
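The combination of Monte Carlo sampling with n-factor combinatorial coverage can be sketched as below: discrete failure-injection factors are covered with a greedy all-pairs (2-factor) set, and each resulting case then receives randomly drawn continuous dispersions. The factor names, ranges, and the greedy heuristic are illustrative assumptions, not the authors' tooling.

# Sketch of parametric test-case generation: greedy pairwise coverage of
# discrete fault factors plus Monte Carlo draws for continuous dispersions.
import itertools, random

factors = {                       # hypothetical discrete fault-injection factors
    "engine_fault": ["none", "loss_of_thrust", "stuck_valve"],
    "sensor_fault": ["none", "bias", "dropout"],
    "phase":        ["liftoff", "max_q", "staging"],
}

def all_pairs(factors):
    """Greedy covering set: each pair of factor levels appears in >= 1 case."""
    names = list(factors)
    uncovered = {((a, x), (b, y))
                 for a, b in itertools.combinations(names, 2)
                 for x in factors[a] for y in factors[b]}
    cases = []
    for combo in itertools.product(*factors.values()):
        case = dict(zip(names, combo))
        pairs = {((a, case[a]), (b, case[b]))
                 for a, b in itertools.combinations(names, 2)}
        if pairs & uncovered:
            cases.append(case)
            uncovered -= pairs
        if not uncovered:
            break
    return cases

rng = random.Random(42)
for case in all_pairs(factors):
    case["wind_gust"] = rng.uniform(0.0, 30.0)    # continuous dispersion (assumed)
    case["mass_margin"] = rng.gauss(1.0, 0.05)
    print(case)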
Satellite vulnerability to space debris - an improved 3D risk assessment methodology
NASA Astrophysics Data System (ADS)
Grassi, Lilith; Tiboldo, Francesca; Destefanis, Roberto; Donath, Thérèse; Winterboer, Arne; Evans, Leanne; Janovsky, Rolf; Kempf, Scott; Rudolph, Martin; Schäfer, Frank; Gelhaus, Johannes
2014-06-01
The work described in the present paper, performed as a part of the P2 project, presents an enhanced method to evaluate satellite vulnerability to micrometeoroids and orbital debris (MMOD), using the ESABASE2/Debris tool (developed under ESA contract). Starting from the estimation of induced failures on spacecraft (S/C) components and from the computation of lethal impacts (with an energy leading to the loss of the satellite), and considering the equipment redundancies and interactions between components, the debris-induced S/C functional impairment is assessed. The developed methodology, illustrated through its application to a case study satellite, includes the capability to estimate the number of failures on internal components, overcoming the limitations of current tools which do not allow propagating the debris cloud inside the S/C. The ballistic limit of internal equipment behind a sandwich panel structure is evaluated through the implementation of the Schäfer Ryan Lambert (SRL) Ballistic Limit Equation (BLE). The analysis conducted on the case study satellite shows the S/C vulnerability index to be in the range of about 4% over the complete mission, with a significant reduction with respect to the results typically obtained with the traditional analysis, which considers as a failure the structural penetration of the satellite structural panels. The methodology has then been applied to select design strategies (additional local shielding, relocation of components) to improve S/C protection with respect to MMOD. The results of the analyses conducted on the improved design show a reduction of the vulnerability index of about 18%.
Integrating FMEA in a Model-Driven Methodology
NASA Astrophysics Data System (ADS)
Scippacercola, Fabio; Pietrantuono, Roberto; Russo, Stefano; Esper, Alexandre; Silva, Nuno
2016-08-01
Failure Mode and Effects Analysis (FMEA) is a well-known technique for evaluating the effects of potential failures of components of a system. FMEA demands engineering methods and tools able to support the time-consuming tasks of the analyst. We propose to make FMEA part of the design of a critical system, by integration into a model-driven methodology. We show how to conduct the analysis of failure modes, propagation, and effects from SysML design models, by means of custom diagrams, which we name FMEA Diagrams. They offer an additional view of the system, tailored to FMEA goals. The enriched model can then be exploited to automatically generate the FMEA worksheet and to conduct qualitative and quantitative analyses. We present a case study from a real-world project.
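The worksheet-generation step can be illustrated with a toy model of components and failure modes that is flattened into RPN-ranked rows. In a real flow the entries would be extracted from the SysML/FMEA Diagram model rather than hard-coded; the data below is invented.

# Toy illustration of generating an FMEA worksheet from a component model.
import csv, sys
from dataclasses import dataclass

@dataclass
class FailureMode:
    component: str
    mode: str
    effect: str
    severity: int      # 1-10
    occurrence: int    # 1-10
    detection: int     # 1-10

    @property
    def rpn(self):
        return self.severity * self.occurrence * self.detection

model = [
    FailureMode("pressure sensor", "drift", "wrong control setpoint", 6, 4, 5),
    FailureMode("valve actuator", "stuck closed", "loss of flow", 8, 3, 4),
    FailureMode("power supply", "output ripple", "intermittent resets", 7, 2, 6),
]

writer = csv.writer(sys.stdout)
writer.writerow(["Component", "Failure mode", "Effect", "S", "O", "D", "RPN"])
for fm in sorted(model, key=lambda f: f.rpn, reverse=True):
    writer.writerow([fm.component, fm.mode, fm.effect,
                     fm.severity, fm.occurrence, fm.detection, fm.rpn])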
Krueger, Alexander P; Singh, Gurpal; Beil, Frank Timo; Feuerstein, Bernd; Ruether, Wolfgang; Lohmann, Christoph H
2014-05-01
Ceramic components in total knee arthroplasty (TKA) are evolving. We analyze the first case of BIOLOX delta ceramic femoral component fracture. A longitudinal midline fracture in the patellar groove was present, with an intact cement mantle and no bony defects. Fractographic analysis with laser scanning microscopy and white light interferometry showed no evidence of arrest lines, hackles, wake hackles, material flaws, fatigue or crack propagation. Analysis of periprosthetic tissues with Fourier-transform infrared (FT-IR) microscopy, contact radiography, histology, and subsequent digestion and high-speed centrifugation did not show ceramic debris. A macrophage-dominated response was present around polyethylene debris. We conclude that ceramic femoral component failure in this case was related to a traumatic event. Further research is needed to determine the suitability of ceramic components in TKA. Copyright © 2014 Elsevier Inc. All rights reserved.
Reliability and Maintainability Analysis for the Amine Swingbed Carbon Dioxide Removal System
NASA Technical Reports Server (NTRS)
Dunbar, Tyler
2016-01-01
I have performed a reliability & maintainability analysis for the Amine Swingbed payload system. The Amine Swingbed is a carbon dioxide removal technology that has gone through 2,400 hours of International Space Station on-orbit use between 2013 and 2016. While the Amine Swingbed is currently an experimental payload system, the Amine Swingbed may be converted to system hardware. If the Amine Swingbed becomes system hardware, it will supplement the Carbon Dioxide Removal Assembly (CDRA) as the primary CO2 removal technology on the International Space Station. NASA is also considering using the Amine Swingbed as the primary carbon dioxide removal technology for future extravehicular mobility units and for the Orion, which will be used for the Asteroid Redirect and Journey to Mars missions. The qualitative component of the reliability and maintainability analysis is a Failure Modes and Effects Analysis (FMEA). In the FMEA, I have investigated how individual components in the Amine Swingbed may fail, and what the worst case scenario is should a failure occur. The significant failure effects are the loss of ability to remove carbon dioxide, the formation of ammonia due to chemical degradation of the amine, and loss of atmosphere because the Amine Swingbed uses the vacuum of space to regenerate the Amine Swingbed. In the quantitative component of the reliability and maintainability analysis, I have assumed a constant failure rate for both electronic and nonelectronic parts. Using this data, I have created a Poisson distribution to predict the failure rate of the Amine Swingbed as a whole. I have determined a mean time to failure for the Amine Swingbed to be approximately 1,400 hours. The observed mean time to failure for the system is between 600 and 1,200 hours. This range includes initial testing of the Amine Swingbed, as well as software faults that are understood to be non-critical. If many of the commercial parts were switched to military-grade parts, the expected mean time to failure would be 2,300 hours. Both calculated mean times to failure for the Amine Swingbed use conservative failure rate models. The observed mean time to failure for CDRA is 2,500 hours. Working on this project and for NASA in general has helped me gain insight into current aeronautics missions, reliability engineering, circuit analysis, and different cultures. Prior to my internship, I did not have a lot of knowledge about the work being performed at NASA. As a chemical engineer, I had not really considered working for NASA as a career path. By engaging in interactions with civil servants, contractors, and other interns, I have learned a great deal about modern challenges that NASA is addressing. My work has helped me develop a knowledge base in safety and reliability that would be difficult to find elsewhere. Prior to this internship, I had not thought about reliability engineering. Now, I have gained a skillset in performing reliability analyses and understanding the inner workings of a large mechanical system. I have also gained experience in understanding how electrical systems work while I was analyzing the electrical components of the Amine Swingbed. I did not expect to be exposed to as many different cultures as I have while working at NASA. I am referring to cultures both within NASA and in the Houston area. NASA employs individuals with a broad range of backgrounds. It has been great to learn from individuals who have highly diverse experiences and outlooks on the world.
In the Houston area, I have come across individuals from different parts of the world. Interacting with such a high number of individuals with significantly different backgrounds has helped me to grow as a person in ways that I did not expect. My time at NASA has opened a window into the field of aeronautics. After earning a bachelor's degree in chemical engineering, I plan to go to graduate school for a PhD in engineering. Prior to coming to NASA, I was not aware of the graduate Pathways program. I intend to apply for the graduate Pathways program as positions are opened up. I would like to pursue future opportunities with NASA, especially as my engineering career progresses.
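The constant-failure-rate roll-up described in the abstract above amounts to summing component failure rates, taking the system MTTF as the reciprocal of the total rate, and using the Poisson/exponential model for the chance of a failure-free interval. The sketch below uses an invented parts list and rates, not the actual Amine Swingbed data.

# Back-of-the-envelope constant-failure-rate roll-up (illustrative numbers).
import math

# failures per million hours (assumed values)
rates_fpmh = {
    "blower motor": 120.0,
    "valve actuator": 85.0,
    "controller board": 250.0,
    "pressure sensor": 40.0,
    "heater": 60.0,
}

lam = sum(rates_fpmh.values()) / 1e6        # failures per hour, series assumption
mttf = 1.0 / lam
print(f"system MTTF ~ {mttf:,.0f} h")

t = 1000.0                                   # mission duration of interest, h
p_zero_failures = math.exp(-lam * t)         # Poisson P(k=0) over t hours
print(f"P(no failure in {t:.0f} h) ~ {p_zero_failures:.2f}")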
A probabilistic based failure model for components fabricated from anisotropic graphite
NASA Astrophysics Data System (ADS)
Xiao, Chengfeng
The nuclear moderator for high temperature nuclear reactors is fabricated from graphite. During reactor operations, graphite components are subjected to complex stress states arising from structural loads, thermal gradients, neutron irradiation damage, and seismic events. Graphite is a quasi-brittle material. Two aspects of nuclear grade graphite, i.e., material anisotropy and different behavior in tension and compression, are explicitly accounted for in this effort. Fracture mechanics methods are useful for metal alloys, but they are problematic for anisotropic materials with a microstructure that makes it difficult to identify a "critical" flaw. In fact, cracking in a graphite core component does not necessarily result in the loss of integrity of a nuclear graphite core assembly. A phenomenological failure criterion that does not rely on flaw detection has been derived that accounts for the material behaviors mentioned. The probability of failure of components fabricated from graphite is governed by the scatter in strength. The design protocols being proposed by international code agencies recognize that design and analysis of reactor core components must be based upon probabilistic principles. The reliability models proposed herein for isotropic graphite and graphite that can be characterized as being transversely isotropic are another set of design tools for the next generation very high temperature reactors (VHTR) as well as molten salt reactors. The work begins with a review of phenomenologically based deterministic failure criteria. A number of this genre of failure models are compared with recent multiaxial nuclear grade failure data. Aspects in each are shown to be lacking. The basic behavior of different failure strengths in tension and compression is exhibited by failure models derived for concrete, but attempts to extend these concrete models to anisotropy were unsuccessful. The phenomenological models are directly dependent on stress invariants. A set of invariants, known as an integrity basis, was developed for a non-linear elastic constitutive model. This integrity basis allowed the non-linear constitutive model to exhibit different behavior in tension and compression, and moreover the integrity basis was amenable to being augmented and extended to anisotropic behavior. This integrity basis served as the starting point in developing both an isotropic reliability model and a reliability model for transversely isotropic materials. At the heart of the reliability models is a failure function very similar in nature to the yield functions found in classic plasticity theory. The failure function is derived and presented in the context of a multiaxial stress space. States of stress inside the failure envelope denote safe operating states. States of stress on or outside the failure envelope denote failure. The phenomenological strength parameters associated with the failure function are treated as random variables. There is a wealth of failure data in the literature that supports this notion. The mathematical integration of a joint probability density function that is dependent on the random strength variables over the safe operating domain defined by the failure function provides a way to compute the reliability of a state of stress in a graphite core component. The evaluation of the integral providing the reliability associated with an operational stress state can only be carried out using a numerical method.
Monte Carlo simulation with importance sampling was selected to make these calculations. The derivation of the isotropic reliability model and the extension of the reliability model to anisotropy are provided in full detail. Model parameters are cast in terms of strength parameters that can be (and have been) characterized by multiaxial failure tests. Comparisons of model predictions with failure data are made, and a brief comparison is made to the reliability predictions called for in the ASME Boiler and Pressure Vessel Code. Future work is identified that would provide further verification and augmentation of the numerical methods used to evaluate model predictions.
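The importance-sampling step can be illustrated on a one-dimensional stand-in: a single random strength versus a fixed applied stress, with the sampling density shifted toward the failure region and each sample re-weighted by the density ratio. The real model integrates a joint density over a multiaxial failure surface; the distribution and numbers below are assumed.

# Toy importance-sampling estimate of a rare failure probability.
import math, random

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def pof_importance(applied, mu=65.0, sd=5.0, n=20_000, seed=0):
    """P(strength < applied stress), sampling from a density shifted toward
    the failure region and re-weighting each sample."""
    rng = random.Random(seed)
    mu_is = applied                     # shifted (importance) sampling density
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu_is, sd)
        if x < applied:                 # failure domain indicator
            total += norm_pdf(x, mu, sd) / norm_pdf(x, mu_is, sd)
    return total / n

print(f"P_f ~ {pof_importance(applied=45.0):.2e}")   # rare event, roughly Phi(-4)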
Integrated Design Software Predicts the Creep Life of Monolithic Ceramic Components
NASA Technical Reports Server (NTRS)
1996-01-01
Significant improvements in propulsion and power generation for the next century will require revolutionary advances in high-temperature materials and structural design. Advanced ceramics are candidate materials for these elevated-temperature applications. As design protocols emerge for these material systems, designers must be aware of several innate features, including the degrading ability of ceramics to carry sustained load. Usually, time-dependent failure in ceramics occurs because of two different, delayed-failure mechanisms: slow crack growth and creep rupture. Slow crack growth initiates at a preexisting flaw and continues until a critical crack length is reached, causing catastrophic failure. Creep rupture, on the other hand, occurs because of bulk damage in the material: void nucleation and coalescence that eventually leads to macrocracks which then propagate to failure. Successful application of advanced ceramics depends on proper characterization of material behavior and the use of an appropriate design methodology. The life of a ceramic component can be predicted with the NASA Lewis Research Center's Ceramics Analysis and Reliability Evaluation of Structures (CARES) integrated design programs. CARES/CREEP determines the expected life of a component under creep conditions, and CARES/LIFE predicts the component life due to fast fracture and subcritical crack growth. The previously developed CARES/LIFE program has been used in numerous industrial and Government applications.
Failure and recovery in dynamical networks.
Böttcher, L; Luković, M; Nagler, J; Havlin, S; Herrmann, H J
2017-02-03
Failure, damage spread and recovery crucially underlie many spatially embedded networked systems ranging from transportation structures to the human body. Here we study the interplay between spontaneous damage, induced failure and recovery in both embedded and non-embedded networks. In our model the network's components follow three realistic processes that capture these features: (i) spontaneous failure of a component independent of the neighborhood (internal failure), (ii) failure induced by failed neighboring nodes (external failure) and (iii) spontaneous recovery of a component. We identify a metastable domain in the global network phase diagram spanned by the model's control parameters where dramatic hysteresis effects and random switching between two coexisting states are observed. This dynamics depends on the characteristic link length of the embedded system. For the Euclidean lattice in particular, hysteresis and switching only occur in an extremely narrow region of the parameter space compared to random networks. We develop a unifying theory which links the dynamics of our model to contact processes. Our unifying framework may help to better understand controllability in spatially embedded and random networks where spontaneous recovery of components can mitigate spontaneous failure and damage spread in dynamical networks.
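The three processes described above (spontaneous internal failure, failure induced by failed neighbours, and spontaneous recovery) can be sketched as a discrete-time simulation on a random graph that tracks the active fraction of nodes. The rates, neighbour threshold, and graph choice below are illustrative, not the paper's exact formulation.

# Toy discrete-time simulation of internal failure, induced failure and
# recovery on a random graph; requires networkx.
import random
import networkx as nx

def simulate(n=2000, k=6, p_int=0.002, p_ext=0.2, p_rec=0.05,
             frac_thresh=0.5, steps=300, seed=0):
    rng = random.Random(seed)
    G = nx.erdos_renyi_graph(n, k / n, seed=seed)
    failed = set()
    active_frac = []
    for _ in range(steps):
        new_failed = set(failed)
        for v in G:
            if v in failed:
                if rng.random() < p_rec:                 # spontaneous recovery
                    new_failed.discard(v)
            else:
                nbrs = list(G[v])
                bad = sum(u in failed for u in nbrs)
                if rng.random() < p_int:                 # internal failure
                    new_failed.add(v)
                elif nbrs and bad / len(nbrs) >= frac_thresh and rng.random() < p_ext:
                    new_failed.add(v)                    # externally induced failure
        failed = new_failed
        active_frac.append(1 - len(failed) / n)
    return active_frac

trace = simulate()
print(f"active fraction: start {trace[0]:.2f}, end {trace[-1]:.2f}")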
Spatial correlation analysis of cascading failures: Congestions and Blackouts
Daqing, Li; Yinan, Jiang; Rui, Kang; Havlin, Shlomo
2014-01-01
Cascading failures have become major threats to network robustness due to their potential catastrophic consequences, where local perturbations can induce global propagation of failures. Unlike failures spreading via direct contacts due to structural interdependencies, overload failures usually propagate through collective interactions among system components. Despite the critical need to develop protection or mitigation strategies in networks such as power grids and transportation, the propagation behavior of cascading failures is essentially unknown. Here we find, by analyzing our collected data, that jams in city traffic and faults in the power grid are spatially long-range correlated, with correlations decaying slowly with distance. Moreover, we find in daily traffic that the correlation length increases dramatically and reaches a maximum when the morning or evening rush hour is approaching. Our study can impact all efforts towards actively improving system resilience, ranging from evaluation of design schemes and development of protection strategies to implementation of mitigation programs. PMID:24946927
Virtually-synchronous communication based on a weak failure suspector
NASA Technical Reports Server (NTRS)
Schiper, Andre; Ricciardi, Aleta
1993-01-01
Failure detectors (or, more accurately, Failure Suspectors (FS)) appear to be a fundamental service upon which to build fault-tolerant, distributed applications. This paper shows that a FS with very weak semantics (i.e., that delivers failure and recovery information in no specific order) suffices to implement virtually-synchronous communication (VSC) in an asynchronous system subject to process crash failures and network partitions. The VSC paradigm is particularly useful in asynchronous systems and greatly simplifies building fault-tolerant applications that mask failures by replicating processes. We suggest a three-component architecture to implement virtually-synchronous communication: (1) at the lowest level, the FS component; (2a) on top of it, a component that defines new views; and (2b) a component that reliably multicasts messages within a view. The issues covered in this paper also lead to a better understanding of the various membership service semantics proposed in recent literature.
Implementation and Qualifications Lessons Learned for Space Flight Photonic Components
NASA Technical Reports Server (NTRS)
Ott, Melanie N.
2010-01-01
This slide presentation reviews the process for implementation and qualification of space flight photonic components. It discusses the causes of the most common anomalies for space flight components, design compatibility, a specific failure analysis of optical fiber that occurred in a cable in 1999-2000, and another ExPCA connector anomaly involving pins that broke off. It reviews issues around material selection, quality processes and documentation, and current projects that the Photonics group is involved in. The importance of good documentation is stressed.
Failure Analysis of a Missile Locking Hook from the F-14 Jet
1989-09-01
MTL) to determine the probable cause of failure. The component is one of two launcher housing support points for the Sparrow Missile and is located... reference Raytheon Drawing No. 685029, Figure 3). Atomic absorption and inductively coupled argon plasma emission spectroscopy were used to determine... microscopy, while Figure 16 is a SEM fractograph taken of the same region. The crack initiation site was determined by tracing the radial marks indicative of
Bala, Lakshmi; Mehrotra, Mayank; Mohindra, Samir; Saxena, Rajan; Khetrapal, Chunni Lal
2013-02-01
Fulminant hepatic failure is associated with liver metabolic derangements which could have fatal consequences. The aim of the present study is to identify serum markers for early prediction of the outcome. Proton nuclear magnetic resonance spectroscopic studies of serum of fulminant hepatic failure patients due to viral hepatitis with grade II/III encephalopathy (twenty-four: ten prospective and fourteen retrospective) and twenty-five controls were undertaken. Of the twenty-four patients, fifteen survived with medical management alone while nine had a fatal outcome. The results demonstrated significantly elevated indices of amino acids (alanine, lysine, glutamine, histidine, tyrosine, phenylalanine and 1,2-propanediol) in fatal cases compared to survivors and controls. Principal component analysis showed clear separation of fatal and surviving cases. Liver function parameters were significantly deranged in patients, but they failed to provide early significant differences between surviving and fatal cases. Compared to model for end-stage liver disease scores, principal component analysis appears to be better as an early prognostic indicator. Biochemical mapping of pathways suggested interruptions in amino acid metabolism and the urea cycle. Proton nuclear magnetic resonance studies of serum have the potential of rapidly identifying patients with irreversible fulminant hepatic failure requiring liver transplantation as a life-saving option. Copyright © 2012 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
Independent Orbiter Assessment (IOA): Analysis of the orbiter main propulsion system
NASA Technical Reports Server (NTRS)
Mcnicoll, W. J.; Mcneely, M.; Holden, K. A.; Emmons, T. E.; Lowery, H. J.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Main Propulsion System (MPS) hardware are documented. The Orbiter MPS consists of two subsystems: the Propellant Management Subsystem (PMS) and the Helium Subsystem. The PMS is a system of manifolds, distribution lines and valves by which the liquid propellants pass from the External Tank (ET) to the Space Shuttle Main Engines (SSMEs) and gaseous propellants pass from the SSMEs to the ET. The Helium Subsystem consists of a series of helium supply tanks and their associated regulators, check valves, distribution lines, and control valves. The Helium Subsystem supplies helium that is used within the SSMEs for inflight purges and provides pressure for actuation of SSME valves during emergency pneumatic shutdowns. The balance of the helium is used to provide pressure to operate the pneumatically actuated valves within the PMS. Each component was evaluated and analyzed for possible failure modes and effects. Criticalities were assigned based on the worst possible effect of each failure mode. Of the 690 failure modes analyzed, 349 were determined to be PCIs.
NASA Astrophysics Data System (ADS)
Kempf, Scott; Schäfer, Frank K.; Cardone, Tiziana; Ferreira, Ivo; Gerené, Sam; Destefanis, Roberto; Grassi, Lilith
2016-12-01
During recent years, the state-of-the-art risk assessment of the threat posed to spacecraft by micrometeoroids and space debris has been expanded to the analysis of failure modes of internal spacecraft components. This method can now be used to perform risk analyses for satellites to assess various failure levels, from failure of specific sub-systems to catastrophic break-up. This new assessment methodology is based on triple-wall ballistic limit equations (BLEs), specifically the Schäfer-Ryan-Lambert (SRL) BLE, which is applicable for describing failure threshold levels for satellite components following a hypervelocity impact. The methodology is implemented in the form of the software tool Particle Impact Risk and Vulnerability Analysis Tool (PIRAT). During a recent European Space Agency (ESA) funded study, the PIRAT functionality was expanded in order to provide an interface to ESA's Concurrent Design Facility (CDF). The additions include a geometry importer and an OCDT (Open Concurrent Design Tool) interface. The new interface provides both the expanded geometrical flexibility offered by external computer aided design (CAD) modelling and an ease of import of existing data without the need for extensive preparation of the model. The reduced effort required to perform vulnerability analyses makes it feasible to apply them during the early design phase, at which point modifications to the satellite design can be undertaken with relatively little extra effort. The integration of PIRAT in the CDF represents the first time that vulnerability analyses can be performed in-session in ESA's CDF and the first time that comprehensive vulnerability studies can be applied cost-effectively in the early design phase in general.
Jackson, Brian A; Faith, Kay Sullivan
2013-02-01
Although significant progress has been made in measuring public health emergency preparedness, system-level performance measures are lacking. This report examines a potential approach to such measures for Strategic National Stockpile (SNS) operations. We adapted failure mode and effects analysis, an engineering analytic technique used to assess the reliability of technological systems, to assess preparedness. That technique, which includes systematic mapping of the response system and identification of possible breakdowns that affect performance, provides a path to use data from existing SNS assessment tools to estimate the likely future performance of the system overall. Systems models of SNS operations were constructed and failure mode analyses were performed for each component. Linking data from existing assessments, including the technical assistance review and functional drills, to reliability assessment was demonstrated using publicly available information. The use of failure mode and effects estimates to assess overall response system reliability was demonstrated with a simple simulation example. Reliability analysis appears to be an attractive way to integrate information from the substantial investment in detailed assessments for stockpile delivery and dispensing, providing a view of likely future response performance.
Comprehensive risk assessment method of catastrophic accident based on complex network properties
NASA Astrophysics Data System (ADS)
Cui, Zhen; Pang, Jun; Shen, Xiaohong
2017-09-01
At the macro level, the structural properties of the network and, at the micro level, the electrical characteristics of its components jointly determine the risk of cascading failures. Because a cascading failure is a dynamically developing process, not only the direct risk but also the potential risk should be considered. In this paper, the direct and potential risks of failures are considered comprehensively on the basis of uncertain risk analysis theory and connection number theory; the uncertain correlation is quantified by node degree and node clustering coefficient, and a comprehensive risk indicator of failure is established. The proposed method is demonstrated by simulation on an actual power grid: a network is modelled according to the real grid topology, and the rationality of the proposed method is verified.
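As a rough illustration of the kind of indicator described (the combination rule and weight below are placeholders, not the paper's actual formulation based on connection number theory), node degree and clustering coefficient can be combined into a per-node risk weight with networkx:

```python
import networkx as nx

# Small illustrative graph; a real study would use the actual power grid topology.
G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (1, 3), (3, 4), (4, 5)])

degree = dict(G.degree())
clustering = nx.clustering(G)
max_deg = max(degree.values())

alpha = 0.6  # assumed weight between the two terms (placeholder)
# Placeholder indicator: normalised degree stands in for how widely a failure can
# propagate; low clustering stands in for a lack of local redundancy.
risk = {n: alpha * degree[n] / max_deg + (1 - alpha) * (1 - clustering[n]) for n in G.nodes}

for n, r in sorted(risk.items(), key=lambda kv: -kv[1]):
    print(f"node {n}: degree={degree[n]}, clustering={clustering[n]:.2f}, risk={r:.3f}")
```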
A review of failure models for unidirectional ceramic matrix composites under monotonic loads
NASA Technical Reports Server (NTRS)
Tripp, David E.; Hemann, John H.; Gyekenyesi, John P.
1989-01-01
Ceramic matrix composites offer significant potential for improving the performance of turbine engines. In order to achieve their potential, however, improvements in design methodology are needed. In the past most components using structural ceramic matrix composites were designed by trial and error since the emphasis of feasibility demonstration minimized the development of mathematical models. To understand the key parameters controlling response and the mechanics of failure, the development of structural failure models is required. A review of short term failure models with potential for ceramic matrix composite laminates under monotonic loads is presented. Phenomenological, semi-empirical, shear-lag, fracture mechanics, damage mechanics, and statistical models for the fast fracture analysis of continuous fiber unidirectional ceramic matrix composites under monotonic loads are surveyed.
NASA Astrophysics Data System (ADS)
Witantyo; Rindiyah, Anita
2018-03-01
According to data from maintenance planning and control, the highest inventory value is held in non-routine components. Maintenance components are the components procured to support maintenance activities. The problem arises because there is no synchronization between maintenance activities and the components they require. The Reliability Centered Maintenance method is used to overcome this problem by re-evaluating the maintenance activities and the components they require. The roller mill system was chosen as the case study because it has the highest record of unscheduled downtime. The components required for each maintenance activity are determined from the component failure distributions, so the number of components needed can be predicted. Moreover, those components can be reclassified from non-routine to routine components, so that procurement can be carried out regularly. Based on the analysis, the failures underlying almost every maintenance task are classified into scheduled on-condition tasks, scheduled discard tasks, scheduled restoration tasks and no scheduled maintenance. Of the 87 components used in maintenance activities that were evaluated, 19 components were reclassified from non-routine to routine components. The reliability and demand for those components were then calculated for a one-year operation period. Based on these findings, it is suggested that all of the components addressed in the overhaul activity be replaced to increase the reliability of the roller mill system. In addition, the inventory system should follow the maintenance schedule and the number of components required by each maintenance activity, so that procurement cost decreases and system reliability increases.
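A small sketch of how a component's failure distribution can be turned into an expected spares count for a one-year operating period; the Weibull parameters and horizon are assumptions, not the roller-mill values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed Weibull failure distribution for a single maintenance component (illustrative).
shape, scale_hours = 1.8, 2000.0
horizon_hours = 8760.0            # one year of operation

def spares_needed(n_sim=20_000):
    """Simulate renewals: count the replacements needed to cover the horizon."""
    counts = np.zeros(n_sim, dtype=int)
    for i in range(n_sim):
        t = 0.0
        while True:
            t += scale_hours * rng.weibull(shape)   # time to next failure
            if t > horizon_hours:
                break
            counts[i] += 1
    return counts

counts = spares_needed()
print("expected spares per year:", counts.mean())
print("stock level covering 95% of years:", int(np.percentile(counts, 95)))
```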
NASA Technical Reports Server (NTRS)
Robinson, W. W.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the Electrical Power Distribution and Control (EPD and C)/Remote Manipulator System (RMS) hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained in the NASA FMEA/CIL documentation. This report documents the results of the independent analysis of the EPD and C/RMS (both port and starboard) hardware. The EPD and C/RMS subsystem hardware provides the electrical power and power control circuitry required to safely deploy, operate, control, and stow or guillotine and jettison two (one port and one starboard) RMSs. The EPD and C/RMS subsystem is subdivided into the following five functional divisions: Remote Manipulator Arm; Manipulator Deploy Control; Manipulator Latch Control; Manipulator Arm Shoulder Jettison; and Retention Arm Jettison. The IOA analysis process utilized available EPD and C/RMS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based on the severity of the effect for each failure mode.
Reliability and Confidence Interval Analysis of a CMC Turbine Stator Vane
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Gyekenyesi, John P.; Mital, Subodh K.
2008-01-01
High temperature ceramic matrix composites (CMC) are being explored as viable candidate materials for hot section gas turbine components. These advanced composites can potentially lead to reduced weight, enable higher operating temperatures requiring less cooling and thus leading to increased engine efficiencies. However, these materials are brittle and show degradation with time at high operating temperatures due to creep as well as cyclic mechanical and thermal loads. In addition, these materials are heterogeneous in their make-up and various factors affect their properties in a specific design environment. Most of these advanced composites involve two- and three-dimensional fiber architectures and require a complex multi-step high temperature processing. Since there are uncertainties associated with each of these in addition to the variability in the constituent material properties, the observed behavior of composite materials exhibits scatter. Traditional material failure analyses employing a deterministic approach, where failure is assumed to occur when some allowable stress level or equivalent stress is exceeded, are not adequate for brittle material component design. Such phenomenological failure theories are reasonably successful when applied to ductile materials such as metals. Analysis of failure in structural components is governed by the observed scatter in strength, stiffness and loading conditions. In such situations, statistical design approaches must be used. Accounting for these phenomena requires a change in philosophy on the design engineer's part that leads to a reduced focus on the use of safety factors in favor of reliability analyses. The reliability approach demands that the design engineer must tolerate a finite risk of unacceptable performance. This risk of unacceptable performance is identified as a component's probability of failure (or alternatively, component reliability). The primary concern of the engineer is minimizing this risk in an economical manner. The methods to accurately determine the service life of an engine component with associated variability have become increasingly difficult. This results, in part, from the complex missions which are now routinely considered during the design process. These missions include large variations of multi-axial stresses and temperatures experienced by critical engine parts. There is a need for a convenient design tool that can accommodate various loading conditions induced by engine operating environments, and material data with their associated uncertainties to estimate the minimum predicted life of a structural component. A probabilistic composite micromechanics technique in combination with woven composite micromechanics, structural analysis and Fast Probability Integration (FPI) techniques has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Furthermore, input variables causing scatter are identified and ranked based upon their sensitivity magnitude. Since the measured data for the ceramic matrix composite properties is very limited, obtaining a probabilistic distribution with their corresponding parameters is difficult. In the case of limited data, confidence bounds are essential to quantify the uncertainty associated with the distribution. Usually 90 and 95% confidence intervals are computed for material properties. Failure properties are then computed with the confidence bounds.
Best estimates and the confidence bounds on the best estimate of the cumulative probability function for R-S (strength - stress) are plotted. The methodologies and the results from these analyses will be discussed in the presentation.
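The strength-stress (R-S) idea can be illustrated with a short Monte Carlo sketch: the failure probability is P(R - S < 0), and bootstrap resampling of a limited strength sample gives rough confidence bounds. The distributions and sample size below are assumptions, not the vane data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed inputs: a small sample of measured strengths (MPa) and a stress
# distribution from structural analysis; both are illustrative, not the vane data.
strength_sample = rng.normal(300.0, 25.0, size=15)
stress_mean, stress_std = 220.0, 20.0

def prob_failure(strengths, n=50_000):
    R = rng.choice(strengths, size=n, replace=True)    # resample the strength data
    S = rng.normal(stress_mean, stress_std, size=n)    # sample the stress spectrum
    return np.mean(R < S)                              # P(R - S < 0)

best = prob_failure(strength_sample)
# Bootstrap the limited strength sample to get rough confidence bounds.
boot = [prob_failure(rng.choice(strength_sample, size=strength_sample.size, replace=True))
        for _ in range(200)]
lo, hi = np.percentile(boot, [5, 95])
print(f"P(failure) ~ {best:.4f}  (90% bootstrap bounds: {lo:.4f} to {hi:.4f})")
```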
Cost decomposition of linear systems with application to model reduction
NASA Technical Reports Server (NTRS)
Skelton, R. E.
1980-01-01
A means is provided to assess the value or 'cost' of each component of a large scale system, when the total cost is a quadratic function. Such a 'cost decomposition' of the system has several important uses. When the components represent physical subsystems which can fail, the 'component cost' is useful in failure mode analysis. When the components represent mathematical equations which may be truncated, the 'component cost' becomes a criterion for model truncation. In this latter event, component costs provide a mechanism by which the specific control objectives dictate which components should be retained in the model reduction process. This information can be valuable in model reduction and decentralized control problems.
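A simplified sketch of one common form of component cost analysis for a stable linear system driven by white noise: the steady-state quadratic cost is decomposed over states via the Lyapunov-equation covariance, and the per-state shares rank candidates for truncation. The system matrices and the specific decomposition rule here are illustrative assumptions, not taken from the report:

```python
import numpy as np
from scipy.linalg import solve_lyapunov

# Illustrative stable LTI system  x' = A x + D w,  y = C x,  with unit-intensity white noise w.
A = np.array([[-1.0, 0.5, 0.0],
              [0.0, -2.0, 1.0],
              [0.0, 0.0, -3.0]])
D = np.eye(3)
C = np.array([[1.0, 0.0, 0.5]])
Q = np.eye(1)

# Steady-state state covariance X solves the Lyapunov equation A X + X A' + D D' = 0.
X = solve_lyapunov(A, -D @ D.T)

Qx = C.T @ Q @ C
total_cost = np.trace(X @ Qx)

# One common decomposition assigns state i the share [X Qx]_ii; the shares sum to the
# total cost and can be used to rank states (components) for retention or truncation.
component_cost = np.diag(X @ Qx)
print("total cost:", total_cost)
print("component costs:", component_cost, "sum:", component_cost.sum())
```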
NASA Technical Reports Server (NTRS)
Ames, B. E.
1988-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Electrical Power Generation/Power Reactant Storage and Distribution (EPG/PRSD) subsystem hardware, generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the NASA FMEA/CIL baselines with proposed Post 51-L updates included. A resolution of each discrepancy from the comparison is provided through additional analysis as required. The results of that comparison are documented for the Orbiter EPG/PRSD hardware. The comparison produced agreement on all but 27 FMEAs and 9 CIL items. The discrepancy between the number of IOA findings and NASA FMEAs can be partially explained by the different approaches used by IOA and NASA to group failure modes together to form one FMEA. Also, several IOA items represented inner tank components and ground operations failure modes which were not in the NASA baseline.
Fracture and Failure at and Near Interfaces Under Pressure
1998-06-18
... realistic data for comparison with improved analytical results, and 2) to initiate a new computational approach for stress analysis of cracks in solid propellants at and near interfaces, which analysis can draw on the ever expanding ... tactical and strategic missile systems. The most important and most difficult component of the system analysis has been the predictability or ...
Continuous fiber ceramic matrix composites for heat engine components
NASA Technical Reports Server (NTRS)
Tripp, David E.
1988-01-01
High strength at elevated temperatures, low density, resistance to wear, and abundance of nonstrategic raw materials make structural ceramics attractive for advanced heat engine applications. Unfortunately, ceramics have a low fracture toughness and fail catastrophically because of overload, impact, and contact stresses. Ceramic matrix composites provide the means to achieve improved fracture toughness while retaining desirable characteristics, such as high strength and low density. Materials scientists and engineers are trying to develop the ideal fibers and matrices to achieve the optimum ceramic matrix composite properties. A need exists for the development of failure models for the design of ceramic matrix composite heat engine components. Phenomenological failure models are currently the most frequently used in industry, but they are deterministic and do not adequately describe ceramic matrix composite behavior. Semi-empirical models were proposed, which relate the failure of notched composite laminates to the stress a characteristic distance away from the notch. Shear lag models describe composite failure modes at the micromechanics level. The enhanced matrix cracking stress occurs at the same applied stress level predicted by the two models of steady state cracking. Finally, statistical models take into consideration the distribution in composite failure strength. The intent is to develop these models into computer algorithms for the failure analysis of ceramic matrix composites under monotonically increasing loads. The algorithms will be included in a postprocessor to general purpose finite element programs.
Experimental study on the connection property of full-scale composite member
NASA Astrophysics Data System (ADS)
Panpan, Cao; Qing, Sun
2018-01-01
The excellent properties of composites have led to their increasing application in electric power construction; however, there are few experimental studies on the connection behaviour of full-scale composite members. Full-scale experiments on the connection between an E-glass fiber/epoxy reinforced polymer member and a steel casing, as used in practical engineering, were conducted. Based on axial compression tests of the designed specimens, the failure process and failure characteristics were observed, and the load-displacement curves and strain distributions of the specimens were obtained. Finite element analysis was used to obtain the tensile connection strength of the component. The connection behaviour of the components was analyzed to provide a basis for the casing connection in practical GFRP applications.
Failure prediction of thin beryllium sheets used in spacecraft structures
NASA Technical Reports Server (NTRS)
Roschke, Paul N.; Mascorro, Edward; Papados, Photios; Serna, Oscar R.
1991-01-01
The primary objective of this study is to develop a method for prediction of failure of thin beryllium sheets that undergo complex states of stress. Major components of the research include experimental evaluation of strength parameters for cross-rolled beryllium sheet, application of the Tsai-Wu failure criterion to plate bending problems, development of a high order failure criterion, application of the new criterion to a variety of structures, and incorporation of both failure criteria into a finite element code. A Tsai-Wu failure model for SR-200 sheet material is developed from available tensile data, experiments carried out by NASA on two circular plates, and compression and off-axis experiments performed in this study. The failure surface obtained from the resulting criterion forms an ellipsoid. By supplementing experimental data used in the two-dimensional criterion and modifying previously suggested failure criteria, a multi-dimensional failure surface is proposed for thin beryllium structures. The new criterion for orthotropic material is represented by a failure surface in six-dimensional stress space. In order to determine coefficients of the governing equation, a number of uniaxial, biaxial, and triaxial experiments are required. Details of these experiments and a complementary ultrasonic investigation are described in detail. Finally, validity of the criterion and newly determined mechanical properties is established through experiments on structures composed of SR-200 sheet material. These experiments include a plate-plug arrangement under a complex state of stress and a series of plates with an out-of-plane central point load. Both criteria have been incorporated into a general purpose finite element analysis code. Numerical simulation incrementally applies loads to a structural component that is being designed and checks each nodal point in the model for exceedance of a failure criterion. If stresses at all locations do not exceed the failure criterion, the load is increased and the process is repeated. Failure results for the plate-plug and clamped plate tests are accurate to within 2 percent.
Independent Orbiter Assessment (IOA): Analysis of the Orbiter Experiment (OEX) subsystem
NASA Technical Reports Server (NTRS)
Compton, J. M.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Experiments hardware. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. The Orbiter Experiments (OEX) Program consists of a multiple set of experiments for the purpose of gathering environmental and aerodynamic data to develop more accurate ground models for Shuttle performance and to facilitate the design of future spacecraft. This assessment only addresses currently manifested experiments and their support systems. Specifically, this list consists of: Shuttle Entry Air Data System (SEADS); Shuttle Upper Atmosphere Mass Spectrometer (SUMS); Forward Fuselage Support System for OEX (FFSSO); Shuttle Infrared Leeside Temperature Sensing (SILTS); Aerodynamic Coefficient Identification Package (ACIP); and Support System for OEX (SSO). There are only two potential critical items for the OEX, since the experiments only gather data for analysis post mission and are totally independent systems except for power. Failure of any experiment component usually only causes a loss of experiment data and in no way jeopardizes the crew or mission.
Fault detection and fault tolerance in robotics
NASA Technical Reports Server (NTRS)
Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.
1992-01-01
Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.
Pes, Giovanni Mario; Delitala, Alessandro Palmerio; Errigo, Alessandra; Delitala, Giuseppe; Dore, Maria Pina
2016-06-01
Latent autoimmune diabetes in adults (LADA), which accounts for more than 10 % of all cases of diabetes, is characterized by onset after age 30, absence of ketoacidosis, insulin independence for at least 6 months, and presence of circulating islet-cell antibodies. Its marked heterogeneity in clinical features and immunological markers suggests the existence of multiple mechanisms underlying its pathogenesis. Principal component (PC) analysis is a statistical approach used for finding patterns in data of high dimension. In this study the PC analysis was applied to a set of variables from a cohort of Sardinian LADA patients to identify a smaller number of latent patterns. A list of 11 variables including clinical (gender, BMI, lipid profile, systolic and diastolic blood pressure and insulin-free time period), immunological (anti-GAD65, anti-IA-2 and anti-TPO antibody titers) and genetic features (predisposing gene variants previously identified as risk factors for autoimmune diabetes) retrieved from clinical records of 238 LADA patients referred to the Internal Medicine Unit of University of Sassari, Italy, were analyzed by PC analysis. The predictive value of each PC on the further development of insulin dependence was evaluated using Kaplan-Meier curves. Overall 4 clusters were identified by PC analysis. In component PC-1, the dominant variables were: BMI, triglycerides, systolic and diastolic blood pressure and duration of insulin-free time period; in PC-2: genetic variables such as Class II HLA, CTLA-4 as well as anti-GAD65, anti-IA-2 and anti-TPO antibody titers, and the insulin-free time period predominated; in PC-3: gender and triglycerides; and in PC-4: total cholesterol. These components explained 18, 15, 12, and 12 %, respectively, of the total variance in the LADA cohort. The predictive power of the four components for insulin dependence was different. PC-2 (characterized mostly by high antibody titers and presence of predisposing genetic markers) showed a faster beta-cell failure, while PC-3 (characterized mostly by gender and high triglycerides) and PC-4 (high cholesterol) showed a slower beta-cell failure. PC-1 (including dyslipidemia and other metabolic dysfunctions) showed a mild beta-cell failure. In conclusion, variable clustering might be consistent with different pathogenic pathways and/or distinct immune mechanisms in LADA and could potentially help physicians improve the clinical management of these patients.
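A generic sketch of the principal-component step described above, on synthetic data with assumed variable names: standardise the clinical, immunological and genetic variables, extract components, and inspect explained variance and dominant loadings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

variables = ["BMI", "triglycerides", "systolic_bp", "diastolic_bp",
             "insulin_free_period", "anti_GAD65", "anti_IA2", "anti_TPO",
             "HLA_class_II", "CTLA4", "total_cholesterol"]
X = rng.normal(size=(238, len(variables)))   # placeholder for the patient matrix

Xz = StandardScaler().fit_transform(X)       # standardise mixed-unit variables
pca = PCA(n_components=4).fit(Xz)

for i, (ratio, loadings) in enumerate(zip(pca.explained_variance_ratio_, pca.components_), 1):
    top = sorted(zip(variables, loadings), key=lambda kv: -abs(kv[1]))[:3]
    names = ", ".join(name for name, _ in top)
    print(f"PC-{i}: {ratio:.0%} of variance; dominant variables: {names}")
```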
Aging assessment of large electric motors in nuclear power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villaran, M.; Subudhi, M.
1996-03-01
Large electric motors serve as the prime movers to drive high capacity pumps, fans, compressors, and generators in a variety of nuclear plant systems. This study examined the stressors that cause degradation and aging in large electric motors operating in various plant locations and environments. The operating history of these machines in nuclear plant service was studied by review and analysis of failure reports in the NPRDS and LER databases. This was supplemented by a review of motor designs, and their nuclear and balance of plant applications, in order to characterize the failure mechanisms that cause degradation, aging, and failure in large electric motors. A generic failure modes and effects analysis for large squirrel cage induction motors was performed to identify the degradation and aging mechanisms affecting various components of these large motors, the failure modes that result, and their effects upon the function of the motor. The effects of large motor failures upon the systems in which they are operating, and on the plant as a whole, were analyzed from failure reports in the databases. The effectiveness of the industry's large motor maintenance programs was assessed based upon the failure reports in the databases and reviews of plant maintenance procedures and programs.
The Aging of Engines: An Operator's Perspective
2000-10-01
internal HCF failures of blades. Erosion of compressor gas path components can be minimized through the use of inlet ... aluminide intermetallic ... durability in accelerated burner rig tests [2,35] ... fatigue problems in the dovetail areas of titanium alloy fan and compressor blades. Shot peening in ... Criticality Analysis ... replacement of durability-critical components, such as blades and vanes. The need to balance risk and escalating ...
Modeling Hydraulic Components for Automated FMEA of a Braking System
Struss, Peter; Fraracci, Alessandro
2014-12-23
This paper presents work on model-based automation of failure-modes-and-effects analysis (FMEA) applied to the hydraulic part of a vehicle braking system. We describe the FMEA task and the application problem and outline the foundations for automating the ...
Five year survival analysis of an oxidised zirconium total knee arthroplasty.
Holland, Philip; Santini, Alasdair J A; Davidson, John S; Pope, Jill A
2013-12-01
Zirconium total knee arthroplasties theoretically have a low incidence of failure as they are low friction, hard wearing and hypoallergenic. We report the five year survival of 213 Profix zirconium total knee arthroplasties with a conforming all polyethylene tibial component. Data was collected prospectively and multiple strict end points were used. SF12 and WOMAC scores were recorded pre-operatively, at three months, at twelve months, at 3 years and at 5 years. Eight patients died and six were "lost to follow-up". The remaining 199 knees were followed up for five years. The mean WOMAC score improved from 56 to 35 and the mean SF12 physical component score improved from 28 to 34. The five year survival for failure due to implant related reasons was 99.5% (95% CI 97.4-100). This was due to one tibial component becoming loose aseptically in year zero. Our results demonstrate that the Profix zirconium total knee arthroplasty has a low medium term failure rate comparable to the best implants. Further research is needed to establish if the beneficial properties of zirconium improve long term implant survival. Copyright © 2012 Elsevier B.V. All rights reserved.
Delamination modeling of laminate plate made of sublaminates
NASA Astrophysics Data System (ADS)
Kormaníková, Eva; Kotrasová, Kamila
2017-07-01
The paper presents the mixed-mode delamination of plates made of sublaminates. For this purpose, an opening-load mode of delamination is proposed as the failure model. The failure model is implemented in the ANSYS code to calculate the mixed-mode delamination response in terms of energy release rate. The analysis is based on interface techniques. Within the interface finite element model, the individual damage parameters, namely spring reaction forces, relative displacements and energy release rates along the delamination front, are calculated.
Model 0A wind turbine generator FMEA
NASA Technical Reports Server (NTRS)
Klein, William E.; Lalli, Vincent R.
1989-01-01
The results of Failure Modes and Effects Analysis (FMEA) conducted for the Wind Turbine Generators are presented. The FMEA was performed for the functional modes of each system, subsystem, or component. The single-point failures were eliminated for most of the systems. The blade system was the only exception. The qualitative probability of a blade separating was estimated at level D-remote. Many changes were made to the hardware as a result of this analysis. The most significant change was the addition of the safety system. Operational experience and need to improve machine availability have resulted in subsequent changes to the various systems which are also reflected in this FMEA.
Sensitivity Analysis of Digital I&C Modules in Protection and Safety Systems
NASA Astrophysics Data System (ADS)
Khalil Ur, Rahman; Zubair, M.; Heo, G.
2013-12-01
This research is performed to examine the sensitivity of digital Instrumentation and Control (I&C) components and modules used in the regulating and protection system architectures of the nuclear industry. Fault Tree Analysis (FTA) was performed for four configurations of the RPS channel architecture. The channel unavailability, calculated using AIMS-PSA, is 4.517E-03, 2.551E-03, 2.246E-03 and 2.761E-04 for architecture configurations I, II, III and IV, respectively. It is observed that unavailability decreases by 43.5% and 50.4% when partial redundancy is inserted, whereas a maximum reduction of 93.9% in unavailability occurs when double redundancy is inserted in the architecture. Coincidence module output failure and bi-stable output failure are identified as sensitive failures by Risk Reduction Worth (RRW) and Fussell-Vesely (FV) importance. RRW indicates that the risk from coincidence processor output failure can be reduced by a factor of 48.83, and FV indicates that the BP output is sensitive, with an importance of 0.9796 (on a scale of 1).
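A compact sketch of how channel unavailability, Fussell-Vesely and risk-reduction-worth importances can be computed from minimal cut sets under the rare-event approximation; the cut sets and basic-event probabilities are invented for illustration, not the RPS channel data:

```python
import math

# Assumed basic-event probabilities and minimal cut sets (illustrative only).
p = {"bi_stable_out": 1e-3, "coincidence_out": 5e-4, "power_supply": 2e-4, "relay": 1e-4}
cut_sets = [{"bi_stable_out"}, {"coincidence_out"}, {"power_supply", "relay"}]

def unavailability(prob):
    # Rare-event approximation: system unavailability ~ sum of minimal-cut-set probabilities.
    return sum(math.prod(prob[e] for e in cs) for cs in cut_sets)

Q = unavailability(p)
print("channel unavailability:", Q)

for event in p:
    # Fussell-Vesely: fraction of unavailability coming from cut sets containing the event.
    fv = sum(math.prod(p[e] for e in cs) for cs in cut_sets if event in cs) / Q
    # Risk reduction worth: factor by which risk falls if the event is made perfectly reliable.
    q_without = unavailability({**p, event: 0.0})
    rrw = Q / q_without if q_without > 0 else float("inf")
    print(f"{event:16s} FV = {fv:.3f}  RRW = {rrw:.2f}")
```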
NASA Astrophysics Data System (ADS)
Poley, Jack; Dines, Michael
2011-04-01
Wind turbines are frequently located in remote, hard-to-reach locations, making it difficult to apply traditional oil analysis sampling of the machine's critical gearset at timely intervals. Metal detection sensors are excellent candidates for sensors designed to monitor machine condition in vivo. Remotely sited components, such as wind turbines, therefore, can be comfortably monitored from a distance. Online sensor technology has come of age with products now capable of identifying onset of wear in time to avoid or mitigate failure. Online oil analysis is now viable, and can be integrated with onsite testing to vet sensor alarms, as well as traditional oil analysis, as furnished by offsite laboratories. Controlled laboratory research data were gathered from tests conducted on a typical wind turbine gearbox, wherein total ferrous particle measurement and metallic particle counting were employed and monitored. The results were then compared with a physical inspection for wear experienced by the gearset. The efficacy of results discussed herein strongly suggests the viability of metallic wear debris sensors in today's wind turbine gearsets, as correlation between sensor data and machine trauma were very good. By extension, similar components and settings would also seem amenable to wear particle sensor monitoring. To our knowledge no experiments such as described herein, have previously been conducted and published.
Modelling of Damage Evolution in Braided Composites: Recent Developments
NASA Astrophysics Data System (ADS)
Wang, Chen; Roy, Anish; Silberschmidt, Vadim V.; Chen, Zhong
2017-12-01
Composites reinforced with woven or braided textiles exhibit high structural stability and excellent damage tolerance thanks to yarn interlacing. With their high stiffness-to-weight and strength-to-weight ratios, braided composites are attractive for aerospace and automotive components as well as sports protective equipment. In these potential applications, components are typically subjected to multi-directional static, impact and fatigue loadings. To enhance material analysis and design for such applications, understanding mechanical behaviour of braided composites and development of predictive capabilities becomes crucial. Significant progress has been made in recent years in development of new modelling techniques allowing elucidation of static and dynamic responses of braided composites. However, because of their unique interlacing geometric structure and complicated failure modes, prediction of damage initiation and its evolution in components is still a challenge. Therefore, a comprehensive literature analysis is presented in this work focused on a review of the state-of-the-art progressive damage analysis of braided composites with finite-element simulations. Recently models employed in the studies on mechanical behaviour, impact response and fatigue analyses of braided composites are presented systematically. This review highlights the importance, advantages and limitations of as-applied failure criteria and damage evolution laws for yarns and composite unit cells. In addition, this work provides a good reference for future research on FE simulations of braided composites.
Health monitoring display system for a complex plant
Ridolfo, Charles F [Bloomfield, CT; Harmon, Daryl L [Enfield, CT; Colin, Dreyfuss [Enfield, CT
2006-08-08
A single page enterprise wide level display provides a comprehensive, readily understood representation of the overall health status of a complex plant. Color coded failure domains allow rapid intuitive recognition of component failure status. A three-tier hierarchy of displays provides details on the health status of the components and systems shown on the enterprise wide level display, in a manner that supports a logical drill down: from the health status of sub-components on Tier 1, to expected faults of the sub-components on Tier 2, to specific information relative to expected sub-component failures on Tier 3.
Developing Ultra Reliable Life Support for the Moon and Mars
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2009-01-01
Recycling life support systems can achieve ultra reliability by using spares to replace failed components. The added mass for spares is approximately equal to the original system mass, provided the original system reliability is not very low. Acceptable reliability can be achieved for the space shuttle and space station by preventive maintenance and by replacing failed units. However, this maintenance and repair depends on a logistics supply chain that provides the needed spares. The Mars mission must take all the needed spares at launch. The Mars mission also must achieve ultra reliability, a very low failure rate per hour, since it requires years rather than weeks and cannot be cut short if a failure occurs. Also, the Mars mission has a much higher mass launch cost per kilogram than shuttle or station. Achieving ultra reliable space life support with acceptable mass will require a well-planned and extensive development effort. Analysis must define the reliability requirement and allocate it to subsystems and components. Technologies, components, and materials must be designed and selected for high reliability. Extensive testing is needed to ascertain very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The systems must be designed, produced, integrated, and tested without impairing system reliability. Maintenance and failed unit replacement should not introduce any additional probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass must start soon if it is to produce timely results for the moon and Mars.
Composite Interlaminar Shear Fracture Toughness, G(sub 2c): Shear Measurement or Sheer Myth?
NASA Technical Reports Server (NTRS)
OBrien, T. Kevin
1997-01-01
The concept of G2c as a measure of the interlaminar shear fracture toughness of a composite material is critically examined. In particular, it is argued that the apparent G2c as typically measured is inconsistent with the original definition of shear fracture. It is shown that interlaminar shear failure actually consists of tension failures in the resin rich layers between plies followed by the coalescence of ligaments created by these failures and not the sliding of two planes relative to one another that is assumed in fracture mechanics theory. Several strain energy release rate solutions are reviewed for delamination in composite laminates and structural components where failures have been experimentally documented. Failures typically occur at a location where the mode I component accounts for at least one half of the total G at failure. Hence, it is the mode I and mixed-mode interlaminar fracture toughness data that will be most useful in predicting delamination failure in composite components in service. Although apparent G2c measurements may prove useful for completeness of generating mixed-mode criteria, the accuracy of these measurements may have very little influence on the prediction of mixed-mode failures in most structural components.
Analysis on Sealing Reliability of Bolted Joint Ball Head Component of Satellite Propulsion System
NASA Astrophysics Data System (ADS)
Guo, Tao; Fan, Yougao; Gao, Feng; Gu, Shixin; Wang, Wei
2018-01-01
The propulsion system is one of the important subsystems of a satellite, and its performance directly affects the service life, attitude control and reliability of the satellite. The paper analyzes the sealing principle of the bolted joint ball head component of the satellite propulsion system and discusses the compatibility of anhydrous hydrazine with the bolted joint ball head component, the influence of the ground environment on the sealing performance of bolted joint ball heads, and material failure caused by the environment. The analysis shows that the sealing reliability of the bolted joint ball head component is good and that the influence of the above three aspects on the sealing of the bolted joint ball head component can be ignored.
2015-01-01
Cell membrane chromatography (CMC) derived from pathological tissues is ideal for screening specific components acting on specific diseases from complex medicines owing to the maximum simulation of in vivo drug-receptor interactions. However, there are no pathological tissue-derived CMC models that have ever been developed, as well as no visualized affinity comparison of potential active components between normal and pathological CMC columns. In this study, a novel comparative normal/failing rat myocardium CMC analysis system based on online column selection and comprehensive two-dimensional (2D) chromatography/monolithic column/time-of-flight mass spectrometry was developed for parallel comparison of the chromatographic behaviors on both normal and pathological CMC columns, as well as rapid screening of the specific therapeutic agents that counteract doxorubicin (DOX)-induced heart failure from Acontium carmichaeli (Fuzi). In total, 16 potential active alkaloid components with similar structures in Fuzi were retained on both normal and failing myocardium CMC models. Most of them had obvious decreases of affinities on failing myocardium CMC compared with normal CMC model except for four components, talatizamine (TALA), 14-acetyl-TALA, hetisine, and 14-benzoylneoline. One compound TALA with the highest affinity was isolated for further in vitro pharmacodynamic validation and target identification to validate the screen results. Voltage-dependent K+ channel was confirmed as a binding target of TALA and 14-acetyl-TALA with high affinities. The online high throughput comparative CMC analysis method is suitable for screening specific active components from herbal medicines by increasing the specificity of screened results and can also be applied to other biological chromatography models. PMID:24731167
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. S. Schroeder; R. W. Youngblood
The Risk-Informed Safety Margin Characterization (RISMC) pathway of the Light Water Reactor Sustainability Program is developing simulation-based methods and tools for analyzing safety margin from a modern perspective. [1] There are multiple definitions of 'margin.' One class of definitions defines margin in terms of the distance between a point estimate of a given performance parameter (such as peak clad temperature), and a point-value acceptance criterion defined for that parameter (such as 2200 F). The present perspective on margin is that it relates to the probability of failure, and not just the distance between a nominal operating point and a criterion. In this work, margin is characterized through a probabilistic analysis of the 'loads' imposed on systems, structures, and components, and their 'capacity' to resist those loads without failing. Given the probabilistic load and capacity spectra, one can assess the probability that load exceeds capacity, leading to component failure. Within the project, we refer to a plot of these probabilistic spectra as 'the logo.' Refer to Figure 1 for a notional illustration. The implications of referring to 'the logo' are (1) RISMC is focused on being able to analyze loads and spectra probabilistically, and (2) calling it 'the logo' tacitly acknowledges that it is a highly simplified picture: meaningful analysis of a given component failure mode may require development of probabilistic spectra for multiple physical parameters, and in many practical cases, 'load' and 'capacity' will not vary independently.
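A minimal numeric sketch of the load-versus-capacity picture: with assumed probabilistic 'load' and 'capacity' spectra, margin is characterised by the probability that load exceeds capacity rather than by the distance between a nominal value and a criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed spectra (illustrative): "load", e.g. peak clad temperature from transient
# simulations, and "capacity", e.g. the temperature the component can withstand.
load = rng.normal(1800.0, 120.0, size=200_000)       # deg F
capacity = rng.normal(2200.0, 80.0, size=200_000)    # deg F

p_fail = np.mean(load > capacity)
print("P(load exceeds capacity):", p_fail)

# The point-estimate view, by contrast, only reports a distance to a fixed criterion.
print("nominal margin to 2200 F criterion (deg F):", 2200.0 - load.mean())
```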
Fractography, NDE, and fracture mechanics applications in failure analysis studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morin, C.R.; Shipley, R.J.; Wilkinson, J.A.
1994-10-01
While identification of the precise mode of a failure can lead logically to the underlying cause, a thorough failure investigation requires much more than just the identification of a specific metallurgical mechanism, for example, fatigue, creep, stress corrosion cracking, etc. Failures involving fracture provide good illustrations of this concept. An initial step in characterizing fracture surfaces is often the identification of an origin or origins. However, the analysis should not stop there. If the origin is associated with a discontinuity, the manner in which it was formed must also be addressed. The stresses that would have existed at the origin must be determined and compared with material properties to determine whether or not a crack should have initiated and propagated during normal operation. Many critical components are inspected throughout their lives by nondestructive methods. When a crack progresses to failure, its nondetection at earlier inspections must also be understood. Careful study of the fracture surface combined with crack growth analysis based on fracture mechanics can provide an estimate of the crack length at the times of previous inspections. An important issue often overlooked in such studies is how processing of parts during manufacture or rework affects the probability of detection of such cracks. The ultimate goal is to understand thoroughly the progression of the failure, to understand the root cause(s), and to design appropriate corrective action(s) to minimize recurrence.
Failure analysis of parameter-induced simulation crashes in climate models
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y.
2013-01-01
Simulations using IPCC-class climate models are subject to fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We apply support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicts model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures are determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations are the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.
Failure analysis of parameter-induced simulation crashes in climate models
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y.
2013-08-01
Simulations using IPCC (Intergovernmental Panel on Climate Change)-class climate models are subject to fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We applied support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicted model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures were determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations were the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.
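A schematic reproduction of the classification step, using synthetic data and default hyperparameters rather than the actual CCSM4/POP2 ensemble: fit an SVM to predict crash/no-crash from parameter values and score it by ROC AUC on held-out runs:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the ensemble: 18 parameter values per run, with a small
# fraction of "crashes" concentrated in one corner of parameter space.
X = rng.uniform(size=(1000, 18))
y = (X[:, 0] + X[:, 1] > 1.7).astype(int)   # roughly 4-5% failures, purely illustrative

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", probability=True, class_weight="balanced"))
clf.fit(X_tr, y_tr)

scores = clf.predict_proba(X_te)[:, 1]      # estimated probability of failure per held-out run
print("validation ROC AUC:", roc_auc_score(y_te, scores))
```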
Independent Orbiter Assessment (IOA): Analysis of the landing/deceleration subsystem
NASA Technical Reports Server (NTRS)
Compton, J. M.; Beaird, H. G.; Weissinger, W. D.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Landing/Deceleration Subsystem hardware. The Landing/Deceleration Subsystem is utilized to allow the Orbiter to perform a safe landing, allowing for landing-gear deploy activities, steering and braking control throughout the landing rollout to wheel-stop, and to allow for ground-handling capability during the ground-processing phase of the flight cycle. Specifically, the Landing/Deceleration hardware consists of the following components: Nose Landing Gear (NLG); Main Landing Gear (MLG); Brake and Antiskid (B and AS) Electrical Power Distribution and Controls (EPD and C); Nose Wheel Steering (NWS); and Hydraulics Actuators. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Due to the lack of redundancy in the Landing/Deceleration Subsystems there is a high number of critical items.
14 CFR 33.70 - Engine life-limited parts.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., hubs, shafts, high-pressure casings, and non-redundant mount components. For the purposes of this... life before hazardous engine effects can occur. These steps include validated analysis, test, or... assessments to address the potential for failure from material, manufacturing, and service induced anomalies...
14 CFR 33.70 - Engine life-limited parts.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., hubs, shafts, high-pressure casings, and non-redundant mount components. For the purposes of this... life before hazardous engine effects can occur. These steps include validated analysis, test, or... assessments to address the potential for failure from material, manufacturing, and service induced anomalies...
40 CFR 86.527-90 - Test procedures, overview.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Section 86.527-90 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... constant volume (variable dilution) sampler. (d) Except in cases of component malfunction or failure, all... emissions measurements are made. For exhaust testing, this requires sampling and analysis of the dilution...
NASA Astrophysics Data System (ADS)
Spinner, Neil S.; Field, Christopher R.; Hammond, Mark H.; Williams, Bradley A.; Myers, Kristina M.; Lubrano, Adam L.; Rose-Pehrsson, Susan L.; Tuttle, Steven G.
2015-04-01
A 5-cubic meter decompression chamber was re-purposed as a fire test chamber to conduct failure and abuse experiments on lithium-ion batteries. Various modifications were performed to enable remote control and monitoring of chamber functions, along with collection of data from instrumentation during tests including high speed and infrared cameras, a Fourier transform infrared spectrometer, real-time gas analyzers, and compact reconfigurable input and output devices. Single- and multi-cell packages of LiCoO2 chemistry 18650 lithium-ion batteries were constructed and data was obtained and analyzed for abuse and failure tests. Surrogate 18650 cells were designed and fabricated for multi-cell packages that mimicked the thermal behavior of real cells without using any active components, enabling internal temperature monitoring of cells adjacent to the active cell undergoing failure. Heat propagation and video recordings before, during, and after energetic failure events revealed a high degree of heterogeneity; some batteries exhibited short bursts of sparks while others experienced a longer, sustained flame during failure. Carbon monoxide, carbon dioxide, methane, dimethyl carbonate, and ethylene carbonate were detected via gas analysis, and the presence of these species was consistent throughout all failure events. These results highlight the inherent danger in large format lithium-ion battery packs with regard to cell-to-cell failure, and illustrate the need for effective safety features.
Determination of Turbine Blade Life from Engine Field Data
NASA Technical Reports Server (NTRS)
Zaretsky, Erwin V.; Litt, Jonathan S.; Hendricks, Robert C.; Soditus, Sherry M.
2013-01-01
It is probable that no two engine companies determine the life of their engines or their components in the same way or apply the same experience and safety factors to their designs. Knowing the failure mode that is most likely to occur minimizes the amount of uncertainty and simplifies failure and life analysis. Available data regarding failure mode for aircraft engine blades, while favoring low-cycle, thermal-mechanical fatigue (TMF) as the controlling mode of failure, are not definitive. Sixteen high-pressure turbine (HPT) T-1 blade sets were removed from commercial aircraft engines that had been commercially flown by a single airline and inspected for damage. Each set contained 82 blades. The damage was cataloged into three categories related to their mode of failure: (1) TMF, (2) Oxidation/erosion (O/E), and (3) Other. From these field data, the turbine blade life was determined as well as the lives related to individual blade failure modes using Johnson-Weibull analysis. A simplified formula for calculating turbine blade life and reliability was formulated. The L10 blade life was calculated to be 2427 cycles (11 077 hr). The resulting blade life attributed to O/E equaled that attributed to TMF. The category that contributed most to blade failure was Other. If there were no blade failures attributed to O/E and TMF, the overall blade L10 life would increase approximately 11 to 17 percent.
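A brief sketch, on synthetic failure data, of the Weibull step behind an L10 estimate: fit shape and scale, then take the 10th percentile of the fitted distribution. The numbers are illustrative, and the sketch ignores censored (suspended) blades that a full Johnson-Weibull analysis would handle:

```python
from scipy import stats

# Synthetic blade failure data in cycles (illustrative, not the HPT field data).
failures = stats.weibull_min.rvs(c=2.5, scale=9000.0, size=60, random_state=1)

# Fit a two-parameter Weibull (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(failures, floc=0)

# L10 life: the number of cycles by which 10% of blades are expected to have failed.
L10 = stats.weibull_min.ppf(0.10, shape, loc=0, scale=scale)
print(f"Weibull shape = {shape:.2f}, scale = {scale:.0f} cycles, L10 = {L10:.0f} cycles")
```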
Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip
NASA Astrophysics Data System (ADS)
Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang
2016-09-01
Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate system-on-chip reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. According to the fault tree analysis of the system-on-chip, the critical blocks and the system reliability were evaluated through qualitative and quantitative analysis.
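The reliability figures mentioned (failure rate, unavailability, MTTF) are linked by standard constant-rate formulas; a tiny sketch with assumed rates, not the measured Zynq-7010 values:

```python
import math

# Standard constant-failure-rate relations; the rates below are assumptions.
failure_rate = 2.0e-6        # failures per hour (lambda)
repair_rate = 1.0 / 24.0     # repairs per hour (mu), i.e. a 24 h mean time to repair

mttf = 1.0 / failure_rate                                      # mean time to failure, hours
unavailability = failure_rate / (failure_rate + repair_rate)   # steady-state, repairable item
reliability_one_year = math.exp(-failure_rate * 8760.0)        # survival over one year

print(f"MTTF = {mttf:.3e} h")
print(f"unavailability = {unavailability:.3e}")
print(f"R(1 year) = {reliability_one_year:.4f}")
```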
NASA Astrophysics Data System (ADS)
Zhang, Ding; Zhang, Yingjie
2017-09-01
A framework for reliability and maintenance analysis of job shop manufacturing systems is proposed in this paper. An efficient preventive maintenance (PM) policy in terms of failure effects analysis (FEA) is proposed. Subsequently, reliability evaluation and component importance measures based on FEA are performed under the PM policy. A job shop manufacturing system is used to validate the reliability evaluation and the dynamic maintenance policy. The obtained results are compared with existing methods and the effectiveness is validated. Issues that are often understood only vaguely, such as network modelling, vulnerability identification, evaluation criteria for repairable systems, and the PM policy within manufacturing system reliability analysis, are elaborated. This framework can support reliability optimisation and the rational allocation of maintenance resources in job shop manufacturing systems.
NASA Technical Reports Server (NTRS)
Jadaan, Osama
2001-01-01
Present capabilities of the NASA CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code include probabilistic life prediction of ceramic components subjected to fast fracture, slow crack growth (stress corrosion), and cyclic fatigue failure modes. Currently, this code has the capability to compute the time-dependent reliability of ceramic structures subjected to simple time-dependent loading. For example, in slow crack growth (SCG) type failure conditions CARES/Life can handle the cases of sustained and linearly increasing time-dependent loads, while for cyclic fatigue applications various types of repetitive constant amplitude loads can be accounted for. In real applications applied loads are rarely that simple, but rather vary with time in more complex ways such as, for example, engine start up, shut down, and dynamic and vibrational loads. In addition, when a given component is subjected to transient environmental and or thermal conditions, the material properties also vary with time. The objective of this paper is to demonstrate a methodology capable of predicting the time-dependent reliability of components subjected to transient thermomechanical loads that takes into account the change in material response with time. In this paper, the dominant delayed failure mechanism is assumed to be SCG. This capability has been added to the NASA CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code, which has also been modified to have the ability of interfacing with commercially available FEA codes executed for transient load histories. An example involving a ceramic exhaust valve subjected to combustion cycle loads is presented to demonstrate the viability of this methodology and the CARES/Life program.
NASA Astrophysics Data System (ADS)
Chan, H. M.; van der Velden, B. H. M.; E Loo, C.; Gilhuijs, K. G. A.
2017-08-01
We present a radiomics model to discriminate between patients at low risk and those at high risk of treatment failure at long-term follow-up based on eigentumors: principal components computed from volumes encompassing tumors in washin and washout images of pre-treatment dynamic contrast-enhanced (DCE-) MR images. Eigentumors were computed from the images of 563 patients from the MARGINS study. Subsequently, a least absolute shrinkage selection operator (LASSO) selected candidates from the components that contained 90% of the variance of the data. The model for prediction of survival after treatment (median follow-up time 86 months) was based on logistic regression. Receiver operating characteristic (ROC) analysis was applied and area-under-the-curve (AUC) values were computed as measures of training and cross-validated performances. The discriminating potential of the model was confirmed using Kaplan-Meier survival curves and log-rank tests. From the 322 principal components that explained 90% of the variance of the data, the LASSO selected 28 components. The ROC curves of the model yielded AUC values of 0.88, 0.77 and 0.73, for the training, leave-one-out cross-validated and bootstrapped performances, respectively. The bootstrapped Kaplan-Meier survival curves confirmed significant separation for all tumors (P < 0.0001). Survival analysis on immunohistochemical subgroups shows significant separation for the estrogen-receptor subtype tumors (P < 0.0001) and the triple-negative subtype tumors (P = 0.0039), but not for tumors of the HER2 subtype (P = 0.41). The results of this retrospective study show the potential of early-stage pre-treatment eigentumors for use in prediction of treatment failure of breast cancer.
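A schematic sketch of the eigentumor pipeline on synthetic stand-in data: PCA retaining 90% of the variance, an L1-penalised (LASSO-style) logistic regression over the retained components, and cross-validated ROC AUC. Feature dimensions, labels and regularisation strength are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for flattened washin/washout tumour volumes and outcomes.
X = rng.normal(size=(563, 4096))        # 563 patients, voxel features (placeholder)
y = rng.integers(0, 2, size=563)        # treatment failure yes/no (placeholder labels)

model = make_pipeline(
    PCA(n_components=0.90, svd_solver="full"),   # keep components explaining 90% of variance
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=1000),  # LASSO-style selection
)

auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC (placeholder data, so expect ~0.5):", auc.mean())
```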
NASA Technical Reports Server (NTRS)
Childs, D. W.
1984-01-01
Rotational stability of turbopump components in the space shuttle main engine was studied via analysis of component and structural dynamic models. Subsynchronous vibration caused unacceptable migration of the rotor/housing unit with unequal load sharing of the synchronous bearings that resulted in the failure of the High Pressure Oxygen Turbopump. Linear analysis shows that a shrouded inducer eliminates the second critical speed and the stability problem, a stiffened rotor improves the rotordynamic characteristics of the turbopump, and installing damper boost/impeller seals reduces bearing loads. Nonlinear analysis shows that by increasing the "dead band" clearances, a marked reduction in peak bearing loads occurs.
Accelerated life assessment of coating on the radar structure components in coastal environment.
Liu, Zhe; Ming, ZhiMao
2016-07-04
This paper aimed to build an accelerated life test scheme and to carry out a quantitative analysis between accelerated life testing in the laboratory and actual service for a coating composed of epoxy primer and polyurethane paint on structural components of a radar operated in the coastal environment of the South China Sea. The accelerated life test scheme was built based on the service environment and failure analysis of the coating. The quantitative analysis between the accelerated life test and actual service was conducted by comparing the gloss loss, discoloration, chalking, blistering, cracking and electrochemical impedance spectroscopy of the coating. The main factors leading to coating failure were ultraviolet radiation, temperature, moisture, salt fog and loads; accordingly, the accelerated life test included ultraviolet radiation, damp heat, thermal shock, fatigue and salt spray. It was established that one cycle of the accelerated life test was equal to one year of actual service, which provides a precise way for the manufacturer to predict the actual service life of newly developed coatings.
Performance-based maintenance of gas turbines for reliable control of degraded power systems
NASA Astrophysics Data System (ADS)
Mo, Huadong; Sansavini, Giovanni; Xie, Min
2018-03-01
Maintenance actions are necessary for ensuring proper operation of control systems under component degradation. However, current condition-based maintenance (CBM) models based on component health indices are not suitable for degraded control systems. Indeed, failures of control systems are determined only by the controller outputs, and the feedback mechanism compensates for the control performance loss caused by component deterioration. Thus, control systems may still operate normally even if the component health indices exceed failure thresholds. This work investigates a CBM model of control systems and employs the reduced control performance as a direct degradation measure for deciding maintenance activities. The reduced control performance depends on the underlying component degradation, modelled as a Wiener process, and on the feedback mechanism. To this aim, the controller features are quantified by developing a dynamic and stochastic control block diagram-based simulation model, consisting of the degraded components and the control mechanism. At each inspection, the system receives a maintenance action if the control performance deterioration exceeds its preventive-maintenance or failure thresholds. Inspired by realistic cases, the component degradation model considers random start time and unit-to-unit variability. The cost analysis of the maintenance model is conducted via Monte Carlo simulation. Optimal maintenance strategies are investigated to minimize the expected maintenance costs, which are a direct consequence of the control performance. The proposed framework is applied to design preventive maintenance actions for a gas power plant, ensuring the required load-frequency control performance against a sudden load increase. The optimization results identify the trade-off between system downtime and maintenance costs as a function of preventive maintenance thresholds and inspection frequency. Finally, the control performance-based maintenance model can reduce maintenance costs compared to CBM and pre-scheduled maintenance.
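To make the maintenance logic concrete, here is a minimal Monte Carlo sketch of a periodic-inspection CBM policy on a Wiener-process degradation path, loosely in the spirit of the abstract; the degradation variable stands in for the control-performance loss, and all thresholds, costs, and Wiener parameters are invented rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_policy(pm_threshold, failure_threshold=10.0, inspect_dt=5.0,
                    drift=0.08, vol=0.3, horizon=500.0,
                    c_inspect=1.0, c_pm=20.0, c_cm=100.0, n_runs=2000):
    """Expected cost rate of a periodic-inspection CBM policy on a Wiener degradation path."""
    costs = []
    for _ in range(n_runs):
        x, t, cost = 0.0, 0.0, 0.0
        while t < horizon:
            t += inspect_dt
            # Wiener-process increment of the control-performance loss
            x += drift * inspect_dt + vol * np.sqrt(inspect_dt) * rng.normal()
            cost += c_inspect
            if x >= failure_threshold:      # corrective maintenance (failure found)
                cost += c_cm
                x = 0.0
            elif x >= pm_threshold:         # preventive maintenance
                cost += c_pm
                x = 0.0
        costs.append(cost / horizon)
    return np.mean(costs)

for thr in (4.0, 6.0, 8.0):
    print(f"PM threshold {thr}: cost rate {simulate_policy(thr):.3f}")
```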
Influence of Finite Element Size in Residual Strength Prediction of Composite Structures
NASA Technical Reports Server (NTRS)
Satyanarayana, Arunkumar; Bogert, Philip B.; Karayev, Kazbek Z.; Nordman, Paul S.; Razi, Hamid
2012-01-01
The sensitivity of failure load to the element size used in a progressive failure analysis (PFA) of carbon composite center notched laminates is evaluated. The sensitivity study employs a PFA methodology previously developed by the authors consisting of Hashin-Rotem intra-laminar fiber and matrix failure criteria and a complete stress degradation scheme for damage simulation. The approach is implemented with a user defined subroutine in the ABAQUS/Explicit finite element package. The effect of element size near the notch tips on residual strength predictions was assessed for a brittle failure mode with a parametric study that included three laminates of varying material system, thickness and stacking sequence. The study resulted in the selection of an element size of 0.09 in. x 0.09 in., which was later used for predicting crack paths and failure loads in sandwich panels and monolithic laminated panels. Comparison of predicted crack paths and failure loads for these panels agreed well with experimental observations. Additionally, the element size vs. normalized failure load relationship, determined in the parametric study, was used to evaluate strength-scaling factors for three different element sizes. The failure loads predicted with all three element sizes converged to that corresponding to the 0.09 in. x 0.09 in. element size. Though preliminary in nature, the strength-scaling concept has the potential to greatly reduce the computational time required for PFA and can enable the analysis of large scale structural components where failure is dominated by fiber failure in tension.
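For readers unfamiliar with the failure criteria named above, a minimal plane-stress sketch of Hashin-Rotem fiber and matrix failure indices follows; the ply stresses and strengths are invented, and the actual PFA additionally applies the complete stress-degradation scheme inside the ABAQUS user subroutine.

```python
def hashin_rotem(s11, s22, t12, Xt, Xc, Yt, Yc, S):
    """Hashin-Rotem intra-laminar failure indices for a unidirectional ply (plane stress).

    Returns (fiber_index, matrix_index); a value >= 1 indicates failure of that mode.
    """
    fiber = (s11 / Xt) ** 2 if s11 >= 0 else (s11 / Xc) ** 2
    Y = Yt if s22 >= 0 else Yc
    matrix = (s22 / Y) ** 2 + (t12 / S) ** 2
    return fiber, matrix

# Hypothetical ply stresses (MPa) and strengths (MPa)
print(hashin_rotem(s11=1500.0, s22=30.0, t12=40.0,
                   Xt=2000.0, Xc=1200.0, Yt=50.0, Yc=200.0, S=80.0))
```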
Dynamically induced cascading failures in power grids.
Schäfer, Benjamin; Witthaut, Dirk; Timme, Marc; Latora, Vito
2018-05-17
Reliable functioning of infrastructure networks is essential for our modern society. Cascading failures are the cause of most large-scale network outages. Although cascading failures often exhibit dynamical transients, the modeling of cascades has so far mainly focused on the analysis of sequences of steady states. In this article, we focus on electrical transmission networks and introduce a framework that takes into account both the event-based nature of cascades and the essentials of the network dynamics. We find that transients of the order of seconds in the flows of a power grid play a crucial role in the emergence of collective behaviors. We finally propose a forecasting method to identify critical lines and components in advance or during operation. Overall, our work highlights the relevance of dynamically induced failures on the synchronization dynamics of national power grids of different European countries and provides methods to predict and model cascading failures.
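The paper's point is that quasi-static cascade models miss the dynamical transients; purely to illustrate the baseline overload-redistribution mechanism it improves upon, here is a small quasi-static DC-power-flow cascade on an invented four-bus grid. Susceptances, injections, the triggering contingency, and the line limit are all hypothetical.

```python
import numpy as np

# Toy 4-bus grid: lines as (i, j, susceptance); bus 0 is the reference bus
lines = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 1.0), (1, 3, 1.0)]
P = np.array([2.0, -0.5, -0.5, -1.0])   # injections (generation > 0), sum to zero
capacity = 1.1                           # identical thermal limit on every line (assumed)

def dc_flows(active):
    """Quasi-static DC power flow on the surviving lines."""
    n = len(P)
    B = np.zeros((n, n))
    for i, j, b in active:
        B[i, i] += b; B[j, j] += b; B[i, j] -= b; B[j, i] -= b
    theta = np.zeros(n)
    theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])   # raises if the grid has split
    return {(i, j): b * (theta[i] - theta[j]) for i, j, b in active}

# Initial contingency: trip line 0-3, then redistribute and check overloads repeatedly
active = [ln for ln in lines if (ln[0], ln[1]) != (0, 3)]
while True:
    try:
        flows = dc_flows(active)
    except np.linalg.LinAlgError:
        print("grid has split into islands -> partial blackout")
        break
    tripped = [ln for ln in active if abs(flows[(ln[0], ln[1])]) > capacity]
    if not tripped:
        print("cascade stopped; surviving flows:",
              {k: round(v, 2) for k, v in flows.items()})
        break
    print("overload trips:", [(i, j) for i, j, _ in tripped])
    active = [ln for ln in active if ln not in tripped]
```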
NASA Technical Reports Server (NTRS)
Kennedy, Barbara J.
2004-01-01
The purpose of this study is to compare the current Space Shuttle Ground Support Equipment (GSE) infrastructure with the proposed GSE infrastructure upgrade modification. The methodology includes analyzing the first prototype installation equipment at Launch Pad B, called the "Pathfinder". The study begins by comparing the failure rate of the current components associated with the Hardware Interface Module (HIM) at the Kennedy Space Center to the failure rate of the new Pathfinder components. Quantitative data were gathered specifically on HIM components and on the Pad B Hypergolic Fuel facility and Hypergolic Oxidizer facility areas, which have the upgraded Pathfinder equipment installed. The proposed upgrades include utilizing industrial control modules, software, and a fiber optic network. The results of this study provide evidence that there is a significant difference in the failure rates of the two studied infrastructure equipment components. There is also evidence that the support staff for each infrastructure system is not equal. A recommendation to continue with future upgrades is based on a significant reduction of failures in the newly installed ground system components.
Evaluation of ENEPIG and Immersion Silver Surface Finishes Under Drop Loading
NASA Astrophysics Data System (ADS)
Pearl, Adam; Osterman, Michael; Pecht, Michael
2016-01-01
The effect of printed circuit board surface finish on the drop loading reliability of ball grid array (BGA) solder interconnects has been examined. The finishes examined include electroless nickel/electroless palladium/immersion gold (ENEPIG) and immersion silver (ImAg). For the ENEPIG finish, the effect of the Pd plating layer thickness was evaluated by testing two different thicknesses: 0.05 μm and 0.15 μm. BGA components were assembled onto the boards using either eutectic Sn-Pb or Sn-3.0Ag-0.5Cu (SAC305) solder. Prior to testing, the assembled boards were aged at 100°C for 24 h or 500 h. The boards were then subjected to multiple 1500-g drop tests. Failure analysis indicated the primary failure site for the BGAs to be the solder balls at the board-side solder interface. Cratering of the board laminate under the solder-attached pads was also observed. In all cases, isothermal aging reduced the number of drops to failure. The components soldered onto the boards with the 0.15-μm-Pd ENEPIG finish with the SAC305 solder had the highest characteristic life, at 234 drops to failure, compared with the other finish-solder combinations.
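Characteristic life in drop testing is typically reported from a two-parameter Weibull fit. As a generic sketch of that calculation (the drop counts below are invented, not the paper's data), median-rank regression recovers the shape parameter and the 63.2% characteristic life.

```python
import numpy as np

def weibull_mrr(failures):
    """Two-parameter Weibull fit by median-rank regression.

    Returns (beta, eta): shape parameter and characteristic life (63.2% point).
    """
    x = np.sort(np.asarray(failures, dtype=float))
    n = len(x)
    ranks = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # Bernard's median ranks
    X = np.log(x)
    Y = np.log(-np.log(1.0 - ranks))
    beta, intercept = np.polyfit(X, Y, 1)
    eta = np.exp(-intercept / beta)
    return beta, eta

# Hypothetical drops-to-failure for one finish/solder combination
drops = [96, 142, 180, 225, 260, 301, 355]
beta, eta = weibull_mrr(drops)
print(f"shape beta = {beta:.2f}, characteristic life eta = {eta:.0f} drops")
```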
Failure Predictions for VHTR Core Components using a Probabilistic Continuum Damage Mechanics Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fok, Alex
2013-10-30
The proposed work addresses the key research need for the development of constitutive models and overall failure models for graphite and high temperature structural materials, with the long-term goal being to maximize the design life of the Next Generation Nuclear Plant (NGNP). To this end, the capability of a Continuum Damage Mechanics (CDM) model, which has been used successfully for modeling fracture of virgin graphite, will be extended as a predictive and design tool for the core components of the very high-temperature reactor (VHTR). Specifically, irradiation and environmental effects pertinent to the VHTR will be incorporated into the model to allow fracture of graphite and ceramic components under in-reactor conditions to be modeled explicitly using the finite element method. The model uses a combined stress-based and fracture mechanics-based failure criterion, so it can simulate both the initiation and propagation of cracks. Modern imaging techniques, such as x-ray computed tomography and digital image correlation, will be used during material testing to help define the baseline material damage parameters. Monte Carlo analysis will be performed to address inherent variations in material properties, the aim being to reduce the arbitrariness and uncertainties associated with the current statistical approach. The results can potentially contribute to the current development of American Society of Mechanical Engineers (ASME) codes for the design and construction of VHTR core components.
Probabilistic Prediction of Lifetimes of Ceramic Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Gyekenyesi, John P.; Jadaan, Osama M.; Palfi, Tamas; Powers, Lynn; Reh, Stefan; Baker, Eric H.
2006-01-01
ANSYS/CARES/PDS is a software system that combines the ANSYS Probabilistic Design System (PDS) software with a modified version of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) Version 6.0 software. [A prior version of CARES/Life was reported in Program for Evaluation of Reliability of Ceramic Parts (LEW-16018), NASA Tech Briefs, Vol. 20, No. 3 (March 1996), page 28.] CARES/Life models effects of stochastic strength, slow crack growth, and stress distribution on the overall reliability of a ceramic component. The essence of the enhancement in CARES/Life 6.0 is the capability to predict the probability of failure using results from transient finite-element analysis. ANSYS PDS models the effects of uncertainty in material properties, dimensions, and loading on the stress distribution and deformation. ANSYS/CARES/PDS accounts for the effects of probabilistic strength, probabilistic loads, probabilistic material properties, and probabilistic tolerances on the lifetime and reliability of the component. Even failure probability becomes a stochastic quantity that can be tracked as a response variable. ANSYS/CARES/PDS enables tracking of all stochastic quantities in the design space, thereby enabling more precise probabilistic prediction of lifetimes of ceramic components.
DOT National Transportation Integrated Search
2010-01-01
The Smart Grid is a cyber-physical system comprised of physical components, such as transmission lines and generators, and a network of embedded systems deployed for their cyber control. Our objective is to qualitatively and quantitatively analyze ...
Fatigue analysis of composite materials using the fail-safe concept
NASA Technical Reports Server (NTRS)
Stievenard, G.
1982-01-01
If R1 is the probability of having a crack on a flight component and R2 is the probability of seeing this crack propagate between two scheduled inspections, the global failure regulation states that this product must not exceed 0.0000001.
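A numerical reading of that rule, with invented probabilities purely to show the check:

```python
R1 = 1e-4   # assumed probability of a crack appearing on the flight component
R2 = 5e-4   # assumed probability that the crack propagates between two inspections
limit = 1e-7

product = R1 * R2
print(f"R1*R2 = {product:.1e} -> {'acceptable' if product <= limit else 'NOT acceptable'}")
```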
Development and Validation of a Lifecycle-based Prognostics Architecture with Test Bed Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hines, J. Wesley; Upadhyaya, Belle; Sharp, Michael
On-line monitoring and tracking of nuclear plant system and component degradation is being investigated as a method for improving the safety, reliability, and maintainability of aging nuclear power plants. Accurate prediction of the current degradation state of system components and structures is important for accurate estimates of their remaining useful life (RUL). The correct quantification and propagation of both the measurement uncertainty and model uncertainty is necessary for quantifying the uncertainty of the RUL prediction. This research project developed and validated methods to perform RUL estimation throughout the lifecycle of plant components. Prognostic methods should seamlessly operate from beginning of component life (BOL) to end of component life (EOL). We term this "Lifecycle Prognostics." When a component is put into use, the only information available may be past failure times of similar components used in similar conditions, and the predicted failure distribution can be estimated with reliability methods such as Weibull Analysis (Type I Prognostics). As the component operates, it begins to degrade and consume its available life. This life consumption may be a function of system stresses, and the failure distribution should be updated to account for the system operational stress levels (Type II Prognostics). When degradation becomes apparent, this information can be used to again improve the RUL estimate (Type III Prognostics). This research focused on developing prognostics algorithms for the three types of prognostics, developing uncertainty quantification methods for each of the algorithms, and, most importantly, developing a framework using Bayesian methods to transition between prognostic model types and update failure distribution estimates as new information becomes available. The developed methods were then validated on a range of accelerated degradation test beds. The ultimate goal of prognostics is to provide an accurate assessment for RUL predictions, with as little uncertainty as possible. From a reliability and maintenance standpoint, there would be improved safety by avoiding all failures. Calculated risk would decrease, saving money by avoiding unnecessary maintenance. One major bottleneck for data-driven prognostics is the availability of run-to-failure degradation data. Without enough degradation data leading to failure, prognostic models can yield RUL distributions with large uncertainty or mathematically unsound predictions. To address these issues a "Lifecycle Prognostics" method was developed to create RUL distributions from Beginning of Life (BOL) to End of Life (EOL). This employs established Type I, II, and III prognostic methods, and Bayesian transitioning between each Type. Bayesian methods, as opposed to classical frequency statistics, show how an expected value, a priori, changes with new data to form a posterior distribution. For example, when you purchase a component you have a prior belief, or estimation, of how long it will operate before failing. As you operate it, you may collect information related to its condition that will allow you to update your estimated failure time. Bayesian methods are best used when limited data are available. The use of a prior also means that information is conserved when new data are available.
The weightings of the prior belief and information contained in the sampled data are dependent on the variance (uncertainty) of the prior, the variance (uncertainty) of the data, and the amount of measured data (number of samples). If the variance of the prior is small compared to the uncertainty of the data, the prior will be weighed more heavily. However, as more data are collected, the data will be weighted more heavily and will eventually swamp out the prior in calculating the posterior distribution of model parameters. Fundamentally, Bayesian analysis updates a prior belief with new data to get a posterior belief. The general approach to applying the Bayesian method to lifecycle prognostics consisted of identifying the prior, which is the RUL estimate and uncertainty from the previous prognostics type, and combining it with observational data related to the newer prognostics type. The resulting lifecycle prognostics algorithm uses all available information throughout the component lifecycle.
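A minimal numerical illustration of the prior-to-posterior weighting described above uses a conjugate Gamma prior on a constant failure rate with Poisson-observed failures; the abstract's framework also spans Weibull and degradation models, and the parameters here are invented.

```python
import numpy as np

# Type I-style prior: fleet-wide experience summarized as a Gamma prior on the
# failure rate lambda (per 1000 h); alpha0/beta0 encode the prior mean and spread.
alpha0, beta0 = 4.0, 8.0          # prior mean 0.5 / 1000 h, fairly uncertain (assumed)

# Unit-specific operating evidence: failures observed over an exposure time
observed_failures = 1
exposure_khrs = 12.0

# Conjugate Gamma-Poisson update: posterior = prior weighted against the new data
alpha_post = alpha0 + observed_failures
beta_post = beta0 + exposure_khrs

prior_mean = alpha0 / beta0
post_mean = alpha_post / beta_post
print(f"prior rate     {prior_mean:.3f} /1000 h (sd {np.sqrt(alpha0)/beta0:.3f})")
print(f"posterior rate {post_mean:.3f} /1000 h (sd {np.sqrt(alpha_post)/beta_post:.3f})")
print(f"posterior mean time to failure ~ {1.0/post_mean:.1f} x 1000 h")
```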
LDEF electronic systems: Successes, failures, and lessons
NASA Technical Reports Server (NTRS)
Miller, Emmett; Porter, Dave; Smith, Dave; Brooks, Larry; Levorsen, Joe; Mulkey, Owen
1991-01-01
Following the Long Duration Exposure Facility (LDEF) retrieval, the Systems Special Investigation Group (SIG) participated in an extensive series of tests of various electronic systems, including the NASA provided data and initiate systems, and some experiment systems. Overall, these were found to have performed remarkably well, even though most were designed and tested under limited budgets and used at least some nonspace qualified components. However, several anomalies were observed, including a few which resulted in some loss of data. The postflight test program objectives, observations, and lessons learned from these examinations are discussed. All analyses are not yet complete, but observations to date will be summarized, including the Boeing experiment component studies and failure analysis results related to the Interstellar Gas Experiment. Based upon these observations, suggestions for avoiding similar problems on future programs are presented.
System diagnostics using qualitative analysis and component functional classification
Reifman, J.; Wei, T.Y.C.
1993-11-23
A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system. 5 figures.
NASA Technical Reports Server (NTRS)
Owens, Andrew; De Weck, Olivier L.; Stromgren, Chel; Goodliff, Kandyce; Cirillo, William
2017-01-01
Future crewed missions to Mars present a maintenance logistics challenge that is unprecedented in human spaceflight. Mission endurance – defined as the time between resupply opportunities – will be significantly longer than previous missions, and therefore logistics planning horizons are longer and the impact of uncertainty is magnified. Maintenance logistics forecasting typically assumes that component failure rates are deterministically known and uses them to represent aleatory uncertainty, or uncertainty that is inherent to the process being examined. However, failure rates cannot be directly measured; rather, they are estimated based on similarity to other components or statistical analysis of observed failures. As a result, epistemic uncertainty – that is, uncertainty in knowledge of the process – exists in failure rate estimates that must be accounted for. Analyses that neglect epistemic uncertainty tend to significantly underestimate risk. Epistemic uncertainty can be reduced via operational experience; for example, the International Space Station (ISS) failure rate estimates are refined using a Bayesian update process. However, design changes may re-introduce epistemic uncertainty. Thus, there is a tradeoff between changing a design to reduce failure rates and operating a fixed design to reduce uncertainty. This paper examines the impact of epistemic uncertainty on maintenance logistics requirements for future Mars missions, using data from the ISS Environmental Control and Life Support System (ECLS) as a baseline for a case study. Sensitivity analyses are performed to investigate the impact of variations in failure rate estimates and epistemic uncertainty on spares mass. The results of these analyses and their implications for future system design and mission planning are discussed.
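The contrast between a point-estimate failure rate and one carrying epistemic uncertainty can be shown with a small Monte Carlo sketch; it is not the paper's ECLS model, and the mission duration, MTBF, error factor, and confidence level below are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

mission_hours = 1100 * 24            # ~3-year mission endurance (assumed)
mtbf_hours = 20000.0                 # point estimate of the component MTBF (assumed)
confidence = 0.99                    # required probability of sufficiency

# Aleatory-only treatment: Poisson demand at the deterministic point-estimate rate
lam = mission_hours / mtbf_hours
spares_point = stats.poisson.ppf(confidence, lam)

# Epistemic treatment: the failure rate itself is uncertain (lognormal, error factor ~3)
error_factor = 3.0
sigma = np.log(error_factor) / 1.645
rates = rng.lognormal(mean=np.log(1.0 / mtbf_hours), sigma=sigma, size=20000)
demand = rng.poisson(rates * mission_hours)          # mix epistemic and aleatory draws
spares_epistemic = np.quantile(demand, confidence)

print(f"spares, point-estimate rate:        {spares_point:.0f}")
print(f"spares, with epistemic uncertainty: {spares_epistemic:.0f}")
```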
NASA Astrophysics Data System (ADS)
Flynn, J. William; Goodfellow, Sebastian; Reyes-Montes, Juan; Nasseri, Farzine; Young, R. Paul
2016-04-01
Continuous acoustic emission (AE) data recorded during rock deformation tests facilitates the monitoring of fracture initiation and propagation due to applied stress changes. Changes in the frequency and energy content of AE waveforms have been previously observed and were associated with microcrack coalescence and the induction or mobilisation of large fractures which are naturally associated with larger amplitude AE events and lower-frequency components. The shift from high to low dominant frequency components during the late stages of the deformation experiment, as the rate of AE events increases and the sample approaches failure, indicates a transition from the micro-cracking to macro-cracking regime, where large cracks generated result in material failure. The objective of this study is to extract information on the fracturing process from the acoustic records around sample failure, where the fast occurrence of AE events does not allow for identification of individual AE events and phase arrivals. Standard AE event processing techniques are not suitable for extracting this information at these stages. Instead the observed changes in the frequency content of the continuous record can be used to characterise and investigate the fracture process at the stage of microcrack coalescence and sample failure. To analyse and characterise these changes, a detailed non-linear and non-stationary time-frequency analysis of the continuous waveform data is required. Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA) are two of the techniques used in this paper to analyse the acoustic records which provide a high-resolution temporal frequency distribution of the data. In this paper we present the results from our analysis of continuous AE data recorded during a laboratory triaxial deformation experiment using the combined EMD and HSA method.
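As a rough illustration of the signal-processing idea above (not the authors' full EMD + HSA workflow, which would need an EMD package such as PyEMD), the snippet below applies the Hilbert transform to a synthetic AE-like burst whose dominant frequency falls over time, and recovers the instantaneous frequency; the sampling rate and chirp parameters are invented.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1_000_000.0                       # 1 MHz AE sampling rate (assumed)
t = np.arange(0, 0.002, 1.0 / fs)

# Synthetic burst whose dominant frequency falls from 300 kHz to 60 kHz,
# mimicking the micro- to macro-cracking frequency shift described above
f_inst_true = 300e3 - (240e3 / t[-1]) * t
phase = 2 * np.pi * np.cumsum(f_inst_true) / fs       # numerical integral of f dt
x = np.exp(-((t - 0.001) / 4e-4) ** 2) * np.sin(phase)

analytic = hilbert(x)
amplitude = np.abs(analytic)
f_inst = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)

mid = len(t) // 2
print(f"instantaneous frequency near burst centre: {f_inst[mid]/1e3:.0f} kHz")
```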
Application of Function-Failure Similarity Method to Rotorcraft Component Design
NASA Technical Reports Server (NTRS)
Roberts, Rory A.; Stone, Robert E.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)
2002-01-01
Performance and safety are the top concerns of high-risk aerospace applications at NASA. Eliminating or reducing performance and safety problems can be achieved with a thorough understanding of potential failure modes in the designs that lead to these problems. The majority of techniques use prior knowledge and experience, as well as Failure Modes and Effects Analyses, to determine potential failure modes of aircraft. During the design of aircraft, a general technique is needed to ensure that every potential failure mode is considered, while avoiding spending time on improbable failure modes. In this work, this is accomplished by mapping failure modes to specific components, which are described by their functionality. The failure modes are then linked to the basic functions that are carried within the components of the aircraft. Using this technique, designers can examine the basic functions, and select appropriate analyses to eliminate or design out the potential failure modes. The fundamentals of this method were previously introduced for a simple rotating machine test rig with basic functions that are common to a rotorcraft. In this paper, this technique is applied to the engine and power train of a rotorcraft, using failures and functions obtained from accident reports and engineering drawings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, D.; Pappacena, K.; Gaviria, J.
2010-10-11
The U.S. Department of Energy and its contractor, Bechtel Jacobs Company (BJC), are undertaking a major effort to clean up the former gaseous diffusion facility (K-25) located in Oak Ridge, TN. The decontamination and decommissioning activities require systematic removal of contaminated equipment and machinery followed by demolition of the buildings. As part of the cleanup activities, a beam clamp, used for horizontal life lines (HLLs) for fall protection, was discovered to be fractured during routine inspection. The beam clamp (yoke and D-ring) was a component in the HLL system purchased from Reliance Industries LLC. Specifically, the U-shaped stainless steel yoke of the beam clamp failed in a brittle mode at less than 10% of the rated design capacity of 14,500 lb. The beam clamp had been in service for approximately 16 months. Bechtel Jacobs approached Argonne National Laboratory to assist in identifying the root cause of the failure of the beam clamp. The objectives of this study were to (1) review the prior reports and documents on the subject, (2) understand the possible failure mechanism(s) that resulted in the failed beam clamp components, (3) recommend approaches to mitigate the failure mechanism(s), and (4) evaluate the modified beam clamp assemblies. Energy dispersive x-ray analysis and chemical analysis of the corrosion products on the failed yoke and white residue on an in-service yoke indicated the presence of zinc, sulfur, and calcium. Analysis of rainwater in the complex, as conducted by BJC, indicated the presence of sulfur and calcium. It was concluded that, as a result of galvanic corrosion, zinc from the galvanized components of the beam clamp assembly (D-ring) migrated to the corroded region in the presence of the rainwater. Under mechanical stress, the corrosion process would have accelerated, resulting in the catastrophic failure of the yoke. As suggested by Bechtel Jacobs personnel, hydrogen embrittlement as a consequence of corrosion was also explored as a failure mechanism. Corroded and failed yoke samples had hydrogen concentrations of 20-60 ppm. However, the hydrogen content reduced to 4-11 ppm (similar to baseline as-received yoke samples) when the corrosion products were polished off. The hydrogen content in the scraped-off corrosion product powders was >7000 ppm. These results indicate that hydrogen is primarily present in the corrosion products and not in the underlying steel. Rockwell hardness values on the corroded yoke and D-rings were Rc ≈ 41-46. It was recommended to the beam clamp manufacturer that the beam clamp components be annealed to reduce the hardness values so that they are less susceptible to brittle failure. Upon annealing, hardness values of the beam clamp components reduced to Rc ≈ 25. Several strategies were recommended and put in place to mitigate failure of the beam clamp components: (a) maintain hardness levels of both yokes and D-rings at Rc < 35, (b) coat the yoke and D-rings with a dual coating of nickel (with 10% phosphorus) to delay corrosion and aluminum to prevent galvanic corrosion since it is more anodic to zinc, and (c) optimize coating thicknesses for nickel and aluminum while maintaining the physical integrity of the coatings. Evaluation of the Al- and Ni-coated yoke and D-ring specimens indicated they appear to have met the recommendations. Average hardness values of the dual-coated yokes were Rc ≈ 25-35. Hardness values of the dual-coated D-rings were Rc ≈ 32.
Measured average coating thicknesses for the aluminum and nickel coatings for yoke samples were 22 μm (0.9 mils) and 80 μm (3 mils), respectively. The D-rings also showed similar coating thicknesses. Microscopic examination showed that the aluminum coating was well bonded to the underlying nickel coating. Some observed damage was believed to be an artifact of the cutting-and-polishing steps during sample preparation for microscopy.
Analysis and Test Correlation of Proof of Concept Box for Blended Wing Body-Low Speed Vehicle
NASA Technical Reports Server (NTRS)
Spellman, Regina L.
2003-01-01
The Low Speed Vehicle (LSV) is a 14.2% scale remotely piloted vehicle of the revolutionary Blended Wing Body concept. The design of the LSV includes an all-composite airframe. Due to internal manufacturing capability restrictions, room temperature layups were necessary. An extensive materials testing and manufacturing process development effort was undertaken to establish a process that would achieve the high modulus/low weight properties required to meet the design requirements. The analysis process involved a loads development effort that incorporated aero loads to determine internal forces that could be applied to a traditional FEM of the vehicle and to conduct detailed component analyses. A new tool, Hypersizer, was added to the design process to address various composite failure modes and to optimize the skin panel thickness of the upper and lower skins for the vehicle. The analysis required an iterative approach as material properties were continually changing. As a part of the material characterization effort, test articles, including a proof of concept wing box and a full-scale wing, were fabricated. The proof of concept box was fabricated based on very preliminary material studies and tested in bending, torsion, and shear. The box was then tested to failure under shear. The proof of concept box was also analyzed using Nastran and Hypersizer. The results of both analyses were scaled to determine the predicted failure load. The test results were compared to both the Nastran and Hypersizer analytical predictions. The actual failure occurred at 899 lbs. The failure was predicted at 1167 lbs based on the Nastran analysis. The Hypersizer analysis predicted a lower failure load of 960 lbs. The Nastran analysis alone was not sufficient to predict the failure load because it does not identify local composite failure modes. This analysis has traditionally been done using closed form solutions. Although Hypersizer is typically used as an optimizer for the design process, the failure prediction was used to help gain acceptance and confidence in this new tool. The correlated models and process were to be used to analyze the full BWB-LSV airframe design. The analysis and correlation with test results of the proof of concept box are presented here, including the comparison of the Nastran and Hypersizer results.
Reliability analysis of C-130 turboprop engine components using artificial neural network
NASA Astrophysics Data System (ADS)
Qattan, Nizar A.
In this study, we predict the failure rate of the Lockheed C-130 engine turbine. More than thirty years of local operational field data were used for failure rate prediction and validation. The Weibull regression model and artificial neural network models (including feed-forward back-propagation, radial basis function, and multilayer perceptron networks) are utilized to perform this study. For this purpose, the thesis is divided into five major parts. The first part deals with the Weibull regression model to predict the turbine general failure rate, and the rate of failures that require overhaul maintenance. The second part covers the Artificial Neural Network (ANN) model utilizing the feed-forward back-propagation algorithm as a learning rule. The MATLAB package is used to build and design a code to simulate the given data; the inputs to the neural network are the independent variables, and the outputs are the general failure rate of the turbine and the failures which required overhaul maintenance. In the third part we predict the general failure rate of the turbine and the failures which require overhaul maintenance using a radial basis neural network model in the MATLAB toolbox. In the fourth part we compare the predictions of the feed-forward back-propagation model with those of the Weibull regression model and the radial basis neural network model. The results show that the failure rate predicted by the feed-forward back-propagation model, which is in close agreement with the radial basis network prediction, matches the actual field data more closely than the failure rate predicted by the Weibull model. By the end of the study, we forecast the general failure rate of the Lockheed C-130 engine turbine, the failures which required overhaul maintenance, and six categorical failures using a multilayer perceptron (MLP) neural network model in the DTREG commercial software. The results also give an insight into the reliability of the engine turbine under actual operating conditions, which can be used by aircraft operators for assessing system and component failures and customizing the maintenance programs recommended by the manufacturer.
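To give a flavor of the two model families being compared, here is a toy sketch fitting a Weibull-type hazard with scipy and a small feed-forward network with scikit-learn (rather than MATLAB/DTREG); the data are synthetic, not the C-130 field data, and the parameters are invented.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def weibull_hazard(t, beta, eta):
    """Weibull hazard (failure rate): h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1.0)

# Hypothetical field data: operating age (in 1000 h) vs. observed failure rate (per 1000 h)
age = np.linspace(1.0, 20.0, 40)
rate = weibull_hazard(age, 2.5, 15.0) + rng.normal(0.0, 0.02, age.size)

# Parametric (Weibull-regression-style) fit of the hazard
(beta, eta), _ = curve_fit(weibull_hazard, age, rate, p0=(2.0, 10.0),
                           bounds=([0.5, 1.0], [6.0, 100.0]))

# Feed-forward (multilayer perceptron) fit of the same trend
mlp = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
mlp.fit(age.reshape(-1, 1), rate)

t_q = 12.0   # query age, 12 000 h
print(f"Weibull fit: beta={beta:.2f}, eta={eta:.1f} k-hours, "
      f"rate(12 kh)={weibull_hazard(t_q, beta, eta):.3f} /1000 h")
print(f"MLP rate(12 kh)={mlp.predict([[t_q]])[0]:.3f} /1000 h")
```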
Failure Analysis of Sapphire Refractive Secondary Concentrators
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Quinn, George D.
2009-01-01
Failure analysis was performed on two sapphire, refractive secondary concentrators (RSC) that failed during elevated temperature testing. Both concentrators failed from machining/handling damage on the lens face. The first concentrator, which failed during testing to 1300 C, exhibited a large r-plane twin extending from the lens through much of the cone. The second concentrator, which was an attempt to reduce temperature gradients and failed during testing to 649 C, exhibited a few small twins on the lens face. The twins were not located at the origin, but represent another mode of failure that needs to be considered in the design of sapphire components. In order to estimate the fracture stress from fractographic evidence, branching constants were measured on sapphire strength specimens. The fractographic analysis indicated radial tensile stresses of 44 to 65 MPa on the lens faces near the origins. Finite element analysis indicated similar stresses for the first RSC, but lower stresses for the second RSC. Better machining and handling might have prevented the fractures, however, temperature gradients and resultant thermal stresses need to be reduced to prevent twinning.
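The fractographic stress estimate quoted above typically comes from the crack-branching relation sigma_f = A_b / sqrt(R_b), where A_b is the branching constant and R_b the branching radius. The snippet below only shows the arithmetic; the values are invented for illustration, not the measured sapphire constants from the paper.

```python
import math

def stress_from_branching(A_b_mpa_sqrt_m, R_b_m):
    """Fractographic estimate: sigma_f = A_b / sqrt(R_b)."""
    return A_b_mpa_sqrt_m / math.sqrt(R_b_m)

# Hypothetical numbers for illustration only
A_b = 10.0        # branching constant, MPa*sqrt(m)
R_b = 0.025       # branching radius, m (25 mm)
print(f"estimated failure stress ~ {stress_from_branching(A_b, R_b):.0f} MPa")
```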
Code of Federal Regulations, 2013 CFR
2013-01-01
... installed swimming pool slide shall be such that no structural failures of any component part shall cause failures of any other component part of the slide as described in the performance tests in paragraphs (d)(4... number and placement of such fasteners shall not cause a failure of the tread under the ladder loading...
Failure Analysis on Tail Rotor Teeter Pivot Bolt on a Helicopter
NASA Astrophysics Data System (ADS)
Wang, Qiang; Dong, Zi-long
2018-03-01
The tail rotor teeter pivot bolt of a helicopter fractured during a flight. Failure analysis of the bolt was carried out in the laboratory. Macroscopic observation of the tail rotor teeter pivot bolt and macro- and microscopic inspection of the fracture surface were performed, and chemical composition and metallurgical structure analyses were also carried out. The experimental results showed that the fracture mode of the tail rotor teeter pivot bolt is fatigue fracture. The fatigue area covers over 80% of the total fracture surface, and obvious fatigue band characteristics can be found on the fracture face. The results were analyzed from macroscopic and microscopic aspects, and the causes of the fracture of the tail rotor teeter pivot bolt were analyzed in detail.
Remote maintenance monitoring system
NASA Technical Reports Server (NTRS)
Simpkins, Lorenz G. (Inventor); Owens, Richard C. (Inventor); Rochette, Donn A. (Inventor)
1992-01-01
A remote maintenance monitoring system retrofits to a given hardware device with a sensor implant which gathers and captures failure data from the hardware device, without interfering with its operation. Failure data is continuously obtained from predetermined critical points within the hardware device, and is analyzed with a diagnostic expert system, which isolates failure origin to a particular component within the hardware device. For example, monitoring of a computer-based device may include monitoring of parity error data therefrom, as well as monitoring power supply fluctuations therein, so that parity error and power supply anomaly data may be used to trace the failure origin to a particular plane or power supply within the computer-based device. A plurality of sensor implants may be retrofit to corresponding plural devices comprising a distributed large-scale system. Transparent interface of the sensors to the devices precludes operative interference with the distributed network. Retrofit capability of the sensors permits monitoring of even older devices having no built-in testing technology. Continuous real time monitoring of a distributed network of such devices, coupled with diagnostic expert system analysis thereof, permits capture and analysis of even intermittent failures, thereby facilitating maintenance of the monitored large-scale system.
A geometric approach to failure detection and identification in linear systems
NASA Technical Reports Server (NTRS)
Massoumnia, M. A.
1986-01-01
Using concepts of (C,A)-invariant and unobservability (complementary observability) subspaces, a geometric formulation of the failure detection and identification filter problem is stated. Using these geometric concepts, it is shown that it is possible to design a causal linear time-invariant processor that can be used to detect and uniquely identify a component failure in a linear time-invariant system, under either of two assumptions: (1) the components can fail simultaneously, or (2) the components can fail only one at a time. In addition, a geometric formulation of Beard's failure detection filter problem is stated. This new formulation completely clarifies the concepts of output separability and mutual detectability introduced by Beard and also exploits the dual relationship between a restricted version of the failure detection and identification problem and the control decoupling problem. Moreover, the frequency domain interpretation of the results is used to relate the concepts of failure sensitive observers with the generalized parity relations introduced by Chow. This interpretation unifies the various failure detection and identification concepts and design procedures.
Product Quality Improvement Using FMEA for Electric Parking Brake (EPB)
NASA Astrophysics Data System (ADS)
Dumitrescu, C. D.; Gruber, G. C.; Tişcă, I. A.
2016-08-01
One of the most frequently used methods to improve product quality is FMEA (Failure Modes and Effects Analysis). Various FMEA variants are known in the literature, depending on the application and the targets; among them are Process Failure Modes and Effects Analysis and Failure Mode, Effects and Criticality Analysis (FMECA). Whatever option is supported by the work team, the goal of the method is the same: to optimize product design activities in research and design, to optimize the implementation of manufacturing processes, and to optimize operation of the product by its beneficiaries. According to a market survey conducted on parts suppliers to vehicle manufacturers, the FMEA method is used by 75% of them. One purpose of applying the method is to detect any remaining errors after research and product development are considered complete; another purpose is to initiate appropriate measures to avoid mistakes. Achieving these two goals means that errors are avoided already in the design phase of the product, thereby avoiding the emergence of additional costs in later stages of product manufacturing. Standardized forms are used during application of the FMEA method; with their help, the initial assemblies of the product structure are established, in which all components are initially considered error-free. The work is an application of the FMEA method to optimize the quality of the components of the electric parking brake (EPB). This is a component attached to the wheel braking system which replaces the conventional mechanical parking brake in the vehicle while ensuring comfort, functionality, and durability, and saving space in the passenger compartment. The paper describes the levels used in applying FMEA, the working arrangements at the four distinct levels of analysis, and how the Risk Priority Number (RPN) is determined; it also presents the analysis of risk factors and the measures established by the authors to reduce or eliminate risk in this complex product.
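Since the abstract centers on determining the Risk Priority Number, here is a minimal bookkeeping sketch (RPN = Severity x Occurrence x Detection); the items and ratings are invented, not taken from the paper.

```python
# Minimal FMEA worksheet sketch; severity/occurrence/detection scores are invented.
failure_modes = [
    {"item": "EPB actuator gear",   "mode": "tooth wear",           "S": 7, "O": 4, "D": 3},
    {"item": "EPB control ECU",     "mode": "connector corrosion",  "S": 8, "O": 2, "D": 5},
    {"item": "Spindle-nut assembly","mode": "thread seizure",       "S": 9, "O": 3, "D": 4},
]

for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]     # Risk Priority Number

# Rank the failure modes by descending RPN to prioritize corrective measures
for fm in sorted(failure_modes, key=lambda f: f["RPN"], reverse=True):
    print(f'{fm["RPN"]:>4}  {fm["item"]:<20} {fm["mode"]}')
```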
Time-frequency vibration analysis for the detection of motor damages caused by bearing currents
NASA Astrophysics Data System (ADS)
Prudhom, Aurelien; Antonino-Daviu, Jose; Razik, Hubert; Climente-Alarcon, Vicente
2017-02-01
Motor failure due to bearing currents is an issue that has drawn an increasing industrial interest over recent years. Bearing currents usually appear in motors operated by variable frequency drives (VFD); these drives may produce common-mode voltages which induce currents in the motor shaft that are discharged through the bearings. The presence of these currents may lead to motor bearing failure only a few months after system startup. Vibration monitoring is one of the most common ways for detecting bearing damage caused by circulating currents; the evaluation of the amplitudes of well-known characteristic components in the vibration Fourier spectrum that are associated with race, ball or cage defects enables evaluation of the bearing condition and, hence, identification of an eventual damage due to bearing currents. However, the inherent constraints of the Fourier transform may complicate the detection of the progressive bearing degradation; for instance, in some cases, other frequency components may mask or be confused with bearing defect-related ones while, in other cases, the analysis may not be suitable due to the eventual non-stationary nature of the captured vibration signals. Moreover, the fact that this analysis discards the time dimension limits the amount of information obtained from this technique. This work proposes the use of time-frequency (T-F) transforms to analyse vibration data in motors affected by bearing currents. The experimental results obtained in real machines show that the vibration analysis via T-F tools may provide significant advantages for the detection of bearing current damage; among others, these techniques enable visualisation of the progressive degradation of the bearing while providing an effective discrimination versus other components that are not related to the fault. Moreover, their application is valid regardless of the operation regime of the machine. Both factors confirm the robustness and reliability of these tools, which may be an interesting alternative for detecting this type of failure in induction motors.
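As a rough illustration of the signal-processing ideas above (not the authors' specific T-F transform), the snippet computes an outer-race defect frequency from assumed bearing geometry and tracks the growth of a resonance band over time with a short-time Fourier transform on a synthetic vibration signal; all bearing and signal parameters are invented.

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(0)

# Outer-race defect frequency (BPFO) from assumed bearing geometry
n_balls, f_rot, d_over_D, contact_deg = 9, 30.0, 0.3, 0.0   # hypothetical bearing
bpfo = 0.5 * n_balls * f_rot * (1 - d_over_D * np.cos(np.radians(contact_deg)))
print(f"BPFO ~ {bpfo:.1f} Hz")

# Synthetic vibration: a structural resonance near 2 kHz, amplitude-modulated at the
# BPFO rate, growing over time to mimic progressive fluting damage
fs = 10_000
t = np.arange(0, 5.0, 1 / fs)
x = 0.1 * rng.standard_normal(t.size) \
    + 0.5 * (t / t[-1]) * (1 + np.sin(2 * np.pi * bpfo * t)) * np.sin(2 * np.pi * 2000 * t)

f, frames, Z = stft(x, fs=fs, nperseg=1024)
band = (f > 1800) & (f < 2200)
energy_vs_time = np.abs(Z[band]).mean(axis=0)
print("resonance-band energy, first vs last frame:",
      round(float(energy_vs_time[0]), 4), round(float(energy_vs_time[-1]), 4))
```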
NASA Technical Reports Server (NTRS)
Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron
1994-01-01
This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.
Deriving Function-failure Similarity Information for Failure-free Rotorcraft Component Design
NASA Technical Reports Server (NTRS)
Roberts, Rory A.; Stone, Robert B.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)
2002-01-01
Performance and safety are the top concerns of high-risk aerospace applications at NASA. Eliminating or reducing performance and safety problems can be achieved with a thorough understanding of potential failure modes in the design that lead to these problems. The majority of techniques use prior knowledge and experience, as well as Failure Modes and Effects Analyses, to determine potential failure modes of aircraft. The aircraft design needs to be passed through a general technique to ensure that every potential failure mode is considered, while avoiding spending time on improbable failure modes. In this work, this is accomplished by mapping failure modes to certain components, which are described by their functionality. In turn, the failure modes are then linked to the basic functions that are carried within the components of the aircraft. Using the technique proposed in this paper, designers can examine the basic functions, and select appropriate analyses to eliminate or design out the potential failure modes. This method was previously applied to a simple rotating machine test rig with basic functions that are common to a rotorcraft. In this paper, this technique is applied to the engine and power train of a rotorcraft, using failures and functions obtained from accident reports and engineering drawings.
Evaluation of a Linear Cumulative Damage Failure Model for Epoxy Adhesive
NASA Technical Reports Server (NTRS)
Richardson, David E.; Batista-Rodriquez, Alicia; Macon, David; Totman, Peter; McCool, Alex (Technical Monitor)
2001-01-01
Recently a significant amount of work has been conducted to provide more complex and accurate material models for use in the evaluation of adhesive bondlines. Some of this has been prompted by recent studies into the effects of residual stresses on the integrity of bondlines. Several techniques have been developed for the analysis of bondline residual stresses. Key to these analyses is the criterion that is used for predicting failure. Residual stress loading of an adhesive bondline can occur over the life of the component. For many bonded systems, this can be several years. It is impractical to directly characterize failure of adhesive bondlines under a constant load for several years. Therefore, alternative approaches for predictions of bondline failures are required. In the past, cumulative damage failure models have been developed. These models have ranged from very simple to very complex. This paper documents the generation and evaluation of some of the most simple linear damage accumulation tensile failure models for an epoxy adhesive. This paper shows how several variations on the failure model were generated and presents an evaluation of the accuracy of these failure models in predicting creep failure of the adhesive. The paper shows that a simple failure model can be generated from short-term failure data for accurate predictions of long-term adhesive performance.
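A linear damage-accumulation model of the kind described can be as simple as a Miner-type summation of time-fraction damage; the sketch below uses invented short-term creep-failure data and an invented residual-stress service history, purely to show the bookkeeping.

```python
def miners_damage(history, life_at_stress):
    """Linear (Miner-type) damage accumulation: D = sum(t_i / T_i); failure predicted at D >= 1.

    history: list of (stress_level, applied_duration); life_at_stress: level -> time to failure.
    """
    return sum(duration / life_at_stress[level] for level, duration in history)

# Hypothetical short-term creep-failure data for the adhesive (hours to failure at constant load)
life_at_stress = {"high": 50.0, "medium": 2_000.0, "low": 200_000.0}

# Hypothetical service history: mostly low stress, occasional thermal excursions (hours)
history = [("low", 60_000.0), ("medium", 300.0), ("high", 4.0)]

D = miners_damage(history, life_at_stress)
print(f"accumulated damage D = {D:.2f} -> {'failure predicted' if D >= 1 else 'survives'}")
```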
Materials Examination of the Vertical Stabilizer from American Airlines Flight 587
NASA Technical Reports Server (NTRS)
Fox, Matthew R.; Schultheisz, Carl R.; Reeder, James R.; Jensen, Brian J.
2005-01-01
The first in-flight failure of a primary structural component made from composite material on a commercial airplane led to the crash of American Airlines Flight 587. As part of the National Transportation Safety Board investigation of the accident, the composite materials of the vertical stabilizer were tested, microstructure was analyzed, and fractured composite lugs that attached the vertical stabilizer to the aircraft tail were examined. In this paper the materials testing and analysis is presented, composite fractures are described, and the resulting clues to the failure events are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santhanagopalan, Shriram; Smith, Kandler A; Graf, Peter A
NREL's Energy Storage team is exploring the effect of mechanical crush of lithium ion cells on their thermal and electrical safety. PHEV cells, fresh as well as ones aged over 8 months under different temperatures, voltage windows, and charging rates, were subjected to destructive physical analysis. Constitutive relationships and failure criteria were developed for the electrodes, separator, and packaging material. The mechanical models capture well the various modes of failure across different cell components. Cell-level validation is being conducted by Sandia National Laboratories.
Environmental testing to prevent on-orbit TDRS failures
NASA Technical Reports Server (NTRS)
Cutler, Robert M.
1994-01-01
Can improved environmental testing prevent on-orbit component failures such as those experienced in the Tracking and Data Relay Satellite (TDRS) constellation? TDRS communications have been available to user spacecraft continuously for over 11 years, during which the five TDRS's placed in orbit have demonstrated their redundancies and robustness by surviving 26 component failures. Nevertheless, additional environmental testing prior to launch could prevent the occurrence of some types of failures, and could help to maintain communication services. Specific testing challenges involve traveling wave tube assemblies (TWTA's) whose lives may decrease with on-off cycling, and heaters that are subject to thermal cycles. The development of test conditions and procedures should account for known thermal variations. Testing may also have the potential to prevent failures in which components such as diplexers have had their lives dramatically shortened because of particle migration in a weightless environment. Reliability modeling could be used to select additional components that could benefit from special testing, but experience shows that this approach has serious limitations. Through knowledge of on-orbit experience, and with advances in testing, communication satellite programs might avoid the occurrence of some types of failures, and extend future spacecraft longevity beyond the current TDRS design life of ten years. However, determining which components to test, and how much testing to do, remains problematic.
Gambetta, Miguel; Dunn, Patrick; Nelson, Dawn; Herron, Bobbi; Arena, Ross
2007-01-01
The purpose of the present investigation is to examine the impact of a telemanagement component on an outpatient disease management program in patients with heart failure (HF). A total of 282 patients in whom HF was diagnosed and who were enrolled in an outpatient HF program were included in this analysis. One hundred fifty-eight patients additionally participated in a self-directed telemanagement component. The remaining 124 patients received care at an HF clinic but declined telemanagement. During the 7-month tracking period, 19 patients in the HF clinic plus telemanagement group and 53 patients in the HF clinic only group were hospitalized for cardiac reasons (log rank, 36.0; P<.001). The HF clinic only group had a significantly higher risk for hospitalization (hazard ratio, 4.0; 95% confidence interval, 2.4-6.7; P<.001). The results of the present study indicate that telemanagement is an important component of a disease management program in patients with HF.
Failure Analysis of Nonvolatile Residue (NVR) Analyzer Model SP-1000
NASA Technical Reports Server (NTRS)
Potter, Joseph C.
2011-01-01
National Aeronautics and Space Administration (NASA) subcontractor Wiltech contacted the NASA Electrical Lab (NE-L) and requested a failure analysis of a Solvent Purity Meter, model SP-1000, produced by the VerTis Instrument Company. The meter, used to measure the contaminant in a solvent to determine the relative contamination on spacecraft flight hardware and ground servicing equipment, had been inoperable and in storage for an unknown amount of time. NE-L was asked to troubleshoot the unit and make a determination on what may be required to make the unit operational. Through the use of general troubleshooting processes and the review of a unit in service at the time of analysis, the unit was found to be repairable but would need the replacement of multiple components.
Determination of Turbine Blade Life from Engine Field Data
NASA Technical Reports Server (NTRS)
Zaretsky, Erwin V.; Litt, Jonathan S.; Hendricks, Robert C.; Soditus, Sherry M.
2012-01-01
It is probable that no two engine companies determine the life of their engines or their components in the same way or apply the same experience and safety factors to their designs. Knowing the failure mode that is most likely to occur minimizes the amount of uncertainty and simplifies failure and life analysis. Available data regarding failure mode for aircraft engine blades, while favoring low-cycle, thermal mechanical fatigue as the controlling mode of failure, are not definitive. Sixteen high-pressure turbine (HPT) T-1 blade sets were removed from commercial aircraft engines that had been commercially flown by a single airline and inspected for damage. Each set contained 82 blades. The damage was cataloged into three categories related to their mode of failure: (1) Thermal-mechanical fatigue, (2) Oxidation/Erosion, and (3) "Other." From these field data, the turbine blade life was determined as well as the lives related to individual blade failure modes using Johnson-Weibull analysis. A simplified formula for calculating turbine blade life and reliability was formulated. The L(sub 10) blade life was calculated to be 2427 cycles (11 077 hr). The resulting blade life attributed to oxidation/erosion equaled that attributed to thermal-mechanical fatigue. The category that contributed most to blade failure was Other. If there were no blade failures attributed to oxidation/erosion and thermal-mechanical fatigue, the overall blade L(sub 10) life would increase approximately 11 to 17 percent.
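As an illustration of the final step, the sketch below shows how an L(sub 10) life follows from a two-parameter Weibull model once a shape parameter and characteristic life have been estimated; the parameter values are hypothetical and are not the blade data from this study.

```python
import math

def weibull_life(eta, beta, survival=0.90):
    """Life at which the given fraction of the population survives, for a
    two-parameter Weibull reliability model R(t) = exp(-(t / eta)**beta)."""
    return eta * (-math.log(survival)) ** (1.0 / beta)

# Hypothetical Weibull slope and characteristic life, for illustration only.
beta = 2.0       # shape (Weibull slope)
eta = 16000.0    # characteristic life, cycles
print(f"L10 life: {weibull_life(eta, beta, survival=0.90):.0f} cycles")
```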
Mass and Reliability System (MaRS)
NASA Technical Reports Server (NTRS)
Barnes, Sarah
2016-01-01
The Safety and Mission Assurance (S&MA) Directorate is responsible for mitigating risk, providing system safety, and lowering risk for space programs from ground to space. The S&MA is divided into 4 divisions: The Space Exploration Division (NC), the International Space Station Division (NE), the Safety & Test Operations Division (NS), and the Quality and Flight Equipment Division (NT). The interns, myself and Arun Aruljothi, will be working with the Risk & Reliability Analysis Branch under the NC Division. The mission of this division is to identify, characterize, diminish, and communicate risk by implementing an efficient and effective assurance model. The team utilizes Reliability and Maintainability (R&M) and Probabilistic Risk Assessment (PRA) to ensure decisions concerning risks are informed, vehicles are safe and reliable, and program/project requirements are realistic and realized. This project pertains to the Orion mission, so it is geared toward long duration Human Space Flight Program(s). For space missions, payload is a critical concept; balancing what hardware can be replaced by components versus by Orbital Replacement Units (ORU) or subassemblies is key. For this effort a database was created that combines mass and reliability data, called Mass and Reliability System or MaRS. The U.S. International Space Station (ISS) components are used as reference parts in the MaRS database. Using ISS components as a platform is beneficial because of the historical context and the environment similarities to a space flight mission. MaRS uses a combination of systems: International Space Station PART for failure data, Vehicle Master Database (VMDB) for ORU & components, Maintenance & Analysis Data Set (MADS) for operation hours and other pertinent data, & Hardware History Retrieval System (HHRS) for unit weights. MaRS is populated using a Visual Basic Application. Once populated, the Excel spreadsheet comprises information on ISS components including: operation hours, random/nonrandom failures, software/hardware failures, quantity, orbital replaceable units (ORU), date of placement, unit weight, frequency of part, etc. The motivation for creating such a database is the development of a mass/reliability parametric model to estimate the mass required for replacement parts. Once complete, engineers working on future space flight missions will have access to mean-time-to-failure data for parts along with their mass, which will be used to make proper decisions for long-duration space flight missions.
Studies on the thermal breakdown of common Li-ion battery electrolyte components
Lamb, Joshua; Orendorff, Christopher J.; Roth, Emanuel Peter; ...
2015-08-06
While much attention is paid to the impact of the active materials on the catastrophic failure of lithium ion batteries, much of the severity of a battery failure is also governed by the electrolytes used, which are typically flammable themselves and can decompose during battery failure. The use of LiPF6 salt can be problematic as well, not only catalyzing electrolyte decomposition, but also providing a mechanism for HF production. This work evaluates the safety performance of the common components ethylene carbonate (EC), diethyl carbonate (DEC), dimethyl carbonate (DMC), and ethyl methyl carbonate (EMC) in the context of the gases produced during thermal decomposition, looking at both the quantity and composition of the vapor produced. EC and DEC were found to be the largest contributors to gas production, both producing upwards of 1.5 moles of gas per mole of electrolyte. DMC was found to be relatively stable, producing very little gas regardless of the presence of LiPF6. EMC was stable on its own, but the addition of LiPF6 catalyzed decomposition of the solvent. As a result, while gas analysis did not show evidence of significant quantities of any acutely toxic materials, the gases themselves all contained enough flammable components to potentially ignite in air.
Ceramics Analysis and Reliability Evaluation of Structures (CARES). Users and programmers manual
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Manderscheid, Jane M.; Gyekenyesi, John P.
1990-01-01
This manual describes how to use the Ceramics Analysis and Reliability Evaluation of Structures (CARES) computer program. The primary function of the code is to calculate the fast fracture reliability or failure probability of macroscopically isotropic ceramic components. These components may be subjected to complex thermomechanical loadings, such as those found in heat engine applications. The program uses results from MSC/NASTRAN or ANSYS finite element analysis programs to evaluate component reliability due to inherent surface and/or volume type flaws. CARES utilizes the Batdorf model and the two-parameter Weibull cumulative distribution function to describe the effect of multiaxial stress states on material strength. The principle of independent action (PIA) and the Weibull normal stress averaging models are also included. Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities are estimated from four-point bend bar or uniform uniaxial tensile specimen fracture strength data. Parameter estimation can be performed for single or multiple failure modes by using least-squares analysis or the maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests, ninety percent confidence intervals on the Weibull parameters, and Kanofsky-Srinivasan ninety percent confidence band values are also provided. The probabilistic fast-fracture theories used in CARES, along with the input and output for CARES, are described. Example problems to demonstrate various features of the program are also included. This manual describes the MSC/NASTRAN version of the CARES program.
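The principle of independent action mentioned above can be sketched in a few lines. The following is a simplified illustration of a PIA-style volume-flaw failure probability computed from per-element principal stresses, not the CARES implementation itself; the element stresses, volumes, and Weibull parameters are invented for the example.

```python
import numpy as np

def pia_failure_probability(principal_stresses, volumes, m, sigma_0):
    """Fast-fracture failure probability under the principle of independent
    action (PIA) for volume flaws:
        Pf = 1 - exp(-sum_e V_e * sum_i (sigma_i / sigma_0)**m),
    where only tensile principal stresses contribute to the risk of rupture."""
    s = np.clip(np.asarray(principal_stresses, dtype=float), 0.0, None)
    risk_of_rupture = np.sum(volumes * np.sum((s / sigma_0) ** m, axis=1))
    return 1.0 - np.exp(-risk_of_rupture)

# Hypothetical per-element principal stresses (MPa), element volumes (mm^3),
# Weibull modulus m, and scale parameter sigma_0, chosen for illustration only.
stresses = [[220.0, 90.0, -30.0], [180.0, 40.0, 10.0]]
volumes = np.array([2.0, 3.5])
print(pia_failure_probability(stresses, volumes, m=10.0, sigma_0=400.0))
```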
NASA Astrophysics Data System (ADS)
Nemeth, Noel N.; Jadaan, Osama M.; Palfi, Tamas; Baker, Eric H.
Brittle materials today are being used, or considered, for a wide variety of high tech applications that operate in harsh environments, including static and rotating turbine parts, thermal protection systems, dental prosthetics, fuel cells, oxygen transport membranes, radomes, and MEMS. Designing brittle material components to sustain repeated load without fracturing while using the minimum amount of material requires the use of a probabilistic design methodology. The NASA CARES/Life (Ceramics Analysis and Reliability Evaluation of Structures/Life) code provides a general-purpose analysis tool that predicts the probability of failure of a ceramic component as a function of its time in service. This capability includes predicting the time-dependent failure probability of ceramic components against catastrophic rupture when subjected to transient thermomechanical loads (including cyclic loads). The developed methodology allows for changes in material response that can occur with temperature or time (i.e., changing fatigue and Weibull parameters with temperature or time). This article provides an overview of the transient reliability methodology and describes how it is extended to account for proof testing. The CARES/Life code has been modified to have the ability to interface with commercially available finite element analysis (FEA) codes executed for transient load histories. Examples are provided to demonstrate the features of the methodology as implemented in the CARES/Life program.
A computer program for cyclic plasticity and structural fatigue analysis
NASA Technical Reports Server (NTRS)
Kalev, I.
1980-01-01
A computerized tool for the analysis of time independent cyclic plasticity structural response, life to crack initiation prediction, and crack growth rate prediction for metallic materials is described. Three analytical items are combined: the finite element method with its associated numerical techniques for idealization of the structural component, cyclic plasticity models for idealization of the material behavior, and damage accumulation criteria for the fatigue failure.
Advanced Signal Conditioners for Data-Acquisition Systems
NASA Technical Reports Server (NTRS)
Lucena, Angel; Perotti, Jose; Eckhoff, Anthony; Medelius, Pedro
2004-01-01
Signal conditioners embodying advanced concepts in analog and digital electronic circuitry and software have been developed for use in data-acquisition systems that are required to be compact and lightweight, to utilize electric energy efficiently, and to operate with high reliability, high accuracy, and high power efficiency, without intervention by human technicians. These signal conditioners were originally intended for use aboard spacecraft. There are also numerous potential terrestrial uses - especially in the fields of aeronautics and medicine, wherein it is necessary to monitor critical functions. Going beyond the usual analog and digital signal-processing functions of prior signal conditioners, the new signal conditioner performs the following additional functions: It continuously diagnoses its own electronic circuitry, so that it can detect failures and repair itself (as described below) within seconds. It continuously calibrates itself on the basis of a highly accurate and stable voltage reference, so that it can continue to generate accurate measurement data, even under extreme environmental conditions. It repairs itself in the sense that it contains a micro-controller that reroutes signals among redundant components as needed to maintain the ability to perform accurate and stable measurements. It detects deterioration of components, predicts future failures, and/or detects imminent failures by means of a real-time analysis in which, among other things, data on its present state are continuously compared with locally stored historical data. It minimizes unnecessary consumption of electric energy. The design architecture divides the signal conditioner into three main sections: an analog signal section, a digital module, and a power-management section. The design of the analog signal section does not follow the traditional approach of ensuring reliability through total redundancy of hardware: Instead, following an approach called spare parts tool box, the reliability of each component is assessed in terms of such considerations as risks of damage, mean times between failures, and the effects of certain failures on the performance of the signal conditioner as a whole system. Then, fewer or more spares are assigned for each affected component, pursuant to the results of this analysis, in order to obtain the required degree of reliability of the signal conditioner as a whole system. The digital module comprises one or more processors and field-programmable gate arrays, the number of each depending on the results of the aforementioned analysis. The digital module provides redundant control, monitoring, and processing of several analog signals. It is designed to minimize unnecessary consumption of electric energy, including, when possible, going into a low-power "sleep" mode that is implemented in firmware. The digital module communicates with external equipment via a personal-computer serial port. The digital module monitors the "health" of the rest of the signal conditioner by processing defined measurements and/or trends. It automatically makes adjustments to respond to channel failures, compensate for effects of temperature, and maintain calibration.
X-framework: Space system failure analysis framework
NASA Astrophysics Data System (ADS)
Newman, John Steven
Space program and space systems failures result in financial losses in the multi-hundred million dollar range every year. In addition to financial loss, space system failures may also represent the loss of opportunity, loss of critical scientific, commercial and/or national defense capabilities, as well as loss of public confidence. The need exists to improve learning and expand the scope of lessons documented and offered to the space industry project team. One of the barriers to incorporating lessons learned is the way in which space system failures are documented. Multiple classes of space system failure information are identified, ranging from "sound bite" summaries in space insurance compendia, to articles in journals, lengthy data-oriented (what happened) reports, and in some rare cases, reports that treat not only the what, but also the why. In addition there are periodically published "corporate crisis" reports, typically issued after multiple or highly visible failures that explore management roles in the failure, often within a politically oriented context. Given the general lack of consistency, it is clear that a good multi-level space system/program failure framework with analytical and predictive capability is needed. This research effort set out to develop such a model. The X-Framework (x-fw) is proposed as an innovative forensic failure analysis approach, providing a multi-level understanding of the space system failure event beginning with the proximate cause, extending to the directly related work or operational processes and upward through successive management layers. The x-fw focus is on capability and control at the process level and examines: (1) management accountability and control, (2) resource and requirement allocation, and (3) planning, analysis, and risk management at each level of management. The x-fw model provides an innovative failure analysis approach for acquiring a multi-level perspective, direct and indirect causation of failures, and generating better and more consistent reports. Through this approach failures can be more fully understood, existing programs can be evaluated and future failures avoided. The x-fw development involved a review of the historical failure analysis and prevention literature, coupled with examination of numerous failure case studies. Analytical approaches included use of a relational failure "knowledge base" for classification and sorting of x-fw elements and attributes for each case. In addition a novel "management mapping" technique was developed as a means of displaying an integrated snapshot of indirect causes within the management chain. Further research opportunities will extend the depth of knowledge available for many of the component level cases. In addition, the x-fw has the potential to expand the scope of space sector lessons learned, and contribute to knowledge management and organizational learning.
Code of Federal Regulations, 2010 CFR
2010-01-01
... STANDARDS: TRANSPORT CATEGORY AIRPLANES Design and Construction Landing Gear § 25.721 General. (a) The main... one or more landing gear legs not extended without sustaining a structural component failure that is... provisions of this section may be shown by analysis or tests, or both. [Amdt. 25-32, 37 FR 3969, Feb. 24...
A Diagnostic Approach for Electro-Mechanical Actuators in Aerospace Systems
NASA Technical Reports Server (NTRS)
Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai Frank; Stoelting, Paul; Curran, Simon
2009-01-01
Electro-mechanical actuators (EMA) are finding increasing use in aerospace applications, especially with the trend towards all-electric aircraft and spacecraft designs. However, electro-mechanical actuators still lack the knowledge base accumulated for other fielded actuator types, particularly with regard to fault detection and characterization. This paper presents a thorough analysis of some of the critical failure modes documented for EMAs and describes experiments conducted on detecting and isolating a subset of them. The list of failures has been prepared through an extensive Failure Modes and Criticality Analysis (FMECA) reference, literature review, and accessible industry experience. Methods for data acquisition and validation of algorithms on EMA test stands are described. A variety of condition indicators were developed that enabled detection, identification, and isolation among the various fault modes. A diagnostic algorithm based on an artificial neural network is shown to operate successfully using these condition indicators and, furthermore, robustness of these diagnostic routines to sensor faults is demonstrated by showing their ability to distinguish between them and component failures. The paper concludes with a roadmap leading from this effort towards developing successful prognostic algorithms for electromechanical actuators.
Modelling river bank retreat by combining fluvial erosion, seepage and mass failure
NASA Astrophysics Data System (ADS)
Dapporto, S.; Rinaldi, M.
2003-04-01
Streambank erosion processes contribute significantly to the sediment yielded from a river system and represent an important issue in the contexts of soil degradation and river management. Bank retreat is controlled by a complex interaction of hydrologic, geotechnical, and hydraulic processes. The capability of modelling these different components allows for a full reconstruction and comprehension of the causes and rates of bank erosion. River bank retreat during a single flow event has been modelled by combining simulation of fluvial erosion, seepage, and mass failures. The study site, along the Sieve River (Central Italy), has been subject to extensive research, including monitoring of pore water pressures for a period of 4 years. The simulation reconstructs the observed changes fairly faithfully, and is used to: a) test the potential and discuss advantages and limitations of this type of methodology for modelling bank retreat; b) quantify the contribution and mutual role of the different processes determining bank retreat. The hydrograph of the event is divided into a series of time steps. Modelling of the riverbank retreat includes for each step the following components: a) fluvial erosion and consequent changes in bank geometry; b) finite element seepage analysis; c) stability analysis by limit equilibrium method. Direct fluvial shear erosion is computed using empirically derived relationships expressing lateral erosion rate as a function of the excess of shear stress over the critical entrainment value for the different materials along the bank profile. Lateral erosion rate has been calibrated on the basis of the total bank retreat measured by digital terrestrial photogrammetry. Finite element seepage analysis is then conducted to reconstruct the saturated and unsaturated flow within the bank and the pore water pressure distribution for each time step. The safety factor for mass failures is then computed, using the pore water pressure distribution obtained by the seepage analysis, and the geometry of the upper bank is modified in case of failure.
NASA Astrophysics Data System (ADS)
Marhadi, Kun Saptohartyadi
Structural optimization for damage tolerance under various unforeseen damage scenarios is computationally challenging. It couples non-linear progressive failure analysis with sampling-based stochastic analysis of random damage. The goal of this research was to understand the relationship between alternate load paths available in a structure and its damage tolerance, and to use this information to develop computationally efficient methods for designing damage tolerant structures. Progressive failure of a redundant truss structure subjected to small random variability was investigated to identify features that correlate with robustness and predictability of the structure's progressive failure. The identified features were used to develop numerical surrogate measures that permit computationally efficient deterministic optimization to achieve robustness and predictability of progressive failure. Analysis of damage tolerance on designs with robust progressive failure indicated that robustness and predictability of progressive failure do not guarantee damage tolerance. Damage tolerance requires a structure to redistribute its load to alternate load paths. In order to investigate the load distribution characteristics that lead to damage tolerance in structures, designs with varying degrees of damage tolerance were generated using brute force stochastic optimization. A method based on principal component analysis was used to describe load distributions (alternate load paths) in the structures. Results indicate that a structure that can develop alternate paths is not necessarily damage tolerant. The alternate load paths must have a required minimum load capability. Robustness analysis of damage tolerant optimum designs indicates that designs are tailored to specified damage. A design optimized under one damage specification can be sensitive to other damage scenarios not considered. The effectiveness of existing load path definitions and characterizations was investigated for continuum structures. A load path definition using a relative compliance change measure (U* field) was demonstrated to be the most useful measure of load path. This measure provides quantitative information on load path trajectories and qualitative information on the effectiveness of the load path. The use of the U* description of load paths in optimizing structures for effective load paths was investigated.
Software Considerations for Subscale Flight Testing of Experimental Control Laws
NASA Technical Reports Server (NTRS)
Murch, Austin M.; Cox, David E.; Cunningham, Kevin
2009-01-01
The NASA AirSTAR system has been designed to address the challenges associated with safe and efficient subscale flight testing of research control laws in adverse flight conditions. In this paper, software elements of this system are described, with an emphasis on components which allow for rapid prototyping and deployment of aircraft control laws. Through model-based design and automatic coding a common code-base is used for desktop analysis, piloted simulation and real-time flight control. The flight control system provides the ability to rapidly integrate and test multiple research control laws and to emulate component or sensor failures. Integrated integrity monitoring systems provide aircraft structural load protection, isolate the system from control algorithm failures, and monitor the health of telemetry streams. Finally, issues associated with software configuration management and code modularity are briefly discussed.
NASA Technical Reports Server (NTRS)
2001-01-01
Qualtech Systems, Inc. developed a complete software system with capabilities of multisignal modeling, diagnostic analysis, run-time diagnostic operations, and intelligent interactive reasoners. Commercially available as the TEAMS (Testability Engineering and Maintenance System) tool set, the software can be used to reveal unanticipated system failures. The TEAMS software package is broken down into four companion tools: TEAMS-RT, TEAMATE, TEAMS-KB, and TEAMS-RDS. TEAMS-RT identifies good, bad, and suspect components in the system in real-time. It reports system health results from onboard tests, and detects and isolates failures within the system, allowing for rapid fault isolation. TEAMATE takes over from where TEAMS-RT left off by intelligently guiding the maintenance technician through the troubleshooting procedure, repair actions, and operational checkout. TEAMS-KB serves as a model management and collection tool. TEAMS-RDS (TEAMS-Remote Diagnostic Server) has the ability to continuously assess a system and isolate any failure in that system or its components, in real time. RDS incorporates TEAMS-RT, TEAMATE, and TEAMS-KB in a large-scale server architecture capable of providing advanced diagnostic and maintenance functions over a network, such as the Internet, with a web browser user interface.
Manger, Ryan P; Paxton, Adam B; Pawlicki, Todd; Kim, Gwe-Ya
2015-05-01
Surface image guided, Linac-based radiosurgery (SIG-RS) is a modern approach for delivering radiosurgery that utilizes optical stereoscopic imaging to monitor the surface of the patient during treatment in lieu of using a head frame for patient immobilization. Considering the novelty of the SIG-RS approach and the severity of errors associated with delivery of large doses per fraction, a risk assessment should be conducted to identify potential hazards, determine their causes, and formulate mitigation strategies. The purpose of this work is to investigate SIG-RS using the combined application of failure modes and effects analysis (FMEA) and fault tree analysis (FTA), report on the effort required to complete the analysis, and evaluate the use of FTA in conjunction with FMEA. A multidisciplinary team was assembled to conduct the FMEA on the SIG-RS process. A process map detailing the steps of the SIG-RS was created to guide the FMEA. Failure modes were determined for each step in the SIG-RS process, and risk priority numbers (RPNs) were estimated for each failure mode to facilitate risk stratification. The failure modes were ranked by RPN, and FTA was used to determine the root factors contributing to the riskiest failure modes. Using the FTA, mitigation strategies were formulated to address the root factors and reduce the risk of the process. The RPNs were re-estimated based on the mitigation strategies to determine the margin of risk reduction. The FMEA and FTAs for the top two failure modes required an effort of 36 person-hours (30 person-hours for the FMEA and 6 person-hours for two FTAs). The SIG-RS process consisted of 13 major subprocesses and 91 steps, which amounted to 167 failure modes. Of the 91 steps, 16 were directly related to surface imaging. Twenty-five failure modes resulted in a RPN of 100 or greater. Only one of these top 25 failure modes was specific to surface imaging. The riskiest surface imaging failure mode had an overall RPN-rank of eighth. Mitigation strategies for the top failure mode decreased the RPN from 288 to 72. Based on the FMEA performed in this work, the use of surface imaging for monitoring intrafraction position in Linac-based stereotactic radiosurgery (SRS) did not greatly increase the risk of the Linac-based SRS process. In some cases, SIG helped to reduce the risk of Linac-based RS. The FMEA was augmented by the use of FTA since it divided the failure modes into their fundamental components, which simplified the task of developing mitigation strategies.
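For readers unfamiliar with RPN-based risk stratification, a minimal sketch of the scoring and ranking step follows; the failure modes and the severity, occurrence, and detectability scores are hypothetical and are not taken from the SIG-RS analysis.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int       # 1-10, consequence if the failure reaches the patient
    occurrence: int     # 1-10, how often the failure is expected to occur
    detectability: int  # 1-10, with 10 meaning hardest to detect

    @property
    def rpn(self) -> int:
        # Risk priority number as commonly defined in FMEA.
        return self.severity * self.occurrence * self.detectability

# Hypothetical failure modes for illustration only.
modes = [
    FailureMode("Wrong reference surface captured", 8, 4, 9),
    FailureMode("Couch shift entered incorrectly", 9, 2, 4),
    FailureMode("Camera view occluded during delivery", 6, 3, 5),
]
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {fm.rpn:4d}  {fm.description}")
```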
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Y.A.; Feltus, M.A.
1995-07-01
Reliability-centered maintenance (RCM) methods are applied to boiling water reactor plant-specific emergency core cooling system probabilistic risk assessment (PRA) fault trees. The RCM is a technique that is system function-based, for improving a preventive maintenance (PM) program, which is applied on a component basis. Many PM programs are based on time-directed maintenance tasks, while RCM methods focus on component condition-directed maintenance tasks. Stroke time test data for motor-operated valves (MOVs) are used to address three aspects concerning RCM: (a) to determine if MOV stroke time testing was useful as a condition-directed PM task; (b) to determine and compare the plant-specific MOV failure data from a broad RCM philosophy time period compared with a PM period and, also, compared with generic industry MOV failure data; and (c) to determine the effects and impact of the plant-specific MOV failure data on core damage frequency (CDF) and system unavailabilities for these emergency systems. The MOV stroke time test data from four emergency core cooling systems [i.e., high-pressure coolant injection (HPCI), reactor core isolation cooling (RCIC), low-pressure core spray (LPCS), and residual heat removal/low-pressure coolant injection (RHR/LPCI)] were gathered from Philadelphia Electric Company's Peach Bottom Atomic Power Station Units 2 and 3 between 1980 and 1992. The analyses showed that MOV stroke time testing was not a predictor for imminent failure and should be considered as a go/no-go test. The failure data from the broad RCM philosophy showed an improvement compared with the PM-period failure rates in the emergency core cooling system MOVs. Also, the plant-specific MOV failure rates for both maintenance philosophies were shown to be lower than the generic industry estimates.
Reliability evaluation methodology for NASA applications
NASA Technical Reports Server (NTRS)
Taneja, Vidya S.
1992-01-01
Liquid rocket engine technology has been characterized by the development of complex systems containing a large number of subsystems, components, and parts. The trend toward even larger and more complex systems is continuing. Liquid rocket engineers have been focusing mainly on performance driven designs to increase the payload delivery of a launch vehicle for a given mission. In other words, although the failure of a single inexpensive part or component may cause the failure of the system, reliability in general has not been considered as one of the system parameters like cost or performance. Until now, quantification of reliability has not been a consideration during system design and development in the liquid rocket industry. Engineers and managers have long been aware of the fact that the reliability of the system increases during development, but no serious attempts have been made to quantify reliability. As a result, a method to quantify reliability during design and development is needed. This includes the application of probabilistic models which utilize both engineering analysis and test data. Classical methods require the use of operating data for reliability demonstration. In contrast, the method described in this paper is based on similarity, analysis, and testing combined with Bayesian statistical analysis.
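The Bayesian element of such a method can be illustrated with a minimal conjugate-update sketch: a Beta prior on per-demand reliability, derived from analysis or similarity, is updated with test outcomes. The prior parameters and test counts below are assumptions for illustration only, not values from the paper.

```python
from scipy import stats

# Hypothetical prior from engineering analysis/similarity: Beta(a, b),
# corresponding to a prior mean reliability of 0.9.
a_prior, b_prior = 9.0, 1.0

# Hypothetical test evidence: 20 demands with 1 failure observed.
successes, failures = 19, 1

# Conjugate Beta-Binomial update.
posterior = stats.beta(a_prior + successes, b_prior + failures)
print(f"posterior mean reliability: {posterior.mean():.3f}")
print(f"90% credible interval: {posterior.interval(0.90)}")
```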
On radicalizing behaviorism: A call for cultural analysis
Malagodi, E. F.
1986-01-01
Our culture at large continues many practices that work against the well-being of its members and its chances for survival. Our discipline has failed to realize its potential for contributing to the understanding of these practices and to the generation of solutions. This failure of realization is in part a consequence of the general failure of behavior analysts to view social and cultural analysis as a fundamental component of radical behaviorism. This omission is related to three prevailing practices of our discipline. First, radical behaviorism is characteristically defined as a “philosophy of science,” and its concerns are ordinarily restricted to certain epistemological issues. Second, theoretical extensions to social and cultural phenomena too often depend solely upon principles derived from the analysis of behavior. Third, little attention has been directed at examining the relationships that do, or that should, exist between our discipline and related sciences. These practices themselves are attributed to certain features of the history of our field. Two general remedies for this situation are suggested: first, that radical behaviorism be treated as a comprehensive world view in which epistemological, psychological, and cultural analyses constitute interdependent components; second, that principles derived from compatible social-science disciplines be incorporated into radical behaviorism. PMID:22478643
Unlocking the Mystery of Columbia's Tragic Accident Through Materials Characterization
NASA Technical Reports Server (NTRS)
Shah, Sandeep; Jerman, Gregory; Coston, James
2003-01-01
The wing and underbelly reconstruction of Space Shuttle Columbia took place at the Shuttle Landing Facility Hangar after the accident which destroyed STS-107. Fragments were placed on a grid according to their original location on the orbiter. Some Reinforced Carbon-Carbon (RCC) panels of the left wing leading edge and other parts from both leading edges were recovered and incorporated into the reconstruction. The recovered parts were tracked on a database according to a number and also tracked on a map of the orbiter. This viewgraph presentation describes the process of failure analysis undertaken by the Materials and Processes (M&P) Problem Resolution Team. The team started with factual observations about the accident, and identified the highest level questions for it to answer in order to understand where on the orbiter the failure occurred, what component(s) failed, and what the sequence of events was. The finding of Columbia's MADS/OEX data recorder shifted the focus of the team's analysis to the left wing leading edge damage. The team placed particular attention on slag deposits on some of the RCC panels. The presentation lists analysis techniques, and lower level questions for the team to answer.
NASA Astrophysics Data System (ADS)
Arief, I. S.; Suherman, I. H.; Wardani, A. Y.; Baidowi, A.
2017-05-01
Control and monitoring is a continuous process for securing the assets of a marine current renewable energy installation. A control and monitoring system exists for each critical component, identified through the Failure Mode Effect Analysis (FMEA) method. As a result, the process in this paper is developed through a sensor matrix. The matrix correlates the critical components with the monitoring system, which is supported by sensors to aid decision-making.
Reliability and Maintainability Analysis of Fluidic Back-Up Flight Control System and Components.
1981-09-01
Maintainability: Review of FMEA worksheets indicates that the standard hydraulic components of the servoactuator will ... achieved. Procedures for conducting the FMEA and evaluating the severity of each failure mode are included as Appendix A. (Report NADC-80227-60, contract N62269-81-M-3047; remainder of the scanned abstract is not legible.)
NASA Astrophysics Data System (ADS)
Park, Jung-Yong; Jung, Yong-Keun; Park, Jong-Jin; Kang, Yong-Ho
2002-05-01
Failures of turbine blades are identified as the leading cause of unplanned outages for steam turbines, and low-pressure turbine blade failures account for more than 70 percent of turbine component accidents. Therefore, the prevention of failures in low-pressure turbine blades is certainly needed. The procedure is illustrated by a case study and is used to guide and support the plant manager's decisions to avoid a costly, unplanned outage. In this study, we try to identify the factors behind LP turbine blade failures and take a three-step approach to the solution. The first step is to measure the natural frequency in a mockup test and to compare it with the nozzle passing frequency. The second step is to use FEM to calculate the natural frequencies of 7-blade and 10-blade groups with the BLADE code. The third step is to place the natural frequencies of the grouped blades away from the nozzle passing frequency.
Enhanced Component Performance Study: Motor-Driven Pumps 1998–2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2016-02-01
This report presents an enhanced performance evaluation of motor-driven pumps at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for the component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The motor-driven pump failure modes considered for standby systems are failure to start, failure to run less than or equal to one hour, and failure to run more than one hour; for normally running systems, the failure modes considered are failure to start and failure to run. An eight hour unreliability estimate is also calculated and trended. The component reliability estimates and the reliability data are trended for the most recent 10-year period while yearly estimates for reliability are provided for the entire active period. Statistically significant increasing trends were identified in pump run hours per reactor year. Statistically significant decreasing trends were identified for standby systems industry-wide frequency of start demands, and run hours per reactor year for runs of less than or equal to one hour.
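The eight-hour unreliability estimate mentioned above follows from a constant-failure-rate model once a rate has been estimated from pooled failure counts and run hours; the counts in the sketch below are invented and are not the ICES data.

```python
import math

# Hypothetical pooled counts for a normally running pump group (illustration only).
fail_to_run_events = 12     # observed fail-to-run events
pooled_run_hours = 3.3e6    # pooled pump run hours

lam = fail_to_run_events / pooled_run_hours       # point estimate, failures per hour
unreliability_8h = 1.0 - math.exp(-lam * 8.0)     # P(failure within an 8-hour mission)
print(f"lambda = {lam:.2e} per hour, 8-hour unreliability = {unreliability_8h:.2e}")
```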
Estimating distributions with increasing failure rate in an imperfect repair model.
Kvam, Paul H; Singh, Harshinder; Whitaker, Lyn R
2002-03-01
A failed system is repaired minimally if after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate. Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown-Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability, and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data in the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators are consistent and show distinct differences from the ordinary MLEs. Simulation results suggest significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.
Reliability analysis of structural ceramic components using a three-parameter Weibull distribution
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Powers, Lynn M.; Starlinger, Alois
1992-01-01
Described here are nonlinear regression estimators for the three-parameter Weibull distribution. Issues relating to the bias and invariance associated with these estimators are examined numerically using Monte Carlo simulation methods. The estimators were used to extract parameters from sintered silicon nitride failure data. A reliability analysis was performed on a turbopump blade utilizing the three-parameter Weibull distribution and the estimates from the sintered silicon nitride data.
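A common alternative to the nonlinear regression estimators described here is a maximum-likelihood fit of the same three parameters (shape, threshold, and scale); the sketch below uses synthetic strength data rather than the sintered silicon nitride data from the paper.

```python
from scipy import stats

# Synthetic fracture strengths from an assumed three-parameter Weibull model:
# shape m = 8, threshold = 150 MPa, scale = 300 MPa (illustration only).
strengths = stats.weibull_min.rvs(8.0, loc=150.0, scale=300.0,
                                  size=60, random_state=0)

# Three-parameter maximum-likelihood fit: shape, threshold (loc), and scale.
shape, threshold, scale = stats.weibull_min.fit(strengths)
print(f"shape = {shape:.2f}, threshold = {threshold:.1f} MPa, scale = {scale:.1f} MPa")
```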
NASA Technical Reports Server (NTRS)
Yim, John T.; Soulas, George C.; Shastry, Rohit; Choi, Maria; Mackey, Jonathan A.; Sarver-Verhey, Timothy R.
2017-01-01
The service life assessment for NASA's Evolutionary Xenon Thruster is updated to incorporate the results from the successful and voluntary early completion of the 51,184 hour long duration test which demonstrated 918 kg of total xenon throughput. The results of the numerous post-test investigations including destructive interrogations have been assessed against all of the critical known and suspected failure mechanisms to update the life and throughput expectations for each major component. Analysis results of two of the most acute failure mechanisms, namely pit-and-groove erosion and aperture enlargement of the accelerator grid, are not updated in this work but will be published at a future time after analysis completion.
a New Method for Fmeca Based on Fuzzy Theory and Expert System
NASA Astrophysics Data System (ADS)
Byeon, Yoong-Tae; Kim, Dong-Jin; Kim, Jin-O.
2008-10-01
Failure Mode Effects and Criticality Analysis (FMECA) is one of the most widely used methods in modern engineering systems for investigating potential failure modes and their severity upon the system. FMECA evaluates the criticality and severity of each failure mode and visualizes the risk level matrix by assigning those indices to the column and row variables, respectively. Generally, those indices are determined subjectively by experts and operators; however, this process inevitably involves uncertainty. In this paper, a method for eliciting expert opinions while accounting for their uncertainty is proposed to evaluate criticality and severity. In addition, a fuzzy expert system is constructed in order to determine the crisp value of the risk level for each failure mode. Finally, an illustrative example system is analyzed in the case study. The results are worth considering in deciding the proper policies for each component of the system.
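A toy version of the fuzzy evaluation step might look like the sketch below: triangular membership functions, a single Mamdani-style rule, and centroid defuzzification to obtain a crisp risk level. The membership-set breakpoints and the expert scores are assumptions for illustration, not the rule base of the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical expert-assessed criticality and severity on a 0-10 scale.
criticality, severity = 6.5, 8.0

# Degree to which each input belongs to a "high" set.
crit_high = tri(criticality, 5.0, 10.0, 15.0)
sev_high = tri(severity, 5.0, 10.0, 15.0)

# One Mamdani-style rule: IF criticality is high AND severity is high THEN risk is high.
firing_strength = min(crit_high, sev_high)

# Clip the "high risk" output set at the firing strength and defuzzify by centroid.
x = np.linspace(0.0, 10.0, 501)
mu = np.minimum(tri(x, 5.0, 10.0, 15.0), firing_strength)
crisp_risk = float((mu * x).sum() / mu.sum())
print(f"crisp risk level: {crisp_risk:.2f}")
```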
Relating design and environmental variables to reliability
NASA Astrophysics Data System (ADS)
Kolarik, William J.; Landers, Thomas L.
The combination of space application and nuclear power source demands high reliability hardware. The possibilities of failure, either an inability to provide power or a catastrophic accident, must be minimized. Nuclear power experiences on the ground have led to highly sophisticated probabilistic risk assessment procedures, most of which require quantitative information to adequately assess such risks. In the area of hardware risk analysis, reliability information plays a key role. One of the lessons learned from the Three Mile Island experience is that thorough analyses of critical components are essential. Nuclear grade equipment shows some reliability advantages over commercial equipment; however, no statistically significant difference has been found. A recent study pertaining to spacecraft electronics reliability examined some 2500 malfunctions on more than 300 aircraft. The study classified the equipment failures into seven general categories. Design deficiencies and lack of environmental protection accounted for about half of all failures. Within each class, limited reliability modeling was performed using a Weibull failure model.
NASA Technical Reports Server (NTRS)
Arakere, Nagaraj K.; Swanson, Gregory R.
2000-01-01
High Cycle Fatigue (HCF) induced failures in aircraft gas-turbine engines are a pervasive problem affecting a wide range of components and materials. HCF is currently the primary cause of component failures in gas turbine aircraft engines. Turbine blades in high performance aircraft and rocket engines are increasingly being made of single crystal nickel superalloys. Single-crystal Nickel-base superalloys were developed to provide superior creep, stress rupture, melt resistance and thermomechanical fatigue capabilities over polycrystalline alloys previously used in the production of turbine blades and vanes. Currently the most widely used single crystal turbine blade superalloys are PWA 1480/1493 and PWA 1484. These alloys play an important role in commercial, military and space propulsion systems. PWA1493, identical to PWA1480, but with tighter chemical constituent control, is used in the NASA SSME (Space Shuttle Main Engine) alternate turbopump, a liquid hydrogen fueled rocket engine. Objectives for this paper are motivated by the need for developing failure criteria and fatigue life evaluation procedures for high temperature single crystal components, using available fatigue data and finite element modeling of turbine blades. Using the FE (finite element) stress analysis results and the fatigue life relations developed, the effect of variation of primary and secondary crystal orientations on life is determined, at critical blade locations. The most advantageous crystal orientation for a given blade design is determined. Results presented demonstrate that control of secondary and primary crystallographic orientation has the potential to optimize blade design by increasing its resistance to fatigue crack growth without adding additional weight or cost.
ADM guidance-Ceramics: guidance to the use of fractography in failure analysis of brittle materials.
Scherrer, Susanne S; Lohbauer, Ulrich; Della Bona, Alvaro; Vichi, Alessandro; Tholey, Michael J; Kelly, J Robert; van Noort, Richard; Cesar, Paulo Francisco
2017-06-01
To provide background information and guidance as to how to use fractography accurately, a powerful tool for failure analysis of dental ceramic structures. An extended palette of qualitative and quantitative fractography is provided, both for in vivo and in vitro fracture surface analyses. As visual support, this guidance document will provide micrographs of typical critical ceramic processing flaws, differentiating between pre- versus post-sintering cracks, grinding-damage-related failures, occlusal contact wear origins, and failures due to surface degradation. The documentation emphasizes good labeling of crack features, precise indication of the direction of crack propagation (dcp), identification of the fracture origin, and the use of fractographic photomontage of critical flaws or flaw labeling on strength data graphics. A compilation of recommendations for specific applications of fractography in Dentistry is also provided. This guidance document will contribute to a more accurate use of fractography and help researchers to better identify, describe and understand the causes of failure, for both clinical and laboratory-scale situations. If adequately performed at a large scale, fractography will assist in optimizing the methods of processing and designing of restorative materials and components. Clinical failures may be better understood and consequently reduced by sending out the correct message regarding the fracture origin in clinical trials. Copyright © 2017 The Academy of Dental Materials. All rights reserved.
Studies on Automobile Clutch Release Bearing Characteristics with Acoustic Emission
NASA Astrophysics Data System (ADS)
Chen, Guoliang; Chen, Xiaoyang
Automobile clutch release bearings are important automotive driveline components. For the clutch release bearing, early fatigue failure diagnosis is significant, but the early fatigue failure response signal is not obvious, because failure signals are susceptible to noise on the transmission path and to working environment factors such as interference. With improvements in vehicle design, clutch release bearing fatigue life indicators have increasingly become an important requirement. Contact fatigue is the main failure mode of release rolling bearing components. Acoustic emission techniques have unique advantages in contact fatigue failure detection, as they are highly sensitive nondestructive testing methods. When the acoustic emission technique is used to monitor a bearing, signals are collected from multiple sensors. Each signal contains partial fault information, and there is overlap between the fault information of the signals. Therefore, integrating the source information received simultaneously by the sensors into a complete rolling bearing fault acoustic emission signal is the key issue for accurate fault diagnosis. The release bearing comprises the following components: the outer ring, inner ring, rolling balls, and cage. When a failure occurs (such as cracking or pitting), the other components impact the damaged point and produce an acoustic emission signal. Release bearings mainly emit acoustic emission waveforms that propagate as Rayleigh waves; the elastic waves are emitted from the sound source and scattered across the part surface. Dynamic simulation of rolling bearing failure will contribute to a more in-depth understanding of the characteristics of rolling bearing failure and provides a theoretical basis and foundation for monitoring and fault diagnosis of rolling bearings.
Laser engravings as reason for mechanical failure of titanium-alloyed total hip stems.
Kluess, Daniel; Steinhauser, Erwin; Joseph, Micheal; Koch, Ursula; Ellenrieder, Martin; Mittelmeier, Wolfram; Bader, Rainer
2015-07-01
Two revisions of broken β-titanium total hip stems had to be performed in our hospital after 2 and 4 years in situ. Since both fractures were located at the level of a laser engraving, a failure analysis was conducted. Both retrieved hip stems were disinfected and collected in our retrieval database after the patients' signed agreement. Each fragment was macroscopically photographed. Fracture surfaces were analyzed using scanning electron microscopy (SEM). Quantification of element content was conducted using energy dispersive X-ray (EDX) analysis. Both stems show fatigue fracture, as displayed by the lines of rest on the fracture surface. The origin of fracture was identified directly at the laser engraving of the company logo at both stems by means of SEM. The EDX analysis showed an oxygen level beneath the laser engraving about twice as high as in the substrate, causing material embrittlement. Laser engravings need to be reduced to a minimum of necessary information, and should be placed at locations with minimum mechanical load. Biomechanical analyses are recommended to identify less loaded areas in implant components to avoid such implant failures.
Overview of the Systems Special Investigation Group investigation
NASA Technical Reports Server (NTRS)
Mason, James B.; Dursch, Harry; Edelman, Joel
1993-01-01
The Long Duration Exposure Facility (LDEF) carried a remarkable variety of electrical, mechanical, thermal, and optical systems, subsystems, and components. Nineteen of the fifty-seven experiments flown on LDEF contained functional systems that were active on-orbit. Almost all of the other experiments possessed at least a few specific components of interest to the Systems Special Investigation Group (Systems SIG), such as adhesives, seals, fasteners, optical components, and thermal blankets. Almost all top level functional testing of the active LDEF and experiment systems has been completed. Failure analysis of both LDEF hardware and individual experiments that failed to perform as designed has also been completed. Testing of system components and experimenter hardware of interest to the Systems SIG is ongoing. All available testing and analysis results were collected and integrated by the Systems SIG. An overview of our findings is provided. An LDEF Optical Experiment Database containing information for all 29 optical related experiments is also discussed.
Komal
2018-05-01
Power consumption is increasing day by day. To fulfill the requirement for failure-free power, the planning and implementation of an effective and reliable power management system is essential. The phasor measurement unit (PMU) is one of the key devices in wide area measurement and control systems. The reliable performance of the PMU assures a failure-free power supply for any power system. So, the purpose of the present study is to analyse the reliability of a PMU used for controllability and observability of power systems utilizing available uncertain data. In this paper, a generalized fuzzy lambda-tau (GFLT) technique has been proposed for this purpose. In GFLT, system components' uncertain failure and repair rates are fuzzified using fuzzy numbers having different shapes such as triangular, normal, Cauchy, sharp gamma and trapezoidal. To select a suitable fuzzy number for quantifying data uncertainty, system experts' opinions have been considered. The GFLT technique applies fault trees, the lambda-tau method, data fuzzified using different membership functions, and alpha-cut based fuzzy arithmetic operations to compute some important reliability indices. Furthermore, in this study ranking of the system's critical components using the RAM-Index and a sensitivity analysis have also been performed. The developed technique may help to improve system performance significantly and can be applied to analyse the fuzzy reliability of other engineering systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
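The alpha-cut arithmetic underlying such a technique can be sketched for the simplest case: a series (OR-gate) combination of two components with triangular fuzzy failure rates, for which the lambda-tau rules give lambda_sys = lambda_1 + lambda_2. The fuzzy rates below are hypothetical.

```python
def alpha_cut(tfn, alpha):
    """Interval [lower, upper] of a triangular fuzzy number (l, m, u) at level alpha."""
    l, m, u = tfn
    return l + alpha * (m - l), u - alpha * (u - m)

# Hypothetical triangular fuzzy failure rates (per hour) for two components.
lam1 = (1.0e-5, 2.0e-5, 3.0e-5)
lam2 = (4.0e-6, 6.0e-6, 9.0e-6)

for alpha in (0.0, 0.5, 1.0):
    lo1, hi1 = alpha_cut(lam1, alpha)
    lo2, hi2 = alpha_cut(lam2, alpha)
    # For an OR gate the system failure rate is the sum, so interval endpoints add.
    print(f"alpha = {alpha:.1f}: lambda_sys in [{lo1 + lo2:.2e}, {hi1 + hi2:.2e}] per hour")
```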
Uncemented glenoid component in total shoulder arthroplasty. Survivorship and outcomes.
Martin, Scott David; Zurakowski, David; Thornhill, Thomas S
2005-06-01
Glenoid component loosening continues to be a major factor affecting the long-term survivorship of total shoulder replacements. Radiolucent lines, cement fracture, migration, and loosening requiring revision are common problems with cemented glenoid components. The purpose of this study was to evaluate the results of total shoulder arthroplasty with an uncemented glenoid component and to identify predictors of glenoid component failure. One hundred and forty-seven consecutive total shoulder arthroplasties were performed in 132 patients (mean age, 63.3 years) with use of an uncemented glenoid component fixed with screws between 1988 and 1996. One hundred and forty shoulders in 124 patients were available for follow-up at an average of 7.5 years. One shoulder in which the arthroplasty had failed at 2.4 years and for which the duration of follow-up was four years was also included for completeness. The preoperative diagnoses included osteoarthritis in seventy-two shoulders and rheumatoid arthritis in fifty-five. Radiolucency was noted around the glenoid component and/or screws in fifty-three of the 140 shoulders. The mean modified ASES (American Shoulder and Elbow Surgeons) score (and standard deviation) improved from 15.6 +/- 11.8 points preoperatively to 75.8 +/- 17.5 points at the time of follow-up. Eighty-five shoulders were not painful, forty-two were slightly or mildly painful, ten were moderately painful, and three were severely painful. Fifteen (11%) of the glenoid components failed clinically, and ten of them also had radiographic signs of failure. Eleven other shoulders had radiographic signs of failure but no symptoms at the time of writing. Three factors had a significant independent association with clinical failure: male gender (p = 0.02), pain (p < 0.01), and radiolucency adjacent to the flat tray (p < 0.001). In addition, the annual risk of implant revision was nearly seven times higher for patients with radiographic signs of failure. Clinical survivorship was 95% at five years and 85% at ten years. The failure rates of the total shoulder arthroplasties in this study were higher than those in previously reported studies of cemented polyethylene components with similar durations of follow-up. Screw breakage and excessive polyethylene wear were common problems that may lead to additional failures of these uncemented glenoid components in the future.
(n, N) type maintenance policy for multi-component systems with failure interactions
NASA Astrophysics Data System (ADS)
Zhang, Zhuoqi; Wu, Su; Li, Binfeng; Lee, Seungchul
2015-04-01
This paper studies maintenance policies for multi-component systems in which failure interactions and opportunistic maintenance (OM) are involved. This maintenance problem can be formulated as a Markov decision process (MDP). However, since the action set and state space in an MDP expand exponentially as the number of components increases, traditional approaches are computationally intractable. To deal with the curse of dimensionality, we decompose such a multi-component system into mutually influential single-component systems. Each single-component system is formulated as an MDP with the objective of minimising its long-run average maintenance cost. Under some reasonable assumptions, we prove the existence of the optimal (n, N) type policy for a single-component system. An algorithm to obtain the optimal (n, N) type policy is also proposed. Based on the proposed algorithm, we develop an iterative approximation algorithm to obtain an acceptable maintenance policy for a multi-component system. Numerical examples show that failure interactions and OM have significant effects on the maintenance policy.
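A toy single-component version of the decomposed problem can be sketched as follows; discounted value iteration is used here as a simple stand-in for the long-run average-cost criterion of the paper, and the deterioration probability and cost figures are invented.

```python
import numpy as np

n_states = 6                      # deterioration level: 0 = new, 5 = failed
p_degrade = 0.3                   # chance of degrading one level per period (assumed)
c_replace, c_failure = 50.0, 200.0
gamma = 0.95                      # discount factor

def action_values(V, s):
    """Expected discounted costs of 'continue' vs. 'preventive replace' in state s."""
    if s == n_states - 1:         # failed state: corrective replacement is forced
        corrective = c_failure + gamma * V[0]
        return corrective, corrective
    cont = gamma * ((1 - p_degrade) * V[s] + p_degrade * V[s + 1])
    repl = c_replace + gamma * V[0]
    return cont, repl

V = np.zeros(n_states)
for _ in range(500):              # value iteration until practically converged
    V = np.array([min(action_values(V, s)) for s in range(n_states)])

# Greedy policy for the working states; a control-limit rule emerges for these numbers.
policy = ["replace" if repl < cont else "continue"
          for cont, repl in (action_values(V, s) for s in range(n_states - 1))]
print(policy)
```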
Failure detection in high-performance clusters and computers using chaotic map computations
Rao, Nageswara S.
2015-09-01
A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
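The patent's specific maps and comparison thresholds are not described above; the sketch below only illustrates the underlying idea with a logistic map: identical chaotic computations on two compute elements should agree exactly, and a simulated soft error causes rapid trajectory divergence that a simple comparison can flag. All values are illustrative assumptions.

```python
# Illustrative sketch of the detection idea only, not the patented implementation.
def logistic_trajectory(x0, steps, r=3.99, fault_at=None, eps=1e-12):
    xs, x = [], x0
    for i in range(steps):
        x = r * x * (1.0 - x)
        if fault_at is not None and i == fault_at:
            x += eps            # simulated soft error (e.g., a bit flip)
        xs.append(x)
    return xs

healthy = logistic_trajectory(0.123456, 60)
faulty = logistic_trajectory(0.123456, 60, fault_at=20)

divergence = [abs(a - b) for a, b in zip(healthy, faulty)]
alarm_step = next((i for i, d in enumerate(divergence) if d > 1e-3), None)
print("failure flagged at step:", alarm_step)  # chaos amplifies the tiny error within a few steps
```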
Ashley, Laura; Armitage, Gerry; Taylor, Julie
2017-03-01
Failure Modes and Effects Analysis (FMEA) is a prospective quality assurance methodology increasingly used in healthcare, which identifies potential vulnerabilities in complex, high-risk processes and generates remedial actions. We aimed, for the first time, to apply FMEA in a social care context to evaluate the process for recognising and referring children exposed to domestic abuse within one Midlands city safeguarding area in England. A multidisciplinary, multi-agency team of 10 front-line professionals undertook the FMEA, using a modified methodology, over seven group meetings. The FMEA included mapping out the process under evaluation to identify its component steps, identifying failure modes (potential errors) and possible causes for each step and generating corrective actions. In this article, we report the output from the FMEA, including illustrative examples of the failure modes and corrective actions generated. We also present an analysis of feedback from the FMEA team and provide future recommendations for the use of FMEA in appraising social care processes and practice. Although challenging, the FMEA was unequivocally valuable for team members and generated a significant number of corrective actions locally for the safeguarding board to consider in its response to children exposed to domestic abuse. © 2016 John Wiley & Sons Ltd.
Background: Electronic health records (EHRs) are now a ubiquitous component of the US healthcare system and are attractive for secondary data analysis as they contain detailed and longitudinal clinical records on potentially millions of individuals. However, due to their relative...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balboni, Tracy A.; Gaccione, Peter; Gobezie, Reuben
2007-04-01
Purpose: Radiation therapy (RT) is frequently administered to prevent heterotopic ossification (HO) after total hip arthroplasty (THA). The purpose of this study was to determine if there is an increased risk of HO after RT prophylaxis with shielding of the THA components. Methods and Materials: This is a retrospective analysis of THA patients undergoing RT prophylaxis of HO at Brigham and Women's Hospital between June 1994 and February 2004. Univariate and multivariate logistic regressions were used to assess the relationships of all variables to failure of RT prophylaxis. Results: A total of 137 patients were identified and 84 were eligible for analysis (61%). The median RT dose was 750 cGy in one fraction, and the median follow-up was 24 months. Eight of 40 unshielded patients (20%) developed any progression of HO compared with 21 of 44 shielded patients (48%) (p = 0.009). Brooker Grade III-IV HO developed in 5% of unshielded and 18% of shielded patients (p = 0.08). Multivariate analysis revealed shielding (p = 0.02) and THA for prosthesis infection (p = 0.03) to be significant predictors of RT failure, with a trend toward an increasing risk of HO progression with age (p = 0.07). There was no significant difference in the prosthesis failure rates between shielded and unshielded patients. Conclusions: A significantly increased risk of failure of RT prophylaxis for HO was noted in those receiving shielding of the hip prosthesis. Shielding did not appear to reduce the risk of prosthesis failure.
Comparison of Models of Stress Relaxation in Failure Analysis for Connectors under Long-term Storage
NASA Astrophysics Data System (ADS)
Zhou, Yilin; Wan, Mengru
2018-03-01
Long-term storage reliability requirements are imposed on system equipment, particularly military products, so the connectors in such equipment must correspondingly achieve a long storage life. In this paper, the effects of stress relaxation of the elastic components on the electrical contact of connectors during long-term storage were studied from the perspective of failure mechanisms and degradation models. A wire spring connector was taken as an example to discuss a life prediction method for the electrical contacts of connectors based on stress relaxation degradation under long-term storage.
Silicon-controlled-rectifier square-wave inverter with protection against commutation failure
NASA Technical Reports Server (NTRS)
Birchenough, A. G.
1971-01-01
The square-wave SCR inverter that was designed, built, and tested includes a circuit to turn off the inverter in case of commutation failure. The basic power stage is a complementary impulse-commutated parallel inverter consisting of only six components. The 400-watt breadboard was tested while operating at ±28 volts, and it had a peak efficiency of 95.5 percent at 60 hertz and 91.7 percent at 400 hertz. The voltage regulation for a fixed input was 3 percent at 60 hertz. An analysis of the operation and design information is included.
Independent Orbiter Assessment (IOA): Assessment of the mechanical actuation subsystem, volume 1
NASA Technical Reports Server (NTRS)
Bradway, M. W.; Slaughter, W. T.
1988-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine draft failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the proposed Post 51-L NASA FMEA/CIL baseline that was available. A resolution of each discrepancy from the comparison was provided through additional analysis as required. These discrepancies were flagged as issues, and recommendations were made based on the FMEA data available at the time. This report documents the results of that comparison for the Orbiter Mechanical Actuation System (MAS) hardware. Specifically, the MAS hardware consists of the following components: Air Data Probe (ADP); Elevon Seal Panel (ESP); External Tank Umbilical (ETU); Ku-Band Deploy (KBD); Payload Bay Doors (PBD); Payload Bay Radiators (PBR); Personnel Hatches (PH); Vent Door Mechanism (VDM); and Startracker Door Mechanism (SDM). Criticality was assigned based upon the severity of the effect for each failure mode.
NASA Astrophysics Data System (ADS)
Simpson, Amber; Maltese, Adam
2017-04-01
The term failure typically evokes negative connotations in educational settings and is likely to be accompanied by negative emotional states, low sense of confidence, and lack of persistence. These negative emotional and behavioral states may factor into an individual not pursuing a degree or career in science, technology, engineering, or mathematics (STEM). This is of particular concern considering the low number of women and underrepresented minorities pursing and working in a STEM field. Utilizing interview data with professionals across STEM, we sought to understand the role failure played in the persistence of individuals who enter and pursue paths toward STEM-related careers. Findings highlighted how participants' experiences with failure (1) shaped their outlooks or views of failure, (2) shaped their trajectories within STEM, and (3) provided them with additional skills or qualities. A few differences based on participants' sex, field, and highest degree also manifested in our analysis. We expect the results from this study to add research-based results to the current conversation around whether experiences with failure should be part of formal and informal educational settings and standards-based practices.
Reliability analysis of component-level redundant topologies for solid-state fault current limiter
NASA Astrophysics Data System (ADS)
Farhadi, Masoud; Abapour, Mehdi; Mohammadi-Ivatloo, Behnam
2018-04-01
Experience shows that semiconductor switches in power electronics systems are the most vulnerable components. One of the most common ways to address this reliability challenge is component-level redundant design. There are four possible configurations for redundant design at the component level. This article presents a comparative reliability analysis of the different component-level redundant designs for a solid-state fault current limiter. The aim of the proposed analysis is to determine the more reliable component-level redundant configuration. The mean time to failure (MTTF) is used as the reliability parameter. Considering both fault types (open circuit and short circuit), the MTTFs of the different configurations are calculated. It is demonstrated that the more reliable configuration depends on the steady-state junction temperature of the semiconductor switches. That junction temperature is a function of (i) the ambient temperature, (ii) the power loss of the semiconductor switch and (iii) the thermal resistance of the heat sink. The sensitivity of the results to each parameter is also investigated. The results show that under different conditions, different configurations have higher reliability. Experimental results are presented to clarify the theory and feasibility of the proposed approaches. Finally, levelised costs of the different configurations are analysed for a fair comparison.
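As a rough illustration of how open-circuit and short-circuit failure modes drive the MTTF of series versus parallel switch pairs, the Monte Carlo sketch below uses assumed constant failure rates (the article's temperature-dependent rates and cost analysis are not reproduced). A series pair tolerates one short but fails on the first open; a parallel pair tolerates one open but fails on the first short.

```python
import random

LAMBDA_OPEN, LAMBDA_SHORT = 1e-5, 5e-6   # assumed failures per hour, illustrative only

def switch_failure():
    t_open = random.expovariate(LAMBDA_OPEN)
    t_short = random.expovariate(LAMBDA_SHORT)
    return min(t_open, t_short), ("open" if t_open < t_short else "short")

def pair_life(topology):
    (t1, m1), (t2, _m2) = sorted([switch_failure(), switch_failure()])
    if topology == "series":
        return t1 if m1 == "open" else t2   # first open breaks the path; one short is tolerated
    else:  # parallel
        return t1 if m1 == "short" else t2  # first short defeats the pair; one open is tolerated

def mttf(topology, n=200_000):
    return sum(pair_life(topology) for _ in range(n)) / n

random.seed(1)
print("series   MTTF ~", round(mttf("series")), "h")
print("parallel MTTF ~", round(mttf("parallel")), "h")
```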
Sequential experimental design based generalised ANOVA
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2016-07-01
Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.
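The DA-SED sampling scheme and the generalised-ANOVA surrogate themselves are not reproduced here; the sketch below shows only the final step implied by the abstract, converting the first two statistical moments of a limit-state function g (g < 0 taken as failure) into a failure probability under a normality assumption. The moment values are illustrative.

```python
from math import erf, sqrt

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

mu_g, sigma_g = 2.4, 0.9          # illustrative surrogate-based moments of g
beta = mu_g / sigma_g             # second-moment reliability index
p_failure = normal_cdf(-beta)     # Pf = Phi(-beta)
print(f"beta = {beta:.2f}, Pf ~ {p_failure:.3e}")
```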
Principle of maximum entropy for reliability analysis in the design of machine components
NASA Astrophysics Data System (ADS)
Zhang, Yimin
2018-03-01
We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
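A minimal numerical sketch of the PME idea follows: on a discretized support, maximize Shannon entropy subject to normalization and prescribed moments. For brevity only the first two moments are constrained (the paper uses the first four), so the result should approach a Gaussian; the grid, moments, and solver settings are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-5.0, 5.0, 81)
dx = x[1] - x[0]
mu_target, m2_target = 0.5, 1.5 + 0.5**2     # assumed mean and raw second moment

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p)) * dx         # negative of (discretized) Shannon entropy

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) * dx - 1.0},
    {"type": "eq", "fun": lambda p: np.sum(x * p) * dx - mu_target},
    {"type": "eq", "fun": lambda p: np.sum(x**2 * p) * dx - m2_target},
]

p0 = np.full_like(x, 1.0 / (x[-1] - x[0]))    # uniform starting guess
res = minimize(neg_entropy, p0, bounds=[(1e-12, None)] * len(x),
               constraints=constraints, method="SLSQP", options={"maxiter": 500})

p = res.x
mean = np.sum(x * p) * dx
var = np.sum(x**2 * p) * dx - mean**2
print("converged:", res.success, " mean:", round(mean, 3), " variance:", round(var, 3))
```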
Yokozawa, T; Dong, E; Oura, H
1997-02-01
The effects of a green tea tannin mixture and its individual tannin components on methylguanidine were examined in rats with renal failure. The green tea tannin mixture caused a dose-dependent decrease in methylguanidine, a substance which accumulates in the blood with the progression of renal failure. Among individual tannin components, the effect was most conspicuous with (-)-epigallocatechin 3-O-gallate and (-)-epicatechin 3-O-gallate, while other components not linked to gallic acid showed only weak effects. Thus, the effect on methylguanidine was found to vary among different types of tannin.
Acoustic emissions (AE) monitoring of large-scale composite bridge components
NASA Astrophysics Data System (ADS)
Velazquez, E.; Klein, D. J.; Robinson, M. J.; Kosmatka, J. B.
2008-03-01
Acoustic emission (AE) monitoring has been successfully used with composite structures to both locate damage and give a measure of damage accumulation. The current experimental study uses AE to monitor large-scale composite modular bridge components. The components consist of a carbon/epoxy beam structure as well as a composite-to-metallic bonded/bolted joint. The bonded joints consist of double-lap aluminum splice plates bonded and bolted to carbon/epoxy laminates representing the tension rail of a beam. The AE system is used to monitor the bridge component during failure loading to assess the failure progression, with time-of-arrival information used to give insight into the origins of the failures. Also, a feature in the AE data called Cumulative Acoustic Emission counts (CAE) is used to give an estimate of the severity and rate of damage accumulation. For the bolted/bonded joints, the AE data are used to interpret the source and location of the damage that induced failure in the joint. These results are used to investigate the use of bolts in conjunction with the bonded joint. A description of each of the components (beam and joint) is given with AE results. A summary of lessons learned for AE testing of large composite structures as well as insight into failure progression and location is presented.
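As a simple illustration of a CAE-style indicator (the study's actual instrumentation and data are not reproduced), the sketch below accumulates threshold-crossing counts per AE hit against the applied load; a sharp rise in the running total is read as accelerating damage. The load ramp and hit counts are simulated assumptions.

```python
import random

random.seed(2)
loads = [i * 2.0 for i in range(1, 51)]                  # kN, assumed ramp loading
hit_counts = [random.randint(0, 3) + (i // 35) * random.randint(5, 20)  # more activity near failure
              for i in range(50)]

cumulative, total = [], 0
for load, counts in zip(loads, hit_counts):
    total += counts
    cumulative.append((load, total))

for load, cae in cumulative[::10]:
    print(f"load {load:5.1f} kN -> cumulative AE counts {cae}")
```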
NASA Astrophysics Data System (ADS)
Li, Zhixiong; Yan, Xinping; Wang, Xuping; Peng, Zhongxiao
2016-06-01
In the complex gear transmission systems of wind turbines, a crack is one of the most common failure modes and can be fatal to the wind turbine power system. A single sensor may suffer from issues relating to its installation position and direction, resulting in the collection of weak dynamic responses of the cracked gear. A multi-channel sensor system is hence applied in the signal acquisition, and blind source separation (BSS) technologies are employed to optimally process the information collected from multiple sensors. However, a literature review finds that most BSS-based fault detectors did not address the dependence/correlation between different moving components in the gear systems; in particular, the popular independent component analysis (ICA) assumes mutual independence of the different vibration sources. The fault detection performance may be significantly influenced by the dependence/correlation between vibration sources. In order to address this issue, this paper presents a new method based on supervised order tracking bounded component analysis (SOTBCA) for gear crack detection in wind turbines. Bounded component analysis (BCA) is a state-of-the-art technique for dependent source separation that has so far seen only limited application, mainly to communication signals. To make it applicable to vibration analysis, in this work, order tracking has been appropriately incorporated into the BCA framework to eliminate the noise and disturbance signal components. Then an autoregressive (AR) model built with prior knowledge about the crack fault is employed to supervise the reconstruction of the crack vibration source signature. The SOTBCA outputs only the one source signal that is closest to the AR model. Owing to the dependence tolerance of the BCA framework, interfering vibration sources that are dependent on or correlated with the crack vibration source can be recognized by the SOTBCA, and hence only useful fault information is preserved in the reconstructed signal. The crack failure can thus be precisely identified by cyclic spectral correlation analysis. A series of numerical simulations and experimental tests have been conducted to illustrate the advantages of the proposed SOTBCA method for fatigue crack detection. Comparisons to three representative techniques, i.e. Erdogan's BCA (E-BCA), joint approximate diagonalization of eigen-matrices (JADE), and FastICA, have demonstrated the effectiveness of the SOTBCA. Hence the proposed approach is suitable for accurate gear crack detection in practical applications.
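The proposed SOTBCA is not available as a packaged implementation, so the sketch below illustrates only the FastICA baseline named in the comparison, applied to synthetic two-channel mixtures of a gear-mesh-like tone and a crack-like impulse train. The signals, mixing matrix, and noise level are assumptions; note that FastICA's independence assumption is exactly the limitation the paper's BCA-based approach is meant to relax.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4000)
mesh = np.sin(2 * np.pi * 350 * t)                                        # gear-mesh-like tone
crack = (np.sin(2 * np.pi * 25 * t) > 0.95) * rng.normal(1, 0.2, t.size)  # periodic impulses
sources = np.c_[mesh, crack]

mixing = np.array([[1.0, 0.6],
                   [0.4, 1.0]])                                           # assumed sensor mixing
observed = sources @ mixing.T + 0.01 * rng.normal(size=sources.shape)

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observed)                                   # columns: estimated sources
print("recovered shape:", recovered.shape)                                # (4000, 2)
```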
Modeling and Hazard Analysis Using STPA
NASA Astrophysics Data System (ADS)
Ishimatsu, Takuto; Leveson, Nancy; Thomas, John; Katahira, Masa; Miyamoto, Yuko; Nakao, Haruka
2010-09-01
A joint research project between MIT and JAXA/JAMSS is investigating the application of a new hazard analysis to the system and software in the HTV. Traditional hazard analysis focuses on component failures, but software does not fail in this way. Software most often contributes to accidents by commanding the spacecraft into an unsafe state (e.g., turning off the descent engines prematurely) or by not issuing required commands. That makes the standard hazard analysis techniques of limited usefulness on software-intensive systems, which describes most spacecraft built today. STPA is a new hazard analysis technique based on systems theory rather than reliability theory. It treats safety as a control problem rather than a failure problem. The goal of STPA, which is to create a set of scenarios that can lead to a hazard, is the same as that of FTA, but STPA includes a broader set of potential scenarios, including those in which no failures occur but problems arise due to unsafe and unintended interactions among the system components. STPA also provides more guidance to the analysts than traditional fault tree analysis. Functional control diagrams are used to guide the analysis. In addition, JAXA uses a model-based system engineering development environment (created originally by Leveson and called SpecTRM) which also assists in the hazard analysis. One of the advantages of STPA is that it can be applied early in the system engineering and development process in a safety-driven design process where hazard analysis drives the design decisions, rather than waiting until reviews identify problems that are then costly or difficult to fix. It can also be applied in an after-the-fact analysis and hazard assessment, which is what we did in this case study. This paper describes the experimental application of STPA to the JAXA HTV in order to determine the feasibility and usefulness of the new hazard analysis technique. Because the HTV was originally developed using fault tree analysis and following the NASA standards for safety-critical systems, the results of our experimental application of STPA can be compared with these more traditional safety engineering approaches in terms of the problems identified and the resources required to use it.
Life Predicted in a Probabilistic Design Space for Brittle Materials With Transient Loads
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Palfi, Tamas; Reh, Stefan
2005-01-01
Analytical techniques have progressively become more sophisticated, and now we can consider the probabilistic nature of the entire space of random input variables on the lifetime reliability of brittle structures. This was demonstrated with NASA's CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code combined with the commercially available ANSYS/Probabilistic Design System (ANSYS/PDS), a probabilistic analysis tool that is an integral part of the ANSYS finite-element analysis program. ANSYS/PDS allows probabilistic loads, component geometry, and material properties to be considered in the finite-element analysis. CARES/Life predicts the time-dependent probability of failure of brittle material structures under generalized thermomechanical loading--such as that found in a turbine engine hot-section. Glenn researchers coupled ANSYS/PDS with CARES/Life to assess the effects of the stochastic variables of component geometry, loading, and material properties on the predicted life of the component for fully transient thermomechanical loading and cyclic loading.
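CARES/Life builds on Weibull statistics for brittle fracture; as a minimal illustration only (the actual code integrates over the finite-element stress field and handles transient loads and slow crack growth, none of which is shown), the sketch below evaluates a two-parameter Weibull fast-fracture failure probability for an element under uniform stress with assumed parameters.

```python
from math import exp

def weibull_pf(stress_mpa, sigma_0=400.0, m=10.0):
    """P_f = 1 - exp[-(sigma/sigma_0)^m] for an element of unit effective volume (illustrative)."""
    return 1.0 - exp(-((stress_mpa / sigma_0) ** m))

for s in (200, 300, 350, 400):
    print(f"stress {s} MPa -> Pf = {weibull_pf(s):.3e}")
```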
Cryptographic Key Management and Critical Risk Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K
The Department of Energy Office of Electricity Delivery and Energy Reliability (DOE-OE) Cyber Security for Energy Delivery Systems (CSEDS) industry led program (DE-FOA-0000359) entitled "Innovation for Increasing Cyber Security for Energy Delivery Systems (12CSEDS)," awarded a contract to Sypris Electronics LLC to develop a Cryptographic Key Management System for the smart grid (Scalable Key Management Solutions for Critical Infrastructure Protection). Oak Ridge National Laboratory (ORNL) and Sypris Electronics, LLC as a result of that award entered into a CRADA (NFE-11-03562) between ORNL and Sypris Electronics, LLC. ORNL provided its Cyber Security Econometrics System (CSES) as a tool to be modified and used as a metric to address risks and vulnerabilities in the management of cryptographic keys within the Advanced Metering Infrastructure (AMI) domain of the electric sector. ORNL concentrated our analysis on the AMI domain of which the National Electric Sector Cyber security Organization Resource (NESCOR) Working Group 1 (WG1) has documented 29 failure scenarios. The computational infrastructure of this metric involves system stakeholders, security requirements, system components and security threats. To compute this metric, we estimated the stakes that each stakeholder associates with each security requirement, as well as stochastic matrices that represent the probability of a threat to cause a component failure and the probability of a component failure to cause a security requirement violation. We applied this model to estimate the security of the AMI, by leveraging the recently established National Institute of Standards and Technology Interagency Report (NISTIR) 7628 guidelines for smart grid security and the International Electrotechnical Commission (IEC) 63351, Part 9 to identify the life cycle for cryptographic key management, resulting in a vector that assigned to each stakeholder an estimate of their average loss in terms of dollars per day of system operation. To further address probabilities of threats, information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. The strategy for the game was developed by analyzing five electric sector representative failure scenarios contained in the AMI functional domain from NESCOR WG1. From these five selected scenarios, we characterized them into three specific threat categories affecting confidentiality, integrity and availability (CIA). The analysis using our ABGT simulation demonstrated how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.
NASA Astrophysics Data System (ADS)
Kovács, G.
2009-09-01
The current status of (the lack of) understanding of the Blazhko effect is reviewed. We focus mostly on the various components of the failure of the models and touch upon the observational issues only to the degree needed for the theoretical background. Attention should be paid to models based on radial mode resonances, since they do not seem to have been fully explored yet, especially if we consider possible non-standard effects (e.g., heavy element enhancement). To aid further modeling efforts, we stress the need for accurate time-series spectral line analysis to reveal any possible non-radial component(s) and thereby allow non-radial modes to be included in (or excluded from) explanations of the Blazhko phenomenon.
Structural health monitoring apparatus and methodology
NASA Technical Reports Server (NTRS)
Giurgiutiu, Victor (Inventor); Yu, Lingyu (Inventor); Bottai, Giola Santoni (Inventor)
2011-01-01
Disclosed is an apparatus and methodology for structural health monitoring (SHM) in which smart devices interrogate structural components to predict failure, expedite needed repairs, and thus increase the useful life of those components. Piezoelectric wafer active sensors (PWAS) are applied to or integrated with structural components, and various data collected therefrom provide the ability to detect and locate cracking, corrosion, and disbonding through use of pitch-catch, pulse-echo, electro/mechanical impedance, and phased array technology. Stand-alone hardware and an associated software program are provided that allow selection of multiple types of SHM investigations as well as multiple types of data analysis to perform a comprehensive investigation of a structure.
CONFIG: Qualitative simulation tool for analyzing behavior of engineering devices
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Basham, Bryan D.; Harris, Richard A.
1987-01-01
To design failure management expert systems, engineers mentally analyze the effects of failures and procedures as they propagate through device configurations. CONFIG is a generic device modeling tool for use in discrete event simulation, to support such analyses. CONFIG permits graphical modeling of device configurations and qualitative specification of local operating modes of device components. Computation requirements are reduced by focussing the level of component description on operating modes and failure modes, and specifying qualitative ranges of variables relative to mode transition boundaries. Simulation processing occurs only when modes change or variables cross qualitative boundaries. Device models are built graphically, using components from libraries. Components are connected at ports by graphical relations that define data flow. The core of a component model is its state transition diagram, which specifies modes of operation and transitions among them.
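The CONFIG tool itself is not reproduced here; the toy sketch below illustrates the modeling style described above, a component characterized only by operating/failure modes and qualitative variable ranges, with work performed only when a range boundary is crossed or a mode transition fires. The valve example and its transition table are hypothetical.

```python
QUAL_RANGES = [("low", 0.0, 0.3), ("nominal", 0.3, 0.7), ("high", 0.7, 1.0)]

def qualify(x):
    for name, lo, hi in QUAL_RANGES:
        if lo <= x <= hi:
            return name
    return "out-of-range"

# Mode transition table for a hypothetical valve: (current mode, qualitative input) -> next mode.
TRANSITIONS = {
    ("closed", "high"): "open",
    ("open", "low"): "closed",
    ("open", "high"): "stuck-open",   # illustrative failure mode
}

def step(mode, command_signal):
    q = qualify(command_signal)
    return TRANSITIONS.get((mode, q), mode), q   # no table entry -> mode unchanged

mode = "closed"
for signal in (0.1, 0.8, 0.5, 0.9, 0.2):
    mode, q = step(mode, signal)
    print(f"signal={signal:.1f} ({q:>7s}) -> mode={mode}")
```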
Lee, Seung-Mi; Kim, Jea-Yeon; Byeon, Jai-Won
2018-09-01
Accidental failures and explosions of lithium-ion batteries have been reported in recent years. To determine the root causes and mechanisms of these failures from the perspective of material degradation, failure analysis was conducted for an intentionally shorted lithium-ion battery. The battery was subjected to electrical overcharging and mechanical pressing to simulate internal short-circuiting. After in situ measurement of the temperature increase during the short-circuiting of the electrodes, the disassembled battery components (i.e., the anode, cathode, and separator) were analyzed by scanning electron microscopy and energy-dispersive X-ray spectroscopy. Regardless of the simulated short-circuit method (mechanical or electrical), damage was observed in the shorted batteries. Numerous small cracks and chemical reaction products were observed on the electrode surface, along with pore shielding on the separator. The short-circuiting event increased the surface temperature of the battery to approximately 90 °C, which prompted the deterioration and decomposition of the electrolyte, thus affecting the overall battery performance; this was attributed to the decomposition of the lithium salt at 60 °C. Gas generation due to the breakdown of the electrolyte causes pressure to accumulate inside the cell, ultimately leading to electrolyte leakage.
Li, Wen-Chin; Harris, Don; Yu, Chung-San
2008-03-01
The human factors analysis and classification system (HFACS) is based upon Reason's organizational model of human error. HFACS was developed as an analytical framework for the investigation of the role of human error in aviation accidents, however, there is little empirical work formally describing the relationship between the components in the model. This research analyses 41 civil aviation accidents occurring to aircraft registered in the Republic of China (ROC) between 1999 and 2006 using the HFACS framework. The results show statistically significant relationships between errors at the operational level and organizational inadequacies at both the immediately adjacent level (preconditions for unsafe acts) and higher levels in the organization (unsafe supervision and organizational influences). The pattern of the 'routes to failure' observed in the data from this analysis of civil aircraft accidents show great similarities to that observed in the analysis of military accidents. This research lends further support to Reason's model that suggests that active failures are promoted by latent conditions in the organization. Statistical relationships linking fallible decisions in upper management levels were found to directly affect supervisory practices, thereby creating the psychological preconditions for unsafe acts and hence indirectly impairing the performance of pilots, ultimately leading to accidents.
The Range Safety Debris Catalog Analysis in Preparation for the Pad Abort One Flight Test
NASA Technical Reports Server (NTRS)
Kutty, Prasad M.; Pratt, William D.
2010-01-01
The Pad Abort One flight test of the Orion Abort Flight Test Program is currently under development with the goal of demonstrating the capability of the Launch Abort System. In the event of a launch failure, this system will propel the Crew Exploration Vehicle to safety. An essential component of this flight test is range safety, which ensures the security of range assets and personnel. A debris catalog analysis was done as part of a range safety data package delivered to the White Sands Missile Range in New Mexico where the test will be conducted. The analysis discusses the consequences of an overpressurization of the Abort Motor. The resulting structural failure was assumed to create a debris field of vehicle fragments that could potentially pose a hazard to the range. A statistical model was used to assemble the debris catalog of potential propellant fragments. Then, a thermodynamic, energy balance model was applied to the system in order to determine the imparted velocity to these propellant fragments. This analysis was conducted at four points along the flight trajectory to better understand the failure consequences over the entire flight. The methods used to perform this analysis are outlined in detail and the corresponding results are presented and discussed.
NASA Technical Reports Server (NTRS)
Gotch, S. M.
1986-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results corresponding to the Orbiter Electrical Power Generation (EPG)/Power Reactants Storage and Distribution (PRSD) System hardware are documented. The EPG/PRSD hardware is required for performing critical functions of cryogenic hydrogen and oxygen storage and distribution to the Fuel Cell Powerplants (FCP) and Atmospheric Revitalization Pressure Control Subsystem (ARPCS). Specifically, the EPG/PRSD hardware consists of the following: Hydrogen (H2) tanks; Oxygen (O2) tanks; H2 Relief Valve/Filter Packages (HRVFP); O2 Relief Valve/Filter Packages (ORVFP); H2 Valve Modules (HVM); O2 Valve Modules (OVM); and O2 and H2 lines, components, and fittings.
Oyanguren, Juana; Latorre García, Pedro María; Torcal Laguna, Jesús; Lekuona Goya, Iñaki; Rubio Martín, Susana; Maull Lafuente, Elena; Grandes, Gonzalo
2016-10-01
Heart failure management programs reduce hospitalizations. Some studies also show reduced mortality. The determinants of program success are unknown. The aim of the present study was to update our understanding of the reductions in mortality and readmissions produced by these programs, elucidate their components, and identify the factors determining program success. Systematic literature review (1990-2014; PubMed, EMBASE, CINAHL, Cochrane Library) and manual search of relevant journals. The studies were selected by 3 independent reviewers. Methodological quality was evaluated in a blinded manner by an external researcher (Jadad scale). These results were pooled using random effects models. Heterogeneity was evaluated with the I² statistic, and its explanatory factors were determined using metaregression analysis. Of the 3914 studies identified, 66 randomized controlled clinical trials were selected (18 countries, 13 535 patients). We determined the relative risks to be 0.88 for death (95% confidence interval [95%CI], 0.81-0.96; P < .002; I², 6.1%), 0.92 for all-cause readmissions (95%CI, 0.86-0.98; P < .011; I², 58.7%), and 0.80 for heart failure readmissions (95%CI, 0.71-0.90; P < .0001; I², 52.7%). Factors associated with program success were implementation after 2001, program location outside the United States, greater baseline use of angiotensin-converting enzyme inhibitors/angiotensin receptor blockers, a higher number of intervention team members and components, specialized heart failure cardiologists and nurses, protocol-driven education and its assessment, self-monitoring of signs and symptoms, detection of deterioration, flexible diuretic regimen, early care-seeking among patients and prompt health care response, psychosocial intervention, professional coordination, and program duration. We confirm the reductions in mortality and readmissions with heart failure management programs. Their success is associated with various structural and intervention variables. Copyright © 2016 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
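The pooling step behind such relative risks and I² values can be illustrated with a DerSimonian-Laird random-effects calculation on log relative risks; the sketch below uses made-up study inputs, not the review's 66 trials, and omits the metaregression.

```python
import math

log_rr = [math.log(rr) for rr in (0.85, 0.95, 0.78, 1.02, 0.88)]   # illustrative study RRs
se = [0.10, 0.08, 0.15, 0.12, 0.09]                                 # illustrative standard errors

w = [1 / s**2 for s in se]                                          # fixed-effect (inverse-variance) weights
fe_mean = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
q = sum(wi * (yi - fe_mean) ** 2 for wi, yi in zip(w, log_rr))      # Cochran's Q
df = len(log_rr) - 1
tau2 = max(0.0, (q - df) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))  # DerSimonian-Laird tau^2
i2 = (max(0.0, (q - df) / q) * 100) if q > 0 else 0.0

w_re = [1 / (s**2 + tau2) for s in se]                              # random-effects weights
re_mean = sum(wi * yi for wi, yi in zip(w_re, log_rr)) / sum(w_re)
re_se = math.sqrt(1 / sum(w_re))
lo, hi = math.exp(re_mean - 1.96 * re_se), math.exp(re_mean + 1.96 * re_se)
print(f"pooled RR = {math.exp(re_mean):.2f} (95% CI {lo:.2f}-{hi:.2f}), I² = {i2:.0f}%")
```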
Analytical Method to Evaluate Failure Potential During High-Risk Component Development
NASA Technical Reports Server (NTRS)
Tumer, Irem Y.; Stone, Robert B.; Clancy, Daniel (Technical Monitor)
2001-01-01
Communicating failure mode information during design and manufacturing is a crucial task for failure prevention. Most processes use Failure Modes and Effects types of analyses, as well as prior knowledge and experience, to determine the potential modes of failures a product might encounter during its lifetime. When new products are being considered and designed, this knowledge and information is expanded upon to help designers extrapolate based on their similarity with existing products and the potential design tradeoffs. This paper makes use of similarities and tradeoffs that exist between different failure modes based on the functionality of each component/product. In this light, a function-failure method is developed to help the design of new products with solutions for functions that eliminate or reduce the potential of a failure mode. The method is applied to a simplified rotating machinery example in this paper, and is proposed as a means to account for helicopter failure modes during design and production, addressing stringent safety and performance requirements for NASA applications.
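One plausible reading of the bookkeeping behind such a function-failure mapping is sketched below with hypothetical data (not the paper's rotating machinery or helicopter datasets): a component-function matrix multiplied by a function-failure matrix suggests which historical failure modes a candidate component may inherit through the functions it performs.

```python
import numpy as np

functions = ["transmit torque", "support load", "seal fluid"]
failure_modes = ["fatigue", "wear", "leakage"]

CF = np.array([[1, 1, 0],      # component A performs: transmit torque, support load
               [0, 1, 1]])     # component B performs: support load, seal fluid

FF = np.array([[4, 1, 0],      # illustrative historical counts: transmit torque -> fatigue, wear, leakage
               [2, 3, 0],      # support load
               [0, 1, 5]])     # seal fluid

component_failures = CF @ FF   # rows: components, columns: inherited failure-mode counts
for comp, row in zip(["A", "B"], component_failures):
    ranked = sorted(zip(failure_modes, row), key=lambda t: -t[1])
    print(f"component {comp}:", ranked)
```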
Enhanced Component Performance Study: Turbine-Driven Pumps 1998–2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-11-01
This report presents an enhanced performance evaluation of turbine-driven pumps (TDPs) at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for the component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The TDP failure modes considered are failure to start (FTS), failure to run less than or equal to one hour (FTR≤1H), failure to run more than one hour (FTR>1H), and normally running systems FTS and failure to run (FTR). The component reliability estimates and the reliability data are trended for the most recent 10-year period while yearly estimates for reliability are provided for the entire active period. Statistically significant increasing trends were identified for TDP unavailability, for frequency of start demands for standby TDPs, and for run hours in the first hour after start. Statistically significant decreasing trends were identified for start demands for normally running TDPs, and for run hours per reactor critical year for normally running TDPs.
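Per-demand estimates in component reliability studies of this kind are commonly produced with a Jeffreys Beta(0.5, 0.5) prior updated by observed failures and demands; that choice, and the counts below, are assumptions for illustration rather than values from this report.

```python
from scipy.stats import beta

failures, demands = 3, 850                 # illustrative failure-to-start counts
a, b = failures + 0.5, demands - failures + 0.5   # Jeffreys posterior Beta(f+0.5, d-f+0.5)

mean = a / (a + b)
lo, hi = beta.ppf([0.05, 0.95], a, b)      # 90% credible interval
print(f"FTS probability ~ {mean:.2e} (90% interval {lo:.2e} to {hi:.2e})")
```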
NASA Technical Reports Server (NTRS)
Campbell, Colin
2015-01-01
As the Shuttle/ISS EMU Program exceeds 35 years in duration and is still supporting the needs of the International Space Station (ISS), a critical benefit of such a long running program with thorough documentation of system and component failures is the ability to study and learn from those failures when considering the design of the next generation space suit. Study of the subject failure history leads to changes in the Advanced EMU Portable Life Support System (PLSS) schematic, selected component technologies, as well as the planned manner of ground testing. This paper reviews the Shuttle/ISS EMU failure history and discusses the implications to the AEMU PLSS.
Shuttle/ISS EMU Failure History and the Impact on Advanced EMU PLSS Design
NASA Technical Reports Server (NTRS)
Campbell, Colin
2011-01-01
As the Shuttle/ISS EMU Program exceeds 30 years in duration and is still successfully supporting the needs of the International Space Station (ISS), a critical benefit of such a long running program with thorough documentation of system and component failures is the ability to study and learn from those failures when considering the design of the next generation space suit. Study of the subject failure history leads to changes in the Advanced EMU Portable Life Support System (PLSS) schematic, selected component technologies, as well as the planned manner of ground testing. This paper reviews the Shuttle/ISS EMU failure history and discusses the implications to the AEMU PLSS.
Shuttle/ISS EMU Failure History and the Impact on Advanced EMU PLSS Design
NASA Technical Reports Server (NTRS)
Campbell, Colin
2015-01-01
As the Shuttle/ISS EMU Program exceeds 30 years in duration and is still supporting the needs of the International Space Station (ISS), a critical benefit of such a long running program with thorough documentation of system and component failures is the ability to study and learn from those failures when considering the design of the next generation space suit. Study of the subject failure history leads to changes in the Advanced EMU Portable Life Support System (PLSS) schematic, selected component technologies, as well as the planned manner of ground testing. This paper reviews the Shuttle/ISS EMU failure history and discusses the implications to the AEMU PLSS.
Radiographic methods of wear analysis in total hip arthroplasty.
Rahman, Luthfur; Cobb, Justin; Muirhead-Allwood, Sarah
2012-12-01
Polyethylene wear is an important factor in failure of total hip arthroplasty (THA). With increasing numbers of THAs being performed worldwide, particularly in younger patients, the burden of failure and revision arthroplasty is increasing, as well, along with associated costs and workload. Various radiographic methods of measuring polyethylene wear have been developed to assist in deciding when to monitor patients more closely and when to consider revision surgery. Radiographic methods that have been developed to measure polyethylene wear include manual and computer-assisted plain radiography, two- and three-dimensional techniques, and radiostereometric analysis. Some of these methods are important in both clinical and research settings. CT has the potential to provide additional information on component orientation and enables assessment of periprosthetic osteolysis, which is an important consequence of polyethylene wear.
Adhesive in the buckling failure of corrugated fiberboard : a finite element investigation
Adeeb A. Rahman; Said M. Abubakr
1998-01-01
This research study proposed to include the glue material in a finite element model that represents the actual geometry and material properties of a corrugated fiberboard. The model is a detailed representation of the different components of the structure (adhesive, linerboard, medium) to perform buckling analysis of corrugated structures under compressive loads. The...
10 CFR 34.101 - Notifications.
Code of Federal Regulations, 2010 CFR
2010-01-01
... written report to the NRC's Office of Federal and State Materials and Environmental Management Programs... shielded position and secure it in this position; or (3) Failure of any component (critical to safe... overexposure submitted under 10 CFR 20.2203 which involves failure of safety components of radiography...
Immunity-based detection, identification, and evaluation of aircraft sub-system failures
NASA Astrophysics Data System (ADS)
Moncayo, Hever Y.
This thesis describes the design, development, and flight-simulation testing of an integrated Artificial Immune System (AIS) for detection, identification, and evaluation of a wide variety of sensor, actuator, propulsion, and structural failures/damages, including the prediction of the achievable states and other limitations on performance and handling qualities. The AIS scheme achieves a high detection rate and a low number of false alarms for all the failure categories considered. Data collected using a motion-based flight simulator are used to define the self for an extended sub-region of the flight envelope. The NASA IFCS F-15 research aircraft model is used and represents a supersonic fighter which includes model-following adaptive control laws based on non-linear dynamic inversion and artificial neural network augmentation. The flight simulation tests are designed to analyze and demonstrate the performance of the immunity-based aircraft failure detection, identification and evaluation (FDIE) scheme. A general robustness analysis is also presented by determining the achievable limits for a desired performance in the presence of atmospheric perturbations. For the purpose of this work, the integrated AIS scheme is implemented based on three main components. The first component performs the detection when one of the considered failures is present in the system. The second component consists of the identification of the failure category and the classification according to the failed element. During the third phase, a general evaluation of the failure is performed with the estimation of the magnitude/severity of the failure and the prediction of its effect on reducing the flight envelope of the aircraft system. Solutions and alternatives to specific design issues of the AIS scheme, such as data clustering and empty space optimization, data fusion and duplication removal, definition of features, dimensionality reduction, and selection of cluster/detector shape are also analyzed in this thesis. These choices were shown to have an important effect on detection performance and are a critical aspect when designing the configuration of the AIS. The results presented in this thesis show that the AIS paradigm addresses directly the complexity and multi-dimensionality associated with a damaged aircraft dynamic response and provides the tools necessary for a comprehensive/integrated solution to the FDIE problem. Excellent detection, identification, and evaluation performance has been recorded for all types of failures considered. The implementation of the proposed AIS-based scheme can potentially have a significant impact on the safety of aircraft operation. The output information obtained from the scheme will be useful to increase pilot situational awareness and determine automated compensation.
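A toy negative-selection sketch of the AIS detection idea follows, in two dimensions with hypothetical data (the thesis works in a much higher-dimensional space of flight-envelope features and uses its own detector generation and optimization): random detectors are kept only if they do not cover any "self" (nominal-flight) sample, and a new measurement falling inside any detector is flagged as abnormal.

```python
import numpy as np

rng = np.random.default_rng(3)
self_samples = rng.normal(0.0, 0.15, size=(500, 2)) + 0.5        # nominal behavior cluster (assumed)
radius = 0.08

detectors = []
while len(detectors) < 200:
    candidate = rng.uniform(0.0, 1.0, size=2)
    if np.min(np.linalg.norm(self_samples - candidate, axis=1)) > radius:
        detectors.append(candidate)                               # keep detectors that avoid "self"
detectors = np.array(detectors)

def is_abnormal(x):
    return bool(np.any(np.linalg.norm(detectors - x, axis=1) < radius))

print("nominal point flagged?    ", is_abnormal(np.array([0.52, 0.48])))  # expected False
print("off-nominal point flagged?", is_abnormal(np.array([0.05, 0.90])))  # likely True
```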
NASA Technical Reports Server (NTRS)
Smart, Christian
1998-01-01
During 1997, a team from Hernandez Engineering, MSFC, Rocketdyne, Thiokol, Pratt & Whitney, and USBI completed the first phase of a two-year Quantitative Risk Assessment (QRA) of the Space Shuttle. The models for the Shuttle systems were entered and analyzed by a new QRA software package. This system, termed the Quantitative Risk Assessment System (QRAS), was designed by NASA and programmed by the University of Maryland. The software is a groundbreaking PC-based risk assessment package that allows the user to model complex systems in a hierarchical fashion. Features of the software include the ability to easily select quantifications of failure modes, draw Event Sequence Diagrams (ESDs) interactively, perform uncertainty and sensitivity analysis, and document the modeling. This paper illustrates both the approach used in modeling and the particular features of the software package. The software is general and can be used in a QRA of any complex engineered system. The author is the project lead for the modeling of the Space Shuttle Main Engines (SSMEs), and this paper focuses on the modeling completed for the SSMEs during 1997. In particular, the groundrules for the study, the databases used, the way in which ESDs were used to model catastrophic failure of the SSMEs, the methods used to quantify the failure rates, and how QRAS was used in the modeling effort are discussed. Groundrules were necessary to limit the scope of such a complex study, especially with regard to a liquid rocket engine such as the SSME, which can be shut down after ignition either on the pad or in flight. The SSME was divided into its constituent components and subsystems. These were ranked on the basis of the possibility of being upgraded and risk of catastrophic failure. Once this was done, the Shuttle program Hazard Analysis and Failure Modes and Effects Analysis (FMEA) were used to create a list of potential failure modes to be modeled. The groundrules and other criteria were used to screen out the many failure modes that did not contribute significantly to the catastrophic risk. The Hazard Analysis and FMEA for the SSME were also used to build ESDs that show the chain of events leading from the failure mode occurrence to one of the following end states: catastrophic failure, engine shutdown, or successful operation (successful with respect to the failure mode under consideration).
NASA Astrophysics Data System (ADS)
Wu, W.; Zhou, D. J.; Adamski, D. J.; Young, D.; Wang, Y. W.
2017-09-01
A study of die wear was performed using an uncoated dual phase, 1,180 MPa ultimate tensile strength steel (DP1180) in a progressive die. The objectives of the current study are to evaluate the die durability of various tooling materials and coatings for forming operations on uncoated DP1180 steel and to update the OEM’s die standards based on the experimental results in a real production environment. In total, 100,800 hits were performed in manufacturing production conditions, where 33 die inserts with combinations of 10 die materials and 9 coatings were investigated. The die inserts were evaluated for surface wear using scanning electron microscopy and characterized in terms of die material and/or coating defects, failure mode, and failure initiation and propagation. Surface roughness of the formed parts was characterized using a WYKO NT110 machine. The analysis of the die inserts and formed parts, combined with the failure modes and service life, provides a basis for die material and coating selection for forming AHSS components. The conclusions of this study will guide the selection of die materials and coatings for high-volume production of AHSS components.
Risk Importance Measures in the Designand Operation of Nuclear Power Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vrbanic I.; Samanta P.; Basic, I
This monograph presents and discusses risk importance measures as quantified by the probabilistic risk assessment (PRA) models of nuclear power plants (NPPs) developed according to the current standards and practices. Usually, PRA tools calculate risk importance measures related to a single "basic event" representing a particular failure mode. This is, then, reflected in many current PRA applications. The monograph focuses on the concept of "component-level" importance measures that take into account different failure modes of the component including common-cause failures (CCFs). In the opening sections the role of risk assessment in safety analysis of an NPP is introduced and a discussion is given of "traditional", mainly deterministic, design principles which have been established to assign a level of importance to a particular system, structure or component. This is followed by an overview of the main risk importance measures for risk increase and risk decrease from current PRAs. Basic relations which exist among the measures are shown. Some of the current practical applications of risk importance measures from the field of NPP design, operation and regulation are discussed. The core of the monograph provides a discussion of the theoretical background and practical aspects of the main risk importance measures at the level of the "component" as modeled in a PRA, starting from the simplest case, a single basic event, and going toward more complex cases with multiple basic events and involvement in CCF groups. The intent is to express the component-level importance measures via the importance measures and probabilities of the underlying single basic events, which are the inputs readily available from a PRA model and its results. Formulas are derived and discussed for some typical cases. The formulas and their results are demonstrated through some practical examples, done by means of a simplified PRA model developed in and run by the RiskSpectrum tool, which are presented in the appendices. The monograph concludes with a discussion of limitations of the use of risk importance measures and a summary of the component-level importance cases evaluated.
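A textbook-level sketch of the single-basic-event measures the monograph starts from is given below (the component-level and CCF extensions are not reproduced): Fussell-Vesely, risk achievement worth, and Birnbaum importance computed for each basic event of a hypothetical two-train system with illustrative probabilities.

```python
p = {"P1": 1e-3, "V1": 5e-4, "P2": 1e-3, "V2": 5e-4}   # illustrative basic-event probabilities

def system_unavailability(prob):
    train1 = 1 - (1 - prob["P1"]) * (1 - prob["V1"])   # train 1 fails if its pump or valve fails
    train2 = 1 - (1 - prob["P2"]) * (1 - prob["V2"])
    return train1 * train2                             # system fails only if both trains fail

base = system_unavailability(p)
for event in p:
    p_one = dict(p, **{event: 1.0})                    # event assumed certain
    p_zero = dict(p, **{event: 0.0})                   # event assumed never to occur
    birnbaum = system_unavailability(p_one) - system_unavailability(p_zero)
    raw = system_unavailability(p_one) / base          # risk achievement worth
    fv = (base - system_unavailability(p_zero)) / base # Fussell-Vesely (fractional contribution)
    print(f"{event}: FV={fv:.3f}  RAW={raw:.1f}  Birnbaum={birnbaum:.2e}")
```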
Failure detection and identification
NASA Technical Reports Server (NTRS)
Massoumnia, Mohammad-Ali; Verghese, George C.; Willsky, Alan S.
1989-01-01
Using the geometric concept of an unobservability subspace, a solution is given to the problem of detecting and identifying control system component failures in linear, time-invariant systems. Conditions are developed for the existence of a causal, linear, time-invariant processor that can detect and uniquely identify a component failure, first for the case where components can fail simultaneously, and then for the case where they fail only one at a time. Explicit design algorithms are provided when these conditions are satisfied. In addition to time-domain solvability conditions, frequency-domain interpretations of the results are given, and connections are drawn with results already available in the literature.
Enhanced Component Performance Study: Air-Operated Valves 1998-2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-11-01
This report presents a performance evaluation of air-operated valves (AOVs) at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for the component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The AOV failure modes considered are failure-to-open/close, failure to operate or control, and spurious operation. The component reliability estimates and the reliability data are trended for the most recent 10-year period, while yearly estimates for reliability are provided for the entire active period. One statistically significant trend was observed in the AOV data: The frequency of demands per reactor year for valves recording the fail-to-open or fail-to-close failure modes, for high-demand valves (those with greater than twenty demands per year), was found to be decreasing. The decrease was about three percent over the ten year period trended.
Tenofovir in second-line ART in Zambia and South Africa: Collaborative analysis of cohort studies
Wandeler, Gilles; Keiser, Olivia; Mulenga, Lloyd; Hoffmann, Christopher J; Wood, Robin; Chaweza, Thom; Brennan, Alana; Prozesky, Hans; Garone, Daniela; Giddy, Janet; Chimbetete, Cleophas; Boulle, Andrew; Egger, Matthias
2012-01-01
Objectives Tenofovir (TDF) is increasingly used in second-line antiretroviral treatment (ART) in sub-Saharan Africa. We compared outcomes of second-line ART containing and not containing TDF in cohort studies from Zambia and the Republic of South Africa (RSA). Methods Patients aged ≥ 16 years starting protease inhibitor-based second-line ART in Zambia (1 cohort) and RSA (5 cohorts) were included. We compared mortality, immunological failure (all cohorts) and virological failure (RSA only) between patients receiving and not receiving TDF. Competing risk models and Cox models adjusted for age, sex, CD4 count, time on first-line ART and calendar year were used to analyse mortality and treatment failure, respectively. Hazard ratios (HRs) were combined in fixed-effects meta-analysis. Findings 1,687 patients from Zambia and 1,556 patients from RSA, including 1,350 (80.0%) and 206 (13.2%) patients starting TDF, were followed over 4,471 person-years. Patients on TDF were more likely to have started second-line ART in recent years, and had slightly higher baseline CD4 counts than patients not on TDF. Overall 127 patients died, 532 were lost to follow-up and 240 patients developed immunological failure. In RSA 94 patients had virologic failure. Combined HRs comparing tenofovir with other regimens were 0.60 (95% CI 0.41–0.87) for immunologic failure and 0.63 (0.38–1.05) for mortality. The HR for virologic failure in RSA was 0.28 (0.09–0.90). Conclusions In this observational study patients on TDF-containing second-line ART were less likely to develop treatment failure than patients on other regimens. TDF seems to be an effective component of second-line ART in southern Africa. PMID:22743595
An Experimental Study of Launch Vehicle Propellant Tank Fragmentation
NASA Technical Reports Server (NTRS)
Richardson, Erin; Jackson, Austin; Hays, Michael; Bangham, Mike; Blackwood, James; Skinner, Troy; Richman, Ben
2014-01-01
In order to better understand launch vehicle abort environments, Bangham Engineering Inc. (BEi) built a test assembly that fails sample materials (steel and aluminum plates of various alloys and thicknesses) under quasi-realistic vehicle failure conditions. Samples are exposed to pressures similar to those expected in vehicle failure scenarios and filmed at high speed to increase understanding of complex fracture mechanics. After failure, the fragments of each test sample are collected, catalogued and reconstructed for further study. Post-test analysis shows that aluminum samples consistently produce fewer fragments than steel samples of similar thickness and at similar failure pressures. Video analysis shows that there are several failure 'patterns' that can be observed for all test samples based on configuration. Fragment velocities are also measured from high speed video data. Sample thickness and material are analyzed for trends in failure pressure. Testing is also done with cryogenic and noncryogenic liquid loading on the samples. It is determined that liquid loading and cryogenic temperatures can decrease material fragmentation for sub-flight thicknesses. A method is developed for capture and collection of fragments that is greater than 97 percent effective in recovering sample mass, addressing the generation of tiny fragments. Currently, samples tested do not match actual launch vehicle propellant tank material thicknesses because of size constraints on test assembly, but test findings are used to inform the design and build of another, larger test assembly with the purpose of testing actual vehicle flight materials that include structural components such as iso-grid and friction stir welds.
NASA Technical Reports Server (NTRS)
Ko, William L.; Chen, Tony
2006-01-01
The previously developed Ko closed-form aging theory has been reformulated into a more compact mathematical form for easier application. A new equivalent loading theory and empirical loading theories have also been developed and incorporated into the revised Ko aging theory for the prediction of a safe operational life of airborne failure-critical structural components. The new set of aging and loading theories was applied to predict the safe number of flights for the B-52B aircraft to carry a launch vehicle, the structural life of critical components consumed by load excursion to proof load value, and the ground-sitting life of B-52B pylon failure-critical structural components. A special life prediction method was developed for the preflight predictions of operational life of failure-critical structural components of the B-52H pylon system, for which no flight data are available.
NASA Astrophysics Data System (ADS)
Murrad, Muhamad; Leong, M. Salman
Based on the experiences of the Malaysian Armed Forces (MAF), failure of the main rotor gearbox (MRGB) was one of the major contributing factors to helicopter breakdowns. Even though vibration and oil analysis are effective techniques for monitoring the health of helicopter components, these two techniques were rarely combined to form an effective assessment tool in the MAF. Results of the oil analysis were often used only for the oil changing schedule, while assessments of MRGB condition were mainly based on overall vibration readings. A study group was formed and given a mandate to improve the maintenance strategy of the S61-A4 helicopter fleet in the MAF. The improvement consisted of a structured approach to the reassessment/redefinition of suitable maintenance actions that should be taken for the MRGB. Basic and enhanced tools for condition monitoring (CM) are investigated to address the predominant failures of the MRGB. Quantitative accelerated life testing (QALT) was considered in this work with the intent to obtain the required reliability information in a shorter time than with tests under normal stress conditions. These tests, when performed correctly, can provide valuable information about MRGB performance under normal operating conditions, which enables maintenance personnel to make decisions more quickly, accurately and economically. The time-to-failure and probability of failure information of the MRGB were generated by applying QALT analysis principles. This study is anticipated to make a dramatic change in the MAF's approach to CM, bringing significant savings and various benefits to the MAF.
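As a generic illustration of quantitative accelerated life testing (not the method or data of the MAF study), the sketch below scales accelerated-test failure times to use conditions with an assumed Arrhenius acceleration factor and then fits a two-parameter Weibull by median-rank regression; the activation energy, temperatures and failure times are all assumptions.

# Minimal QALT-style sketch: Arrhenius acceleration factor plus median-rank
# Weibull fit (all numbers illustrative).
import numpy as np

K_BOLTZ = 8.617e-5            # eV/K
E_A = 0.7                     # assumed activation energy, eV
T_USE, T_ACC = 323.0, 373.0   # use / accelerated temperatures, K

af = np.exp(E_A / K_BOLTZ * (1.0 / T_USE - 1.0 / T_ACC))        # acceleration factor
t_acc = np.sort(np.array([180.0, 260.0, 310.0, 420.0, 500.0]))  # test hours
t_use = af * t_acc                                               # equivalent use hours

n = len(t_use)
ranks = (np.arange(1, n + 1) - 0.3) / (n + 0.4)                  # Benard median ranks
x = np.log(t_use)
y = np.log(-np.log(1.0 - ranks))
beta, c = np.polyfit(x, y, 1)                                    # Weibull shape
eta = np.exp(-c / beta)                                          # Weibull scale
print(f"AF={af:.1f}, beta={beta:.2f}, eta={eta:.0f} h")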
Callaghan, John J; O'Rourke, Michael R; Goetz, Devon D; Lewallen, David G; Johnston, Richard C; Capello, William N
2004-12-01
Constrained acetabular components have been used to treat certain cases of intraoperative instability and postoperative dislocation after total hip arthroplasty. We report our experience with a tripolar constrained component used in these situations since 1988. The outcomes of the cases where this component was used were analyzed for component failure, component loosening, and osteolysis. At average 10-year followup, for cases treated for intraoperative instability (2 cases) or postoperative dislocation (4 cases), the component failure rate was 6% (6 of 101 hips in 5 patients). For cases where the constrained liner was cemented into a fixed cementless acetabular shell, the failure rate was 7% (2 of 31 hips in 2 patients) at 3.9-year average followup. Use of a constrained liner was not associated with an increased osteolysis or aseptic loosening rate. This tripolar constrained acetabular liner provided total hip arthroplasty construct stability in most cases in which it was used for intraoperative instability or postoperative dislocation.
NASA Technical Reports Server (NTRS)
Soeder, James F.; Pinero, Luis; Scheidegger, Robert; Dunning, John; Birchenough, Art
2012-01-01
NASA's Evolutionary Xenon Thruster (NEXT) project is developing an advanced ion propulsion system for future NASA missions for solar system exploration. A critical element of the propulsion system is the Power Processing Unit (PPU), which supplies regulated power to the key components of the thruster. The PPU contains six different power supplies, including the beam, discharge, discharge heater, neutralizer, neutralizer heater, and accelerator supplies. The beam supply is the largest and processes up to 93+% of the power. The NEXT PPU had been operated for approximately 200+ hours and has experienced a series of three capacitor failures in the beam supply. The capacitors are in the same, nominally non-critical location: the input filter capacitor to a full wave switching inverter. The three failures occurred after about 20, 30, and 135 hours of operation. This paper provides background on the NEXT PPU and the capacitor failures. It discusses the failure investigation approach, the beam supply power switching topology and its operating modes, capacitor characteristics and circuit testing. Finally, it identifies the root cause of the failures to be the unusual confluence of circuit switching frequency, the physical layout of the power circuits, and the characteristics of the capacitor.
NASA Technical Reports Server (NTRS)
Soeder, James F.; Scheidegger, Robert J.; Pinero, Luis R.; Birchenough, Arthur J.; Dunning, John W.
2012-01-01
NASA's Evolutionary Xenon Thruster (NEXT) project is developing an advanced ion propulsion system for future NASA missions for solar system exploration. A critical element of the propulsion system is the Power Processing Unit (PPU), which supplies regulated power to the key components of the thruster. The PPU contains six different power supplies, including the beam, discharge, discharge heater, neutralizer, neutralizer heater, and accelerator supplies. The beam supply is the largest and processes up to 93+% of the power. The NEXT PPU had been operated for approximately 200+ hr and has experienced a series of three capacitor failures in the beam supply. The capacitors are in the same, nominally non-critical location: the input filter capacitor to a full wave switching inverter. The three failures occurred after about 20, 30, and 135 hr of operation. This paper provides background on the NEXT PPU and the capacitor failures. It discusses the failure investigation approach, the beam supply power switching topology and its operating modes, capacitor characteristics and circuit testing. Finally, it identifies the root cause of the failures to be the unusual confluence of circuit switching frequency, the physical layout of the power circuits, and the characteristics of the capacitor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meneses, Esteban; Ni, Xiang; Jones, Terry R
The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that glooms the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terlip, Danny
2016-03-28
Diaphragm compressors have become the primary source of on-site hydrogen compression for hydrogen fueling stations around the world. NREL and PDC have undertaken two studies aimed at improving hydrogen compressor operation and reducing the cost contribution to dispensed fuel. The first study identified the failure mechanisms associated with mechanical compression to reduce the maintenance and down-time. The second study will investigate novel station configurations to maximize hydrogen usage and compressor lifetime. This partnership will allow for the simulation of operations in the field and a thorough analysis of the component failure to improve the reliability of diaphragm compression.
Independent Orbiter Assessment (IOA): Assessment of the mechanical actuation subsystem, volume 2
NASA Technical Reports Server (NTRS)
Bradway, M. W.; Slaughter, W. T.
1988-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine draft failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the proposed Post 51-L NASA FMEA/CIL baseline that was available. A resolution of each discrepancy from the comparison was provided through additional analysis as required. These discrepancies were flagged as issues, and recommendations were made based on the FMEA data available at the time. This report documents the results of that comparison for the Orbiter Mechanical Actuation System (MAS) hardware. Specifically, the MAS hardware consists of the following components: Air Data Probe (ADP); Elevon Seal Panel (ESP); External Tank Umbilical (ETU); Ku-Band Deploy (KBD); Payload Bay Doors (PBD); Payload Bay Radiators (PBR); Personnel Hatches (PH); Vent Door Mechanism (VDM); and Startracker Door Mechanism (SDM). Criticality was assigned based upon the severity of the effect for each failure mode. Volume 2 continues the presentation of IOA analysis worksheets and contains the potential critical items list, detailed analysis, and NASA FMEA/CIL to IOA worksheet cross reference and recommendations.
Residual stress prediction in a powder bed fusion manufactured Ti6Al4V hip stem
NASA Astrophysics Data System (ADS)
Barrett, Richard A.; Etienne, Titouan; Duddy, Cormac; Harrison, Noel M.
2017-10-01
Powder bed fusion (PBF) is a category of additive manufacturing (AM) that is particularly suitable for the production of 3D metallic components. In PBF, only material in the current build layer is at the required melt temperature, with the previously melted and solidified layers reducing in temperature, thus generating a significant thermal gradient within the metallic component, particularly for laser based PBF components. The internal thermal stresses are subsequently relieved in a post-processing heat-treatment step. Failure to adequately remove these stresses can result in cracking and component failure. A prototype hip stem was manufactured from Ti6Al4V via laser PBF but was found to have fractured during overseas shipping. This study examines the evolution of thermal stresses during the laser PBF manufacturing and heat treatment processes of the hip stem in a 2D finite element analysis (FEA) and compares it to an electron beam PBF process. A custom written script for the automatic conversion of a gross geometry finite element model into a thin layer-by-layer finite element model was developed. The build process, heat treatment (for laser PBF) and the subsequent cooling were simulated at the component level. The results demonstrate the effectiveness of the heat treatment in reducing PBF induced thermal stresses, and the concentration of stresses in the region that fractured.
System Lifetimes, The Memoryless Property, Euler's Constant, and Pi
ERIC Educational Resources Information Center
Agarwal, Anurag; Marengo, James E.; Romero, Likin Simon
2013-01-01
A "k"-out-of-"n" system functions as long as at least "k" of its "n" components remain operational. Assuming that component failure times are independent and identically distributed exponential random variables, we find the distribution of system failure time. After some examples, we find the limiting…
Failure Analysis of Cracked FS-85 Tubing and ASTAR-811C End Caps
DOE Office of Scientific and Technical Information (OSTI.GOV)
ME Petrichek
2006-02-09
Failure analyses were performed on cracked FS-85 tubing and ASTAR-811C end caps which had been fabricated as components of biaxial creep specimens meant to support materials testing for the NR Space program. During the failure analyses of the cracked FS-85 tubing, it was determined that the failure potentially could be due to two effects: possible copper contamination from the EDM (electro-discharge machined) recast layer and/or an insufficient solution anneal. To prevent similar failures in the future, a more formal analysis should be done after each processing step to ensure the quality of the material before further processing. During machining of the ASTAR-811C rod to form end caps for biaxial creep specimens, linear defects were observed along the center portion of the end caps. These defects were only found in material that was processed from the top portion of the ingot. The linear defects were attributed to a probable residual ingot pipe that was not removed from the ingot. During the subsequent processing of the ingot to rod, the processing temperatures were not high enough to allow self-healing of the ingot's residual pipe defect. To prevent this from occurring in the future, it is necessary to ensure that complete removal of the as-melted ingot pipe is verified by suitable non-destructive evaluation (NDE).
Compression Strength of Composite Primary Structural Components
NASA Technical Reports Server (NTRS)
Johnson, Eric R.
1998-01-01
Research conducted under NASA Grant NAG-1-537 focussed on the response and failure of advanced composite material structures for application to aircraft. Both experimental and analytical methods were utilized to study the fundamental mechanics of the response and failure of selected structural components subjected to quasi-static loads. Most of the structural components studied were thin-walled elements subject to compression, such that they exhibited buckling and postbuckling responses prior to catastrophic failure. Consequently, the analyses were geometrically nonlinear. Structural components studied were dropped-ply laminated plates, stiffener crippling, pressure pillowing of orthogonally stiffened cylindrical shells, axisymmetric response of pressure domes, and the static crush of semi-circular frames. Failure of these components motivated analytical studies on an interlaminar stress postprocessor for plate and shell finite element computer codes, and global/local modeling strategies in finite element modeling. These activities are summarized in the following section. References to literature published under the grant are listed on pages 5 to 10 by a letter followed by a number under the categories of journal publications, conference publications, presentations, and reports. These references are indicated in the text by their letter and number as a superscript.
NASA Astrophysics Data System (ADS)
Pantazopoulos, G.; Vazdirvanidis, A.
2014-03-01
Emphasis is placed on the evaluation of corrosion failures of copper and machineable brass alloys during service. Typical corrosion failures of the presented case histories mainly focussed on stress corrosion cracking and dezincification that acted as the major degradation mechanisms in components used in piping and water supply systems. SEM assessment, coupled with EDS spectroscopy, revealed the main cracking modes together with the root-source(s) that are responsible for the damage initiation and evolution. In addition, fracture surface observations contributed to the identification of the incurred fracture mechanisms and potential environmental issues that stimulated crack initiation and propagation. Very frequently, the detection of chlorides among the corrosion products served as a suggestive evidence of the influence of working environment on passive layer destabilisation and metal dissolution.
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.
2017-01-01
Field Programmable Gate Arrays (FPGAs) integrated circuits (IC) are one of the key electronic components in today's sophisticated launch and space vehicle complex avionic systems, largely due to their superb reprogrammable and reconfigurable capabilities combined with relatively low non-recurring engineering costs (NRE) and short design cycle. Consequently, FPGAs are prevalent ICs in communication protocols and control signal commands. This paper will identify reliability concerns and high level guidelines to estimate FPGA total failure rates in a launch vehicle application. The paper will discuss hardware, hardware description language, and radiation induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion will discuss the high level FPGA programming languages and software/code reliability growth. The radiation portion will discuss FPGA susceptibility to space environment radiation.
Software reliability experiments data analysis and investigation
NASA Technical Reports Server (NTRS)
Walker, J. Leslie; Caglayan, Alper K.
1991-01-01
The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.
Test and Analysis of Sub-Components of Aluminum-Lithium Alloy Cylinders
NASA Technical Reports Server (NTRS)
Haynie, Waddy T.; Chunchu, Prasad B.; Satyanarayana, Arunkumar; Hilburger, Mark W.; Smith, Russell W.
2012-01-01
Integrally machined blade-stiffened panels subjected to an axial compressive load were tested and analyzed to observe the buckling, crippling, and postcrippling response of the panels. The panels were fabricated from aluminum-lithium alloys 2195 and 2050, and both alloys have reduced material properties in the short transverse material direction. The tests were designed to capture a failure mode characterized by the stiffener separating from the panel in the postbuckling range. This failure mode is attributed to the reduced properties in the short transverse direction. Full-field measurements of displacements and strains using three-dimensional digital image correlation systems and local measurements using strain gages were used to capture the deformation of the panel leading up to the failure of the panel for specimens fabricated from 2195. High-speed cameras were used to capture the initiation of the failure. Finite element models were developed using an isotropic strain-hardening material model. Good agreement was observed between the measured and predicted responses for both alloys.
Common-Cause Failure Treatment in Event Assessment: Basis for a Proposed New Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana Kelly; Song-Hua Shen; Gary DeMoss
2010-06-01
Event assessment is an application of probabilistic risk assessment in which observed equipment failures and outages are mapped into the risk model to obtain a numerical estimate of the event's risk significance. In this paper, we focus on retrospective assessments to estimate the risk significance of degraded conditions such as equipment failure accompanied by a deficiency in a process such as maintenance practices. In modeling such events, the basic events in the risk model that are associated with observed failures and other off-normal situations are typically configured to be failed, while those associated with observed successes and unchallenged components are assumed capable of failing, typically with their baseline probabilities. This is referred to as the failure memory approach to event assessment. The conditioning of common-cause failure probabilities for the common cause component group associated with the observed component failure is particularly important, as it is insufficient to simply leave these probabilities at their baseline values, and doing so may result in a significant underestimate of risk significance for the event. Past work in this area has focused on the mathematics of the adjustment. In this paper, we review the Basic Parameter Model for common-cause failure, which underlies most current risk modelling, discuss the limitations of this model with respect to event assessment, and introduce a proposed new framework for common-cause failure, which uses a Bayesian network to model underlying causes of failure, and which has the potential to overcome the limitations of the Basic Parameter Model with respect to event assessment.
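A minimal sketch of the failure memory approach described above (illustrative cut sets and probabilities only; this is not the Bayesian-network treatment the paper proposes): observed failures are set to probability one, observed successes stay at their baseline values, and the conditional risk is compared with the baseline.

# Minimal sketch of "failure memory" event assessment on a cut-set model.
import math

cut_sets = [frozenset({"PUMP_A", "PUMP_B"}),       # includes a CCF-like pair
            frozenset({"PUMP_A", "VALVE_C"}),
            frozenset({"DG_1"})]
baseline = {"PUMP_A": 2e-3, "PUMP_B": 2e-3, "VALVE_C": 1e-3, "DG_1": 5e-4}

def risk(prob):
    """Rare-event approximation of the top-event probability."""
    return sum(math.prod(prob[e] for e in cs) for cs in cut_sets)

observed_failed = {"PUMP_A"}                       # component seen failed in the event
conditional = {e: (1.0 if e in observed_failed else p)
               for e, p in baseline.items()}       # successes stay at baseline

print("baseline risk :", risk(baseline))
print("conditional   :", risk(conditional))
print("risk increase :", risk(conditional) / risk(baseline))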
PRA and Risk Informed Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernsen, Sidney A.; Simonen, Fredric A.; Balkey, Kenneth R.
2006-01-01
The Boiler and Pressure Vessel Code (BPVC) of the American Society of Mechanical Engineers (ASME) has introduced a risk based approach into Section XI that covers Rules for Inservice Inspection of Nuclear Power Plant Components. The risk based approach requires application of probabilistic risk assessments (PRA). Because no industry consensus standard existed for PRAs, ASME has developed a standard to evaluate the quality level of an available PRA needed to support a given risk based application. The paper describes the PRA standard, Section XI application of PRAs, and plans for broader applications of PRAs to other ASME nuclear codes and standards. The paper addresses several specific topics of interest to Section XI. Important considerations are the special methods (surrogate components) used to overcome the lack of PRA treatment of passive components. The approach allows calculations of conditional core damage probabilities both for component failures that cause initiating events and for failures in standby systems that decrease the availability of these systems. The paper relates the explicit risk based methods of the new Section XI code cases to the implicit consideration of risk used in the development of Section XI. Other topics include the needed interactions of ISI engineers, plant operating staff, PRA specialists, and members of expert panels that review the risk based programs.
Continuum Damage Mechanics Used to Predict the Creep Life of Monolithic Ceramics
NASA Technical Reports Server (NTRS)
Powers, Lynn M.; Jadaan, Osama M.
1998-01-01
Significant improvements in propulsion and power generation for the next century will require revolutionary advances in high-temperature materials and structural design. Advanced ceramics are candidate materials for these elevated temperature applications. High-temperature and long-duration applications of monolithic ceramics can place their failure mode in the creep rupture regime. An analytical methodology in the form of the integrated design program-Ceramics Analysis and Reliability Evaluation of Structures/Creep (CARES/Creep) has been developed by the NASA Lewis Research Center to predict the life of ceramic structural components subjected to creep rupture conditions. This program utilizes commercially available finite element packages and takes into account the transient state of stress and creep strain distributions (stress relaxation as well as the asymmetric response to tension and compression). The creep life of a component is discretized into short time steps, during which the stress distribution is assumed constant. Then, the damage is calculated for each time step on the basis of a modified Monkman-Grant (MMG) creep rupture criterion. The cumulative damage is subsequently calculated as time elapses in a manner similar to Miner's rule for cyclic fatigue loading. Failure is assumed to occur when the normalized cumulative damage at any point in the component reaches unity. The corresponding time is the creep rupture life for that component.
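A minimal sketch of the damage-summation scheme just described, under stated assumptions: the rupture-time law below is a generic power-law stand-in for the modified Monkman-Grant criterion, the stress history is invented, damage accrues as dt / t_rupture per step, and failure is declared when the cumulative damage reaches unity.

# Minimal sketch of time-discretized creep damage accumulation.
def t_rupture(stress_mpa, c=1.0e4, sigma0=100.0, n=6.0):
    """Illustrative power-law rupture time (hours); a stand-in for the
    modified Monkman-Grant criterion used by CARES/Creep."""
    return c * (stress_mpa / sigma0) ** (-n)

def creep_life(stress_history, dt=10.0):
    """stress_history: stress (MPa) at the critical point for each time step.
    Returns the elapsed time at which cumulative damage first reaches unity."""
    damage, t = 0.0, 0.0
    for sigma in stress_history:
        damage += dt / t_rupture(sigma)       # Miner-like damage increment
        t += dt
        if damage >= 1.0:
            return t
    return None                               # survives the analysed history

# Example: stress relaxes from 130 MPa toward 100 MPa early in the history.
history = [130.0 - 30.0 * min(i / 50.0, 1.0) for i in range(2000)]
print(creep_life(history))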
Code of Federal Regulations, 2014 CFR
2014-01-01
... pool slide shall be such that no structural failures of any component part shall cause failures of any... such fasteners shall not cause a failure of the tread under the ladder loading conditions specified in... without failure or permanent deformation. (d) Handrails. Swimming pool slide ladders shall be equipped...
Code of Federal Regulations, 2012 CFR
2012-01-01
... pool slide shall be such that no structural failures of any component part shall cause failures of any... such fasteners shall not cause a failure of the tread under the ladder loading conditions specified in... without failure or permanent deformation. (d) Handrails. Swimming pool slide ladders shall be equipped...
Develop advanced nonlinear signal analysis topographical mapping system
NASA Technical Reports Server (NTRS)
Jong, Jen-Yi
1993-01-01
This study will provide timely assessment of SSME component operational status, identify probable causes of malfunction, and indicate feasible engineering solutions. The final result of this program will yield an advanced nonlinear signal analysis topographical mapping system (ATMS) of nonlinear and nonstationary spectral analysis software package integrated with the Compressed SSME TOPO Data Base (CSTDB) on the same platform. This system will allow NASA engineers to retrieve any unique defect signatures and trends associated with different failure modes and anomalous phenomena over the entire SSME test history across turbopump families.
Method of detecting leakage of reactor core components of liquid metal cooled fast reactors
Holt, Fred E.; Cash, Robert J.; Schenter, Robert E.
1977-01-01
A method of detecting the failure of a sealed non-fueled core component of a liquid-metal cooled fast reactor having an inert cover gas. A gas mixture is incorporated in the component which includes Xenon-124; under neutron irradiation, Xenon-124 is converted to radioactive Xenon-125. The cover gas is scanned by a radiation detector. The occurrence of 188 keV gamma radiation and/or other identifying gamma radiation energy levels indicates the presence of Xenon-125 and therefore leakage of a component. Similarly, Xe-126, which transmutes to Xe-127, and Kr-84, which produces Kr-85m, can be used for detection of leakage. Different components are charged with mixtures including different ratios of isotopes other than Xenon-124. On detection of the identifying radiation, the cover gas is subjected to mass spectroscopic analysis to locate the leaking component.
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.
2014-01-01
This report documents the results of spiral bevel gear rig tests performed under a NASA Space Act Agreement with the Federal Aviation Administration (FAA) to support validation and demonstration of rotorcraft Health and Usage Monitoring Systems (HUMS) for maintenance credits via FAA Advisory Circular (AC) 29-2C, Section MG-15, Airworthiness Approval of Rotorcraft (HUMS) (Ref. 1). The overarching goal of this work was to determine a method to validate condition indicators in the lab that better represent their response to faults in the field. Using existing in-service helicopter HUMS flight data from faulted spiral bevel gears as a "Case Study," to better understand the differences between both systems, and the availability of the NASA Glenn Spiral Bevel Gear Fatigue Rig, a plan was put in place to design, fabricate and test comparable gear sets with comparable failure modes within the constraints of the test rig. The research objectives of the rig tests were to evaluate the capability of detecting gear surface pitting fatigue and other generated failure modes on spiral bevel gear teeth using gear condition indicators currently used in fielded HUMS. Nineteen final design gear sets were tested. Tables were generated for each test, summarizing the failure modes observed on the gear teeth for each test during each inspection interval and color coded based on damage mode per inspection photos. Gear condition indicators (CI) Figure of Merit 4 (FM4), Root Mean Square (RMS), +/- 1 Sideband Index (SI1) and +/- 3 Sideband Index (SI3) were plotted along with rig operational parameters. Statistical tables of the means and standard deviations were calculated within inspection intervals for each CI. As testing progressed, it became clear that certain condition indicators were more sensitive to a specific component and failure mode. These tests were clustered together for further analysis. Maintenance actions during testing were also documented. Correlation coefficients were calculated between each CI, component, damage state and torque. Results showed that test rig and gear design, type of fault, and data acquisition can affect CI performance. Results also showed that FM4, SI1 and SI3 can be used to detect macro pitting on two or more gear or pinion teeth as long as it is detected prior to progressing to other components or transitioning to another failure mode. The sensitivity of RMS to system and operational conditions limits its reliability for systems that are not maintained at steady state. Failure modes that occurred due to scuffing or fretting were challenging to detect with current gear diagnostic tools, since the damage is distributed across all the gear and pinion teeth, smearing the impacting signatures typically used to differentiate between a healthy and damaged tooth contact. This is one of three final reports published on the results of this project. In the second report, damage modes experienced in the field will be mapped to the failure modes created in the test rig. The helicopter CI data will then be re-processed with the same analysis techniques applied to spiral bevel rig test data. In the third report, results from the rig and helicopter data analysis will be correlated. Observations, findings and lessons learned using sub-scale rig failure progression tests to validate helicopter gear condition indicators will be presented.
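For readers unfamiliar with the indicators named above, the sketch below computes two of them, RMS and FM4 (the normalized kurtosis of the difference signal), on a synthetic time-synchronous-averaged gear signal; the sideband indexes SI1/SI3 are omitted, and the signal, tooth count and sideband-removal choices are illustrative assumptions rather than the report's processing chain.

# Minimal sketch: RMS and FM4 condition indicators on a synthetic TSA signal.
import numpy as np

def rms(x):
    return np.sqrt(np.mean(x**2))

def fm4(tsa, mesh_order, n_harmonics=3):
    """Difference signal = TSA with the mesh order, its harmonics and their
    +/-1 order sidebands removed; FM4 is its normalized kurtosis."""
    spec = np.fft.rfft(tsa)
    for h in range(1, n_harmonics + 1):
        for k in (mesh_order * h - 1, mesh_order * h, mesh_order * h + 1):
            if 0 <= k < len(spec):
                spec[k] = 0.0
    d = np.fft.irfft(spec, n=len(tsa))
    d = d - d.mean()
    return len(d) * np.sum(d**4) / np.sum(d**2) ** 2   # normalized kurtosis

# Synthetic TSA: one revolution, 19-tooth pinion, one localized fault impact.
n, teeth = 1024, 19
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
tsa = np.sin(teeth * theta) + 0.2 * np.sin(2 * teeth * theta)
tsa += 0.8 * np.exp(-((theta - 1.0) ** 2) / 0.001)      # damaged-tooth impact
print("RMS =", rms(tsa), " FM4 =", fm4(tsa, mesh_order=teeth))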
NASA Astrophysics Data System (ADS)
Lee, Tae-Hee; Park, Ka-Young; Kim, Ji-Tae; Seo, Yongho; Kim, Ki Buem; Song, Sun-Ju; Park, Byoungnam; Park, Jun-Young
2015-02-01
This study focuses on mechanisms and symptoms of several simulated failure modes, which may have significant influences on the long-term durability and operational stability of intermediate temperature-solid oxide fuel cells (IT-SOFCs), including fuel/oxidation starvation by breakdown of fuel/air supply components and wet and dry cycling atmospheres. Anode-supported IT-SOFCs consisting of a Ba0.5Sr0.5Co0.8Fe0.2O3-δ (BSCF)-Nd0.1Ce0.9O2-δ (NDC) composite cathode with an NDC electrolyte on a Ni-NDC anode substrate are fabricated via dry-pressings followed by the co-firing method. Comprehensive and systematic research based on the failure mode and effect analysis (FMEA) of anode-supported IT-SOFCs is conducted using various electrochemical and physiochemical analysis techniques to extend our understanding of the major mechanisms of performance deterioration under SOFC operating conditions. The fuel-starvation condition in the fuel-pump failure mode causes irreversible mechanical degradation of the electrolyte and cathode interface by the dimensional expansion of the anode support due to the oxidation of Ni metal to NiO. In contrast, the BSCF cathode shows poor stability under wet and dry cycling modes of cathode air due to the strong electroactivity of SrO with H2O. On the other hand, the air-depletion phenomena under air-pump failure mode results in the recovery of cell performance during the long-term operation without the visible microstructural transformation through the reduction of anode overvoltage.
NASA Technical Reports Server (NTRS)
Bartos, Karen F.; Fite, E. Brian; Shalkhauser, Kurt A.; Sharp, G. Richard
1991-01-01
Current research in high-efficiency, high-performance traveling wave tubes (TWT's) has led to the development of novel thermal/mechanical computer models for use with helical slow-wave structures. A three-dimensional, finite element computer model and analytical technique were used to study the structural integrity and thermal operation of a high-efficiency, diamond-rod, K-band TWT designed for use in advanced space communications systems. This analysis focused on the slow-wave circuit in the radiofrequency section of the TWT, where an inherent localized heating problem existed and where failures were observed during an earlier cold compression, or 'coining', fabrication technique that shows great potential for future TWT development efforts. For this analysis, a three-dimensional, finite element model was used along with MARC, a commercially available finite element code, to simulate the fabrication of a diamond-rod TWT. This analysis was conducted by using component and material specifications consistent with actual TWT fabrication and was verified against empirical data. The analysis is nonlinear owing to material plasticity introduced by the forming process and also to geometric nonlinearities presented by the component assembly configuration. The computer model was developed by using the high efficiency, K-band TWT design but is general enough to permit similar analyses to be performed on a wide variety of TWT designs and styles. The results of the TWT operating condition and structural failure mode analysis, as well as a comparison of analytical results to test data, are presented.
NASA Technical Reports Server (NTRS)
Shalkhauser, Kurt A.; Bartos, Karen F.; Fite, E. B.; Sharp, G. R.
1992-01-01
Current research in high-efficiency, high-performance traveling wave tubes (TWT's) has led to the development of novel thermal/mechanical computer models for use with helical slow-wave structures. A three-dimensional, finite element computer model and analytical technique were used to study the structural integrity and thermal operation of a high-efficiency, diamond-rod, K-band TWT designed for use in advanced space communications systems. This analysis focused on the slow-wave circuit in the radiofrequency section of the TWT, where an inherent localized heating problem existed and where failures were observed during an earlier cold compression, or 'coining', fabrication technique that shows great potential for future TWT development efforts. For this analysis, a three-dimensional, finite element model was used along with MARC, a commercially available finite element code, to simulate the fabrication of a diamond-rod TWT. This analysis was conducted by using component and material specifications consistent with actual TWT fabrication and was verified against empirical data. The analysis is nonlinear owing to material plasticity introduced by the forming process and also to geometric nonlinearities presented by the component assembly configuration. The computer model was developed by using the high efficiency, K-band TWT design but is general enough to permit similar analyses to be performed on a wide variety of TWT designs and styles. The results of the TWT operating condition and structural failure mode analysis, as well as a comparison of analytical results to test data, are presented.
NASA Technical Reports Server (NTRS)
Bean, E. E.; Bloomquist, C. E.
1972-01-01
A summary of the KSC program for investigating the reliability aspects of the ground support activities is presented. An analysis of unsatisfactory condition reports (UCRs) and the generation of reliability assessments of components based on the UCRs are discussed, along with the design considerations for attaining reliable real-time hardware/software configurations.
A System for Integrated Reliability and Safety Analyses
NASA Technical Reports Server (NTRS)
Kostiuk, Peter; Shapiro, Gerald; Hanson, Dave; Kolitz, Stephan; Leong, Frank; Rosch, Gene; Coumeri, Marc; Scheidler, Peter, Jr.; Bonesteel, Charles
1999-01-01
We present an integrated reliability and aviation safety analysis tool. The reliability models for selected infrastructure components of the air traffic control system are described. The results of this model are used to evaluate the likelihood of seeing outcomes predicted by simulations with failures injected. We discuss the design of the simulation model, and the user interface to the integrated toolset.
Independent Orbiter Assessment (IOA): Analysis of the instrumentation subsystem
NASA Technical Reports Server (NTRS)
Howard, B. S.
1986-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Instrumentation Subsystem are documented. The Instrumentation Subsystem (SS) consists of transducers, signal conditioning equipment, pulse code modulation (PCM) encoding equipment, tape recorders, frequency division multiplexers, and timing equipment. For this analysis, the SS is broken into two major groupings: Operational Instrumentation (OI) equipment and Modular Auxiliary Data System (MADS) equipment. The OI equipment is required to acquire, condition, scale, digitize, interleave/multiplex, format, and distribute operational Orbiter and payload data and voice for display, recording, telemetry, and checkout. It also must provide accurate timing for time critical functions for crew and payload specialist use. The MADS provides additional instrumentation to measure and record selected pressure, temperature, strain, vibration, and event data for post-flight playback and analysis. MADS data is used to assess vehicle responses to the flight environment and to permit correlation of such data from flight to flight. The IOA analysis utilized available SS hardware drawings and schematics for identifying hardware assemblies and components and their interfaces. Criticality for each item was assigned on the basis of the worst-case effect of the failure modes identified.
An overview of fatigue failures at the Rocky Flats Wind System Test Center
NASA Technical Reports Server (NTRS)
Waldon, C. A.
1981-01-01
Potential small wind energy conversion (SWECS) design problems were identified to improve product quality and reliability. Mass produced components such as gearboxes, generators, bearings, etc., are generally reliable due to their widespread uniform use in other industries. The likelihood of failure increases, though, in the interfacing of these components and in SWECS components designed for a specific system use. Problems relating to the structural integrity of such components are discussed and analyzed with techniques currently used in quality assurance programs in other manufacturing industries.
Progressive Damage and Failure Analysis of Composite Laminates
NASA Astrophysics Data System (ADS)
Joseph, Ashith P. K.
Composite materials are widely used in various industries for making structural parts due to their higher strength to weight ratio, better fatigue life, corrosion resistance and material property tailorability. To fully exploit the capability of composites, it is required to know the load carrying capacity of the parts made of them. Unlike metals, composites are orthotropic in nature and fail in a complex manner under various loading conditions, which makes them hard to analyze. The lack of reliable and efficient failure analysis tools for composites has led industries to rely more on coupon and component level testing to estimate the design space. Due to the complex failure mechanisms, composite materials require a very large number of coupon level tests to fully characterize their behavior. This makes the entire testing process very time consuming and costly. The alternative is to use virtual testing tools which can predict the complex failure mechanisms accurately. This reduces the cost to the associated computational expenses, making significant savings. Some of the most desired features in a virtual testing tool are: (1) Accurate representation of failure mechanisms: failure progression predicted by the virtual tool must be the same as that observed in experiments. A tool has to be assessed based on the mechanisms it can capture. (2) Computational efficiency: the greatest advantages of virtual tools are the savings in time and money, and hence computational efficiency is one of the most needed features. (3) Applicability to a wide range of problems: structural parts are subjected to a variety of loading conditions including static, dynamic and fatigue conditions. A good virtual testing tool should be able to make good predictions for all these different loading conditions. The aim of this PhD thesis is to develop a computational tool which can model the progressive failure of composite laminates under different quasi-static loading conditions. The analysis tool is validated by comparing the simulations against experiments for a selected number of quasi-static loading cases.
Giesen, E B W; Ding, M; Dalstra, M; van Eijden, T M G J
2003-09-01
As several morphological parameters of cancellous bone express more or less the same architectural measure, we applied principal components analysis to group these measures and correlated these to the mechanical properties. Cylindrical specimens (n = 24) were obtained in different orientations from embalmed mandibular condyles; the angle of the first principal direction and the axis of the specimen, expressing the orientation of the trabeculae, ranged from 10 degrees to 87 degrees. Morphological parameters were determined by a method based on Archimedes' principle and by micro-CT scanning, and the mechanical properties were obtained by mechanical testing. The principal components analysis was used to obtain a set of independent components to describe the morphology. This set was entered into linear regression analyses for explaining the variance in mechanical properties. The principal components analysis revealed four components: amount of bone, number of trabeculae, trabecular orientation, and miscellaneous. They accounted for about 90% of the variance in the morphological variables. The component loadings indicated that a higher amount of bone was primarily associated with more plate-like trabeculae, and not with more or thicker trabeculae. The trabecular orientation was most determinative (about 50%) in explaining stiffness, strength, and failure energy. The amount of bone was second most determinative and increased the explained variance to about 72%. These results suggest that trabecular orientation and amount of bone are important in explaining the anisotropic mechanical properties of the cancellous bone of the mandibular condyle.
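A minimal sketch of the analysis pipeline described (synthetic stand-in data, not the condyle specimens): principal components are extracted from standardized morphological measures via SVD, and a mechanical property is then regressed on the leading component scores.

# Minimal sketch: PCA of correlated morphological measures followed by linear
# regression of a mechanical property on the component scores (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n = 24
bone_volume = rng.normal(0.3, 0.05, n)
trabecular_n = 0.5 * bone_volume + rng.normal(0.0, 0.02, n)   # correlated measure
orientation = rng.uniform(10.0, 87.0, n)                      # degrees
X = np.column_stack([bone_volume, trabecular_n, orientation])
stiffness = 50.0 * bone_volume - 0.2 * orientation + rng.normal(0.0, 1.0, n)

# PCA via SVD of the standardized data
Z = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)
scores = Z @ Vt.T                                  # component scores

# Regress stiffness on the first two principal component scores
A = np.column_stack([np.ones(n), scores[:, :2]])
coef, *_ = np.linalg.lstsq(A, stiffness, rcond=None)
pred = A @ coef
r2 = 1.0 - np.sum((stiffness - pred) ** 2) / np.sum((stiffness - stiffness.mean()) ** 2)
print("variance explained by PCs:", np.round(explained, 2), " regression R^2:", round(r2, 2))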
Wodecki, P; Sabbah, D; Kermarrec, G; Semaan, I
2013-10-01
Total hip replacements (THR) with modular femoral components (stem-neck interface) make it possible to adapt to extramedullary femoral parameters (anteversion, offset, and length) theoretically improving muscle function and stability. Nevertheless, adding a new interface has its disadvantages: reduced mechanical resistance, fretting corrosion and material fatigue fracture. We report the case of a femoral stem fracture of the female part of the component where the modular morse taper of the neck is inserted. An extended trochanteric osteotomy was necessary during revision surgery because the femoral stump could not be grasped for extraction, so that a long stem had to be used. In this case, the patient had the usual risk factors for modular neck failure: he was an active overweight male patient with a long varus neck. This report shows that the female part of the stem of a small femoral component may also be at increased failure risk and should be added to the list of risk factors. To our knowledge, this is the first reported case of this type of failure. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Bonte, M. H. A.; de Boer, A.; Liebregts, R.
2007-04-01
This paper provides a new formula to take into account phase differences in the determination of an equivalent von Mises stress power spectral density (PSD) from multiple random inputs. The obtained von Mises PSD can subsequently be used for fatigue analysis. The formula was derived for use in the commercial vehicle business and was implemented in combination with Finite Element software to predict and analyse fatigue failure in the frequency domain.
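For context (this is the standard frequency-domain equivalent von Mises formulation, not the new phase-difference formula contributed by the paper), with S_sigma(f) the cross-spectral density matrix of the six stress components and Q the constant von Mises quadratic-form matrix, the equivalent von Mises stress PSD can be written in LaTeX form as:

\[
  S_{vm}(f) = \operatorname{trace}\!\left[\, Q\, S_{\sigma}(f) \,\right],
  \qquad
  Q =
  \begin{pmatrix}
     1 & -\tfrac12 & -\tfrac12 & 0 & 0 & 0\\
    -\tfrac12 & 1 & -\tfrac12 & 0 & 0 & 0\\
    -\tfrac12 & -\tfrac12 & 1 & 0 & 0 & 0\\
     0 & 0 & 0 & 3 & 0 & 0\\
     0 & 0 & 0 & 0 & 3 & 0\\
     0 & 0 & 0 & 0 & 0 & 3
  \end{pmatrix},
\]

with the stress vector ordered as (sigma_x, sigma_y, sigma_z, tau_xy, tau_yz, tau_zx); phase relations between the random inputs enter through the off-diagonal (cross-spectral) terms of S_sigma(f).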
Availability analysis of an HTGR fuel recycle facility. Summary report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharmahd, J.N.
1979-11-01
An availability analysis of reprocessing systems in a high-temperature gas-cooled reactor (HTGR) fuel recycle facility was completed. This report summarizes work done to date to define and determine reprocessing system availability for a previously planned HTGR recycle reference facility (HRRF). Schedules and procedures for further work during reprocessing development and for HRRF design and construction are proposed in this report. Probable failure rates, transfer times, and repair times are estimated for major system components. Unscheduled down times are summarized.
A Principal Component Analysis of Galaxy Properties from a Large, Gas-Selected Sample
Chang, Yu-Yen; Chao, Rikon; Wang, Wei-Hao; ...
2012-01-01
Disney et al. (2008) have found a striking correlation among global parameters of H I-selected galaxies and concluded that this is in conflict with the CDM model. Considering the importance of the issue, we reinvestigate the problem using principal component analysis on a fivefold larger sample and additional near-infrared data. We use databases from the Arecibo Legacy Fast Arecibo L-band Feed Array Survey for the gas properties, the Sloan Digital Sky Survey for the optical properties, and the Two Micron All Sky Survey for the near-infrared properties. We confirm that the parameters are indeed correlated, where a single physical parameter can explain 83% of the variations. When color (g - i) is included, the first component still dominates but it develops a second principal component. In addition, the near-infrared color (i - J) shows an obvious second principal component that might provide evidence of the complex old star formation. Based on our data, we suggest that it is premature to pronounce the failure of the CDM model, and this motivates more theoretical work.
Fault Tree Analysis Application for Safety and Reliability
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
Many commercial software tools exist for fault tree analysis (FTA), an accepted method for mitigating risk in systems. The method embedded in the tools identifies a root cause in system components, but when software is identified as a root cause, it does not build trees into the software component. No commercial software tools have been built specifically for development and analysis of software fault trees. Research indicates that the methods of FTA could be applied to software, but the method is not practical without automated tool support. With appropriate automated tool support, software fault tree analysis (SFTA) may be a practical technique for identifying the underlying cause of software faults that may lead to critical system failures. We strive to demonstrate that existing commercial tools for FTA can be adapted for use with SFTA, and that, applied to a safety-critical system, SFTA can be used to identify serious potential problems long before integration and system testing.
Ceramic component reliability with the restructured NASA/CARES computer program
NASA Technical Reports Server (NTRS)
Powers, Lynn M.; Starlinger, Alois; Gyekenyesi, John P.
1992-01-01
The Ceramics Analysis and Reliability Evaluation of Structures (CARES) integrated design program for statistical fast fracture reliability of monolithic ceramic components is enhanced to include the use of a neutral data base, two-dimensional modeling, and variable problem size. The data base allows for the efficient transfer of element stresses, temperatures, and volumes/areas from the finite element output to the reliability analysis program. Elements are divided to insure a direct correspondence between the subelements and the Gaussian integration points. Two-dimensional modeling is accomplished by assessing the volume flaw reliability with shell elements. To demonstrate the improvements in the algorithm, example problems are selected from a round-robin conducted by WELFEP (WEakest Link failure probability prediction by Finite Element Postprocessors).
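A minimal sketch of the kind of volume-flaw, fast-fracture calculation that CARES automates (the element data and Weibull parameters are illustrative placeholders, and the full program uses multiaxial models and Gaussian subelement integration rather than this single-stress shortcut): each element contributes V_e * (sigma_e / sigma_0)^m to the risk of rupture, and the component failure probability follows from the weakest-link assumption.

# Minimal sketch: two-parameter Weibull volume-flaw failure probability.
import math

m, sigma_0 = 10.0, 400.0           # Weibull modulus and scale (MPa, volume units absorbed)
elements = [                        # (first principal stress MPa, volume mm^3)
    (250.0, 2.0), (310.0, 1.5), (180.0, 3.0), (290.0, 1.0),
]

risk = sum(v * (max(s, 0.0) / sigma_0) ** m for s, v in elements)  # compressive stresses excluded
p_fail = 1.0 - math.exp(-risk)
print(f"component failure probability = {p_fail:.3e}")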
Design and Optimization of Composite Gyroscope Momentum Wheel Rings
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2007-01-01
Stress analysis and preliminary design/optimization procedures are presented for gyroscope momentum wheel rings composed of metallic, metal matrix composite, and polymer matrix composite materials. The design of these components involves simultaneously minimizing both true part volume and mass, while maximizing angular momentum. The stress analysis results are combined with an anisotropic failure criterion to formulate a new sizing procedure that provides considerable insight into the design of gyroscope momentum wheel ring components. Results compare the performance of two optimized metallic designs, an optimized SiC/Ti composite design, and an optimized graphite/epoxy composite design. The graphite/epoxy design appears to be far superior to the competitors considered unless a much greater premium is placed on volume efficiency compared to mass efficiency.
Validation of PV-RPM Code in the System Advisor Model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klise, Geoffrey Taylor; Lavrova, Olga; Freeman, Janine
2017-04-01
This paper describes efforts made by Sandia National Laboratories (SNL) and the National Renewable Energy Laboratory (NREL) to validate the SNL developed PV Reliability Performance Model (PV-RPM) algorithm as implemented in the NREL System Advisor Model (SAM). The PV-RPM model is a library of functions that estimates component failure and repair in a photovoltaic system over a desired simulation period. The failure and repair distributions in this paper are probabilistic representations of component failure and repair based on data collected by SNL for a PV power plant operating in Arizona. The validation effort focuses on whether the failure and repair distributions used in the SAM implementation result in estimated failures that match the expected failures developed in the proof-of-concept implementation. Results indicate that the SAM implementation of PV-RPM provides the same results as the proof-of-concept implementation, indicating the algorithms were reproduced successfully.
Modelling the failure behaviour of wind turbines
NASA Astrophysics Data System (ADS)
Faulstich, S.; Berkhout, V.; Mayer, J.; Siebenlist, D.
2016-09-01
Modelling the failure behaviour of wind turbines is an essential part of offshore wind farm simulation software, as it leads to optimized decision making when specifying the necessary resources for the operation and maintenance of wind farms. In order to optimize O&M strategies, a thorough understanding of a wind turbine's failure behaviour is vital and is therefore being developed at Fraunhofer IWES. Within this article, the failure models of existing offshore O&M tools are first presented to show the state of the art, and strengths and weaknesses of the respective models are briefly discussed. Then a conceptual framework for modelling different failure mechanisms of wind turbines is presented. This framework takes into account the different wind turbine subsystems and structures as well as the failure modes of a component by applying several influencing factors representing wear and break failure mechanisms. A failure function is set up for the rotor blade as an exemplary component, and simulation results have been compared to a constant failure rate and to empirical wind turbine fleet data as a reference. The comparison and the breakdown of specific failure categories demonstrate the overall plausibility of the model.
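A minimal sketch of a failure function of the kind described (the influencing factors, parameter values and rotor-blade numbers are invented, not Fraunhofer IWES model values): a constant "break" hazard scaled by load-related factors plus an age-increasing Weibull "wear" hazard, integrated to an expected failure count for comparison with a constant-rate model.

# Minimal sketch: component hazard with influencing factors vs a constant rate.
import math

def hazard(t_years, lam_break=0.05, turbulence=1.2, icing=1.1,
           beta=3.0, eta=25.0):
    """Failures per turbine-year at age t_years."""
    break_part = lam_break * turbulence * icing               # external-event part
    wear_part = (beta / eta) * (t_years / eta) ** (beta - 1)  # ageing part
    return break_part + wear_part

def expected_failures(horizon=20, dt=0.1):
    """Expected number of failures over the horizon (integral of the hazard)."""
    steps = int(horizon / dt)
    return sum(hazard((i + 0.5) * dt) * dt for i in range(steps))

print(expected_failures())          # influencing-factor model
print(0.05 * 1.2 * 1.1 * 20)        # constant-rate (break-only) reference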
NASA Technical Reports Server (NTRS)
Davis, Robert N.; Polites, Michael E.; Trevino, Luis C.
2004-01-01
This paper details a novel scheme for autonomous component health management (ACHM) with failed actuator detection and failed sensor detection, identification, and avoidance. This new scheme has features that far exceed the performance of systems with triple-redundant sensing and voting, yet requires fewer sensors and could be applied to any system with redundant sensing. Relevant background to the ACHM scheme is provided, and the simulation results for the application of that scheme to a single-axis spacecraft attitude control system with a 3rd order plant and dual-redundant measurement of system states are presented. ACHM fulfills key functions needed by an integrated vehicle health monitoring (IVHM) system. It is: autonomous; adaptive; works in realtime; provides optimal state estimation; identifies failed components; avoids failed components; reconfigures for multiple failures; reconfigures for intermittent failures; works for hard-over, soft, and zero-output failures; and works for both open- and closed-loop systems. The ACHM scheme combines a prefilter that generates preliminary state estimates, detects and identifies failed sensors and actuators, and avoids the use of failed sensors in state estimation with a fixed-gain Kalman filter that generates optimal state estimates and provides model-based state estimates that comprise an integral part of the failure detection logic. The results show that ACHM successfully isolates multiple persistent and intermittent hard-over, soft, and zero-output failures. It is now ready to be tested on a computer model of an actual system.
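The core mechanism, a fixed-gain filter whose model-based prediction screens each redundant sensor's residual before the measurement is used, can be sketched generically as below. This is not the ACHM algorithm itself; the one-state plant, gain, noise level, and threshold are all assumed for illustration.

```python
import numpy as np

A = np.array([[0.98]])                 # hypothetical one-state plant
H = np.array([[1.0], [1.0]])           # dual-redundant measurement of the same state
K = np.array([0.3, 0.3])               # fixed (steady-state) Kalman gains, assumed
SIGMA, THRESH = 0.05, 3.0              # sensor noise and residual test in sigma units

def step(x_est, z):
    """Predict, flag sensors whose residuals are implausible, update with the rest."""
    pred = A @ x_est
    residuals = z - (H @ pred).ravel()
    healthy = np.abs(residuals) < THRESH * SIGMA        # failed sensors are avoided
    correction = float(np.sum(K * healthy * residuals))
    return pred + correction, healthy

x_true, x_est = 1.0, np.array([[1.0]])
for k in range(60):
    x_true *= 0.98
    z = np.array([x_true, x_true]) + np.random.normal(0.0, SIGMA, 2)
    if k > 30:
        z[1] = 5.0                                      # hard-over failure on sensor 2
    x_est, healthy = step(x_est, z)
print("final estimate:", float(x_est[0, 0]), "healthy sensors:", healthy)
```

After the injected hard-over failure, the second sensor's residual exceeds the threshold and it is excluded from the update, while estimation continues on the remaining healthy sensor.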
Auxiliary feedwater system risk-based inspection guide for the Salem Nuclear Power Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pugh, R.; Gore, B.F.; Vo, T.V.
In a study by the US Nuclear Regulatory Commission (NRC), Pacific Northwest Laboratory has developed and applied a methodology for deriving plant-specific risk-based inspection guidance for the auxiliary feedwater (AFW) system at pressurized water reactors that have not undergone probabilistic risk assessment (PRA). This methodology uses existing PRA results and plant operating experience information. Existing PRA-based inspection guidance information recently developed for the NRC for various plants was used to identify generic component failure modes. This information was then combined with plant-specific and industry-wide component information and failure data to identify failure modes and failure mechanisms for the AFW system at the selected plants. Salem was selected as the fifth plant for study. The product of this effort is a prioritized listing of AFW failures which have occurred at the plant and at other PWRs. This listing is intended for use by NRC inspectors in the preparation of inspection plans addressing AFW risk-important components at the Salem plant. 23 refs., 1 fig., 1 tab.
Game-Theoretic strategies for systems of components using product-form utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Ma, Cheng-Yu; Hausken, K.
Many critical infrastructures are composed of multiple systems of components which are correlated so that disruptions to one may propagate to others. We consider such infrastructures with correlations characterized in two ways: (i) an aggregate failure correlation function specifies the conditional failure probability of the infrastructure given the failure of an individual system, and (ii) a pairwise correlation function between two systems specifies the failure probability of one system given the failure of the other. We formulate a game for ensuring the resilience of the infrastructure, wherein the utility functions of the provider and attacker are products of an infrastructure survival probability term and a cost term, both expressed in terms of the numbers of system components attacked and reinforced. The survival probabilities of individual systems satisfy first-order differential conditions that lead to simple Nash Equilibrium conditions. We then derive sensitivity functions that highlight the dependence of infrastructure resilience on the cost terms, correlation functions, and individual system survival probabilities. We apply these results to simplified models of distributed cloud computing and energy grid infrastructures.
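The structure of such a game can be illustrated with a toy discrete version in which each player's utility is the product of a survival-probability term and a cost term, and a pure-strategy Nash equilibrium is found by brute force. The functional forms and cost parameters below are invented for illustration and are not the paper's model.

```python
import itertools

def survival(x, y):
    """Hypothetical system survival probability with x components reinforced, y attacked."""
    return 1.0 / (1.0 + max(y - 0.5 * x, 0.0))

def provider_utility(x, y, unit_cost=0.05):
    return survival(x, y) * (1.0 - unit_cost * x)        # product of survival and cost terms

def attacker_utility(x, y, unit_cost=0.08):
    return (1.0 - survival(x, y)) * (1.0 - unit_cost * y)

choices = range(0, 6)

def is_nash(x, y):
    best_x = max(choices, key=lambda xx: provider_utility(xx, y))
    best_y = max(choices, key=lambda yy: attacker_utility(x, yy))
    return (provider_utility(x, y) >= provider_utility(best_x, y) - 1e-12 and
            attacker_utility(x, y) >= attacker_utility(x, best_y) - 1e-12)

equilibria = [(x, y) for x, y in itertools.product(choices, choices) if is_nash(x, y)]
print(equilibria)
```

With these toy parameters the search returns a single equilibrium in which both players commit their maximum effort; changing the cost coefficients moves the equilibrium, which is the kind of dependence the paper's sensitivity functions quantify analytically.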
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stroh, K.R.
1980-01-01
The Composite HTGR Analysis Program (CHAP) consists of a model-independent systems analysis mainframe named LASAN and model-dependent linked code modules, each representing a component, subsystem, or phenomenon of an HTGR plant. The Fort St. Vrain (FSV) version (CHAP-2) includes 21 coded modules that model the neutron kinetics and thermal response of the core; the thermal-hydraulics of the reactor primary coolant system, secondary steam supply system, and balance-of-plant; the actions of the control system and plant protection system; the response of the reactor building; and the relative hazard resulting from fuel particle failure. FSV steady-state and transient plant data are being used to partially verify the component modeling and dynamic simulation techniques used to predict plant response to postulated accident sequences.
NASA Technical Reports Server (NTRS)
Reveley, Mary S.; Briggs, Jeffrey L.; Thomas, Megan A.; Evans, Joni K.; Jones, Sharon M.
2011-01-01
The Integrated Vehicle Health Management (IVHM) Project is one of the four projects within the National Aeronautics and Space Administration's (NASA) Aviation Safety Program (AvSafe). The IVHM Project conducts research to develop validated tools and technologies for automated detection, diagnosis, and prognosis that enable mitigation of adverse events during flight. Adverse events include those that arise from system, subsystem, or component failure, faults, and malfunctions due to damage, degradation, or environmental hazards that occur during flight. Determining the causal factors and adverse events related to IVHM technologies will help in the formulation of research requirements and establish a list of example adverse conditions against which IVHM technologies can be evaluated. This paper documents the results of an examination of the most recent statistical/prognostic accident and incident data that is available from the Aviation Safety Information Analysis and Sharing (ASIAS) System to determine the causal factors of system/component failures and/or malfunctions in U.S. commercial aviation accidents and incidents.
Rolling Bearing Fault Diagnosis Based on an Improved HTT Transform
Tang, Guiji; Tian, Tian; Zhou, Chong
2018-01-01
When rolling bearing failure occurs, vibration signals generally contain different signal components, such as impulsive fault feature signals, background noise and harmonic interference signals. One of the most challenging aspects of rolling bearing fault diagnosis is how to inhibit noise and harmonic interference signals, while enhancing impulsive fault feature signals. This paper presents a novel bearing fault diagnosis method, namely an improved Hilbert time–time (IHTT) transform, by combining a Hilbert time–time (HTT) transform with principal component analysis (PCA). Firstly, the HTT transform was performed on vibration signals to derive a HTT transform matrix. Then, PCA was employed to de-noise the HTT transform matrix in order to improve the robustness of the HTT transform. Finally, the diagonal time series of the de-noised HTT transform matrix was extracted as the enhanced impulsive fault feature signal and the contained fault characteristic information was identified through further analyses of amplitude and envelope spectrums. Both simulated and experimental analyses validated the superiority of the presented method for detecting bearing failures. PMID:29662013
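The matrix-level step, PCA de-noising of the HTT transform matrix followed by extraction of its diagonal, can be sketched as below. The HTT transform itself is not implemented; a random matrix stands in for it, and the retained number of components is an arbitrary choice.

```python
import numpy as np

def pca_denoise(matrix, n_components):
    """Keep only the leading principal components (rank truncation via SVD of the centered matrix)."""
    mean = matrix.mean(axis=0, keepdims=True)
    u, s, vt = np.linalg.svd(matrix - mean, full_matrices=False)
    s[n_components:] = 0.0
    return u @ np.diag(s) @ vt + mean

rng = np.random.default_rng(0)
htt_like = rng.normal(size=(256, 256))        # placeholder for a HTT transform matrix

denoised = pca_denoise(htt_like, n_components=5)
feature_signal = np.diag(denoised)            # diagonal time series = enhanced fault feature
print(feature_signal.shape)
```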
Probabilistic risk analysis of building contamination.
Bolster, D T; Tartakovsky, D M
2008-10-01
We present a general framework for probabilistic risk assessment (PRA) of building contamination. PRA provides a powerful tool for the rigorous quantification of risk in contamination of building spaces. A typical PRA starts by identifying relevant components of a system (e.g. ventilation system components, potential sources of contaminants, remediation methods) and proceeds by using available information and statistical inference to estimate the probabilities of their failure. These probabilities are then combined by means of fault-tree analyses to yield probabilistic estimates of the risk of system failure (e.g. building contamination). A sensitivity study of PRAs can identify features and potential problems that need to be addressed with the most urgency. Often PRAs are amenable to approximations, which can significantly simplify the approach. All these features of PRA are presented in this paper via a simple illustrative example, which can be built upon in further studies. The tool presented here can be used to design and maintain adequate ventilation systems to minimize exposure of occupants to contaminants.
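The fault-tree combination step mentioned above amounts to simple AND/OR probability algebra once basic-event probabilities have been estimated. The sketch below uses invented probabilities for a hypothetical ventilation example and assumes independent basic events.

```python
from functools import reduce

def gate_and(probs):
    """All inputs must fail (independent basic events assumed)."""
    return reduce(lambda a, b: a * b, probs, 1.0)

def gate_or(probs):
    """At least one input fails (independent basic events assumed)."""
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), probs, 1.0)

# Hypothetical basic-event probabilities for a building ventilation example
p_fan, p_filter, p_backup_fan, p_source_present = 0.02, 0.05, 0.04, 0.10

p_no_ventilation = gate_and([gate_or([p_fan, p_filter]), p_backup_fan])
p_contamination = gate_and([p_no_ventilation, p_source_present])
print(f"P(building contamination) ~ {p_contamination:.2e}")
```

A sensitivity study of the kind described then amounts to perturbing each basic-event probability and observing the change in the top-event estimate.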
Develop advanced nonlinear signal analysis topographical mapping system
NASA Technical Reports Server (NTRS)
1994-01-01
The Space Shuttle Main Engine (SSME) has been undergoing extensive flight certification and developmental testing, which involves some 250 health monitoring measurements. Under the severe temperature, pressure, and dynamic environments sustained during operation, numerous major component failures have occurred, resulting in extensive engine hardware damage and scheduling losses. To enhance SSME safety and reliability, detailed analysis and evaluation of the measurement signals are mandatory to assess their dynamic characteristics and operational condition. Efficient and reliable signal detection techniques will reduce catastrophic system failure risks and expedite the evaluation of both flight and ground test data, and thereby reduce launch turn-around time. The basic objectives of this contract are threefold: (1) develop and validate a hierarchy of innovative signal analysis techniques for nonlinear and nonstationary time-frequency analysis. Performance evaluation will be carried out through detailed analysis of extensive SSME static firing and flight data. These techniques will be incorporated into a fully automated system; (2) develop an advanced nonlinear signal analysis topographical mapping system (ATMS) to generate a Compressed SSME TOPO Data Base (CSTDB). This ATMS system will convert a tremendous amount of complex vibration signals from the entire SSME test history into a bank of succinct image-like patterns while retaining all respective phase information. A high compression ratio can be achieved to allow a minimal storage requirement, while providing fast signature retrieval, pattern comparison, and identification capabilities; and (3) integrate the nonlinear correlation techniques into the CSTDB data base with compatible TOPO input data format. Such an integrated ATMS system will provide the large test archives necessary for quick signature comparison. This study will provide timely assessment of SSME component operational status, identify probable causes of malfunction, and indicate feasible engineering solutions. The final result of this program will be an ATMS system: a nonlinear and nonstationary spectral analysis software package integrated with the Compressed SSME TOPO Data Base (CSTDB) on the same platform. This system will allow NASA engineers to retrieve any unique defect signatures and trends associated with different failure modes and anomalous phenomena over the entire SSME test history across turbo pump families.
Develop advanced nonlinear signal analysis topographical mapping system
NASA Technical Reports Server (NTRS)
Jong, Jen-Yi
1993-01-01
The SSME has been undergoing extensive flight certification and developmental testing, which involves some 250 health monitoring measurements. Under the severe temperature, pressure, and dynamic environments sustained during operation, numerous major component failures have occurred, resulting in extensive engine hardware damage and scheduling losses. To enhance SSME safety and reliability, detailed analysis and evaluation of the measurement signals are mandatory to assess their dynamic characteristics and operational condition. Efficient and reliable signal detection techniques will reduce catastrophic system failure risks and expedite the evaluation of both flight and ground test data, and thereby reduce launch turn-around time. The basic objectives of this contract are threefold: (1) Develop and validate a hierarchy of innovative signal analysis techniques for nonlinear and nonstationary time-frequency analysis. Performance evaluation will be carried out through detailed analysis of extensive SSME static firing and flight data. These techniques will be incorporated into a fully automated system. (2) Develop an advanced nonlinear signal analysis topographical mapping system (ATMS) to generate a Compressed SSME TOPO Data Base (CSTDB). This ATMS system will convert tremendous amounts of complex vibration signals from the entire SSME test history into a bank of succinct image-like patterns while retaining all respective phase information. A high compression ratio can be achieved to allow a minimal storage requirement, while providing fast signature retrieval, pattern comparison, and identification capabilities. (3) Integrate the nonlinear correlation techniques into the CSTDB data base with compatible TOPO input data format. Such an integrated ATMS system will provide the large test archives necessary for a quick signature comparison. This study will provide timely assessment of SSME component operational status, identify probable causes of malfunction, and indicate feasible engineering solutions. The final result of this program will be an ATMS system: a nonlinear and nonstationary spectral analysis software package integrated with the Compressed SSME TOPO Data Base (CSTDB) on the same platform. This system will allow NASA engineers to retrieve any unique defect signatures and trends associated with different failure modes and anomalous phenomena over the entire SSME test history across turbo pump families.
Fatigue resistant carbon coatings for rolling/sliding contacts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Harpal; Ramirez, Giovanni; Eryilmaz, Osman
2016-06-01
The growing demands for renewable energy production have recently resulted in a significant increase in wind plant installation. Field data from these plants show that wind turbines suffer from costly repair, maintenance and high failure rates. Often, the reliability issues are linked with tribological components used in wind turbine drivetrains. The primary failure modes in bearings and gears are associated with micropitting, wear, brinelling, scuffing, smearing and macropitting, all of which occur at or near the surface. Accordingly, a variety of surface engineering approaches are currently being considered to alter the near-surface properties of such bearings and gears to prevent these tribological failures. In the present work, we have evaluated the tribological performance of a compliant, highly hydrogenated diamond-like carbon coating developed at Argonne National Laboratory, under mixed rolling/sliding contact conditions for wind turbine drivetrain components. The coating was deposited on AISI 52100 steel specimens using a magnetron sputter deposition system. The experiments were performed on a PCS Micro-Pitting-Rig (MPR) with four material pairs at 1.79 GPa contact stress, 40% slide-to-roll ratio and in polyalphaolefin (PAO4) basestock oil (to ensure extreme boundary conditions). The post-test analysis was performed using optical microscopy, surface profilometry, and Raman spectroscopy. The results show a potential for these coatings in sliding/rolling contact applications, as no failures were observed with coated specimens even after 100 million cycles, compared to the uncoated pair, which failed after 32 million cycles under the given test conditions.
Intelligent on-line fault tolerant control for unanticipated catastrophic failures.
Yen, Gary G; Ho, Liang-Wei
2004-10-01
As dynamic systems become increasingly complex, experience rapidly changing environments, and encounter a greater variety of unexpected component failures, solving the control problems of such systems is a grand challenge for control engineers. Traditional control design techniques are not adequate to cope with these systems, which may suffer from unanticipated dynamic failures. In this research work, we investigate the on-line fault tolerant control problem and propose an intelligent on-line control strategy to handle the desired trajectories tracking problem for systems suffering from various unanticipated catastrophic faults. Through theoretical analysis, the sufficient condition of system stability has been derived and two different on-line control laws have been developed. The approach of the proposed intelligent control strategy is to continuously monitor the system performance and identify what the system's current state is by using a fault detection method based upon our best knowledge of the nominal system and nominal controller. Once a fault is detected, the proposed intelligent controller will adjust its control signal to compensate for the unknown system failure dynamics by using an artificial neural network as an on-line estimator to approximate the unexpected and unknown failure dynamics. The first control law is derived directly from the Lyapunov stability theory, while the second control law is derived based upon the discrete-time sliding mode control technique. Both control laws have been implemented in a variety of failure scenarios to validate the proposed intelligent control scheme. The simulation results, including a three-tank benchmark problem, comply with theoretical analysis and demonstrate a significant improvement in trajectory following performance based upon the proposed intelligent control strategy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brett Emery Trabun; Gamage, Thoshitha Thanushka; Bakken, David Edward
This disclosure describes, in part, a system management component and failure detection component for use in a power grid data network to identify anomalies within the network and systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, desired latency, minimum acceptable latency and a priority for each subscription. The failure detection component may identify an anomaly within the network and a source of the anomaly. Based on the identified anomaly, data rates and/or data paths may be adjusted in real-time to ensure that the power grid data network does not become overloaded and/or fail.
Walston, Steve; Salloum, Joseph; Grieco, Carmine; Wuthrick, Evan; Diaz, Dayssy A; Barney, Christian; Manilchuk, Andrei; Schmidt, Carl; Dillhoff, Mary; Pawlik, Timothy M; Williams, Terence M
2018-05-04
The role of radiation therapy (RT) in resected pancreatic cancer (PC) remains incompletely defined. We sought to determine clinical variables which predict for local-regional recurrence (LRR) to help select patients for adjuvant RT. We identified 73 patients with PC who underwent resection and adjuvant gemcitabine-based chemotherapy alone. We performed detailed radiologic analysis of first patterns of failure. LRR was defined as recurrence of PC within standard postoperative radiation volumes. Univariate analyses (UVA) were conducted using the Kaplan-Meier method and multivariate analyses (MVA) utilized the Cox proportional hazard ratio model. Factors significant on UVA were used for MVA. At median follow-up of 20 months, rates of local-regional recurrence only (LRRO) were 24.7%, LRR as a component of any failure 68.5%, metastatic recurrence (MR) as a component of any failure 65.8%, and overall disease recurrence (OR) 90.5%. On UVA, elevated postoperative CA 19-9 (>90 U/mL), pathologic lymph node positive (pLN+) disease, and higher tumor grade were associated with increased LRR, MR, and OR. On MVA, elevated postoperative CA 19-9 and pLN+ were associated with increased MR and OR. In addition, positive resection margin was associated with increased LRRO on both UVA and MVA. About 25% of patients with PC treated without adjuvant RT develop LRRO as initial failure. The only independent predictor of LRRO was positive margin, while elevated postoperative CA 19-9 and pLN+ were associated with predicting MR and overall survival. These data may help determine which patients benefit from intensification of local therapy with radiation.
Diverse Redundant Systems for Reliable Space Life Support
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2015-01-01
Reliable life support systems are required for deep space missions. The probability of a fatal life support failure should be less than one in a thousand in a multi-year mission. It is far too expensive to develop a single system with such high reliability. Using three redundant units would require only that each have a failure probability of one in ten over the mission. Since the system development cost is inverse to the failure probability, this would cut cost by a factor of one hundred. Using replaceable subsystems instead of full systems would further cut cost. Using full sets of replaceable components improves reliability more than using complete systems as spares, since a set of components could repair many different failures instead of just one. Replaceable components would require more tools, space, and planning than full systems or replaceable subsystems. However, identical system redundancy cannot be relied on in practice. Common cause failures can disable all the identical redundant systems. Typical levels of common cause failures will defeat redundancy greater than two. Diverse redundant systems are required for reliable space life support. Three, four, or five diverse redundant systems could be needed for sufficient reliability. One system with lower level repair could be substituted for two diverse systems to save cost.
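The abstract's argument about common-cause failure defeating identical redundancy can be made concrete with a simple beta-factor style calculation. The per-unit failure probability follows the abstract (one in ten over the mission), but the 5% common-cause fraction and the formula itself are illustrative assumptions, not a NASA model.

```python
def p_all_fail_identical(p_unit, n, beta):
    """n identical units; a fraction beta of each unit's failure probability is common cause."""
    independent, common = (1.0 - beta) * p_unit, beta * p_unit
    return common + (1.0 - common) * independent ** n

def p_all_fail_diverse(p_unit, n):
    """Diverse units assumed to share no common-cause mechanism."""
    return p_unit ** n

p = 0.1   # per-unit failure probability over the mission (from the abstract)
for n in (2, 3, 4):
    print(n, f"identical (beta=5%): {p_all_fail_identical(p, n, 0.05):.1e}",
          f"diverse: {p_all_fail_diverse(p, n):.1e}")
```

With these assumed numbers, returns diminish rapidly beyond two or three identical units because the common-cause term dominates, while each added diverse unit still improves the result by roughly a factor of ten, mirroring the abstract's conclusion.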
Biau, D J; Meziane, M; Bhumbra, R S; Dumaine, V; Babinet, A; Anract, P
2011-09-01
The purpose of this study was to define immediate post-operative 'quality' in total hip replacements and to study prospectively the occurrence of failure based on these definitions of quality. The evaluation and assessment of failure were based on ten radiological and clinical criteria. The cumulative summation (CUSUM) test was used to study 200 procedures over a one-year period. Technical criteria defined failure in 17 cases (8.5%), those related to the femoral component in nine (4.5%), the acetabular component in 32 (16%) and those relating to discharge from hospital in five (2.5%). Overall, the procedure was considered to have failed in 57 of the 200 total hip replacements (28.5%). The use of a new design of acetabular component was associated with more failures. For the CUSUM test, the level of adequate performance was set at a rate of failure of 20% and the level of inadequate performance set at a failure rate of 40%; no alarm was raised by the test, indicating that there was no evidence of inadequate performance. The use of a continuous monitoring statistical method is useful to ensure that the quality of total hip replacement is maintained, especially as newer implants are introduced.
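The monitoring scheme can be sketched as a Bernoulli CUSUM on the sequence of per-procedure failure outcomes, with the acceptable and inadequate rates from the abstract (20% and 40%). The decision limit and the simulated outcome sequence below are placeholders, not the study's values or data.

```python
import math
import random

def bernoulli_cusum(outcomes, p0=0.20, p1=0.40, decision_limit=4.5):
    """Log-likelihood-ratio CUSUM for a 0/1 failure sequence; alarms when the sum crosses the limit."""
    w_fail = math.log(p1 / p0)                      # positive weight added on a failure
    w_ok = math.log((1.0 - p1) / (1.0 - p0))        # negative weight added on a success
    s, alarms = 0.0, []
    for i, failed in enumerate(outcomes, start=1):
        s = max(0.0, s + (w_fail if failed else w_ok))
        if s > decision_limit:
            alarms.append(i)                        # procedure index at which an alarm is raised
            s = 0.0                                 # restart monitoring after the alarm
    return alarms

random.seed(1)
simulated = [random.random() < 0.25 for _ in range(200)]   # hypothetical 25% failure rate
print(bernoulli_cusum(simulated))
```

No alarm over the whole sequence corresponds to the study's finding of no evidence of inadequate performance.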
Investigation of improving MEMS-type VOA reliability
NASA Astrophysics Data System (ADS)
Hong, Seok K.; Lee, Yeong G.; Park, Moo Y.
2003-12-01
MEMS technologies have been applied to many areas, such as optical communications, gyroscopes, bio-medical components and so on. In the optical communication field, MEMS technologies are essential, especially in multi-dimensional optical switches and Variable Optical Attenuators (VOAs). This paper describes the process for the development of MEMS-type VOAs with good optical performance and improved reliability. Generally, MEMS VOAs have been fabricated by a silicon micro-machining process, precise fibre alignment and a sophisticated packaging process. Because the device is composed of many structures with various materials, it is difficult to make such devices reliable. We have developed MEMS-type VOAs with many failure mode considerations (FMEA: Failure Mode and Effects Analysis) in the initial design step, predicted critical failure factors and revised the design, and confirmed the reliability by preliminary tests. The predicted failure factors were moisture, the bonding strength of the wire bonded between the MEMS chip and the TO-CAN, and instability of the supplied signals. Statistical quality control tools (ANOVA, t-test and so on) were used to control these potential failure factors and produce optimum manufacturing conditions. To sum up, we have successfully developed reliable MEMS-type VOAs with good optical performance by controlling potential failure factors and using statistical quality control tools. As a result, the developed VOAs passed international reliability standards (Telcordia GR-1221-CORE).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua
2014-11-01
Passive systems, structures and components (SSCs) degrade over their operating life, and this degradation may reduce the safety margins of a nuclear power plant. In traditional probabilistic risk assessment (PRA) using the event-tree/fault-tree methodology, passive SSC failure rates are generally based on generic plant failure data, so the true state of a specific plant is not reflected realistically. To address aging effects of passive SSCs, the traditional PRA methodology [1] does consider physics-based models that account for the operating conditions in the plant; however, [1] does not include effects of surveillance/inspection. This paper presents an overall methodology for the incorporation of aging modeling of passive components into the RAVEN/RELAP-7 environment, which provides a framework for performing dynamic PRA. Dynamic PRA allows consideration of both epistemic and aleatory uncertainties (including those associated with maintenance activities) in a consistent phenomenological and probabilistic framework and is often needed when there is complex process/hardware/software/firmware/human interaction [2]. Dynamic PRA has gained attention recently due to difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models and also in the modeling of digital instrumentation and control systems. RAVEN (Reactor Analysis and Virtual Control Environment) [3] is a software package under development at the Idaho National Laboratory (INL) as an online control logic driver and post-processing tool. It is coupled to the plant transient code RELAP-7 (Reactor Excursion and Leak Analysis Program), also currently under development at INL [3], as well as RELAP5 [4]. The overall methodology aims to: • Address multiple aging mechanisms involving a large number of components in a computationally feasible manner where sequencing of events is conditioned on the physical conditions predicted in a simulation environment such as RELAP-7. • Identify the risk-significant passive components, their failure modes and anticipated rates of degradation. • Incorporate surveillance and maintenance activities and their effects into the plant state and into component aging progress. • Assess aging effects in a dynamic simulation environment. 1. C. L. SMITH, V. N. SHAH, T. KAO, G. APOSTOLAKIS, "Incorporating Ageing Effects into Probabilistic Risk Assessment - A Feasibility Study Utilizing Reliability Physics Models," NUREG/CR-5632, USNRC, (2001). 2. T. ALDEMIR, "A Survey of Dynamic Methodologies for Probabilistic Safety Assessment of Nuclear Power Plants," Annals of Nuclear Energy, 52, 113-124, (2013). 3. C. RABITI, A. ALFONSI, J. COGLIATI, D. MANDELLI and R. KINOSHITA, "Reactor Analysis and Virtual Control Environment (RAVEN) FY12 Report," INL/EXT-12-27351, (2012). 4. D. ANDERS et al., "RELAP-7 Level 2 Milestone Report: Demonstration of a Steady State Single Phase PWR Simulation with RELAP-7," INL/EXT-12-25924, (2012).
Optimization of replacement and inspection decisions for multiple components on a power system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mauney, D.A.
1994-12-31
The use of optimization on the rescheduling of replacement dates provided a very proactive approach to deciding when components on individual units need to be addressed with a run/repair/replace decision. Including the effects of time value of money and taxes and unit need inside the spreadsheet model allowed the decision maker to concentrate on the effects of engineering input and replacement date decisions on the final net present value (NPV). The personal computer (PC)-based model was applied to a group of 140 forced outage critical fossil plant tube components across a power system. The estimated resulting NPV of the optimization was in the tens of millions of dollars. This PC spreadsheet model allows the interaction of inputs from structural reliability risk assessment models, plant foreman interviews, and actual failure history on a by-component, by-unit basis across a complete power production system. This model includes not only the forced outage performance of these components caused by tube failures but, in addition, the forecasted need of the individual units on the power system and the expected cost of their replacement power if forced off line. The use of cash flow analysis techniques in the spreadsheet model results in the calculation of an NPV for a whole combination of replacement dates. This allows rapid assessments of "what if" scenarios of major maintenance projects on a systemwide basis and not just on a unit-by-unit basis.
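The cash-flow core of such a model, discounting a replacement cost against expected forced-outage costs for each candidate replacement year, can be sketched as below. All monetary values, the discount rate, and the age-dependent failure probability are hypothetical, and tax effects are omitted.

```python
def npv_of_replacement(replace_year, horizon=10, discount=0.08,
                       replace_cost=2.0e6, forced_outage_cost=1.5e6):
    """Expected NPV of costs (a negative number) for replacing a component in a given year.
    The annual failure probability is assumed to grow with age (3% per year of service)."""
    npv = 0.0
    for year in range(1, horizon + 1):
        df = (1.0 + discount) ** -year
        if year == replace_year:
            npv -= replace_cost * df
        elif year < replace_year:
            npv -= (0.03 * year) * forced_outage_cost * df   # expected forced-outage cost
    return npv

best = max(range(1, 11), key=npv_of_replacement)
print("Best replacement year with these hypothetical inputs:", best)
```

With these inputs the optimum falls at an intermediate year, which is the kind of trade the spreadsheet model resolves simultaneously across all 140 components.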
TSTA Piping and Flame Arrestor Operating Experience Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cadwallader, Lee C.; Willms, R. Scott
The Tritium Systems Test Assembly (TSTA) was a facility dedicated to tritium handling technology and experiment research at the Los Alamos National Laboratory. The facility operated from 1984 to 2001, running a prototype fusion fuel processing loop with ~100 grams of tritium as well as small experiments. There have been several operating experience reports written on this facility's operation and maintenance experience. This paper describes analysis of two additional components from TSTA, small diameter gas piping that handled small amounts of tritium in a nitrogen carrier gas, and the flame arrestor used in this piping system. The operating experiences and the component failure rates for these components are discussed in this paper. Comparison data from other applications are also presented.
Quantitative risk assessment system (QRAS)
NASA Technical Reports Server (NTRS)
Tan, Zhibin (Inventor); Mosleh, Ali (Inventor); Weinstock, Robert M (Inventor); Smidts, Carol S (Inventor); Chang, Yung-Hsien (Inventor); Groen, Francisco J (Inventor); Swaminathan, Sankaran (Inventor)
2001-01-01
A quantitative risk assessment system (QRAS) builds a risk model of a system for which risk of failure is being assessed, then analyzes the risk of the system corresponding to the risk model. The QRAS performs sensitivity analysis of the risk model by altering fundamental components and quantifications built into the risk model, then re-analyzes the risk of the system using the modifications. More particularly, the risk model is built by building a hierarchy, creating a mission timeline, quantifying failure modes, and building/editing event sequence diagrams. Multiplicities, dependencies, and redundancies of the system are included in the risk model. For analysis runs, a fixed baseline is first constructed and stored. This baseline contains the lowest level scenarios, preserved in event tree structure. The analysis runs, at any level of the hierarchy and below, access this baseline for risk quantitative computation as well as ranking of particular risks. A standalone Tool Box capability exists, allowing the user to store application programs within QRAS.
Microtensile bond strength of etch and rinse versus self-etch adhesive systems.
Hamouda, Ibrahim M; Samra, Nagia R; Badawi, Manal F
2011-04-01
The aim of this study was to compare the microtensile bond strength of the etch and rinse adhesive versus one-component or two-component self-etch adhesives. Twelve intact human molar teeth were cleaned and the occlusal enamel of the teeth was removed. The exposed dentin surfaces were polished and rinsed, and the adhesives were applied. A microhybrid composite resin was applied to form specimens of 4 mm height and 6 mm diameter. The specimens were sectioned perpendicular to the adhesive interface to produce dentin-resin composite sticks, with an adhesive area of approximately 1.4 mm². The sticks were subjected to tensile loading until failure occurred. The debonded areas were examined with a scanning electron microscope to determine the site of failure. The results showed that the microtensile bond strength of the etch and rinse adhesive was higher than that of one-component or two-component self-etch adhesives. The scanning electron microscope examination of the dentin surfaces revealed adhesive and mixed modes of failure. The adhesive mode of failure occurred at the adhesive/dentin interface, while the mixed mode of failure occurred partially in the composite and partially at the adhesive/dentin interface. It was concluded that the etch and rinse adhesive had higher microtensile bond strength when compared to that of the self-etch adhesives. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Watanabe, Shunsuke; Kabashima, Yoshiyuki
2016-09-01
In this study we investigate the resilience of duplex networked layers α and β coupled with antagonistic interlinks, each layer of which inhibits its counterpart at the microscopic level, changing the following factors: whether the influence of the initial failures in α remains [quenched (case Q )] or not [free (case F )]; the effect of intralayer degree-degree correlations in each layer and interlayer degree-degree correlations; and the type of the initial failures, such as random failures or targeted attacks (TAs). We illustrate that the percolation processes repeat in both cases Q and F , although only in case F are nodes that initially failed reactivated. To analytically evaluate the resilience of each layer, we develop a methodology based on the cavity method for deriving the size of a giant component (GC). Strong hysteresis, which is ignored in the standard cavity analysis, is observed in the repetition of the percolation processes particularly in case F . To handle this, we heuristically modify interlayer messages for macroscopic analysis, the utility of which is verified by numerical experiments. The percolation transition in each layer is continuous in both cases Q and F . We also analyze the influences of degree-degree correlations on the robustness of layer α , in particular for the case of TAs. The analysis indicates that the critical fraction of initial failures that makes the GC size in layer α vanish depends only on its intralayer degree-degree correlations. Although our model is defined in a somewhat abstract manner, it may have relevance to ecological systems that are composed of endangered species (layer α ) and invaders (layer β ), the former of which are damaged by the latter whereas the latter are exterminated in the areas where the former are active.
Reliability analysis and initial requirements for FC systems and stacks
NASA Astrophysics Data System (ADS)
Åström, K.; Fontell, E.; Virtanen, S.
In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system in respect to reliability and safety as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (series of 5 sets of 5 parallel stacks) is analysed in respect to stack reliability requirements as a function of predictability of critical failures and Weibull shape factor of failure rate distributions.
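The baseline series/parallel arithmetic for the 5 × 5 example can be sketched as below, assuming independent stacks with constant failure rates and treating each parallel set as a k-out-of-5 group. The MTBF values and the k = 4 requirement are hypothetical, and the paper's partially failed state and failure-predictability effects are not modelled.

```python
from math import comb, exp

def stack_reliability(t_hours, mtbf_hours):
    """Exponential (constant failure rate) stack survival probability."""
    return exp(-t_hours / mtbf_hours)

def k_out_of_n(r, k, n):
    """Probability that at least k of n independent identical units survive."""
    return sum(comb(n, j) * r ** j * (1 - r) ** (n - j) for j in range(k, n + 1))

def system_reliability(t_hours, mtbf_hours, k=4, n_parallel=5, n_series=5):
    """Series chain of n_series sets, each a k-out-of-n_parallel group of stacks."""
    return k_out_of_n(stack_reliability(t_hours, mtbf_hours), k, n_parallel) ** n_series

for mtbf in (50_000, 100_000, 200_000):     # hypothetical stack MTBF values in hours
    print(mtbf, f"{system_reliability(20_000, mtbf):.3f}")
```

The steep dependence of the system figure on the per-stack MTBF is the reason stack reliability requirements have to be derived from the system-level target, as the paper does.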
NASA Astrophysics Data System (ADS)
Jackson, Andrew
2015-07-01
On launch, one of Swarm's absolute scalar magnetometers (ASMs) failed to function, leaving an asymmetrical arrangement of redundant spares on different spacecraft. A decision was required concerning the deployment of individual satellites into the low-orbit pair or the higher "lonely" orbit. I analyse the probabilities for successful operation of two of the science components of the Swarm mission in terms of a classical probabilistic failure analysis, with a view to concluding a favourable assignment for the satellite with the single working ASM. I concentrate on the following two science aspects: the east-west gradiometer aspect of the lower pair of satellites and the constellation aspect, which requires a working ASM in each of the two orbital planes. I use the so-called "expert solicitation" probabilities for instrument failure solicited from Mission Advisory Group (MAG) members. My conclusion from the analysis is that it is better to have redundancy of ASMs in the lonely satellite orbit. The opposite scenario, having redundancy (and thus four ASMs) in the lower orbit, increases the chance of a working gradiometer late in the mission, but it does so at the expense of a likely constellation. Although the results are presented based on actual MAG members' probabilities, the results are rather generic, excepting the case when the probability of individual ASM failure is very small; in this case, any arrangement will ensure a successful mission since there is essentially no failure expected at all.
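The comparison reduces to elementary probability once per-ASM failure probabilities are assumed. The sketch below treats both science goals as requiring a working ASM on the indicated satellites and uses an invented failure probability rather than the MAG expert-solicitation values.

```python
def p_sat_ok(n_asm, q):
    """Probability a satellite keeps a working ASM; q = per-ASM failure probability."""
    return 1.0 - q ** n_asm

def mission_probs(lower_pair_asms, lonely_asms, q):
    p_low = [p_sat_ok(n, q) for n in lower_pair_asms]
    p_gradiometer = p_low[0] * p_low[1]                       # both lower satellites needed
    p_constellation = (1.0 - (1.0 - p_low[0]) * (1.0 - p_low[1])) * p_sat_ok(lonely_asms, q)
    return p_gradiometer, p_constellation

q = 0.2   # assumed per-ASM failure probability over the mission (illustrative only)
options = [("single-ASM satellite in lower pair (redundant ASMs lonely)", (1, 2), 2),
           ("single-ASM satellite in lonely orbit (redundant ASMs in pair)", (2, 2), 1)]
for name, pair, lonely in options:
    g, c = mission_probs(pair, lonely, q)
    print(f"{name}: P(gradiometer)={g:.3f}, P(constellation)={c:.3f}")
```

For any sizeable q this reproduces the qualitative trade reported in the abstract: putting the redundant ASMs on the lonely satellite protects the constellation objective at some cost to the gradiometer probability.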
A review of typical thermal fatigue failure models for solder joints of electronic components
NASA Astrophysics Data System (ADS)
Li, Xiaoyan; Sun, Ruifeng; Wang, Yongdong
2017-09-01
For electronic components, cyclic plastic strain accumulates fatigue damage more readily than elastic strain. When solder joints undergo thermal expansion or contraction, the mismatch between the coefficients of thermal expansion of the electronic component and its substrate produces differential thermal strain, leading to stress concentration. Under repeated cycling, cracks initiate and gradually extend [1]. In this paper, the typical thermal fatigue failure models for solder joints of electronic components are classified, and the methods for obtaining the parameters in each model are summarized based on domestic and foreign literature.
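One widely used representative of this family is the Coffin-Manson type power law, which relates the cyclic plastic strain range to the number of cycles to failure. The constants in the sketch below are placeholders, not calibrated values for any particular solder alloy or for the models surveyed in the paper.

```python
def coffin_manson_cycles(plastic_strain_range, C=0.1, n=2.0):
    """Generic Coffin-Manson power law: N_f = C * (delta_eps_p)**(-n).
    C and n are placeholder constants, not calibrated solder values."""
    return C * plastic_strain_range ** (-n)

for strain in (0.005, 0.01, 0.02):
    print(f"delta_eps_p={strain:.3f} -> N_f ~ {coffin_manson_cycles(strain):,.0f} cycles")
```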
Clarke, S G; Phillips, A T M; Bull, A M J; Cobb, J P
2012-06-01
The impact of anatomical variation and surgical error on excessive wear and loosening of the acetabular component of large diameter metal-on-metal hip arthroplasties was measured using a multi-factorial analysis through 112 different simulations. Each surgical scenario was subjected to eight different daily loading activities using finite element analysis. According to the study findings, excessive wear appears to be predominantly dependent on cup orientation, with inclination error having a greater influence than version error. Acetabular cup loosening, as inferred from initial implant stability, appears to depend predominantly on factors concerning the area of cup-bone contact, specifically the level of cup seating achieved and the individual patient's anatomy. The extent of press fit obtained at the time of surgery did not appear to influence either mechanism of failure in this study. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
1997-01-01
Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software developed at the NASA Lewis Research Center eases this by providing a tool that uses probabilistic reliability analysis techniques to optimize the design and manufacture of brittle material components. CARES/Life is an integrated package that predicts the probability of a monolithic ceramic component's failure as a function of its time in service. It couples commercial finite element programs, which resolve a component's temperature and stress distribution, with reliability evaluation and fracture mechanics routines for modeling strength-limiting defects. These routines are based on calculations of the probabilistic nature of the brittle material's strength.
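The probabilistic strength treatment underlying such tools is commonly based on Weibull statistics, in which each finite element's stress and volume contribute to an overall risk of rupture. The sketch below shows a basic two-parameter volume-flaw form with invented stresses, volumes, and material parameters; it is not the CARES/Life algorithm.

```python
import math

def weibull_failure_probability(element_stresses_mpa, element_volumes_mm3,
                                sigma0_mpa=400.0, m=10.0, v0_mm3=1.0):
    """Two-parameter Weibull (volume-flaw) failure probability summed over elements.
    sigma0, m, and v0 are hypothetical material parameters."""
    risk = sum((v / v0_mm3) * (max(s, 0.0) / sigma0_mpa) ** m
               for s, v in zip(element_stresses_mpa, element_volumes_mm3))
    return 1.0 - math.exp(-risk)

# Placeholder for the stresses and volumes a finite element solution would provide
stresses = [350.0, 300.0, 250.0, 120.0]
volumes = [0.5, 1.0, 2.0, 4.0]
print(f"P_f = {weibull_failure_probability(stresses, volumes):.3f}")
```

Because the Weibull modulus m is large, the most highly stressed elements dominate the failure probability even when they occupy little volume, which is why local stress concentrations drive ceramic component design.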
Product component genealogy modeling and field-failure prediction
King, Caleb; Hong, Yili; Meeker, William Q.
2016-04-13
Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.
Clinical assessment of pacemaker power sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bilitch, M.; Parsonnet, V.; Furman, S.
1980-01-01
The development of power sources for cardiac pacemakers has progressed from a 15-year usage of mercury-zinc batteries to widely used and accepted lithium cells. At present, there are about 6 different types of lithium cells incorporated into commercially distributed pacemakers. The authors reviewed experience over a 5-year period with 1711 mercury-zinc, 130 nuclear (P238) and 1912 lithium powered pacemakers. The lithium units have included 698 lithium-iodide, 270 lithium-silver chromate, 135 lithium-thionyl chloride, 31 lithium-lead and 353 lithium-cupric sulfide batteries. 57 of the lithium units have failed (91.2% component failure and 5.3% battery failure). 459 mercury-zinc units failed (25% component failure and 68% battery depletion). The data show that lithium powered pacemaker failures are primarily component, while mercury-zinc failures are primarily battery related. It is concluded that mercury-zinc powered pulse generators are obsolete and that lithium and nuclear (P238) power sources are highly reliable over the 5 years for which data are available. 3 refs.
Packaging-induced failure of semiconductor lasers and optical telecommunications components
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharps, J.A.
1996-12-31
Telecommunications equipment for field deployment generally has specified lifetimes of > 100,000 hr. To achieve this high reliability, it is common practice to package sensitive components in hermetic, inert gas environments. The intent is to protect components from particulate and organic contamination, oxidation, and moisture. However, for high power density 980 nm diode lasers used in optical amplifiers, the authors found that hermetic, inert gas packaging induced a failure mode not observed in similar, unpackaged lasers. They refer to this failure mode as packaging-induced failure, or PIF. PIF is caused by nanomole amounts of organic contamination which interact with high intensity 980 nm light to form solid deposits over the emitting regions of the lasers. These deposits absorb 980 nm light, causing heating of the laser, narrowing of the band gap, and eventual thermal runaway. The authors have found PIF is averted by packaging with free O2 and/or a getter material that sequesters organics.
New understandings of failure modes in SSL luminaires
NASA Astrophysics Data System (ADS)
Shepherd, Sarah D.; Mills, Karmann C.; Yaga, Robert; Johnson, Cortina; Davis, J. Lynn
2014-09-01
As SSL products are being rapidly introduced into the market, there is a need to develop standard screening and testing protocols that can be performed quickly and provide data surrounding product lifetime and performance. These protocols, derived from standard industry tests, are known as ALTs (accelerated life tests) and can be performed in a timeframe of weeks to months instead of years. Accelerated testing utilizes a combination of elevated temperature and humidity conditions as well as electrical power cycling to control aging of the luminaires. In this study, we report on the findings of failure modes for two different luminaire products exposed to temperature-humidity ALTs. LEDs are typically considered the determining component for the rate of lumen depreciation. However, this study has shown that each luminaire component can independently or jointly influence system performance and reliability. Material choices, luminaire designs, and driver designs all have significant impacts on the system reliability of a product. From recent data, it is evident that the most common failure modes are not within the LED, but instead occur within resistors, capacitors, and other electrical components of the driver. Insights into failure modes and rates as a result of ALTs are reported with emphasis on component influence on overall system reliability.
The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.
Kumar, Mohit; Yadav, Shiv Prasad
2012-07-01
In this paper, a new approach of intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical system component that affects the system reliability. Here weakest t-norm based intuitionistic fuzzy fault tree analysis is presented to calculate fault interval of system components from integrating expert's knowledge and experience in terms of providing the possibility of failure of bottom events. It applies fault-tree analysis, α-cut of intuitionistic fuzzy set and T(ω) (the weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain fault interval and reliability interval of the system. This paper also modifies Tanaka et al.'s fuzzy fault-tree definition. In numerical verification, a malfunction of weapon system "automatic gun" is presented as a numerical example. The result of the proposed method is compared with the listing approaches of reliability analysis methods. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
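For reference, the weakest t-norm mentioned above (the drastic product) returns a non-zero result only when one operand is fully true, which is why it limits the accumulation of fuzziness across repeated operations. A minimal sketch of the operator on membership grades is shown below; the full interval arithmetic on triangular intuitionistic fuzzy numbers is not reproduced.

```python
def weakest_t_norm(a, b):
    """Drastic product T_w: returns min(a, b) when one argument is 1, otherwise 0."""
    if a == 1.0:
        return b
    if b == 1.0:
        return a
    return 0.0

print(weakest_t_norm(1.0, 0.7))   # 0.7
print(weakest_t_norm(0.8, 0.7))   # 0.0
```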
Reliability analysis of a robotic system using hybridized technique
NASA Astrophysics Data System (ADS)
Kumar, Naveen; Komal; Lather, J. S.
2017-09-01
In this manuscript, the reliability of a robotic system has been analyzed using the available data (containing vagueness, uncertainty, etc.). Quantification of involved uncertainties is done through data fuzzification using triangular fuzzy numbers with known spreads as suggested by system experts. With fuzzified data, if the existing fuzzy lambda-tau (FLT) technique is employed, then the computed reliability parameters have a wide range of predictions. Therefore, the decision-maker cannot suggest any specific and influential managerial strategy to prevent unexpected failures and consequently to improve complex system performance. To overcome this problem, the present study utilizes a hybridized technique. With this technique, fuzzy set theory is utilized to quantify uncertainties, a fault tree is utilized for the system modeling, the lambda-tau method is utilized to formulate mathematical expressions for failure/repair rates of the system, and a genetic algorithm is utilized to solve the established nonlinear programming problem. Different reliability parameters of a robotic system are computed and the results are compared with the existing technique. The components of the robotic system follow an exponential distribution, i.e., have constant failure rates. Sensitivity analysis is also performed and the impact on system mean time between failures (MTBF) is addressed by varying other reliability parameters. Based on the analysis, some influential suggestions are given to improve the system performance.
NASA Technical Reports Server (NTRS)
Fayssal, Safie; Weldon, Danny
2008-01-01
The United States National Aeronautics and Space Administration (NASA) is in the midst of a space exploration program called Constellation to send crew and cargo to the international Space Station, to the moon, and beyond. As part of the Constellation program, a new launch vehicle, Ares I, is being developed by NASA Marshall Space Flight Center. Designing a launch vehicle with high reliability and increased safety requires a significant effort in understanding design variability and design uncertainty at the various levels of the design (system, element, subsystem, component, etc.) and throughout the various design phases (conceptual, preliminary design, etc.). In a previous paper [1] we discussed a probabilistic functional failure analysis approach intended mainly to support system requirements definition, system design, and element design during the early design phases. This paper provides an overview of the application of probabilistic engineering methods to support the detailed subsystem/component design and development as part of the "Design for Reliability and Safety" approach for the new Ares I Launch Vehicle. Specifically, the paper discusses probabilistic engineering design analysis cases that had major impact on the design and manufacturing of the Space Shuttle hardware. The cases represent important lessons learned from the Space Shuttle Program and clearly demonstrate the significance of probabilistic engineering analysis in better understanding design deficiencies and identifying potential design improvement for Ares I. The paper also discusses the probabilistic functional failure analysis approach applied during the early design phases of Ares I and the forward plans for probabilistic design analysis in the detailed design and development phases.
A quantitative analysis of the F18 flight control system
NASA Technical Reports Server (NTRS)
Doyle, Stacy A.; Dugan, Joanne B.; Patterson-Hine, Ann
1993-01-01
This paper presents an informal quantitative analysis of the F18 flight control system (FCS). The analysis technique combines a coverage model with a fault tree model. To demonstrate the method's extensive capabilities, we replace the fault tree with a digraph model of the F18 FCS, the only model available to us. The substitution shows that while digraphs have primarily been used for qualitative analysis, they can also be used for quantitative analysis. Based on our assumptions and the particular failure rates assigned to the F18 FCS components, we show that coverage does have a significant effect on the system's reliability and thus it is important to include coverage in the reliability analysis.
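The influence of coverage can be illustrated with the classic non-repairable duplex example, in which an uncovered first failure takes the system down immediately. The failure rate, mission time, and coverage values below are hypothetical, not the F18 figures used in the paper.

```python
import math

def duplex_unreliability(rate, coverage, mission_time):
    """P(system failure) for a non-repairable duplex pair with imperfect coverage c:
    the system is lost at the first failure with probability (1 - c), otherwise at the second."""
    p_first = 1.0 - math.exp(-2.0 * rate * mission_time)       # at least one unit has failed
    p_both = (1.0 - math.exp(-rate * mission_time)) ** 2       # both units have failed
    return (1.0 - coverage) * p_first + coverage * p_both

for c in (1.0, 0.99, 0.95):
    print(f"coverage={c}: P(fail) = {duplex_unreliability(1e-4, c, 100.0):.2e}")
```

Even a 1% lapse in coverage roughly triples the failure probability in this toy case, which is the effect the paper argues must be included in the reliability analysis.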
NASA Astrophysics Data System (ADS)
Sin, Yongkun; Ayvazian, Talin; Brodie, Miles; Lingley, Zachary
2018-03-01
High-power single-mode (SM) and multi-mode (MM) InGaAs-AlGaAs strained quantum well (QW) lasers are critical components for both terrestrial and space satellite communications systems. Since these lasers predominantly fail by catastrophic and sudden degradation due to catastrophic optical damage (COD), it is especially crucial for space satellite applications to investigate reliability, failure modes, precursor signatures of failure, and degradation mechanisms of these lasers. Our group reported a new failure mode in MM and SM InGaAs-AlGaAs strained QW lasers in 2009 and 2016, respectively. Our group also reported in 2017 that bulk failure due to catastrophic optical bulk damage (COBD) is the dominant failure mode of both SM and MM lasers that were subject to long-term life-tests. For the present study, we continued our physics of failure investigation by performing long-term life-tests followed by failure mode analysis (FMA) using nondestructive and destructive micro-analytical techniques. We performed long-term accelerated life-tests on state-of-the-art SM and MM InGaAs-AlGaAs strained QW lasers under ACC mode. Our life-tests have accumulated over 25,000 test hours for SM lasers and over 35,000 test hours for MM lasers. We first employed the electron beam induced current (EBIC) technique to identify failure modes of degraded SM lasers by observing dark line defects. All the SM failures that we studied showed catastrophic and sudden degradation and all of these failures were bulk failures. Since degradation mechanisms responsible for COBD are still not well understood, we also employed other techniques including focused ion beam (FIB) and high-resolution TEM to further study dark line defects and dislocations in post-aged lasers.
Calvert, George T; Cummings, Judd E; Bowles, Austin J; Jones, Kevin B; Wurtz, L Daniel; Randall, R Lor
2014-03-01
Aseptic failure of massive endoprostheses used in the reconstruction of major skeletal defects remains a major clinical problem. Fixation using compressive osseointegration was developed as an alternative to cemented and traditional press-fit fixation in an effort to decrease aseptic failure rates. The purpose of this study was to answer the following questions: (1) What is the survivorship of this technique at minimum 2-year followup? (2) Were patient demographic variables (age, sex) or anatomic location associated with implant failure? (3) Were there any prosthesis-related variables (eg, spindle size) associated with failure? (4) Was there a discernible learning curve associated with the use of the new device as defined by a difference in failure rate early in the series versus later on? The first 50 cases using compressive osseointegration fixation from two tertiary referral centers were retrospectively studied. Rates of component removal for any reason and for aseptic failure were calculated. Demographic, surgical, and oncologic factors were analyzed using regression analysis to assess for association with implant failure. Minimum followup was 2 years with a mean of 66 months. Median age at the time of surgery was 14.5 years. A total of 15 (30%) implants were removed for any reason. Of these revisions, seven (14%) were the result of aseptic failure. Five of the seven aseptic failures occurred at less than 1 year (average, 8.3 months), and none occurred beyond 17 months. With the limited numbers available, no demographic, surgical, or prosthesis-related factors correlated with failure. Most aseptic failures of compressive osseointegration occurred early. Longer followup is needed to determine if this technique is superior to other forms of fixation.
NASA Astrophysics Data System (ADS)
Iskandar, Ismed; Satria Gondokaryono, Yudi
2016-02-01
In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes depending upon the age and the environment of the system and its components. Another problem in reliability theory is one of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. The Bayesian estimation analyses allow us to combine past knowledge or experience in the form of an a priori distribution with life test data to make inferences of the parameter of interest. In this paper, we have investigated the application of the Bayesian estimation analyses to competing risk systems. The cases are limited to the models with independent causes of failure by using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample size. The simulation data are analyzed by using Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the value of the standard deviation in the opposite direction. For perfect information on the prior distribution, the estimation methods of the Bayesian analyses are better than those of the maximum likelihood. The sensitivity analyses show some amount of sensitivity over the shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range between the true value and the maximum likelihood estimated value lines.
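A minimal sketch of the Bayesian-versus-MLE comparison follows, assuming a single failure cause, complete (uncensored) data, and a flat prior on a bounded grid; the paper's full competing-risk treatment with informative priors is more involved, and all numerical values below are assumed:

```python
# Contrast Bayesian (grid posterior, flat prior) and maximum-likelihood
# estimates of Weibull shape/scale from simulated lifetime data.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
true_shape, true_scale = 1.8, 500.0                       # assumed "true" parameters
data = weibull_min.rvs(true_shape, scale=true_scale, size=30, random_state=rng)

# Maximum-likelihood estimate (location fixed at zero)
mle_shape, _, mle_scale = weibull_min.fit(data, floc=0)

# Grid-based Bayesian posterior with a flat prior
shapes = np.linspace(0.5, 4.0, 200)
scales = np.linspace(100.0, 1200.0, 200)
S, C = np.meshgrid(shapes, scales, indexing="ij")
loglik = np.array([[weibull_min.logpdf(data, s, scale=c).sum() for c in scales]
                   for s in shapes])
post = np.exp(loglik - loglik.max())
post /= post.sum()
bayes_shape = (post * S).sum()                            # posterior means
bayes_scale = (post * C).sum()

print(f"MLE   : shape={mle_shape:.2f}, scale={mle_scale:.1f}")
print(f"Bayes : shape={bayes_shape:.2f}, scale={bayes_scale:.1f}")
```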
NASA Astrophysics Data System (ADS)
Mullin, Daniel Richard
2013-09-01
The majority of space programs whether manned or unmanned for science or exploration require that a Failure Modes Effects and Criticality Analysis (FMECA) be performed as part of their safety and reliability activities. This comes as no surprise given that FMECAs have been an integral part of the reliability engineer's toolkit since the 1950s. The reasons for performing a FMECA are well known including fleshing out system single point failures, system hazards and critical components and functions. However, in the author's ten years' experience as a space systems safety and reliability engineer, findings demonstrate that the FMECA is often performed as an afterthought, simply to meet contract deliverable requirements and is often started long after the system requirements allocation and preliminary design have been completed. There are also important qualitative and quantitative components often missing which can provide useful data to all of project stakeholders. These include; probability of occurrence, probability of detection, time to effect and time to detect and, finally, the Risk Priority Number. This is unfortunate as the FMECA is a powerful system design tool that when used effectively, can help optimize system function while minimizing the risk of failure. When performed as early as possible in conjunction with writing the top level system requirements, the FMECA can provide instant feedback on the viability of the requirements while providing a valuable sanity check early in the design process. It can indicate which areas of the system will require redundancy and which areas are inherently the most risky from the onset. Based on historical and practical examples, it is this author's contention that FMECAs are an immense source of important information for all involved stakeholders in a given project and can provide several benefits including, efficient project management with respect to cost and schedule, system engineering and requirements management, assembly integration and test (AI&T) and operations if applied early, performed to completion and updated along with system design.
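A minimal sketch (illustrative failure modes and ratings, not drawn from any specific program) of the quantitative FMECA fields the author highlights, combined into the Risk Priority Number used to rank failure modes:

```python
# Rank hypothetical failure modes by RPN = severity x occurrence x detection.
failure_modes = [
    # (description, severity 1-10, occurrence 1-10, detection 1-10)
    ("valve stuck closed", 9, 3, 4),
    ("sensor drift",       5, 6, 7),
    ("connector fretting", 7, 4, 2),
]

ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for desc, sev, occ, det in ranked:
    print(f"{desc:20s} RPN = {sev * occ * det}")
```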
Modelling Wind Turbine Failures based on Weather Conditions
NASA Astrophysics Data System (ADS)
Reder, Maik; Melero, Julio J.
2017-11-01
A large proportion of the overall costs of a wind farm is directly related to operation and maintenance (O&M) tasks. By applying predictive O&M strategies rather than corrective approaches these costs can be decreased significantly. Here, wind turbine (WT) failure models in particular can help to understand the components’ degradation processes and enable the operators to anticipate upcoming failures. Usually, these models are based on the age of the systems or components. However, latest research shows that the on-site weather conditions also affect the turbine failure behaviour significantly. This study presents a novel approach to model WT failures based on the environmental conditions to which they are exposed. The results focus on general WT failures, as well as on four main components: gearbox, generator, pitch and yaw system. A penalised likelihood estimation is used in order to avoid problems due to, for example, highly correlated input covariates. The relative importance of the model covariates is assessed in order to analyse the effect of each weather parameter on the model output.
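A minimal sketch of this kind of penalised covariate selection, using a penalised logistic model as a stand-in for the paper's penalised-likelihood failure model; the data are synthetic and all variable names and effect sizes are assumptions:

```python
# Relate daily weather covariates to a binary failure indicator and inspect
# which covariates survive the L1 penalty.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 2000
wind_speed = rng.weibull(2.0, n) * 8.0
temperature = rng.normal(10.0, 8.0, n)
humidity = rng.uniform(30.0, 100.0, n)
turbulence = 0.1 * wind_speed + rng.normal(0.0, 0.5, n)     # correlated covariate

X = np.column_stack([wind_speed, temperature, humidity, turbulence])
logit = -5.0 + 0.25 * wind_speed + 0.02 * humidity          # assumed true effects
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

Xs = StandardScaler().fit_transform(X)
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(Xs, y)

for name, coef in zip(["wind_speed", "temperature", "humidity", "turbulence"],
                      model.coef_[0]):
    print(f"{name:12s} coefficient = {coef:+.3f}")
```

The penalty shrinks coefficients of uninformative or redundant covariates toward zero, which is the behaviour the relative-importance assessment in the study builds on.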
NASA Astrophysics Data System (ADS)
Ozaki, Hirokazu; Kara, Atsushi; Cheng, Zixue
2012-05-01
In this article, we investigate the reliability of M-for-N (M:N) shared protection systems. We focus on the reliability that is perceived by an end user of one of N units. We assume that any failed unit is instantly replaced by one of the M units (if available). We describe the effectiveness of such a protection system in a quantitative manner under the condition that the failed units are not repairable. Mathematical analysis gives the closed-form solution of the reliability and mean time to failure (MTTF). We also analyse several numerical examples of the reliability and MTTF. This result can be applied, for example, to the analysis and design of an integrated circuit consisting of redundant backup components. In such a device, repairing a failed component is unrealistic. The analysis provides useful information for the design of general shared protection systems in which the failed units are not repaired.
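A simplified illustration of the quantities involved (this is the system-level result under exponential lifetimes and instant, perfect switchover, not the article's per-user closed form; all parameter values are assumed):

```python
# With N active units of constant failure rate lam and M non-repairable
# spares replacing failures instantly, failures arrive as a Poisson process
# of rate N*lam and the system survives while at most M have occurred.
import math

def system_reliability(t, N, M, lam):
    rate = N * lam * t
    return sum(math.exp(-rate) * rate**k / math.factorial(k) for k in range(M + 1))

def system_mttf(N, M, lam):
    return (M + 1) / (N * lam)          # expected time to the (M+1)-th failure

N, M, lam = 8, 2, 1e-5                  # assumed values
print(f"R(10,000 h) = {system_reliability(1e4, N, M, lam):.4f}")
print(f"MTTF        = {system_mttf(N, M, lam):.0f} h")
```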
Robustness surfaces of complex networks
NASA Astrophysics Data System (ADS)
Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis
2014-09-01
Although the robustness of complex networks has been extensively studied in the last decade, a unifying framework able to embrace all the proposed metrics is still lacking. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution for the two aforementioned problems by defining the R*-value and introducing the concept of robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). We first adjust the initial robustness of a network to 1. Second, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several percentages of failures and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared.
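A minimal sketch of the PCA step only (synthetic numbers and illustrative metric names; the R*-value and surface construction are not reproduced here):

```python
# For one failure scenario, rows are failure percentages and columns are
# normalised robustness metrics; the loading of the first principal component
# points at the most informative metric.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
failure_fractions = np.linspace(0.05, 0.5, 10)
metrics = np.column_stack([
    1.0 - failure_fractions + rng.normal(0, 0.02, 10),            # e.g. largest component
    np.exp(-3.0 * failure_fractions) + rng.normal(0, 0.02, 10),   # e.g. efficiency
    1.0 - failure_fractions**2 + rng.normal(0, 0.02, 10),         # e.g. two-terminal reliability
])

pca = PCA(n_components=1).fit(metrics)
loadings = np.abs(pca.components_[0])
names = ["largest_component", "efficiency", "two_terminal_reliability"]
print("explained variance ratio:", round(float(pca.explained_variance_ratio_[0]), 3))
print("most informative metric :", names[int(np.argmax(loadings))])
```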
Cantilever testing of sintered-silver interconnects
Wereszczak, Andrew A.; Chen, Branndon R.; Jadaan, Osama M.; ...
2017-10-19
Cantilever testing is an underutilized test method from which results and interpretations promote greater understanding of the tensile and shear failure responses of interconnects, metallizations, or bonded joints. The use and analysis of this method were pursued through the mechanical testing of sintered-silver interconnects that joined Ni/Au-plated copper pillars or Ti/Ni/Ag-plated silicon pillars to Ag-plated direct bonded copper substrates. Sintered-silver was chosen as the interconnect test medium because of its high electrical and thermal conductivities and high-temperature capability—attractive characteristics for a candidate interconnect in power electronic components and other devices. Deep beam theory was used to improve upon the estimations of the tensile and shear stresses calculated from classical beam theory. The failure stresses of the sintered-silver interconnects were observed to be dependent on test-condition and test-material-system. In conclusion, the experimental simplicity of cantilever testing, and the ability to analytically calculate tensile and shear stresses at failure, result in it being an attractive mechanical test method to evaluate the failure response of interconnects.
Organ failure and tight glycemic control in the SPRINT study.
Chase, J Geoffrey; Pretty, Christopher G; Pfeifer, Leesa; Shaw, Geoffrey M; Preiser, Jean-Charles; Le Compte, Aaron J; Lin, Jessica; Hewett, Darren; Moorhead, Katherine T; Desaive, Thomas
2010-01-01
Intensive care unit mortality is strongly associated with organ failure rate and severity. The sequential organ failure assessment (SOFA) score is used to evaluate the impact of a successful tight glycemic control (TGC) intervention (SPRINT) on organ failure, morbidity, and thus mortality. A retrospective analysis of 371 patients (3,356 days) on SPRINT (August 2005 - April 2007) and 413 retrospective patients (3,211 days) from two years prior, matched by Acute Physiology and Chronic Health Evaluation (APACHE) III. SOFA is calculated daily for each patient. The effect of the SPRINT TGC intervention is assessed by comparing the percentage of patients with SOFA ≤5 each day and its trends over time and cohort/group. Organ-failure-free days (all SOFA components ≤2) and number of organ failures (SOFA components >2) are also compared. Cumulative time in the 4.0 to 7.0 mmol/L band (cTIB) was evaluated daily to link tightness and consistency of TGC (cTIB ≥0.5) to SOFA ≤5 using conditional and joint probabilities. Admission and maximum SOFA scores were similar (P = 0.20; P = 0.76), with similar time to maximum (median: one day; IQR: [1,3] days; P = 0.99). Median length of stay was similar (4.1 days SPRINT and 3.8 days Pre-SPRINT; P = 0.94). The percentage of patients with SOFA ≤5 is different over the first 14 days (P = 0.016), rising to approximately 75% for Pre-SPRINT and approximately 85% for SPRINT, with clear separation after two days. Organ-failure-free days were different (SPRINT = 41.6%; Pre-SPRINT = 36.5%; P < 0.0001) as were the percent of total possible organ failures (SPRINT = 16.0%; Pre-SPRINT = 19.0%; P < 0.0001). By Day 3 over 90% of SPRINT patients had cTIB ≥0.5 (37% Pre-SPRINT) reaching 100% by Day 7 (50% Pre-SPRINT). Conditional and joint probabilities indicate tighter, more consistent TGC under SPRINT (cTIB ≥0.5) increased the likelihood of SOFA ≤5. SPRINT TGC resolved organ failure faster, and for more patients, from similar admission and maximum SOFA scores, than conventional control. These reductions mirror the reduced mortality with SPRINT. The cTIB ≥0.5 metric provides a first benchmark linking TGC quality to organ failure. These results support other physiological and clinical results indicating the role tight, consistent TGC can play in reducing organ failure, morbidity and mortality, and should be validated on data from randomised trials.
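A minimal sketch (synthetic patient-day records with illustrative column names) of the conditional and joint probability computations the study uses to link cTIB and SOFA:

```python
# Compute P(SOFA<=5), P(cTIB>=0.5), their joint probability, and
# P(SOFA<=5 | cTIB>=0.5) from daily records.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n_days = 500
df = pd.DataFrame({
    "cTIB": rng.beta(4, 2, n_days),              # cumulative time-in-band fraction
    "SOFA": rng.integers(0, 15, n_days),         # daily SOFA score
})

a = df["SOFA"] <= 5
b = df["cTIB"] >= 0.5
print(f"P(SOFA<=5)               = {a.mean():.2f}")
print(f"P(cTIB>=0.5)             = {b.mean():.2f}")
print(f"P(SOFA<=5 and cTIB>=0.5) = {(a & b).mean():.2f}")
print(f"P(SOFA<=5 | cTIB>=0.5)   = {(a & b).mean() / b.mean():.2f}")
```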
Frick, Marcel; Fischer, Jörg; Helbling, Arthur; Ruëff, Franziska; Wieczorek, Dorothea; Ollert, Markus; Pfützner, Wolfgang; Müller, Sabine; Huss-Marp, Johannes; Dorn, Britta; Biedermann, Tilo; Lidholm, Jonas; Ruecker, Gerta; Bantleon, Frank; Miehe, Michaela; Spillner, Edzard; Jakob, Thilo
2016-12-01
Component resolution recently identified distinct sensitization profiles in honey bee venom (HBV) allergy, some of which were dominated by specific IgE to Api m 3 and/or Api m 10, which have been reported to be underrepresented in therapeutic HBV preparations. We performed a retrospective analysis of component-resolved sensitization profiles in HBV-allergic patients and association with treatment outcome. HBV-allergic patients who had undergone controlled honey bee sting challenge after at least 6 months of HBV immunotherapy (n = 115) were included and classified as responder (n = 79) or treatment failure (n = 36) on the basis of absence or presence of systemic allergic reactions upon sting challenge. IgE reactivity to a panel of HBV allergens was analyzed in sera obtained before immunotherapy and before sting challenge. No differences were observed between responders and nonresponders regarding levels of IgE sensitization to Api m 1, Api m 2, Api m 3, and Api m 5. In contrast, Api m 10 specific IgE was moderately but significantly increased in nonresponders. Predominant Api m 10 sensitization (>50% of specific IgE to HBV) was the best discriminator (specificity, 95%; sensitivity, 25%) with an odds ratio of 8.444 (2.127-33.53; P = .0013) for treatment failure. Some but not all therapeutic HBV preparations displayed a lack of Api m 10, whereas Api m 1 and Api m 3 immunoreactivity was comparable to that of crude HBV. In line with this, significant Api m 10 sIgG 4 induction was observed only in those patients who were treated with HBV in which Api m 10 was detectable. Component-resolved sensitization profiles in HBV allergy suggest predominant IgE sensitization to Api m 10 as a risk factor for treatment failure in HBV immunotherapy. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Filiatrault, Andre; Sullivan, Timothy
2014-08-01
With the development and implementation of performance-based earthquake engineering, harmonization of performance levels between structural and nonstructural components becomes vital. Even if the structural components of a building achieve a continuous or immediate occupancy performance level after a seismic event, failure of architectural, mechanical or electrical components can lower the performance level of the entire building system. This reduction in performance caused by the vulnerability of nonstructural components has been observed during recent earthquakes worldwide. Moreover, nonstructural damage has limited the functionality of critical facilities, such as hospitals, following major seismic events. The investment in nonstructural components and building contents is far greater than that of structural components and framing. Therefore, it is not surprising that in many past earthquakes, losses from damage to nonstructural components have exceeded losses from structural damage. Furthermore, the failure of nonstructural components can become a safety hazard or can hamper the safe movement of occupants evacuating buildings, or of rescue workers entering buildings. In comparison to structural components and systems, there is relatively limited information on the seismic design of nonstructural components. Basic research work in this area has been sparse, and the available codes and guidelines are usually, for the most part, based on past experiences, engineering judgment and intuition, rather than on objective experimental and analytical results. Often, design engineers are forced to start almost from square one after each earthquake event: to observe what went wrong and to try to prevent repetitions. This is a consequence of the empirical nature of current seismic regulations and guidelines for nonstructural components. This review paper summarizes current knowledge on the seismic design and analysis of nonstructural building components, identifying major knowledge gaps that will need to be filled by future research. Furthermore, considering recent trends in earthquake engineering, the paper explores how performance-based seismic design might be conceived for nonstructural components, drawing on recent developments made in the field of seismic design and hinting at the specific considerations required for nonstructural components.
Review of Literature on Probability of Detection for Liquid Penetrant Nondestructive Testing
2011-11-01
increased maintenance costs, or catastrophic failure of safety-critical structure. Knowledge of the reliability achieved by NDT methods, including ... representative components to gather data for statistical analysis, which can be prohibitively expensive. To account for sampling variability inherent in any ... Sioux City and Pensacola. (Those recommendations were discussed in Section 3.4.) Drury et al. report on a factorial experiment aimed at identifying the
Ground Vehicle Condition Based Maintenance
2010-10-04
Diagnostic Process Map; 32 FMEAs developed for the diesel engine, transmission, and alternators. Analysis: identify failure modes; derive design factors and ... S&T Initiatives; TARDEC P&D Process Map; Component Testing; ARL CBM Research; AMSAA SDC & Terrain Modeling. CBM+ Overview ... RCM and CBM are core processes for CBM+ system development; Army Regulation 750-1, 20 Sep 2007, p. 79 - Reliability Centered Maintenance (RCM
Application of the NUREG/CR-6850 EPRI/NRC Fire PRA Methodology to a DOE Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom Elicson; Bentley Harwood; Richard Yorg
2011-03-01
The application of the NUREG/CR-6850 EPRI/NRC fire PRA methodology to a DOE facility presented several challenges. This paper documents the process and discusses several insights gained during development of the fire PRA. A brief review of the tasks performed is provided with particular focus on the following: • Tasks 5 and 14: Fire-induced risk model and fire risk quantification. A key lesson learned was to begin model development and quantification as early as possible in the project using screening values and simplified modeling if necessary. • Tasks 3 and 9: Fire PRA cable selection and detailed circuit failure analysis. In retrospect, it would have been beneficial to perform the model development and quantification in 2 phases with detailed circuit analysis applied during phase 2. This would have allowed for development of a robust model and quantification earlier in the project and would have provided insights into where to focus the detailed circuit analysis efforts. • Tasks 8 and 11: Scoping fire modeling and detailed fire modeling. More focus should be placed on detailed fire modeling and less focus on scoping fire modeling. This was the approach taken for the fire PRA. • Task 14: Fire risk quantification. Typically, multiple safe shutdown (SSD) components fail during a given fire scenario. Therefore dependent failure analysis is critical to obtaining a meaningful fire risk quantification. Dependent failure analysis for the fire PRA presented several challenges which will be discussed in the full paper.
Chou, Cheng-Chen; Pressler, Susan J; Giordani, Bruno; Fetzer, Susan Jane
2015-11-01
To evaluate the validity of the Chinese version of the CogState battery, a computerised cognitive testing among patients with heart failure in Taiwan. Cognitive deficits are common in patients with heart failure and a validated Chinese measurement is required for assessing cognitive change for this population. The CogState computerised battery is a measurement of cognitive function and has been validated in many languages, but not Chinese. A cross-sectional study. A convenience sample consisted of 76 women with heart failure and 64 healthy women in northern Taiwan. Women completed the Chinese version of the CogState battery and the Montreal Cognitive Assessment. Construct validity of the Chinese version of the battery was evaluated by exploratory factor analysis and known-group comparisons. Convergent validity of the CogState tasks was examined by Pearson correlation coefficients. Principal components factor analysis with promax rotation showed two factors reflecting the speed and memory dimensions of the tests. Scores for CogState battery tasks showed significant differences between the heart failure and healthy control group. Examination of convergent validity of the CogState found a significant association with the Montreal Cognitive Assessment. The Chinese CogState Battery has satisfactory construct and convergent validity to measure cognitive deficits in patients with heart failure in Taiwan. The Chinese CogState battery is a valid instrument for detecting cognitive deficits that may be subtle in the early stages, and identifying changes that provide insights into patients' abilities to implement treatment accurately and consistently. Better interventions tailored to the needs of the cognitive impaired population can be developed. © 2015 John Wiley & Sons Ltd.
Reliability of hybrid microcircuit discrete components
NASA Technical Reports Server (NTRS)
Allen, R. V.
1972-01-01
Data accumulated during 4 years of research and evaluation of ceramic chip capacitors, ceramic carrier mounted active devices, beam-lead transistors, and chip resistors are presented. Life and temperature coefficient test data, and optical and scanning electron microscope photographs of device failures are presented and the failure modes are described. Particular interest is given to discrete component qualification, power burn-in, and procedures for testing and screening discrete components. Burn-in requirements and test data will be given in support of 100 percent burn-in policy on all NASA flight programs.
Ceramic applications in turbine engines. [for improved component performance and reduced fuel usage
NASA Technical Reports Server (NTRS)
Hudson, M. S.; Janovicz, M. A.; Rockwood, F. A.
1980-01-01
Ceramic material characterization and testing of ceramic nozzle vanes, turbine tip shrouds, and regenerator disks at 36 C above the baseline engine TIT and the design, analysis, fabrication and development activities are described. The design of ceramic components for the next generation engine to be operated at 2070 F was completed. Coupons simulating the critical 2070 F rotor blade were hot spin tested to failure with sufficient margin to qualify sintered silicon nitride and sintered silicon carbide, validating both the attachment design and the finite element strength analysis. Progress made in increasing strength, minimizing variability, and developing nondestructive evaluation techniques is reported.
Flight test of a full authority Digital Electronic Engine Control system in an F-15 aircraft
NASA Technical Reports Server (NTRS)
Barrett, W. J.; Rembold, J. P.; Burcham, F. W.; Myers, L.
1981-01-01
The Digital Electronic Engine Control (DEEC) system considered is a relatively low cost digital full authority control system containing selectively redundant components and fault detection logic with capability for accommodating faults to various levels of operational capability. The DEEC digital control system is built around a 16-bit, 1.2 microsecond cycle time, CMOS microprocessor, microcomputer system with approximately 14 K of available memory. Attention is given to the control mode, component bench testing, closed loop bench testing, a failure mode and effects analysis, sea-level engine testing, simulated altitude engine testing, flight testing, the data system, cockpit, and real time display.
NASA Technical Reports Server (NTRS)
Steele, John; Metselaar, Carol; Peyton, Barbara; Rector, Tony; Rossato, Robert; Macias, Brian; Weigel, Dana; Holder, Don
2015-01-01
Water entered the Extravehicular Mobility Unit (EMU) helmet during extravehicular activity (EVA) no. 23 aboard the International Space Station on July 16, 2013, resulting in the termination of the EVA approximately 1 hour after it began. It was estimated that 1.5 liters of water had migrated up the ventilation loop into the helmet, adversely impacting the astronaut's hearing, vision, and verbal communication. Subsequent on-board testing and ground-based test, tear-down, and evaluation of the affected EMU hardware components determined that the proximate cause of the mishap was blockage of all water separator drum holes with a mixture of silica and silicates. The blockages caused a failure of the water separator degassing function, which resulted in EMU cooling water spilling into the ventilation loop, migrating around the circulating fan, and ultimately pushing into the helmet. The root cause of the failure was determined to be ground-processing shortcomings of the Airlock Cooling Loop Recovery (ALCLR) Ion Filter Beds, which led to various levels of contaminants being introduced into the filters before they left the ground. Those contaminants were thereafter introduced into the EMU hardware on-orbit during ALCLR scrubbing operations. This paper summarizes the failure analysis results along with identified process, hardware, and operational corrective actions that were implemented as a result of findings from this investigation.
NASA Astrophysics Data System (ADS)
Li, Jianfeng; Xiao, Mingqing; Liang, Yajun; Tang, Xilang; Li, Chao
2018-01-01
The solenoid valve is a kind of basic automation component applied widely. Analyzing and predicting its degradation and failure mechanisms is significant for improving solenoid valve reliability and for research on prolonging its life. In this paper, a three-dimensional finite element analysis model of a solenoid valve is established based on ANSYS Workbench software. A sequential coupling method used to calculate the temperature field and mechanical stress field of the solenoid valve is put forward. The simulation result shows that the sequential coupling method can calculate and analyze the temperature and stress distribution of the solenoid valve accurately, which has been verified through an accelerated life test. A Kalman filtering algorithm is introduced into the data processing, which can effectively reduce measuring deviation and restore more accurate data information. Based on different driving currents, a kind of failure mechanism which can easily cause the degradation of coils is obtained and an optimization design scheme for the electro-insulating rubbers is also proposed. The high temperature generated by the driving current and the thermal stress resulting from thermal expansion can easily cause the degradation of coil wires, which lowers the electrical resistance of the coils and results in the eventual failure of the solenoid valve. The method of finite element analysis can be applied to fault diagnosis and prognostics of various solenoid valves and improve the reliability of solenoid valve health management.
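A minimal sketch of the Kalman-filtering step used to reduce measurement deviation, assuming a scalar random-walk state model and synthetic data; the noise levels are assumptions, not taken from the paper:

```python
# Smooth a noisy temperature-like measurement series with a 1-D Kalman filter.
import numpy as np

rng = np.random.default_rng(4)
true_temp = 25.0 + np.cumsum(rng.normal(0.0, 0.05, 300))   # slowly drifting coil temperature
measured = true_temp + rng.normal(0.0, 1.5, 300)           # noisy sensor readings

q, r = 0.05**2, 1.5**2        # process and measurement noise variances (assumed)
x, p = measured[0], 1.0       # state estimate and its variance
filtered = []
for z in measured:
    p = p + q                 # predict
    k = p / (p + r)           # Kalman gain
    x = x + k * (z - x)       # update with measurement z
    p = (1.0 - k) * p
    filtered.append(x)

rmse_raw = np.sqrt(np.mean((measured - true_temp) ** 2))
rmse_kf = np.sqrt(np.mean((np.array(filtered) - true_temp) ** 2))
print(f"RMSE raw measurements   : {rmse_raw:.2f}")
print(f"RMSE after Kalman filter: {rmse_kf:.2f}")
```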
NASA Astrophysics Data System (ADS)
Jin, Liu; Yongguang, Chen; Zhiliang, Tan; Jie, Yang; Xijun, Zhang; Zhenxing, Wang
2011-10-01
Electrostatic discharge (ESD) phenomena involve both electrical and thermal effects, and a direct electrostatic discharge to an electronic device is one of the most severe threats to component reliability. Therefore, the electrical and thermal stability of multifinger microwave bipolar transistors (BJTs) under ESD conditions has been investigated theoretically and experimentally. 100 samples have been tested for multiple pulses until a failure occurred. Meanwhile, the distributions of electric field, current density and lattice temperature have also been analyzed by use of the two-dimensional device simulation tool Medici. There is good agreement between the simulated results and the failure analysis. In the case of thermal coupling, the avalanche current distribution in the fingers is in general spatially unstable and results in the formation of current crowding effects and crystal defects. The experimental results indicate that the collector-base junction is more sensitive to ESD than the emitter-base junction because of the particular device structure. When the ESD level increased to 1.3 kV, the collector-base junction was burnt out first. The analysis has also demonstrated that ESD failures occur generally by exceeding the breakdown voltage of the dielectric or by overheating of the aluminum-silicon eutectic. In addition, fatigue phenomena are observed during ESD testing, with devices that still function after repeated low-intensity ESDs but whose performances have been severely degraded.
Effect of Crystal Orientation on Analysis of Single-Crystal, Nickel-Based Turbine Blade Superalloys
NASA Technical Reports Server (NTRS)
Swanson, G. R.; Arakere, N. K.
2000-01-01
High-cycle fatigue-induced failures in turbine and turbopump blades are a pervasive problem. Single-crystal nickel turbine blades are used because of their superior creep, stress rupture, melt resistance, and thermomechanical fatigue capabilities. Single-crystal materials have highly orthotropic properties making the position of the crystal lattice relative to the part geometry a significant and complicating factor. A fatigue failure criterion based on the maximum shear stress amplitude on the 24 octahedral and 6 cube slip systems is presented for single-crystal nickel superalloys (FCC crystal). This criterion greatly reduces the scatter in uniaxial fatigue data for PWA 1493 at 1,200 F in air. Additionally, single-crystal turbine blades used in the Space Shuttle main engine high pressure fuel turbopump/alternate turbopump are modeled using a three-dimensional finite element (FE) model. This model accounts for material orthotrophy and crystal orientation. Fatigue life of the blade tip is computed using FE stress results and the failure criterion that was developed. Stress analysis results in the blade attachment region are also presented. Results demonstrate that control of crystallographic orientation has the potential to significantly increase a component's resistance to fatigue crack growth without adding additional weight or cost.
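A minimal sketch of the resolved shear stresses that the criterion is built on, assuming a uniaxial stress state along an arbitrary direction and crystal axes aligned with the specimen axes; the criterion itself uses the amplitude of these stresses over a load cycle, which is not reproduced here:

```python
# Resolved shear stress tau = n . sigma . d on the 12 octahedral {111}<110>
# and 6 cube {100}<110> slip systems of an FCC single crystal.
import itertools
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# <110>-type slip directions (one representative per +/- pair kept later).
dir_family = [d for d in itertools.product((-1, 0, 1), repeat=3)
              if sorted(map(abs, d)) == [0, 1, 1]]

oct_normals = [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]
octahedral = [(n, d) for n in oct_normals for d in dir_family
              if np.dot(n, d) == 0 and d > tuple(-x for x in d)]

cube_normals = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
cube = [(n, d) for n in cube_normals for d in dir_family
        if np.dot(n, d) == 0 and d > tuple(-x for x in d)]

load_dir = unit([1.0, 2.0, 3.0])                 # assumed loading direction in crystal axes
sigma = 400.0 * np.outer(load_dir, load_dir)     # uniaxial 400 MPa stress tensor (assumed)

for label, systems in (("octahedral", octahedral), ("cube", cube)):
    taus = [abs(unit(n) @ sigma @ unit(d)) for n, d in systems]
    print(f"{label:10s}: {len(systems)} systems, max |tau| = {max(taus):.1f} MPa")
```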
NASA Astrophysics Data System (ADS)
Sin, Yongkun; Lingley, Zachary; Brodie, Miles; Presser, Nathan; Moss, Steven C.
2017-02-01
High-power single-mode (SM) and multi-mode (MM) InGaAs-AlGaAs strained quantum well (QW) lasers are critical components for both telecommunications and space satellite communications systems. However, little has been reported on failure modes and degradation mechanisms of high-power SM and MM InGaAs-AlGaAs strained QW lasers although it is crucial to understand failure modes and underlying degradation mechanisms in developing these lasers that meet lifetime requirements for space satellite systems, where extremely high reliability of these lasers is required. Our present study addresses the aforementioned issues by performing long-term life-tests followed by failure mode analysis (FMA) and physics of failure investigation. We performed long-term accelerated life-tests on state-of-the-art SM and MM InGaAs-AlGaAs strained QW lasers under ACC (automatic current control) mode. Our life-tests have accumulated over 25,000 test hours for SM lasers and over 35,000 test hours for MM lasers. FMA was performed on failed SM lasers using electron beam induced current (EBIC). This technique allowed us to identify failure types by observing dark line defects. All the SM failures we studied showed catastrophic and sudden degradation and all of these failures were bulk failures. Our group previously reported that bulk failure or COBD (catastrophic optical bulk damage) is the dominant failure mode of MM InGaAs-AlGaAs strained QW lasers. Since degradation mechanisms responsible for COBD are still not well understood, we also employed other techniques including focused ion beam (FIB) processing and high-resolution TEM to further study dark line defects and dislocations in post-aged lasers. Our long-term life-test results and FMA results are reported.
Defense strategies for asymmetric networked systems under composite utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell
We consider an infrastructure of networked systems with discrete components that can be reinforced at certain costs to guard against attacks. The communications network plays a critical, asymmetric role of providing the vital connectivity between the systems. We characterize the correlations within this infrastructure at two levels using (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network, and (b) first order differential conditions on system survival probabilities that characterize component-level correlations. We formulate an infrastructure survival game between an attacker and a provider, who attacks and reinforces individual components, respectively. They use composite utility functions composed of a survival probability term and a cost term, and the previously studied sum-form and product-form utility functions are their special cases. At Nash Equilibrium, we derive expressions for individual system survival probabilities and the expected total number of operational components. We apply and discuss these estimates for a simplified model of distributed cloud computing infrastructure.
NASA Astrophysics Data System (ADS)
Bunget, Gheorghe; Tilmon, Brevin; Yee, Andrew; Stewart, Dylan; Rogers, James; Webster, Matthew; Farinholt, Kevin; Friedersdorf, Fritz; Pepi, Marc; Ghoshal, Anindya
2018-04-01
Widespread damage in aging aircraft is becoming an increasing concern as both civil and military fleet operators are extending the service lifetime of their aircraft. Metallic components undergoing variable cyclic loadings eventually fatigue and form dislocations as precursors to ultimate failure. In order to characterize the progression of fatigue damage precursors (DP), the acoustic nonlinearity parameter is measured as the primary indicator. However, using proven standard ultrasonic technology for nonlinear measurements presents limitations for settings outside of the laboratory environment. This paper presents an approach for ultrasonic inspection through automated immersion scanning of hot section engine components where mature ultrasonic technology is used during periodic inspections. Nonlinear ultrasonic measurements were analyzed using wavelet analysis to extract multiple harmonics from the received signals. Measurements indicated strong correlations of nonlinearity coefficients and levels of fatigue in aluminum and Ni-based superalloys. This novel wavelet cross-correlation (WCC) algorithm is a potential technique to scan for fatigue damage precursors and identify critical locations for remaining life prediction.
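A minimal sketch of the harmonic-ratio measurement behind the acoustic nonlinearity parameter, using a synthetic waveform and an FFT in place of the paper's wavelet cross-correlation algorithm; sampling rate, excitation frequency, and harmonic content are assumptions:

```python
# Extract fundamental (A1) and second-harmonic (A2) amplitudes and form the
# relative nonlinearity parameter beta ~ A2 / A1**2.
import numpy as np

fs = 100e6                     # sampling rate [Hz] (assumed)
f0 = 5e6                       # excitation frequency [Hz] (assumed)
t = np.arange(0, 20e-6, 1 / fs)
signal = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.02 * np.sin(2 * np.pi * 2 * f0 * t)
signal += np.random.default_rng(5).normal(0, 0.005, t.size)

window = np.hanning(t.size)
spectrum = np.abs(np.fft.rfft(signal * window))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

A1 = spectrum[np.argmin(np.abs(freqs - f0))]
A2 = spectrum[np.argmin(np.abs(freqs - 2 * f0))]
print(f"A1 = {A1:.3f}, A2 = {A2:.3f}, relative nonlinearity beta ~ {A2 / A1**2:.5f}")
```

Tracking how this ratio grows with accumulated fatigue cycles is the basis for the precursor correlation reported above.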
Improving the Reliability of Technological Subsystems Equipment for Steam Turbine Unit in Operation
NASA Astrophysics Data System (ADS)
Brodov, Yu. M.; Murmansky, B. E.; Aronson, R. T.
2017-11-01
The authors’ conception is presented of an integrated approach to reliability improving of the steam turbine unit (STU) state along with its implementation examples for the various STU technological subsystems. Basing on the statistical analysis of damage to turbine individual parts and components, on the development and application of modern methods and technologies of repair and on operational monitoring techniques, the critical components and elements of equipment are identified and priorities are proposed for improving the reliability of STU equipment in operation. The research results are presented of the analysis of malfunctions for various STU technological subsystems equipment operating as part of power units and at cross-linked thermal power plants and resulting in turbine unit shutdown (failure). Proposals are formulated and justified for adjustment of maintenance and repair for turbine components and parts, for condenser unit equipment, for regeneration subsystem and oil supply system that permit to increase the operational reliability, to reduce the cost of STU maintenance and repair and to optimize the timing and amount of repairs.
NASA Astrophysics Data System (ADS)
Amirat, Yassine; Choqueuse, Vincent; Benbouzid, Mohamed
2013-12-01
Failure detection has always been a demanding task in the electrical machines community; it has become more challenging in wind energy conversion systems because the sustainability and viability of wind farms are highly dependent on the reduction of operational and maintenance costs. Indeed, the most efficient way of reducing these costs would be to continuously monitor the condition of these systems. This allows for early detection of generator health degeneration, facilitating a proactive response, minimizing downtime, and maximizing productivity. This paper then provides an assessment of a failure detection technique based on the homopolar component of the generator stator current and attempts to highlight the use of the ensemble empirical mode decomposition as a tool for failure detection in wind turbine generators for stationary and non-stationary cases.
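A minimal sketch of the homopolar (zero-sequence) component computation on synthetic three-phase stator currents; the paper applies ensemble empirical mode decomposition to this component, whereas a plain FFT is used below only to keep the example self-contained, and the fault frequency is an assumption:

```python
# Compute the homopolar component i0 = (ia + ib + ic)/3 and locate its
# dominant spectral component.
import numpy as np

fs, f_grid = 5000.0, 50.0
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(6)

fault = 0.05 * np.sin(2 * np.pi * 7.5 * t)            # assumed common-mode fault signature
ia = np.sin(2 * np.pi * f_grid * t) + fault + rng.normal(0, 0.01, t.size)
ib = np.sin(2 * np.pi * f_grid * t - 2 * np.pi / 3) + fault + rng.normal(0, 0.01, t.size)
ic = np.sin(2 * np.pi * f_grid * t + 2 * np.pi / 3) + fault + rng.normal(0, 0.01, t.size)

i0 = (ia + ib + ic) / 3.0                              # homopolar component
spectrum = np.abs(np.fft.rfft(i0)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]              # skip the DC bin
print(f"dominant frequency in homopolar component: {peak:.1f} Hz")
```

The balanced fundamental cancels in the sum, so fault-related common-mode content stands out in the homopolar component, which is what the decomposition then analyses.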
A theoretical basis for the analysis of redundant software subject to coincident errors
NASA Technical Reports Server (NTRS)
Eckhardt, D. E., Jr.; Lee, L. D.
1985-01-01
Fundamental to the development of redundant software techniques, known as fault-tolerant software, is an understanding of the impact of multiple joint occurrences of coincident errors. A theoretical basis for the study of redundant software is developed which provides a probabilistic framework for empirically evaluating the effectiveness of the general (N-Version) strategy when component versions are subject to coincident errors, and permits an analytical study of the effects of these errors. The basic assumptions of the model are: (1) independently designed software components are chosen in a random sample; and (2) in the user environment, the system is required to execute on a stationary input series. The intensity of coincident errors has a central role in the model. This function describes the propensity to introduce design faults in such a way that software components fail together when executing in the user environment. The model is used to give conditions under which an N-Version system is a better strategy for reducing system failure probability than relying on a single version of software. A condition which limits the effectiveness of a fault-tolerant strategy is studied, and the question is posed whether system failure probability varies monotonically with increasing N or whether an optimal choice of N exists.
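A minimal sketch of the model's central comparison, with an assumed intensity function: theta(x) is the probability that a randomly chosen version fails on input x, and averaging over inputs gives the failure probability of a single version and of a 2-out-of-3 majority-vote system. When the intensity is concentrated on a few "hard" inputs (coincident errors), the multi-version gain shrinks:

```python
# Compare single-version and 3-version majority-vote failure probabilities
# under two intensity functions with the same average failure probability.
import numpy as np

rng = np.random.default_rng(7)
n_inputs = 200_000

def compare(theta):
    p_single = theta.mean()
    p_majority = (3 * theta**2 * (1 - theta) + theta**3).mean()   # >= 2 of 3 fail
    return p_single, p_majority

theta_const = np.full(n_inputs, 1e-3)                              # near-independent failures
theta_spiky = np.where(rng.uniform(size=n_inputs) < 0.002, 0.5, 0.0)  # same mean, coincident

for name, theta in (("constant theta", theta_const), ("spiky theta", theta_spiky)):
    ps, pm = compare(theta)
    print(f"{name:15s}: single-version {ps:.2e}, 3-version majority {pm:.2e}")
```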
Developing Reliable Life Support for Mars
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2017-01-01
A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, or directly supply water or oxygen, or if necessary bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human-tended closed-system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty in achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher reliability systems. The limitations of budget, schedule, and technology may suggest accepting lower and less certain expected reliability. A plan to develop reliable life support is needed to achieve the best possible reliability.
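A minimal sketch of the spares sizing argument, assuming a constant failure rate so that the number of failures over the mission is Poisson distributed; the rate, mission length, unit count, and reliability goal are all assumptions:

```python
# Find the smallest spares count s for which P(failures <= s) meets the goal,
# and show how underestimating the failure rate undersizes the spares.
from scipy.stats import poisson

def spares_needed(failure_rate_per_hour, mission_hours, n_units, goal):
    mean_failures = failure_rate_per_hour * mission_hours * n_units
    s = 0
    while poisson.cdf(s, mean_failures) < goal:
        s += 1
    return s

rate, hours, units, goal = 1e-5, 2.5 * 365 * 24, 3, 0.999   # assumed values
s_nominal = spares_needed(rate, hours, units, goal)
s_underest = spares_needed(rate / 2, hours, units, goal)    # rate assumed 2x too low
print(f"spares for stated goal            : {s_nominal}")
print(f"spares if rate underestimated 2x  : {s_underest}")
```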
Catastrophic Fault Recovery with Self-Reconfigurable Chips
NASA Technical Reports Server (NTRS)
Zheng, Will Hua; Marzwell, Neville I.; Chau, Savio N.
2006-01-01
Mission critical systems typically employ multi-string redundancy to cope with possible hardware failure. Such systems are only as fault tolerant as there are many redundant strings. Once a particular critical component exhausts its redundant spares, the multi-string architecture cannot tolerate any further hardware failure. This paper aims at addressing such catastrophic faults through the use of 'Self-Reconfigurable Chips' as a last resort effort to 'repair' a faulty critical component.
A Generic Modeling Process to Support Functional Fault Model Development
NASA Technical Reports Server (NTRS)
Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.
2016-01-01
Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system including the design, operation and off nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.
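A minimal sketch (hypothetical component and sensor names) of the qualitative propagation an FFM performs: starting from a failure mode, follow directed failure-effect edges and report which observation points would register the effect.

```python
# Breadth-first propagation from a failure mode to observation points.
from collections import deque

edges = {
    "valve_fail_closed": ["low_flow"],
    "low_flow": ["pump_cavitation", "flow_sensor_low"],
    "pump_cavitation": ["vibration_sensor_high"],
    "flow_sensor_low": [],
    "vibration_sensor_high": [],
}
observation_points = {"flow_sensor_low", "vibration_sensor_high"}

def observable_effects(failure_mode):
    seen, queue = set(), deque([failure_mode])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen & observation_points)

print(observable_effects("valve_fail_closed"))
```

A generic component library, as described above, amounts to agreeing on reusable sub-graphs of this kind so that models built by different authors propagate failures consistently.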
Load to failure of different zirconia implant abutments with titanium components.
Mascarenhas, Faye; Yilmaz, Burak; McGlumphy, Edwin; Clelland, Nancy; Seidt, Jeremy
2017-06-01
Abutments with a zirconia superstructure and a titanium insert have recently become popular. Although they have been tested under static load, their performance under simulated mastication is not well known. The purpose of this in vitro study was to compare the cyclic load to failure of 3 types of zirconia abutments with different mechanisms of retention of the zirconia to the titanium interface. Fifteen implants (n=5 per system) and abutments (3 groups: 5 friction fit [Frft]; 5 bonded; and 5 titanium ring friction fit [Ringfrft]) were used. Abutments were thermocycled in water between 5°C and 55°C for 15000 cycles and then cyclically loaded for 20000 cycles or until failure at a frequency of 2 Hz by using a sequentially increased loading protocol up to a maximum of 720 N. The load to failure for each group was recorded, and 1-way analysis of variance was performed. The mean load-to-failure values for the Frft group was 526 N, for the Bond group 605 N, and for the Ringfrft group 288 N. A statistically significant difference was found among all abutments tested (P<.05). Abutments with the bonded connection showed the highest load-to-failure value, and the abutment with the titanium ring friction fit connection showed the lowest load-to-failure value. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
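A minimal sketch of the one-way ANOVA step used to compare the three connection designs; the group values below are synthetic draws around the reported means with an assumed spread, not the study data:

```python
# One-way ANOVA across three abutment groups.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(10)
friction_fit = rng.normal(526, 60, 5)
bonded = rng.normal(605, 60, 5)
ring_friction_fit = rng.normal(288, 60, 5)

f_stat, p_value = f_oneway(friction_fit, bonded, ring_friction_fit)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```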
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, B; Sun, B; Yaddanapudi, S
Purpose: To describe the clinical use of a linear accelerator (Linac) DailyQA system with only EPID and OBI, and to assess its reliability over an 18-month period and improve the robustness of this system based on QA failure analysis. Methods: A DailyQA solution utilizing an in-house designed phantom, combined EPID and OBI image acquisitions, and a web-based data analysis and reporting system was commissioned and used in our clinic to measure geometric, dosimetry and imaging components of a Varian Truebeam Linac. During an 18-month period (335 working days), the DailyQA results, including the output constancy, beam flatness and symmetry, uniformity, TPR20/10, and MV and kV imaging quality, were collected and analyzed. For the output constancy measurement, an independent monthly QA system with an ionization chamber (IC) and annual/incidental TG51 measurements with an ADCL IC were performed and cross-compared to the DailyQA system. Thorough analyses were performed on the recorded QA failures to evaluate the machine performance, optimize the data analysis algorithm, adjust the tolerance settings and improve the training procedure to prevent future failures. Results: A clinical workflow including beam delivery, data analysis, QA report generation and physics approval was established and optimized to suit daily clinical operation. The output tests over the 335 working day period agreed with the monthly QA system within 1.3% and with TG51 results within 1%. QA passed on the first attempt on 236 days out of 335 days. Based on the QA failure analysis, the Gamma criterion was revised from (1%, 1 mm) to (2%, 1 mm) considering both QA accuracy and efficiency. The data analysis algorithm was improved to handle multiple entries for a repeated test. Conclusion: We described our 18-month clinical experience with a novel DailyQA system using only EPID and OBI. The long-term data presented demonstrate that the system is suitable and reliable for Linac daily QA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Iver E.; Boesenberg, Adam; Harringa, Joel
2011-09-28
Pb-free solder alloys based on the Sn-Ag-Cu (SAC) ternary eutectic have promise for widespread adoption across assembly conditions and operating environments, but enhanced microstructural control is needed. Micro-alloying with elements such as Zn was demonstrated to promote a preferred solidification path and joint microstructure in earlier studies of simple (Cu/Cu) solder joints at different cooling rates. This beneficial behavior has now been verified in reworked ball grid array (BGA) joints, using dissimilar SAC305 (Sn-3.0Ag-0.5Cu, wt.%) solder paste. After industrial assembly, BGA components joined with Sn-3.5Ag-0.74Cu-0.21Zn solder were tested in thermal cycling (-55 C/+125 C) along with baseline SAC305 BGA joints beyond 3000 cycles with continuous failure monitoring. Weibull analysis of the results demonstrated that BGA components joined with SAC + Zn/SAC305 have less joint integrity than SAC305 joints, but their lifetime is sufficient for severe applications in consumer, defense, and avionics electronic product field environments. Failure analysis of the BGA joints revealed that cracking did not deviate from the typical top area (BGA component side) of each joint, in spite of different Ag3Sn blade content. Thus, SAC + Zn solder has not shown any advantage over SAC305 solder in these thermal cycling trials, but other characteristics of SAC + Zn solder may make it more attractive for use across the full range of harsh conditions of avionics or defense applications.
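A minimal sketch of the Weibull life analysis of the kind reported above, fitted to synthetic cycles-to-failure data (the sample values, shape, and scale are assumptions, not the study's measurements):

```python
# Two-parameter Weibull fit of thermal-cycling failures: shape (beta),
# characteristic life (eta, the 63.2% failure point), and B10 life.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(8)
cycles_to_failure = weibull_min.rvs(4.0, scale=2500.0, size=32, random_state=rng)

shape, _, char_life = weibull_min.fit(cycles_to_failure, floc=0)
b10 = char_life * (-np.log(0.9)) ** (1.0 / shape)      # cycles at 10% failures
print(f"Weibull shape (beta)      : {shape:.2f}")
print(f"characteristic life (eta) : {char_life:.0f} cycles")
print(f"B10 life                  : {b10:.0f} cycles")
```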
Phased-mission system analysis using Boolean algebraic methods
NASA Technical Reports Server (NTRS)
Somani, Arun K.; Trivedi, Kishor S.
1993-01-01
Most reliability analysis techniques and tools assume that a system is used for a mission consisting of a single phase. However, multiple phases are natural in many missions. The failure rates of components, system configuration, and success criteria may vary from phase to phase. In addition, the duration of a phase may be deterministic or random. Recently, several researchers have addressed the problem of reliability analysis of such systems using a variety of methods. A new technique for phased-mission system reliability analysis based on Boolean algebraic methods is described. Our technique is computationally efficient and is applicable to a large class of systems for which the failure criterion in each phase can be expressed as a fault tree (or an equivalent representation). Our technique avoids the state-space explosion that commonly plagues Markov-chain-based analysis. A phase algebra was developed to account for the effects of variable configurations and success criteria from phase to phase. Our technique yields exact (as opposed to approximate) results. The use of our technique is demonstrated by means of an example, and numerical results are presented to show the effects of mission phases on system reliability.
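As a minimal illustration of phased-mission evaluation (not the exact Boolean phase algebra of the paper), the following Monte Carlo sketch treats non-repairable components with assumed exponential lifetimes and a coherent success criterion per phase; the component names, failure rates, and phase durations are made up.

```python
import random

# Minimal phased-mission sketch: non-repairable components with exponential
# lifetimes; each phase has its own duration and success criterion.
FAILURE_RATES = {"A": 1e-4, "B": 2e-4, "C": 5e-5}   # per hour (assumed)
PHASES = [
    # (duration_hours, success criterion as a function of up/down states)
    (10.0, lambda up: up["A"] and (up["B"] or up["C"])),  # phase 1: A and (B or C)
    (50.0, lambda up: up["A"] and up["B"] and up["C"]),   # phase 2: all required
]

def mission_success(rng):
    lifetimes = {c: rng.expovariate(lam) for c, lam in FAILURE_RATES.items()}
    elapsed = 0.0
    for duration, criterion in PHASES:
        elapsed += duration
        up = {c: lifetimes[c] > elapsed for c in FAILURE_RATES}
        # For coherent criteria and non-repairable components, checking at
        # the end of the phase covers the whole phase.
        if not criterion(up):
            return False
    return True

rng = random.Random(42)
trials = 100_000
hits = sum(mission_success(rng) for _ in range(trials))
print(f"estimated mission reliability: {hits / trials:.4f}")
```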
Residual strength of thin panels with cracks
NASA Technical Reports Server (NTRS)
Madenci, Erdogan
1994-01-01
Previous design philosophies involving safe-life, fail-safe, and damage tolerance concepts have become inadequate for assuring the safety of aging aircraft structures. For example, the failure mechanism in the Aloha Airlines accident involved the coalescence of undetected small cracks at the rivet holes, causing a section of the fuselage to peel open during flight. Therefore, the fuselage structure should be designed to have sufficient residual strength under worst-case crack configurations and in-flight load conditions. Residual strength is interpreted as the maximum load-carrying capacity prior to unstable crack growth. Internal pressure and bending moment constitute the two major components of the external loads on the fuselage section during flight. Although the stiffeners in the form of stringers, frames, and tear straps sustain part of the external loads, a significant portion of the load is taken up by the skin. In the presence of a large crack in the skin, the crack lips bulge out with considerable yielding; thus, the geometric and material nonlinearities must be included in the analysis for predicting residual strength. Also, these nonlinearities do not permit the decoupling of in-plane and out-of-plane bending deformations. A failure criterion combining the concepts of absorbed specific energy and strain energy density addresses the aforementioned concerns. The critical absorbed specific energy (local toughness) for the material is determined from the global specimen response and deformation geometry based on uniaxial tensile test data and detailed finite element modeling of the specimen response. The use of the local toughness and stress-strain response at the continuum level eliminates the size effect. With this critical parameter and the stress-strain response, finite element analysis of the component using STAGS, along with the application of this failure criterion, provides the stable crack growth calculations for residual strength predictions.
Mechanical properties of canine osteosarcoma-affected antebrachia.
Steffey, Michele A; Garcia, Tanya C; Daniel, Leticia; Zwingenberger, Allison L; Stover, Susan M
2017-05-01
To determine the influence of neoplasia on the biomechanical properties of canine antebrachia. Ex vivo biomechanical study. Osteosarcoma (OSA)-affected canine antebrachia (n = 12) and unaffected canine antebrachia (n = 9). Antebrachia were compressed in axial loading until failure. A load-deformation curve was used to acquire the structural mechanical properties of neoplastic and unaffected specimens. Structural properties and properties normalized by body weight (BW) and radius length were compared using analysis of variance (ANOVA). Modes of failure were compared descriptively. Neoplastic antebrachia fractured at, or adjacent to, the OSA in the distal radial diaphysis. Unaffected antebrachia failed via mid-diaphyseal radial fractures with a transverse cranial component and an oblique caudal component. Structural mechanical properties were more variable in neoplastic antebrachia than in unaffected antebrachia, which was partially attributable to differences in bone geometry related to dog size. When normalized by dog BW and radial length, strength, stiffness, and energy to yield and failure were lower in neoplastic antebrachia than in unaffected antebrachia. OSA of the distal radial metaphysis in dogs presented for limb amputation markedly compromises the structural integrity of affected antebrachia. However, the biomechanical properties of affected bones were sufficient for weight-bearing, as none of the neoplastic antebrachia fractured before amputation. The behavior of tumor-invaded bone under cyclic loading warrants further investigation to evaluate the viability of in situ therapies for bone tumors in dogs. © 2017 The American College of Veterinary Surgeons.
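A minimal sketch of the normalization and ANOVA step described above might look like the following; all loads, body weights, and lengths are fabricated placeholders, and the normalization shown (load divided by body weight times radius length) is an assumption rather than the study's exact formulation.

```python
import numpy as np
from scipy import stats

# Hypothetical failure loads (N), body weights (kg), and radius lengths (mm);
# values are illustrative only, not from the study.
neo_load = np.array([3200., 2500., 4100., 2900., 3600.])
neo_bw   = np.array([32., 28., 41., 30., 36.])
neo_len  = np.array([170., 160., 185., 165., 178.])
ctl_load = np.array([5200., 4800., 6100., 5600.])
ctl_bw   = np.array([33., 30., 40., 35.])
ctl_len  = np.array([172., 168., 188., 180.])

def normalize(load, bw, length):
    """Normalize structural strength by body weight and radius length."""
    return load / (bw * length)

neo_norm = normalize(neo_load, neo_bw, neo_len)
ctl_norm = normalize(ctl_load, ctl_bw, ctl_len)

# One-way ANOVA across the two groups (equivalent to a t-test with two groups)
f_stat, p_val = stats.f_oneway(neo_norm, ctl_norm)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")
```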
Failure Impact Analysis of Key Management in AMI Using Cybernomic Situational Assessment (CSA)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Sheldon, Frederick T; Hauser, Katie R
2013-01-01
In earlier work, we presented a computational framework for quantifying the security of a system in terms of the average loss a stakeholder stands to sustain as a result of threats to the system. We named this system the Cyberspace Security Econometrics System (CSES). In this paper, we refine the framework and apply it to cryptographic key management within the Advanced Metering Infrastructure (AMI) as an example. The stakeholders, requirements, components, and threats are determined. We then populate the matrices with justified values by addressing the AMI at a higher level, rather than trying to consider every piece of hardware and software involved. We accomplish this task by leveraging the recently established NISTIR 7628 guideline for smart grid security. This allowed us to choose the stakeholders, requirements, components, and threats realistically. We reviewed the literature and consulted an industry technical working group to select three representative threats from a collection of 29 threats. From this subset, we populate the stakes, dependency, and impact matrices, and the threat vector, with realistic numbers. Each stakeholder's Mean Failure Cost is then computed.
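The Mean Failure Cost chain used by CSES can be sketched as a product of the stakes, dependency, and impact matrices with the threat vector. The matrix sizes and values below are illustrative assumptions, not the AMI figures from the paper.

```python
import numpy as np

# Illustrative (made-up) matrices for the Mean Failure Cost chain
# MFC = ST . DP . IM . PT, following the CSES formulation:
#   ST: stakes of each stakeholder in each requirement ($ per violation)
#   DP: probability a requirement is violated given a component fails
#   IM: probability a component fails given a threat materializes
#   PT: probability each threat materializes during unit operation time
ST = np.array([[500., 200.],     # 2 stakeholders x 2 requirements
               [100., 800.]])
DP = np.array([[0.6, 0.2, 0.1],  # 2 requirements x 3 components
               [0.1, 0.7, 0.3]])
IM = np.array([[0.05, 0.01],     # 3 components x 2 threats
               [0.02, 0.10],
               [0.01, 0.03]])
PT = np.array([0.2, 0.05])       # 2 threats

mfc = ST @ DP @ IM @ PT          # expected loss per stakeholder
for i, cost in enumerate(mfc, start=1):
    print(f"stakeholder {i}: mean failure cost = ${cost:.2f} per unit time")
```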
NASA Technical Reports Server (NTRS)
Steurer, W. H.
1980-01-01
A survey of all presently defined or proposed large space systems indicated an ever-increasing demand for flexible components and materials, primarily as a result of the widening disparity between the stowage space of launch vehicles and the size of advanced systems. Typical flexible components and material requirements were identified on the basis of recurrence and/or functional commonality. This was followed by the evaluation of candidate materials and the search for material capabilities which promise to satisfy the postulated requirements. Particular attention was placed on thin films and on the requirements of deployable antennas. The assessment of the performance of specific materials was based primarily on the failure mode, derived from a detailed failure analysis. In view of extensive ongoing work on thermal and environmental degradation effects, prime emphasis was placed on the assessment of performance loss due to meteoroid damage. Quantitative data were generated for tension members and antenna reflector materials. A methodology was developed for representing overall materials performance as related to system service life. A number of promising new concepts for flexible materials were identified.
NASA Astrophysics Data System (ADS)
Sang, Z. X.; Huang, J. Q.; Yan, J.; Du, Z.; Xu, Q. S.; Lei, H.; Zhou, S. X.; Wang, S. C.
2017-11-01
Protection is an essential function for power devices, especially those in the power grid, as their failure may cause great losses to society. A study of voltage and current abnormalities in the power electronic devices of a Distribution Electronic Power Transformer (D-EPT) during failures of switching components is presented, along with the operational principles of the 10 kV rectifier, 10 kV/400 V DC-DC converter, and 400 V inverter in the D-EPT. Based on a discussion of the effects of voltage and current distortion, the fault characteristics and a fault diagnosis method for the D-EPT are introduced.
NASA Technical Reports Server (NTRS)
Dyall, Kenneth G.; Faegri, Knut, Jr.
1990-01-01
The paper investigates bounds failure in calculations using Gaussian basis sets for the solution of the one-electron Dirac equation for the 2p1/2 state of Hg(79+). It is shown that bounds failure indicates inadequacies in the basis set, both in terms of the exponent range and the number of functions. It is also shown that overrepresentation of the small component space may lead to unphysical results. It is concluded that it is important to use matched large and small component basis sets with an adequate size and exponent range.
NASA Technical Reports Server (NTRS)
Stalnaker, Dale K.
1993-01-01
ACARA (Availability, Cost, and Resource Allocation) is a computer program which analyzes system availability, lifecycle cost (LCC), and resupply scheduling using Monte Carlo analysis to simulate component failure and replacement. This manual was written to: (1) explain how to prepare and enter input data for use in ACARA; (2) explain the user interface, menus, input screens, and input tables; (3) explain the algorithms used in the program; and (4) explain each table and chart in the output.
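A minimal sketch of the kind of Monte Carlo failure-and-replacement simulation ACARA performs is shown below; it is not ACARA's algorithm, and the MTBF, repair time, and spare count are assumed values for a single component.

```python
import random

def simulate_availability(mtbf_hours, repair_hours, mission_hours,
                          spares, trials=20_000, seed=1):
    """Monte Carlo sketch of single-component availability: exponential
    failures, fixed replacement time, limited spares.  Returns the mean
    fraction of mission time the component is operational."""
    rng = random.Random(seed)
    up_fractions = []
    for _ in range(trials):
        t, up_time, remaining_spares = 0.0, 0.0, spares
        while t < mission_hours:
            ttf = rng.expovariate(1.0 / mtbf_hours)
            run = min(ttf, mission_hours - t)
            up_time += run
            t += run
            if t >= mission_hours:
                break
            if remaining_spares == 0:
                break                      # no spare left: down for the rest
            remaining_spares -= 1
            t += repair_hours              # replacement downtime
        up_fractions.append(up_time / mission_hours)
    return sum(up_fractions) / trials

print(f"availability ~ {simulate_availability(2000., 48., 8760., spares=3):.3f}")
```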
NASA Technical Reports Server (NTRS)
Ling, Lisa
2014-01-01
For the purpose of performing safety analysis and risk assessment for a potential off-nominal atmospheric reentry resulting in vehicle breakup, a synthesis of trajectory propagation coupled with thermal analysis and the evaluation of node failure is required to predict the sequence of events, the timeline, and the progressive demise of spacecraft components. To provide this capability, the Simulation for Prediction of Entry Article Demise (SPEAD) analysis tool was developed. The software and methodology have been validated against actual flights, telemetry data, and validated software, and safety/risk analyses were performed for various programs using SPEAD. This report discusses the capabilities, modeling, validation, and application of the SPEAD analysis tool.
Accelerated Aging System for Prognostics of Power Semiconductor Devices
NASA Technical Reports Server (NTRS)
Celaya, Jose R.; Vashchenko, Vladislav; Wysocki, Philip; Saha, Sankalita
2010-01-01
Prognostics is an engineering discipline that focuses on estimation of the health state of a component and the prediction of its remaining useful life (RUL) before failure. Health state estimation is based on actual conditions and it is fundamental for the prediction of RUL under anticipated future usage. Failure of electronic devices is of great concern as future aircraft will see an increase of electronics to drive and control safety-critical equipment throughout the aircraft. Therefore, development of prognostics solutions for electronics is of key importance. This paper presents an accelerated aging system for gate-controlled power transistors. This system allows for the understanding of the effects of failure mechanisms, and the identification of leading indicators of failure which are essential in the development of physics-based degradation models and RUL prediction. In particular, this system isolates electrical overstress from thermal overstress. Also, this system allows for a precise control of internal temperatures, enabling the exploration of intrinsic failure mechanisms not related to the device packaging. By controlling the temperature within safe operation levels of the device, accelerated aging is induced by electrical overstress only, avoiding the generation of thermal cycles. The temperature is controlled by active thermal-electric units. Several electrical and thermal signals are measured in-situ and recorded for further analysis in the identification of leading indicators of failures. This system, therefore, provides a unique capability in the exploration of different failure mechanisms and the identification of precursors of failure that can be used to provide a health management solution for electronic devices.
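A common prognostics pattern consistent with the above is to track a failure precursor and extrapolate its trend to an end-of-life threshold. The sketch below fits an exponential degradation model to synthetic precursor data; the signal, threshold, and time base are assumptions, not measurements from the aging system described.

```python
import numpy as np

# Illustrative RUL sketch: fit an exponential trend to a degradation
# precursor (e.g., normalized ON-resistance drift of a power transistor)
# and extrapolate to an assumed failure threshold.  Data are synthetic.
hours = np.array([0., 20., 40., 60., 80., 100.])
drift = np.array([0.010, 0.014, 0.021, 0.030, 0.044, 0.063])  # precursor signal
THRESHOLD = 0.20                                              # assumed end of life

# Linear least-squares fit of log(drift) = log(a) + b * t
b, log_a = np.polyfit(hours, np.log(drift), 1)
a = np.exp(log_a)

# Time at which the extrapolated trend crosses the threshold
t_fail = (np.log(THRESHOLD) - np.log(a)) / b
rul = t_fail - hours[-1]
print(f"model: drift(t) = {a:.4f} * exp({b:.4f} t)")
print(f"predicted end of life at t = {t_fail:.0f} h, RUL ~ {rul:.0f} h")
```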
Schierjott, Ronja A; Giurea, Alexander; Neuhaus, Hans-Joachim; Schwiesau, Jens; Pfaff, Andreas M; Utzschneider, Sandra; Tozzi, Gianluca; Grupp, Thomas M
2016-01-01
Carbon fiber reinforced poly-ether-ether-ketone (CFR-PEEK) represents a promising alternative material for bushings in total knee replacements, after early clinical failures of polyethylene in this application. The objective of the present study was to evaluate the damage modes and the extent of damage observed on CFR-PEEK hinge mechanism articulation components after in vivo service in a rotating hinge knee (RHK) system and to compare the results with corresponding components subjected to in vitro wear tests. The key question was whether there were any similarities or differences between in vivo and in vitro damage characteristics. Twelve retrieved RHK systems, after an average of 34.9 months in vivo, underwent wear damage analysis focused on the four integrated CFR-PEEK components, with distinction between different damage modes and classification using a scoring system. The analysis included visual examination, scanning electron microscopy, and energy dispersive X-ray spectroscopy, as well as surface roughness and profile measurements. The main wear damage modes were comparable between retrieved and in vitro specimens (n = 3), although the size of the affected area on the retrieved components showed higher variation. Overall, the retrieved specimens appeared to be slightly more heavily damaged, which was probably attributable to the more complex loading and kinematic conditions in vivo.
An Adaptive Failure Detector Based on Quality of Service in Peer-to-Peer Networks
Dong, Jian; Ren, Xiao; Zuo, Decheng; Liu, Hongwei
2014-01-01
The failure detector is one of the fundamental components that maintain high availability of Peer-to-Peer (P2P) networks. Under different network conditions, an adaptive failure detector based on quality of service (QoS) can achieve the detection time and accuracy required by upper-layer applications with lower detection overhead. In P2P systems, network complexity and high churn lead to high message loss rates. To reduce the impact on detection accuracy, a baseline detection strategy based on a retransmission mechanism has been widely employed in many P2P applications; however, Chen's classic adaptive model cannot describe this kind of detection strategy. In order to provide an efficient failure detection service in P2P systems, this paper establishes a novel QoS evaluation model for the baseline detection strategy. The relationship between the detection period and the QoS is discussed and, on this basis, an adaptive failure detector (B-AFD) is proposed that can meet quantitative QoS metrics under changing network environments. Experimental analysis shows that B-AFD achieves better detection accuracy and time with lower detection overhead compared to the traditional baseline strategy and the adaptive detectors based on Chen's model. Moreover, B-AFD has better adaptability to P2P networks. PMID:25198005
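For readers unfamiliar with adaptive failure detection, the sketch below shows a minimal heartbeat-based detector in the spirit of Chen's arrival-time estimator (windowed mean prediction plus a safety margin). It is not B-AFD and does not model the retransmission-based baseline strategy; the class name, parameters, and heartbeat trace are invented for illustration.

```python
from collections import deque

class AdaptiveFailureDetector:
    """Minimal heartbeat-based adaptive failure detector: the next expected
    arrival is predicted from a sliding window of past arrivals, plus a
    fixed safety margin alpha (in the spirit of Chen's estimator)."""

    def __init__(self, interval, alpha, window=100):
        self.interval = interval          # nominal heartbeat period (s)
        self.alpha = alpha                # safety margin (s)
        self.arrivals = deque(maxlen=window)

    def heartbeat(self, arrival_time):
        self.arrivals.append(arrival_time)

    def next_deadline(self):
        """Time after which the monitored process is suspected."""
        if not self.arrivals:
            return float("inf")
        n = len(self.arrivals)
        # Estimate the window baseline from the mean of (arrival - k*interval),
        # then predict the arrival one period past the end of the window.
        est = sum(a - k * self.interval for k, a in enumerate(self.arrivals)) / n
        return est + n * self.interval + self.alpha

    def suspects(self, now):
        return now > self.next_deadline()

# Usage sketch: heartbeats every ~1 s with jitter, safety margin of 0.5 s
fd = AdaptiveFailureDetector(interval=1.0, alpha=0.5)
for t in (0.0, 1.02, 2.05, 2.98, 4.01):
    fd.heartbeat(t)
print(fd.suspects(now=5.2), fd.suspects(now=6.5))  # expected: False True
```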
Rodrigues, Samantha A; Thambyah, Ashvin; Broom, Neil D
2015-03-01
The annulus-endplate anchorage system performs a critical role in the disc, creating a strong structural link between the compliant annulus and the rigid vertebrae. Endplate failure is thought to be associated with disc herniation, a recent study indicating that this failure mode occurs more frequently than annular rupture. The aim was to investigate the structural principles governing annulus-endplate anchorage and the basis of its strength and mechanisms of failure. Loading experiments were performed on ovine lumbar motion segments designed to induce annulus-endplate failure, followed by macro- to micro- to fibril-level structural analyses. The study was funded by a doctoral scholarship from our institution. Samples were loaded to failure in three modes: torsion using intact motion segments, in-plane tension of the anterior annulus-endplate along one of the oblique fiber angles, and axial tension of the anterior annulus-endplate. The anterior region was chosen for its ease of access. Decalcification was used to investigate the mechanical influence of the mineralized component. Structural analysis was conducted on both the intact and failed samples using differential interference contrast optical microscopy and scanning electron microscopy. Two main modes of anchorage failure were observed: failure at the tidemark or at the cement line. Samples subjected to axial tension contained more tidemark failures compared with those subjected to torsion and in-plane tension. Samples decalcified before testing frequently contained damage at the cement line, this damage being more extensive than in fresh samples. Analysis of the intact samples at their anchorage sites revealed that annular subbundle fibrils penetrate beyond the cement line to a limited depth and appear to merge with those in the vertebral and cartilaginous endplates. Annulus-endplate anchorage is more vulnerable to failure in axial tension than in torsion or in-plane tension, probably because of acute fiber bending at the soft-hard interface of the tidemark. This finding is consistent with evidence showing that flexion, which induces a similar pattern of axial tension, increases the risk of herniation involving endplate failure. The study also highlights the important strengthening role of calcification at this junction and provides new evidence of a fibril-based form of structural integration across the cement line. Copyright © 2015 Elsevier Inc. All rights reserved.
A new method of converter transformer protection without commutation failure
NASA Astrophysics Data System (ADS)
Zhang, Jiayu; Kong, Bo; Liu, Mingchang; Zhang, Jun; Guo, Jianhong; Jing, Xu
2018-01-01
With the development of AC/DC hybrid transmission technology, the converter transformer serves as the node where AC and DC are converted in HVDC transmission, and its reliable, safe, and stable operation plays an important role in DC transmission. Commutation failure, a common problem in DC transmission, poses a serious threat to the safe and stable operation of the power grid. Based on the commutation relation between the AC bus voltage of the converter station and the output DC voltage of the converter, a generalized transformation ratio is defined, and a new converter transformer protection method based on this ratio is proposed. The method uses the generalized ratio to realize on-line monitoring of faulty or abnormal commutation components, and uses the current characteristics of the valve-side bushing CT to identify converter transformer faults accurately; it is not influenced by the presence of commutation failure. Fault analysis and EMTDC/PSCAD simulation show that the protection operates correctly under various converter fault conditions.
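The abstract does not give the explicit form of the generalized transformation ratio, so the following is only a hedged sketch: the standard ideal commutation relation for a six-pulse line-commutated converter, and one plausible ratio of DC output to AC bus voltage that could be monitored on-line. The symbols and the definition of k_g are assumptions, not the authors' exact formulation.

```latex
% Standard ideal relation for a six-pulse line-commutated converter
% (an assumption; the paper's exact definition is not given in the abstract):
%   V_d : average DC output voltage,  U_L : AC bus line-to-line RMS voltage,
%   \alpha : firing angle,  \mu : commutation overlap angle.
\[
  V_d = \frac{3\sqrt{2}}{\pi}\, U_L \,\frac{\cos\alpha + \cos(\alpha+\mu)}{2}
\]
% One plausible "generalized transformation ratio" to monitor on-line:
\[
  k_g = \frac{V_d}{U_L}
      = \frac{3\sqrt{2}}{\pi}\,\frac{\cos\alpha + \cos(\alpha+\mu)}{2}
\]
% A departure of k_g from the value expected for the commanded firing angle
% would flag abnormal commutation, independently of whether commutation
% failure has already occurred.
```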
NASA Astrophysics Data System (ADS)
Ferrer, Laetitia; Curt, Corinne; Tacnet, Jean-Marc
2018-04-01
Major hazard prevention is a key challenge, particularly because it relies on information communicated to the public. In France, preventive information is notably provided by way of local regulatory documents. Unfortunately, the law specifies few requirements concerning their content; one can therefore question the impact on the general population of the way the document is actually created. The purpose of our work is thus to propose an analytical methodology to evaluate the effectiveness of preventive risk communication documents. The methodology is based on dependability approaches and is applied in this paper to the Document d'Information Communal sur les Risques Majeurs (DICRIM; in English, Municipal Information Document on Major Risks). DICRIM has to be produced by mayors and addressed to the public to provide information on major hazards affecting their municipalities. An analysis of the legal compliance of the document is carried out through the identification of regulatory detection elements. These are applied to a database of 30 DICRIMs. This analysis leads to a discussion on points such as the usefulness of the missing elements. External and internal function analysis permits the identification of the form and content requirements and the service and technical functions of the document and its components (here, its sections). These results are used to carry out an FMEA (failure modes and effects analysis), which allows us to define failures and identify detection elements. This permits the evaluation of the effectiveness of the form and content of each component of the document. The outputs are validated by experts from the different fields investigated. These results will be used, in future work, to build a decision support model for the municipality (or specialized consulting firms) in charge of drawing up such documents.
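As a generic illustration of the FMEA step described above, the sketch below ranks document-section failure modes by a conventional risk priority number (severity x occurrence x detection). The 1-10 scales, section names, and scores are assumptions for the example and are not taken from the DICRIM analysis.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    """Generic FMEA entry: severity, occurrence, and detection on 1-10 scales
    (values here are illustrative, not from the DICRIM study)."""
    component: str
    failure: str
    severity: int
    occurrence: int
    detection: int   # 10 = hardest to detect

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("alert section", "missing evacuation instructions", 9, 4, 6),
    FailureMode("hazard map",    "outdated flood zones",            7, 5, 5),
    FailureMode("contact list",  "wrong emergency phone number",    8, 2, 3),
]

# Rank failure modes by risk priority number to focus corrective actions
for m in sorted(modes, key=lambda fm: fm.rpn, reverse=True):
    print(f"{m.component:15s} {m.failure:35s} RPN={m.rpn}")
```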
On-Board Particulate Filter Failure Prevention and Failure Diagnostics Using Radio Frequency Sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sappok, Alex; Ragaller, Paul; Herman, Andrew
The increasing use of diesel and gasoline particulate filters requires advanced on-board diagnostics (OBD) to prevent and detect filter failures and malfunctions. Early detection of upstream (engine-out) malfunctions is paramount to preventing irreversible damage to downstream aftertreatment system components. Such early detection can mitigate particulate filter failure, which results in the escape of emissions exceeding permissible limits, and can extend component life. However, despite best efforts at early detection and filter failure prevention, the OBD system must also be able to detect filter failures when they occur. In this study, radio frequency (RF) sensors were used to directly monitor the particulate filter state of health for both gasoline particulate filter (GPF) and diesel particulate filter (DPF) applications. The testing included controlled engine dynamometer evaluations, which characterized soot slip from various filter failure modes, as well as on-road fleet vehicle tests. The results show high sensitivity for detecting conditions resulting in soot leakage from the particulate filter, as well as potential for direct detection of structural failures including internal cracks and melted regions within the filter media itself. Furthermore, the measurements demonstrate, for the first time, the capability of a direct and continuous particulate filter monitor to both prevent and detect potential failure conditions in the field.
Ares I-X Malfunction Turn Range Safety Analysis
NASA Technical Reports Server (NTRS)
Beaty, J. R.
2011-01-01
Ares I-X was the designation given to the flight test version of the Ares I rocket which was developed by NASA (also known as the Crew Launch Vehicle (CLV) component of the Constellation Program). The Ares I-X flight test vehicle achieved a successful flight test on October 28, 2009, from Pad LC-39B at Kennedy Space Center, Florida (KSC). As part of the flight plan approval for the test vehicle, a range safety malfunction turn analysis was performed to support the risk assessment and vehicle destruct criteria development processes. Several vehicle failure scenarios were identified which could have caused the vehicle trajectory to deviate from its normal flight path. The effects of these failures were evaluated with an Ares I-X 6 degrees-of-freedom (6-DOF) digital simulation, using the Program to Optimize Simulated Trajectories Version II (POST2) simulation tool. The Ares I-X simulation analysis provided output files containing vehicle trajectory state information. These were used by other risk assessment and vehicle debris trajectory simulation tools to determine the risk to personnel and facilities in the vicinity of the launch area at KSC, and to develop the vehicle destruct criteria used by the flight test range safety officer in the event of a flight test anomaly of the vehicle. The simulation analysis approach used for this study is described, including descriptions of the failure modes which were considered and the underlying assumptions and ground rules of the study.