A reliability analysis tool for SpaceWire network
NASA Astrophysics Data System (ADS)
Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou
2017-04-01
SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. It is becoming increasingly popular in space applications thanks to its technical advantages, including reliability, low power and fault protection. Because high reliability is vital for spacecraft, it is important to analyze and improve the reliability of the SpaceWire network. This paper addresses reliability modeling and analysis of SpaceWire networks. Following the functional division of the distributed network, a task-based reliability analysis method is proposed: the reliability analysis of each task yields a system reliability matrix, and the reliability of the network system is deduced by integrating all the reliability indexes in that matrix. Using this method, we developed a reliability analysis tool for SpaceWire networks based on VC, which also implements the computation schemes for the reliability matrix and for multi-path task reliability. With this tool we analyze several cases on typical architectures. The results indicate that a redundant architecture has better reliability than the basic one; in practice, a dual-redundancy scheme has been adopted for some key units to improve the reliability of the system or task. The tool can therefore guide both task division and topology selection during the design phase of a SpaceWire network system.
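A minimal sketch of the series/parallel arithmetic behind the dual-redundancy comparison described in this abstract; the link reliabilities are invented for illustration:

```python
from math import prod

def series(rels):
    """Reliability of a single path: every link must work."""
    return prod(rels)

def parallel(rels):
    """Reliability of redundant alternatives: fails only if all fail."""
    return 1.0 - prod(1.0 - r for r in rels)

# One task routed over three links, each with reliability 0.99.
basic = series([0.99] * 3)

# Dual-redundant architecture: two independent copies of the same path.
redundant = parallel([basic, basic])

print(round(basic, 6))      # 0.970299
print(round(redundant, 6))  # 0.999118
```

The same two functions, applied task by task, populate the reliability matrix the abstract mentions.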
NASA Astrophysics Data System (ADS)
Zhang, Ding; Zhang, Yingjie
2017-09-01
A framework for reliability and maintenance analysis of job shop manufacturing systems is proposed in this paper. An efficient preventive maintenance (PM) policy based on failure effects analysis (FEA) is proposed. Reliability evaluation and component importance measures based on FEA are then performed under the PM policy. A job shop manufacturing system is used to validate the reliability evaluation and dynamic maintenance policy. The results are compared with existing methods and the effectiveness is validated. Issues that are often only vaguely understood in manufacturing system reliability analysis, such as network modelling, vulnerability identification, evaluation criteria for repairable systems, and the PM policy itself, are elaborated. This framework can support reliability optimisation and rational allocation of maintenance resources for job shop manufacturing systems.
Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip
NASA Astrophysics Data System (ADS)
Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang
2016-09-01
Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. To evaluate the reliability and soft-error behavior of a system-on-chip, the fault tree analysis method was used in this work. The system fault tree was constructed for the Xilinx Zynq-7010 All Programmable SoC, and the soft error rates of different components in the Zynq-7010 SoC were measured with an americium-241 alpha radiation source. Parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability and mean time to failure (MTTF), were calculated with Isograph Reliability Workbench 11.0. Based on the qualitative and quantitative fault tree analysis, the critical blocks were identified and the system reliability was evaluated.
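A toy version of the bookkeeping such a fault-tree evaluation performs: an OR-gate top event over independent blocks, with illustrative failure and repair rates (not the measured Zynq-7010 values):

```python
# Assumed per-block failure rates, failures per hour (placeholder values).
lam = {"processing_system": 2e-6, "programmable_logic": 5e-6, "ocm": 1e-6}

# OR gate: the SoC fails if any block fails, so rates add for exponential blocks.
lam_top = sum(lam.values())
mttf = 1.0 / lam_top                       # mean time to failure, hours

mu = 0.1                                   # assumed repair (scrub/reboot) rate per hour
unavailability = lam_top / (lam_top + mu)  # steady-state unavailability

print(round(mttf))       # 125000
print(unavailability)
```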
Applying reliability analysis to design electric power systems for More-electric aircraft
NASA Astrophysics Data System (ADS)
Zhang, Baozhu
The More-Electric Aircraft (MEA) is a type of aircraft that replaces conventional hydraulic and pneumatic systems with electrically powered components. These changes have significantly challenged the design of the aircraft electric power system. This thesis investigates how reliability analysis can be applied to automatically generate system topologies for the MEA electric power system. We first use the traditional method of reliability block diagrams to analyze the reliability of different system topologies. We then propose a new methodology in which system topologies satisfying a specified reliability level are automatically generated; the path-set method is used for the analysis. Finally, we interface these sets of system topologies with control synthesis tools to automatically create correct-by-construction control logic for the electric power system.
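The path-set idea can be sketched by exact state enumeration: the system works if at least one path has all of its components up. The bridge-like topology and component reliabilities below are invented, not the thesis's power-system data:

```python
from itertools import product

def system_reliability(paths, rel):
    """Exact reliability by enumerating component up/down states."""
    comps = sorted({c for p in paths for c in p})
    total = 0.0
    for states in product([0, 1], repeat=len(comps)):
        up = {c for c, s in zip(comps, states) if s}
        prob = 1.0
        for c, s in zip(comps, states):
            prob *= rel[c] if s else 1.0 - rel[c]
        if any(set(path) <= up for path in paths):
            total += prob
    return total

# Hypothetical topology: generator G feeds a load contactor C via bus B1 or B2.
paths = [["G", "B1", "C"], ["G", "B2", "C"]]
rel = {"G": 0.999, "B1": 0.99, "B2": 0.99, "C": 0.995}
print(round(system_reliability(paths, rel), 6))  # 0.993906
```

Enumeration is exponential in the number of components; it is only a reference implementation against which faster path-set bounds can be checked.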
Theory of reliable systems. [systems analysis and design
NASA Technical Reports Server (NTRS)
Meyer, J. F.
1973-01-01
The analysis and design of reliable systems are discussed. The attributes of system reliability studied are fault tolerance, diagnosability, and reconfigurability. Objectives of the study include: to determine properties of system structure that are conducive to a particular attribute; to determine methods for obtaining reliable realizations of a given system; and to determine how properties of system behavior relate to the complexity of fault tolerant realizations. A list of 34 references is included.
Multidisciplinary System Reliability Analysis
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)
2001-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
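A toy fast-probability-integration-style calculation, not the actual NESSUS routines: for a linear limit state g = R - S with independent normal capacity R and demand S, the reliability index and failure probability are available in closed form. All numbers are invented:

```python
import math

mu_R, sd_R = 10.0, 1.0   # assumed capacity, e.g. allowable thermal-stress margin
mu_S, sd_S = 7.0, 1.0    # assumed demand

beta = (mu_R - mu_S) / math.hypot(sd_R, sd_S)   # reliability index
pf = 0.5 * math.erfc(beta / math.sqrt(2))       # P(g < 0) = Phi(-beta)

print(round(beta, 3))  # 2.121
print(round(pf, 5))    # 0.01695
```

For nonlinear, multidisciplinary limit states the index must be found by iteration, which is what fast probability integration methods automate.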
Multi-Disciplinary System Reliability Analysis
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Han, Song
1997-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines are investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
A Passive System Reliability Analysis for a Station Blackout
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunett, Acacia; Bucknor, Matthew; Grabaskas, David
2015-05-03
The latest iterations of advanced reactor designs have included increased reliance on passive safety systems to maintain plant integrity during unplanned sequences. While these systems are advantageous in reducing the reliance on human intervention and availability of power, the phenomenological foundations on which these systems are built require a novel approach to a reliability assessment. Passive systems possess the unique ability to fail functionally without failing physically, a result of their explicit dependency on existing boundary conditions that drive their operating mode and capacity. Argonne National Laboratory is performing ongoing analyses that demonstrate various methodologies for the characterization of passive system reliability within a probabilistic framework. Two reliability analysis techniques are utilized in this work. The first approach, the Reliability Method for Passive Systems, provides a mechanistic technique employing deterministic models and conventional static event trees. The second approach, a simulation-based technique, utilizes discrete dynamic event trees to treat time-dependent phenomena during scenario evolution. For this demonstration analysis, both reliability assessment techniques are used to analyze an extended station blackout in a pool-type sodium fast reactor (SFR) coupled with a reactor cavity cooling system (RCCS). This work demonstrates the entire process of a passive system reliability analysis, including identification of important parameters and failure metrics, treatment of uncertainties and analysis of results.
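A toy illustration (invented physics and numbers, not the Argonne models) of "failing functionally without failing physically": a natural-circulation loop whose heat-removal capacity depends on a boundary condition, here ambient temperature, so an intact system can still fall short of the decay-heat demand:

```python
import random

random.seed(0)

def heat_removal_kw(ambient_c):
    # Assumed linear capacity model: hotter ambient, less heat rejection.
    return 260.0 - 2.0 * ambient_c

decay_heat_kw = 200.0   # demand during the extended blackout (assumed)
n = 100_000
failures = sum(heat_removal_kw(random.gauss(25.0, 8.0)) < decay_heat_kw
               for _ in range(n))
p_functional_failure = failures / n
print(round(p_functional_failure, 3))
```

Here the hardware never breaks; the failure probability comes entirely from sampling the boundary condition, which is the essence of a passive-system reliability assessment.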
Rocket engine system reliability analyses using probabilistic and fuzzy logic techniques
NASA Technical Reports Server (NTRS)
Hardy, Terry L.; Rapp, Douglas C.
1994-01-01
The reliability of rocket engine systems was analyzed by using probabilistic and fuzzy logic techniques. Fault trees were developed for integrated modular engine (IME) and discrete engine systems, and then were used with the two techniques to quantify reliability. The IRRAS (Integrated Reliability and Risk Analysis System) computer code, developed for the U.S. Nuclear Regulatory Commission, was used for the probabilistic analyses, and FUZZYFTA (Fuzzy Fault Tree Analysis), a code developed at NASA Lewis Research Center, was used for the fuzzy logic analyses. Although both techniques provided estimates of the reliability of the IME and discrete systems, probabilistic techniques emphasized uncertainty resulting from randomness in the system whereas fuzzy logic techniques emphasized uncertainty resulting from vagueness in the system. Because uncertainty can have both random and vague components, both techniques were found to be useful tools in the analysis of rocket engine system reliability.
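The contrast between the two uncertainty treatments can be sketched on a 2-input OR gate; all failure numbers below are invented, not the IME study's values:

```python
# Probabilistic treatment: crisp point estimates capture randomness.
p1, p2 = 1e-3, 2e-3
p_or = 1 - (1 - p1) * (1 - p2)

# Fuzzy treatment: triangular numbers (low, mode, high) capture vagueness;
# an alpha-cut interval is pushed through the gate with interval arithmetic.
def alpha_cut(tri, a):
    lo, m, hi = tri
    return (lo + a * (m - lo), hi - a * (hi - m))

def or_gate(i1, i2):
    # OR-gate probability is increasing in both inputs, so endpoints map to endpoints.
    return tuple(1 - (1 - x) * (1 - y) for x, y in zip(i1, i2))

f1 = (5e-4, 1e-3, 2e-3)
f2 = (1e-3, 2e-3, 4e-3)
interval = or_gate(alpha_cut(f1, 0.5), alpha_cut(f2, 0.5))
print(p_or)       # crisp point estimate
print(interval)   # interval at membership level alpha = 0.5
```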
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peters, Valerie A.; Ogilvie, Alistair B.
2012-01-01
This report addresses the general data requirements for reliability analysis of fielded wind turbines and other wind plant equipment. Written by Sandia National Laboratories, it is intended to help develop a basic understanding of the data needed for reliability analysis from a Computerized Maintenance Management System (CMMS) and other data systems. The report provides a rationale for why these data should be collected, a list of the data needed to support reliability and availability analysis, and specific recommendations for a CMMS to support automated analysis. Though written for reliability analysis of wind turbines, much of the information is applicable to a wider variety of equipment and analysis and reporting needs. The 'Motivation' section of the report makes the case for collecting and analyzing field data: the benefits can include increased energy delivered, decreased operating costs, enhanced preventive maintenance schedules, solutions to the issues with the largest payback, and identification of early failure indicators.
Integrating Reliability Analysis with a Performance Tool
NASA Technical Reports Server (NTRS)
Nicol, David M.; Palumbo, Daniel L.; Ulrey, Michael
1995-01-01
A large number of commercial simulation tools support performance-oriented studies of complex computer and communication systems. Reliability of these systems, when desired, must be obtained by remodeling the system in a different tool. This has obvious drawbacks: (1) substantial extra effort is required to create the reliability model; (2) through modeling error the reliability model may not reflect precisely the same system as the performance model; (3) as the performance model evolves one must continuously reevaluate the validity of assumptions made in that model. In this paper we describe an approach, and a tool that implements this approach, for integrating a reliability analysis engine into a production-quality simulation-based performance modeling tool, and for modeling within such an integrated tool. The integrated tool allows one to use the same modeling formalisms to conduct both performance and reliability studies. We describe how the reliability analysis engine is integrated into the performance tool, describe the extensions made to the performance tool to support the reliability analysis, and consider the tool's performance.
The 747 primary flight control systems reliability and maintenance study
NASA Technical Reports Server (NTRS)
1979-01-01
The major operational characteristics of the 747 Primary Flight Control Systems (PFCS) are described. Results of reliability analysis for separate control functions are presented. The analysis makes use of a NASA computer program which calculates reliability of redundant systems. Costs for maintaining the 747 PFCS in airline service are assessed. The reliabilities and cost will provide a baseline for use in trade studies of future flight control system design.
The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.
Kumar, Mohit; Yadav, Shiv Prasad
2012-07-01
In this paper, a new approach to intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical component affecting it. A weakest-t-norm-based intuitionistic fuzzy fault-tree analysis is presented to calculate the fault intervals of system components by integrating experts' knowledge and experience, expressed as possibilities of failure of the bottom events. It applies fault-tree analysis, α-cuts of intuitionistic fuzzy sets, and Tω (the weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. The paper also modifies Tanaka et al.'s fuzzy fault-tree definition. For numerical verification, a malfunction of the "automatic gun" weapon system is presented as an example, and the result of the proposed method is compared with existing reliability analysis approaches.
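The key property of the weakest t-norm can be sketched on triangular numbers written as (mode, left spread, right spread): Tω-based addition takes the maximum of the spreads instead of summing them, so vagueness does not pile up along the tree. The bottom-event values below are invented:

```python
def add_min_tnorm(a, b):
    # Classical extension-principle (min t-norm) addition: spreads add up.
    m1, l1, r1 = a
    m2, l2, r2 = b
    return (m1 + m2, l1 + l2, r1 + r2)

def add_weakest_tnorm(a, b):
    # T-omega (weakest t-norm) addition: spreads are bounded by the max.
    m1, l1, r1 = a
    m2, l2, r2 = b
    return (m1 + m2, max(l1, l2), max(r1, r2))

x = (0.10, 0.02, 0.03)   # invented bottom-event possibilities
y = (0.20, 0.05, 0.04)
print(add_min_tnorm(x, y))      # spreads grow toward (0.07, 0.07)
print(add_weakest_tnorm(x, y))  # spreads stay at (0.05, 0.04)
```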
NASA Astrophysics Data System (ADS)
Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram
2017-03-01
The binary-state (success or failure) assumptions used in conventional reliability analysis are inappropriate for complex industrial systems because sufficient probabilistic information is rarely available. For large complex systems, the uncertainty of each individual parameter increases the uncertainty of the system reliability. In this paper, the concept of fuzzy reliability is used for reliability analysis of the system, and the effects of the coverage factor and of subsystem failure and repair rates on the fuzzy availability of a fault-tolerant crystallization system in a sugar plant are analyzed. Mathematical modeling of the system is carried out using the mnemonic rule to derive the Chapman-Kolmogorov differential equations, which are then solved with the fourth-order Runge-Kutta method.
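A minimal version of that modelling chain: the Chapman-Kolmogorov equations of a 2-state (up/down) Markov model, integrated with the fourth-order Runge-Kutta method. The failure and repair rates are invented, not the sugar-plant subsystem values, and coverage is omitted for brevity:

```python
lam, mu = 0.01, 0.5   # assumed failure and repair rates (per hour)

def deriv(p):
    # Chapman-Kolmogorov equations: dP_up/dt, dP_down/dt.
    p_up, p_dn = p
    return (-lam * p_up + mu * p_dn,
             lam * p_up - mu * p_dn)

def rk4_step(p, h):
    k1 = deriv(p)
    k2 = deriv(tuple(x + h / 2 * k for x, k in zip(p, k1)))
    k3 = deriv(tuple(x + h / 2 * k for x, k in zip(p, k2)))
    k4 = deriv(tuple(x + h * k for x, k in zip(p, k3)))
    return tuple(x + h / 6 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(p, k1, k2, k3, k4))

p = (1.0, 0.0)            # start in the up state
for _ in range(10_000):   # integrate to t = 1000 h with step h = 0.1 h
    p = rk4_step(p, 0.1)

print(round(p[0], 6))     # long-run availability; analytically mu/(lam+mu)
```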
Reliability/safety analysis of a fly-by-wire system
NASA Technical Reports Server (NTRS)
Brock, L. D.; Goddman, H. A.
1980-01-01
An analysis technique has been developed to estimate the reliability of a very complex, safety-critical system by constructing a diagram of the reliability equations for the total system. This diagram has many of the characteristics of a fault-tree or success-path diagram, but is much easier to construct for complex redundant systems. The diagram provides insight into system failure characteristics and identifies the most likely failure modes. A computer program aids in the construction of the diagram and the computation of reliability. Analysis of the NASA F-8 Digital Fly-by-Wire Flight Control System is used to illustrate the technique.
NASA Astrophysics Data System (ADS)
Ke, Jyh-Bin; Lee, Wen-Chiung; Wang, Kuo-Hsiung
2007-07-01
This paper presents the reliability and sensitivity analysis of a system with M primary units, W warm standby units, and R unreliable service stations, where warm standby units may fail while switching to the primary state. Failure times of primary and warm standby units are assumed to have exponential distributions, and service times of the failed units are exponentially distributed. In addition, breakdown times and repair times of the service stations also follow exponential distributions. Expressions for the system reliability, R_Y(t), and the mean time to system failure, MTTF, are derived. Sensitivity and relative sensitivity analyses of the system reliability and the mean time to failure with respect to the system parameters are also investigated.
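A small closed-form instance of this family of models (one primary unit, one warm standby, imperfect switchover, no repair) shows how the MTTF expressions arise; the rates and switchover probability are invented:

```python
lam   = 0.001    # primary unit failure rate, per hour (assumed)
lam_w = 0.0002   # warm standby failure rate while idle (assumed)
q     = 0.98     # probability the standby switches over successfully (assumed)

rate2 = lam + lam_w                     # exit rate of the "both units good" state
p_survive = (lam * q + lam_w) / rate2   # a first failure that leaves one unit running
mttf = 1 / rate2 + p_survive * (1 / lam)
print(round(mttf, 1))                   # hours
```

Mean sojourn times of a pure-death Markov chain add up, weighted by the probability of surviving each transition; the full M/W/R model generalizes this with repair and standby switching failures.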
Design of fuel cell powered data centers for sufficient reliability and availability
NASA Astrophysics Data System (ADS)
Ritchie, Alexa J.; Brouwer, Jacob
2018-04-01
It is challenging to design a sufficiently reliable fuel cell electrical system for use in data centers, which require 99.9999% uptime. Such a system could lower emissions and increase data center efficiency, but the reliability and availability of such a system must be analyzed and understood. Currently, extensive backup equipment is used to ensure electricity availability. The proposed design alternative uses multiple fuel cell systems each supporting a small number of servers to eliminate backup power equipment provided the fuel cell design has sufficient reliability and availability. Potential system designs are explored for the entire data center and for individual fuel cells. Reliability block diagram analysis of the fuel cell systems was accomplished to understand the reliability of the systems without repair or redundant technologies. From this analysis, it was apparent that redundant components would be necessary. A program was written in MATLAB to show that the desired system reliability could be achieved by a combination of parallel components, regardless of the number of additional components needed. Having shown that the desired reliability was achievable through some combination of components, a dynamic programming analysis was undertaken to assess the ideal allocation of parallel components.
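The parallel-redundancy finding above can be reduced to a one-line sizing rule: if each independent unit has availability A, all n are down with probability (1-A)^n, so n must satisfy (1-A)^n <= 1 - target. The per-unit availability below is an assumption, and the sketch presumes any single surviving unit can carry its load:

```python
import math

target = 0.999999   # "six nines" uptime required by the data center
A = 0.98            # assumed availability of one fuel cell system

n = math.ceil(math.log(1 - target) / math.log(1 - A))
print(n)  # 4
```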
Calculating system reliability with SRFYDO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M
2010-01-01
SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems, possibly with several versions of the system that share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
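A toy Bayesian roll-up in the same spirit (not the SRFYDO model itself): Beta posteriors from pass/fail component tests, combined by Monte Carlo into a series-system reliability with an uncertainty interval. The component names and test counts are invented:

```python
import random

random.seed(42)

# (passes, trials) per component; hypothetical test data.
tests = {"fuze": (48, 50), "motor": (29, 30), "guidance": (57, 60)}

draws = 5000
samples = []
for _ in range(draws):
    r = 1.0
    for passes, trials in tests.values():
        # Posterior under a uniform Beta(1,1) prior: Beta(passes+1, fails+1).
        r *= random.betavariate(passes + 1, trials - passes + 1)
    samples.append(r)   # series system: reliabilities multiply

samples.sort()
mean = sum(samples) / draws
lo, hi = samples[int(0.05 * draws)], samples[int(0.95 * draws)]
print(round(mean, 3), round(lo, 3), round(hi, 3))
```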
Reliability studies of Integrated Modular Engine system designs
NASA Technical Reports Server (NTRS)
Hardy, Terry L.; Rapp, Douglas C.
1993-01-01
A study was performed to evaluate the reliability of Integrated Modular Engine (IME) concepts. Comparisons were made between networked IME systems and non-networked discrete systems using expander cycle configurations. Both redundant and non-redundant systems were analyzed. Binomial approximation and Markov analysis techniques were employed to evaluate total system reliability. In addition, Failure Modes and Effects Analyses (FMEA), Preliminary Hazard Analyses (PHA), and Fault Tree Analysis (FTA) were performed to allow detailed evaluation of the IME concept. A discussion of these system reliability concepts is also presented.
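The binomial approximation mentioned above treats a redundant engine cluster as a k-of-n system of independent engines. The cluster size, required engines, and per-engine reliability below are illustrative, not the study's values:

```python
from math import comb

def cluster_reliability(n, k, r):
    """Probability that at least k of n independent engines (reliability r) work."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

r_engine = 0.995
print(round(cluster_reliability(8, 8, r_engine), 4))  # no engine-out: all 8 needed
print(round(cluster_reliability(8, 7, r_engine), 4))  # one-engine-out capability
```

The gap between the two numbers is the reliability benefit of engine-out capability, before accounting for the common-cause and propagation failures that the FMEA, PHA, and FTA address.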
NASA Astrophysics Data System (ADS)
Vainshtein, Igor; Baruch, Shlomi; Regev, Itai; Segal, Victor; Filis, Avishai; Riabzev, Sergey
2018-05-01
The growing demand for EO applications that operate around the clock, 24/7, such as border surveillance systems, emphasizes the need for a highly reliable cryocooler with increased operational availability and an optimized Integrated Logistic Support (ILS) for the system. To meet this need, RICOR developed linear and rotary cryocoolers that successfully achieved this goal. Cryocooler MTTF was analyzed by theoretical reliability evaluation methods, demonstrated by normal and accelerated life tests at the cryocooler level, and finally verified by field data analysis derived from cryocoolers operating at the system level. This paper reviews theoretical reliability analysis methods together with reliability test results from standard and accelerated life demonstration tests performed at RICOR's advanced reliability laboratory. To close the loop, reliability verification data from fielded systems is presented.
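The MTTF bookkeeping behind such demonstrations, under an exponential (constant failure rate) model, pools operating hours across test cells and scales accelerated hours by an acceleration factor. All counts and the acceleration factor below are invented:

```python
# Hypothetical life-test results.
normal_hours, normal_failures = 180_000, 3
accel_hours, accel_failures = 40_000, 4
af = 5.0   # assumed acceleration factor for the accelerated cells

equiv_hours = normal_hours + accel_hours * af
mttf = equiv_hours / (normal_failures + accel_failures)
print(round(mttf))  # 54286 hours
```

Field data is folded in the same way, with fielded operating hours and observed failures added to the numerator and denominator.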
Preliminary Analysis of LORAN-C System Reliability for Civil Aviation.
1981-09-01
...overview of the analysis technique. Section 3 describes the computerized LORAN-C coverage model which is used extensively in the reliability analysis... Xth Plenary Assembly, Geneva, 1963, published by the International Telecommunications Union. Braff, R., Computer program to calculate a Markov Chain Reliability Model, unpublished work, MITRE Corporation.
NASA Technical Reports Server (NTRS)
Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.
2005-01-01
This paper addresses the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and subsystem reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters, and present sensitivity analysis results for a selected subset of the key reliability drivers. Reliability drivers identified include: number of engines for the liquid propulsion stage, single-engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction), propellant-specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground start hold-down with launch commit criteria, engine altitude start (1 in. start), multiple altitude restart (less than 1 restart), component, subsystem and system design, manufacturing, ground operation support, pre- and post-flight checkouts and inspection, and extensiveness of the development program. We present sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single-engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).
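Two of the listed drivers, number of engines and engine-out design with a catastrophic fraction, can be put in a parametric sketch. The engine count, per-engine reliability, and catastrophic fraction are illustrative only:

```python
from math import comb

def stage_reliability(n, r, engine_out, cat_frac):
    """n independent engines; a failure is catastrophic with probability cat_frac.
    With engine_out capability, one benign shutdown is survivable."""
    p_benign = (1 - r) * (1 - cat_frac)
    all_good = r ** n
    if not engine_out:
        return all_good
    one_benign = comb(n, 1) * p_benign * r ** (n - 1)
    return all_good + one_benign

r = 0.995
print(round(stage_reliability(4, r, False, 0.2), 5))  # all four engines required
print(round(stage_reliability(4, r, True, 0.2), 5))   # one benign engine-out allowed
```

Sweeping n, r, or cat_frac in such a model is the kind of sensitivity analysis the abstract describes.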
NASA Technical Reports Server (NTRS)
Migneault, G. E.
1979-01-01
Emulation techniques are proposed as a solution to a difficulty arising in the analysis of the reliability of highly reliable computer systems for future commercial aircraft. The difficulty, viz., the lack of credible precision in reliability estimates obtained by analytical modeling techniques, is established. It is shown to be an unavoidable consequence of: (1) a reliability requirement so demanding that system evaluation by use testing is infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precise outputs are quite sensitive to errors of approximation in their input data. The technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. The use of emulation techniques for pseudo-testing systems, to evaluate bounds on the parameter values needed by the analytical techniques, is discussed.
Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis
NASA Astrophysics Data System (ADS)
Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang
2017-07-01
Traditional reliability evaluation of machine center components yields a biased component reliability model and underestimates reliability because it overlooks failure propagation. To rectify this, a new reliability evaluation method based on cascading failure analysis and failure-influence-degree assessment is proposed. A directed graph model of cascading failures among components is established according to cascading failure mechanism analysis and graph theory. The failure influence degrees of the system components are assessed using the adjacency matrix and its transpose, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, showing that: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure influence degree, which provides a theoretical basis for reliability allocation of machine center systems.
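A toy sketch of the failure-influence ranking idea: run PageRank on the transposed cascading-failure digraph (an edge u -> v means a failure of u can propagate to v), so components that trigger wide cascades rank high. The 4-component machine-center graph is invented, and dangling-node mass is ignored for brevity:

```python
# Cascading-failure digraph: u -> v means "failure of u can propagate to v".
edges = {"controller": ["spindle", "hydraulics"],
         "hydraulics": ["spindle", "toolchanger"],
         "spindle": ["toolchanger"],
         "toolchanger": []}

# In-degree in the original graph = out-degree in the transposed graph.
indeg = {u: 0 for u in edges}
for u, vs in edges.items():
    for v in vs:
        indeg[v] += 1

n, d = len(edges), 0.85
rank = {u: 1.0 / n for u in edges}
for _ in range(100):
    # PageRank on the transpose: u inherits rank from the components it can knock out.
    rank = {u: (1 - d) / n + d * sum(rank[v] / indeg[v] for v in edges[u])
            for u in edges}

order = sorted(rank, key=rank.get, reverse=True)
print(order[0])  # the component whose failure propagates most widely
```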
Reliability Analysis of the MSC System
NASA Astrophysics Data System (ADS)
Kim, Young-Soo; Lee, Do-Kyoung; Lee, Chang-Ho; Woo, Sun-Hee
2003-09-01
MSC (Multi-Spectral Camera) is the payload of KOMPSAT-2, which is being developed for earth imaging in the optical and near-infrared region. The design of the MSC is complete, and its reliability has been assessed from the part level up to the MSC system level. The reliability was analyzed for the worst case, and the results showed that the value complies with the requirement of 0.9. In this paper, a method of calculating the reliability of the MSC system is described, and the assessment results are presented and discussed.
Reliability analysis and initial requirements for FC systems and stacks
NASA Astrophysics Data System (ADS)
Åström, K.; Fontell, E.; Virtanen, S.
In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system with respect to reliability and safety as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (series of 5 sets of 5 parallel stacks), is analysed with respect to stack reliability requirements as a function of predictability of critical failures and the Weibull shape factor of the failure rate distributions.
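The 5 × 5 example configuration lends itself to a compact reliability-block-diagram sketch. The code below is a simplified illustration under assumptions the paper does not necessarily make: stacks fail independently with Weibull lifetimes, and a set of 5 parallel stacks counts as functional while at least one stack survives (the paper's model distinguishes partially and critically failed states, which this sketch omits). The scale and shape values are placeholders.

```python
import math

def weibull_reliability(t, eta, beta):
    """Weibull survival probability of a single stack at time t
    (eta: scale/characteristic life, beta: shape factor)."""
    return math.exp(-((t / eta) ** beta))

def stack_system_reliability(t, eta, beta, n_parallel=5, n_series=5):
    """Series chain of n_series sets, each set holding n_parallel stacks.
    A set is counted as functional while at least one stack survives;
    stacks are assumed independent and identical."""
    r_stack = weibull_reliability(t, eta, beta)
    r_set = 1.0 - (1.0 - r_stack) ** n_parallel
    return r_set ** n_series

# placeholder values: 20,000 h characteristic stack life, shape factor 1.6
r_mission = stack_system_reliability(t=10000.0, eta=20000.0, beta=1.6)
```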
Parts and Components Reliability Assessment: A Cost Effective Approach
NASA Technical Reports Server (NTRS)
Lee, Lydia
2009-01-01
System reliability assessment is a methodology which incorporates reliability analyses performed at the parts and components level, such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA), and Fault Tree Analysis (FTA), to assess risks and perform design tradeoffs, and therefore to ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standard-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published in United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial design stages, when the system is still in development and hard failure data is not yet available, or when manufacturers are not contractually obliged by their customers to publish reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success efficiently, at low cost, and on a tight schedule.
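The standards-based approach boils down to a parts-count prediction: sum the quantity × failure rate × quality factor contributions of all parts to get a system failure rate, then convert it to mission reliability under a constant-failure-rate assumption. The sketch below follows that pattern; the part list, rates, and quality factors are illustrative placeholders, not values from any handbook.

```python
import math

# (part type, quantity, base failure rate per 1e6 hours, quality factor)
# all rates here are illustrative placeholders, not handbook values
parts = [
    ("microcircuit", 12, 0.025, 1.0),
    ("resistor",     40, 0.002, 1.0),
    ("capacitor",    25, 0.004, 3.0),
    ("connector",     4, 0.010, 1.0),
]

# parts-count prediction: system failure rate is the sum of part contributions
lam_total = sum(qty * rate * pi_q for _, qty, rate, pi_q in parts)

# constant-failure-rate conversion to mission reliability
mission_hours = 10_000
reliability = math.exp(-lam_total * mission_hours / 1e6)
```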
Reliability evaluation methodology for NASA applications
NASA Technical Reports Server (NTRS)
Taneja, Vidya S.
1992-01-01
Liquid rocket engine technology has been characterized by the development of complex systems containing a large number of subsystems, components, and parts. The trend toward even larger and more complex systems is continuing. Liquid rocket engineers have focused mainly on performance-driven designs to increase the payload delivery of a launch vehicle for a given mission. In other words, although the failure of a single inexpensive part or component may cause the failure of the system, reliability in general has not been considered one of the system parameters, like cost or performance. Until now, quantification of reliability has not been a consideration during system design and development in the liquid rocket industry. Engineers and managers have long been aware that the reliability of a system increases during development, but no serious attempts have been made to quantify it. As a result, a method to quantify reliability during design and development is needed. This includes the application of probabilistic models which utilize both engineering analysis and test data. Classical methods require the use of operating data for reliability demonstration. In contrast, the method described in this paper is based on similarity, analysis, and testing, combined with Bayesian statistical analysis.
NASA Technical Reports Server (NTRS)
Juhasz, A. J.; Bloomfield, H. S.
1985-01-01
A combinatorial reliability approach is used to identify potential dynamic power conversion systems for space mission applications. A reliability and mass analysis is also performed, specifically for a 100 kWe nuclear Brayton power conversion system with parallel redundancy. Although this study is done for a reactor outlet temperature of 1100 K, preliminary system mass estimates are also included for reactor outlet temperatures ranging up to 1500 K.
Reliability Impacts in Life Support Architecture and Technology Selection
NASA Technical Reports Server (NTRS)
Lange, Kevin E.; Anderson, Molly S.
2012-01-01
Quantitative assessments of system reliability and equivalent system mass (ESM) were made for different life support architectures based primarily on International Space Station technologies. The analysis was applied to a one-year deep-space mission. System reliability was increased by adding redundancy and spares, which added to the ESM. Results were thus obtained allowing a comparison of the ESM for each architecture at equivalent levels of reliability. Although the analysis contains numerous simplifications and uncertainties, the results suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support ESM and could influence the optimal degree of life support closure. Approaches for reducing reliability impacts were investigated and are discussed.
Reliability analysis in interdependent smart grid systems
NASA Astrophysics Data System (ADS)
Peng, Hao; Kan, Zhe; Zhao, Dandan; Han, Jianmin; Lu, Jianfeng; Hu, Zhaolong
2018-06-01
Complex network theory is a useful way to study many real complex systems. In this paper, a reliability analysis model based on complex network theory is introduced for interdependent smart grid systems. We focus on understanding the structure of smart grid systems, the underlying network model, the interactions and relationships among components, and how cascading failures occur in interdependent smart grid systems. We propose a practical model for interdependent smart grid systems using complex network theory. In addition, based on percolation theory, we study cascading failures and present a detailed mathematical analysis of failure propagation in such systems. We analyze the reliability of the proposed model under random attacks or failures by calculating the size of the giant functioning component in interdependent smart grid systems. Our simulation results show that there exists a threshold for the proportion of faulty nodes, beyond which the smart grid systems collapse, and we determine the critical values for different system parameters. In this way, the reliability analysis model based on complex network theory can be effectively utilized for anti-attack and protection purposes in interdependent smart grid systems.
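The percolation argument in the abstract can be illustrated for a single network layer. For an Erdos-Renyi network with mean degree k in which a fraction p of nodes survives a random attack, the giant-component fraction S satisfies the self-consistency equation S = p(1 - exp(-kS)), with threshold p_c = 1/k. The interdependent case studied in the paper couples two such equations across layers; the sketch below solves only the single-layer fixed point by direct iteration.

```python
import math

def giant_component_fraction(mean_degree, p_keep, iters=500):
    """Giant-component fraction of an Erdos-Renyi network after random
    node removal, from the fixed point S = p_keep * (1 - exp(-k * S))."""
    s = 1.0
    for _ in range(iters):
        s = p_keep * (1.0 - math.exp(-mean_degree * s))
    return s

# above the percolation threshold p_c = 1/k a giant component survives;
# below it the network fragments and S collapses to zero
s_above = giant_component_fraction(4.0, 0.9)   # p > 1/4
s_below = giant_component_fraction(4.0, 0.1)   # p < 1/4
```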
NASA Technical Reports Server (NTRS)
Migneault, G. E.
1979-01-01
Emulation techniques applied to the analysis of the reliability of highly reliable computer systems for future commercial aircraft are described. The lack of credible precision in reliability estimates obtained by analytical modeling techniques is first established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Next, the technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. Use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques. Finally an illustrative example is presented to demonstrate from actual use the promise of the proposed application of emulation.
Reliability analysis of airship remote sensing system
NASA Astrophysics Data System (ADS)
Qin, Jun
1998-08-01
The Airship Remote Sensing System (ARSS), used to obtain dynamic or real-time images for remote sensing of catastrophes and the environment, is a complex mixed system whose sensor platform is a remotely controlled airship. The achievement of a remote sensing mission depends on a series of factors, so it is very important to analyze the reliability of the ARSS. First, the system model was simplified from a multi-state system to a two-state system on the basis of the results of failure mode and effect analysis and failure mode, effect and criticality analysis. A fault tree was then created after analyzing all factors and their interrelations. This fault tree includes four branches: the engine subsystem, the remote control subsystem, the airship construction subsystem, and the flying meteorology and climate subsystem. Through fault tree analysis and classification of basic events, the weak links were discovered. Test runs showed no difference from the theoretical analysis. In accordance with the above conclusions, a plan for reliability growth and reliability maintenance was proposed. System reliability was raised from 89 percent to 92 percent through reform of the man-machine interface and the addition of secondary backup equipment and secondary remote control equipment.
Reliability analysis in the Office of Safety, Environmental, and Mission Assurance (OSEMA)
NASA Astrophysics Data System (ADS)
Kauffmann, Paul J.
1994-12-01
The technical personnel in the SEMA office are working to provide the highest degree of value-added activities to their support of the NASA Langley Research Center mission. Management perceives that reliability analysis tools and an understanding of a comprehensive systems approach to reliability will be a foundation of this change process. Since the office is involved in a broad range of activities supporting space mission projects and operating activities (such as wind tunnels and facilities), it was not clear what reliability tools the office should be familiar with and how these tools could serve as a flexible knowledge base for organizational growth. Interviews and discussions with the office personnel (both technicians and engineers) revealed that job responsibilities ranged from incoming inspection to component or system analysis to safety and risk. It was apparent that a broad base in applied probability and reliability along with tools for practical application was required by the office. A series of ten class sessions with a duration of two hours each was organized and scheduled. Hand-out materials were developed and practical examples based on the type of work performed by the office personnel were included. Topics covered were: Reliability Systems - a broad system oriented approach to reliability; Probability Distributions - discrete and continuous distributions; Sampling and Confidence Intervals - random sampling and sampling plans; Data Analysis and Estimation - Model selection and parameter estimates; and Reliability Tools - block diagrams, fault trees, event trees, FMEA. In the future, this information will be used to review and assess existing equipment and processes from a reliability system perspective. An analysis of incoming materials sampling plans was also completed. This study looked at the issues associated with Mil Std 105 and changes for a zero defect acceptance sampling plan.
[Reliability theory based on quality risk network analysis for Chinese medicine injection].
Li, Zheng; Kang, Li-Yuan; Fan, Xiao-Hui
2014-08-01
A new risk analysis method based on reliability theory is introduced in this paper for the quality risk management of Chinese medicine injection manufacturing plants. Risk events, including both cause and effect events, were represented as nodes in a Bayesian network. The approach thus transforms the risk analysis results of failure mode and effect analysis (FMEA) onto a Bayesian network platform. With its structure and parameters determined, the network can be used to evaluate the system reliability quantitatively with probabilistic analytical approaches. Using network analysis tools such as GeNIe and AgenaRisk, we are able to find the nodes that are most critical to the system reliability. The importance of each node can be quantitatively evaluated by calculating its effect on the overall risk, and a mitigation plan can be determined accordingly to reduce the nodes' influence and improve the system reliability. Using the Shengmai injection manufacturing plant of SZYY Ltd as a case study, we analyzed the quality risk with both static FMEA and dynamic Bayesian network analysis. The potential risk factors for Shengmai injection manufacturing quality were identified with the network analysis platform, and quality assurance actions were defined to reduce the risk and improve product quality.
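The node-criticality idea, evaluating how much each risk node contributes to the top event, can be sketched by exact enumeration on a toy network. The nodes, probabilities, and top-event logic below are entirely hypothetical (they are not from the Shengmai case study), and a real analysis would use a proper Bayesian network engine such as the GeNIe or AgenaRisk tools named in the abstract rather than brute-force enumeration.

```python
from itertools import product

# hypothetical quality-risk nodes and marginal failure probabilities
priors = {"raw_material": 0.02, "sterilization": 0.01, "filling": 0.03}

def p_top(evidence=None):
    """P(quality failure of a batch) by exact enumeration. The top event
    fires if sterilization fails, or if both raw_material and filling
    deviate (illustrative logic, independent causes)."""
    evidence = evidence or {}
    names = list(priors)
    total = 0.0
    for states in product([0, 1], repeat=len(names)):
        s = dict(zip(names, states))
        if any(s[k] != v for k, v in evidence.items()):
            continue                              # inconsistent with evidence
        w = 1.0
        for k, v in s.items():
            if k not in evidence:                 # evidence nodes are fixed
                w *= priors[k] if v else 1.0 - priors[k]
        top = s["sterilization"] or (s["raw_material"] and s["filling"])
        total += w * top
    return total

baseline = p_top()
# node criticality: rise in top-event probability when the node is forced failed
criticality = {n: p_top({n: 1}) - baseline for n in priors}
```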
NASA Astrophysics Data System (ADS)
Ozaki, Hirokazu; Kara, Atsushi; Cheng, Zixue
2012-05-01
In this article, we investigate the reliability of M-for-N (M:N) shared protection systems. We focus on the reliability that is perceived by an end user of one of N units. We assume that any failed unit is instantly replaced by one of the M units (if available). We describe the effectiveness of such a protection system in a quantitative manner under the condition that the failed units are not repairable. Mathematical analysis gives the closed-form solution of the reliability and mean time to failure (MTTF). We also analyse several numerical examples of the reliability and MTTF. This result can be applied, for example, to the analysis and design of an integrated circuit consisting of redundant backup components. In such a device, repairing a failed component is unrealistic. The analysis provides useful information for the design for general shared protection systems in which the failed units are not repaired.
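The closed-form solutions in the article are not reproduced here, but the M-for-N protection model is easy to cross-check by simulation. The sketch below estimates the MTTF perceived by the user of one slot among N, with exponential unit lifetimes, instant replacement from a shared pool of M non-repairable spares, and failed units discarded once the pool is empty. All parameter values are illustrative.

```python
import heapq
import random

def mttf_shared_protection(n_units, m_spares, lam=1.0, runs=20000, seed=1):
    """Monte Carlo MTTF perceived by the user of slot 0 in an M-for-N shared
    protection system: exponential unit lifetimes, instant replacement from
    a shared pool of non-repairable spares, failed units discarded after
    the pool is exhausted."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        spares = m_spares
        heap = [(rng.expovariate(lam), slot) for slot in range(n_units)]
        heapq.heapify(heap)
        while True:
            t, slot = heapq.heappop(heap)
            if spares > 0:                       # replace the failed unit
                spares -= 1
                heapq.heappush(heap, (t + rng.expovariate(lam), slot))
            elif slot == 0:                      # user's slot fails for good
                total += t
                break
            # other slots simply stay failed once the pool is empty
    return total / runs

mttf_no_spares = mttf_shared_protection(3, 0)   # no protection: about 1/lam
mttf_protected = mttf_shared_protection(3, 3, runs=5000, seed=2)
```

With no spares the user sees only the lifetime of their own unit; shared spares extend the perceived MTTF, which is the effect the article quantifies in closed form.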
DATMAN: A reliability data analysis program using Bayesian updating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, M.; Feltus, M.A.
1996-12-31
Preventive maintenance (PM) techniques focus on the prevention of failures, in particular of system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on PM techniques by introducing a set of guidelines by which to evaluate system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operating time are accumulated. System reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing the tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits the distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly menu interface that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
NASA Technical Reports Server (NTRS)
Park, Nohpill; Reagan, Shawn; Franks, Greg; Jones, William G.
1999-01-01
This paper discusses analytical approaches to evaluating the performance of spacecraft on-board computing systems, with the ultimate aim of achieving a reliable spacecraft data communications system. A sensitivity analysis of the memory system on ProSEDS (Propulsive Small Expendable Deployer System), as part of its data communication system, will be investigated. General issues and possible approaches to a reliable spacecraft on-board interconnection network and processor array will also be shown. The performance issues of spacecraft on-board computing systems, such as sensitivity, throughput, delay, and reliability, will be introduced and discussed.
Combining System Safety and Reliability to Ensure NASA CoNNeCT's Success
NASA Technical Reports Server (NTRS)
Havenhill, Maria; Fernandez, Rene; Zampino, Edward
2012-01-01
Hazard Analysis, Failure Modes and Effects Analysis (FMEA), the Limited-Life Items List (LLIL), and the Single Point Failure (SPF) List were applied by System Safety and Reliability engineers on NASA's Communications, Navigation, and Networking reConfigurable Testbed (CoNNeCT) Project. The integrated approach, involving cross reviews of these reports by System Safety, Reliability, and Design engineers, resulted in the mitigation of all identified hazards. The outcome was that the system met all of its safety requirements.
Structural system reliability calculation using a probabilistic fault tree analysis method
NASA Technical Reports Server (NTRS)
Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.
1992-01-01
The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computation-intensive calculations. A computer program has been developed to implement the PFTA method.
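The first step of the procedure, evaluating a fault tree over its bottom events, can be sketched directly. The snippet below computes the exact top-event probability for a coherent tree of AND/OR gates with independent, non-repeated basic events; the paper's contribution (approximation functions for implicit bottom events and adaptive importance sampling) is beyond this sketch. The example tree is hypothetical.

```python
def fault_tree_prob(node):
    """Exact top-event probability of a coherent fault tree with
    independent, non-repeated basic events. A node is either a float
    (basic-event probability) or a tuple ('AND' | 'OR', [children])."""
    if isinstance(node, float):
        return node
    op, children = node
    ps = [fault_tree_prob(c) for c in children]
    if op == "AND":                 # all children must fail
        out = 1.0
        for p in ps:
            out *= p
        return out
    out = 1.0                       # OR: complement of all children surviving
    for p in ps:
        out *= 1.0 - p
    return 1.0 - out

# hypothetical tree: the system fails if C fails, or if both A and B fail
tree = ("OR", [("AND", [0.1, 0.2]), 0.05])
p_system_failure = fault_tree_prob(tree)
```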
NASA Technical Reports Server (NTRS)
Migneault, Gerard E.
1987-01-01
Emulation techniques can be a solution to a difficulty that arises in the analysis of the reliability of guidance and control computer systems for future commercial aircraft. Described here is the difficulty, the lack of credibility of reliability estimates obtained by analytical modeling techniques. The difficulty is an unavoidable consequence of the following: (1) a reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Use of emulation techniques for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques is then discussed. Finally several examples of the application of emulation techniques are described.
NASA Technical Reports Server (NTRS)
Morehouse, Dennis V.
2006-01-01
In order to perform public risk analyses for vehicles containing Flight Termination Systems (FTS), the analyst must know the reliability of each of the components of the FTS. These systems are typically divided into two segments: a transmitter system and associated equipment, typically in a ground station or on a support aircraft, and a receiver system and associated equipment on the target vehicle. This analysis estimates the reliability of the NASA DFRC flight termination system ground transmitter segment for use in the larger risk analysis, and compares the results against two established Department of Defense availability standards for such equipment.
Ren, Pengyu; Li, Bowen; Dong, Shiyao; Chen, Lin; Zhang, Yuelin
2018-01-01
Although many mathematical methods have been used to analyze neural activity under sinusoidal stimulation within the linear response range in the vestibular system, the reliabilities of these methods have not been reported, especially in the nonlinear response range. Here we chose a nonlinear least-squares algorithm (NLSA) with a sinusoidal model to analyze the neural response of semicircular canal neurons (SCNs) during sinusoidal rotational stimulation (SRS) over a nonlinear response range. Our aim was to acquire a reliable mathematical method for data analysis under SRS in the vestibular system. Our data indicated that the reliability of this method over an entire SCN population was quite satisfactory. However, the reliability depended strongly and negatively on neural discharge regularity. In addition, the stimulation parameters were vital factors influencing the reliability: frequency had a significant negative effect, whereas amplitude had a conspicuous positive effect. Thus, NLSA with a sinusoidal model proved to be a reliable mathematical tool for analyzing neural response activity under SRS in the vestibular system, and it is most suitable for stimulation with low frequency but high amplitude, suggesting that the method can be used in the nonlinear response range. This method overcomes the previous restriction of neural activity analysis to the linear response range and provides a solid foundation for future study of the nonlinear response range in the vestibular system.
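One detail worth noting: when the stimulation frequency is known, the sinusoidal model y = A sin(ωt + φ) + C is linear in the reparameterized coefficients of y = a sin(ωt) + b cos(ωt) + c, so ordinary least squares recovers amplitude and phase without iterative fitting. The sketch below uses that linearization on synthetic firing-rate data; it is an illustration of the model, not the authors' analysis pipeline, and all numbers are invented.

```python
import numpy as np

def fit_sinusoid(t, y, freq_hz):
    """Least-squares fit of y ~ a*sin(wt) + b*cos(wt) + c for a known
    stimulation frequency; returns amplitude, phase, and baseline."""
    w = 2.0 * np.pi * freq_hz
    X = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a), c

# synthetic firing-rate response: 40 sp/s baseline, 12 sp/s modulation,
# 0.3 rad phase lead, 0.5 Hz stimulation
t = np.linspace(0.0, 10.0, 500)
y = 40.0 + 12.0 * np.sin(2.0 * np.pi * 0.5 * t + 0.3)
amp, phase, base = fit_sinusoid(t, y, 0.5)
```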
Handbook of experiences in the design and installation of solar heating and cooling systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, D.S.; Oberoi, H.S.
1980-07-01
A large array of problems encountered is detailed, including design errors, installation mistakes, cases of inadequate durability of materials and unacceptable reliability of components, and wide variations in the performance and operation of different solar systems. Durability, reliability, and design problems are reviewed for solar collector subsystems, heat transfer fluids, thermal storage, passive solar components, piping/ducting, and reliability/operational problems. The following performance topics are covered: criteria for design and performance analysis, domestic hot water systems, passive space heating systems, active space heating systems, space cooling systems, analysis of systems performance, and performance evaluations. (MHR)
Liu, Zengkai; Liu, Yonghong; Cai, Baoping
2014-01-01
Reliability analysis of the electrical control system of a subsea blowout preventer (BOP) stack is carried out based on Markov method. For the subsea BOP electrical control system used in the current work, the 3-2-1-0 and 3-2-0 input voting schemes are available. The effects of the voting schemes on system performance are evaluated based on Markov models. In addition, the effects of failure rates of the modules and repair time on system reliability indices are also investigated. PMID:25409010
Reliability and coverage analysis of non-repairable fault-tolerant memory systems
NASA Technical Reports Server (NTRS)
Cox, G. W.; Carroll, B. D.
1976-01-01
A method was developed for the construction of probabilistic state-space models for nonrepairable systems. Models were developed for several systems which achieved reliability improvement by means of error coding, modularized sparing, massive replication, and other fault-tolerant techniques. From these models, sets of reliability and coverage equations for the systems were derived. Comparative analyses of the systems were performed using these equation sets. In addition, the effects of varying subunit reliabilities on system reliability and coverage were described. The results of these analyses indicated that a significant gain in system reliability may be achieved by combinations of modularized sparing, error coding, and software error control. For sufficiently reliable system subunits, this gain may far exceed the reliability gain achieved by massive replication techniques, yet result in a considerable saving in system cost.
Reliability Analysis for AFTI-F16 SRFCS Using ASSIST and SURE
NASA Technical Reports Server (NTRS)
Wu, N. Eva
2001-01-01
This paper reports the results of a study on reliability analysis of an AFTI-F16 Self-Repairing Flight Control System (SRFCS) using the software tools SURE (Semi-Markov Unreliability Range Evaluator) and ASSIST (Abstract Semi-Markov Specification Interface to the SURE Tool). The purpose of the study is to investigate the potential utility of these software tools in the ongoing effort of the NASA Aviation Safety Program, where the class of systems served must be extended beyond the originally intended class of electronic digital processors. The study concludes that SURE and ASSIST are applicable to reliability analysis of flight control systems. They are especially efficient for sensitivity analysis that quantifies the dependence of system reliability on model parameters. The study also confirms an earlier finding on the dominant role of a parameter called failure coverage. The paper remarks on issues related to the improvement of coverage and the optimization of redundancy level.
Reliability Impacts in Life Support Architecture and Technology Selection
NASA Technical Reports Server (NTRS)
Lange, Kevin E.; Anderson, Molly S.
2011-01-01
Equivalent System Mass (ESM) and reliability estimates were performed for different life support architectures based primarily on International Space Station (ISS) technologies. The analysis was applied to a hypothetical 1-year deep-space mission. High-level fault trees were initially developed relating loss of life support functionality to the Loss of Crew (LOC) top event. System reliability was then expressed as the complement (nonoccurrence) of this event and was increased through the addition of redundancy and spares, which added to the ESM. The reliability analysis assumed constant failure rates and used current projected values of the Mean Time Between Failures (MTBF) from an ISS database where available. Results were obtained showing the dependence of ESM on system reliability for each architecture. Although the analysis employed numerous simplifications and many of the input parameters are considered to have high uncertainty, the results strongly suggest that achieving the necessary reliabilities for deep-space missions will add substantially to the life support system mass. As a point of reference, the reliability for a single-string architecture using the most regenerative combination of ISS technologies, without unscheduled replacement spares, was estimated to be less than 1%. The results also demonstrate how adding technologies in a serial manner to increase system closure forces the reliability of other life support technologies to increase in order to meet the system reliability requirement. This increase in reliability results in increased mass for multiple technologies through the need for additional spares. Alternative parallel architecture approaches, and approaches with the potential to do more with less, are discussed. The tall poles in life support ESM are also reexamined in light of estimated reliability impacts.
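The spares-versus-reliability trade in the abstract has a standard closed form under the constant-failure-rate assumption the analysis uses: with failures arriving as a Poisson process and instant replacement, a unit backed by n spares survives the mission if at most n failures occur. The function below encodes that relation; the MTBF and mission duration values in the example are placeholders, not ISS database figures.

```python
import math

def reliability_with_spares(mtbf_hours, mission_hours, n_spares):
    """P(the function survives the mission): at most n_spares failures
    occur, with Poisson failure counts (constant failure rate) and
    instant replacement from the spares pool."""
    lam_t = mission_hours / mtbf_hours           # expected number of failures
    return math.exp(-lam_t) * sum(
        lam_t ** k / math.factorial(k) for k in range(n_spares + 1))

# placeholder numbers: a 1000 h MTBF technology on a one-year (8760 h) mission
r0 = reliability_with_spares(1000.0, 8760.0, 0)    # no spares
r12 = reliability_with_spares(1000.0, 8760.0, 12)  # a dozen spares
```

Multiplying such terms across the technologies in an architecture gives the system-level reliability, which is why raising closure forces every serial technology to carry more spares.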
Method of Testing and Predicting Failures of Electronic Mechanical Systems
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, Frances A.
1996-01-01
A method employing a knowledge base of human expertise, comprising a reliability model analysis implemented for diagnostic routines, is disclosed. The reliability analysis comprises digraph models that determine target events created by hardware failures, human actions, and other factors affecting system operation. The reliability analysis contains a wealth of human expertise information that is used to build automatic diagnostic routines and that provides a knowledge base which can be used to solve other artificial intelligence problems.
Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT
NASA Technical Reports Server (NTRS)
Fagundo, Arturo
1994-01-01
Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems, Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical errors due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.
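The hierarchical idea can be sketched on a toy model: solve a small subsystem Markov chain transiently, pass its aggregate reliability upward, and combine at the next level. The rates, the 3-state duplex structure, and the series recombination are all illustrative assumptions, not the thesis's actual system; the fixed-step Runge-Kutta integration also makes the roundoff/step-size trade-off mentioned in the abstract concrete.

```python
import numpy as np

# Hypothetical 3-state Markov model of a repairable duplex subsystem:
# state 0: both units good, 1: one failed (repairable), 2: failed (absorbing)
lam = 1e-3   # per-unit failure rate, per hour (assumed)
mu = 1e-1    # repair rate of the degraded state (assumed)
Q = np.array([[-2*lam,      2*lam,  0.0],
              [    mu, -(mu+lam),  lam],
              [   0.0,       0.0,  0.0]])  # generator matrix (rows sum to 0)

def transient(Q: np.ndarray, t: float, steps: int = 20000) -> np.ndarray:
    """Integrate dp/dt = p Q with classical 4th-order Runge-Kutta.
    The step count controls the integration error discussed in the thesis."""
    p = np.array([1.0, 0.0, 0.0])
    h = t / steps
    for _ in range(steps):
        k1 = p @ Q
        k2 = (p + h/2*k1) @ Q
        k3 = (p + h/2*k2) @ Q
        k4 = (p + h*k3) @ Q
        p = p + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return p

p = transient(Q, t=1000.0)
subsystem_reliability = p[0] + p[1]      # aggregate state passed upward
# Hierarchical step: combine two independent, identical subsystems in series
system_reliability = subsystem_reliability**2
print(f"subsystem R = {subsystem_reliability:.5f}, system R = {system_reliability:.5f}")
```

Because probability mass is conserved by the generator (rows sum to zero), the integrated state vector should still sum to one; checking that sum is a cheap sanity test on the numerical error.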
Assessment of reliability and safety of a manufacturing system with sequential failures is an important issue in industry, since the reliability and safety of the system depend not only on all failed states of system components, but also on the sequence of occurrences of those...
Culture Representation in Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Gertman; Julie Marble; Steven Novack
Understanding human-system response is critical to being able to plan and predict mission success in the modern battlespace. Commonly, human reliability analysis has been used to predict failures of human performance in complex, critical systems. However, most human reliability methods fail to take culture into account. This paper takes an easily understood state of the art human reliability analysis method and extends that method to account for the influence of culture, including acceptance of new technology, upon performance. The cultural parameters used to modify the human reliability analysis were determined from two standard industry approaches to cultural assessment: Hofstede’s (1991) cultural factors and Davis’ (1989) technology acceptance model (TAM). The result is called the Culture Adjustment Method (CAM). An example is presented that (1) reviews human reliability assessment with and without cultural attributes for a Supervisory Control and Data Acquisition (SCADA) system attack, (2) demonstrates how country specific information can be used to increase the realism of HRA modeling, and (3) discusses the differences in human error probability estimates arising from cultural differences.
On the next generation of reliability analysis tools
NASA Technical Reports Server (NTRS)
Babcock, Philip S., IV; Leong, Frank; Gai, Eli
1987-01-01
The current generation of reliability analysis tools concentrates on improving the efficiency of the description and solution of the fault-handling processes and providing a solution algorithm for the full system model. The tools have improved user efficiency in these areas to the extent that the problem of constructing the fault-occurrence model is now the major analysis bottleneck. For the next generation of reliability tools, it is proposed that techniques be developed to improve the efficiency of the fault-occurrence model generation and input. Further, the goal is to provide an environment permitting a user to provide a top-down design description of the system from which a Markov reliability model is automatically constructed. Thus, the user is relieved of the tedious and error-prone process of model construction, permitting an efficient exploration of the design space, and an independent validation of the system's operation is obtained. An additional benefit of automating the model construction process is the opportunity to reduce the specialized knowledge required. Hence, the user need only be an expert in the system he is analyzing; the expertise in reliability analysis techniques is supplied.
Phased-mission system analysis using Boolean algebraic methods
NASA Technical Reports Server (NTRS)
Somani, Arun K.; Trivedi, Kishor S.
1993-01-01
Most reliability analysis techniques and tools assume that a system is used for a mission consisting of a single phase. However, multiple phases are natural in many missions. The failure rates of components, system configuration, and success criteria may vary from phase to phase. In addition, the duration of a phase may be deterministic or random. Recently, several researchers have addressed the problem of reliability analysis of such systems using a variety of methods. A new technique for phased-mission system reliability analysis based on Boolean algebraic methods is described. Our technique is computationally efficient and is applicable to a large class of systems for which the failure criterion in each phase can be expressed as a fault tree (or an equivalent representation). Our technique avoids the state-space explosion that commonly plagues Markov chain-based analysis. A phase algebra to account for the effects of variable configurations and success criteria from phase to phase was developed. Our technique yields exact (as opposed to approximate) results. The use of our technique is demonstrated by means of an example, and numerical results are presented to show the effects of mission phases on the system reliability.
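For a tiny mission, exact phased-mission reliability can also be obtained by direct enumeration: condition on which phase each non-repairable component fails in, then check every phase's success criterion in sequence. This is a brute-force sketch of the same problem the paper's phase algebra solves efficiently, not the paper's Boolean method itself; the components, rates, and phase criteria below are invented for illustration.

```python
import math
from itertools import product

# Hypothetical 2-phase mission with components A, B, C (constant failure rates)
rates = {"A": 1e-3, "B": 2e-3, "C": 1e-3}   # per hour (assumed)
durations = [100.0, 50.0]                   # phase lengths (hours)
# Success criterion per phase, evaluated on the set of still-working components
criteria = [
    lambda up: "A" in up and ("B" in up or "C" in up),  # phase 1: A and (B or C)
    lambda up: "B" in up or "C" in up,                  # phase 2: B or C
]

def fail_phase_prob(rate: float, phase: int) -> float:
    """P(component fails during the given 0-indexed phase);
    phase == len(durations) means it survives the whole mission."""
    start = sum(durations[:phase])
    if phase == len(durations):
        return math.exp(-rate * start)
    return math.exp(-rate * start) - math.exp(-rate * (start + durations[phase]))

names = list(rates)
n = len(durations)
mission_r = 0.0
# Enumerate which phase each component fails in (index n == survives mission)
for combo in product(range(n + 1), repeat=len(names)):
    prob = math.prod(fail_phase_prob(rates[c], ph)
                     for c, ph in zip(names, combo))
    # For coherent criteria and non-repairable components, success throughout
    # phase p is equivalent to success with the components still up at its end
    ok = all(criteria[p]({c for c, ph in zip(names, combo) if ph > p})
             for p in range(n))
    if ok:
        mission_r += prob
print(f"phased-mission reliability: {mission_r:.6f}")
```

Enumeration grows as (phases+1)^components, which is exactly why the exact Boolean/phase-algebra technique in the paper matters for realistic system sizes.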
Pocket Handbook on Reliability
1975-09-01
exponential distributions, Weibull distribution, estimating reliability, confidence intervals, reliability growth, OC curves, Bayesian analysis. ... introduction for those not familiar with reliability and a good refresher for those who are currently working in the area. LEWIS NERI, CHIEF ... includes one or both of the following objectives: a) prediction of the current system reliability, b) projection of the system reliability for some future
System design of the Pioneer Venus spacecraft. Volume 3: Systems analysis
NASA Technical Reports Server (NTRS)
Fisher, J. N.
1973-01-01
The mission, systems, operations, ground systems, and reliability analysis of the Thor/Delta baseline design used for the Pioneer Space Probe are discussed. Tradeoff decisions concerning spin axis orientation, bus antenna design, communication system design, probe descent, and reduced science payload are analyzed. The reliability analysis is made for the probe bus mission, large probe mission, and small probe mission. Detailed mission sequences were established to identify critical areas and provide phasing of critical operation.
Gao, Zhongyang; Song, Hui; Ren, Fenggang; Li, Yuhuan; Wang, Dong; He, Xijing
2017-12-01
The aim of the present study was to evaluate the reliability of the Cartesian Optoelectronic Dynamic Anthropometer (CODA) motion system in measuring the cervical range of motion (ROM) and verify the construct validity of the CODA motion system. A total of 26 patients with cervical spondylosis and 22 patients with anterior cervical fusion were enrolled and the CODA motion analysis system was used to measure the three-dimensional cervical ROM. Intra- and inter-rater reliability was assessed by intraclass correlation coefficients (ICCs), standard error of measurement (SEm), Limits of Agreements (LOA) and minimal detectable change (MDC). Independent samples t-tests were performed to examine the differences of cervical ROM between cervical spondylosis and anterior cervical fusion patients. The results revealed that in the cervical spondylosis group, the reliability was almost perfect (intra-rater reliability: ICC, 0.87-0.95; LOA, -12.86-13.70; SEm, 2.97-4.58; inter-rater reliability: ICC, 0.84-0.95; LOA, -13.09-13.48; SEm, 3.13-4.32). In the anterior cervical fusion group, the reliability was high (intra-rater reliability: ICC, 0.88-0.97; LOA, -10.65-11.08; SEm, 2.10-3.77; inter-rater reliability: ICC, 0.86-0.96; LOA, -10.91-13.66; SEm, 2.20-4.45). The cervical ROM in the cervical spondylosis group was significantly higher than that in the anterior cervical fusion group in all directions except for left rotation. In conclusion, the CODA motion analysis system is highly reliable in measuring cervical ROM and the construct validity was verified, as the system was sufficiently sensitive to distinguish between the cervical spondylosis and anterior cervical fusion groups based on their ROM.
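The ICC figures reported here come from a standard two-way random-effects ANOVA decomposition. A minimal sketch of ICC(2,1) (absolute agreement, single measurement) follows; the ROM values are invented placeholders, not the study's data, and the specific ICC form used by the authors is an assumption.

```python
import numpy as np

def icc_2_1(x) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    x is an (n subjects x k raters) array of measurements."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)               # mean square, subjects
    msc = ss_cols / (k - 1)               # mean square, raters
    mse = ss_err / ((n - 1) * (k - 1))    # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical repeated ROM measurements (degrees): 5 subjects x 2 raters
data = [[45, 47], [50, 51], [38, 40], [60, 58], [52, 53]]
icc = icc_2_1(data)
print(f"ICC(2,1) = {icc:.3f}")
```

With the ICC in hand, the SEm reported in the abstract follows as the between-measurement standard deviation times sqrt(1 - ICC).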
Remotely piloted vehicle: Application of the GRASP analysis method
NASA Technical Reports Server (NTRS)
Andre, W. L.; Morris, J. B.
1981-01-01
The application of General Reliability Analysis Simulation Program (GRASP) to the remotely piloted vehicle (RPV) system is discussed. The model simulates the field operation of the RPV system. By using individual component reliabilities, the overall reliability of the RPV system is determined. The results of the simulations are given in operational days. The model represented is only a basis from which more detailed work could progress. The RPV system in this model is based on preliminary specifications and estimated values. The use of GRASP from basic system definition, to model input, and to model verification is demonstrated.
Performance Evaluation of Reliable Multicast Protocol for Checkout and Launch Control Systems
NASA Technical Reports Server (NTRS)
Shu, Wei Wennie; Porter, John
2000-01-01
The overall objective of this project is to study reliability and performance of Real Time Critical Network (RTCN) for checkout and launch control systems (CLCS). The major tasks include reliability and performance evaluation of Reliable Multicast (RM) package and fault tolerance analysis and design of dual redundant network architecture.
User's guide to the Reliability Estimation System Testbed (REST)
NASA Technical Reports Server (NTRS)
Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam
1992-01-01
The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.
Reliability analysis of a robotic system using hybridized technique
NASA Astrophysics Data System (ADS)
Kumar, Naveen; Komal; Lather, J. S.
2017-09-01
In this manuscript, the reliability of a robotic system has been analyzed using the available data (containing vagueness, uncertainty, etc.). Quantification of involved uncertainties is done through data fuzzification using triangular fuzzy numbers with known spreads as suggested by system experts. With fuzzified data, if the existing fuzzy lambda-tau (FLT) technique is employed, then the computed reliability parameters have a wide range of predictions. Therefore, the decision-maker cannot suggest any specific and influential managerial strategy to prevent unexpected failures and consequently to improve complex system performance. To overcome this problem, the present study utilizes a hybridized technique. With this technique, fuzzy set theory is utilized to quantify uncertainties, a fault tree is utilized for the system modeling, the lambda-tau method is utilized to formulate mathematical expressions for failure/repair rates of the system, and a genetic algorithm is utilized to solve the established nonlinear programming problem. Different reliability parameters of a robotic system are computed and the results are compared with the existing technique. The components of the robotic system follow an exponential distribution, i.e., have constant failure rates. Sensitivity analysis is also performed and the impact on system mean time between failures (MTBF) is addressed by varying other reliability parameters. Based on the analysis, some influential suggestions are given to improve the system performance.
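The fuzzification step can be illustrated with alpha-cut interval arithmetic on triangular fuzzy failure rates. The sketch below shows only the simplest lambda-tau gate rule (an OR gate, where input failure rates add); the spreads and rates are assumed values, and the full FLT/genetic-algorithm hybridization of the paper is not reproduced.

```python
def alpha_cut(tfn, alpha):
    """Alpha-cut interval of a triangular fuzzy number (l, m, u)."""
    l, m, u = tfn
    return (l + alpha * (m - l), u - alpha * (u - m))

def or_gate_rate(tfn_rates, alpha):
    """Lambda-tau OR-gate rule: the system failure rate is the sum of the
    input failure rates; interval arithmetic propagates the fuzziness."""
    los, his = zip(*(alpha_cut(t, alpha) for t in tfn_rates))
    return (sum(los), sum(his))

# Hypothetical +/-15% spreads around crisp failure rates (per hour)
rates = [(0.85e-3, 1e-3, 1.15e-3), (1.7e-3, 2e-3, 2.3e-3)]
for a in (0.0, 0.5, 1.0):
    lo, hi = or_gate_rate(rates, a)
    print(f"alpha={a}: system failure rate in [{lo:.2e}, {hi:.2e}]")
```

At alpha = 1 the interval collapses to the crisp sum; at alpha = 0 it is widest, which is exactly the "wide range of predictions" the paper's hybridized technique then narrows.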
Optimization of life support systems and their systems reliability
NASA Technical Reports Server (NTRS)
Fan, L. T.; Hwang, C. L.; Erickson, L. E.
1971-01-01
The identification, analysis, and optimization of life support systems and subsystems have been investigated. For each system or subsystem that has been considered, the procedure involves the establishment of a set of system equations (or mathematical model) based on theory and experimental evidence; the analysis and simulation of the model; the optimization of the operation, control, and reliability; analysis of sensitivity of the system based on the model; and, if possible, experimental verification of the theoretical and computational results. Research activities include: (1) modeling of air flow in a confined space; (2) review of several different gas-liquid contactors utilizing centrifugal force; (3) review of carbon dioxide reduction contactors in space vehicles and other enclosed structures; (4) application of modern optimal control theory to environmental control of confined spaces; (5) optimal control of a class of nonlinear diffusional distributed parameter systems; (6) optimization of system reliability of life support systems and sub-systems; (7) modeling, simulation and optimal control of the human thermal system; and (8) analysis and optimization of the water-vapor electrolysis cell.
The Challenges of Credible Thermal Protection System Reliability Quantification
NASA Technical Reports Server (NTRS)
Green, Lawrence L.
2013-01-01
The paper discusses several of the challenges associated with developing a credible reliability estimate for a human-rated crew capsule thermal protection system. The process of developing such a credible estimate is subject to the quantification, modeling and propagation of numerous uncertainties within a probabilistic analysis. The development of specific investment recommendations, to improve the reliability prediction, among various potential testing and programmatic options is then accomplished through Bayesian analysis.
Software reliability models for fault-tolerant avionics computers and related topics
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1987-01-01
Software reliability research is briefly described. General research topics are reliability growth models, quality of software reliability prediction, the complete monotonicity property of reliability growth, conceptual modelling of software failure behavior, assurance of ultrahigh reliability, and analysis techniques for fault-tolerant systems.
Validation and Improvement of Reliability Methods for Air Force Building Systems
...focusing primarily on HVAC systems. This research used contingency analysis to assess the performance of each model for HVAC systems at six Air Force... probabilistic model produced inflated reliability calculations for HVAC systems. In light of these findings, this research employed a stochastic method, a Nonhomogeneous Poisson Process (NHPP), in an attempt to produce accurate HVAC system reliability calculations. This effort ultimately concluded that
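The NHPP approach referenced here is commonly fitted as the power-law (Crow-AMSAA) process. A minimal sketch of its maximum-likelihood estimators, time-truncated at observation time T, is below; the repair times are invented placeholders, not Air Force data, and the specific NHPP form used by the report is an assumption.

```python
import math

def power_law_nhpp_mle(failure_times, T):
    """MLE for the power-law NHPP (Crow-AMSAA) with intensity
    lambda(t) = lam * beta * t**(beta - 1), time-truncated at T."""
    n = len(failure_times)
    beta = n / sum(math.log(T / t) for t in failure_times)
    lam = n / T**beta
    return lam, beta

# Hypothetical HVAC repair times (cumulative operating hours) at one site
times = [120, 480, 950, 1510, 2100, 2600, 3400, 4100, 4700]
lam, beta = power_law_nhpp_mle(times, T=5000.0)
# beta < 1: reliability growth (lengthening gaps); beta > 1: wear-out
expected_by_T = lam * 5000.0**beta   # equals n by construction of the MLE
print(f"beta = {beta:.2f}, expected failures by T = {expected_by_T:.1f}")
```

Here the interarrival gaps lengthen over time, so the fitted beta comes out below one; a renewal or constant-rate model would miss exactly this trend, which is the inflation problem the report describes.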
Boerebach, Benjamin C M; Lombarts, Kiki M J M H; Arah, Onyebuchi A
2016-03-01
The System for Evaluation of Teaching Qualities (SETQ) was developed as a formative system for the continuous evaluation and development of physicians' teaching performance in graduate medical training. It has been seven years since the introduction and initial exploratory psychometric analysis of the SETQ questionnaires. This study investigates the validity and reliability of the SETQ questionnaires across hospitals and medical specialties using confirmatory factor analyses (CFAs), reliability analysis, and generalizability analysis. The SETQ questionnaires were tested in a sample of 3,025 physicians and 2,848 trainees in 46 hospitals. The CFA revealed acceptable fit of the data to the previously identified five-factor model. The high internal consistency estimates suggest satisfactory reliability of the subscales. These results provide robust evidence for the validity and reliability of the SETQ questionnaires for evaluating physicians' teaching performance. © The Author(s) 2014.
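The "high internal consistency estimates" mentioned for the subscales are conventionally Cronbach's alpha. A minimal sketch follows; the ratings are invented placeholders, not SETQ data, and treating alpha as the study's consistency statistic is an assumption.

```python
import numpy as np

def cronbach_alpha(items) -> float:
    """Internal consistency of a subscale.
    items is an (n respondents x k items) array of ratings."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)        # variance of each item
    total_var = x.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point ratings: 6 trainees x 4 items of one SETQ subscale
ratings = [[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4],
           [2, 3, 2, 3], [4, 4, 5, 4], [3, 4, 3, 3]]
alpha = cronbach_alpha(ratings)
print(f"alpha = {alpha:.2f}")
```

Values around 0.7 or above are usually read as satisfactory subscale reliability, which is the criterion the abstract's "high internal consistency" refers to.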
Reliability culture at La Silla Paranal Observatory
NASA Astrophysics Data System (ADS)
Gonzalez, Sergio
2010-07-01
The Maintenance Department at the La Silla - Paranal Observatory has been an important base for keeping the operations of the observatory at a good level of reliability and availability. Several strategies have been implemented and improved in order to meet these requirements and keep systems and equipment working properly when required. One of the latest improvements has been the introduction of a reliability culture, which involves much more than simply speaking about reliability concepts: it involves the use of technologies, data collection, data analysis, decision making, committees concentrating on the analysis of failure modes and how they can be eliminated, aligning the results with the requirements of our internal partners, and establishing steps to achieve success. Some of these steps have already been implemented: data collection, use of technologies, analysis of data, development of priority tools, committees dedicated to analyzing data, and people dedicated to reliability analysis. This has permitted us to optimize our processes, analyze where we can improve, avoid functional failures, and reduce the rate of failures in several systems and subsystems; all this has had a positive impact in terms of results for our Observatory. All these tools are part of the reliability culture that allows our system to operate with a high level of reliability and availability.
An overview of the mathematical and statistical analysis component of RICIS
NASA Technical Reports Server (NTRS)
Hallum, Cecil R.
1987-01-01
Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.
1977-03-01
system acquisition cycle since they provide necessary inputs to comparative analyses, cost/benefit trade-offs, and system simulations. In addition, the...Management Program from above performs the function of analyzing the system trade-offs with respect to reliability to determine a reliability goal...one encounters the problem of comparing present dollars with future dollars. In this analysis, we are trading off costs expended initially (or at
Comprehensive Design Reliability Activities for Aerospace Propulsion Systems
NASA Technical Reports Server (NTRS)
Christenson, R. L.; Whitley, M. R.; Knight, K. C.
2000-01-01
This technical publication describes the methodology, model, software tool, input data, and analysis results that support aerospace design reliability studies. The focus of these activities is on propulsion systems mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided: one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.
Management of reliability and maintainability; a disciplined approach to fleet readiness
NASA Technical Reports Server (NTRS)
Willoughby, W. J., Jr.
1981-01-01
Material acquisition fundamentals were reviewed and include: mission profile definition, stress analysis, derating criteria, circuit reliability, failure modes, and worst case analysis. Military system reliability was examined with emphasis on the sparing of equipment. The Navy's organizational strategy for 1980 is presented.
Estimates Of The Orbiter RSI Thermal Protection System Thermal Reliability
NASA Technical Reports Server (NTRS)
Kolodziej, P.; Rasky, D. J.
2002-01-01
In support of the Space Shuttle Orbiter post-flight inspection, structure temperatures are recorded at selected positions on the windward, leeward, starboard and port surfaces. Statistical analysis of this flight data and a non-dimensional load interference (NDLI) method are used to estimate the thermal reliability at positions where reusable surface insulation (RSI) is installed. In this analysis, structure temperatures that exceed the design limit define the critical failure mode. At thirty-three positions the RSI thermal reliability is greater than 0.999999 for the missions studied. This is not the overall system level reliability of the thermal protection system installed on an Orbiter. The results from two Orbiters, OV-102 and OV-105, are in good agreement. The original RSI designs on the OV-102 Orbital Maneuvering System pods, which had low reliability, were significantly improved on OV-105. The NDLI method was also used to estimate thermal reliability from an assessment of TPS uncertainties that was completed shortly before the first Orbiter flight. Results from the flight data analysis and the pre-flight assessment agree at several positions near each other. The NDLI method is also effective for optimizing RSI designs to provide uniform thermal reliability on the acreage surface of reusable launch vehicles.
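Load-interference reliability of this kind is often computed, under a normality assumption, as the probability that capability exceeds load. The sketch below uses that textbook normal stress-strength form with invented temperature statistics; it is not the paper's non-dimensional formulation or its flight data.

```python
import math

def normal_interference_reliability(mu_load, sd_load, mu_cap, sd_cap):
    """Load-interference reliability for independent normal load and
    capability: R = P(capability > load) = Phi(z)."""
    z = (mu_cap - mu_load) / math.hypot(sd_load, sd_cap)
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical structure temperatures (deg F): entry heating vs design limit
r = normal_interference_reliability(mu_load=260.0, sd_load=15.0,
                                    mu_cap=350.0, sd_cap=10.0)
print(f"thermal reliability: {r:.6f}")
```

With roughly five standard deviations of margin between the mean load and the limit, the computed reliability lands above 0.999999, the same order as the per-position values quoted in the abstract.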
Charlton, Paula C; Mentiplay, Benjamin F; Pua, Yong-Hao; Clark, Ross A
2015-05-01
Traditional methods of assessing joint range of motion (ROM) involve specialized tools that may not be widely available to clinicians. This study assesses the reliability and validity of a custom Smartphone application for assessing hip joint range of motion. Intra-tester reliability with concurrent validity. Passive hip joint range of motion was recorded for seven different movements in 20 males on two separate occasions. Data from a Smartphone, bubble inclinometer and a three dimensional motion analysis (3DMA) system were collected simultaneously. Intraclass correlation coefficients (ICCs), coefficients of variation (CV) and standard error of measurement (SEM) were used to assess reliability. To assess validity of the Smartphone application and the bubble inclinometer against the three dimensional motion analysis system, intraclass correlation coefficients and fixed and proportional biases were used. The Smartphone demonstrated good to excellent reliability (ICCs>0.75) for four out of the seven movements, and moderate to good reliability for the remaining three movements (ICC=0.63-0.68). Additionally, the Smartphone application displayed comparable reliability to the bubble inclinometer. The Smartphone application displayed excellent validity when compared to the three dimensional motion analysis system for all movements (ICCs>0.88) except one, which displayed moderate to good validity (ICC=0.71). Smartphones are portable and widely available tools that are mostly reliable and valid for assessing passive hip range of motion, with potential for large-scale use when a bubble inclinometer is not available. However, caution must be taken in its implementation as some movement axes demonstrated only moderate reliability. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Reliability program requirements for aeronautical and space system contractors
NASA Technical Reports Server (NTRS)
1987-01-01
General reliability program requirements for NASA contracts involving the design, development, fabrication, test, and/or use of aeronautical and space systems including critical ground support equipment are prescribed. The reliability program requirements require (1) thorough planning and effective management of the reliability effort; (2) definition of the major reliability tasks and their place as an integral part of the design and development process; (3) planning and evaluating the reliability of the system and its elements (including effects of software interfaces) through a program of analysis, review, and test; and (4) timely status indication by formal documentation and other reporting to facilitate control of the reliability program.
Kepler, Christopher K; Vaccaro, Alexander R; Koerner, John D; Dvorak, Marcel F; Kandziora, Frank; Rajasekaran, Shanmuganathan; Aarabi, Bizhan; Vialle, Luiz R; Fehlings, Michael G; Schroeder, Gregory D; Reinhold, Maximilian; Schnake, Klaus John; Bellabarba, Carlo; Cumhur Öner, F
2016-04-01
The aims of this study were (1) to demonstrate that the AOSpine thoracolumbar spine injury classification system can be reliably applied by an international group of surgeons and (2) to delineate those injury types which are difficult for spine surgeons to classify reliably. A previously described classification system of thoracolumbar injuries, which consists of a morphologic classification of the fracture, a grading system for the neurologic status and relevant patient-specific modifiers, was applied to 25 cases by 100 spinal surgeons from across the world twice independently, in grading sessions 1 month apart. The results were analyzed for classification reliability using the Kappa coefficient (κ). The overall Kappa coefficient for all cases was 0.56, which represents moderate reliability. Kappa values describing interobserver agreement were 0.80 for type A injuries, 0.68 for type B injuries and 0.72 for type C injuries, all representing substantial reliability. The lowest level of agreement for specific subtypes was for fracture subtype A4 (Kappa = 0.19). Intraobserver analysis demonstrated an overall average Kappa statistic for subtype grading of 0.68, also representing substantial reproducibility. In a worldwide sample of spinal surgeons without previous exposure to the recently described AOSpine Thoracolumbar Spine Injury Classification System, we demonstrated moderate interobserver and substantial intraobserver reliability. These results suggest that most spine surgeons can apply this system to spine trauma patients as reliably as or more reliably than previously described systems.
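The Kappa statistic used throughout this study corrects raw agreement for chance. A minimal two-rater (Cohen's kappa) sketch follows; the fracture grades below are invented examples, not the study's cases, and the study's multi-rater pooling is not reproduced.

```python
from collections import Counter

def cohens_kappa(rater1, rater2) -> float:
    """Cohen's kappa for two raters assigning categorical labels."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # chance agreement from the raters' marginal label frequencies
    expected = sum(c1[label] * c2[label] for label in c1) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical AOSpine morphologic grades from two surgeons for 10 fractures
r1 = ["A3", "A4", "B2", "A1", "C", "A4", "B1", "A3", "A1", "B2"]
r2 = ["A3", "A3", "B2", "A1", "C", "A4", "B2", "A3", "A1", "B2"]
kappa = cohens_kappa(r1, r2)
print(f"kappa = {kappa:.2f}")
```

On the usual Landis-Koch scale, values between 0.61 and 0.80 are read as substantial agreement, the band into which the study's type A, B and C results fall.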
NASA Technical Reports Server (NTRS)
White, A. L.
1983-01-01
This paper examines the reliability of three architectures for six components. For each architecture, the probabilities of the failure states are given by algebraic formulas involving the component fault rate, the system recovery rate, and the operating time. The dominant failure modes are identified, and the change in reliability is considered with respect to changes in fault rate, recovery rate, and operating time. The major conclusions concern the influence of system architecture on failure modes and parameter requirements. Without this knowledge, a system designer may pick an inappropriate structure.
NASA Technical Reports Server (NTRS)
Motyka, P.
1983-01-01
A methodology for quantitatively analyzing the reliability of redundant avionics systems, in general, and the dual, separated Redundant Strapdown Inertial Measurement Unit (RSDIMU), in particular, is presented. The RSDIMU is described and a candidate failure detection and isolation system presented. A Markov reliability model is employed. The operational states of the system are defined and the single-step state transition diagrams discussed. Graphical results, showing the impact of major system parameters on the reliability of the RSDIMU system, are presented and discussed.
Simplified Phased-Mission System Analysis for Systems with Independent Component Repairs
NASA Technical Reports Server (NTRS)
Somani, Arun K.
1996-01-01
Accurate reliability analysis of a system requires accounting for all major variations in the system's operation. Most reliability analyses assume that the system configuration, success criteria, and component behavior remain the same. However, multiple phases are natural. We present a new computationally efficient technique for analysis of phased-mission systems where the operational states of a system can be described by combinations of component states (such as fault trees or assertions). Moreover, individual components may be repaired, if failed, as part of system operation, but repairs are independent of the system state. For repairable systems, Markov analysis techniques are used, but they suffer from state-space explosion, which limits the size of systems that can be analyzed and is computationally expensive. We avoid the state-space explosion. The phase algebra is used to account for the effects of variable configurations, repairs, and success criteria from phase to phase. Our technique yields exact (as opposed to approximate) results. We demonstrate our technique by means of several examples and present numerical results to show the effects of phases and repairs on the system reliability/availability.
Total systems design analysis of high performance structures
NASA Technical Reports Server (NTRS)
Verderaime, V.
1993-01-01
Designer-control parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integrating tasks are reliability and recurring structural costs. Significant interface designer-control parameters were noted as shapes, dimensions, probability range factors, and cost. A structural failure concept is presented, and first-order reliability and deterministic methods, benefits, and limitations are discussed. A deterministic reliability technique combining the benefits of both is proposed for static structures, which is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Belcastro, Christine
2008-01-01
Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. As a part of the validation process, this paper describes an analysis method for determining a reliable flight regime in the flight envelope within which an integrated resilient control system can achieve the desired performance of tracking command signals and detecting additive faults in the presence of parameter uncertainty and unmodeled dynamics. To calculate a reliable flight regime, a structured singular value analysis method is applied to analyze the closed-loop system over the entire flight envelope. To use the structured singular value analysis method, a linear fractional transformation (LFT) model of a transport aircraft's longitudinal dynamics is developed over the flight envelope by using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The developed LFT model can capture the original nonlinear dynamics over the flight envelope with the Δ block, which contains the key varying parameters (angle of attack and velocity) and real parameter uncertainty (aerodynamic coefficient uncertainty and moment of inertia uncertainty). Using the developed LFT model and a formal robustness analysis method, a reliable flight regime is calculated for a transport aircraft closed-loop system.
The reliability-quality relationship for quality systems and quality risk management.
Claycamp, H Gregg; Rahaman, Faiad; Urban, Jason M
2012-01-01
Engineering reliability typically refers to the probability that a system, or any of its components, will perform a required function for a stated period of time and under specified operating conditions. As such, reliability is inextricably linked with time-dependent quality concepts, such as maintaining a state of control and predicting the chances of losses from failures for quality risk management. Two popular current good manufacturing practice (cGMP) and quality risk management tools, failure mode and effects analysis (FMEA) and root cause analysis (RCA) are examples of engineering reliability evaluations that link reliability with quality and risk. Current concepts in pharmaceutical quality and quality management systems call for more predictive systems for maintaining quality; yet, the current pharmaceutical manufacturing literature and guidelines are curiously silent on engineering quality. This commentary discusses the meaning of engineering reliability while linking the concept to quality systems and quality risk management. The essay also discusses the difference between engineering reliability and statistical (assay) reliability. The assurance of quality in a pharmaceutical product is no longer measured only "after the fact" of manufacturing. Rather, concepts of quality systems and quality risk management call for designing quality assurance into all stages of the pharmaceutical product life cycle. Interestingly, most assays for quality are essentially static and inform product quality over the life cycle only by being repeated over time. Engineering process reliability is the fundamental concept that is meant to anticipate quality failures over the life cycle of the product. Reliability is a well-developed theory and practice for other types of manufactured products and manufacturing processes. Thus, it is well known to be an appropriate index of manufactured product quality. 
This essay discusses the meaning of reliability and its linkages with quality systems and quality risk management.
NASA Astrophysics Data System (ADS)
Gilmanshin, I. R.; Kirpichnikov, A. P.
2017-09-01
A study of the functioning algorithm of the early-detection module for excessive losses proves that it can be modeled using absorbing Markov chains. Of particular interest is the study of the probability characteristics of the module's functioning algorithm, in order to identify the relationship between the reliability indicators of individual elements, or the probability of occurrence of certain events, and the likelihood of transmission of reliable information. The relations identified during the analysis allow threshold reliability characteristics to be set for the system components.
DOT National Transportation Integrated Search
1984-10-01
The Transit Reliability Information Program (TRIP) is a government-initiated program to assist the transit industry in satisfying its need for transit reliability information. TRIP provides this assistance through the operation of a national data ban...
Reliability of Fault Tolerant Control Systems. Part 1
NASA Technical Reports Server (NTRS)
Wu, N. Eva
2001-01-01
This paper reports Part I of a two-part effort intended to delineate the relationship between reliability and fault-tolerant control in a quantitative manner. Reliability analysis of fault-tolerant control systems is performed using Markov models. Reliability properties peculiar to fault-tolerant control systems are emphasized; in particular, coverage of failures through redundancy management can be severely limited. It is shown that in the early life of a system composed of highly reliable subsystems, the reliability of the overall system is affine with respect to coverage, and inadequate coverage induces dominant single-point failures. The utility of some existing software tools for assessing the reliability of fault-tolerant control systems is also discussed. Coverage modeling is attempted in Part II in a way that captures its dependence on the control performance and on the diagnostic resolution.
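The affine-in-coverage observation can be reproduced with a small duplex-system sketch: an uncovered first failure (probability 1 - c) takes the system down immediately, so early-life unreliability is approximately (1 - c)·2λt plus a term independent of c. Rates and times below are illustrative, not from the paper.

```python
# Duplex system with imperfect coverage c: only covered first failures
# reach the degraded (one-unit) state; uncovered ones fail the system.
# Illustrative sketch, not the paper's models.
def duplex_reliability(lam: float, c: float, t: float, steps: int = 10000) -> float:
    dt = t / steps
    p2, p1 = 1.0, 0.0                 # failed-state probability is implicit
    for _ in range(steps):
        first = 2 * lam * dt * p2     # first failure among the two units
        second = lam * dt * p1        # failure of the surviving unit
        p2 -= first
        p1 += c * first - second      # only covered failures keep us alive
    return p2 + p1

r_poor = duplex_reliability(lam=1e-5, c=0.90, t=100.0)
r_good = duplex_reliability(lam=1e-5, c=0.99, t=100.0)
```

Evaluating at several coverage values shows the reliability rising linearly (affinely) in c for small λt, consistent with the Part I result.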
Analysis of travel time reliability on Indiana interstates.
DOT National Transportation Integrated Search
2009-09-15
Travel-time reliability is a key performance measure in any transportation system. It is a measure of the quality of travel time experienced by transportation system users and reflects the efficiency of the transportation system to serve citizens, bu...
Technology Overview for Advanced Aircraft Armament System Program.
1981-05-01
availability of methods or systems for improving stores and armament safety. Of particular importance are aspects of safety involving hazards analysis ...flutter virtually insensitive to inertia and center-of-gravity location of store - Simplifies and reduces analysis and testing required to flutter-clear...status. Nearly every existing reliability analysis and discipline that promised a positive return on reliability performance was drawn out, dusted
Use of Model-Based Design Methods for Enhancing Resiliency Analysis of Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Knox, Lenora A.
The most common traditional non-functional requirement analysis is reliability. With systems becoming more complex, networked, and adaptive to environmental uncertainties, system resiliency has recently become the non-functional requirement analysis of choice. Analysis of system resiliency has challenges, which include defining resilience for domain areas, identifying resilience metrics, determining resilience modeling strategies, and understanding how to best integrate the concepts of risk and reliability into resiliency. Formal methods that integrate all of these concepts do not currently exist in specific domain areas. Leveraging RAMSoS, a model-based reliability analysis methodology for Systems of Systems (SoS), we propose an extension that accounts for resiliency analysis through evaluation of mission performance, risk, and cost using multi-criteria decision-making (MCDM) modeling and design trade study variability modeling evaluation techniques. This proposed methodology, coined RAMSoS-RESIL, is applied to a case study in the multi-agent unmanned aerial vehicle (UAV) domain to investigate the potential benefits of a mission architecture in which the functionality to complete a mission is disseminated across multiple UAVs (distributed) as opposed to being contained in a single UAV (monolithic). The case-study-based research demonstrates proof of concept for the proposed model-based technique and provides sufficient preliminary evidence to conclude which architectural design (distributed vs. monolithic) is most resilient, based on insight into mission resilience performance, risk, and cost in addition to the traditional analysis of reliability.
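A hypothetical sketch of the MCDM step in such a trade study: a simple weighted-sum score over mission performance, risk, and cost for the two candidate architectures. The weighted sum is one common MCDM technique, not necessarily the one the study used, and all weights and scores below are invented.

```python
# Weighted-sum MCDM over three criteria; higher scores are better on all
# criteria (risk here means "risk posture", i.e., 0.9 = low risk).
# All numbers are invented for illustration.
def weighted_score(scores: dict, weights: dict) -> float:
    return sum(weights[k] * scores[k] for k in weights)

weights = {"performance": 0.5, "risk": 0.3, "cost": 0.2}
candidates = {
    "monolithic":  {"performance": 0.9, "risk": 0.4, "cost": 0.8},
    "distributed": {"performance": 0.8, "risk": 0.9, "cost": 0.6},
}
best = max(candidates, key=lambda name: weighted_score(candidates[name], weights))
```

With these made-up inputs the distributed architecture wins on the strength of its risk score, mirroring the kind of conclusion the study draws from its own models.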
System reliability and recovery.
DOT National Transportation Integrated Search
1971-06-01
The paper exhibits a variety of reliability techniques applicable to future ATC data processing systems. Presently envisioned schemes for error detection, error interrupt and error analysis are considered, along with methods of retry, reconfiguration...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bucknor, Matthew; Grabaskas, David; Brunett, Acacia
2015-04-26
Advanced small modular reactor designs include many advantageous design features such as passively driven safety systems that are arguably more reliable and cost effective relative to conventional active systems. Despite their attractiveness, a reliability assessment of passive systems can be difficult using conventional reliability methods due to the nature of passive systems. Simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating various methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.
The Application of a Residual Risk Evaluation Technique Used for Expendable Launch Vehicles
NASA Technical Reports Server (NTRS)
Latimer, John A.
2009-01-01
This presentation provides a Residual Risk Evaluation Technique (RRET) developed by the Kennedy Space Center (KSC) Safety and Mission Assurance (S&MA) Launch Services Division. This technique is one of many procedures used by S&MA at KSC to evaluate residual risks for each Expendable Launch Vehicle (ELV) mission. RRET is a straightforward technique that incorporates the proven methodology of risk management, fault tree analysis, and reliability prediction. RRET derives a system reliability impact indicator from the system baseline reliability and the system residual risk reliability values. The system reliability impact indicator provides a quantitative measure of the reduction in the system baseline reliability due to the identified residual risks associated with the designated ELV mission. An example is discussed to provide insight into the application of RRET.
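One plausible form of such an indicator can be sketched as follows. The abstract does not give RRET's actual formula, so the series-combination model below is an assumption made for illustration only.

```python
# ASSUMED form of an RRET-style indicator (not given in the abstract):
# each residual risk is modeled as an independent series element, and the
# indicator is the resulting drop from the baseline reliability.
def reliability_impact(baseline: float, residual_risk_reliabilities) -> float:
    r = baseline
    for rr in residual_risk_reliabilities:
        r *= rr                # each residual risk degrades mission reliability
    return baseline - r        # reduction attributable to residual risks

impact = reliability_impact(baseline=0.98, residual_risk_reliabilities=[0.999, 0.995])
```

With no residual risks the indicator is zero, and it grows as risks accumulate, which matches the qualitative description of the indicator above.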
NASA Technical Reports Server (NTRS)
Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.
2010-01-01
Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, in a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability, and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used for legacy subsystems or comparative components that will support Constellation. The results of the verification analysis will be used to verify compliance with requirements and to highlight design or performance shortcomings for further decision-making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability, and maintainability analysis, and present findings and observations based on analysis leading to the Ground Systems Preliminary Design Review milestone.
Probabilistic Finite Element Analysis & Design Optimization for Structural Designs
NASA Astrophysics Data System (ADS)
Deivanayagam, Arumugam
This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering designs through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structure is cost effective, it becomes highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered.
This part of the research starts with an introduction to reliability analysis methods, such as first-order and second-order reliability analysis, followed by simulation techniques that are performed to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation with sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and function evaluations. Finally, implementations of the reliability analysis concepts and RBDO in finite element 2D truss problems and a planar beam problem are presented and discussed.
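The simulation technique mentioned above can be sketched in miniature: a Monte Carlo estimate of the probability of failure for a stress-strength problem. The normal distributions below are invented example data, not the Kevlar® 49 properties used in the study.

```python
import random

# Minimal Monte Carlo reliability sketch: estimate P(failure) = P(stress
# >= strength) when both are normally distributed. Distributions are
# illustrative assumptions, not data from the study.
def mc_failure_probability(n: int = 100_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        strength = rng.gauss(300.0, 20.0)   # capacity, MPa
        stress = rng.gauss(220.0, 25.0)     # demand, MPa
        if stress >= strength:
            failures += 1
    return failures / n

pf = mc_failure_probability()
```

For these inputs the analytic reliability index is beta = (300 - 220) / sqrt(20² + 25²) ≈ 2.5, so the Monte Carlo estimate should land near Φ(-2.5) ≈ 0.006; first-order reliability methods (FORM) reach the same beta without sampling.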
A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp
High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as exponential failure rates) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
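A toy version of the simulation idea can be sketched as follows: estimating mean time to data loss (MTTDL) for a mirrored disk pair with repair, by direct event simulation rather than a closed-form formula. All rates are illustrative assumptions, not measurements from the study.

```python
import random

# Event-driven MTTDL estimate for a mirrored pair: data is lost when the
# second disk fails before the first failed disk is rebuilt. Illustrative
# rates only; a real framework models many components and dependencies.
def simulate_mttdl(mttf: float = 10_000.0, mttr: float = 100.0,
                   trials: int = 2000, seed: int = 7) -> float:
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        t = 0.0
        while True:
            t += rng.expovariate(2.0 / mttf)      # first disk of the pair fails
            repair = rng.expovariate(1.0 / mttr)  # time to rebuild the mirror
            second = rng.expovariate(1.0 / mttf)  # time to the other disk failing
            if second < repair:                   # data loss before rebuild
                t += second
                break
            t += repair                           # rebuilt; pair healthy again
        total += t
    return total / trials

mttdl = simulate_mttdl()
```

The estimate agrees with the classical approximation MTTF²/(2·MTTR) = 5×10⁵ hours for these inputs, which is the kind of closed-form result the paper argues is too coarse for end-to-end systems.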
Distribution System Reliability Analysis for Smart Grid Applications
NASA Astrophysics Data System (ADS)
Aljohani, Tawfiq Masad
Reliability of power systems is a key aspect in modern power system planning, design, and operation. The ascendance of the smart grid concept has provided high hopes of developing an intelligent network that is capable of being a self-healing grid, offering the ability to overcome the interruption problems that face the utility and cost it tens of millions in repair and loss. To address its reliability concerns, the power utilities and interested parties have spent an extensive amount of time and effort to analyze and study the reliability of the generation and transmission sectors of the power grid. Only recently has attention shifted to focus on improving the reliability of the distribution network, the connection joint between the power providers and the consumers where most of the electricity problems occur. In this work, we examine the effect of smart grid applications in improving the reliability of power distribution networks. The test system used in this thesis is the IEEE 34 node test feeder, released in 2003 by the Distribution System Analysis Subcommittee of the IEEE Power Engineering Society. The objective is to analyze the feeder for the optimal placement of automatic switching devices and quantify their proper installation based on the performance of the distribution system; the measures are the changes in the system reliability indices, including SAIDI, SAIFI, and EUE. A further goal is to design and simulate the effect of the installation of Distributed Generators (DGs) on the utility's distribution system and measure the potential improvement of its reliability. The software used in this work is DISREL, an intelligent power distribution package developed by General Reliability Co.
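Two of the reliability indices named above can be computed directly from their standard (IEEE 1366) definitions. The interruption records below are made up for illustration; EUE, which additionally needs unserved-energy data, is omitted.

```python
# SAIFI and SAIDI per their standard definitions, on invented records.
def saifi(events, customers_served: int) -> float:
    """System Average Interruption Frequency Index: interruptions per customer."""
    return sum(n for n, _ in events) / customers_served

def saidi(events, customers_served: int) -> float:
    """System Average Interruption Duration Index: outage minutes per customer."""
    return sum(n * minutes for n, minutes in events) / customers_served

# (customers affected, outage duration in minutes) for each interruption
events = [(500, 90), (200, 30)]
f = saifi(events, customers_served=10_000)   # interruptions / customer / period
d = saidi(events, customers_served=10_000)   # minutes / customer / period
```

Automatic switching devices improve these indices by shrinking either the number of customers each fault interrupts (SAIFI) or the restoration time (SAIDI).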
Shuang, Qing; Zhang, Mingyuan; Yuan, Yongbo
2014-01-01
As a means of supplying water, a water distribution system (WDS) is one of the most important complex infrastructures, and its stability and reliability are critical for urban activities. WDSs can be characterized as networks of multiple nodes (e.g., reservoirs and junctions) interconnected by physical links (e.g., pipes). Instead of analyzing the highest failure rate or highest betweenness, the reliability of the WDS is evaluated by introducing hydraulic analysis and cascading failures (a conductive failure pattern) from complex network theory, and the crucial pipes are identified eventually. The proposed methodology is illustrated by an example. The results show that the demand multiplier has a great influence on the peak of reliability and the persistence time of the cascading failures in their propagation through the WDS. The time period when the system has the highest reliability is when the demand multiplier is less than 1. A threshold of the tolerance parameter exists: when the tolerance parameter is less than the threshold, the time period with the highest system reliability does not meet the minimum value of the demand multiplier. The results indicate that system reliability should be evaluated with the properties of the WDS and the characteristics of cascading failures, so as to improve its ability to resist disasters. PMID:24551102
Proposed reliability cost model
NASA Technical Reports Server (NTRS)
Delionback, L. M.
1973-01-01
The research investigations involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between technical disciplines on one hand and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach depends upon the use of a series of subsystem-oriented CERs and, where possible, CTRs in devising a suitable cost-effective policy.
Demonstration Advanced Avionics System (DAAS), Phase 1
NASA Technical Reports Server (NTRS)
Bailey, A. J.; Bailey, D. G.; Gaabo, R. J.; Lahn, T. G.; Larson, J. C.; Peterson, E. M.; Schuck, J. W.; Rodgers, D. L.; Wroblewski, K. A.
1981-01-01
Demonstration advanced avionics system (DAAS) functional description, hardware description, operational evaluation, and failure mode and effects analysis (FMEA) are provided. Projected advanced avionics system (PAAS) description, reliability analysis, cost analysis, maintainability analysis, and modularity analysis are discussed.
Probabilistic structural analysis by extremum methods
NASA Technical Reports Server (NTRS)
Nafday, Avinash M.
1990-01-01
The objective is to demonstrate discrete extremum methods of structural analysis as a tool for structural system reliability evaluation. Specifically, linear and multiobjective linear programming models for analysis of rigid plastic frames under proportional and multiparametric loadings, respectively, are considered. Kinematic and static approaches for analysis form a primal-dual pair in each of these models and have a polyhedral format. Duality relations link extreme points and hyperplanes of these polyhedra and lead naturally to dual methods for system reliability evaluation.
Automated Sneak Circuit Analysis Technique
1990-06-01
the OrCAD/SDT module port facility. 2. The terminals of all in-circuit voltage sources (e.g., batteries) must be labeled using the OrCAD/SDT module port facility. Automated Sneak Circuit Analysis Technique, Systems Reliability & Engineering Division, Rome Air Development Center (RADC), June 1990.
Komal
2018-05-01
Nowadays, power consumption is increasing day by day. To fulfill failure-free power requirements, planning and implementation of an effective and reliable power management system is essential. A phasor measurement unit (PMU) is one of the key devices in wide-area measurement and control systems. The reliable performance of the PMU assures failure-free power supply for any power system. So, the purpose of the present study is to analyse the reliability of a PMU used for the controllability and observability of power systems, utilizing available uncertain data. In this paper, a generalized fuzzy lambda-tau (GFLT) technique has been proposed for this purpose. In the GFLT, the system components' uncertain failure and repair rates are fuzzified using fuzzy numbers having different shapes, such as triangular, normal, Cauchy, sharp gamma and trapezoidal. To select a suitable fuzzy number for quantifying data uncertainty, system experts' opinions have been considered. The GFLT technique applies fault trees, the lambda-tau method, data fuzzified using different membership functions, and alpha-cut based fuzzy arithmetic operations to compute some important reliability indices. Furthermore, in this study, ranking of the critical components of the system using the RAM-Index and a sensitivity analysis have also been performed. The developed technique may help to improve system performance significantly and can be applied to analyse the fuzzy reliability of other engineering systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
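The alpha-cut step at the heart of such fuzzy arithmetic can be sketched for the simplest case: a triangular fuzzy failure rate is cut at level alpha, and interval arithmetic propagates the cut through R(t) = exp(-λt). The numbers are illustrative, not PMU data from the paper.

```python
import math

# Alpha-cut of a triangular fuzzy number (left, mode, right) and its
# propagation through the monotone function R(t) = exp(-lam * t).
# Illustrative values only.
def tri_alpha_cut(left: float, mode: float, right: float, alpha: float):
    return (left + alpha * (mode - left), right - alpha * (right - mode))

def reliability_interval(lam_tri, t: float, alpha: float):
    lam_lo, lam_hi = tri_alpha_cut(*lam_tri, alpha)
    # exp(-lam * t) is decreasing in lam, so the interval bounds swap
    return (math.exp(-lam_hi * t), math.exp(-lam_lo * t))

band = reliability_interval((1e-4, 2e-4, 4e-4), t=1000.0, alpha=0.5)
```

At alpha = 1 the cut collapses to the mode and the band becomes a crisp value, while lower alpha levels widen the band, expressing the data uncertainty the GFLT technique is built to carry.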
Advanced Reactor Passive System Reliability Demonstration Analysis for an External Event
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bucknor, Matthew D.; Grabaskas, David; Brunett, Acacia J.
2016-01-01
Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended due to deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Centering on an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive reactor cavity cooling system following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. While this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability for the reactor cavity cooling system (and the reactor system in general) to the postulated transient event.
Advanced Reactor Passive System Reliability Demonstration Analysis for an External Event
Bucknor, Matthew; Grabaskas, David; Brunett, Acacia J.; ...
2017-01-24
We report that many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Lastly, although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event.
Integrated Approach To Design And Analysis Of Systems
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Iverson, David L.
1993-01-01
Object-oriented fault-tree representation unifies evaluation of reliability and diagnosis of faults. Programming/fault tree described more fully in "Object-Oriented Algorithm For Evaluation Of Fault Trees" (ARC-12731). Augmented fault tree object contains more information than fault tree object used in quantitative analysis of reliability. Additional information needed to diagnose faults in system represented by fault tree.
Probabilistic assessment of dynamic system performance. Part 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belhadj, Mohamed
1993-01-01
Accurate prediction of dynamic system failure behavior can be important for the reliability and risk analyses of nuclear power plants, as well as for their backfitting to satisfy given constraints on overall system reliability, or optimization of system performance. Global analysis of dynamic systems, through investigating the variations in the structure of the attractors of the system and the domains of attraction of these attractors as a function of the system parameters, is also important for nuclear technology in order to understand the fault tolerance as well as the safety margins of the system under consideration and to ensure safe operation of nuclear reactors. Such a global analysis would be particularly relevant to future reactors with inherent or passive safety features that are expected to rely on natural phenomena rather than active components to achieve and maintain safe shutdown. Conventionally, failure and global analysis of dynamic systems necessitate the utilization of different methodologies, which have computational limitations on the system size that can be handled. Using a Chapman-Kolmogorov interpretation of system dynamics, a theoretical basis is developed that unifies these methodologies as special cases and which can be used for a comprehensive safety and reliability analysis of dynamic systems.
Evaluation methodologies for an advanced information processing system
NASA Technical Reports Server (NTRS)
Schabowsky, R. S., Jr.; Gai, E.; Walker, B. K.; Lala, J. H.; Motyka, P.
1984-01-01
The system concept and requirements for an Advanced Information Processing System (AIPS) are briefly described, but the emphasis of this paper is on the evaluation methodologies being developed and utilized in the AIPS program. The evaluation tasks include hardware reliability, maintainability and availability, software reliability, performance, and performability. Hardware RMA and software reliability are addressed with Markov modeling techniques. The performance analysis for AIPS is based on queueing theory. Performability is a measure of merit which combines system reliability and performance measures. The probability laws of the performance measures are obtained from the Markov reliability models. Scalar functions of this law such as the mean and variance provide measures of merit in the AIPS performability evaluations.
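The Markov RMA modeling and performability approach described in this abstract can be illustrated with a minimal two-state (up/down) example. The closed-form transient availability and the simple mean-performance combination below are standard textbook results, not the AIPS program's actual models:

```python
import math

def availability(t, lam, mu):
    # Transient availability of a repairable unit that starts "up":
    #   A(t) = mu/(lam+mu) + lam/(lam+mu) * exp(-(lam+mu)*t)
    # lam = failure rate, mu = repair rate (both exponential).
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

def performability(state_probs, perf_levels):
    # Performability: expected performance over the Markov state
    # probabilities, i.e. the mean of the performance law.
    return sum(p * w for p, w in zip(state_probs, perf_levels))
```

As t grows, `availability` converges to the steady-state value mu/(lam+mu); the mean and variance of the performance law give the scalar merit figures the abstract mentions.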
Flight control electronics reliability/maintenance study
NASA Technical Reports Server (NTRS)
Dade, W. W.; Edwards, R. H.; Katt, G. T.; Mcclellan, K. L.; Shomber, H. A.
1977-01-01
The collection and analysis of data concerning the reliability and maintenance experience of flight control system electronics currently in use on passenger-carrying jet aircraft are reported. Two airlines' B-747 airplane fleets were analyzed to assess the component reliability, system functional reliability, and achieved availability of the CAT II configuration flight control system. Also assessed were the costs generated by this system in the categories of spare equipment, schedule irregularity, and line and shop maintenance. The results indicate that although there is a marked difference in the geographic location and route pattern between the airlines studied, there is a close similarity in the reliability and the maintenance costs associated with the flight control electronics.
Large-scale-system effectiveness analysis. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, A.D.; Ayoub, A.K.; Foster, J.W.
1979-11-01
The objective of the research project has been the investigation and development of methods for calculating system reliability indices that have absolute, measurable significance to consumers. Such indices are a necessary prerequisite to any scheme for system optimization which includes the economic consequences of consumer service interruptions. A further area of investigation has been the joint consideration of generation and transmission in reliability studies. Methods for finding or estimating the probability distributions of some measures of reliability performance have been developed. The application of modern Monte Carlo simulation methods to compute reliability indices in generating systems has been studied.
Reliability Analysis of a Glacier Lake Warning System Using a Bayesian Net
NASA Astrophysics Data System (ADS)
Sturny, Rouven A.; Bründl, Michael
2013-04-01
Besides structural mitigation measures like avalanche defense structures, dams and galleries, warning and alarm systems have become important measures for dealing with Alpine natural hazards. Integrating them into risk mitigation strategies and comparing their effectiveness with structural measures requires quantification of the reliability of these systems. However, little is known about how the reliability of warning systems can be quantified and which methods are suitable for comparing their contribution to risk reduction with that of structural mitigation measures. We present a reliability analysis of a warning system located in Grindelwald, Switzerland. The warning system was built for warning and protecting residents and tourists from glacier outburst floods as a consequence of a rapid drain of the glacier lake. We have set up a Bayesian Net (BN) that allowed for a qualitative and quantitative reliability analysis. The Conditional Probability Tables (CPT) of the BN were determined according to manufacturer's reliability data for each component of the system as well as by assigning weights for specific BN nodes accounting for information flows and decision-making processes of the local safety service. The presented results focus on the two alerting units 'visual acoustic signal' (VAS) and 'alerting of the intervention entities' (AIE). For the summer of 2009, the reliability was determined to be 94% for the VAS and 83% for the AIE. The probability of occurrence of a major event was calculated as 0.55% per day, resulting in an overall reliability of 99.967% for the VAS and 99.906% for the AIE. We concluded that a failure of the VAS alerting unit would be the consequence of a simultaneous failure of the four probes located in the lake and the gorge.
Similarly, we deduced that the AIE would fail either if there were a simultaneous connectivity loss of the mobile and fixed networks in Grindelwald, a loss of Internet access, or a failure of the regional operations centre. However, the probability of a common failure of these components was assumed to be low. Overall, it can be stated that due to numerous redundancies, the investigated warning system is highly reliable and its influence on risk reduction is very high. Comparable studies are needed in the future to put these results in context and to gain more experience in how the reliability of warning systems can be determined in practice.
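The overall figures quoted in this abstract are consistent with a simple combination of the daily event probability and the per-event alerting-unit reliability. The function below is one plausible reading of that arithmetic, not the actual Bayesian Net computation:

```python
def missed_warning_reliability(p_event_per_day, unit_reliability):
    # Daily reliability as perceived by residents: an alerting-unit
    # failure only matters on a day the outburst event actually occurs:
    #   R_overall = 1 - P(event) * (1 - R_unit)
    return 1.0 - p_event_per_day * (1.0 - unit_reliability)

vas = missed_warning_reliability(0.0055, 0.94)  # visual acoustic signal
aie = missed_warning_reliability(0.0055, 0.83)  # alerting of intervention entities
```

With P(event) = 0.55% per day, this reproduces the reported 99.967% (VAS) and 99.906% (AIE, rounded) overall reliabilities.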
Rapid Modeling and Analysis Tools: Evolution, Status, Needs and Directions
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Stone, Thomas J.; Ransom, Jonathan B. (Technical Monitor)
2002-01-01
Advanced aerospace systems are becoming increasingly more complex, and customers are demanding lower cost, higher performance, and high reliability. Increased demands are placed on the design engineers to collaborate and integrate design needs and objectives early in the design process to minimize risks that may occur later in the design development stage. High performance systems require better understanding of system sensitivities much earlier in the design process to meet these goals. The knowledge, skills, intuition, and experience of an individual design engineer will need to be extended significantly for the next generation of aerospace system designs. Then a collaborative effort involving the designer, rapid and reliable analysis tools and virtual experts will result in advanced aerospace systems that are safe, reliable, and efficient. This paper discusses the evolution, status, needs and directions for rapid modeling and analysis tools for structural analysis. First, the evolution of computerized design and analysis tools is briefly described. Next, the status of representative design and analysis tools is described along with a brief statement on their functionality. Then technology advancements to achieve rapid modeling and analysis are identified. Finally, potential future directions including possible prototype configurations are proposed.
Evolving Reliability and Maintainability Allocations for NASA Ground Systems
NASA Technical Reports Server (NTRS)
Munoz, Gisela; Toon, T.; Toon, J.; Conner, A.; Adams, T.; Miranda, D.
2016-01-01
This paper describes the methodology and value of modifying allocations to reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program's subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. This iterative process provided an opportunity for the reliability engineering team to reevaluate allocations as systems moved beyond their conceptual and preliminary design phases. These new allocations are based on updated designs and maintainability characteristics of the components. It was found that trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper discusses the results of reliability and maintainability reallocations made for the GSDO subsystems as the program nears the end of its design phase.
Evolving Reliability and Maintainability Allocations for NASA Ground Systems
NASA Technical Reports Server (NTRS)
Munoz, Gisela; Toon, Jamie; Toon, Troy; Adams, Timothy C.; Miranda, David J.
2016-01-01
This paper describes the methodology that was developed to allocate reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program's subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. Allocating is an iterative process; as systems moved beyond their conceptual and preliminary design phases this provided an opportunity for the reliability engineering team to reevaluate allocations based on updated designs and maintainability characteristics of the components. Trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper will discuss the value of modifying reliability and maintainability allocations made for the GSDO subsystems as the program nears the end of its design phase.
DOE Office of Scientific and Technical Information (OSTI.GOV)
CARLSON, A.B.
The document presents updated results of the preliminary reliability, availability, and maintainability analysis performed for delivery of waste feed from tanks 241-AZ-101 and 241-AN-105 to British Nuclear Fuels Limited, Inc. under the Tank Waste Remediation System Privatization Contract. The operational schedule delay risk is estimated and contributing factors are discussed.
1992-04-01
contractor’s existing data collection, analysis and corrective action system shall be utilized, with modification only as necessary to meet the ... either from test or from analysis of field data. The procedures of MIL-STD-756B assume that the reliability of a ... to generate sufficient data to report a statistically valid reliability figure for a class of software. Casual data gathering accumulates data more
ERIC Educational Resources Information Center
Park, Bitnara Jasmine; Irvin, P. Shawn; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald
2012-01-01
In this technical report, we present the results of a reliability study of the fifth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
ERIC Educational Resources Information Center
Lai, Cheng-Fei; Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Tindal, Gerald
2012-01-01
In this technical report, we present the results of a reliability study of the second-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
ERIC Educational Resources Information Center
Park, Bitnara Jasmine; Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald
2012-01-01
In this technical report, we present the results of a reliability study of the fourth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
ERIC Educational Resources Information Center
Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Lai, Cheng-Fei; Tindal, Gerald
2012-01-01
In this technical report, we present the results of a reliability study of the sixth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
ERIC Educational Resources Information Center
Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Park, Bitnara Jasmine; Tindal, Gerald
2012-01-01
In this technical report, we present the results of a reliability study of the seventh-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
ERIC Educational Resources Information Center
Lai, Cheng-Fei; Irvin, P. Shawn; Park, Bitnara Jasmine; Alonzo, Julie; Tindal, Gerald
2012-01-01
In this technical report, we present the results of a reliability study of the third-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
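The split-half reliability analysis named across these easyCBM technical reports can be sketched with the standard Spearman-Brown step-up. This is an illustrative textbook computation on hypothetical half-test scores, not the reports' actual procedure:

```python
def pearson_r(x, y):
    # Pearson correlation between the two half-test score lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def split_half_reliability(odd_half, even_half):
    # Correlate odd-item and even-item totals, then step up to the full
    # test length with Spearman-Brown: r_full = 2r / (1 + r).
    r = pearson_r(odd_half, even_half)
    return 2 * r / (1 + r)
```

The Spearman-Brown correction compensates for the fact that each half is only half as long as the full measure.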
Reliability Growth in Space Life Support Systems
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2014-01-01
A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.
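A common mathematical reliability growth model of the kind this abstract describes is the power-law (Crow-AMSAA) process, where a shape parameter below one indicates a falling failure intensity. The maximum-likelihood estimator below is a standard sketch, not necessarily the model used in the paper:

```python
import math

def crow_amsaa_beta(failure_times, total_time):
    # MLE of the shape parameter in the power-law NHPP model
    # E[N(t)] = lam * t**beta.  beta < 1 means the failure intensity is
    # decreasing over time (reliability growth, e.g. the shuttle);
    # beta ~ 1 means a constant rate (e.g. ISS); beta > 1 means wear-out.
    n = len(failure_times)
    return n / sum(math.log(total_time / t) for t in failure_times)
```

Failures clustered early in the observation window yield beta below one; failures clustered late yield beta above one.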
The development of a reliable amateur boxing performance analysis template.
Thomson, Edward; Lamb, Kevin; Nicholas, Ceri
2013-01-01
The aim of this study was to devise a valid performance analysis system for the assessment of the movement characteristics associated with competitive amateur boxing and assess its reliability using analysts of varying experience of the sport and performance analysis. Key performance indicators to characterise the demands of an amateur contest (offensive, defensive and feinting) were developed and notated using a computerised notational analysis system. Data were subjected to intra- and inter-observer reliability assessment using median sign tests and calculating the proportion of agreement within predetermined limits of error. For all performance indicators, intra-observer reliability revealed non-significant differences between observations (P > 0.05) and high agreement was established (80-100%) regardless of whether exact or the reference value of ±1 was applied. Inter-observer reliability was less impressive for both analysts (amateur boxer and experienced analyst), with the proportion of agreement ranging from 33-100%. Nonetheless, there was no systematic bias between observations for any indicator (P > 0.05), and the proportion of agreement within the reference range (±1) was 100%. A reliable performance analysis template has been developed for the assessment of amateur boxing performance and is available for use by researchers, coaches and athletes to classify and quantify the movement characteristics of amateur boxing.
NASA Astrophysics Data System (ADS)
Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin
2014-07-01
Flow entropy is a measure of the uniformity of pipe flows in water distribution systems. By maximizing flow entropy one can identify reliable layouts or connectivity in networks. In order to overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition of flow entropy, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed by using other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability. A comparative analysis between simple flow entropy and the new method is also conducted. The results demonstrate that diameter-sensitive flow entropy shows a consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
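A minimal sketch of an entropy-based uniformity measure over pipe flows is given below. It shows only the simple Shannon form; the paper's diameter-sensitive extension adds diameter-based weighting that is not reproduced here:

```python
import math

def flow_entropy(flows):
    # Shannon entropy of the normalized pipe-flow distribution; it is
    # maximal when flow is spread uniformly across the pipes, which is
    # the sense in which entropy acts as a surrogate for reliability.
    total = sum(flows)
    shares = [q / total for q in flows if q > 0]
    return -sum(p * math.log(p) for p in shares)
```

A layout that concentrates nearly all flow in one pipe scores much lower than one that spreads flow evenly, reflecting its vulnerability to a single pipe failure.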
Photovoltaic power system reliability considerations
NASA Technical Reports Server (NTRS)
Lalli, V. R.
1980-01-01
An example of how modern engineering and safety techniques can be used to assure the reliable and safe operation of photovoltaic power systems is presented. This particular application is for a solar cell power system demonstration project designed to provide electric power requirements for remote villages. The techniques utilized involve a definition of the power system natural and operating environment, use of design criteria and analysis techniques, an awareness of potential problems via the inherent reliability and FMEA methods, and use of fail-safe and planned spare parts engineering philosophy.
Photovoltaic power system reliability considerations
NASA Technical Reports Server (NTRS)
Lalli, V. R.
1980-01-01
This paper describes an example of how modern engineering and safety techniques can be used to assure the reliable and safe operation of photovoltaic power systems. This particular application was for a solar cell power system demonstration project in Tangaye, Upper Volta, Africa. The techniques involve a definition of the power system natural and operating environment, use of design criteria and analysis techniques, an awareness of potential problems via the inherent reliability and FMEA methods, and use of a fail-safe and planned spare parts engineering philosophy.
User-Perceived Reliability of M-for-N (M: N) Shared Protection Systems
NASA Astrophysics Data System (ADS)
Ozaki, Hirokazu; Kara, Atsushi; Cheng, Zixue
In this paper we investigate the reliability of general-type shared protection systems, i.e., M-for-N (M:N), that can typically be applied to various telecommunication network devices. We focus on the reliability that is perceived by an end user of one of the N units. We assume that any failed unit is instantly replaced by one of the M units (if available). We describe the effectiveness of such a protection system in a quantitative manner. The mathematical analysis gives the closed-form solution of the availability, the recursive computing algorithm of the MTTFF (Mean Time to First Failure), and the MTTF (Mean Time to Failure) perceived by an arbitrary end user. We also show that, under a certain condition, the probability distribution of the TTFF (Time to First Failure) can be approximated by a simple exponential distribution. The analysis provides useful information for the analysis and design of not only telecommunication network devices but also other general shared protection systems that are subject to service level agreements (SLA) involving user-perceived reliability measures.
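A birth-death sketch of an M:N shared protection system is shown below. It assumes independent exponential failures and repairs with unlimited repair capacity, a simplification of the paper's model, and the function name is illustrative:

```python
def user_perceived_availability(n, m, lam, mu):
    # State k = number of failed units among n in service plus m spares.
    # Birth rate in state k: lam per in-service unit; death rate: mu per
    # failed unit (independent repair -- a simplifying assumption).
    probs = [1.0]
    for k in range(n + m):
        in_service = min(n, n + m - k)
        probs.append(probs[-1] * (in_service * lam) / ((k + 1) * mu))
    total = sum(probs)
    probs = [p / total for p in probs]
    # With k > m failures the spare pool is exhausted and k - m of the
    # n end users are out of service.
    unavailability = sum(p * (k - m) / n for k, p in enumerate(probs) if k > m)
    return 1.0 - unavailability
```

Adding spares (m > 0) pushes the user-perceived availability sharply upward, since an end user only suffers an outage once the spare pool is exhausted.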
A System for Integrated Reliability and Safety Analyses
NASA Technical Reports Server (NTRS)
Kostiuk, Peter; Shapiro, Gerald; Hanson, Dave; Kolitz, Stephan; Leong, Frank; Rosch, Gene; Coumeri, Marc; Scheidler, Peter, Jr.; Bonesteel, Charles
1999-01-01
We present an integrated reliability and aviation safety analysis tool. The reliability models for selected infrastructure components of the air traffic control system are described. The results of this model are used to evaluate the likelihood of seeing outcomes predicted by simulations with failures injected. We discuss the design of the simulation model, and the user interface to the integrated toolset.
Alwan, Faris M; Baharum, Adam; Hassan, Geehan S
2013-01-01
The reliability of the electrical distribution system is a contemporary research field due to the diverse applications of electricity in everyday life and in diverse industries. However, few research papers exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kilovolt high-power stations based on the average time between failures. The objective of this paper is to find the optimal fit for the failure data via time between failures. We determine the parameter estimates for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best fitting distribution for the time between failures is a three-parameter Dagum distribution with a scale parameter [Formula: see text] and shape parameters [Formula: see text] and [Formula: see text]. Our analysis reveals that the reliability value decreased by 38.2% every 30 days. We believe that the current paper is the first to address this issue and its analysis. Thus, the results obtained in this research reflect its originality. We also suggest the practicality of using these results for both power system maintenance models and preventive maintenance models.
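The three-parameter Dagum survival function underlying this analysis can be written down directly. The parameter values in the usage below are hypothetical, since the fitted values appear only as formula placeholders in the abstract:

```python
def dagum_reliability(t, a, b, p):
    # Survival (reliability) function of the three-parameter Dagum law:
    #   F(t) = (1 + (t / b) ** (-a)) ** (-p),   R(t) = 1 - F(t)
    # a, p are shape parameters; b is the scale parameter.
    if t <= 0:
        return 1.0
    return 1.0 - (1.0 + (t / b) ** (-a)) ** (-p)
```

With illustrative parameters a = 2, b = 50, p = 1, the reliability at t = 10 is 25/26 and decays monotonically thereafter.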
FMEA and RAM Analysis for the Multi Canister Overpack (MCO) Handling Machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
SWENSON, C.E.
2000-06-01
The Failure Modes and Effects Analysis and the Reliability, Availability, and Maintainability Analysis performed for the Multi-Canister Overpack Handling Machine (MHM) have shown that the current design provides for a safe system, but the reliability of the system (primarily due to the complexity of the interlocks and permissive controls) is relatively low. No specific failure modes were identified where significant consequences to the public occurred, or where significant impact to nearby workers should be expected. The overall reliability calculation for the MHM shows a 98.1 percent probability of operating for eight hours without failure, and an availability of the MHM of 90 percent. The majority of the reliability issues are found in the interlocks and controls. The availability of appropriate spare parts and maintenance personnel, coupled with well written operating procedures, will play a more important role in successful mission completion for the MHM than for other, less complicated systems.
A quantitative analysis of the F18 flight control system
NASA Technical Reports Server (NTRS)
Doyle, Stacy A.; Dugan, Joanne B.; Patterson-Hine, Ann
1993-01-01
This paper presents an informal quantitative analysis of the F18 flight control system (FCS). The analysis technique combines a coverage model with a fault tree model. To demonstrate the method's extensive capabilities, we replace the fault tree with a digraph model of the F18 FCS, the only model available to us. The substitution shows that while digraphs have primarily been used for qualitative analysis, they can also be used for quantitative analysis. Based on our assumptions and the particular failure rates assigned to the F18 FCS components, we show that coverage does have a significant effect on the system's reliability and thus it is important to include coverage in the reliability analysis.
Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant
NASA Astrophysics Data System (ADS)
Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.
2015-12-01
This paper deals with the Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. This system was modeled using a Markov birth-death process with the assumption that the failure and repair rates of each subsystem follow an exponential distribution. The first-order Chapman-Kolmogorov differential equations are developed with the use of the mnemonic rule, and these equations are solved with the Runge-Kutta fourth-order method. The long-run availability, reliability, and mean time between failures are computed for various choices of failure and repair rates of the subsystems of the system. The findings of the paper are discussed with the plant personnel to adopt and practice suitable maintenance policies/strategies to enhance the performance of the urea synthesis system of the fertilizer plant.
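The solution scheme this abstract describes, Chapman-Kolmogorov equations integrated with a fourth-order Runge-Kutta method, can be sketched for a single repairable subsystem as follows. This is an illustrative two-state example, not the plant's full multi-subsystem model:

```python
def rk4_step(f, t, y, h):
    # One classical fourth-order Runge-Kutta step for y' = f(t, y).
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def availability_at(t_end, lam, mu, steps=1000):
    # Chapman-Kolmogorov equations for one repairable subsystem with
    # exponential failure rate lam and repair rate mu:
    #   dP_up/dt   = -lam * P_up + mu * P_down
    #   dP_down/dt =  lam * P_up - mu * P_down
    f = lambda t, y: [-lam * y[0] + mu * y[1], lam * y[0] - mu * y[1]]
    y, h, t = [1.0, 0.0], t_end / steps, 0.0
    for _ in range(steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y[0]
```

For long horizons the numerical solution settles at the long-run availability mu/(lam+mu), matching the closed-form transient solution along the way.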
Verification of Triple Modular Redundancy (TMR) Insertion for Reliable and Trusted Systems
NASA Technical Reports Server (NTRS)
Berg, Melanie; LaBel, Kenneth A.
2016-01-01
We propose a method for TMR insertion verification that satisfies the process for reliable and trusted systems. If a system is expected to be protected using TMR, improper insertion can jeopardize the reliability and security of the system. Due to the complexity of the verification process, there are currently no available techniques that can provide complete and reliable confirmation of TMR insertion. This manuscript addresses the challenge of confirming that TMR has been inserted without corruption of functionality and with correct application of the expected TMR topology. The proposed verification method combines the usage of existing formal analysis tools with a novel search-detect-and-verify tool. Keywords: field programmable gate array (FPGA), triple modular redundancy (TMR), verification, trust, reliability.
TIGER reliability analysis in the DSN
NASA Technical Reports Server (NTRS)
Gunn, J. M.
1982-01-01
The TIGER algorithm, the inputs to the program and the output are described. TIGER is a computer program designed to simulate a system over a period of time to evaluate system reliability and availability. Results can be used in the Deep Space Network for initial spares provisioning and system evaluation.
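A TIGER-style Monte Carlo availability estimate can be sketched by alternating exponential up and down times over a mission window. This is a generic single-unit illustration, not the TIGER algorithm itself:

```python
import random

def simulate_availability(lam, mu, horizon, trials=2000, seed=1):
    # Simulate a unit over [0, horizon]: draw exponential up times
    # (rate lam) and down times (rate mu) alternately, and report the
    # fraction of time the unit is up, averaged over all trials.
    rng = random.Random(seed)
    up_total = 0.0
    for _ in range(trials):
        t, up = 0.0, True
        while t < horizon:
            d = rng.expovariate(lam if up else mu)
            d = min(d, horizon - t)  # truncate at end of mission
            if up:
                up_total += d
            t += d
            up = not up
    return up_total / (trials * horizon)
```

With a mean up time of 100 hours and a mean repair time of 10 hours the estimate lands near the analytic steady-state availability of about 0.909; the same sampling idea extends to multi-component systems and spares-provisioning questions.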
Taheriyoun, Masoud; Moradinejad, Saber
2015-01-01
The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of the wastewater treatment plant are the variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the problem of reliability was studied at the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator error, physical damage, and design problems. The analytical methods are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been used in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves insight into the causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
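The minimal cut set calculation this abstract describes can be sketched as follows. The basic events and their probabilities in the example are hypothetical, not the plant's fitted values:

```python
def cut_set_probability(cut, basic_probs):
    # A minimal cut set causes the top event only if ALL of its basic
    # events occur (independence assumed).
    p = 1.0
    for event in cut:
        p *= basic_probs[event]
    return p

def top_event_probability(cut_sets, basic_probs):
    # Standard upper-bound combination over the minimal cut sets:
    #   P(top) <= 1 - prod_i (1 - P(cut_i))
    q = 1.0
    for cut in cut_sets:
        q *= 1.0 - cut_set_probability(cut, basic_probs)
    return 1.0 - q

# Hypothetical basic events and probabilities, for illustration only:
example_probs = {"operator_error": 0.05, "pump_failure": 0.01, "design_flaw": 0.002}
example_cuts = [["operator_error"], ["pump_failure", "design_flaw"]]
p_top = top_event_probability(example_cuts, example_probs)
```

In this toy numbers set the single-event cut dominates, mirroring the paper's finding that human error drives the top event.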
Design and Analysis of a Flexible, Reliable Deep Space Life Support System
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2012-01-01
This report describes a flexible, reliable, deep space life support system design approach that uses either storage or recycling or both together. The design goal is to provide the needed life support performance with the required ultra reliability for the minimum Equivalent System Mass (ESM). Recycling life support systems used with multiple redundancy can have sufficient reliability for deep space missions but they usually do not save mass compared to mixed storage and recycling systems. The best deep space life support system design uses water recycling with sufficient water storage to prevent loss of crew if recycling fails. Since the amount of water needed for crew survival is a small part of the total water requirement, the required amount of stored water is significantly less than the total to be consumed. Water recycling with water, oxygen, and carbon dioxide removal material storage can achieve the high reliability of full storage systems with only half the mass of full storage and with less mass than the highly redundant recycling systems needed to achieve acceptable reliability. Improved recycling systems with lower mass and higher reliability could perform better than systems using storage.
NASA Astrophysics Data System (ADS)
Lamour, B. G.; Harris, R. T.; Roberts, A. G.
2010-06-01
Power system reliability problems are very difficult to solve because power systems are complex, geographically widely distributed, and influenced by numerous unexpected events. It is therefore imperative to employ the most efficient optimization methods in solving problems relating to power system reliability. This paper presents a reliability analysis and study of the power interruptions resulting from severe power outages in the Nelson Mandela Bay Municipality (NMBM), South Africa, and includes an overview of the important factors influencing reliability and methods to improve it. The Blue Horizon Bay 22 kV overhead line, supplying a 6.6 kV residential sector, was selected. It was established that 70% of the outages recorded at the source originate on this feeder.
Power transfer systems for future navy helicopters. Final report 25 Jun 70--28 Jun 72
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bossler, R.B. Jr.
1972-11-01
The purpose of this program was to conduct an analysis of helicopter power transfer systems (pts), both conventional and advanced concept type, with the objective of reducing specific weights and improving reliability beyond present values. The analysis satisfied requirements specified for a 200,000 pound cargo transport helicopter (CTH), a 70,000 pound heavy assault helicopter, and a 15,000 pound non-combat search and rescue helicopter. Four selected gearing systems (out of seven studied), optimized for lightest weight and equal reliability for the CTH, using component proportioning via stress and stiffness equations, had no significant difference between their aircraft payloads. All optimized pts were approximately 70% of statistically predicted weight. Reliability increase is predicted via gearbox derating using Weibull relationships. Among advanced concepts, the Turbine Integrated Geared Rotor was competitive for weight, technology availability and reliability increase but handicapped by a special engine requirement. The warm cycle system was found not competitive. Helicopter parametric weight analysis is shown. Advanced development plans are presented for the pts for the CTH, including the total pts system, selected pts components, and scale model flight testing in a Kaman HH2 helicopter.
NASA Astrophysics Data System (ADS)
McCrea, Terry
The Shuttle Processing Contract (SPC) workforce consists of Lockheed Space Operations Co. as prime contractor, with Grumman, Thiokol Corporation, and Johnson Controls World Services as subcontractors. During the design phase, reliability engineering is instrumental in influencing the development of systems that meet the Shuttle fail-safe program requirements. Reliability engineers accomplish this objective by performing FMEA (failure modes and effects analysis) to identify potential single failure points. When technology, time, or resources do not permit a redesign to eliminate a single failure point, the single failure point information is formatted into a change request and presented to senior management of SPC and NASA for risk acceptance. In parallel with the FMEA, safety engineering conducts a hazard analysis to assure that potential hazards to personnel are assessed. The combined effort (FMEA and hazard analysis) is published as a system assurance analysis. Special ground rules and techniques are developed to perform and present the analysis. The reliability program at KSC is vigorously pursued, and has been extremely successful. The ground support equipment and facilities used to launch and land the Space Shuttle maintain an excellent reliability record.
Reliability of digital reactor protection system based on extenics.
Zhao, Jing; He, Ya-Nan; Gu, Peng-Fei; Chen, Wei-Hua; Gao, Feng
2016-01-01
After the Fukushima nuclear accident, the safety of nuclear power plants (NPPs) has drawn widespread concern. The reliability of the reactor protection system (RPS) is directly related to the safety of NPPs; however, it is difficult to accurately evaluate the reliability of a digital RPS. Methods based on probability estimation carry uncertainties, cannot reflect the reliability status of the RPS dynamically, and do not support maintenance and troubleshooting. In this paper, a quantitative reliability analysis method based on extenics is proposed for the safety-critical digital RPS, by which the relationship between the reliability and response time of the RPS is constructed. As an example, the reliability of the RPS for a CPR1000 NPP is modeled and analyzed with the proposed method. The results show that the proposed method can estimate RPS reliability effectively and support maintenance and troubleshooting of digital RPSs.
The Evaluation Method of the Lightning Strike on Transmission Lines Aiming at Power Grid Reliability
NASA Astrophysics Data System (ADS)
Wen, Jianfeng; Wu, Jianwei; Huang, Liandong; Geng, Yinan; Yu, zhanqing
2018-01-01
Lightning protection of power systems has focused on reducing the flashover rate, distinguishing lines only by voltage level, without considering the functional differences between transmission lines or analyzing the effect of lightning strikes on grid reliability. As a result, the lightning protection design of ordinary transmission lines is over-engineered while that of key lines is insufficient. To solve this problem, an analysis method for lightning strikes on transmission lines aimed at power grid reliability is given. Full-wave process theory is used to analyze lightning back striking; the leader propagation model is used to describe the shielding failure of transmission lines. An index of power grid reliability is introduced, and the effect of transmission line faults on the reliability of the power system is discussed in detail.
Operational present status and reliability analysis of the upgraded EAST cryogenic system
NASA Astrophysics Data System (ADS)
Zhou, Z. W.; Zhang, Q. Y.; Lu, X. F.; Hu, L. B.; Zhu, P.
2017-12-01
Since the first commissioning in 2005, the cryogenic system for EAST (Experimental Advanced Superconducting Tokamak) has been cooled down and warmed up for thirteen experimental campaigns. In order to improve refrigeration efficiency and reliability, the EAST cryogenic system was upgraded gradually from 2012 to 2015 with new helium screw compressors and new dynamic gas bearing helium turbine expanders with eddy current brakes, replacing components with poor mechanical and operational performance. The fully upgraded cryogenic system was put into operation in the eleventh cool-down experiment and has been operated for the latest several experimental campaigns. The upgraded system has successfully coped with the normal operational modes during cool-down and 4.5 K steady-state operation under pulsed heat load from the tokamak, as well as abnormal fault modes including turbine protection stops. In this paper, the upgraded EAST cryogenic system, including its functional analysis and new cryogenic control networks, is presented in detail. Its operational status in the latest cool-down experiments is also presented and the system reliability is analyzed, showing a high reliability and low fault rate after the upgrade. Finally, future work needed to meet the higher reliability requirements of uninterrupted long-term experimental operation is proposed.
Mechanical system reliability for long life space systems
NASA Technical Reports Server (NTRS)
Kowal, Michael T.
1994-01-01
The creation of a compendium of mechanical limit states was undertaken in order to provide a reference base for the application of first-order reliability methods to mechanical systems in the context of the development of a system level design methodology. The compendium was conceived as a reference source specific to the problem of developing the noted design methodology, and not an exhaustive or exclusive compilation of mechanical limit states. The compendium is not intended to be a handbook of mechanical limit states for general use. The compendium provides a diverse set of limit-state relationships for use in demonstrating the application of probabilistic reliability methods to mechanical systems. The compendium is to be used in the reliability analysis of moderately complex mechanical systems.
[Examination of safety improvement by failure record analysis that uses reliability engineering].
Kato, Kyoichi; Sato, Hisaya; Abe, Yoshihisa; Ishimori, Yoshiyuki; Hirano, Hiroshi; Higashimura, Kyoji; Amauchi, Hiroshi; Yanakita, Takashi; Kikuchi, Kei; Nakazawa, Yasuo
2010-08-20
The effectiveness of maintenance checks of medical treatment systems, including start-of-work and end-of-day checks, for preventive maintenance and safety improvement was verified. In this research, data on device failures in multiple facilities were collected, and the trouble-repair records were analyzed with reliability engineering techniques. Data on the systems used in eight hospitals (8 general systems, 6 Angio systems, 11 CT systems, 8 MRI systems, 8 RI systems, and 9 radiation therapy systems) were analyzed. The data collection period was the nine months from April to December 2008. The analyzed items included: (1) mean time between failures (MTBF); (2) mean time to repair (MTTR); (3) mean down time (MDT); (4) the number of failures found by the morning check; and (5) failure occurrence time by modality. By introducing reliability engineering, the classification of breakdowns per device, their incidence, and their tendencies could be understood. Analysis, evaluation, and feedback on the failure history are useful to keep downtime to a minimum and to ensure safety.
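The first three quantities above follow directly from a repair log. A minimal sketch with a hypothetical log (assumed values, not the hospitals' data):

```python
# Hypothetical repair log for one modality: (uptime hours before a failure, repair hours)
records = [(350.0, 2.0), (410.0, 5.5), (280.0, 1.5), (500.0, 3.0)]

def mtbf(records):
    """Mean time between failures: average uptime preceding each failure."""
    return sum(up for up, _ in records) / len(records)

def mttr(records):
    """Mean time to repair: average repair duration."""
    return sum(down for _, down in records) / len(records)

def availability(records):
    """Fraction of total time the device was operable."""
    up = sum(u for u, _ in records)
    down = sum(d for _, d in records)
    return up / (up + down)
```

Tracking these per device class is what lets the classification and tendency of breakdowns be compared across modalities.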
System Risk Assessment and Allocation in Conceptual Design
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Smith, Natasha L.; Zang, Thomas A. (Technical Monitor)
2003-01-01
As aerospace systems continue to evolve in addressing newer challenges in air and space transportation, there exists a heightened priority for significant improvement in system performance, cost effectiveness, reliability, and safety. Tools, which synthesize multidisciplinary integration, probabilistic analysis, and optimization, are needed to facilitate design decisions allowing trade-offs between cost and reliability. This study investigates tools for probabilistic analysis and probabilistic optimization in the multidisciplinary design of aerospace systems. A probabilistic optimization methodology is demonstrated for the low-fidelity design of a reusable launch vehicle at two levels, a global geometry design and a local tank design. Probabilistic analysis is performed on a high fidelity analysis of a Navy missile system. Furthermore, decoupling strategies are introduced to reduce the computational effort required for multidisciplinary systems with feedback coupling.
Reliability of Long-Term Wave Conditions Predicted with Data Sets of Short Duration
1985-03-01
the validity and reliability of predicted probable wave heights obtained from data of limited duration. BACKGROUND: The basic steps listed by...interest to perform the analysis outlined in steps 2 to 5, the prediction would only be reliable for up to a 3year return period. For a 5-year data set...for long-term hindcast data . The data retrieval and analysis program known as the Sea State Engineering Analysis System (SEAS) makes handling of the
NASA Applications and Lessons Learned in Reliability Engineering
NASA Technical Reports Server (NTRS)
Safie, Fayssal M.; Fuller, Raymond P.
2011-01-01
Since the Shuttle Challenger accident in 1986, communities across NASA have been developing and extensively using quantitative reliability and risk assessment methods in their decision making process. This paper discusses several reliability engineering applications that NASA has used over the years to support the design, development, and operation of critical space flight hardware. Specifically, the paper discusses several reliability engineering applications used by NASA in areas such as risk management, inspection policies, component upgrades, reliability growth, integrated failure analysis, and physics-based probabilistic engineering analysis. In each of these areas, the paper provides a brief discussion of a case study to demonstrate the value added and the criticality of reliability engineering in supporting NASA project and program decisions to fly safely. Examples of these case studies are reliability-based life limit extension of Space Shuttle Main Engine (SSME) hardware, reliability-based inspection policies for the Auxiliary Power Unit (APU) turbine disc, probabilistic structural engineering analysis for reliability prediction of the SSME alternate turbo-pump development, the impact of ET foam reliability on Space Shuttle system risk, and reliability-based Space Shuttle upgrades for safety. Special attention is given in this paper to the physics-based probabilistic engineering analysis applications and their critical role in evaluating the reliability of NASA development hardware, including their potential use in a research and technology development environment.
Hawaii electric system reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva Monroy, Cesar Augusto; Loose, Verne William
2012-09-01
This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.
Hawaii Electric System Reliability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loose, Verne William; Silva Monroy, Cesar Augusto
2012-08-01
This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.
Models for evaluating the performability of degradable computing systems
NASA Technical Reports Server (NTRS)
Wu, L. T.
1982-01-01
Recent advances in multiprocessor technology have established the need for unified methods to evaluate computing system performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.
Orbiter Autoland reliability analysis
NASA Technical Reports Server (NTRS)
Welch, D. Phillip
1993-01-01
The Space Shuttle Orbiter is the only space reentry vehicle in which the crew is seated upright. This position presents some physiological effects requiring countermeasures to prevent a crewmember from becoming incapacitated. This also introduces a potential need for automated vehicle landing capability. Autoland is a primary procedure that was identified as a requirement for landing following an extended-duration orbiter mission. This report documents the results of the reliability analysis performed on the hardware required for an automated landing. A reliability block diagram was used to evaluate system reliability. The analysis considers the manual and automated landing modes currently available on the Orbiter. (Autoland is presently a backup system only.) Results of this study indicate a +/- 36 percent probability of successfully extending a nominal mission to 30 days. Enough variations were evaluated to verify that the reliability could be altered with mission planning and procedures. If the crew is modeled as being fully capable after 30 days, the probability of a successful manual landing is comparable to that of Autoland because much of the hardware is used for both manual and automated landing modes. The analysis indicates that the reliability for the manual mode is limited by the hardware and depends greatly on crew capability. Crew capability for a successful landing after 30 days has not yet been determined.
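A reliability block diagram of the kind used in this analysis reduces to products for series blocks and complements of products for parallel (redundant) blocks. A sketch with assumed reliabilities, not the Orbiter hardware values:

```python
def series(*blocks):
    """Reliability of blocks in series: every block must work."""
    r = 1.0
    for rb in blocks:
        r *= rb
    return r

def parallel(*blocks):
    """Reliability of redundant blocks in parallel: at least one must work."""
    q = 1.0
    for rb in blocks:
        q *= 1.0 - rb  # probability that this block fails
    return 1.0 - q

# Illustrative landing chain (assumed values): navigation sensors in series
# with dual-redundant flight computers and the actuator string.
r_system = series(0.99, parallel(0.95, 0.95), 0.98)
```

Nesting these two operators is enough to evaluate any series-parallel diagram, and it shows directly how redundancy in one block raises the system figure.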
NASA Technical Reports Server (NTRS)
Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.
2010-01-01
Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, within a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability, and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used to calculate failure rates for legacy subsystems or comparative components that will support Constellation. The results of the verification analysis will be used to assess compliance with requirements and to highlight design or performance shortcomings for further decision making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability, and maintainability analysis, and present findings and observations based on analysis leading to the Ground Operations Project Preliminary Design Review milestone.
Nateghi, Roshanak; Guikema, Seth D; Wu, Yue Grace; Bruss, C Bayan
2016-01-01
The U.S. federal government regulates the reliability of bulk power systems, while the reliability of power distribution systems is regulated at a state level. In this article, we review the history of regulating electric service reliability and study the existing reliability metrics, indices, and standards for power transmission and distribution networks. We assess the foundations of the reliability standards and metrics, discuss how they are applied to outages caused by large exogenous disturbances such as natural disasters, and investigate whether the standards adequately internalize the impacts of these events. Our reflections shed light on how existing standards conceptualize reliability, question the basis for treating large-scale hazard-induced outages differently from normal daily outages, and discuss whether this conceptualization maps well onto customer expectations. We show that the risk indices for transmission systems used in regulating power system reliability do not adequately capture the risks that transmission systems are prone to, particularly when it comes to low-probability high-impact events. We also point out several shortcomings associated with the way in which regulators require utilities to calculate and report distribution system reliability indices. We offer several recommendations for improving the conceptualization of reliability metrics and standards. We conclude that while the approaches taken in reliability standards have made considerable advances in enhancing the reliability of power systems and may be logical from a utility perspective during normal operation, existing standards do not provide a sufficient incentive structure for the utilities to adequately ensure high levels of reliability for end-users, particularly during large-scale events. © 2015 Society for Risk Analysis.
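The distribution reliability indices that regulators require utilities to report (the IEEE Std 1366 family) are simple ratios over outage records. A sketch with hypothetical outage data (the counts and durations below are assumed, not drawn from the article):

```python
# Hypothetical year of sustained outages: (customers interrupted, minutes out)
outages = [(1200, 90), (300, 45), (5000, 30)]
total_customers = 20000  # customers served by the utility (assumed)

def saifi(outages, n):
    """System Average Interruption Frequency Index: interruptions per customer."""
    return sum(c for c, _ in outages) / n

def saidi(outages, n):
    """System Average Interruption Duration Index: outage minutes per customer."""
    return sum(c * d for c, d in outages) / n

def caidi(outages, n):
    """Customer Average Interruption Duration Index: minutes per interruption."""
    return saidi(outages, n) / saifi(outages, n)
```

The article's critique turns on which rows enter `outages`: excluding "major event days" from the sum is exactly the practice that makes the reported indices insensitive to large-scale hazard-induced outages.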
The art of fault-tolerant system reliability modeling
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Johnson, Sally C.
1990-01-01
A step-by-step tutorial of the methods and tools used for the reliability analysis of fault-tolerant systems is presented. Emphasis is on the representation of architectural features in mathematical models. Details of the mathematical solution of complex reliability models are not presented. Instead the use of several recently developed computer programs--SURE, ASSIST, STEM, PAWS--which automate the generation and solution of these models is described.
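The kind of Markov reliability model that tools such as SURE solve can be illustrated at toy scale. The sketch below integrates a three-state model of a 2-of-3 redundant system with forward Euler and checks it against the closed form; the failure rate is an assumed value, and the model omits the fault-recovery transitions that motivate the specialized tools:

```python
import math

LAM = 1e-4  # assumed per-unit failure rate (per hour)

def tmr_reliability(t, steps=20_000):
    """Reliability of a 2-of-3 redundant system from a small Markov model.
    States: 0 = three units up, 1 = two units up, 2 = system failed (absorbing)."""
    p = [1.0, 0.0, 0.0]
    dt = t / steps
    for _ in range(steps):
        p0, p1, p2 = p
        p = [p0 - 3 * LAM * p0 * dt,                  # leave state 0 at rate 3*lam
             p1 + (3 * LAM * p0 - 2 * LAM * p1) * dt, # enter from 0, leave at 2*lam
             p2 + 2 * LAM * p1 * dt]                  # absorb system failures
    return p[0] + p[1]

def tmr_closed_form(t):
    """Same answer combinatorially: R_sys = 3R^2 - 2R^3 with R = exp(-LAM*t)."""
    r = math.exp(-LAM * t)
    return 3 * r**2 - 2 * r**3
```

Real fault-tolerant models add fast recovery transitions, which makes the system stiff; that numerical difficulty is what programs like SURE and STEM are built to handle.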
Reliability Analysis of Sealing Structure of Electromechanical System Based on Kriging Model
NASA Astrophysics Data System (ADS)
Zhang, F.; Wang, Y. M.; Chen, R. W.; Deng, W. W.; Gao, Y.
2018-05-01
The sealing performance of an aircraft electromechanical system has a great influence on flight safety, and the reliability of its typical seal structures has been analyzed by researchers. In this paper, we take a reciprocating seal structure as the research object to study structural reliability. Based on finite element numerical simulation, the contact stress between the rubber sealing ring and the cylinder wall is calculated, the relationship between the contact stress and the pressure of the hydraulic medium is built, and the friction forces under different working conditions are compared. Through co-simulation, an adaptive Kriging model obtained by the EFF learning mechanism is used to describe the failure probability of the seal ring and thereby evaluate the reliability of the sealing structure. This article proposes a new numerical evaluation approach for the reliability analysis of sealing structures and provides a theoretical basis for their optimal design.
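The failure probability that the paper obtains with an adaptive Kriging surrogate can be illustrated, in a much simpler form, by plain Monte Carlo sampling of a limit-state function. The threshold and stress distribution below are assumptions for illustration, not the paper's finite element results:

```python
import random

def limit_state(contact_stress, threshold=5.0):
    """g > 0: seal holds; g < 0: contact stress below the sealing threshold (assumed)."""
    return contact_stress - threshold

def failure_probability(n=200_000, seed=7):
    """Plain Monte Carlo estimate of the seal failure probability.
    Contact stress is assumed Normal(6.0, 0.5) MPa -- an illustrative choice."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n) if limit_state(rng.gauss(6.0, 0.5)) < 0)
    return fails / n
```

The Kriging approach replaces `limit_state` with a cheap surrogate trained on a few expensive finite element runs; the estimator on top of it is the same counting of samples with g < 0.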
Evaluating the Reliability of Emergency Response Systems for Large-Scale Incident Operations
Jackson, Brian A.; Faith, Kay Sullivan; Willis, Henry H.
2012-01-01
The ability to measure emergency preparedness—to predict the likely performance of emergency response systems in future events—is critical for policy analysis in homeland security. Yet it remains difficult to know how prepared a response system is to deal with large-scale incidents, whether it be a natural disaster, terrorist attack, or industrial or transportation accident. This research draws on the fields of systems analysis and engineering to apply the concept of system reliability to the evaluation of emergency response systems. The authors describe a method for modeling an emergency response system; identifying how individual parts of the system might fail; and assessing the likelihood of each failure and the severity of its effects on the overall response effort. The authors walk the reader through two applications of this method: a simplified example in which responders must deliver medical treatment to a certain number of people in a specified time window, and a more complex scenario involving the release of chlorine gas. The authors also describe an exploratory analysis in which they parsed a set of after-action reports describing real-world incidents, to demonstrate how this method can be used to quantitatively analyze data on past response performance. The authors conclude with a discussion of how this method of measuring emergency response system reliability could inform policy discussion of emergency preparedness, how system reliability might be improved, and the costs of doing so. PMID:28083267
Ultra Reliable Closed Loop Life Support for Long Space Missions
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Ewert, Michael K.
2010-01-01
Spacecraft human life support systems can achieve ultra reliability by providing sufficient spares to replace all failed components. The additional mass of spares for ultra reliability is approximately equal to the original system mass, provided that the original system reliability is not too low. Acceptable reliability can be achieved for the Space Shuttle and Space Station by preventive maintenance and by replacing failed units. However, on-demand maintenance and repair requires a logistics supply chain in place to provide the needed spares. In contrast, a Mars or other long space mission must take along all the needed spares, since resupply is not possible. Long missions must achieve ultra reliability, a very low failure rate per hour, since they will take years rather than weeks and cannot be cut short if a failure occurs. Also, distant missions have a much higher mass launch cost per kilogram than near-Earth missions. Achieving ultra reliable spacecraft life support systems with acceptable mass will require a well-planned and extensive development effort. Analysis must determine the reliability requirement and allocate it to subsystems and components. Ultra reliability requires reducing the intrinsic failure causes, providing spares to replace failed components and having "graceful" failure modes. Technologies, components, and materials must be selected and designed for high reliability. Long duration testing is needed to confirm very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The system must be designed, developed, integrated, and tested with system reliability in mind. Maintenance and reparability of failed units must not add to the probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass should start soon since it must be a long term effort.
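The "take along all the needed spares" requirement can be quantified with a Poisson failure model: a component survives the mission if the number of failures does not exceed the spares carried. A sketch with an assumed failure rate and mission length (illustrative values, not the paper's allocation):

```python
import math

def p_mission_success(failure_rate, mission_hours, spares):
    """P(failures <= spares) for one component whose failures arrive as a
    Poisson process with the given rate (per hour)."""
    mu = failure_rate * mission_hours  # expected number of failures
    return sum(math.exp(-mu) * mu**k / math.factorial(k)
               for k in range(spares + 1))

# Assumed example: 1e-4 failures/hour over a ~3-year (26,280 h) deep space mission
```

Each added spare multiplies the survival probability by less than the one before, which is why the mass of spares for ultra reliability roughly doubles the system mass rather than growing without bound.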
Spaceflight Ground Support Equipment Reliability & System Safety Data
NASA Technical Reports Server (NTRS)
Fernandez, Rene; Riddlebaugh, Jeffrey; Brinkman, John; Wilkinson, Myron
2012-01-01
Presented were Reliability Analysis, consisting primarily of Failure Modes and Effects Analysis (FMEA), and System Safety Analysis, consisting of Preliminary Hazards Analysis (PHA), performed to ensure that the CoNNeCT (Communications, Navigation, and Networking re- Configurable Testbed) Flight System was safely and reliably operated during its Assembly, Integration and Test (AI&T) phase. A tailored approach to the NASA Ground Support Equipment (GSE) standard, NASA-STD-5005C, involving the application of the appropriate Requirements, S&MA discipline expertise, and a Configuration Management system (to retain a record of the analysis and documentation) were presented. Presented were System Block Diagrams of selected GSE and the corresponding FMEA, as well as the PHAs. Also discussed are the specific examples of the FMEAs and PHAs being used during the AI&T phase to drive modifications to the GSE (via "redlining" of test procedures, and the placement of warning stickers to protect the flight hardware) before being interfaced to the Flight System. These modifications were necessary because failure modes and hazards were identified during the analysis that had not been properly mitigated. Strict Configuration Management was applied to changes (whether due to upgrades or expired calibrations) in the GSE by revisiting the FMEAs and PHAs to reflect the latest System Block Diagrams and Bill Of Material. The CoNNeCT flight system has been successfully assembled, integrated, tested, and shipped to the launch site without incident. This demonstrates that the steps taken to safeguard the flight system when it was interfaced to the various GSE were successful.
Quality management for space systems in ISRO
NASA Astrophysics Data System (ADS)
Satish, S.; Selva Raju, S.; Nanjunda Swamy, T. S.; Kulkarni, P. L.
2009-11-01
In a little over four decades, the Indian Space Program has carved a niche for itself with the unique application driven program oriented towards National development. The end-to-end capability approach of the space projects in the country calls for innovative practices and procedures in assuring the quality and reliability of space systems. The System Reliability (SR) efforts initiated at the start of the projects continue during the entire life cycle of the project encompassing design, development, realisation, assembly, testing and integration and during launch. Even after the launch, SR groups participate in the on-orbit evaluation of transponders in communication satellites and camera systems in remote sensing satellites. SR groups play a major role in identification, evaluation and inculcating quality practices in work centres involved in the fabrication of mechanical, electronics and propulsion systems required for Indian Space Research Organization's (ISRO's) launch vehicle and spacecraft projects. Also the reliability analysis activities like prediction, assessment and demonstration as well as de-rating analysis, Failure Mode Effects and Criticality Analysis (FMECA) and worst-case analysis are carried out by SR groups during various stages of project realisation. These activities provide the basis for project management to take appropriate techno-managerial decisions to ensure that the required reliability goals are met. Extensive test facilities catering to the needs of the space program have been set up. A system for consolidating the experience and expertise gained for issue of standards called product assurance specifications to be used in all ISRO centres has also been established.
Kumar, Mohit; Yadav, Shiv Prasad
2012-03-01
This paper addresses fuzzy system reliability analysis using different types of intuitionistic fuzzy numbers. Until now, in the literature, to analyze fuzzy system reliability, it has been assumed that the failure rates of all components of a system follow the same type of fuzzy set or intuitionistic fuzzy set. However, in practical problems, such a situation rarely occurs. Therefore, in the present paper, a new algorithm has been introduced to construct the membership function and non-membership function of the fuzzy reliability of a system whose components follow different types of intuitionistic fuzzy failure rates. Functions of intuitionistic fuzzy numbers are calculated to construct the membership function and non-membership function of fuzzy reliability via non-linear programming techniques. Using the proposed algorithm, membership functions and non-membership functions of the fuzzy reliability of a series system and a parallel system are constructed. Our study generalizes various works in the literature. Numerical examples are given to illustrate the proposed algorithm. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
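For comparison with the intuitionistic approach, the classical fuzzy reliability of a series system can be computed by interval arithmetic on alpha-cuts of triangular fuzzy numbers. This sketch handles membership functions only; the paper's non-membership construction via non-linear programming is not reproduced, and the component numbers are assumed:

```python
def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (left, mode, right) at level alpha."""
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def series_reliability_cut(components, alpha):
    """Alpha-cut of the fuzzy reliability of a series system: the interval
    product of the component alpha-cuts (all endpoints are in [0, 1])."""
    lo, hi = 1.0, 1.0
    for tri in components:
        l, h = alpha_cut(tri, alpha)
        lo *= l
        hi *= h
    return lo, hi

# Two components with assumed triangular fuzzy reliabilities
components = [(0.90, 0.95, 0.98), (0.85, 0.90, 0.95)]
```

Sweeping alpha from 0 to 1 traces out the membership function of the system reliability; at alpha = 1 the interval collapses to the product of the modes.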
Reliability techniques for computer executive programs
NASA Technical Reports Server (NTRS)
1972-01-01
Computer techniques for increasing the stability and reliability of executive and supervisory systems were studied. Program segmentation characteristics are discussed along with a validation system designed to retain the natural top-down outlook in coding. An analysis of redundancy techniques and rollback procedures is included.
NASA Technical Reports Server (NTRS)
Berg, Melanie; Label, Kenneth; Campola, Michael; Xapsos, Michael
2017-01-01
We propose a method for the application of single event upset (SEU) data towards the analysis of complex systems using transformed reliability models (from the time domain to the particle fluence domain) and space environment data.
NASA Technical Reports Server (NTRS)
1972-01-01
The design is reported of an advanced modular computer system designated the Automatically Reconfigurable Modular Multiprocessor System, which anticipates requirements for higher computing capacity and reliability for future spaceborne computers. Subjects discussed include: an overview of the architecture, mission analysis, synchronous and nonsynchronous scheduling control, reliability, and data transmission.
System reliability analysis through corona testing
NASA Technical Reports Server (NTRS)
Lalli, V. R.; Mueller, L. A.; Koutnik, E. A.
1975-01-01
A corona vacuum test facility for nondestructive testing of power system components was built in the Reliability and Quality Engineering Test Laboratories at the NASA Lewis Research Center. The facility was developed to simulate operating temperature and vacuum while monitoring corona discharges with residual gases. The facility is being used to test various high-voltage power system components.
Large-scale systems: Complexity, stability, reliability
NASA Technical Reports Server (NTRS)
Siljak, D. D.
1975-01-01
After showing that a complex dynamic system with a competitive structure has highly reliable stability, a class of noncompetitive dynamic systems for which competitive models can be constructed is defined. It is shown that such a construction is possible in the context of the hierarchic stability analysis. The scheme is based on the comparison principle and vector Liapunov functions.
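In the hierarchic (vector Liapunov) setting, stability of the aggregate comparison system reduces to an algebraic test: a Metzler aggregate matrix is Hurwitz exactly when its negative is a nonsingular M-matrix, i.e. when all leading principal minors of the negative are positive. A minimal sketch of that check (the example matrices are illustrative, not from the paper):

```python
import numpy as np

def is_metzler(A, tol=1e-12):
    """True if all off-diagonal entries of A are nonnegative."""
    A = np.asarray(A, dtype=float)
    off = A - np.diag(np.diag(A))
    return bool(np.all(off >= -tol))

def metzler_hurwitz(A):
    """For a Metzler matrix A: A is Hurwitz iff -A is a nonsingular
    M-matrix, i.e. every leading principal minor of -A is positive."""
    M = -np.asarray(A, dtype=float)
    n = M.shape[0]
    return all(np.linalg.det(M[:k, :k]) > 0 for k in range(1, n + 1))
```

For the aggregate matrix W = [[-2, 1], [1, -3]] the minors of -W are 2 and 5, so the hierarchic test certifies stability; [[-1, 2], [2, -1]] fails the test.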
Towards early software reliability prediction for computer forensic tools (case study).
Abu Talib, Manar
2016-01-01
Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component-based system. It is used, for instance, to analyze the reliability of the state machines of real-time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete-time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
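The architecture-based construction can be sketched as an absorbing discrete-time Markov chain in which components are transient states and Success/Failure are absorbing; tool reliability is the probability of absorbing in Success from the entry component. The components, transition probabilities, and success/failure split below are hypothetical, not taken from the Forensic Toolkit Imager case study:

```python
import numpy as np

# Sketch of an architecture-based (Cheung-style) reliability model.
# Q holds inter-component transition probabilities already scaled by
# per-component reliabilities; b holds each component's probability of
# transferring control straight to the Success state.

def tool_reliability(Q, to_success):
    """P(absorb in Success | start in component 0) for an absorbing DTMC."""
    n = Q.shape[0]
    N = np.linalg.inv(np.eye(n) - Q)      # fundamental matrix (I - Q)^-1
    return float((N @ to_success)[0])

Q = np.array([[0.0, 0.9, 0.0],            # comp 1 -> comp 2 w.p. 0.9
              [0.0, 0.0, 0.85],           # comp 2 -> comp 3 w.p. 0.85
              [0.0, 0.0, 0.0]])
b = np.array([0.05, 0.10, 0.95])          # direct transitions to Success
r = tool_reliability(Q, b)                # ≈ 0.867
```

The remaining probability mass at each component (e.g. 0.05 for component 1) is its chance of absorbing in Failure, so unreliability is 1 - r.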
Jackson, Brian A; Faith, Kay Sullivan
2013-02-01
Although significant progress has been made in measuring public health emergency preparedness, system-level performance measures are lacking. This report examines a potential approach to such measures for Strategic National Stockpile (SNS) operations. We adapted an engineering analytic technique used to assess the reliability of technological systems, failure mode and effects analysis, to assess preparedness. That technique, which includes systematic mapping of the response system and identification of possible breakdowns that affect performance, provides a path to use data from existing SNS assessment tools to estimate likely future performance of the system overall. Systems models of SNS operations were constructed and failure mode analyses were performed for each component. Linking data from existing assessments, including the technical assistance review and functional drills, to reliability assessment was demonstrated using publicly available information. The use of failure mode and effects estimates to assess overall response system reliability was demonstrated with a simple simulation example. Reliability analysis appears an attractive way to integrate information from the substantial investment in detailed assessments for stockpile delivery and dispensing to provide a view of likely future response performance.
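A toy version of the roll-up from failure-mode estimates to system-level response reliability for a serial process can be sketched as follows; the stage names and per-mode probabilities are invented for illustration, not SNS assessment data:

```python
# Sketch: each response stage has several failure modes; the stage
# succeeds only if no mode occurs, and the overall response succeeds
# only if every stage succeeds (independence assumed throughout).

stages = {
    "request":    [0.01, 0.005],           # per-mode failure probabilities
    "delivery":   [0.02, 0.01, 0.015],
    "dispensing": [0.05, 0.03],
}

def stage_success(mode_probs):
    q = 1.0
    for p in mode_probs:
        q *= (1.0 - p)                     # no failure mode occurs
    return q

system_reliability = 1.0
for modes in stages.values():
    system_reliability *= stage_success(modes)
```

Ranking stages by 1 - stage_success(...) points to where assessment and improvement effort buys the most system-level reliability; here the dispensing stage dominates.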
Reliability and Probabilistic Risk Assessment - How They Play Together
NASA Technical Reports Server (NTRS)
Safie, Fayssal M.; Stutts, Richard G.; Zhaofeng, Huang
2015-01-01
PRA methodology is one of the probabilistic analysis methods that NASA brought from the nuclear industry to assess the risk of LOM, LOV and LOC for launch vehicles. PRA is a system scenario-based risk assessment that uses a combination of fault trees, event trees, event sequence diagrams, and probability and statistical data to analyze the risk of a system, a process, or an activity. It is a process designed to answer three basic questions: What can go wrong? How likely is it? What is the severity of the degradation? Since 1986, NASA, along with industry partners, has conducted a number of PRA studies to predict overall launch vehicle risks. Planning Research Corporation conducted the first of these studies in 1988. In 1995, Science Applications International Corporation (SAIC) conducted a comprehensive PRA study. In July 1996, NASA conducted a two-year study (October 1996 - September 1998) to develop a model that provided the overall Space Shuttle risk and estimates of risk changes due to proposed Space Shuttle upgrades. After the Columbia accident, NASA conducted a PRA on the Shuttle External Tank (ET) foam. This study was the most focused and extensive risk assessment that NASA has conducted in recent years. It used a dynamic, physics-based, integrated system analysis approach to understand the integrated system risk due to ET foam loss in flight. Most recently, a PRA for the Ares I launch vehicle has been performed in support of the Constellation program. Reliability, on the other hand, addresses the loss of functions. In a broader sense, reliability engineering is a discipline that involves the application of engineering principles to the design and processing of products, both hardware and software, for meeting product reliability requirements or goals. It is a very broad design-support discipline. It has important interfaces with many other engineering disciplines. Reliability as a figure of merit (i.e., the metric) is the probability that an item will perform its intended function(s) for a specified mission profile. In general, the reliability metric can be calculated through analyses using reliability demonstration and reliability prediction methodologies. Reliability analysis is very critical for understanding component failure mechanisms and for identifying reliability-critical design and process drivers. The following sections discuss the PRA process and reliability engineering in detail and provide an application where reliability analysis and PRA were jointly used in a complementary manner to support a Space Shuttle flight risk assessment.
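The reliability figure of merit defined above can be sketched for the common constant-failure-rate (exponential) case; the failure rates and mission time below are illustrative placeholders, not values from the Shuttle or Ares studies:

```python
import math

# Sketch: reliability as a figure of merit, R(t) = exp(-lambda * t),
# for an item with a constant failure rate over a mission profile.

def reliability(failure_rate_per_hr, mission_hours):
    """Probability the item performs its function for the whole mission."""
    return math.exp(-failure_rate_per_hr * mission_hours)

def series_mission(rates, mission_hours):
    """Independent items in series: failure rates add."""
    return reliability(sum(rates), mission_hours)
```

For example, an item with a 1e-5 per-hour failure rate over a 300-hour mission has R = exp(-0.003), about 0.997; placing it in series with a second item multiplies the reliabilities, which is the same as summing the rates.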
Sepulveda, Esteban; Franco, José G; Trzepacz, Paula T; Gaviria, Ana M; Meagher, David J; Palma, José; Viñuelas, Eva; Grau, Imma; Vilella, Elisabet; de Pablo, Joan
2016-05-26
Information on validity and reliability of delirium criteria is necessary for clinicians, researchers, and further developments of DSM or ICD. We compare four DSM and ICD delirium diagnostic criteria versions, which were developed by consensus of experts, with a phenomenology-based natural diagnosis delineated using cluster analysis of delirium features in a sample with a high prevalence of dementia. We also measured inter-rater reliability of each system when applied by two evaluators from distinct disciplines. Cross-sectional analysis of 200 consecutive patients admitted to a skilled nursing facility, independently assessed within 24-48 h after admission with the Delirium Rating Scale-Revised-98 (DRS-R98) and for DSM-III-R, DSM-IV, DSM-5, and ICD-10 criteria for delirium. Cluster analysis (CA) delineated natural delirium and nondelirium reference groups using DRS-R98 items and then diagnostic systems' performance were evaluated against the CA-defined groups using logistic regression and crosstabs for discriminant analysis (sensitivity, specificity, percentage of subjects correctly classified by each diagnostic system and their individual criteria, and performance for each system when excluding each individual criterion are reported). Kappa Index (K) was used to report inter-rater reliability for delirium diagnostic systems and their individual criteria. 117 (58.5 %) patients had preexisting dementia according to the Informant Questionnaire on Cognitive Decline in the Elderly. CA delineated 49 delirium subjects and 151 nondelirium. Against these CA groups, delirium diagnosis accuracy was highest using DSM-III-R (87.5 %) followed closely by DSM-IV (86.0 %), ICD-10 (85.5 %) and DSM-5 (84.5 %). ICD-10 had the highest specificity (96.0 %) but lowest sensitivity (53.1 %). DSM-III-R had the best sensitivity (81.6 %) and the best sensitivity-specificity balance. DSM-5 had the highest inter-rater reliability (K =0.73) while DSM-III-R criteria were the least reliable. 
Using our CA-defined, phenomenologically based delirium designations as the reference standard, we found performance discordance among the four diagnostic systems when tested in subjects where comorbid dementia was prevalent. The most complex diagnostic systems have higher accuracy, and the newer DSM-5 has higher reliability. Our novel phenomenological approach to designing a delirium reference standard may be preferred to guide revisions of diagnostic systems in the future.
NASA Astrophysics Data System (ADS)
Kuznetsov, Michael V.
2006-05-01
Reliable interworking of automatic telecommunication systems, including the transmission systems of optical communication networks, requires dependable recognition of one- and two-frequency service signaling. Analysis of the time parameters of a received signal makes it possible to increase the reliability of detecting and recognizing service signaling against a background of speech.
Reliability analysis of the solar array based on Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Jianing, Wu; Shaoze, Yan
2011-07-01
The solar array is an important device used in the spacecraft, which influences the quality of in-orbit operation of the spacecraft and even the launches. This paper analyzes the reliability of the mechanical system and identifies the most vital subsystem of the solar array. The fault tree analysis (FTA) model is established according to the operating process of the mechanical system based on the DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra and the reliability of the solar array is calculated. The conclusion shows that the hinges are the most vital links between the solar arrays. By analyzing the structure importance (SI) of the hinge's FTA model, some fatal causes, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force, can be identified. Damage is the initial stage of the fault, so limiting damage is significant to prevent faults. Furthermore, recommendations for improving reliability associated with damage limitation are discussed, which can be used for the redesigning of the solar array and the reliability growth planning.
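Once the top event's Boolean expression has been reduced to minimal cut sets, the top-event probability and an importance measure per basic event follow mechanically. A minimal sketch using the first-order (rare-event) approximation and a Birnbaum-style importance; the event names and probabilities are invented, not DFH-3 values:

```python
# Sketch: top-event probability from minimal cut sets (rare-event
# approximation) and Birnbaum importance of a basic event.

def top_event_prob(cut_sets, p):
    """Sum over cut sets of the product of basic-event probabilities."""
    total = 0.0
    for cs in cut_sets:
        prod = 1.0
        for e in cs:
            prod *= p[e]
        total += prod
    return total

def birnbaum(cut_sets, p, event):
    """dP(top)/dp_event = P(top | event occurs) - P(top | event cannot)."""
    hi = dict(p, **{event: 1.0})
    lo = dict(p, **{event: 0.0})
    return top_event_prob(cut_sets, hi) - top_event_prob(cut_sets, lo)

p = {"seal": 1e-3, "spring": 5e-4, "friction": 2e-3}   # illustrative
cuts = [{"seal"}, {"spring", "friction"}]              # minimal cut sets
```

Here the single-event cut set makes the seal dominant: its Birnbaum importance is 1.0, far above that of the spring or friction events, mirroring how structure importance singles out the critical hinge elements.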
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1982-07-01
This report describes a risk study of the Browns Ferry, Unit 1, nuclear plant. The study is one of four such studies sponsored by the NRC Office of Research, Division of Risk Assessment, as part of its Interim Reliability Evaluation Program (IREP), Phase II. This report is contained in four volumes: a main report and three appendixes. Appendix B provides a description of Browns Ferry, Unit 1, plant systems and the failure evaluation of those systems as they apply to accidents at Browns Ferry. Information is presented concerning front-line system fault analysis; support system fault analysis; human error models and probabilities; and generic control circuit analyses.
Difficult Decisions Made Easier
NASA Technical Reports Server (NTRS)
2006-01-01
NASA missions are extremely complex and prone to sudden, catastrophic failure if equipment falters or if an unforeseen event occurs. For these reasons, NASA trains to expect the unexpected. It tests its equipment and systems in extreme conditions, and it develops risk-analysis tests to foresee any possible problems. The Space Agency recently worked with an industry partner to develop reliability analysis software capable of modeling complex, highly dynamic systems, taking into account variations in input parameters and the evolution of the system over the course of a mission. The goal of this research was multifold. It included performance and risk analyses of complex, multiphase missions, like the insertion of the Mars Reconnaissance Orbiter; reliability analyses of systems with redundant and/or repairable components; optimization analyses of system configurations with respect to cost and reliability; and sensitivity analyses to identify optimal areas for uncertainty reduction or performance enhancement.
Structural reliability assessment capability in NESSUS
NASA Technical Reports Server (NTRS)
Millwater, H.; Wu, Y.-T.
1992-01-01
The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.
Structural reliability assessment capability in NESSUS
NASA Astrophysics Data System (ADS)
Millwater, H.; Wu, Y.-T.
1992-07-01
The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.
NASA trend analysis procedures
NASA Technical Reports Server (NTRS)
1993-01-01
This publication is primarily intended for use by NASA personnel engaged in managing or implementing trend analysis programs. 'Trend analysis' refers to the observation of current activity in the context of the past in order to infer the expected level of future activity. NASA trend analysis was divided into 5 categories: problem, performance, supportability, programmatic, and reliability. Problem trend analysis uncovers multiple occurrences of historical hardware or software problems or failures in order to focus future corrective action. Performance trend analysis observes changing levels of real-time or historical flight vehicle performance parameters such as temperatures, pressures, and flow rates as compared to specification or 'safe' limits. Supportability trend analysis assesses the adequacy of the spaceflight logistics system; example indicators are repair-turn-around time and parts stockage levels. Programmatic trend analysis uses quantitative indicators to evaluate the 'health' of NASA programs of all types. Finally, reliability trend analysis attempts to evaluate the growth of system reliability based on a decreasing rate of occurrence of hardware problems over time. Procedures for conducting all five types of trend analysis are provided in this publication, prepared through the joint efforts of the NASA Trend Analysis Working Group.
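The reliability trend analysis described last, growth inferred from a decreasing rate of occurrence of failures, is commonly fit with the Duane postulate, under which cumulative failures follow N(t) ≈ a·t^b and a slope b < 1 indicates reliability growth. A minimal sketch with synthetic data (not a NASA data set):

```python
import math

# Sketch: Duane reliability-growth trend. Cumulative failure count
# N(t) ~ a * t^b, so log N is linear in log t; the fitted slope b
# below 1 signals a decreasing rate of occurrence of failures.

def duane_slope(times, counts):
    """Least-squares slope of log(counts) against log(times)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(n) for n in counts]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

times = [10.0, 100.0, 1000.0]
counts = [2.0 * t ** 0.6 for t in times]   # synthetic N(t) = 2 t^0.6
b = duane_slope(times, counts)             # ≈ 0.6, i.e. growth
```

With real problem-report data the points scatter around the log-log line, and the fitted slope becomes the growth indicator tracked over time.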
The Typical General Aviation Aircraft
NASA Technical Reports Server (NTRS)
Turnbull, Andrew
1999-01-01
The reliability of General Aviation aircraft is unknown. In order to "assist the development of future GA reliability and safety requirements", a reliability study needs to be performed. Before any study of General Aviation aircraft reliability begins, a typical aircraft that encompasses most general aviation characteristics needs to be defined. In this report, the typical general aviation aircraft is not only defined for the purpose of the follow-on reliability study, but is also separated, or "sifted", into several categories where individual analysis can be performed on reasonably independent systems. In this study, the typical General Aviation aircraft is a four-place, single-engine piston, all-aluminum fixed-wing certified aircraft with fixed tricycle landing gear and a cable-operated flight control system. The system breakdown of a GA aircraft "sifts" the aircraft systems and components into five categories: Powerplant, Airframe, Aircraft Control Systems, Cockpit Instrumentation Systems, and Electrical Systems. This breakdown was performed along the lines of a failure of the system: any component that caused a system to fail was considered a part of that system.
Bae, Sungwoo; Kim, Myungchin
2016-01-01
In order to realize a true WoT environment, a reliable power circuit is required to ensure interconnections among a range of WoT devices. This paper presents research on sensors and their effects on the reliability and response characteristics of power circuits in WoT devices. The presented research can be used in various power circuit applications, such as energy harvesting interfaces, photovoltaic systems, and battery management systems for the WoT devices. As power circuits rely on the feedback from voltage/current sensors, the system performance is likely to be affected by the sensor failure rates, sensor dynamic characteristics, and their interface circuits. This study investigated how the operational availability of the power circuits is affected by the sensor failure rates by performing a quantitative reliability analysis. In the analysis process, this paper also includes the effects of various reconstruction and estimation techniques used in power processing circuits (e.g., energy harvesting circuits and photovoltaic systems). This paper also reports how the transient control performance of power circuits is affected by sensor interface circuits. With the frequency domain stability analysis and circuit simulation, it was verified that the interface circuit dynamics may affect the transient response characteristics of power circuits. The verification results in this paper showed that the reliability and control performance of the power circuits can be affected by the sensor types, fault tolerant approaches against sensor failures, and the response characteristics of the sensor interfaces. The analysis results were also verified by experiments using a power circuit prototype. PMID:27608020
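The sensitivity of operational availability to sensor failure rates, and the benefit of software reconstruction/estimation that masks a fraction of sensor faults, can be sketched with the standard steady-state formula A = MTBF / (MTBF + MTTR); the rates, repair time, and coverage factor below are illustrative, not values from the study:

```python
# Sketch: steady-state operational availability of a power circuit
# whose downtime is driven by sensor and other component failures.

def availability(failure_rate_per_hr, repair_hours):
    """A = MTBF / (MTBF + MTTR) for a constant failure rate."""
    mtbf = 1.0 / failure_rate_per_hr
    return mtbf / (mtbf + repair_hours)

def with_estimator_fallback(sensor_rate, other_rate, repair_hours, coverage):
    """If a fraction `coverage` of sensor faults is masked by a software
    estimator (e.g. sensorless reconstruction), only the uncovered sensor
    faults plus other faults force downtime."""
    effective = (1.0 - coverage) * sensor_rate + other_rate
    return availability(effective, repair_hours)
```

With a sensor failure rate of 1e-4 per hour, other failures at 1e-5 per hour, an 8-hour repair time, and 90% estimator coverage, the effective downtime-causing rate drops to 2e-5 per hour, which is exactly the availability of a circuit whose raw failure rate were that low.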
Reliability Analysis of Systems Subject to First-Passage Failure
NASA Technical Reports Server (NTRS)
Lutes, Loren D.; Sarkani, Shahram
2009-01-01
An obvious goal of reliability analysis is the avoidance of system failure. However, it is generally recognized that it is often not feasible to design a practical or useful system for which failure is impossible. Thus it is necessary to use techniques that estimate the likelihood of failure based on modeling the uncertainty about such items as the demands on and capacities of various elements in the system. This usually involves the use of probability theory, and a design is considered acceptable if it has a sufficiently small probability of failure. This report contains findings of analyses of systems subject to first-passage failure.
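The probability-of-failure estimate described above can be illustrated with a Monte Carlo first-passage calculation; the demand process (a Gaussian random walk) and the capacity threshold are stand-ins chosen for the sketch, not the report's models:

```python
import random

# Monte Carlo sketch of first-passage failure: demand follows a Gaussian
# random walk, and the system fails the first time demand exceeds a
# fixed capacity at any point during the mission.

def first_passage_prob(capacity, steps, sigma, trials, seed=1):
    """Fraction of simulated missions whose demand peak reaches capacity."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        x = peak = 0.0
        for _ in range(steps):
            x += rng.gauss(0.0, sigma)
            peak = max(peak, x)
        if peak >= capacity:          # threshold crossed at least once
            failures += 1
    return failures / trials
```

A design is then judged acceptable when this estimated probability falls below the target; note that raising the capacity margin (or shortening the mission) lowers the first-passage probability, which the simulation reproduces directly.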
System reliability analysis through corona testing
NASA Technical Reports Server (NTRS)
Lalli, V. R.; Mueller, L. A.; Koutnik, E. A.
1975-01-01
In the Reliability and Quality Engineering Test Laboratory at the NASA Lewis Research Center, a nondestructive corona-vacuum test facility for testing power system components was developed using commercially available hardware. The test facility was developed to simulate operating temperature and vacuum while monitoring corona discharges with residual gases. This facility is being used to test various high-voltage power system components.
NASA Technical Reports Server (NTRS)
Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron
1994-01-01
This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.
Developing Ultra Reliable Life Support for the Moon and Mars
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2009-01-01
Recycling life support systems can achieve ultra reliability by using spares to replace failed components. The added mass for spares is approximately equal to the original system mass, provided the original system reliability is not very low. Acceptable reliability can be achieved for the space shuttle and space station by preventive maintenance and by replacing failed units. However, this maintenance and repair depends on a logistics supply chain that provides the needed spares. The Mars mission must take all the needed spares at launch. The Mars mission also must achieve ultra reliability, a very low failure rate per hour, since it requires years rather than weeks and cannot be cut short if a failure occurs. Also, the Mars mission has a much higher mass launch cost per kilogram than shuttle or station. Achieving ultra reliable space life support with acceptable mass will require a well-planned and extensive development effort. Analysis must define the reliability requirement and allocate it to subsystems and components. Technologies, components, and materials must be designed and selected for high reliability. Extensive testing is needed to ascertain very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The systems must be designed, produced, integrated, and tested without impairing system reliability. Maintenance and failed unit replacement should not introduce any additional probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass must start soon if it is to produce timely results for the moon and Mars.
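The spares-based route to ultra reliability has a compact probabilistic core: if unit failures arrive as a Poisson process, the mission survives when the number of failures does not exceed the spares carried. A minimal sketch (the failure rate and mission length are illustrative, not values from the paper):

```python
import math

# Sketch: probability that a unit plus k identical spares covers the
# mission, assuming failures follow a Poisson process and replacement
# itself introduces no additional failure probability.

def survival_with_spares(failure_rate_per_hr, mission_hours, spares):
    lam = failure_rate_per_hr * mission_hours     # expected failures
    return sum(math.exp(-lam) * lam ** i / math.factorial(i)
               for i in range(spares + 1))
```

With a 1e-4 per-hour failure rate over a roughly 20,000-hour Mars mission (two expected failures), a unit alone survives with probability e^-2 ≈ 0.14, while carrying four spares raises that to about 0.95, showing how spares mass buys down failure probability.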
Tutorial: Advanced fault tree applications using HARP
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.
1993-01-01
Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.
The reliability of an instrumented start block analysis system.
Tor, Elaine; Pease, David L; Ball, Kevin A
2015-02-01
The swimming start is highly influential to overall competition performance. Therefore, it is paramount to develop reliable methods to perform accurate biomechanical analysis of start performance for training and research. The Wetplate Analysis System is a custom-made force plate system developed by the Aquatic Testing, Training and Research Unit of the Australian Institute of Sport (AIS ATTRU). This sophisticated system combines both force data and 2D digitization to measure a number of kinetic and kinematic parameter values in an attempt to evaluate start performance. Fourteen elite swimmers performed two maximal effort dives (performance was defined as time from start signal to 15 m) over two separate testing sessions. Intraclass correlation coefficients (ICCs) were used to determine each parameter's reliability. The kinetic parameters all had ICCs greater than 0.9 except the time of peak vertical force (0.742). This may have been due to variations in movement initiation after the starting signal between trials. The kinematic and time parameters also had ICCs greater than 0.9, except for the time of maximum depth (0.719). This parameter was lower because the swimmers varied their depth between trials. Based on the high ICC scores for all parameters, the Wetplate Analysis System is suitable for biomechanical analysis of swimming starts.
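The ICC computation behind such test-retest studies can be sketched with the one-way random-effects form, ICC(1,1) = (MSB - MSW) / (MSB + (k-1)·MSW), computed from an n-subjects-by-k-trials table; the tiny data sets below are invented for illustration:

```python
# Sketch: one-way random-effects intraclass correlation ICC(1,1)
# for n subjects each measured on k trials.

def icc_oneway(data):
    n = len(data)                      # subjects
    k = len(data[0])                   # trials per subject
    grand = sum(sum(row) for row in data) / (n * k)
    means = [sum(row) / k for row in data]
    # between-subjects and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(data, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfectly repeatable trials give an ICC of 1.0; trial-to-trial variation (as with the time of maximum depth here) pulls the ICC down toward 0.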
An Evidential Reasoning-Based CREAM to Human Reliability Analysis in Maritime Accident Process.
Wu, Bing; Yan, Xinping; Wang, Yang; Soares, C Guedes
2017-10-01
This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. This modified CREAM is developed to precisely quantify the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in the existing CREAM models. Moreover, this article views maritime accident development from the sequential perspective, where a scenario- and barrier-based framework is proposed to describe the maritime accident process. This evidential reasoning-based CREAM approach together with the proposed accident development framework are applied to human reliability analysis of a ship capsizing accident. It will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice. © 2017 Society for Risk Analysis.
Reliability Assessment for Low-cost Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Freeman, Paul Michael
Existing low-cost unmanned aerospace systems are unreliable, and engineers must blend reliability analysis with fault-tolerant control in novel ways. This dissertation introduces the University of Minnesota unmanned aerial vehicle flight research platform, a comprehensive simulation and flight test facility for reliability and fault-tolerance research. An industry-standard reliability assessment technique, the failure modes and effects analysis, is performed for an unmanned aircraft. Particular attention is afforded to the control surface and servo-actuation subsystem. Maintaining effector health is essential for safe flight; failures may lead to loss of control incidents. Failure likelihood, severity, and risk are qualitatively assessed for several effector failure modes. Design changes are recommended to improve aircraft reliability based on this analysis. Most notably, the control surfaces are split, providing independent actuation and dual-redundancy. The simulation models for control surface aerodynamic effects are updated to reflect the split surfaces using a first-principles geometric analysis. The failure modes and effects analysis is extended by using a high-fidelity nonlinear aircraft simulation. A trim state discovery is performed to identify the achievable steady, wings-level flight envelope of the healthy and damaged vehicle. Tolerance of elevator actuator failures is studied using familiar tools from linear systems analysis. This analysis reveals significant inherent performance limitations for candidate adaptive/reconfigurable control algorithms used for the vehicle. Moreover, it demonstrates how these tools can be applied in a design feedback loop to make safety-critical unmanned systems more reliable. Control surface impairments that do occur must be quickly and accurately detected. 
This dissertation also considers fault detection and identification for an unmanned aerial vehicle using model-based and model-free approaches and applies those algorithms to experimental faulted and unfaulted flight test data. Flight tests are conducted with actuator faults that affect the plant input and sensor faults that affect the vehicle state measurements. A model-based detection strategy is designed and uses robust linear filtering methods to reject exogenous disturbances, e.g. wind, while providing robustness to model variation. A data-driven algorithm is developed to operate exclusively on raw flight test data without physical model knowledge. The fault detection and identification performance of these complementary but different methods is compared. Together, enhanced reliability assessment and multi-pronged fault detection and identification techniques can help to bring about the next generation of reliable low-cost unmanned aircraft.
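The qualitative risk-ranking step of a failure modes and effects analysis can be sketched with the usual risk priority number, the product of likelihood, severity, and detection scores; the failure modes and scores below are invented for illustration, not the dissertation's actual assessments:

```python
# Sketch: FMEA risk ranking. Each mode gets likelihood, severity, and
# detection scores (1-10); their product is the risk priority number
# (RPN), and high-RPN modes drive design changes first.

failure_modes = [
    # (name, likelihood, severity, detection)
    ("elevator servo jam",       3, 9, 4),
    ("aileron linkage slop",     5, 6, 3),
    ("airspeed sensor bias",     4, 7, 6),
    ("battery cell degradation", 6, 5, 2),
]

def ranked_by_rpn(modes):
    scored = [(name, l * s * d) for name, l, s, d in modes]
    return sorted(scored, key=lambda t: t[1], reverse=True)
```

Design responses like splitting the control surfaces for dual-redundancy act on the likelihood and severity scores, so re-running the ranking after a proposed change shows whether it actually retires the top risks.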
Survey of critical failure events in on-chip interconnect by fault tree analysis
NASA Astrophysics Data System (ADS)
Yokogawa, Shinji; Kunii, Kyousuke
2018-07-01
In this paper, a framework based on reliability physics is proposed for applying fault tree analysis (FTA) to the on-chip interconnect system of a semiconductor. By integrating expert knowledge and experience regarding the failure possibilities of basic events, critical issues of on-chip interconnect reliability are evaluated by FTA. In particular, FTA is used to identify the minimal cut sets with high risk priority. Critical events affecting on-chip interconnect reliability are identified and discussed from the viewpoint of long-term reliability assessment. The moisture impact is evaluated as an external event.
Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems
NASA Technical Reports Server (NTRS)
Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.
2005-01-01
The current standards for handling uncertainty in control systems use interval bounds to define the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods such as μ-analysis can lead to overly conservative controller design, because worst-case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strengths of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method is applied to classical response analysis as well as to analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean and variance of response cumulative distribution functions are shown. Results of the probabilistic analysis of a missile pitch control system and a non-collocated mass-spring system show the added information provided by this hybrid analysis.
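The contrast between interval bounds and probabilistic treatment can be illustrated with a plain Monte Carlo sketch: instead of a worst-case interval, the damping ratio of a standard second-order system is sampled from a distribution and step-response overshoot statistics are estimated. This is illustrative only; the paper's hybrid reliability method is not reproduced here, and the distribution parameters are invented:

```python
# Monte Carlo propagation of parameter uncertainty through a control metric.
# zeta ~ N(0.5, 0.05), clipped to a physical range; values are hypothetical.
import math
import random
import statistics

def overshoot(zeta):
    """Percent overshoot of a standard second-order step response."""
    return 100.0 * math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta * zeta))

random.seed(1)
samples = [overshoot(min(max(random.gauss(0.5, 0.05), 0.05), 0.95))
           for _ in range(20000)]
mean = statistics.fmean(samples)
std = statistics.stdev(samples)
print(f"overshoot mean={mean:.2f}%  std={std:.2f}%")
```

The resulting distribution of overshoot carries likelihood information that a pure interval analysis discards.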
Advanced Reactor PSA Methodologies for System Reliability Analysis and Source Term Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, D.; Brunett, A.; Passerini, S.
Beginning in 2015, a project was initiated to update and modernize the probabilistic safety assessment (PSA) of the GE-Hitachi PRISM sodium fast reactor. This project is a collaboration between GE-Hitachi and Argonne National Laboratory (Argonne), funded in part by the U.S. Department of Energy. Specifically, the role of Argonne is to assess the reliability of passive safety systems, complete a mechanistic source term calculation, and provide component reliability estimates. The assessment of passive system reliability focused on the performance of the Reactor Vessel Auxiliary Cooling System (RVACS) and the inherent reactivity feedback mechanisms of the metal fuel core. The mechanistic source term assessment attempted to provide a sequence-specific source term evaluation to quantify offsite consequences. Lastly, the reliability assessment focused on components specific to the sodium fast reactor, including electromagnetic pumps, intermediate heat exchangers, the steam generator, and sodium valves and piping.
Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model
NASA Astrophysics Data System (ADS)
Yuan, Zhongda; Deng, Junxiang; Wang, Dawei
2018-02-01
The aero-engine is a complex mechanical-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models are widely used. Owing to the diversity of engine failure modes, a single Weibull distribution model introduces a large error. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimate more accurate, thus greatly improving the precision of the mixed-distribution reliability model. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.
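The mixed Weibull reliability function described above can be sketched in a few lines. The weights and shape/scale parameters below are illustrative placeholders, not values fitted to engine data:

```python
# Mixed (multi-component) Weibull reliability:
#   R(t) = sum_i w_i * exp(-(t/eta_i)^beta_i),  with weights summing to 1.
import math

def mixed_weibull_reliability(t, components):
    """components: list of (weight, beta_shape, eta_scale) triples."""
    assert abs(sum(w for w, _, _ in components) - 1.0) < 1e-9
    return sum(w * math.exp(-((t / eta) ** beta)) for w, beta, eta in components)

# Hypothetical early-failure mode (beta < 1) mixed with a wear-out mode (beta > 1),
# capturing two distinct failure mechanisms a single Weibull cannot.
mix = [(0.3, 0.8, 500.0), (0.7, 2.5, 2000.0)]
for t in (100.0, 1000.0, 3000.0):
    print(t, round(mixed_weibull_reliability(t, mix), 4))
```

Each component of the mixture models one failure mode, which is exactly why the mixture outperforms a single Weibull when several modes coexist.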
Failure rate and reliability of the KOMATSU hydraulic excavator in surface limestone mine
NASA Astrophysics Data System (ADS)
Harish Kumar N., S.; Choudhary, R. P.; Murthy, Ch. S. N.
2018-04-01
A model with a bathtub-shaped failure rate function is helpful in the reliability analysis of any system, and particularly in reliability-associated preventive maintenance. The usual Weibull distribution, however, is not capable of modelling the complete lifecycle of a system with a bathtub-shaped failure rate function. In this paper, a failure rate and reliability analysis of the KOMATSU hydraulic excavator/shovel in a surface mine is presented, with the aim of improving the reliability and decreasing the failure rate of each subsystem of the shovel through preventive maintenance. The bathtub-shaped model for the shovel can also be seen as a simplification of the Weibull distribution.
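One common way to obtain the bathtub shape that a single Weibull cannot produce is to sum a decreasing infant-mortality term, a constant random-failure term, and an increasing wear-out term. The coefficients below are invented for illustration, not fitted to shovel data:

```python
# A bathtub-shaped hazard (failure rate) as a sum of three terms.
# All parameter values are hypothetical.
import math

def bathtub_hazard(t, infant=0.05, tau=200.0, const=0.002, wear=1e-9, k=2.0):
    return infant * math.exp(-t / tau) + const + wear * t ** k

rates = [bathtub_hazard(t) for t in (0.0, 1000.0, 10000.0)]
print([round(r, 5) for r in rates])  # high early, low mid-life, rising with wear
```

The early and late hazard exceed the mid-life hazard, reproducing the three phases (burn-in, useful life, wear-out) that motivate phase-specific preventive maintenance.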
NASA Astrophysics Data System (ADS)
Zheng, W.; Gao, J. M.; Wang, R. X.; Chen, K.; Jiang, Y.
2017-12-01
This paper puts forward a new method of technical characteristics deployment based on Reliability Function Deployment (RFD), developed by analysing the advantages and shortcomings of related research on mechanical reliability design. The matrix decomposition structure of RFD is used to describe the correlation between failure mechanisms, soft failures and hard failures. By considering the correlation of multiple failure modes, the reliability loss of one failure mode with respect to the whole part is defined, and a calculation and analysis model for reliability loss is presented. According to the reliability loss, the reliability index value of the whole part is allocated to each failure mode. On the basis of the deployment of reliability index values, the inverse reliability method is employed to acquire the values of the technical characteristics. The feasibility and validity of the proposed method are illustrated by a development case of a machining centre’s transmission system.
Research program for experiment M133
NASA Technical Reports Server (NTRS)
Frost, J. D., Jr.
1972-01-01
The development of the automatic data-acquisition and sleep-analysis system is reported. The purpose was consultation and evaluation in the transition of the Skylab M133 Sleep-Monitoring Experiment equipment from prototype to flight status; review of problems associated with acquisition and on-line display of data in near-real time via spacecraft telemetry; and development of laboratory facilities and design of equipment to assure reliable playback and analysis of analog data. The existing prototype system was modified, and the changes improve the performance of the analysis circuitry and increase its reliability. These modifications are useful for pre- and postflight analysis, but are not now proposed for the inflight system. There were improvements in the EEG recording cap, some of which will be incorporated into the flight hardware.
Analysis of strain gage reliability in F-100 jet engine testing at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Holanda, R.
1983-01-01
A reliability analysis was performed on 64 strain gage systems mounted on the 3 rotor stages of the fan of a YF-100 engine. The strain gages were used in a 65 hour fan flutter research program which included about 5 hours of blade flutter. The analysis was part of a reliability improvement program. Eighty-four percent of the strain gages survived the test and performed satisfactorily. A post test analysis determined most failure causes. Five failures were caused by open circuits, three failed gages showed elevated circuit resistance, and one gage circuit was grounded. One failure was undetermined.
System principles, mathematical models and methods to ensure high reliability of safety systems
NASA Astrophysics Data System (ADS)
Zaslavskyi, V.
2017-04-01
Modern safety and security systems are composed of a large number of components designed for detection, localization, tracking, collecting, and processing of information from systems of monitoring, telemetry, control, etc. They are required to be highly reliable in order to correctly perform data aggregation, processing and analysis for subsequent decision-making support. During the design and construction phases of manufacturing such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure highly reliable signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the types of components and various constraints on resources, should be considered. Different component types perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of task completion and eliminates common-cause failures. We consider the type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models, and algorithms can be used for solving problems of optimal redundancy on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.
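A toy version of the optimal redundancy problem with component type choice can be brute-forced for small instances. The component reliabilities, costs, and budget below are hypothetical, and the two-subsystem series structure is a deliberate simplification of the paper's large-dimension two-level formulation:

```python
# Brute-force type-and-redundancy selection for a two-stage series system
# under a cost budget. All figures are illustrative.
from itertools import product

# (reliability per unit, cost per unit) for two available component types
TYPES = [(0.90, 3.0), (0.95, 5.0)]
BUDGET = 20.0

def parallel(r, n):
    """Reliability of n redundant units: 1 - (1-r)^n."""
    return 1.0 - (1.0 - r) ** n

best = None
# each subsystem independently picks a type index and a redundancy level 1..3
for (t1, n1), (t2, n2) in product(product(range(2), (1, 2, 3)), repeat=2):
    r1, c1 = TYPES[t1]
    r2, c2 = TYPES[t2]
    cost = n1 * c1 + n2 * c2
    if cost <= BUDGET:
        rel = parallel(r1, n1) * parallel(r2, n2)  # series of two stages
        if best is None or rel > best[0]:
            best = (rel, cost, (t1, n1), (t2, n2))
print(best)
```

Real instances of the problem are far too large for enumeration, which is why the paper develops dedicated algorithms; the sketch only fixes the objective and constraint structure.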
Model of load balancing using reliable algorithm with multi-agent system
NASA Astrophysics Data System (ADS)
Afriansyah, M. F.; Somantri, M.; Riyadi, M. A.
2017-04-01
Massive technology development scales linearly with the growth of internet users, which increases network traffic activity and with it the load on the system. The usage of a reliable algorithm and mobile agents in distributed load balancing is a viable solution to handle the load issue on a large-scale system. A mobile agent collects resource information and can migrate according to a given task. We propose a reliable load balancing algorithm using least time first byte (LFB) combined with information from the mobile agent. The methodology consisted of defining the system identification, specification requirements, network topology, and system infrastructure design. The simulation sent 1800 requests over 10 s from users to the server and collected the data for analysis. The software simulation was based on Apache JMeter, observing the response time and reliability of each server, and the results were compared with an existing method. Results of the simulation show that the LFB method with a mobile agent can balance load efficiently across all backend servers without bottlenecks, with low risk of server overload, and reliably.
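The selection rule behind an LFB-style balancer can be sketched as "send the next request to the backend with the smallest recent time-to-first-byte". The class and server names below are hypothetical, and the exponential smoothing stands in for whatever feedback the mobile agents actually report:

```python
# A minimal sketch of least-time-first-byte (LFB) style backend selection.
class LFBBalancer:
    def __init__(self, servers):
        # exponentially smoothed first-byte latency per server (seconds)
        self.latency = {s: 0.05 for s in servers}

    def report(self, server, first_byte_time, alpha=0.3):
        """Agent/probe feedback updates the smoothed latency estimate."""
        old = self.latency[server]
        self.latency[server] = (1 - alpha) * old + alpha * first_byte_time

    def pick(self):
        """Route the next request to the fastest-responding backend."""
        return min(self.latency, key=self.latency.get)

lb = LFBBalancer(["srv-a", "srv-b", "srv-c"])
lb.report("srv-a", 0.20)   # srv-a is getting slow
lb.report("srv-c", 0.01)   # srv-c answers quickly
print(lb.pick())
```

Because slow backends accumulate higher smoothed latency, traffic drains away from them before they overload, which is the behaviour the simulation measured.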
Ross, Amy M; Ilic, Kelley; Kiyoshi-Teo, Hiroko; Lee, Christopher S
2017-12-26
The purpose of this study was to establish the psychometric properties of the new 16-item leadership environment scale. The leadership environment scale was based on complexity science concepts relevant to complex adaptive health care systems. A workforce survey of direct-care nurses was conducted (n = 1,443) in Oregon. Confirmatory factor analysis, exploratory factor analysis, concordant validity test and reliability tests were conducted to establish the structure and internal consistency of the leadership environment scale. Confirmatory factor analysis indices approached acceptable thresholds of fit with a single factor solution. Exploratory factor analysis showed improved fit with a two-factor model solution; the factors were labelled 'influencing relationships' and 'interdependent system supports'. Moderate to strong convergent validity was observed between the leadership environment scale/subscales and both the nursing workforce index and the safety organising scale. Reliability of the leadership environment scale and subscales was strong, with all alphas ≥.85. The leadership environment scale is structurally sound and reliable. Nursing management can employ adaptive complexity leadership attributes, measure their influence on the leadership environment, subsequently modify system supports and relationships and improve the quality of health care systems. The leadership environment scale is an innovative fit to complex adaptive systems and how nurses act as leaders within these systems. © 2017 John Wiley & Sons Ltd.
Reliability analysis of repairable systems using Petri nets and vague Lambda-Tau methodology.
Garg, Harish
2013-01-01
The main objective of the paper is to develop a methodology, named vague Lambda-Tau, for reliability analysis of repairable systems. A Petri net (PN) is applied to represent the asynchronous and concurrent processing of the system instead of fault tree analysis. To enhance the relevance of the reliability study, vague set theory is used for representing the failure rates and repair times instead of classical (crisp) or fuzzy set theory, because vague sets are characterized by a truth-membership function and a false-membership (non-membership) function such that the sum of both values is less than 1. The proposed methodology involves qualitative modeling using PN and quantitative analysis using the Lambda-Tau method of solution, with the basic events represented by intuitionistic fuzzy numbers with triangular membership functions. Sensitivity analysis has also been performed and the effects on system MTBF are addressed. The methodology improves on the shortcomings of the existing probabilistic approaches and gives a better understanding of the system behavior through its graphical representation. The washing unit of a paper mill situated in the northern part of India, producing approximately 200 tons of paper per day, has been considered to demonstrate the proposed approach. The results may be helpful to plant personnel for analyzing the system's behavior and improving performance by adopting suitable maintenance strategies. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
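The Lambda-Tau part of such a methodology combines failure rates (λ) and repair times (τ) through gate expressions; with triangular fuzzy numbers, a crude sketch applies the crisp formulas componentwise to (low, mid, high) triples. This componentwise shortcut is an approximation of proper α-cut interval arithmetic, and the numbers below are invented:

```python
# Lambda-Tau gate expressions on triangular fuzzy numbers, represented as
# (low, mid, high) triples combined componentwise (a rough approximation).
def t_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def t_mul(a, b):
    return tuple(x * y for x, y in zip(a, b))

def or_gate_rate(l1, l2):
    """OR gate (series logic): failure rates add."""
    return t_add(l1, l2)

def and_gate_rate(l1, t1, l2, t2):
    """AND gate (parallel logic): lambda1 * lambda2 * (tau1 + tau2)."""
    return t_mul(t_mul(l1, l2), t_add(t1, t2))

lam1, tau1 = (0.9e-3, 1.0e-3, 1.1e-3), (4.5, 5.0, 5.5)   # illustrative
lam2, tau2 = (1.8e-3, 2.0e-3, 2.2e-3), (2.7, 3.0, 3.3)
print(or_gate_rate(lam1, lam2))
print(and_gate_rate(lam1, tau1, lam2, tau2))
```

The spread of each output triple carries the uncertainty of the inputs through the system model, which is the point of using fuzzy/vague quantities instead of crisp rates.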
Reliability and validity analysis of the open-source Chinese Foot and Ankle Outcome Score (FAOS).
Ling, Samuel K K; Chan, Vincent; Ho, Karen; Ling, Fona; Lui, T H
2017-12-21
The aim was to develop the first reliable and validated open-source outcome scoring system in the Chinese language for foot and ankle problems. The English FAOS was translated into Chinese following standard protocols. First, two forward-translations were created separately; these were then combined into a preliminary version by an expert committee and subsequently back-translated into English. The process was repeated until the original and back-translations were congruent. This version was then field-tested on actual patients, who provided feedback for modification. The final Chinese FAOS version was then tested for reliability and validity. Reliability analysis was performed on 20 subjects, while validity analysis was performed on 50 subjects. The tools used to validate the Chinese FAOS were the SF36 and the Pain Numeric Rating Scale (NRS). Internal consistency between the FAOS subgroups was measured using Cronbach's alpha. Spearman's correlation was calculated between each subgroup of the FAOS, the SF36 and the NRS. The Chinese FAOS passed both reliability and validity testing, meaning it is reliable, internally consistent and correlates positively with the SF36 and the NRS. The Chinese FAOS is a free, open-source scoring system that can be used to provide a relatively standardised outcome measure for foot and ankle studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
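Cronbach's alpha, the internal-consistency statistic used above, is straightforward to compute from an item-score matrix. The six-respondent, three-item data below are made up purely to exercise the formula:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(total score)).
def cronbach_alpha(items):
    """items: list of per-item score lists, all over the same respondents."""
    k = len(items)
    def var(xs):                       # sample variance (n-1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    total = [sum(col) for col in zip(*items)]   # each respondent's total score
    item_var = sum(var(it) for it in items)
    return k / (k - 1) * (1.0 - item_var / var(total))

scores = [
    [3, 4, 4, 2, 5, 4],   # item 1 across six respondents (hypothetical)
    [3, 5, 4, 2, 4, 4],   # item 2
    [2, 4, 5, 3, 5, 3],   # item 3
]
print(round(cronbach_alpha(scores), 3))
```

Values of alpha at or above roughly .85, as reported for the FAOS subscales, indicate strong internal consistency.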
78 FR 58295 - Commission Information Collection Activities (FERC-725A); Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-23
... submitting the information collection FERC-725A, Mandatory Reliability Standards for the Bulk Power System... collection analysis associated with its approval of Reliability Standard EOP-004-2, in an order published in... solicitation and is making this notation in its submission to OMB. \\1\\ North American Electric Reliability Corp...
Interactive Image Analysis System Design,
1982-12-01
This report describes a design for an interactive image analysis system (IIAS), which implements terrain data extraction techniques. The design employs commercially available, state-of-the-art minicomputers and image display devices with proven software to achieve a cost-effective, reliable image analysis system. Additionally, the system is fully capable of supporting many generic types of image analysis and data processing, and is modularly...
NASA Astrophysics Data System (ADS)
Gromek, Katherine Emily
A novel computational and inference framework of physics-of-failure (PoF) reliability modeling for complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that system-level reliability modeling constitutes inferences from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference in the modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and autonomy properties of the intelligent agents, and modeling of interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as the agents' ability to self-activate, deactivate or completely redefine their role in the analysis. This property, together with the ability to model interacting failure mechanisms of the system elements, makes agent autonomy fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.
Feasibility and demonstration of a cloud-based RIID analysis system
NASA Astrophysics Data System (ADS)
Wright, Michael C.; Hertz, Kristin L.; Johnson, William C.; Sword, Eric D.; Younkin, James R.; Sadler, Lorraine E.
2015-06-01
A significant limitation in the operational utility of handheld and backpack radioisotope identifiers (RIIDs) is the inability of their onboard algorithms to accurately and reliably identify the isotopic sources of the measured gamma-ray energy spectrum. A possible solution is to move the spectral analysis computations to an external device, the cloud, where significantly greater capabilities are available. The implementation and demonstration of a prototype cloud-based RIID analysis system have shown this type of system to be feasible with currently available communication and computational technology. A system study has shown that the potential user community could derive significant benefits from an appropriately implemented cloud-based analysis system and has identified the design and operational characteristics required by the users and stakeholders for such a system. A general description of the hardware and software necessary to implement reliable cloud-based analysis, the value of the cloud expressed by the user community, and the aspects of the cloud implemented in the demonstrations are discussed.
2002-06-01
projects are converted into bricks and mortar, as Figure 5 illustrates. Making major changes in LCC after projects are turned over to production is... matter experts (SMEs) in the parts, materials, and processes functional area. Data gathering and analysis were conducted through structured interviews... The analysis synthesized feedback and searched for collective issues from the various SMEs on managing PM&P Program requirements, the
Halim, Isa; Arep, Hambali; Kamat, Seri Rahayu; Abdullah, Rohana; Omar, Abdul Rahman; Ismail, Ahmad Rasdan
2014-06-01
Prolonged standing has been hypothesized as a vital contributor to discomfort and muscle fatigue in the workplace. The objective of this study was to develop a decision support system that could provide systematic analysis and solutions to minimize the discomfort and muscle fatigue associated with prolonged standing. The integration of object-oriented programming and a Model Oriented Simultaneous Engineering System were used to design the architecture of the decision support system. Validation of the decision support system was carried out in two manufacturing companies. The validation process showed that the decision support system produced reliable results. The decision support system is a reliable advisory tool for providing analysis and solutions to problems related to the discomfort and muscle fatigue associated with prolonged standing. Further testing of the decision support system is suggested before it is used commercially.
A Delay-Aware and Reliable Data Aggregation for Cyber-Physical Sensing
Zhang, Jinhuan; Long, Jun; Zhang, Chengyuan; Zhao, Guihu
2017-01-01
Physical information sensed by various sensors in a cyber-physical system should be collected for further operation. In many applications, data aggregation should take reliability and delay into consideration. To address these problems, a novel Tiered Structure Routing-based Delay-Aware and Reliable Data Aggregation scheme named TSR-DARDA for spherical physical objects is proposed. By dividing the spherical network constructed by dispersed sensor nodes into circular tiers with specifically designed widths and cells, TSR-DARDA tries to enable as many nodes as possible to transmit data simultaneously. In order to ensure transmission reliability, lost packets are retransmitted. Moreover, to minimize the latency while maintaining reliability for data collection, in-network aggregation and broadcast techniques are adopted to deal with the transmission between data collecting nodes in the outer layer and their parent data collecting nodes in the inner layer. Thus, the optimization problem is transformed into minimizing the delay under reliability constraints by controlling the system parameters. To demonstrate the effectiveness of the proposed scheme, we have conducted extensive theoretical analysis and comparisons to evaluate the performance of TSR-DARDA. The analysis and simulations show that TSR-DARDA achieves lower delay while satisfying reliability requirements. PMID:28218668
An image analysis system for near-infrared (NIR) fluorescence lymph imaging
NASA Astrophysics Data System (ADS)
Zhang, Jingdan; Zhou, Shaohua Kevin; Xiang, Xiaoyan; Rasmussen, John C.; Sevick-Muraca, Eva M.
2011-03-01
Quantitative analysis of lymphatic function is crucial for understanding the lymphatic system and diagnosing the associated diseases. Recently, a near-infrared (NIR) fluorescence imaging system was developed for real-time imaging of lymphatic propulsion by intradermal injection of a microdose of a NIR fluorophore distal to the lymphatics of interest. However, the previous analysis software is underdeveloped, requiring extensive time and effort to analyze a NIR image sequence. In this paper, we develop a number of image processing techniques to automate the data analysis workflow, including an object tracking algorithm to stabilize the subject and remove motion artifacts, an image representation named the flow map to characterize lymphatic flow more reliably, and an automatic algorithm to compute lymph velocity and frequency of propulsion. By integrating all these techniques into a system, the analysis workflow significantly reduces the amount of required user interaction and improves the reliability of the measurement.
NASA Astrophysics Data System (ADS)
Varlataya, S. K.; Evdokimov, V. E.; Urzov, A. Y.
2017-11-01
This article describes the process of calculating the reliability of a certain complex information security system (CISS), using the example of a technospheric security management model, as well as the ability to determine the frequency of its maintenance from the system reliability parameter, which allows one to assess man-made risks and to forecast natural and man-made emergencies. The relevance of this article is explained by the fact that CISS reliability is closely related to information security (IS) risks. Since reliability (or resiliency) is a probabilistic characteristic of the system showing the possibility of its failure (and, as a consequence, the emergence of threats to the protected information assets), it is seen as a component of the overall IS risk in the system. As is known, there is a certain acceptable level of IS risk assigned by experts for a particular information system; when reliability is a risk-forming factor, maintaining an acceptable risk level should be carried out through routine analysis of the condition of the CISS and its elements and their timely service. The article presents a reliability parameter calculation for a CISS with a mixed type of element connection, and a formula for the dynamics of such a system's reliability is written. The chart of CISS reliability change is an S-shaped curve which can be divided into three periods: an almost invariably high level of reliability, uniform reliability reduction, and an almost invariably low level of reliability. Setting the minimum acceptable level of reliability, the graph (or formula) can be used to determine the period of time during which the system would meet requirements. Ideally, this period should not be longer than the first period of the graph. Thus, the proposed method of calculating the CISS maintenance frequency helps to solve the voluminous and critical task of information asset risk management.
Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System
NASA Astrophysics Data System (ADS)
He, Qing; Li, Hong
Belt conveyors are among the most important devices for transporting bulk solid material over long distances. Dynamic analysis is the key to deciding whether a design is technically rational, safe and reliable in operation, and economically feasible. It is very important to study dynamic properties, improve efficiency and productivity, and guarantee safe, reliable, and stable conveyor running. The dynamic research and applications of large-scale belt conveyors are discussed, and the main research topics and the state of the art of dynamic research on belt conveyors are analyzed. Future work should focus on dynamic analysis, modeling and simulation of the main components and the whole system, nonlinear modeling, and simulation and vibration analysis of large-scale conveyor systems.
Optimizing preventive maintenance policy: A data-driven application for a light rail braking system.
Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel
2017-10-01
This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and get further insights through a dedicated sensitivity analysis. The model is then used in a sequential optimization framework determining preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine the optimal maintenance policy: the same system availability and reliability can be achieved with a 30% maintenance cost reduction, by prolonging the intervals and re-grouping maintenance actions.
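The trade-off between PM interval and cost under a Weibull degradation model can be sketched with the classic age-replacement cost-rate formula; this is a textbook stand-in for the paper's sequential optimization, and the shape, scale, and cost figures below are invented:

```python
# Age-replacement sketch: expected cost per unit time as a function of the
# PM interval T, C(T) = (c_p*R(T) + c_f*(1-R(T))) / integral_0^T R(t) dt,
# minimized numerically. All parameter values are hypothetical.
import math

BETA, ETA = 2.5, 1000.0          # Weibull shape/scale (wear-out regime)
C_P, C_F = 1.0, 10.0             # planned PM cost vs (larger) failure cost

def R(t):
    return math.exp(-((t / ETA) ** BETA))

def cost_rate(T, steps=500):
    h = T / steps                # trapezoidal integral of R over [0, T]
    integral = sum((R(i * h) + R((i + 1) * h)) / 2.0 * h for i in range(steps))
    return (C_P * R(T) + C_F * (1.0 - R(T))) / integral

best_T = min(range(50, 2001, 10), key=cost_rate)
print(best_T, round(cost_rate(best_T), 5))
```

The cost rate is high for very short intervals (too much planned maintenance) and for very long ones (too many failures), with an interior optimum, which mirrors how prolonging intervals can cut cost without losing reliability up to a point.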
14 CFR 25.1435 - Hydraulic systems.
Code of Federal Regulations, 2013 CFR
2013-01-01
... on the hydraulic system(s), and/or subsystem(s) and elements, except that analysis may be used in place of or to supplement testing, where the analysis is shown to be reliable and appropriate. All... system(s), subsystem(s), or element(s) must be subjected to performance, fatigue, and endurance tests...
NASA Astrophysics Data System (ADS)
Wu, Jianing; Yan, Shaoze; Xie, Liyang
2011-12-01
To address the impact of solar array anomalies, it is important to analyze solar array reliability. This paper establishes fault tree analysis (FTA) and fuzzy reasoning Petri net (FRPN) models of a solar array mechanical system and analyzes reliability to identify the mechanisms of solar array faults. The indexes final truth degree (FTD) and cosine matching function (CMF) are employed to evaluate the importance and influence of different faults. An improved reliability analysis method is thus developed based on the ranking of FTD and CMF. An example is analyzed using the proposed method. The analysis results show that the harsh thermal environment and impacts from particles in space are the most important causes of solar array faults. Furthermore, other fault modes and the corresponding improvement methods are discussed. The results reported in this paper could be useful to spacecraft designers, particularly in redesigning the solar array and scheduling its reliability growth plan.
Forward Period Analysis Method of the Periodic Hamiltonian System.
Wang, Pengfei
2016-01-01
Using forward period analysis (FPA), we obtain the period of a Morse oscillator and of a mathematical pendulum system to an accuracy of 100 significant digits. From these results, the long-term [0, 10^60] (time unit) solutions, ranging from the Planck time to the age of the universe, are computed reliably and quickly with a parallel multiple-precision Taylor series (PMT) scheme. The application of FPA to periodic systems can greatly reduce the computation time of long-term reliable simulations. This scheme provides an efficient way to generate reference solutions, against which long-term simulations using other schemes can be tested.
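The pendulum period referred to above has a classical closed form, T = 4·sqrt(L/g)·K(sin(θ₀/2)), where K is the complete elliptic integral of the first kind, computable via the quadratically convergent arithmetic-geometric mean (AGM). The double-precision sketch below is not the authors' FPA/PMT scheme; it only illustrates how a high-accuracy reference period can be produced, since the same AGM recurrence in multiple-precision arithmetic reaches ~100 digits in a handful of iterations.

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean; quadratic convergence means a
    multiple-precision version needs only a few extra iterations."""
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

def ellipk(k):
    """Complete elliptic integral of the first kind:
    K(k) = pi / (2 * agm(1, sqrt(1 - k^2)))."""
    return math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))

def pendulum_period(theta0, length=1.0, g=9.81):
    """Exact period of a mathematical pendulum with amplitude theta0 (rad):
    T = 4 * sqrt(L/g) * K(sin(theta0 / 2))."""
    return 4.0 * math.sqrt(length / g) * ellipk(math.sin(theta0 / 2.0))
```

In the small-angle limit this reproduces the familiar 2π·sqrt(L/g), and the period grows with amplitude as expected.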
Electric system restructuring and system reliability
NASA Astrophysics Data System (ADS)
Horiuchi, Catherine Miller
In 1996 the California legislature passed AB 1890, explicitly defining economic benefits and detailing specific mechanisms for initiating a partial restructuring of the state's electric system. Critics have since sought re-regulation and proponents have asked for patience as the new institutions and markets take shape. Other states' electric system restructuring activities have been tempered by real and perceived problems in the California model. This study examines the reduced regulatory controls and new constraints introduced in California's limited restructuring model, using utility and regulatory agency records from the 1990s to investigate the effects of new institutions and practices on system reliability for the state's five largest public and private utilities. Logit and negative binomial regressions indicate a negative impact from the California model of restructuring on system reliability as measured by customer interruptions. Time series analysis of outage data could not predict the wholesale power market collapse and the subsequent rolling blackouts in early 2001; inclusion of near-outage reliability disturbances---load shedding and energy emergencies---provided a measure of forewarning. Analysis of system disruptions, generation capacity and demand, and the role of purchased power challenges conventional wisdom on the causality of California's power problems. The quantitative analysis was supplemented by a targeted survey of electric system restructuring participants. Findings suggest each utility and the organization controlling the state's electric grid provided protection from power outages comparable to pre-restructuring operations through 2000; however, this reliability has come at an inflated cost, resulting in reduced system purchases and decreased marginal protection. The historic margin of operating safety has fully eroded, increasing mandatory load shedding and emergency declarations for voluntary and mandatory conservation.
Proposed remedies focused on state-funded contracts and government-managed power authorities may not help, as the findings suggest pricing models, market uncertainty, interjurisdictional conflict and an inability to respond to market perturbations are more significant contributors to reduced regional generation availability than the particular contract mechanisms and funding sources used for power purchases.
NASA Technical Reports Server (NTRS)
Simmons, D. B.
1975-01-01
The DOMONIC system has been modified to run on the Univac 1108 and the CDC 6600 as well as the IBM 370 computer system. The DOMONIC monitor system has been implemented to gather data which can be used to optimize the DOMONIC system and to predict the reliability of software developed using DOMONIC. The areas of quality metrics, error characterization, program complexity, program testing, validation and verification are analyzed. Two software reliability models have been developed: one for estimating program completion levels and one on which to base system acceptance. The DAVE system, which performs flow analysis and error detection, has been converted from the University of Colorado CDC 6400/6600 computer to the IBM 360/370 computer system for use with the DOMONIC system.
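A reliability model for "program completion levels" of the kind mentioned above is often expressed as a software reliability growth model. The sketch below uses the Goel-Okumoto model as a stand-in (the abstract does not name DOMONIC's actual model), with hypothetical parameter values.

```python
import math

def go_mean_value(t, a, b):
    """Goel-Okumoto expected cumulative faults found by test time t:
    m(t) = a * (1 - exp(-b t))."""
    return a * (1.0 - math.exp(-b * t))

def conditional_reliability(x, t, a, b):
    """Probability of failure-free operation over (t, t+x] given testing to t:
    R(x | t) = exp(-(m(t+x) - m(t)))."""
    return math.exp(-(go_mean_value(t + x, a, b) - go_mean_value(t, a, b)))

# Hypothetical fitted parameters (illustrative only)
a, b = 120.0, 0.05        # ~120 latent faults, detection rate 0.05 per test-week

# Estimated completion level after 40 weeks of testing: fraction of faults found
completion = go_mean_value(40.0, a, b) / a
```

An acceptance criterion can then be phrased on either quantity, e.g. "release when the estimated completion level exceeds 85% and R(1 week | t) exceeds a target".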
Reliability Driven Space Logistics Demand Analysis
NASA Technical Reports Server (NTRS)
Knezevic, J.
1995-01-01
Accurate selection of the quantity of logistic support resources has a strong influence on mission success, system availability and the cost of ownership. At the same time, the accurate prediction of these resources depends on the accurate prediction of the reliability measures of the items involved. This paper presents a method for the advanced and accurate calculation of the reliability measures of complex space systems, which form the basis for determining the logistics resources needed during the operational life or mission of space systems. The applicability of the method is demonstrated through several examples.
On Space Exploration and Human Error: A Paper on Reliability and Safety
NASA Technical Reports Server (NTRS)
Bell, David G.; Maluf, David A.; Gawdiak, Yuri
2005-01-01
NASA space exploration should largely address a problem class in reliability and risk management stemming primarily from human error, system risk and multi-objective trade-off analysis, by conducting research into system complexity, risk characterization and modeling, and system reasoning. In general, in every mission we can distinguish risk in three possible ways: a) known-known, b) known-unknown, and c) unknown-unknown. Space exploration will almost certainly experience known and unknown risks similar to those embedded in the Apollo, Shuttle, or Station missions unless something alters how NASA perceives and manages safety and reliability.
Bulk electric system reliability evaluation incorporating wind power and demand side management
NASA Astrophysics Data System (ADS)
Huang, Dange
Electric power systems are experiencing dramatic changes with respect to structure, operation and regulation and are facing increasing pressure due to environmental and societal constraints. Bulk electric system reliability is an important consideration in power system planning, design and operation, particularly in the new competitive environment. A wide range of methods have been developed to perform bulk electric system reliability evaluation. Theoretically, sequential Monte Carlo simulation can include all aspects and contingencies in a power system and can be used to produce an informative set of reliability indices. It has become a practical and viable technique for large-system reliability assessment due to the growth of computing power, and is used in the studies described in this thesis. The well-being approach used in this research provides the opportunity to integrate an accepted deterministic criterion into a probabilistic framework. This research work includes the investigation of important factors that impact bulk electric system adequacy evaluation and security constrained adequacy assessment using the well-being analysis framework. Load forecast uncertainty is an important consideration in an electrical power system. This research includes load forecast uncertainty considerations in bulk electric system reliability assessment, and the effects on system, load point and well-being indices and reliability index probability distributions are examined. There has been increasing worldwide interest in the utilization of wind power as a renewable energy source over the last two decades due to enhanced public awareness of the environment. Increasing penetration of wind power has significant impacts on power system reliability, and security analyses become more uncertain due to the unpredictable nature of wind power.
The effects of wind power additions in generating and bulk electric system reliability assessment considering site wind speed correlations and the interactive effects of wind power and load forecast uncertainty on system reliability are examined. The concept of the security cost associated with operating in the marginal state in the well-being framework is incorporated in the economic analyses associated with system expansion planning including wind power and load forecast uncertainty. Overall reliability cost/worth analyses including security cost concepts are applied to select an optimal wind power injection strategy in a bulk electric system. The effects of the various demand side management measures on system reliability are illustrated using the system, load point, and well-being indices, and the reliability index probability distributions. The reliability effects of demand side management procedures in a bulk electric system including wind power and load forecast uncertainty considerations are also investigated. The system reliability effects due to specific demand side management programs are quantified and examined in terms of their reliability benefits.
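The sequential Monte Carlo technique central to the thesis can be illustrated on a toy generation model: each unit alternates exponentially distributed up and down residence times, and the loss-of-load probability (LOLP) is estimated by counting hours in which surviving capacity falls short of load. This is a minimal sketch with a hypothetical four-unit system, not the test systems used in such studies.

```python
import random

def sequential_mcs(units, load, hours, seed=1):
    """Sequential Monte Carlo adequacy assessment of a two-state generation
    model. Each unit dict carries capacity 'cap', failure rate 'lam' (1/h)
    while up, and repair rate 'mu' (1/h) while down. Time advances in
    one-hour steps; residence times are sampled exponentially. Returns the
    estimated LOLP over the simulated horizon."""
    rng = random.Random(seed)
    loss_hours = 0
    # per-unit state: [is_up, hours remaining in the current state]
    state = [[True, rng.expovariate(u["lam"])] for u in units]
    for _ in range(hours):
        capacity = sum(u["cap"] for u, s in zip(units, state) if s[0])
        if capacity < load:
            loss_hours += 1
        for u, s in zip(units, state):
            s[1] -= 1.0
            if s[1] <= 0.0:
                s[0] = not s[0]
                # new residence time: repair time if now down, uptime if now up
                s[1] = rng.expovariate(u["mu"] if not s[0] else u["lam"])
    return loss_hours / hours

# Hypothetical system: four identical 50 MW units, MTTF 1000 h, MTTR 50 h
units = [{"cap": 50.0, "lam": 1 / 1000, "mu": 1 / 50} for _ in range(4)]
lolp = sequential_mcs(units, load=120.0, hours=200000)
```

With unit availability near 0.95 and three of four units needed to cover the load, the analytic LOLP is on the order of 1%, which the simulation approaches as the horizon grows.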
Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis
NASA Technical Reports Server (NTRS)
Dezfuli, Homayoon; Kelly, Dana; Smith, Curtis; Vedros, Kurt; Galyean, William
2009-01-01
This document, Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis, is intended to provide guidelines for the collection and evaluation of risk and reliability-related data. It is aimed at scientists and engineers familiar with risk and reliability methods and provides a hands-on approach to the investigation and application of a variety of risk and reliability data assessment methods, tools, and techniques. This document provides both a broad perspective on data collection and evaluation issues and a narrow focus on the methods to implement a comprehensive information repository. The topics addressed herein cover the fundamentals of how data and information are to be used in risk and reliability analysis models and their potential role in decision making. Understanding these topics is essential to attaining the risk-informed decision making environment sought by NASA requirements and procedures such as NPR 8000.4 (Agency Risk Management Procedural Requirements), NPR 8705.05 (Probabilistic Risk Assessment Procedures for NASA Programs and Projects), and the System Safety requirements of NPR 8715.3 (NASA General Safety Program Requirements).
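The simplest data-assessment case covered by guidelines of this kind is the conjugate beta-binomial update: a Beta prior on a demand failure probability combined with observed failure counts. The numbers below are hypothetical and purely illustrative.

```python
# Conjugate Bayesian update for a failure-on-demand probability p.
# Prior: p ~ Beta(alpha0, beta0). Data: f failures in n demands.
# Posterior: p ~ Beta(alpha0 + f, beta0 + n - f).
alpha0, beta0 = 1.0, 99.0        # hypothetical prior with mean 0.01
n, f = 250, 4                    # hypothetical test record

alpha1, beta1 = alpha0 + f, beta0 + (n - f)
posterior_mean = alpha1 / (alpha1 + beta1)   # pulled from 0.01 toward f/n = 0.016
```

The posterior mean sits between the prior mean and the observed failure fraction, weighted by the relative strength of prior and data, which is exactly the behavior such guidance documents use to motivate Bayesian parameter estimation.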
Technique for Early Reliability Prediction of Software Components Using Behaviour Models
Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad
2016-01-01
Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748
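The reliability computation over a component probabilistic dependency graph can be sketched as a traversal of an acyclic graph in which each node is a component with its own reliability and each edge carries a transition probability, in the style of architecture-based models such as Cheung's. This is an illustrative recursion (the call stack plays the role of the paper's explicit stack), with hypothetical component names and numbers.

```python
def system_reliability(graph, rel, node="start"):
    """Architecture-based system reliability over an acyclic dependency graph.
    graph maps node -> list of (successor, transition probability);
    rel maps node -> component reliability; 'end' is the absorbing success
    state. The system reliability is the probability of reaching 'end' with
    every visited component working, assuming independent components."""
    if node == "end":
        return 1.0
    return rel[node] * sum(p * system_reliability(graph, rel, nxt)
                           for nxt, p in graph[node])

# Hypothetical three-component CPDG (illustrative values only)
graph = {"start": [("B", 0.6), ("C", 0.4)],
         "B": [("end", 1.0)],
         "C": [("end", 1.0)]}
rel = {"start": 0.99, "B": 0.98, "C": 0.97}

r = system_reliability(graph, rel)   # 0.99 * (0.6*0.98 + 0.4*0.97)
```

Sensitivity analysis of the kind reported in the paper then amounts to perturbing one entry of `rel` and observing the change in `r`.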
Illinois high-speed rail four-quadrant gate reliability assessment
DOT National Transportation Integrated Search
2009-10-01
The Federal Railroad Administration (FRA) tasked the John A. Volpe National Transportation Systems Center (Volpe Center) to conduct a reliability analysis of the four-quadrant gate/vehicle detection equipment installed on the future high-speed rail (...
PSHFT - COMPUTERIZED LIFE AND RELIABILITY MODELLING FOR TURBOPROP TRANSMISSIONS
NASA Technical Reports Server (NTRS)
Savage, M.
1994-01-01
The computer program PSHFT calculates the life of a variety of aircraft transmissions. A generalized life and reliability model is presented for turboprop and parallel shaft geared prop-fan aircraft transmissions. The transmission life and reliability model is a combination of the individual reliability models for all the bearings and gears in the main load paths. The bearing and gear reliability models are based on the statistical two-parameter Weibull failure distribution method and classical fatigue theories. The computer program developed to calculate the transmission model is modular. In its present form, the program can analyze five different transmission arrangements. Moreover, the program can be easily modified to include additional transmission arrangements. PSHFT uses the properties of a common block two-dimensional array to separate the component and transmission property values from the analysis subroutines. The rows correspond to specific components, with the first row containing the values for the entire transmission. Columns contain the values for specific properties. Since the subroutines (which determine the transmission life and dynamic capacity) interface solely with this property array, they are separated from any specific transmission configuration. The system analysis subroutines work in an identical manner for all transmission configurations considered. Thus, other configurations can be added to the program by simply adding component property determination subroutines. PSHFT consists of a main program, a series of configuration-specific subroutines, generic component property analysis subroutines, system analysis subroutines, and a common block. The main program selects the routines to be used in the analysis and sequences their operation.
The series of configuration specific subroutines input the configuration data, perform the component force and life analyses (with the help of the generic component property analysis subroutines), fill the property array, call up the system analysis routines, and finally print out the analysis results for the system and components. PSHFT is written in FORTRAN 77 and compiled on a MicroSoft FORTRAN compiler. The program will run on an IBM PC AT compatible with at least 104k bytes of memory. The program was developed in 1988.
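Combining two-parameter Weibull component models into a transmission life, as PSHFT does, follows the strict-series rule: the system survives only if every bearing and gear in the main load path survives. The sketch below shows that combination and solves for the life at 90% reliability (an L10-style life) by bisection; the component parameters are hypothetical, not PSHFT data.

```python
import math

def system_reliability(t, components):
    """Strict series system of independent Weibull components:
    R_sys(t) = prod_i exp(-(t/eta_i)^beta_i) = exp(-sum_i (t/eta_i)^beta_i).
    Each component is a (beta, eta) pair."""
    return math.exp(-sum((t / eta) ** beta for beta, eta in components))

def life_at_reliability(target, components, lo=1e-6, hi=1e9):
    """Bisection for the time t at which system reliability equals `target`
    (e.g. 0.90 for an L10 life); valid because R_sys is strictly decreasing."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if system_reliability(mid, components) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical bearing/gear Weibull parameters (hours), illustrative only
comps = [(1.5, 8000.0), (1.5, 9000.0), (2.0, 12000.0)]
l10 = life_at_reliability(0.90, comps)
```

Because the exponents add, the system life is always shorter than the life of its weakest component, which is why redundant or oversized elements raise transmission life disproportionately.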
Fast Computation and Assessment Methods in Power System Analysis
NASA Astrophysics Data System (ADS)
Nagata, Masaki
Power system analysis is essential for efficient and reliable power system operation and control. Recently, online security assessment systems have become increasingly important as more efficient use of power networks is required. In this article, fast power system analysis techniques such as contingency screening, parallel processing and intelligent systems application are briefly surveyed from the viewpoint of their application to online dynamic security assessment.
Reliability analysis of interdependent lattices
NASA Astrophysics Data System (ADS)
Limiao, Zhang; Daqing, Li; Pengju, Qin; Bowen, Fu; Yinan, Jiang; Zio, Enrico; Rui, Kang
2016-06-01
Network reliability analysis has drawn much attention recently due to the risks of catastrophic damage in networked infrastructures. These infrastructures are dependent on each other as a result of various interactions. However, most of the reliability analyses of these interdependent networks do not consider spatial constraints, which are found important for robustness of infrastructures including power grid and transport systems. Here we study the reliability properties of interdependent lattices with different ranges of spatial constraints. Our study shows that interdependent lattices with strong spatial constraints are more resilient than interdependent Erdős-Rényi networks. There exists an intermediate range of spatial constraints, at which the interdependent lattices have minimal resilience.
Reliability and Probabilistic Risk Assessment - How They Play Together
NASA Technical Reports Server (NTRS)
Safie, Fayssal; Stutts, Richard; Huang, Zhaofeng
2015-01-01
Since the Space Shuttle Challenger accident in 1986, NASA has extensively used probabilistic analysis methods to assess, understand, and communicate the risk of space launch vehicles. Probabilistic Risk Assessment (PRA), used in the nuclear industry, is one of the probabilistic analysis methods NASA utilizes to assess Loss of Mission (LOM) and Loss of Crew (LOC) risk for launch vehicles. PRA is a system scenario based risk assessment that uses a combination of fault trees, event trees, event sequence diagrams, and probability distributions to analyze the risk of a system, a process, or an activity. It is a process designed to answer three basic questions: 1) what can go wrong that would lead to loss or degraded performance (i.e., scenarios involving undesired consequences of interest), 2) how likely is it (probabilities), and 3) what is the severity of the degradation (consequences). Since the Challenger accident, PRA has been used in supporting decisions regarding safety upgrades for launch vehicles. Another area that was given a lot of emphasis at NASA after the Challenger accident is reliability engineering. Reliability engineering has been a critical design function at NASA since the early Apollo days. However, after the Challenger accident, quantitative reliability analysis and reliability predictions were given more scrutiny because of their importance in understanding failure mechanisms and quantifying the probability of failure, which are key elements in resolving technical issues, performing design trades, and implementing design improvements. Although PRA and reliability are both probabilistic in nature and, in some cases, use the same tools, they are two different activities. Specifically, reliability engineering is a broad design discipline that deals with loss of function and helps in understanding failure mechanisms and improving component and system design.
PRA is a system scenario based risk assessment process intended to assess the risk scenarios that could lead to a major/top undesirable system event, and to identify those scenarios that are high-risk drivers. PRA output is critical to support risk informed decisions concerning system design. This paper describes the PRA process and the reliability engineering discipline in detail. It discusses their differences and similarities and how they work together as complementary analyses to support the design and risk assessment processes. Lessons learned, applications, and case studies in both areas are also discussed in the paper to demonstrate and explain these differences and similarities.
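The fault-tree building block shared by PRA and reliability engineering can be sketched as a recursive evaluation of AND/OR gates over independent basic events. The tree structure and probabilities below are hypothetical, chosen only to illustrate the computation.

```python
def gate_prob(node, basic):
    """Top-event probability of a fault tree with independent basic events.
    A node is either a basic-event name (str) or a tuple
    ('AND' | 'OR', [children])."""
    if isinstance(node, str):
        return basic[node]
    op, children = node
    ps = [gate_prob(c, basic) for c in children]
    out = 1.0
    if op == "AND":
        for p in ps:          # AND of independent events: product of p_i
            out *= p
        return out
    for p in ps:              # OR of independent events: 1 - prod(1 - p_i)
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical tree: loss of control if both redundant sensors fail, or the
# controller itself fails (illustrative probabilities only)
tree = ("OR", [("AND", ["sensor_a_fails", "sensor_b_fails"]), "controller_fails"])
basic = {"sensor_a_fails": 0.01, "sensor_b_fails": 0.02, "controller_fails": 0.005}

p_top = gate_prob(tree, basic)   # 1 - (1 - 0.01*0.02) * (1 - 0.005)
```

The example also shows why redundancy dominates the design trade: the AND-ed sensor pair contributes only 2e-4 to the top event, leaving the single-string controller as the risk driver.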
A Comparison of Laser and Video Techniques for Determining Displacement and Velocity during Running
ERIC Educational Resources Information Center
Harrison, Andrew J.; Jensen, Randall L.; Donoghue, Orna
2005-01-01
The reliability of a laser system was compared with the reliability of a video-based kinematic analysis in measuring displacement and velocity during running. Validity and reliability of the laser on static measures was also assessed at distances between 10 m and 70 m by evaluating the coefficient of variation and intraclass correlation…
System statistical reliability model and analysis
NASA Technical Reports Server (NTRS)
Lekach, V. S.; Rood, H.
1973-01-01
A digital computer code was developed to simulate the time-dependent behavior of the 5-kwe reactor thermoelectric system. The code was used to determine lifetime sensitivity coefficients for a number of system design parameters, such as thermoelectric module efficiency and degradation rate, radiator absorptivity and emissivity, fuel element barrier defect constant, beginning-of-life reactivity, etc. A probability distribution (mean and standard deviation) was estimated for each of these design parameters. Then, error analysis was used to obtain a probability distribution for the system lifetime (mean = 7.7 years, standard deviation = 1.1 years). From this, the probability that the system will achieve the design goal of 5 years lifetime is 0.993. This value represents an estimate of the degradation reliability of the system.
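The quoted degradation reliability follows directly from the normal lifetime distribution given in the abstract (mean 7.7 years, standard deviation 1.1 years) evaluated against the 5-year goal; the short check below reproduces it with the error function.

```python
import math

def prob_exceeds(goal, mean, std):
    """P(lifetime > goal) for a normally distributed lifetime,
    via the standard-normal CDF expressed with math.erf."""
    z = (mean - goal) / std
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Figures quoted in the abstract: mean 7.7 y, std 1.1 y, 5-year design goal
p = prob_exceeds(5.0, 7.7, 1.1)   # ~0.993, matching the reported value
```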
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bucknor, Matthew; Grabaskas, David; Brunett, Acacia J.
We report that many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Lastly, although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event.
Validation of highly reliable, real-time knowledge-based systems
NASA Technical Reports Server (NTRS)
Johnson, Sally C.
1988-01-01
Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.
Cutolo, Maurizio; Vanhaecke, Amber; Ruaro, Barbara; Deschepper, Ellen; Ickinger, Claudia; Melsens, Karin; Piette, Yves; Trombetta, Amelia Chiara; De Keyser, Filip; Smith, Vanessa
2018-06-06
A reliable tool to evaluate flow is paramount in systemic sclerosis (SSc). We describe herein a systematic literature review on the reliability of laser speckle contrast analysis (LASCA) for measuring peripheral blood perfusion (PBP) in SSc, and report an additional pilot study investigating the intra- and inter-rater reliability of LASCA. A systematic search was performed in 3 electronic databases, according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. In the pilot study, 30 SSc patients and 30 healthy subjects (HS) underwent LASCA assessment. Intra-rater reliability was assessed by having a first anchor rater perform the measurements at 2 time points, and inter-rater reliability by having the anchor rater and a team of second raters perform the measurements in 15 SSc patients and 30 HS. The measurements were repeated with a second anchor rater in the other 15 SSc patients, as external validation. Only 1 of the 14 records of interest identified through the systematic search was included in the final analysis. In the additional pilot study, the intra-class correlation coefficient (ICC) for intra-rater reliability of the first anchor rater was 0.95 in SSc and 0.93 in HS, and the ICC for inter-rater reliability was 0.97 in SSc and 0.93 in HS. Intra- and inter-rater reliability of the second anchor rater was 0.78 and 0.87. The identified literature regarding the reliability of LASCA measurements reports good to excellent inter-rater agreement. This pilot study confirmed the reliability of LASCA measurements, with good to excellent inter-rater agreement, and additionally found good to excellent intra-rater reliability. Furthermore, similar results were found in the external validation. Copyright © 2018. Published by Elsevier B.V.
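The intra-class correlation coefficients reported above can be computed from a two-way ANOVA decomposition of the rating table. The sketch below implements the single-measure, absolute-agreement ICC(2,1) formula on hypothetical perfusion scores (the study does not publish its raw data, and the exact ICC variant it used is an assumption here).

```python
def icc_2_1(ratings):
    """Two-way random-effects, single-measure, absolute-agreement ICC(2,1).
    ratings: list of subjects, each a list with one score per rater."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]                      # per subject
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)] # per rater
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical perfusion scores from two raters over five subjects
scores = [[10.1, 10.3], [8.0, 8.4], [12.5, 12.2], [9.7, 9.9], [11.0, 11.4]]
icc = icc_2_1(scores)
```

Perfect agreement yields an ICC of exactly 1, and small systematic rater offsets push the absolute-agreement coefficient below the consistency variant, which is why the variant chosen matters when comparing studies.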
Integration of RAMS in LCC analysis for linear transport infrastructures. A case study for railways.
NASA Astrophysics Data System (ADS)
Calle-Cordón, Álvaro; Jiménez-Redondo, Noemi; Morales-Gámiz, F. J.; García-Villena, F. A.; Garmabaki, Amir H. S.; Odelius, Johan
2017-09-01
Life-cycle cost (LCC) analysis is an economic technique used to assess the total costs associated with the lifetime of a system in order to support decision making in long-term strategic planning. For complex systems, such as railway and road infrastructures, the cost of maintenance plays an important role in the LCC analysis. Costs associated with maintenance interventions can be more reliably estimated by integrating the probabilistic nature of the failures associated with these interventions into the LCC models. Reliability, Availability, Maintainability and Safety (RAMS) parameters describe the maintenance needs of an asset in a quantitative way, using probabilistic information extracted from registered maintenance activities. Therefore, the integration of RAMS into the LCC analysis allows obtaining reliable predictions of system maintenance costs and of the dependencies of these costs on specific cost drivers through sensitivity analyses. This paper presents an innovative approach for a combined RAMS & LCC methodology for railway and road transport infrastructures being developed under the ongoing H2020 project INFRALERT. Such RAMS & LCC analysis provides relevant probabilistic information to be used for condition- and risk-based planning of maintenance activities as well as for decision support in long-term strategic investment planning.
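The coupling of RAMS parameters into an LCC model can be sketched at its simplest: a failure rate (from an MTBF estimate) fixes the expected number of corrective interventions per year, and each year's expected cost is discounted to present value. This is a generic sketch, not INFRALERT's methodology; all figures are hypothetical.

```python
def discounted_maintenance_cost(mtbf_h, hours_per_year, years, cost_per_failure, rate):
    """Expected net present cost of corrective maintenance over the life cycle.
    Expected failures per year = operating hours / MTBF; each year's expected
    cost is discounted at annual rate `rate`."""
    failures_per_year = hours_per_year / mtbf_h
    return sum(failures_per_year * cost_per_failure / (1.0 + rate) ** y
               for y in range(1, years + 1))

# Hypothetical track-asset figures (illustrative only)
npv = discounted_maintenance_cost(mtbf_h=4000.0, hours_per_year=8760.0,
                                  years=30, cost_per_failure=12000.0, rate=0.04)
```

A sensitivity analysis in the sense of the paper then reduces to re-evaluating the NPV while sweeping a single driver, such as the MTBF or the discount rate.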
A Simple and Reliable Method of Design for Standalone Photovoltaic Systems
NASA Astrophysics Data System (ADS)
Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.
2017-06-01
Standalone photovoltaic (SAPV) systems are seen as a promising method of electrifying areas of the developing world that lack power grid infrastructure. Proliferation of these systems requires a design procedure that is simple, reliable and exhibits good performance over its lifetime. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, array size with energy storage. After arriving at the different array sizes (areas), performance curves are obtained for optimal design of the SAPV system with a high degree of reliability in terms of autonomy at a specified value of loss of load probability (LOLP). Based on the array-to-load ratio (ALR) and levelized energy cost (LEC) through life cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data and is more reliable when compared with a conventional design using monthly average daily load and insolation.
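The LOLP figure of merit used above can be estimated with a daily energy-balance simulation: generation charges a battery, the load discharges it, and a day on which the battery cannot cover the load counts against reliability. This is a generic sizing sketch, not the paper's empirical formulae; the insolation profile, efficiency, and system sizes are hypothetical.

```python
import math

def lolp(array_m2, insolation_kwh_m2, load_kwh, batt_kwh, eff=0.12):
    """Daily energy-balance simulation of a standalone PV system.
    Returns the loss-of-load probability over the insolation series."""
    soc, losses = batt_kwh, 0          # start with a full battery
    for day_insol in insolation_kwh_m2:
        soc += array_m2 * day_insol * eff - load_kwh
        soc = min(soc, batt_kwh)       # battery cannot overcharge
        if soc < 0.0:                  # unserved energy: count a loss-of-load day
            losses += 1
            soc = 0.0
    return losses / len(insolation_kwh_m2)

# Hypothetical year: 5 kWh/m2/day at the summer peak, 2.5 at the winter trough
insol = [3.75 + 1.25 * math.cos(2.0 * math.pi * d / 365.0) for d in range(365)]
load = 3.0                             # kWh/day

small = lolp(6.0, insol, load, batt_kwh=9.0)    # undersized array
large = lolp(10.0, insol, load, batt_kwh=9.0)   # array covers the winter trough
```

Sweeping the array area against the resulting LOLP reproduces the kind of performance curve the paper uses to pick a design at a specified LOLP target.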
Johnson Space Center's Risk and Reliability Analysis Group 2008 Annual Report
NASA Technical Reports Server (NTRS)
Valentine, Mark; Boyer, Roger; Cross, Bob; Hamlin, Teri; Roelant, Henk; Stewart, Mike; Bigler, Mark; Winter, Scott; Reistle, Bruce; Heydorn, Dick
2009-01-01
The Johnson Space Center (JSC) Safety & Mission Assurance (S&MA) Directorate's Risk and Reliability Analysis Group provides both mathematical and engineering analysis expertise in the areas of Probabilistic Risk Assessment (PRA), Reliability and Maintainability (R&M) analysis, and data collection and analysis. The fundamental goal of this group is to provide National Aeronautics and Space Administration (NASA) decision makers with the necessary information to make informed decisions when evaluating personnel, flight hardware, and public safety concerns associated with current operating systems as well as with any future systems. The Analysis Group includes a staff of statistical and reliability experts with valuable backgrounds in the statistical, reliability, and engineering fields. This group includes JSC S&MA Analysis Branch personnel as well as S&MA support services contractors, such as Science Applications International Corporation (SAIC) and SoHaR. The Analysis Group's experience base includes nuclear power (both commercial and navy), manufacturing, Department of Defense, chemical, and shipping industries, as well as significant aerospace experience specifically in the Shuttle, International Space Station (ISS), and Constellation Programs. The Analysis Group partners with project and program offices, other NASA centers, NASA contractors, and universities to provide additional resources or information to the group when performing various analysis tasks. The JSC S&MA Analysis Group is recognized as a leader in risk and reliability analysis within the NASA community. Therefore, the Analysis Group is in high demand to help the Space Shuttle Program (SSP) continue to fly safely, assist in designing the next generation spacecraft for the Constellation Program (CxP), and promote advanced analytical techniques.
The Analysis Section's tasks include teaching classes and instituting personnel qualification processes to enhance the professional abilities of our analysts as well as performing major probabilistic assessments used to support flight rationale and help establish program requirements. During 2008, the Analysis Group performed more than 70 assessments. Although all these assessments were important, some were instrumental in the decision-making processes for the Shuttle and Constellation Programs. Two of the more significant tasks were the Space Transportation System (STS)-122 Low Level Cutoff PRA for the SSP and the Orion Pad Abort One (PA-1) PRA for the CxP. These two activities, along with the numerous other tasks the Analysis Group performed in 2008, are summarized in this report. This report also highlights several ongoing and upcoming efforts to provide crucial statistical and probabilistic assessments, such as the Extravehicular Activity (EVA) PRA for the Hubble Space Telescope service mission and the first fully integrated PRAs for the CxP's Lunar Sortie and ISS missions.
Electrical service reliability: the customer perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samsa, M.E.; Hub, K.A.; Krohm, G.C.
1978-09-01
Electric-utility-system reliability criteria have traditionally been established as a matter of utility policy or through long-term engineering practice, generally with no supportive customer cost/benefit analysis as justification. This report presents results of an initial study of the customer perspective toward electric-utility-system reliability, based on critical review of over 20 previous and ongoing efforts to quantify the customer's value of reliable electric service. A possible structure of customer classifications is suggested as a reasonable level of disaggregation for further investigation of customer value, and these groups are characterized in terms of their electricity use patterns. The values that customers assign to reliability are discussed in terms of internal and external cost components. A list of options for effecting changes in customer service reliability is set forth, and some of the many policy issues that could alter customer-service reliability are identified.
Lee, Myungmo; Song, Changho; Lee, Kyoungjin; Shin, Doochul; Shin, Seungho
2014-07-14
Treadmill gait analysis is more advantageous than over-ground walking because it allows continuous measurement of the gait parameters. The purpose of this study was to investigate the concurrent validity and the test-retest reliability of the OPTOGait photoelectric cell system against the treadmill-based gait analysis system by assessing spatio-temporal gait parameters. Twenty-six stroke patients and 18 healthy adults were asked to walk on the treadmill at their preferred speed. The concurrent validity was assessed by comparing data obtained from the 2 systems, and the test-retest reliability was determined by comparing data obtained from the 1st and the 2nd session of the OPTOGait system. The concurrent validity, identified by the intra-class correlation coefficients (ICC [2, 1]), coefficients of variation (CVME), and 95% limits of agreement (LOA) for the spatio-temporal gait parameters, was excellent, but it was poor for the temporal parameters expressed as a percentage of the gait cycle. The test-retest reliability of the OPTOGait System, identified by ICC (3, 1), CVME, 95% LOA, standard error of measurement (SEM), and minimum detectable change (MDC95%) for the spatio-temporal gait parameters, was high. These findings indicated that the treadmill-based OPTOGait System had strong concurrent validity and test-retest reliability. This portable system could be useful for clinical assessments.
Lausberg, Hedda; Sloetjes, Han
2016-09-01
As visual media spread to all domains of public and scientific life, nonverbal behavior is taking its place as an important form of communication alongside the written and spoken word. An objective and reliable method of analysis for hand movement behavior and gesture is therefore currently required in various scientific disciplines, including psychology, medicine, linguistics, anthropology, sociology, and computer science. However, no adequate common methodological standards have been developed thus far. Many behavioral gesture-coding systems lack objectivity and reliability, and automated methods that register specific movement parameters often fail to show validity with regard to psychological and social functions. To address these deficits, we have combined two methods, an elaborated behavioral coding system and an annotation tool for video and audio data. The NEUROGES-ELAN system is an effective and user-friendly research tool for the analysis of hand movement behavior, including gesture, self-touch, shifts, and actions. Since its first publication in 2009 in Behavior Research Methods, the tool has been used in interdisciplinary research projects to analyze a total of 467 individuals from different cultures, including subjects with mental disease and brain damage. Partly on the basis of new insights from these studies, the system has been revised methodologically and conceptually. The article presents the revised version of the system, including a detailed study of reliability. The improved reproducibility of the revised version makes NEUROGES-ELAN a suitable system for basic empirical research into the relation between hand movement behavior and gesture and cognitive, emotional, and interactive processes and for the development of automated movement behavior recognition methods.
Reliability analysis based on the losses from failures.
Todinov, M T
2006-04-01
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis, linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected loss given failure is a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed.
For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in the case of a system consisting of blocks arranged in series is achieved by determining, for each block individually, the reliabilities of the components in the block that minimize the sum of the capital costs, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages over cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different numbers of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
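The linear-combination rule for the expected loss given failure described above can be sketched numerically. The failure-mode probabilities and per-mode losses below are invented illustrations, not values from the paper:

```python
# Sketch: expected loss given failure for a component with mutually
# exclusive failure modes, following the linear-combination rule in the
# abstract: E[L | failure] = sum_k p_k * E[L | mode k].

def expected_loss_given_failure(mode_probs, mode_losses):
    """mode_probs  -- conditional probabilities that each failure mode
                      initiated the failure (must sum to 1)
    mode_losses -- expected loss associated with each failure mode"""
    if abs(sum(mode_probs) - 1.0) > 1e-9:
        raise ValueError("mode probabilities must sum to 1")
    return sum(p * c for p, c in zip(mode_probs, mode_losses))

# Example: three mutually exclusive failure modes (illustrative numbers).
probs = [0.5, 0.3, 0.2]             # conditional mode probabilities
losses = [1000.0, 5000.0, 20000.0]  # expected loss per mode (cost units)
print(expected_loss_given_failure(probs, losses))
```

Reducing a series branch of mutually exclusive failure modes to a single equivalent component, as the abstract describes, amounts to carrying this weighted loss alongside the equivalent hazard rate.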
Multi-mode reliability-based design of horizontal curves.
Essa, Mohamed; Sayed, Tarek; Hussein, Mohamed
2016-08-01
Recently, reliability analysis has been advocated as an effective approach to account for uncertainty in the geometric design process and to evaluate the risk associated with a particular design. In this approach, a risk measure (e.g. probability of noncompliance) is calculated to represent the probability that a specific design would not meet standard requirements. The majority of previous applications of reliability analysis in geometric design focused on evaluating the probability of noncompliance for only one mode of noncompliance such as insufficient sight distance. However, in many design situations, more than one mode of noncompliance may be present (e.g. insufficient sight distance and vehicle skidding at horizontal curves). In these situations, utilizing a multi-mode reliability approach that considers more than one failure (noncompliance) mode is required. The main objective of this paper is to demonstrate the application of multi-mode (system) reliability analysis to the design of horizontal curves. The process is demonstrated by a case study of Sea-to-Sky Highway located between Vancouver and Whistler, in southern British Columbia, Canada. Two noncompliance modes were considered: insufficient sight distance and vehicle skidding. The results show the importance of accounting for several noncompliance modes in the reliability model. The system reliability concept could be used in future studies to calibrate the design of various design elements in order to achieve consistent safety levels based on all possible modes of noncompliance. Copyright © 2016 Elsevier Ltd. All rights reserved.
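The multi-mode (system) probability of noncompliance described in the abstract can be sketched with a Monte Carlo estimate, treating the two modes as a series system: the design is noncompliant if either mode occurs. The limit-state margins and their distributions below are invented illustrations, not the paper's actual models:

```python
# Sketch: Monte Carlo estimate of the system probability of noncompliance
# for a horizontal curve with two modes -- insufficient sight distance and
# vehicle skidding. Margin distributions are illustrative assumptions.
import random

random.seed(42)

def p_noncompliance(n=100_000):
    fail = 0
    for _ in range(n):
        # Margin 1: supplied minus required sight distance (m), assumed normal
        m_sight = random.gauss(15.0, 10.0)
        # Margin 2: available minus demanded side friction, assumed normal
        m_skid = random.gauss(0.08, 0.05)
        # Series-system noncompliance: any single mode failing is enough
        if m_sight < 0 or m_skid < 0:
            fail += 1
    return fail / n

print(round(p_noncompliance(), 3))
```

Because the modes are combined with a union, the system probability of noncompliance exceeds either single-mode probability, which is the paper's argument for considering all modes together.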
Reliability of segmental accelerations measured using a new wireless gait analysis system.
Kavanagh, Justin J; Morrison, Steven; James, Daniel A; Barrett, Rod
2006-01-01
The purpose of this study was to determine the inter- and intra-examiner reliability, and stride-to-stride reliability, of an accelerometer-based gait analysis system which measured 3D accelerations of the upper and lower body during self-selected slow, preferred and fast walking speeds. Eight subjects attended two testing sessions in which accelerometers were attached to the head, neck, lower trunk, and right shank. In the initial testing session, two different examiners attached the accelerometers and performed the same testing procedures. A single examiner repeated the procedure in a subsequent testing session. All data were collected using a new wireless gait analysis system, which features near real-time data transmission via a Bluetooth network. Reliability for each testing condition (4 locations, 3 directions, 3 speeds) was quantified using a waveform similarity statistic known as the coefficient of multiple determination (CMD). CMDs ranged from 0.60 to 0.98 across all test conditions and were not significantly different for inter-examiner (0.86), intra-examiner (0.87), and stride-to-stride reliability (0.86). The highest repeatability with respect to location, direction, and walking speed was observed for the shank segment (0.94), the vertical direction (0.91), and the fast walking speed (0.91), respectively. Overall, these results indicate that a high degree of waveform repeatability was obtained using a new gait system under test-retest conditions involving single and dual examiners. Furthermore, differences in acceleration waveform repeatability associated with the reapplication of accelerometers were small in relation to normal motor variability.
Reliability Validation and Improvement Framework
2012-11-01
systems. Steps in that direction include the use of the Architecture Tradeoff Analysis Method® (ATAM®) developed at the Carnegie Mellon…embedded software • cyber-physical systems (CPSs) to indicate that the embedded software interacts with, manages, and controls a physical system [Lee…the use of formal static analysis methods to increase our confidence in system operation beyond testing. However, analysis results
Scaling Impacts in Life Support Architecture and Technology Selection
NASA Technical Reports Server (NTRS)
Lange, Kevin
2016-01-01
For long-duration space missions outside of Earth orbit, reliability considerations will drive higher levels of redundancy and/or on-board spares for life support equipment. Component scaling will be a critical element in minimizing overall launch mass while maintaining an acceptable level of system reliability. Building on an earlier reliability study (AIAA 2012-3491), this paper considers the impact of alternative scaling approaches, including the design of technology assemblies and their individual components to maximum, nominal, survival, or other fractional requirements. The optimal level of life support system closure is evaluated for deep-space missions of varying duration using equivalent system mass (ESM) as the comparative basis. Reliability impacts are included in ESM by estimating the number of component spares required to meet a target system reliability. Common cause failures are included in the analysis. ISS and ISS-derived life support technologies are considered along with selected alternatives. This study focuses on minimizing launch mass, which may be enabling for deep-space missions.
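The spares-for-target-reliability calculation mentioned above can be sketched with a simple Poisson failure model: choose the smallest spare count such that the probability of not exhausting the spares over the mission meets the target. The failure rate, mission duration, and target below are illustrative assumptions, not values from the study:

```python
# Sketch: sizing on-board spares against a target reliability under a
# Poisson failure model. Rate and mission length are assumptions.
import math

def prob_enough_spares(rate_per_year, years, spares):
    """P(failures <= spares) for Poisson(rate * time) failures."""
    lam = rate_per_year * years
    return sum(math.exp(-lam) * lam**k / math.factorial(k)
               for k in range(spares + 1))

def spares_for_target(rate_per_year, years, target):
    """Smallest spare count meeting the target probability."""
    n = 0
    while prob_enough_spares(rate_per_year, years, n) < target:
        n += 1
    return n

# Example: 0.5 failures/year, 3-year deep-space mission, 0.99 target.
print(spares_for_target(0.5, 3.0, 0.99))  # -> 5
```

In an ESM comparison, the spare count found this way feeds back into launch mass, which is how reliability and scaling interact in the trade.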
Reliability Analysis and Modeling of ZigBee Networks
NASA Astrophysics Data System (ADS)
Lin, Cheng-Min
The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low rate wireless personal area network (LR-WPAN). Currently, numerous wireless sensor networks have adapted the ZigBee open standard to develop various services to promote improved communication quality in our daily lives. The problem of system and network reliability in providing stable services has become more important because these services will be stopped if the system and network reliability is unstable. The ZigBee standard has three kinds of networks: star, tree, and mesh. The paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes the reliability and mean time to failure (MTTF) of each layer. Channel resource usage, device role, network topology, and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. In star or tree networks, a series-system model and the reliability block diagram (RBD) technique can be used to solve the reliability problem. However, because the complexity of a mesh network is higher than that of the others, a division technique is applied to overcome the problem. Using this division technique, a mesh network is decomposed into several non-reducible series systems and edge-parallel systems. Hence, the reliability of mesh networks is easily solved using series-parallel systems through our proposed scheme. The numerical results demonstrate that the reliability of mesh networks increases when the number of edges in the parallel systems increases, while the reliability quickly drops when the numbers of edges and nodes increase for all three networks. Greater resource usage is another factor that decreases reliability; in general, lower network reliability results from network complexity, greater resource usage, and complex object relationships.
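The series and parallel reductions underlying the RBD approach described above can be sketched directly. The component reliabilities below are illustrative assumptions, not values from the paper:

```python
# Sketch: series-parallel reliability reduction, the kind of computation
# the RBD technique performs for star/tree and (after division) mesh
# networks. All reliability values are illustrative assumptions.

def series(reliabilities):
    """All elements must work: R = product of R_i."""
    r = 1.0
    for x in reliabilities:
        r *= x
    return r

def parallel(reliabilities):
    """At least one element must work: R = 1 - product of (1 - R_i)."""
    q = 1.0
    for x in reliabilities:
        q *= (1.0 - x)
    return 1.0 - q

# Star/tree case: coordinator -> router -> end device in series.
r_star = series([0.99, 0.98, 0.97])

# Mesh case: two redundant routes in parallel, in series with a node.
r_mesh = series([0.99, parallel([0.95, 0.95])])
print(round(r_star, 4), round(r_mesh, 4))
```

The mesh example shows the abstract's observation: adding edges to a parallel group raises reliability, while lengthening a series chain lowers it.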
NASA Astrophysics Data System (ADS)
Hanish Nithin, Anu; Omenzetter, Piotr
2017-04-01
Optimization of the life-cycle costs and reliability of offshore wind turbines (OWTs) is an area of immense interest due to the widespread increase in wind power generation across the world. Most of the existing studies have used structural reliability and the Bayesian pre-posterior analysis for optimization. This paper proposes an extension to the previous approaches in a framework for probabilistic optimization of the total life-cycle costs and reliability of OWTs by combining the elements of structural reliability/risk analysis (SRA) and the Bayesian pre-posterior analysis with optimization through a genetic algorithm (GA). The SRA techniques are adopted to compute the probabilities of damage occurrence and failure associated with the deterioration model. The probabilities are used in the decision tree and are updated using the Bayesian analysis. The output of this framework would determine the optimal structural health monitoring and maintenance schedules to be implemented during the life span of OWTs while maintaining a trade-off between the life-cycle costs and the risk of structural failure. Numerical illustrations with a generic deterioration model for one monitoring exercise in the life cycle of a system are presented. Two case scenarios, namely whether to build initially an expensive and robust structure or a cheaper but more quickly deteriorating one, and whether to adopt an expensive monitoring system, are presented to aid in the decision-making process.
Fault trees for decision making in systems analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambert, Howard E.
1975-10-09
The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure.
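The cut-set quantification that FTA codes such as IMPORTANCE perform can be sketched with the first-order (rare-event) approximation: the top-event probability is approximated by the sum of the minimal cut-set probabilities. The basic events and cut sets below are invented illustrations:

```python
# Sketch: top-event probability from minimal cut sets via the rare-event
# approximation. Basic-event probabilities and cut sets are assumptions.

def cut_set_prob(cut_set, basic_probs):
    """A cut set occurs only if all its basic events occur (AND gate)."""
    p = 1.0
    for event in cut_set:
        p *= basic_probs[event]
    return p

def top_event_prob(cut_sets, basic_probs):
    """First-order (rare-event) approximation: sum of cut-set probabilities."""
    return sum(cut_set_prob(cs, basic_probs) for cs in cut_sets)

basic = {"pump_fails": 1e-3, "valve_fails": 5e-4, "power_loss": 1e-4}
cuts = [{"pump_fails", "valve_fails"},   # both must fail together
        {"power_loss"}]                  # single-event cut set
print(round(top_event_prob(cuts, basic), 7))
```

The single-event cut set dominates the result here, which is the kind of insight importance rankings make systematic: they identify which basic events and cut sets contribute most to the top event.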
Superior model for fault tolerance computation in designing nano-sized circuit systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, N. S. S., E-mail: narinderjit@petronas.com.my; Muthuvalu, M. S., E-mail: msmuthuvalu@gmail.com; Asirvadam, V. S., E-mail: vijanth-sagayan@petronas.com.my
2014-10-24
As CMOS technology scales nano-metrically, reliability turns out to be a decisive subject in the design methodology of nano-sized circuit systems. As a result, several computational approaches have been developed to compute and evaluate the reliability of desired nano-electronic circuits. The process of computing reliability becomes very troublesome and time consuming as the computational complexity builds up with the desired circuit size. Therefore, being able to measure reliability instantly and superiorly is fast becoming necessary in designing modern logic integrated circuits. For this purpose, the paper firstly looks into the development of an automated reliability evaluation tool based on the generalization of the Probabilistic Gate Model (PGM) and Boolean Difference-based Error Calculator (BDEC) models. The Matlab-based tool allows users to significantly speed up the task of reliability analysis for a very large number of nano-electronic circuits. Secondly, by using the developed automated tool, the paper explores a comparative study involving reliability computation and evaluation by the PGM and BDEC models for different implementations of same-functionality circuits. Based on the reliability analysis, BDEC gives exact and transparent reliability measures, but as the complexity of the same-functionality circuits with respect to gate error increases, the reliability measure by BDEC tends to be lower than the reliability measure by PGM. The lower reliability measure by BDEC is explained in this paper using the distribution of different signal input patterns over time for same-functionality circuits. Simulation results conclude that the reliability measure by BDEC depends not only on faulty gates but also on circuit topology, the probability of input signals being one or zero, and the probability of error on signal lines.
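A PGM-style signal-correctness computation can be sketched for the simplest case, an inverter chain in which each gate flips its output with error probability eps, so correctness propagates as p_next = (1 - eps)·p + eps·(1 - p). The gate error probability and chain length are illustrative assumptions; this is a generic probabilistic gate model sketch, not the paper's Matlab tool:

```python
# Sketch: probabilistic-gate-model-style reliability of an inverter chain.
# Each gate produces the wrong output with probability eps, so the
# probability the signal is still correct after one gate is
#   p_next = (1 - eps) * p + eps * (1 - p).
# eps and the chain length are illustrative assumptions.

def chain_reliability(p_in, eps, n_gates):
    p = p_in
    for _ in range(n_gates):
        p = (1.0 - eps) * p + eps * (1.0 - p)
    return p

# A perfect input signal degraded through 10 gates with eps = 0.01.
print(round(chain_reliability(1.0, 0.01, 10), 4))  # 0.9085
```

The closed form p_n = 0.5 + (p_0 - 0.5)(1 - 2·eps)^n shows why reliability decays toward 0.5 (pure noise) as circuit depth grows, matching the abstract's point that computation cost and reliability both degrade with circuit size.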
Computer calculation of device, circuit, equipment, and system reliability.
NASA Technical Reports Server (NTRS)
Crosby, D. R.
1972-01-01
A grouping into four classes is proposed for all reliability computations that are related to electronic equipment. Examples are presented of reliability computations in three of these four classes. Each of the three specific reliability tasks described was originally undertaken to satisfy an engineering need for reliability data. The form and interpretation of the print-out of the specific reliability computations is presented. The justification for the costs of these computations is indicated. The skills of the personnel used to conduct the analysis, the interfaces between the personnel, and the timing of the projects are discussed.
Development of the Systems Thinking Scale for Adolescent Behavior Change.
Moore, Shirley M; Komton, Vilailert; Adegbite-Adeniyi, Clara; Dolansky, Mary A; Hardin, Heather K; Borawski, Elaine A
2018-03-01
This report describes the development and psychometric testing of the Systems Thinking Scale for Adolescent Behavior Change (STS-AB). Following item development, initial assessments of understandability and stability of the STS-AB were conducted in a sample of nine adolescents enrolled in a weight management program. Exploratory factor analysis of the 16-item STS-AB and internal consistency assessments were then done with 359 adolescents enrolled in a weight management program. Test-retest reliability of the STS-AB was .71, p = .03; internal consistency reliability was .87. Factor analysis of the 16-item STS-AB indicated a one-factor solution with good factor loadings, ranging from .40 to .67. Evidence of construct validity was supported by significant correlations with established measures of variables associated with health behavior change. We provide beginning evidence of the reliability and validity of the STS-AB to measure systems thinking for health behavior change in young adolescents.
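The internal-consistency statistic reported above (a reliability of .87 for the 16-item STS-AB) is typically Cronbach's alpha, which can be computed as follows. The toy response matrix below is invented for illustration; it is not the study's data:

```python
# Sketch: Cronbach's alpha for internal-consistency reliability.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
# The response matrix is an invented illustration.

def cronbach_alpha(items):
    """items: list of item-score lists, one inner list per item,
    with one score per respondent in each inner list."""
    k = len(items)
    n = len(items[0])
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))

# Three items scored by four respondents (rows = items, cols = respondents).
scores = [[3, 4, 3, 5],
          [2, 4, 3, 5],
          [3, 5, 4, 5]]
print(round(cronbach_alpha(scores), 2))
```

Values above roughly .80, like the .87 reported for the STS-AB, are conventionally taken as good internal consistency for a research scale.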
Development of the Systems Thinking Scale for Adolescent Behavior Change
Moore, Shirley M.; Komton, Vilailert; Adegbite-Adeniyi, Clara; Dolansky, Mary A.; Hardin, Heather K.; Borawski, Elaine A.
2017-01-01
This report describes the development and psychometric testing of the Systems Thinking Scale for Adolescent Behavior Change (STS-AB). Following item development, initial assessments of understandability and stability of the STS-AB were conducted in a sample of nine adolescents enrolled in a weight management program. Exploratory factor analysis of the 16-item STS-AB and internal consistency assessments were then done with 359 adolescents enrolled in a weight management program. Test–retest reliability of the STS-AB was .71, p = .03; internal consistency reliability was .87. Factor analysis of the 16-item STS-AB indicated a one-factor solution with good factor loadings, ranging from .40 to .67. Evidence of construct validity was supported by significant correlations with established measures of variables associated with health behavior change. We provide beginning evidence of the reliability and validity of the STS-AB to measure systems thinking for health behavior change in young adolescents. PMID:28303755
Accelerated Testing and Analysis | Photovoltaic Research | NREL
NASA Astrophysics Data System (ADS)
Iskandar, Ismed; Satria Gondokaryono, Yudi
2016-02-01
In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes depending upon the age and the environment of the system and its components. Another problem in reliability theory is that of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation analyses allow us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation analyses to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the standard deviation in the opposite direction. Given perfect information on the prior distribution, the estimation methods of the Bayesian analyses are better than those of the maximum likelihood. The sensitivity analyses show some amount of sensitivity over shifts of the prior locations.
They also show the robustness of the Bayesian analysis within the range between the true value and the maximum likelihood estimate.
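The Bayesian-versus-maximum-likelihood comparison in the abstract can be sketched in a simplified setting: exponential lifetimes (a Weibull with shape parameter 1), for which a Gamma prior on the failure rate is conjugate and the posterior mean has a closed form. All data and prior parameters below are illustrative assumptions, not the paper's simulation:

```python
# Sketch: MLE vs Bayesian (conjugate Gamma prior) estimation of a failure
# rate from exponential lifetimes -- a shape-1 special case of the Weibull
# setting in the abstract. Data and prior are illustrative assumptions.

def mle_rate(lifetimes):
    """Maximum-likelihood failure rate: n / total time on test."""
    return len(lifetimes) / sum(lifetimes)

def bayes_rate(lifetimes, prior_a, prior_b):
    """Posterior mean of the rate under a Gamma(a, b) prior:
    (a + n) / (b + total time on test)."""
    return (prior_a + len(lifetimes)) / (prior_b + sum(lifetimes))

data = [1.2, 0.7, 2.5, 1.6]                  # observed lifetimes (years)
print(round(mle_rate(data), 3))              # prints 0.667
print(round(bayes_rate(data, 2.0, 4.0), 3))  # prints 0.6
```

With few observations the prior pulls the Bayesian estimate away from the MLE, which mirrors the abstract's point that Bayesian methods help when failure data are quantitatively inadequate.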
Analysis on Sealing Reliability of Bolted Joint Ball Head Component of Satellite Propulsion System
NASA Astrophysics Data System (ADS)
Guo, Tao; Fan, Yougao; Gao, Feng; Gu, Shixin; Wang, Wei
2018-01-01
The propulsion system is one of the important subsystems of a satellite, and its performance directly affects the service life, attitude control, and reliability of the satellite. The paper analyzes the sealing principle of the bolted joint ball head component of the satellite propulsion system and discusses three aspects: the compatibility of anhydrous hydrazine with the bolted joint ball head component, the influence of the ground environment on the sealing performance of the bolted joint ball heads, and material failure caused by the environment. The analysis shows that the sealing reliability of the bolted joint ball head component is good and that the influence of the above three aspects on the sealing of the component can be ignored.
Reliability, Safety and Error Recovery for Advanced Control Software
NASA Technical Reports Server (NTRS)
Malin, Jane T.
2003-01-01
For long-duration automated operation of regenerative life support systems in space environments, there is a need for advanced integration and control systems that are significantly more reliable and safe, and that support error recovery and minimization of operational failures. This presentation outlines some challenges of hazardous space environments and complex system interactions that can lead to system accidents. It discusses approaches to hazard analysis and error recovery for control software and challenges of supporting effective intervention by safety software and the crew.
Consistency Analysis and Data Consultation of Gas System of Gas-Electricity Network of Latvia
NASA Astrophysics Data System (ADS)
Zemite, L.; Kutjuns, A.; Bode, I.; Kunickis, M.; Zeltins, N.
2018-02-01
In the present research, the main critical points of gas transmission and storage system of Latvia have been determined to ensure secure and reliable gas supply among the Baltic States to fulfil the core objectives of the EU energy policies. Technical data of critical points of the gas transmission and storage system of Latvia have been collected and analysed with the SWOT method and solutions have been provided to increase the reliability of the regional natural gas system.
Recent advances in computational structural reliability analysis methods
NASA Astrophysics Data System (ADS)
Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.
1993-10-01
The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
Recent advances in computational structural reliability analysis methods
NASA Technical Reports Server (NTRS)
Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.
1993-01-01
Inspection planning development: An evolutionary approach using reliability engineering as a tool
NASA Technical Reports Server (NTRS)
Graf, David A.; Huang, Zhaofeng
1994-01-01
This paper proposes an evolutionary approach to inspection planning which introduces various reliability engineering tools into the process and assesses system trade-offs among reliability, engineering requirements, manufacturing capability and inspection cost to establish an optimal inspection plan. The examples presented in the paper illustrate some advantages and benefits of the new approach. Through the analysis, reliability and engineering impacts due to manufacturing process capability and inspection uncertainty are clearly understood; the most cost-effective and efficient inspection plan can be established and the associated risks are well controlled; some inspection reductions and relaxations are well justified; and design feedback and changes may be initiated from the analysis conclusions to further enhance reliability and reduce cost. The approach is particularly promising as global competition and customer expectations for quality improvement are rapidly increasing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Divan, Deepak; Brumsickle, William; Eto, Joseph
2003-04-01
This report describes a new approach for collecting information on power quality and reliability and making it available in the public domain. Making this information readily available in a form that is meaningful to electricity consumers is necessary for enabling more informed private and public decisions regarding electricity reliability. The system dramatically reduces the cost (and expertise) needed for customers to obtain information on the most significant power quality events, called voltage sags and interruptions. The system also offers widespread access to information on power quality collected from multiple sites and the potential for capturing information on the impacts of power quality problems, together enabling a wide variety of analysis and benchmarking to improve system reliability. Six case studies demonstrate selected functionality and capabilities of the system, including: Linking measured power quality events to process interruption and downtime; Demonstrating the ability to correlate events recorded by multiple monitors to narrow and confirm the causes of power quality events; and Benchmarking power quality and reliability on a firm and regional basis.
Recent developments of the NESSUS probabilistic structural analysis computer program
NASA Technical Reports Server (NTRS)
Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.
1992-01-01
The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability and risk analysis of structures considering cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.
Reliability analysis of the F-8 digital fly-by-wire system
NASA Technical Reports Server (NTRS)
Brock, L. D.; Goodman, H. A.
1981-01-01
The F-8 Digital Fly-by-Wire (DFBW) flight test program, intended to provide the technology for advanced control systems that give aircraft enhanced performance and operational capability, is addressed. A detailed analysis of the experimental system was performed to estimate the probabilities of two significant safety-critical events: (1) loss of the primary flight control function, causing reversion to the analog bypass system; and (2) loss of the aircraft due to failure of the electronic flight control system. The analysis covers appraisal of risks due to random equipment failure, generic faults in the design of the system or its software, and induced failure due to external events. A unique diagrammatic technique was developed which details the combinatorial reliability equations for the entire system, promotes understanding of system failure characteristics, and identifies the most likely failure modes. The technique provides a systematic method of applying basic probability equations and is augmented by a computer program, written in a modular fashion, that duplicates the structure of these equations.
NASA Technical Reports Server (NTRS)
Oswald, Fred B.; Savage, Michael; Zaretsky, Erwin V.
2015-01-01
The U.S. Space Shuttle fleet was originally intended to have a life of 100 flights for each vehicle, lasting over a 10-year period, with minimal scheduled maintenance or inspection. The first space shuttle flight was that of the Space Shuttle Columbia (OV-102), launched April 12, 1981. The disaster that destroyed Columbia occurred on its 28th flight, February 1, 2003, nearly 22 years after its first launch. In order to minimize risk of losing another Space Shuttle, a probabilistic life and reliability analysis was conducted for the Space Shuttle rudder/speed brake actuators to determine the number of flights the actuators could sustain. A life and reliability assessment of the actuator gears was performed in two stages: a contact stress fatigue model and a gear tooth bending fatigue model. For the contact stress analysis, the Lundberg-Palmgren bearing life theory was expanded to include gear-surface pitting for the actuator as a system. The mission spectrum of the Space Shuttle rudder/speed brake actuator was combined into equivalent effective hinge moment loads including an actuator input preload for the contact stress fatigue and tooth bending fatigue models. Gear system reliabilities are reported for both models and their combination. Reliability of the actuator bearings was analyzed separately, based on data provided by the actuator manufacturer. As a result of the analysis, the reliability of one half of a single actuator was calculated to be 98.6 percent for 12 flights. Accordingly, each actuator was subsequently limited to 12 flights before removal from service in the Space Shuttle.
The Use of Object-Oriented Analysis Methods in Surety Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craft, Richard L.; Funkhouser, Donald R.; Wyss, Gregory D.
1999-05-01
Object-oriented analysis methods have been used in the computer science arena for a number of years to model the behavior of computer-based systems. This report documents how such methods can be applied to surety analysis. By embodying the causality and behavior of a system in a common object-oriented analysis model, surety analysts can make the assumptions that underlie their models explicit and thus better communicate with system designers. Furthermore, given minor extensions to traditional object-oriented analysis methods, it is possible to automatically derive a wide variety of traditional risk and reliability analysis methods from a single common object model. Automatic model extraction helps ensure consistency among analyses and enables the surety analyst to examine a system from a wider variety of viewpoints in a shorter period of time. Thus it provides a deeper understanding of a system's behaviors and surety requirements. This report documents the underlying philosophy behind the common object model representation, the methods by which such common object models can be constructed, and the rules required to interrogate the common object model for derivation of traditional risk and reliability analysis models. The methodology is demonstrated in an extensive example problem.
Iwata, Shintaro; Uehara, Kosuke; Ogura, Koichi; Akiyama, Toru; Shinoda, Yusuke; Yonemoto, Tsukasa; Kawai, Akira
2016-09-01
The Musculoskeletal Tumor Society (MSTS) scoring system is a widely used functional evaluation tool for patients treated for musculoskeletal tumors. Although the MSTS scoring system has been validated in English and Brazilian Portuguese, a Japanese version of the MSTS scoring system has not yet been validated. We sought to determine whether a Japanese-language translation of the MSTS scoring system for the lower extremity had (1) sufficient reliability and internal consistency, (2) adequate construct validity, and (3) reasonable criterion validity compared with the Toronto Extremity Salvage Score (TESS) and SF-36 using psychometric analysis. The Japanese version of the MSTS scoring system was developed using accepted guidelines, which included translation of the English version of the MSTS into Japanese by five native Japanese bilingual musculoskeletal oncology surgeons and integrated into one document. One hundred patients with a diagnosis of intermediate or malignant bone or soft tissue tumors located in the lower extremity and who had undergone tumor resection with or without reconstruction or amputation participated in this study. Reliability was evaluated by test-retest analysis, and internal consistency was established by Cronbach's alpha coefficient. Construct validity was evaluated using the principal factor analysis and Akaike information criterion network. Criterion validity was evaluated by comparing the MSTS scoring system with the TESS and SF-36. Test-retest analysis showed a high intraclass correlation coefficient (0.92; 95% CI, 0.88-0.95), indicating high reliability of the Japanese version of the MSTS scoring system, although a considerable ceiling effect was observed, with 23 patients (23%) given the maximum score. Cronbach's alpha coefficient was 0.87 (95% CI, 0.82-0.90), suggesting a high level of internal consistency. 
Factor analysis revealed that all items had high loading values and communalities; we identified a central role for the items "walking" and "gait" according to the Akaike information criterion network. The total MSTS score was correlated with that of the TESS (r = 0.81; 95% CI, 0.73-0.87; p < 0.001) and the physical component summary and physical functioning of the SF-36. The Japanese-language translation of the MSTS scoring system for the lower extremity has sufficient reliability and reasonable validity. Nevertheless, the observation of a ceiling effect suggests poor ability of this system to discriminate from among patients who have a high level of function.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is very suitable for representing the lengths of life in many cases and has a simple statistical form. Its defining characteristic is a constant hazard rate; the exponential distribution is a special case of the Weibull distribution. In this paper we introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and present the associated analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. The paper describes the likelihood function, followed by the posterior function and the point, interval, hazard function, and reliability estimates. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
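As an illustration of the constant-hazard exponential model and the competing-risks quantities described above, here is a minimal sketch (the failure rates and function names are invented for illustration; the Bayesian estimation itself is not reproduced):

```python
import math

def reliability(t, lam):
    # Exponential survival function R(t) = exp(-lambda * t);
    # the hazard rate is the constant lam, the model's defining property.
    return math.exp(-lam * t)

def crude_failure_prob(t, lams, j):
    # Probability of failing from cause j by time t in the presence of the
    # other independent exponential risks:
    # (lam_j / lam_tot) * (1 - exp(-lam_tot * t)).
    lam_tot = sum(lams)
    return (lams[j] / lam_tot) * (1.0 - math.exp(-lam_tot * t))

lams = [0.001, 0.003]   # two independent causes of failure, per hour (invented)
t = 200.0
print(reliability(t, sum(lams)))       # probability of surviving both risks to t
print(crude_failure_prob(t, lams, 0))  # crude probability of failure from cause 0
```

With independent exponential causes, the system hazard is simply the sum of the individual rates, so the survival and crude failure probabilities partition the total probability.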
NASA Technical Reports Server (NTRS)
Turnquist, S. R.; Twombly, M.; Hoffman, D.
1989-01-01
A preliminary reliability, availability, and maintainability (RAM) analysis of the proposed Space Station Freedom electric power system (EPS) was performed using the unit reliability, availability, and maintainability (UNIRAM) analysis methodology. Orbital replacement units (ORUs) having the most significant impact on EPS availability measures were identified. Also, the sensitivity of the EPS to variations in ORU RAM data was evaluated for each ORU. Estimates were made of average EPS power output levels and availability of power to the core area of the space station. The results of assessments of the availability of EPS power and power to load distribution points in the space stations are given. Some highlights of continuing studies being performed to understand EPS availability considerations are presented.
48 CFR 215.404-1 - Proposal analysis techniques.
Code of Federal Regulations, 2010 CFR
2010-10-01
... reliability of its estimating and accounting systems. [63 FR 55040, Oct. 14, 1998, as amended at 71 FR 69494... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Proposal analysis techniques. 215.404-1 Section 215.404-1 Federal Acquisition Regulations System DEFENSE ACQUISITION...
Reliability Analysis of the Space Station Freedom Electrical Power System
1989-08-01
Cleveland, Ohio, who assisted in obtaining related research materials and provided feedback on our efforts to produce a dynamic analysis tool useful to...System software that we used to do our analysis of the electrical power system. Thanks are due to Dr. Vira Chankong, my thesis advisor, for his...a frequency duration analysis . Using a transition rate matrix with a model of the photovoltaic and solar dynamic systems, they have one model that
Ku-band signal design study. [space shuttle orbiter data processing network
NASA Technical Reports Server (NTRS)
Rubin, I.
1978-01-01
Analytical tools, methods and techniques for assessing the design and performance of the space shuttle orbiter data processing system (DPS) are provided. The computer data processing network is evaluated in the key areas of queueing behavior, synchronization, and network reliability. The structure of the data processing network is described, as well as the system operation principles and the network configuration. The characteristics of the computer systems are indicated. System reliability measures are defined and studied. System and network invulnerability measures are computed. Communication path and network failure analysis techniques are included.
Validity and reliability of acoustic analysis of respiratory sounds in infants
Elphick, H; Lancaster, G; Solis, A; Majumdar, A; Gupta, R; Smyth, R
2004-01-01
Objective: To investigate the validity and reliability of computerised acoustic analysis in the detection of abnormal respiratory noises in infants. Methods: Blinded, prospective comparison of acoustic analysis with stethoscope examination. Validity and reliability of acoustic analysis were assessed by calculating the degree of observer agreement using the κ statistic with 95% confidence intervals (CI). Results: 102 infants under 18 months were recruited. Convergent validity for agreement between stethoscope examination and acoustic analysis was poor for wheeze (κ = 0.07 (95% CI, –0.13 to 0.26)) and rattles (κ = 0.11 (–0.05 to 0.27)) and fair for crackles (κ = 0.36 (0.18 to 0.54)). Both the stethoscope and acoustic analysis distinguished well between sounds (discriminant validity). Agreement between observers for the presence of wheeze was poor for both stethoscope examination and acoustic analysis. Agreement for rattles was moderate for the stethoscope but poor for acoustic analysis. Agreement for crackles was moderate using both techniques. Within-observer reliability for all sounds using acoustic analysis was moderate to good. Conclusions: The stethoscope is unreliable for assessing respiratory sounds in infants. This has important implications for its use as a diagnostic tool for lung disorders in infants, and confirms that it cannot be used as a gold standard. Because of the unreliability of the stethoscope, the validity of acoustic analysis could not be demonstrated, although it could discriminate between sounds well and showed good within-observer reliability. For acoustic analysis, targeted training and the development of computerised pattern recognition systems may improve reliability so that it can be used in clinical practice. PMID:15499065
NASA Astrophysics Data System (ADS)
Xia, Quan; Wang, Zili; Ren, Yi; Sun, Bo; Yang, Dezhen; Feng, Qiang
2018-05-01
With the rapid development of lithium-ion battery technology in the electric vehicle (EV) industry, the lifetime of the battery cell has increased substantially; however, the reliability of the battery pack is still inadequate. Because of the complexity of the battery pack, a reliability design method for a lithium-ion battery pack considering thermal disequilibrium is proposed in this paper based on cell redundancy. Based on this method, a three-dimensional electric-thermal-flow-coupled model, a stochastic degradation model of cells under field dynamic conditions and a multi-state system reliability model of a battery pack are established. The relationships between the multi-physics coupling model, the degradation model and the system reliability model are first constructed to analyze the reliability of the battery pack, followed by analysis examples with different redundancy strategies. By comparing the reliability of battery packs with different redundant cell numbers and configurations, several conclusions for the redundancy strategy are obtained. Most notably, the reliability does not increase monotonically with the number of redundant cells, owing to thermal disequilibrium effects. In this work, the 6 × 5 parallel-series configuration is the optimal system structure. In addition, the effects of the cell arrangement and cooling conditions are investigated.
NASA Astrophysics Data System (ADS)
Yu, Long; Xu, Juanjuan; Zhang, Lifang; Xu, Xiaogang
2018-03-01
A reliability mathematical model for a high temperature and high pressure multi-stage decompression control valve (HMDCV) is established based on stress-strength interference theory, and a temperature correction coefficient is introduced to revise the material fatigue limit at high temperature. The reliability of the key dangerous components and the fatigue sensitivity curve of each component are calculated and analyzed by these means, combining the fatigue life analysis of the control valve with reliability theory. The proportional impact of each component on the fatigue failure of the control valve system was obtained. The results show that the temperature correction factor makes the theoretical reliability calculations more accurate, that the predicted life expectancy of the main pressure parts accords with the technical requirements, and that the valve body and the sleeve have an obvious influence on control system reliability; the stress concentration in key parts of the control valve can be reduced in the design process by improving the structure.
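The stress-strength interference idea underlying this model can be sketched as follows, assuming independent normally distributed stress and strength; the temperature correction coefficient `k_T` and all numbers are hypothetical, not taken from the paper:

```python
import math

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    # For independent normal strength S and stress s,
    # R = P(S > s) = Phi(beta) with beta = (mu_S - mu_s) / sqrt(sd_S^2 + sd_s^2).
    beta = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))

# Hypothetical component: the fatigue limit is derated by a temperature
# correction coefficient k_T before entering the interference calculation.
k_T = 0.85
R = interference_reliability(k_T * 600.0, 40.0, 420.0, 30.0)
print(R)
```

The derating illustrates why a temperature correction matters: reducing the effective strength mean shrinks the safety margin beta and thus the computed reliability.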
Reliability and Maintainability Analysis: A Conceptual Design Model
1972-03-01
Elements For a System: I. Research and Development: A. Preliminary design and engineering; B. Fabrication of test equipment; C. Test operations; D. ... reliability requirements, little, if any, modularization and automatic test features would be incorporated in the subsystem design; limited reliability/maintainability testing and monitoring would be conducted during development, and little Quality Control effort in the reliability/maintainability ...
Real-time emergency forecasting technique for situation management systems
NASA Astrophysics Data System (ADS)
Kopytov, V. V.; Kharechkin, P. V.; Naumenko, V. V.; Tretyak, R. S.; Tebueva, F. B.
2018-05-01
The article describes a real-time emergency forecasting technique that increases the accuracy and reliability of the forecasting results of any emergency computational model applied for decision making in situation management systems. The computational models are improved by the Improved Brown's method, which applies fractal dimension to forecast short time series of data received from sensors and control systems. The reliability of the emergency forecasting results is ensured by filtering out invalid sensed data using correlation analysis methods.
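A sketch of the classical Brown's double exponential smoothing forecast on which such techniques build (the fractal-dimension refinement and the sensed-data filtering described in the article are not reproduced here, and the sensor readings are invented):

```python
def browns_forecast(series, alpha, m=1):
    # Classical Brown's double (linear) exponential smoothing:
    # s1 smooths the data, s2 smooths s1; level a and trend b give
    # the m-step-ahead forecast a + b*m.
    s1 = s2 = series[0]
    for x in series[1:]:
        s1 = alpha * x + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
    a = 2 * s1 - s2
    b = alpha / (1 - alpha) * (s1 - s2)
    return a + b * m

readings = [10.0, 10.4, 10.9, 11.5, 12.2, 13.0]  # hypothetical sensor data
print(browns_forecast(readings, alpha=0.5, m=1))
```

For a trending series the method extrapolates the smoothed trend, which is exactly the behavior a short-horizon emergency forecast needs.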
R&D of high reliable refrigeration system for superconducting generators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hosoya, T.; Shindo, S.; Yaguchi, H.
1996-12-31
Super-GM carries out R&D on 70 MW class superconducting generators (model machines), refrigeration systems and superconducting wires to apply superconducting technology to electric power apparatuses. The helium refrigeration system for keeping the field windings of a superconducting generator (SCG) in a cryogenic environment must meet the requirement of high reliability for uninterrupted long-term operation of the SCG. In FY 1992, a highly reliable conventional refrigeration system for the model machines was integrated by combining components such as the compressor unit, higher-temperature cold box and lower-temperature cold box, which were manufactured utilizing various fundamental technologies developed in the early stage of the project since 1988. Since FY 1993, its performance tests have been carried out. It has been confirmed that its performance fulfilled the development targets of a liquefaction capacity of 100 L/h and removal of impurities in the helium gas to < 0.1 ppm. Furthermore, its operation method and performance were clarified for all the different operating modes, such as how to control the liquefaction rate and how to supply liquid helium from a dewar to the model machine. In addition, the authors have made performance tests and system performance analyses of oil-free screw type and turbo type compressors, which greatly improve the reliability of conventional refrigeration systems. The operation performance and operational control method of the compressors have been clarified through the tests and analysis.
Aerospace reliability applied to biomedicine.
NASA Technical Reports Server (NTRS)
Lalli, V. R.; Vargo, D. J.
1972-01-01
An analysis is presented that indicates that the reliability and quality assurance methodology selected by NASA to minimize failures in aerospace equipment can be applied directly to biomedical devices to improve hospital equipment reliability. The Space Electric Rocket Test project is used as an example of NASA application of reliability and quality assurance (R&QA) methods. By analogy a comparison is made to show how these same methods can be used in the development of transducers, instrumentation, and complex systems for use in medicine.
NASA Technical Reports Server (NTRS)
Hanks, G. W.; Shomber, H. A.; Dethman, H. A.; Gratzer, L. B.; Maeshiro, A.; Gangsaas, D.; Blight, J. D.; Buchan, S. M.; Crumb, C. B.; Dorwart, R. J.
1981-01-01
The current status of the Active Controls Technology (ACT) for the advanced subsonic transport project is investigated through analysis of the system's technical data. Control system technologies under examination include computerized reliability analysis, the pitch-axis fly-by-wire actuator, a flaperon actuation system design trade study, control law synthesis and analysis, flutter mode control and gust load alleviation analysis, and implementation of alternative ACT systems. Extensive analysis of the computer techniques involved in each system is included.
An Energy-Based Limit State Function for Estimation of Structural Reliability in Shock Environments
Guthrie, Michael A.
2013-01-01
A limit state function is developed for the estimation of structural reliability in shock environments. This limit state function uses peak modal strain energies to characterize environmental severity and modal strain energies at failure to characterize the structural capacity. The Hasofer-Lind reliability index is briefly reviewed and its computation for the energy-based limit state function is discussed. Applications to two-degree-of-freedom mass-spring systems and to a simple finite element model are considered. For these examples, computation of the reliability index requires little effort beyond a modal analysis, but still accounts for relevant uncertainties in both the structure and the environment. For both examples, the reliability index is observed to agree well with the results of Monte Carlo analysis. In situations where fast, qualitative comparison of several candidate designs is required, the reliability index based on the proposed limit state function provides an attractive metric which can be used to compare and control reliability.
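For a linear limit state with independent normal capacity and demand, the Hasofer-Lind index reduces to a closed form, and the implied failure probability can be checked against Monte Carlo much as the abstract describes; a sketch with invented numbers (not the paper's mass-spring examples):

```python
import math
import random

def hasofer_lind_beta(mu_cap, sd_cap, mu_dem, sd_dem):
    # For a linear limit state g = capacity - demand with independent normal
    # variables, the Hasofer-Lind index reduces to beta = mu_g / sigma_g.
    return (mu_cap - mu_dem) / math.hypot(sd_cap, sd_dem)

def mc_failure_prob(mu_cap, sd_cap, mu_dem, sd_dem, n=200_000, seed=1):
    # Brute-force check: count samples where demand exceeds capacity.
    random.seed(seed)
    fails = sum(random.gauss(mu_cap, sd_cap) < random.gauss(mu_dem, sd_dem)
                for _ in range(n))
    return fails / n

# Hypothetical strain-energy capacity vs. peak modal strain-energy demand.
beta = hasofer_lind_beta(100.0, 10.0, 70.0, 12.0)
pf_form = 0.5 * math.erfc(beta / math.sqrt(2.0))   # Phi(-beta)
pf_mc = mc_failure_prob(100.0, 10.0, 70.0, 12.0)
print(beta, pf_form, pf_mc)
```

The agreement between the closed-form probability and the Monte Carlo estimate mirrors the comparison reported in the abstract, though only for this idealized linear-normal case.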
The Role of Probabilistic Design Analysis Methods in Safety and Affordability
NASA Technical Reports Server (NTRS)
Safie, Fayssal M.
2016-01-01
For the last several years, NASA and its contractors have been working together to build space launch systems to commercialize space. Developing commercial affordable and safe launch systems becomes very important and requires a paradigm shift. This paradigm shift enforces the need for an integrated systems engineering environment where cost, safety, reliability, and performance need to be considered to optimize the launch system design. In such an environment, rule based and deterministic engineering design practices alone may not be sufficient to optimize margins and fault tolerance to reduce cost. As a result, introduction of Probabilistic Design Analysis (PDA) methods to support the current deterministic engineering design practices becomes a necessity to reduce cost without compromising reliability and safety. This paper discusses the importance of PDA methods in NASA's new commercial environment, their applications, and the key role they can play in designing reliable, safe, and affordable launch systems. More specifically, this paper discusses: 1) The involvement of NASA in PDA 2) Why PDA is needed 3) A PDA model structure 4) A PDA example application 5) PDA link to safety and affordability.
Reliability Analysis of RSG-GAS Primary Cooling System to Support Aging Management Program
NASA Astrophysics Data System (ADS)
Deswandri; Subekti, M.; Sunaryo, Geni Rina
2018-02-01
The Multipurpose Research Reactor G.A. Siwabessy (RSG-GAS), which has been operating since 1987, is one of the main facilities supporting the research, development and application of nuclear energy programs in BATAN. Until now, the RSG-GAS research reactor has been operated safely and securely. However, because it has been operating for nearly 30 years, the structures, systems and components (SSCs) of the reactor have begun to enter an aging phase. Aging causes a decrease in the reliability and safety performance of the reactor, so an aging management program is needed to resolve these issues. One element of the aging management program is to evaluate the safety and reliability of the system and to screen the critical components to be managed. One method that can be used for such purposes is Fault Tree Analysis (FTA). In this paper the FTA method is used to screen the critical components of the RSG-GAS primary cooling system. The evaluation results showed that the primary isolation valves are the dominant basic events contributing to system failure.
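The quantitative side of FTA reduces, for independent basic events, to combining probabilities through AND and OR gates; a minimal sketch (the tree fragment and the probabilities are hypothetical, not the RSG-GAS model):

```python
def or_gate(probs):
    # OR gate: the event occurs if any independent input event occurs.
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(probs):
    # AND gate: the event occurs only if every independent input event occurs.
    p = 1.0
    for q in probs:
        p *= q
    return p

# Hypothetical slice of a cooling-system fault tree: the top event occurs if
# either of two isolation valves fails, or if both redundant pumps fail.
p_valve = 1e-3
p_pump = 5e-3
p_top = or_gate([p_valve, p_valve, and_gate([p_pump, p_pump])])
print(p_top)
```

Screening critical components then amounts to asking which basic-event probabilities the top-event probability is most sensitive to, as the paper does for the isolation valves.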
Reliability analysis of component-level redundant topologies for solid-state fault current limiter
NASA Astrophysics Data System (ADS)
Farhadi, Masoud; Abapour, Mehdi; Mohammadi-Ivatloo, Behnam
2018-04-01
Experience shows that semiconductor switches are the most vulnerable components in power electronics systems. One of the most common ways to address this reliability challenge is component-level redundant design, for which four configurations are possible. This article presents a comparative reliability analysis of the different component-level redundant designs for a solid-state fault current limiter. The aim of the proposed analysis is to determine the most reliable component-level redundant configuration. The mean time to failure (MTTF) is used as the reliability parameter. Considering both fault types (open circuit and short circuit), the MTTFs of the different configurations are calculated. It is demonstrated that which configuration is more reliable depends on the steady-state junction temperature of the semiconductor switches, which is a function of (i) the ambient temperature, (ii) the power loss of the semiconductor switch and (iii) the thermal resistance of the heat sink. The sensitivity of the results to each parameter is also investigated. The results show that under different conditions, different configurations have the higher reliability. Experimental results are presented to clarify the theory and the feasibility of the proposed approaches. Finally, the levelised costs of the different configurations are analysed for a fair comparison.
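For constant failure rates, the MTTF comparison between a single switch, an active-parallel pair and a series pair has simple closed forms; a sketch under idealized single-failure-mode assumptions (a real comparison, as in the article, must weigh open- and short-circuit modes together with junction temperature; the failure rate below is invented):

```python
def mttf_single(lam):
    # Constant failure rate lam gives MTTF = 1/lam.
    return 1.0 / lam

def mttf_parallel(lam):
    # Two active-parallel devices (tolerates one open-circuit fault):
    # the pair works until both have failed, MTTF = 3/(2*lam).
    return 1.5 / lam

def mttf_series(lam):
    # Two devices in series (tolerates one short-circuit fault, since the
    # healthy device still blocks current): for the open-circuit mode the
    # first failure is fatal, MTTF = 1/(2*lam).
    return 0.5 / lam

lam = 2e-6  # failures per hour for one semiconductor switch (illustrative)
print(mttf_single(lam), mttf_parallel(lam), mttf_series(lam))
```

The ranking flips depending on which fault type dominates, which is why the article's conclusion hinges on operating conditions rather than on a single universally best configuration.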
RELIABLE COMPUTATION OF HOMOGENEOUS AZEOTROPES. (R824731)
It is important to determine the existence and composition of homogeneous azeotropes in the analysis of phase behavior and in the synthesis and design of separation systems, from both theoretical and practical standpoints. A new method for reliably locating an...
Seeking high reliability in primary care: Leadership, tools, and organization.
Weaver, Robert R
2015-01-01
Leaders in health care increasingly recognize that improving health care quality and safety requires developing an organizational culture that fosters high reliability and continuous process improvement. For various reasons, a reliability-seeking culture is lacking in most health care settings. Developing a reliability-seeking culture requires leaders' sustained commitment to reliability principles using key mechanisms to embed those principles widely in the organization. The aim of this study was to examine how key mechanisms used by a primary care practice (PCP) might foster a reliability-seeking, system-oriented organizational culture. A case study approach was used to investigate the PCP's reliability culture. The study examined four cultural artifacts used to embed reliability-seeking principles across the organization: leadership statements, decision support tools, and two organizational processes. To decipher their effects on reliability, the study relied on observations of work patterns and the tools' use, interactions during morning huddles and process improvement meetings, interviews with clinical and office staff, and a "collective mindfulness" questionnaire. The five reliability principles framed the data analysis. Leadership statements articulated principles that oriented the PCP toward a reliability-seeking culture of care. Reliability principles became embedded in the everyday discourse and actions through the use of "problem knowledge coupler" decision support tools and daily "huddles." Practitioners and staff were encouraged to report unexpected events or close calls that arose and which often initiated a formal "process change" used to adjust routines and prevent adverse events from recurring. Activities that foster reliable patient care became part of the taken-for-granted routine at the PCP. 
The analysis illustrates the role leadership, tools, and organizational processes play in developing and embedding a reliability-seeking culture across an organization. Progress toward a reliability-seeking, system-oriented approach to care remains ongoing, and movement in that direction requires deliberate and sustained effort by committed leaders in health care.
RELAV - RELIABILITY/AVAILABILITY ANALYSIS PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
RELAV (Reliability/Availability Analysis Program) is a comprehensive analytical tool to determine the reliability or availability of any general system which can be modeled as embedded k-out-of-n groups of items (components) and/or subgroups. Both ground and flight systems at NASA's Jet Propulsion Laboratory have utilized this program. RELAV can assess current system performance during the later testing phases of a system design, as well as model candidate designs/architectures or validate and form predictions during the early phases of a design. Systems are commonly modeled as System Block Diagrams (SBDs). RELAV calculates the success probability of each group of items and/or subgroups within the system assuming k-out-of-n operating rules apply for each group. The program operates on a folding basis; i.e. it works its way towards the system level from the most embedded level by folding related groups into single components. The entire folding process involves probabilities; therefore, availability problems are performed in terms of the probability of success, and reliability problems are performed for specific mission lengths. An enhanced cumulative binomial algorithm is used for groups where all probabilities are equal, while a fast algorithm based upon "Computing k-out-of-n System Reliability", Barlow & Heidtmann, IEEE TRANSACTIONS ON RELIABILITY, October 1984, is used for groups with unequal probabilities. Inputs to the program include a description of the system and any one of the following: 1) availabilities of the items, 2) mean time between failures and mean time to repairs for the items from which availabilities are calculated, 3) mean time between failures and mission length(s) from which reliabilities are calculated, or 4) failure rates and mission length(s) from which reliabilities are calculated. The results are probabilities of success of each group and the system in the given configuration. 
RELAV assumes exponential failure distributions for reliability calculations and infinite repair resources for availability calculations. No more than 967 items or groups can be modeled by RELAV. If larger problems can be broken into subsystems of 967 items or less, the subsystem results can be used as item inputs to a system problem. The calculated availabilities are steady-state values. Group results are presented in the order in which they were calculated (from the most embedded level out to the system level). This provides a good mechanism to perform trade studies. Starting from the system result and working backwards, the granularity gets finer; therefore, system elements that contribute most to system degradation are detected quickly. RELAV is a C-language program originally developed under the UNIX operating system on a MASSCOMP MC500 computer. It has been modified, as necessary, and ported to an IBM PC compatible with a math coprocessor. The current version of the program runs in the DOS environment and requires a Turbo C vers. 2.0 compiler. RELAV has a memory requirement of 103 KB and was developed in 1989. RELAV is a copyrighted work with all copyright vested in NASA.
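RELAV's two group-evaluation schemes are straightforward to sketch. Below is a minimal Python illustration (not RELAV's actual code; the function names are mine) of the cumulative binomial for equal-probability groups and the Barlow-Heidtmann dynamic-programming recursion for groups with unequal probabilities.

```python
from math import comb

def k_out_of_n_equal(k, n, p):
    """Success probability of a k-out-of-n group of identical components
    (cumulative binomial, the scheme RELAV uses for equal probabilities)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def k_out_of_n_unequal(k, probs):
    """Success probability with per-component probabilities, via the
    dynamic-programming recursion of Barlow & Heidtmann (1984)."""
    dist = [1.0]  # dist[j] = P(exactly j of the components seen so far work)
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for j, q in enumerate(dist):
            new[j] += q * (1 - p)   # this component fails
            new[j + 1] += q * p     # this component works
        dist = new
    return sum(dist[k:])

# "Folding": a 2-out-of-3 group of 0.9-reliable items becomes one
# pseudo-component with this success probability.
group = k_out_of_n_unequal(2, [0.9, 0.9, 0.9])
```

The folded group probability can then serve as an item input at the next level up, mirroring RELAV's inside-out evaluation.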
Effect of Entropy Generation on Wear Mechanics and System Reliability
NASA Astrophysics Data System (ADS)
Gidwani, Akshay; James, Siddanth; Jagtap, Sagar; Karthikeyan, Ram; Vincent, S.
2018-04-01
Wear is an irreversible phenomenon. Processes such as mutual sliding and rolling between materials generate entropy, and these processes are monotonic with respect to time. The concept of entropy generation is quantified using the Degradation Entropy Generation theorem formulated by Michael D. Bryant. The sliding-wear model can be extrapolated to other instances to support machine prognostics as well as system and process reliability analysis for a range of processes beyond the purely mechanical. In other words, using the concepts of entropy generation and wear, one can quantify the reliability of a system with respect to time using a thermodynamic variable, which is the basis of this paper. Thus, in the present investigation, a unique attempt has been made to establish a correlation between entropy, wear, and reliability that can be a useful technique in preventive maintenance.
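The Degradation Entropy Generation relation reduces to two one-line formulas. The sketch below assumes a fully dissipative dry-sliding contact and a hypothetical degradation coefficient B; it illustrates the w' = B·S' proportionality only and is not code from the paper.

```python
def entropy_generation_rate(mu, normal_load, sliding_speed, temperature):
    """Entropy production rate S' = friction power / T for dry sliding,
    assuming all frictional work is dissipated at the contact."""
    return mu * normal_load * sliding_speed / temperature

def wear_rate(degradation_coefficient, s_dot):
    """DEG theorem (Bryant): wear rate proportional to entropy generation,
    w' = B * S'. B is a hypothetical, experimentally fitted coefficient."""
    return degradation_coefficient * s_dot

# Hypothetical operating point: mu=0.3, N=100 N, v=0.5 m/s, T=300 K
s_dot = entropy_generation_rate(0.3, 100.0, 0.5, 300.0)   # W/K
w_dot = wear_rate(2e-9, s_dot)
```

Because S' accumulates monotonically, integrating it over time gives a thermodynamic usage measure that can be mapped to a reliability index.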
SURE reliability analysis: Program and mathematics
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; White, Allan L.
1988-01-01
The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The computational methods on which the program is based provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
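The quantity SURE bounds can be illustrated with a toy model. SURE itself computes algebraic upper and lower bounds, not simulations; the Monte Carlo sketch below merely estimates the death-state probability for a hypothetical duplex system in which a near-coincident second fault during reconfiguration is fatal. All parameters and names are mine.

```python
import random

def death_probability(lam, recovery_mean, mission_time,
                      trials=100_000, seed=42):
    """Monte Carlo estimate of a death-state probability for a toy duplex
    model: the system dies if the second unit fails before reconfiguration
    completes after the first fault. Parameters are hypothetical."""
    rng = random.Random(seed)
    deaths = 0
    for _ in range(trials):
        t1 = rng.expovariate(2 * lam)                  # first of two units fails
        if t1 > mission_time:
            continue                                   # fault-free mission
        recovery = rng.expovariate(1 / recovery_mean)  # fast recovery process
        t2 = rng.expovariate(lam)                      # fault on surviving unit
        if t2 < recovery and t1 + t2 < mission_time:
            deaths += 1                                # near-coincident fault
    return deaths / trials
```

Re-running the estimate over a range of recovery times mimics the kind of sensitivity analysis SURE automates symbolically.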
Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management.
Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E; Choi, Tsan-Ming
2016-08-01
In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, hybridizing the well-known NSGA-II algorithm with an adaptive population-based simulated annealing (APBSA) method, is developed to solve systems reliability optimization problems. In the first step, a coevolutionary strategy is used to construct the algorithm. Since the proposed algorithm is very sensitive to parameter values, the response surface method is employed to estimate its appropriate parameters. Moreover, to examine the performance of the proposed approach, several test problems are generated, and the proposed hybrid algorithm and other commonly known approaches (i.e., MOGA, NRGA, and NSGA-II) are compared with respect to four performance measures: 1) mean ideal distance; 2) diversification metric; 3) percentage of domination; and 4) data envelopment analysis. The computational studies show that the proposed algorithm is an effective approach for systems reliability and risk management.
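The bi-objective redundancy allocation problem behind this work can be shown in miniature. The brute-force enumeration below is a stand-in for the NSGA-II/APBSA search (which is needed once the design space is too large to enumerate); the component reliabilities and costs are hypothetical.

```python
from itertools import product

def system_reliability(alloc, r):
    """Series system of parallel groups: R = prod(1 - (1 - r_i)**n_i)."""
    rel = 1.0
    for n_i, r_i in zip(alloc, r):
        rel *= 1 - (1 - r_i) ** n_i
    return rel

def pareto_front(r, c, max_units=3):
    """Enumerate every allocation of 1..max_units redundant units per
    subsystem and keep the nondominated (reliability, cost) points."""
    points = []
    for alloc in product(range(1, max_units + 1), repeat=len(r)):
        rel = system_reliability(alloc, r)
        cost = sum(n_i * c_i for n_i, c_i in zip(alloc, c))
        points.append((rel, cost, alloc))
    front = [p for p in points
             if not any(q[0] >= p[0] and q[1] <= p[1]
                        and (q[0] > p[0] or q[1] < p[1])
                        for q in points)]
    return sorted(front, key=lambda p: p[1])
```

Along the resulting front, every extra unit of cost buys strictly more reliability, which is exactly the trade-off surface the metaheuristics approximate.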
Parhar, Harman S; Thamboo, Andrew; Habib, Al-Rahim; Chang, Brent; Gan, Eng Cern; Javer, Amin R
2014-04-01
The Philpott-Javer postoperative endoscopic mucosal staging system for allergic fungal rhinosinusitis has previously demonstrated acceptable interrater reliability among rhinologists. There are, however, numerous learners involved in patient care at tertiary centers. This study aims to analyze the interrater and intrarater reliability of this system among learners in otolaryngology at different stages in training. A prospective analysis of retrospectively collected endoscopic photographs. A tertiary care teaching hospital (January 2013). Fifty patients undergoing routine follow-up. Three photographs from each of 50 patients undergoing routine postsurgical nasoendoscopy were reviewed. Images were played twice, 1 week apart, in 2 differently randomized cycles and scored according to Philpott-Javer criteria by a rhinologist, a rhinology fellow, a senior otolaryngology resident, a junior otolaryngology resident, and a medical student. Interobserver reliability was assessed using the intraclass correlation coefficient, while intrarater reliability was assessed by Shrout-Fleiss κ values. Agreement between each learner and the rhinologist was also assessed using κ values. The intraclass correlation among the 5 raters was 0.7600 (95% confidence interval, 0.6917-0.8161) for the Philpott-Javer scoring system, suggesting substantial reliability. Intrarater data showed substantial to almost-perfect reliability (κ values between 0.668 and 0.815) among all raters using this system. There was also moderate to substantial agreement between the learners and the rhinologist (κ values between 0.534 and 0.710). Results suggest that the Philpott-Javer staging system has acceptable intrarater and interrater reliability among learners of differing levels of clinical experience and is suitable for evaluating progress following surgery.
Reliability and validity of the Microsoft Kinect for assessment of manual wheelchair propulsion.
Milgrom, Rachel; Foreman, Matthew; Standeven, John; Engsberg, Jack R; Morgan, Kerri A
2016-01-01
Concurrent validity and test-retest reliability of the Microsoft Kinect in quantifying manual wheelchair propulsion were examined. Data were collected from five manual wheelchair users on a roller system. Three Kinect sensors were used to assess test-retest reliability with a still pose. Three systems were used to assess concurrent validity of the Kinect for measuring propulsion kinematics (joint angles, push loop characteristics): Kinect, Motion Analysis, and Dartfish ProSuite (Dartfish joint angles were limited to shoulder and elbow flexion). Intraclass correlation coefficients (ICCs) revealed good reliability (0.87-0.99) for five of the six joint angles (neck flexion, shoulder flexion, shoulder abduction, elbow flexion, wrist flexion). ICCs suggested good concurrent validity for elbow flexion between the Kinect and Dartfish and between the Kinect and Motion Analysis. Good concurrent validity was also revealed for maximum height, hand-axle relationship, and maximum area (0.92-0.95) between the Kinect and Dartfish, and for maximum height and hand-axle relationship (0.89-0.96) between the Kinect and Motion Analysis. Analysis of variance revealed significant differences (p < 0.05) in maximum length between Dartfish (mean 58.76 cm) and the Kinect (40.16 cm). The results have promising research and clinical implications for propulsion assessment and overuse injury prevention.
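The intraclass correlation coefficients reported in studies like this relate between-subject variance to within-subject (repeat) variance. The sketch below implements the simple one-way random-effects ICC(1,1); published work often uses two-way forms, so this is an illustration of the statistic's structure, not the exact estimator any given paper used.

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1): each inner list holds the k
    repeated measurements for one subject."""
    n = len(ratings)        # subjects
    k = len(ratings[0])     # repeated measurements per subject
    grand = sum(sum(r) for r in ratings) / (n * k)
    subj_means = [sum(r) / k for r in ratings]
    # between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for r, m in zip(ratings, subj_means)
              for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect test-retest agreement gives an ICC of 1; values near the 0.78 reported here indicate that most variance lies between subjects rather than between repeats.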
Cyber Physical Systems for User Reliability Measurements in a Sharing Economy Environment
Seo, Aria; Kim, Yeichang
2017-01-01
As the sharing economic market grows, the number of users is also increasing but many problems arise in terms of reliability between providers and users in the processing of services. The existing methods provide shared economic systems that judge the reliability of the provider from the viewpoint of the user. In this paper, we have developed a system for establishing mutual trust between providers and users in a shared economic environment to solve existing problems. In order to implement a system that can measure and control users’ situation in a shared economic environment, we analyzed the necessary factors in a cyber physical system (CPS). In addition, a user measurement system based on a CPS structure in a sharing economic environment is implemented through analysis of the factors to consider when constructing a CPS. PMID:28805709
System data communication structures for active-control transport aircraft, volume 1
NASA Technical Reports Server (NTRS)
Hopkins, A. L.; Martin, J. H.; Brock, L. D.; Jansson, D. G.; Serben, S.; Smith, T. B.; Hanley, L. D.
1981-01-01
Candidate data communication techniques are identified, including dedicated links, local buses, broadcast buses, multiplex buses, and mesh networks. The design methodology for mesh networks is then discussed, including network topology and node architecture. Several concepts of power distribution are reviewed, including current limiting and mesh networks for power. The technology issues of packaging, transmission media, and lightning are addressed, and, finally, the analysis tools developed to aid in the communication design process are described. There are special tools to analyze the reliability and connectivity of networks and more general reliability analysis tools for all types of systems.
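What a network reliability/connectivity tool computes can be sketched in miniature: the probability that a source and a terminal remain connected given independent link up-probabilities. The brute-force enumeration below is illustrative only (it is exponential in edge count; practical tools use factoring, cut-set, or decomposition methods), and the example network is hypothetical.

```python
from itertools import product

def two_terminal_reliability(nodes, edges, s, t):
    """Two-terminal reliability by enumerating edge up/down states.
    edges: list of (u, v, p_up) with independent link failures."""
    total = 0.0
    for states in product([True, False], repeat=len(edges)):
        prob = 1.0
        up_edges = []
        for (u, v, p), alive in zip(edges, states):
            prob *= p if alive else 1 - p
            if alive:
                up_edges.append((u, v))
        # connectivity check via DFS over surviving edges
        adj = {n: [] for n in nodes}
        for u, v in up_edges:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {s}, [s]
        while stack:
            for nb in adj[stack.pop()]:
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        if t in seen:
            total += prob
    return total
```

A mesh topology raises this probability over a single link precisely because redundant paths survive individual link failures.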
Small numbers, disclosure risk, security, and reliability issues in Web-based data query systems.
Rudolph, Barbara A; Shah, Gulzar H; Love, Denise
2006-01-01
This article describes the process for developing consensus guidelines and tools for releasing public health data via the Web and highlights approaches leading agencies have taken to balance disclosure risk with public dissemination of reliable health statistics. An agency's choice of statistical methods for improving the reliability of released data for Web-based query systems is based upon a number of factors, including query system design (dynamic analysis vs preaggregated data and tables), population size, cell size, data use, and how data will be supplied to users. The article also describes those efforts that are necessary to reduce the risk of disclosure of an individual's protected health information.
Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Tutorial
DOE Office of Scientific and Technical Information (OSTI.GOV)
C. L. Smith; S. T. Beck; S. T. Wood
2008-08-01
The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of computer programs that were developed to create and analyze probabilistic risk assessments (PRAs). This volume is the tutorial manual for the SAPHIRE system. In this document, a series of lessons is provided that guides the user through the basic steps common to most analyses performed with SAPHIRE. The tutorial is divided into two major sections covering both basic and advanced features. The section covering basic topics contains lessons that lead the reader through development of a probabilistic hypothetical problem involving a vehicle accident, highlighting the program's most fundamental features. The advanced features section contains additional lessons that expand on fundamental analysis features of SAPHIRE and provide insights into more complex analysis techniques. Together, these two elements provide an overview of the operation and capabilities of the SAPHIRE software.
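The kind of quantification a PRA tool performs can be shown with a toy fault tree. The evaluator below assumes independent basic events and computes the top-event probability by direct recursion; SAPHIRE's engine works with minimal cut sets and much richer models, so this is only a structural illustration, and the tree and event probabilities are hypothetical.

```python
def gate_probability(node, basic):
    """Evaluate a small fault tree of AND/OR gates over independent
    basic events. A node is either an event name or ("AND"/"OR", *children)."""
    if isinstance(node, str):
        return basic[node]
    op, *children = node
    probs = [gate_probability(ch, basic) for ch in children]
    if op == "AND":
        p = 1.0
        for q in probs:
            p *= q                 # all children must fail
        return p
    if op == "OR":
        p = 1.0
        for q in probs:
            p *= 1 - q             # complement: no child fails
        return 1 - p
    raise ValueError(op)

# Hypothetical tree: top event occurs if the pump fails OR both valves fail.
tree = ("OR", "pump", ("AND", "valve_a", "valve_b"))
basic = {"pump": 0.01, "valve_a": 0.05, "valve_b": 0.05}
top = gate_probability(tree, basic)
```

Here the valve pair contributes only 0.0025 to the top-event probability, showing how redundancy suppresses a branch's importance.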
SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. 
A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. 
STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
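The core idea behind STEM, computing the matrix exponential exp(Qt) of a Markov generator by a scaled-and-squared Taylor series, can be sketched compactly. This is a toy version with fixed scaling and term counts and none of STEM's significant-digit estimation; the two-state failure model at the end is hypothetical.

```python
def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm_taylor(Q, t, squarings=10, terms=20):
    """exp(Q*t) via Taylor series on Q*t/2**squarings, then repeated
    squaring -- the scaled Taylor exponential idea behind STEM."""
    n = len(Q)
    s = 2 ** squarings
    A = [[Q[i][j] * t / s for j in range(n)] for i in range(n)]
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms + 1):
        term = mat_mult(term, A)                     # term <- term * A / k
        term = [[x / k for x in row] for row in term]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    for _ in range(squarings):
        result = mat_mult(result, result)
    return result

# Hypothetical 2-state model: state 0 = operational, state 1 = failed (absorbing).
lam = 0.5
Q = [[-lam, lam], [0.0, 0.0]]
P = expm_taylor(Q, 2.0)
unreliability = P[0][1]          # should approximate 1 - exp(-lam * t)
```

Scaling first keeps the Taylor series well conditioned even when Q*t is stiff, which is why this family of methods suits reliability generators with widely separated failure and recovery rates.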
SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. 
A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. 
STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
An approach to solving large reliability models
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Veeraraghavan, Malathi; Dugan, Joanne Bechta; Trivedi, Kishor S.
1988-01-01
This paper describes a unified approach to the problem of solving large realistic reliability models. The methodology integrates behavioral decomposition, state truncation, and efficient sparse matrix-based numerical methods. The use of fault trees, together with ancillary information regarding dependencies, to automatically generate the underlying Markov model state space is proposed. The effectiveness of this approach is illustrated by modeling a state-of-the-art flight control system and a multiprocessor system. Nonexponential distributions for times to failure of components are assumed in the latter example. The modeling tool used for most of this analysis is HARP (the Hybrid Automated Reliability Predictor).
Reliability considerations of a fuel cell backup power system for telecom applications
NASA Astrophysics Data System (ADS)
Serincan, Mustafa Fazil
2016-03-01
A commercial fuel cell backup power unit is tested in real life operating conditions at a base station of a Turkish telecom operator. The fuel cell system responds to 256 of 260 electric power outages successfully, providing the required power to the base station. Reliability of the fuel cell backup power unit is found to be 98.5% at the system level. On the other hand, a qualitative reliability analysis at the component level is carried out. Implications of the power management algorithm on reliability is discussed. Moreover, integration of the backup power unit to the base station ecosystem is reviewed in the context of reliability. Impact of inverter design on the stability of the output power is outlined. Significant current harmonics are encountered when a generic inverter is used. However, ripples are attenuated significantly when a custom design inverter is used. Further, fault conditions are considered for real world case studies such as running out of hydrogen, a malfunction in the system, or an unprecedented operating scheme. Some design guidelines are suggested for hybridization of the backup power unit for an uninterrupted operation.
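The 256-of-260 demand-success figure invites an uncertainty estimate. The Wilson score interval below is a standard statistic, not something reported in the paper; it simply shows how wide the error bars on a 98.5% demand reliability are at this sample size.

```python
from math import sqrt

def wilson_interval(successes, trials, z=1.96):
    """Wilson score confidence interval for a success proportion."""
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = z * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return centre - half, centre + half

p_hat = 256 / 260            # ~0.985, the system-level reliability reported
lo, hi = wilson_interval(256, 260)
```

Even with 260 observed outages, the 95% interval spans several percentage points, which is worth bearing in mind when quoting a single reliability number for backup power.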
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-18
... NUCLEAR REGULATORY COMMISSION [NRC-2011-0109] NUREG/CR-XXXX, Development of Quantitative Software..., ``Development of Quantitative Software Reliability Models for Digital Protection Systems of Nuclear Power Plants... of Risk Analysis, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission...
2013-10-21
depend on the quality of allocating resources. This work uses a reliability model of system and environmental covariates incorporating information at...state space. Further, the use of condition variables allows for the direct modeling of maintenance impact with the assumption that a nominal value ... value ), the model in the application of aviation maintenance can provide a useful estimation of reliability at multiple levels. Adjusted survival
ERIC Educational Resources Information Center
Goclowski, John C.; And Others
The Reliability, Maintainability, and Cost Model (RMCM) described in this report is an interactive mathematical model with a built-in sensitivity analysis capability. It is a major component of the Life Cycle Cost Impact Model (LCCIM), which was developed as part of the DAIS advanced development program to be used to assess the potential impacts…
Stochastic Models of Human Errors
NASA Technical Reports Server (NTRS)
Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)
2002-01-01
Humans play an important role in the overall reliability of engineering systems. More often than not, accidents and system failures are traced to human errors. Therefore, in order to have a meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process with mathematical models is key to analyzing its contributing factors. The objective of this research effort is therefore to establish stochastic models, substantiated by a sound theoretical foundation, to address the occurrence of human errors in the processing of the Space Shuttle.
Anderson, Ruth A.; Hsieh, Pi-Ching; Su, Hui Fang; Landerman, Lawrence R.; McDaniel, Reuben R.
2013-01-01
Objectives. To (1) describe participation in decision-making as a systems-level property of complex adaptive systems and (2) present empirical evidence of reliability and validity of a corresponding measure. Method. Study 1 was a mail survey of a single respondent (administrators or directors of nursing) in each of 197 nursing homes. Study 2 was a field study using a random, proportionally stratified sampling procedure that included 195 organizations with 3,968 respondents. Analysis. In Study 1, we analyzed the data to reduce the number of scale items and establish initial reliability and validity. In Study 2, we strengthened the psychometric test using a large sample. Results. Results demonstrated validity and reliability of the participation in decision-making instrument (PDMI) while measuring participation of workers in two distinct job categories (RNs and CNAs). We established reliability at the organizational level using aggregated item scores. We established validity of the multidimensional properties using convergent and discriminant validity and confirmatory factor analysis. Conclusions. Participation in decision making, when modeled as a systems-level property of organization, has multiple dimensions and is more complex than is being traditionally measured. Managers can use this model to form decision teams that maximize the depth and breadth of expertise needed and to foster connection among them. PMID:24349771
Anderson, Ruth A; Plowman, Donde; Corazzini, Kirsten; Hsieh, Pi-Ching; Su, Hui Fang; Landerman, Lawrence R; McDaniel, Reuben R
2013-01-01
Validity and Reliability of the 8-Item Work Limitations Questionnaire.
Walker, Timothy J; Tullar, Jessica M; Diamond, Pamela M; Kohl, Harold W; Amick, Benjamin C
2017-12-01
Purpose To evaluate the factorial validity, scale reliability, test-retest reliability, convergent validity, and discriminant validity of the 8-item Work Limitations Questionnaire (WLQ) among employees from a public university system. Methods A secondary analysis using de-identified data from employees who completed an annual Health Assessment between the years 2009-2015 tested the research aims. Confirmatory factor analysis (CFA) (n = 10,165) tested the latent structure of the 8-item WLQ. Scale reliability was determined using a CFA-based approach, while test-retest reliability was determined using the intraclass correlation coefficient. Convergent/discriminant validity was tested by evaluating relations between the 8-item WLQ and health/performance variables for convergent validity (health-related work performance, number of chronic conditions, and general health) and demographic variables for discriminant validity (gender and institution type). Results A 1-factor model with three correlated residuals demonstrated excellent model fit (CFI = 0.99, TLI = 0.99, RMSEA = 0.03, and SRMR = 0.01). The scale reliability was acceptable (0.69, 95% CI 0.68-0.70) and the test-retest reliability was very good (ICC = 0.78). Low-to-moderate associations were observed between the 8-item WLQ and the health/performance variables, while only weak associations were observed with the demographic variables. Conclusions The 8-item WLQ demonstrated sufficient reliability and validity among employees from a public university system. Results suggest the 8-item WLQ is a usable alternative for studies when the more comprehensive 25-item WLQ is not available.
Thin-film reliability and engineering overview
NASA Technical Reports Server (NTRS)
Ross, R. G., Jr.
1984-01-01
The reliability and engineering technology base required for thin-film solar energy conversion modules is discussed. The emphasis is on the integration of amorphous silicon cells into power modules. The effort is being coordinated with SERI's thin film cell research activities as part of DOE's Amorphous Silicon Program. Program concentration is on temperature-humidity reliability research, glass breaking strength research, point defect system analysis, hot spot heating assessment, and electrical measurements technology.
Thin-film reliability and engineering overview
NASA Astrophysics Data System (ADS)
Ross, R. G., Jr.
1984-10-01
Survey of aircraft electrical power systems
NASA Technical Reports Server (NTRS)
Lee, C. H.; Brandner, J. J.
1972-01-01
Areas investigated include: (1) load analysis; (2) power distribution, conversion techniques and generation; (3) design criteria and performance capabilities of hydraulic and pneumatic systems; (4) system control and protection methods; (5) component and heat transfer systems cooling; and (6) electrical system reliability.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Johnson, Sally C.
1995-01-01
This paper presents a step-by-step tutorial of the methods and the tools that were used for the reliability analysis of fault-tolerant systems. The approach used in this paper is the Markov (or semi-Markov) state-space method. The paper is intended for design engineers with a basic understanding of computer architecture and fault tolerance, but little knowledge of reliability modeling. The representation of architectural features in mathematical models is emphasized. This paper does not present details of the mathematical solution of complex reliability models. Instead, it describes the use of several recently developed computer programs SURE, ASSIST, STEM, and PAWS that automate the generation and the solution of these models.
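The state-space approach described in this tutorial lends itself to a compact numerical illustration. The sketch below is a hedged example with assumed rates; it does not reproduce SURE, ASSIST, STEM, or PAWS. It builds a small continuous-time Markov model of a reconfigurable duplex system and computes its probability of failure with a matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Minimal sketch of the Markov state-space method for a reconfigurable
# duplex system (illustrative rates, not values from the tutorial).
# States: 0 = duplex, 1 = fault active (recovering), 2 = simplex, 3 = failed.
lam = 1e-4     # fault arrival rate per hour (slow transition, assumed)
delta = 3.6e3  # reconfiguration rate per hour (fast transition, assumed)

Q = np.array([
    [-2 * lam,  2 * lam,         0.0,   0.0],  # either unit may fail
    [0.0,      -(delta + lam),   delta, lam],  # recover, or a 2nd fault kills
    [0.0,       0.0,            -lam,   lam],  # the lone simplex unit may fail
    [0.0,       0.0,             0.0,   0.0],  # absorbing death state
])

def failure_prob(t_hours):
    """Probability of having reached the death state after t hours."""
    p0 = np.array([1.0, 0.0, 0.0, 0.0])
    return float((p0 @ expm(Q * t_hours))[3])
```

Note the two time scales: fault arrivals (`lam`) are orders of magnitude slower than reconfiguration (`delta`), which is exactly the stiffness that motivates the specialized solution methods in tools like SURE.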
NASA Astrophysics Data System (ADS)
Karri, Naveen K.; Mo, Changki
2018-06-01
Structural reliability of thermoelectric generation (TEG) systems still remains an issue, especially for applications such as large-scale industrial or automobile exhaust heat recovery, in which TEG systems are subject to dynamic loads and thermal cycling. Traditional thermoelectric (TE) system design and optimization techniques, focused on performance alone, could result in designs that may fail during operation as the geometric requirements for optimal performance (especially the power) are often in conflict with the requirements for mechanical reliability. This study focused on reducing the thermomechanical stresses in a TEG system without compromising the optimized system performance. Finite element simulations were carried out to study the effect of TE element (leg) geometry such as leg length and cross-sectional shape under constrained material volume requirements. Results indicated that the element length has a major influence on the element stresses whereas regular cross-sectional shapes have minor influence. The impact of TE element stresses on the mechanical reliability is evaluated using brittle material failure theory based on Weibull analysis. An alternate couple configuration that relies on the industry practice of redundant element design is investigated. Results showed that the alternate configuration considerably reduced the TE element and metallization stresses, thereby enhancing the structural reliability, with little trade-off in the optimized performance. The proposed alternate configuration could serve as a potential design modification for improving the reliability of systems optimized for thermoelectric performance.
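The Weibull-based brittle-failure evaluation mentioned above can be sketched as a weakest-link calculation. The characteristic strength `sigma_0`, Weibull modulus `m`, and leg stresses below are illustrative assumptions, not values from the study:

```python
import numpy as np

# Sketch of a two-parameter Weibull brittle-failure check for TE legs
# (all parameters are illustrative assumptions).
def weibull_failure_prob(stress, sigma_0, m):
    """Failure probability of one brittle element at a given peak stress:
    Pf = 1 - exp(-(s / s0)^m)."""
    return 1.0 - np.exp(-(np.asarray(stress) / sigma_0) ** m)

def system_reliability(stresses, sigma_0, m):
    """Weakest-link (series) reliability of a set of TE legs:
    the couple survives only if every leg survives."""
    survival = np.exp(-(np.asarray(stresses) / sigma_0) ** m)
    return float(np.prod(survival))

# Reducing leg stresses (as the alternate couple configuration does)
# raises the weakest-link reliability of the whole assembly.
baseline = system_reliability([80.0, 85.0, 90.0], sigma_0=150.0, m=10.0)
reduced = system_reliability([60.0, 63.0, 66.0], sigma_0=150.0, m=10.0)
```

The high Weibull modulus makes reliability very sensitive to peak stress, which is why even modest stress reductions from the alternate configuration can improve structural reliability noticeably.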
TRIDENT 1 third stage motor separation system
NASA Technical Reports Server (NTRS)
Welch, B. H.; Richter, B. J.; Sue, P.
1977-01-01
The third stage engine separation system has shown through test and analysis that it can effectively and reliably perform its function. The weight of the hardware associated with this system is well within the targeted value.
NASA Astrophysics Data System (ADS)
Yu, Yuting; Cheng, Ming
2018-05-01
Aiming at the various configuration schemes of inertial measurement units (IMUs) in a strapdown inertial navigation system, a tetrahedron skew configuration and a coaxial orthogonal configuration, each built from nine low-cost IMUs, were selected. The performance index, reliability, and fault diagnosis capability of the navigation system were calculated and simulated. The analysis shows that the reliability and reconfiguration capability of the skew configuration are superior to those of the orthogonal configuration, while the performance index and fault diagnosis capability of the two systems are similar. The work in this paper provides a strong reference for configuration selection in engineering applications.
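Why a skewed multi-IMU cluster tends to be more reliable than independent orthogonal triads can be illustrated with a k-out-of-n model. The survival probabilities and voting requirements below are assumptions for illustration, not the paper's actual figures:

```python
from math import comb

# Hedged k-out-of-n reliability sketch (assumed model, not the paper's
# computation): a skewed cluster keeps a navigation solution as long as
# enough axes survive, in any combination.
def k_of_n_reliability(n, k, p):
    """Probability that at least k of n independent sensors survive,
    each with survival probability p."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

p = 0.95  # assumed per-sensor survival probability over the mission

# Assumption: the skew configuration needs any 4 of its 9 axes, while an
# orthogonal arrangement of three independent triads needs at least one
# complete (all-3-axes) triad.
r_skew = k_of_n_reliability(9, 4, p)
r_triads = 1 - (1 - p**3) ** 3
```

Because the skewed cluster can combine any surviving axes, its reliability stays high even after several failures, whereas a triad is lost as soon as any one of its three axes fails.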
Risk Based Reliability Centered Maintenance of DOD Fire Protection Systems
1999-01-01
Excerpts: 2.2.3 Failure Mode and Effect Analysis (FMEA); 2.2.4 Failure Mode Risk Characterization. Process steps: Step 2 - system functions and functional failures definition; Step 3 - failure mode and effect analysis (FMEA); Step 4 - failure mode risk characterization. The Interface Location column identifies the location where the FMEA of the fire protection system began or stopped.
NASA Technical Reports Server (NTRS)
Berg, Melanie; Label, Kenneth; Campola, Michael; Xapsos, Michael
2017-01-01
We propose a method for the application of single event upset (SEU) data towards the analysis of complex systems using transformed reliability models (from the time domain to the particle fluence domain) and space environment data.
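The time-domain-to-fluence-domain transformation can be illustrated with the simplest exponential reliability model. The cross section and flux values below are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of the time-to-fluence idea: with an SEU cross section
# (cm^2 per device) and an environment flux, the exponential reliability
# model R(t) = exp(-lambda * t) can be restated in terms of accumulated
# particle fluence. All numbers are illustrative assumptions.
sigma = 1e-10  # assumed SEU cross section, cm^2 per device
flux = 5.0     # assumed particle flux, particles / (cm^2 * s)

lam = sigma * flux                    # upset rate per second
mission_s = 5 * 365 * 24 * 3600.0     # assumed 5-year mission, in seconds
fluence = flux * mission_s            # accumulated fluence over the mission

r_time = np.exp(-lam * mission_s)     # time-domain reliability
r_fluence = np.exp(-sigma * fluence)  # the same quantity in fluence domain
```

The two expressions are the same model under a change of variable, which is what makes particle-fluence data directly usable in a reliability framework.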
Monte Carlo Approach for Reliability Estimations in Generalizability Studies.
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.
A Monte Carlo approach is proposed, using the Statistical Analysis System (SAS) programming language, for estimating reliability coefficients in generalizability theory studies. Test scores are generated by a probabilistic model that considers the probability for a person with a given ability score to answer an item with a given difficulty…
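A minimal Python analogue of such a Monte Carlo reliability estimation (the original work uses SAS, and all model parameters here are illustrative assumptions) generates probabilistic item responses from person abilities and item difficulties, then computes an internal-consistency coefficient:

```python
import numpy as np

# Monte Carlo sketch of a reliability estimate from simulated test scores
# (illustrative parameters; not the cited study's generating model).
rng = np.random.default_rng(42)

n_persons, n_items = 500, 20
ability = rng.normal(0.0, 1.0, size=(n_persons, 1))    # person effects
difficulty = rng.normal(0.0, 0.5, size=(1, n_items))   # item effects

# Rasch-like response model: P(correct) = logistic(ability - difficulty).
p_correct = 1.0 / (1.0 + np.exp(-(ability - difficulty)))
scores = (rng.random((n_persons, n_items)) < p_correct).astype(float)

# Cronbach's alpha as a simple internal-consistency reliability coefficient:
# alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
k = n_items
item_var = scores.var(axis=0, ddof=1).sum()
total_var = scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1.0 - item_var / total_var)
```

Repeating this generation many times and summarizing the distribution of the coefficient is the essence of the Monte Carlo approach to studying reliability estimators.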
Probabilistic Structural Analysis Methods (PSAM) for Select Space Propulsion System Components
NASA Technical Reports Server (NTRS)
1999-01-01
Probabilistic Structural Analysis Methods (PSAM) are described for the probabilistic structural analysis of engine components for current and future space propulsion systems. Components for these systems are subjected to stochastic thermomechanical launch loads. Uncertainties or randomness also occur in material properties, structural geometry, and boundary conditions. Material property stochasticity, such as in modulus of elasticity or yield strength, exists in every structure and is a consequence of variations in material composition and manufacturing processes. Procedures are outlined for computing the probabilistic structural response or reliability of the structural components. The response variables include static or dynamic deflections, strains, and stresses at one or several locations, natural frequencies, fatigue or creep life, etc. Sample cases illustrate how the PSAM methods and codes simulate input uncertainties and compute probabilistic response or reliability using a finite element model with probabilistic methods.
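The core stress-versus-strength idea behind such probabilistic analysis can be sketched with a few lines of Monte Carlo sampling. The normal distributions below are illustrative assumptions, not actual engine-component data, and a closed-form result is included as a sanity check:

```python
import numpy as np
from math import erf, sqrt

# Monte Carlo sketch of stress-versus-strength failure probability
# (illustrative distributions, not engine data).
rng = np.random.default_rng(7)
n = 200_000

yield_strength = rng.normal(900.0, 60.0, n)  # MPa, material-property scatter
applied_stress = rng.normal(650.0, 80.0, n)  # MPa, stochastic launch loads

# A sample fails when the applied stress exceeds the yield strength.
p_fail = float(np.mean(applied_stress > yield_strength))

# Closed-form check for two independent normals:
# Pf = Phi((mu_S - mu_R) / sqrt(sig_R^2 + sig_S^2))
beta = (650.0 - 900.0) / sqrt(60.0**2 + 80.0**2)
p_exact = 0.5 * (1.0 + erf(beta / sqrt(2.0)))
```

Real PSAM computations replace the closed-form demand distribution with a finite element response, but the sampling logic is the same.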
Parallelizing Timed Petri Net simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1993-01-01
The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of automatically parallelizing TPN's for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis were two-fold: we showed that Monte Carlo simulation with importance sampling offers promise for joint analysis within a single tool, and we developed methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast. However, much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olalla, Carlos; Maksimovic, Dragan; Deline, Chris
Here, this paper quantifies the impact of distributed power electronics in photovoltaic (PV) systems in terms of end-of-life energy-capture performance and reliability. The analysis is based on simulations of PV installations over system lifetime at various degradation rates. It is shown how module-level or submodule-level power converters can mitigate variations in cell degradation over time, effectively increasing the system lifespan by 5-10 years compared with the nominal 25-year lifetime. An important aspect typically overlooked when characterizing such improvements is the reliability of distributed power electronics, as power converter failures may not only diminish energy yield improvements but also adversely affect the overall system operation. Failure models are developed, and power electronics reliability is taken into account in this work, in order to provide a more comprehensive view of the opportunities and limitations offered by distributed power electronics in PV systems. Lastly, it is shown how a differential power-processing approach achieves the best mismatch mitigation performance and the least susceptibility to converter faults.
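The mismatch-mitigation effect described above can be sketched with a toy degradation simulation. The module count, rated power, and degradation rates below are illustrative assumptions, and the series-string model is a crude approximation: without distributed electronics the string is dragged toward its worst module, while per-module converters harvest each module independently.

```python
import numpy as np

# Toy sketch of degradation mismatch in one series string of PV modules
# (all parameters are illustrative assumptions).
rng = np.random.default_rng(1)
n_modules, p_rated = 20, 300.0          # one string; rated watts per module
years = np.arange(0, 41)

# Each module degrades at its own random rate (assumed mean 0.8 %/yr).
rates = rng.normal(0.008, 0.003, n_modules).clip(min=0.0)
power = p_rated * (1.0 - rates[None, :]) ** years[:, None]

# Crude central-inverter approximation: the current-limited series string
# delivers roughly n times its worst module; per-module converters
# (distributed power electronics) harvest every module independently.
string_power = n_modules * power.min(axis=1)
dpe_power = power.sum(axis=1)
```

The gap between the two curves widens with age, which is the basic mechanism behind the extended effective lifespan reported for distributed electronics; a full analysis must then subtract the converters' own failure and conversion losses.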
Olalla, Carlos; Maksimovic, Dragan; Deline, Chris; ...
2017-04-26
Model-centric distribution automation: Capacity, reliability, and efficiency
Onen, Ahmet; Jung, Jaesung; Dilek, Murat; ...
2016-02-26
A series of analyses along with field validations that evaluate efficiency, reliability, and capacity improvements of model-centric distribution automation are presented. With model-centric distribution automation, the same model is used from design to real-time control calculations. A 14-feeder system with 7 substations is considered. The analyses involve hourly time-varying loads and annual load growth factors. Phase balancing and capacitor redesign modifications are used to better prepare the system for distribution automation, where the designs are performed considering time-varying loads. Coordinated control of load tap changing transformers, line regulators, and switched capacitor banks is considered. In evaluating distribution automation versus traditional system design and operation, quasi-steady-state power flow analysis is used. In evaluating distribution automation performance for substation transformer failures, reconfiguration for restoration analysis is performed. In evaluating distribution automation for storm conditions, Monte Carlo simulations coupled with reconfiguration for restoration calculations are used. As a result, the evaluations demonstrate that model-centric distribution automation has positive effects on system efficiency, capacity, and reliability.
A guide to onboard checkout. Volume 4: Propulsion
NASA Technical Reports Server (NTRS)
1971-01-01
The propulsion system for a space station is considered with respect to onboard checkout requirements. Failure analysis, reliability, and maintenance features are presented. Computer analysis techniques are also discussed.
Systems Design Factors: The Essential Ingredients of System Design, Version 0.4
1994-03-18
Excerpts: reliability is defined as the probability that the system was performing correctly at time t (the reliability function); the unreliability is often referred to as the probability of failure. SOURCE: Barry W. Johnson, Design and Analysis of Fault Tolerant Digital Systems, Addison-Wesley Publishing Company, 1985, pp. 4-5.
The implementation and use of Ada on distributed systems with high reliability requirements
NASA Technical Reports Server (NTRS)
Knight, J. C.
1988-01-01
The use and implementation of Ada were investigated in distributed environments in which reliability is the primary concern. In particular, the focus was on the possibility that a distributed system may be programmed entirely in Ada so that the individual tasks of the system are unconcerned with the processors on which they execute, and that failures may occur in the software or underlying hardware. A secondary interest is in the performance of Ada systems and how that performance can be gauged reliably. Primary activities included: analysis of the original approach to recovery in distributed Ada programs using the Advanced Transport Operating System (ATOPS) example; review and assessment of the original approach, which was found to be capable of improvement; development of a refined approach to recovery that was applied to the ATOPS example; and design and development of a performance assessment scheme for Ada programs based on a flexible user-driven benchmarking system.
NASA Astrophysics Data System (ADS)
Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan
2010-02-01
Near-infrared spectroscopy is a promising, rapidly developing, reliable and noninvasive technique, used extensively in biomedicine and in the pharmaceutical industry. With the introduction of acousto-optic tunable filters (AOTF) and highly sensitive InGaAs focal plane sensor arrays, real-time high resolution hyper-spectral imaging has become feasible for a number of new biomedical in vivo applications. However, due to the specificity of the AOTF technology and the lack of spectral calibration standardization, maintaining long-term stability and compatibility of the acquired hyper-spectral images across different systems is still a challenging problem. Efficiently solving both is essential, as the majority of methods for the analysis of hyper-spectral images rely on a priori knowledge extracted from large spectral databases, which serve as the basis for reliable qualitative or quantitative analysis of various biological samples. In this study, we propose and evaluate a fast and reliable spectral calibration of hyper-spectral imaging systems in the short wavelength infrared spectral region. The proposed spectral calibration method is based on light sources or materials exhibiting distinct spectral features, which enable robust non-rigid registration of the acquired spectra. The calibration accounts for all of the components of a typical hyper-spectral imaging system, such as the AOTF, light source, lens and optical fibers. The obtained results indicated that practical, fast and reliable spectral calibration of hyper-spectral imaging systems is possible, thereby assuring long-term stability and inter-system compatibility of the acquired hyper-spectral images.
NDE reliability and probability of detection (POD) evolution and paradigm shift
NASA Astrophysics Data System (ADS)
Singh, Surendra
2014-02-01
The subject of NDE reliability and POD has gone through multiple phases since its humble beginning in the late 1960s. This was followed by several programs, including the important one nicknamed "Have Cracks - Will Travel," or in short "Have Cracks," conducted by Lockheed Georgia Company for the US Air Force during 1974-1978. This and other studies ultimately led to a series of developments in the field of reliability and POD, from the introduction of fracture mechanics and Damage Tolerant Design (DTD), to the statistical framework of Berens and Hovey in 1981 for POD estimation, to MIL-HDBK-1823 (1999) and 1823A (2009). During the last decade, various groups and researchers have further studied reliability and POD using Model Assisted POD (MAPOD), Simulation Assisted POD (SAPOD), and Bayesian statistics. Each of these developments had one objective, i.e., improving the accuracy of life prediction in components, which to a large extent depends on the reliability and capability of NDE methods. It is therefore essential to have reliable detection and sizing of large flaws in components. Currently, POD is used for studying the reliability and capability of NDE methods, though POD data offers no absolute truth regarding NDE reliability, i.e., system capability, the effects of flaw morphology, or the quantification of human factors. Furthermore, reliability and POD are often reported as if alike in meaning, but POD is not NDE reliability. POD is a subset of reliability, which consists of six phases: 1) sample selection using DOE; 2) NDE equipment setup and calibration; 3) System Measurement Evaluation (SME), including Gage Repeatability & Reproducibility (Gage R&R) and Analysis Of Variance (ANOVA); 4) NDE system capability and electronic and physical saturation; 5) acquiring and fitting data to a model, and data analysis; and 6) POD estimation.
This paper provides an overview of the major POD milestones of the last several decades and discusses the rationale for using Integrated Computational Materials Engineering (ICME), MAPOD, SAPOD, and Bayesian statistics to study controllable and non-controllable variables, including human factors, for estimating POD. Another objective is to list the gaps between "hoped for" performance and performance validated on fielded or failed hardware.
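As a concrete illustration of the final two phases listed above (fitting data to a model, then estimating POD), the sketch below fits a hit/miss log-logistic POD model by maximum likelihood to synthetic data. All parameters are assumptions; this is not a MIL-HDBK-1823A implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch of hit/miss POD estimation by maximum likelihood
# (synthetic data; not any specific qualification dataset).
rng = np.random.default_rng(0)

sizes = rng.uniform(0.2, 3.0, 400)       # flaw sizes, mm (synthetic)
true_b0, true_b1 = 0.5, 3.0              # assumed log-logistic parameters
p_true = 1.0 / (1.0 + np.exp(-(true_b0 + true_b1 * np.log(sizes))))
hits = rng.random(400) < p_true          # simulated hit/miss outcomes

def neg_log_lik(beta):
    """Negative log-likelihood of a logistic POD curve in log flaw size."""
    b0, b1 = beta
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * np.log(sizes))))
    eps = 1e-12  # guard against log(0)
    return -np.sum(hits * np.log(p + eps) + (~hits) * np.log(1.0 - p + eps))

fit = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
b0_hat, b1_hat = fit.x
# a90: flaw size detected with 90 % probability under the fitted model.
a90 = float(np.exp((np.log(9.0) - b0_hat) / b1_hat))
```

Quantities such as a90 (and, with confidence bounds, a90/95) are the usual end products of this fitting step in POD studies.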
Control optimization, stabilization and computer algorithms for aircraft applications
NASA Technical Reports Server (NTRS)
1975-01-01
Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.
NASA Technical Reports Server (NTRS)
Berg, Melanie; LaBel, Kenneth; Campola, Michael; Xapsos, Michael
2017-01-01
We are investigating the application of classical reliability performance metrics combined with standard single event upset (SEU) analysis data. We expect to relate SEU behavior to system performance requirements. Our proposed methodology will provide better prediction of SEU responses in harsh radiation environments, with confidence metrics. Keywords: single event upset (SEU), single event effect (SEE), field programmable gate array devices (FPGAs)
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. 
The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR14921) is written in ANSI C-language and PASCAL. An ANSI compliant C compiler is required in order to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. 
DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. TEMPLATE is a registered trademark of Template Graphics Software, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
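The sensitivity-analysis feature described above (treating a model parameter as a variable over a range of values) can be mimicked for a toy three-state model. This sketch solves the model numerically with a matrix exponential; SURE itself uses algebraic bounding theorems, and all rates here are assumed:

```python
import numpy as np
from scipy.linalg import expm

# Toy working / fault-active / failed model for a parameter sweep
# (illustrative rates, not a SURE model).
lam = 1e-4        # fault arrival rate per hour (slow, assumed)
t_mission = 10.0  # mission time in hours (assumed)

def death_state_prob(delta):
    """Death-state probability when recovery proceeds at rate delta."""
    Q = np.array([
        [-lam,   lam,            0.0],  # working -> fault active
        [delta, -(delta + lam),  lam],  # recover, or a second fault kills
        [0.0,    0.0,            0.0],  # absorbing death state
    ])
    p0 = np.array([1.0, 0.0, 0.0])
    return float((p0 @ expm(Q * t_mission))[2])

# Sweep the recovery rate over several orders of magnitude, as SURE's
# variable-parameter feature would do automatically.
recovery_rates = [1.0, 10.0, 100.0]
death_probs = [death_state_prob(d) for d in recovery_rates]
```

Faster recovery shrinks the window in which a second, system-killing fault can arrive, so the death-state probability falls roughly in proportion to the recovery rate.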
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. 
The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR14921) is written in ANSI C-language and PASCAL. An ANSI compliant C compiler is required in order to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. 
DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
The SURE Reliability Analysis Program
NASA Technical Reports Server (NTRS)
Butler, R. W.
1986-01-01
The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
Bridge reliability assessment based on the PDF of long-term monitored extreme strains
NASA Astrophysics Data System (ADS)
Jiao, Meiju; Sun, Limin
2011-04-01
Structural health monitoring (SHM) systems can provide valuable information for the evaluation of bridge performance. With the development and implementation of SHM technology in recent years, the mining and use of monitoring data have received increasing attention in civil engineering. Based on the principles of probability and statistics, a reliability approach provides a rational basis for analyzing the randomness in loads and their effects on structures. This paper presents a novel approach that combines SHM with reliability methods to evaluate the reliability of a cable-stayed bridge instrumented with an SHM system. In this study, the reliability of the steel girder of the cable-stayed bridge was expressed directly as a failure probability rather than as the commonly used reliability index. Under the assumption that the probability distribution of the resistance is independent of the structural responses, a formulation of the failure probability was derived. Then, as a main factor in the formulation, the probability density function (PDF) of the strain at sensor locations was evaluated and verified from the monitoring data. The Donghai Bridge was then taken as an example application of the proposed approach. In the case study, four years of monitoring data collected since the SHM system began operation were processed, and the reliability assessment results were discussed. Finally, the sensitivity and accuracy of the novel approach were discussed in comparison with the first-order reliability method (FORM).
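The failure-probability formulation described above, P_f = ∫ F_R(s) f_S(s) ds for resistance R independent of the load effect S, can be evaluated numerically. The normal distributions and strain numbers below are hypothetical placeholders, not the paper's monitored PDFs:

```python
import numpy as np
from math import erf, sqrt, pi

# Hypothetical values in microstrain, purely for illustration:
mu_R, sigma_R = 800.0, 60.0    # resistance (capacity)
mu_S, sigma_S = 450.0, 40.0    # monitored extreme strain (load effect)

def F_R(s):
    """CDF of the resistance: probability that capacity is below s."""
    return 0.5 * (1.0 + erf((s - mu_R) / (sigma_R * sqrt(2.0))))

def f_S(s):
    """PDF of the extreme strain (here fitted as a normal for simplicity)."""
    return np.exp(-0.5 * ((s - mu_S) / sigma_S) ** 2) / (sigma_S * sqrt(2.0 * pi))

# Trapezoidal quadrature of P_f = integral of F_R(s) * f_S(s) ds
s = np.linspace(mu_S - 8.0 * sigma_S, mu_S + 8.0 * sigma_S, 20001)
vals = np.array([F_R(v) * f_S(v) for v in s])
Pf = float(np.sum((vals[:-1] + vals[1:]) * np.diff(s)) / 2.0)
```

For two independent normals this integral has the closed form Φ(-(mu_R - mu_S)/sqrt(sigma_R² + sigma_S²)), which makes the quadrature easy to check.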
McGinley, Jennifer L; Goldie, Patricia A; Greenwood, Kenneth M; Olney, Sandra J
2003-02-01
Physical therapists routinely observe gait in clinical practice. The purpose of this study was to determine the accuracy and reliability of observational assessments of push-off in gait after stroke. Eighteen physical therapists and 11 subjects with hemiplegia following a stroke participated in the study. Measurements of ankle power generation were obtained from subjects following stroke using a gait analysis system. Concurrent videotaped gait performances were observed by the physical therapists on 2 occasions. Ankle power generation at push-off was scored as either normal or abnormal using two 11-point rating scales. These observational ratings were correlated with the measurements of peak ankle power generation. A high correlation was obtained between the observational ratings and the measurements of ankle power generation (mean Pearson r=.84). Interobserver reliability was moderately high (mean intraclass correlation coefficient [ICC (2,1)]=.76). Intraobserver reliability also was high, with a mean ICC (2,1) of .89 obtained. Physical therapists were able to make accurate and reliable judgments of push-off in videotaped gait of subjects following stroke using observational assessment. Further research is indicated to explore the accuracy and reliability of data obtained with observational gait analysis as it occurs in clinical practice.
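Intraclass correlation coefficients such as the ICC(2,1) values reported above can be computed from a subjects-by-raters matrix of scores. This is a generic sketch of the Shrout-Fleiss two-way random-effects formula, with invented ratings rather than the study's data:

```python
import numpy as np

def icc_2_1(X):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    X is an (n_subjects, k_raters) matrix of scores."""
    n, k = X.shape
    grand = X.mean()
    MSR = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects (rows)
    MSC = n * ((X.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters (columns)
    SSE = ((X - X.mean(axis=1, keepdims=True)
              - X.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    MSE = SSE / ((n - 1) * (k - 1))                             # residual
    return (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)

# Made-up ratings: 5 subjects scored by 3 raters on an 11-point scale.
scores = np.array([[7.0, 8.0, 7.0],
                   [5.0, 5.0, 6.0],
                   [9.0, 9.0, 9.0],
                   [3.0, 4.0, 3.0],
                   [6.0, 7.0, 6.0]])
icc = icc_2_1(scores)
```

Perfectly agreeing raters give ICC = 1, and a constant offset between raters lowers the coefficient, since ICC(2,1) measures absolute agreement rather than mere consistency.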
Rosen, Jules; Mulsant, Benoit H; Marino, Patricia; Groening, Christopher; Young, Robert C; Fox, Debra
2008-10-30
Despite the importance of establishing shared scoring conventions and assessing interrater reliability in clinical trials in psychiatry, these elements are often overlooked. Obstacles to rater training and reliability testing include logistic difficulties in providing live training sessions, or mailing videotapes of patients to multiple sites and collecting the data for analysis. To address some of these obstacles, a web-based interactive video system was developed. It uses actors of diverse ages, gender and race to train raters how to score the Hamilton Depression Rating Scale and to assess interrater reliability. This system was tested with a group of experienced and novice raters within a single site. It was subsequently used to train raters of a federally funded multi-center clinical trial on scoring conventions and to test their interrater reliability. The advantages and limitations of using interactive video technology to improve the quality of clinical trials are discussed.
The Verification-based Analysis of Reliable Multicast Protocol
NASA Technical Reports Server (NTRS)
Wu, Yunqing
1996-01-01
Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this paper, we develop formal models for RMP using existing automatic verification systems, and perform verification-based analysis on the formal RMP specifications. We also use the formal models of the RMP specifications to generate a test suite for conformance testing of the RMP implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress between the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding for the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.
Men'shikov, V V
2012-12-01
The article deals with the factors impacting the reliability of clinical laboratory information. The differences in quality among laboratory analysis tools produced by various manufacturers are discussed. These characteristics are the causes of discrepancies among the results of laboratory analyses of the same analyte. The role of the reference system in supporting the comparability of laboratory analysis results is demonstrated. A draft national standard is presented to regulate the requirements for standards and calibrators used in the analysis of qualitative and non-metrical characteristics of components of biomaterials.
Infusing Reliability Techniques into Software Safety Analysis
NASA Technical Reports Server (NTRS)
Shi, Ying
2015-01-01
Software safety analysis for a large software intensive system is always a challenge. Software safety practitioners need to ensure that software related hazards are completely identified, controlled, and tracked. This paper discusses in detail how to incorporate the traditional reliability techniques into the entire software safety analysis process. In addition, this paper addresses how information can be effectively shared between the various practitioners involved in the software safety analyses. The author has successfully applied the approach to several aerospace applications. Examples are provided to illustrate the key steps of the proposed approach.
Baker, Nancy A; Cook, James R; Redfern, Mark S
2009-01-01
This paper describes the inter-rater and intra-rater reliability, and the concurrent validity of an observational instrument, the Keyboard Personal Computer Style instrument (K-PeCS), which assesses stereotypical postures and movements associated with computer keyboard use. Three trained raters independently rated the video clips of 45 computer keyboard users to ascertain inter-rater reliability, and then re-rated a sub-sample of 15 video clips to ascertain intra-rater reliability. Concurrent validity was assessed by comparing the ratings obtained using the K-PeCS to scores developed from a 3D motion analysis system. The overall K-PeCS had excellent reliability [inter-rater: intra-class correlation coefficients (ICC)=.90; intra-rater: ICC=.92]. Most individual items on the K-PeCS had from good to excellent reliability, although six items fell below ICC=.75. Those K-PeCS items that were assessed for concurrent validity compared favorably to the motion analysis data for all but two items. These results suggest that most items on the K-PeCS can be used to reliably document computer keyboarding style.
NASA Astrophysics Data System (ADS)
Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie
2017-09-01
Automotive brake systems are always subjected to various types of uncertainties and two types of random-fuzzy uncertainties may exist in the brakes. In this paper, a unified approach is proposed for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties. In the proposed approach, two uncertainty analysis models with mixed variables are introduced to model the random-fuzzy uncertainties. The first one is the random and fuzzy model, in which random variables and fuzzy variables exist simultaneously and independently. The second one is the fuzzy random model, in which uncertain parameters are all treated as random variables while their distribution parameters are expressed as fuzzy numbers. Firstly, the fuzziness is discretized by using α-cut technique and the two uncertainty analysis models are simplified into random-interval models. Afterwards, by temporarily neglecting interval uncertainties, the random-interval models are degraded into random models, in which the expectations, variances, reliability indexes and reliability probabilities of system stability functions are calculated. And then, by reconsidering the interval uncertainties, the bounds of the expectations, variances, reliability indexes and reliability probabilities are computed based on Taylor series expansion. Finally, by recomposing the analysis results at each α-cut level, the fuzzy reliability indexes and probabilities can be obtained, by which the brake squeal instability can be evaluated. The proposed approach gives a general framework to deal with both types of random-fuzzy uncertainties that may exist in the brakes and its effectiveness is demonstrated by numerical examples. It will be a valuable supplement to the systematic study of brake squeal considering uncertainty.
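The α-cut step of the approach above can be illustrated with a triangular fuzzy parameter propagated through a monotone reliability index. All numbers here are invented for illustration; the actual brake model and system stability functions are far richer:

```python
def alpha_cut(tri, alpha):
    """alpha-cut of a triangular fuzzy number tri = (a, m, b): an interval."""
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

# Hypothetical fuzzy distribution parameter (illustrative only): the mean of a
# normally distributed quantity is fuzzy while its standard deviation is crisp.
mu_f = (0.30, 0.35, 0.40)   # triangular fuzzy mean
sigma_f = 0.02              # crisp standard deviation

def reliability_index(mu, sigma, limit=0.45):
    """Crisp reliability index beta for stability margin g = limit - x, x ~ N(mu, sigma)."""
    return (limit - mu) / sigma

# Interval of beta at each alpha level; beta is monotone decreasing in mu,
# so the interval endpoints swap when mapped through reliability_index.
betas = {}
for alpha in (0.0, 0.5, 1.0):
    lo, hi = alpha_cut(mu_f, alpha)
    betas[alpha] = (reliability_index(hi, sigma_f), reliability_index(lo, sigma_f))
```

At α = 1 the interval collapses to the crisp value, and decreasing α widens the bounds, which is exactly the nested-interval structure recomposed into a fuzzy reliability index in the paper's framework.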
Probabilistic fatigue methodology for six nines reliability
NASA Technical Reports Server (NTRS)
Everett, R. A., Jr.; Bartlett, F. D., Jr.; Elber, Wolf
1990-01-01
Fleet readiness and flight safety strongly depend on the degree of reliability that can be designed into rotorcraft flight critical components. The current U.S. Army fatigue life specification for new rotorcraft is the so-called six nines reliability, or a probability of failure of one in a million. The progress of a round robin which was established by the American Helicopter Society (AHS) Subcommittee for Fatigue and Damage Tolerance is reviewed to investigate reliability-based fatigue methodology. The participants in this cooperative effort are from the U.S. Army Aviation Systems Command (AVSCOM) and the rotorcraft industry. One phase of the joint activity examined fatigue reliability under uniquely defined conditions for which only one answer was correct. The other phases were set up to learn how the different industry methods of defining fatigue strength affected the mean fatigue life and reliability calculations. Hence, constant amplitude and spectrum fatigue test data were provided so that each participant could perform their standard fatigue life analysis. As a result of this round robin, the probabilistic logic which includes both fatigue strength and spectrum loading variability in developing a consistent reliability analysis was established. In this first study, the reliability analysis was limited to the linear cumulative damage approach. However, it is expected that superior fatigue life prediction methods will ultimately be developed through this open AHS forum. To that end, these preliminary results were useful in identifying some topics for additional study.
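The linear cumulative damage approach to which the round-robin reliability analysis was limited is Miner's rule. The S-N curve constants and annual load spectrum below are hypothetical, not data from the AHS round robin:

```python
# Hypothetical S-N curve N(S) = C * S**(-m): cycles to failure at stress amplitude S.
C, m = 1e12, 3.0

# Hypothetical annual load spectrum: (stress amplitude, cycles per year).
spectrum = [(300.0, 2.0e4),
            (200.0, 1.0e5),
            (120.0, 5.0e5)]

def miner_damage(spectrum):
    """Miner's rule: damage D = sum(n_i / N_i); failure is predicted when D >= 1."""
    return sum(n / (C * S ** (-m)) for S, n in spectrum)

damage_per_year = miner_damage(spectrum)
mean_life_years = 1.0 / damage_per_year
```

A probabilistic version of this calculation treats C, m, and the spectrum as random variables, which is how fatigue strength and loading variability enter a six-nines reliability statement.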
Sociotechnical attributes of safe and unsafe work systems.
Kleiner, Brian M; Hettinger, Lawrence J; DeJoy, David M; Huang, Yuang-Hsiang; Love, Peter E D
2015-01-01
Theoretical and practical approaches to safety based on sociotechnical systems principles place heavy emphasis on the intersections between social-organisational and technical-work process factors. Within this perspective, work system design emphasises factors such as the joint optimisation of social and technical processes, a focus on reliable human-system performance and safety metrics as design and analysis criteria, the maintenance of a realistic and consistent set of safety objectives and policies, and regular access to the expertise and input of workers. We discuss three current approaches to the analysis and design of complex sociotechnical systems: human-systems integration, macroergonomics and safety climate. Each approach emphasises key sociotechnical systems themes, and each prescribes a more holistic perspective on work systems than do traditional theories and methods. We contrast these perspectives with historical precedents such as system safety and traditional human factors and ergonomics, and describe potential future directions for their application in research and practice. The identification of factors that can reliably distinguish between safe and unsafe work systems is an important concern for ergonomists and other safety professionals. This paper presents a variety of sociotechnical systems perspectives on intersections between social-organisational and technical-work process factors as they impact work system analysis, design and operation.
NASA Astrophysics Data System (ADS)
Hanna, Ryan
Distributed energy resources (DERs), and increasingly microgrids, are becoming an integral part of modern distribution systems. Interest in microgrids, which are insular and autonomous power networks embedded within the bulk grid, stems largely from the vast array of flexibilities and benefits they can offer stakeholders. Managed well, they can improve grid reliability and resiliency, increase end-use energy efficiency by coupling electric and thermal loads, reduce transmission losses by generating power locally, and may reduce system-wide emissions, among other benefits. Whether these public benefits are realized, however, depends on whether private firms see a "business case", or private value, in investing. To this end, firms need models that evaluate the costs, benefits, risks, and assumptions that underlie decisions to invest. The objectives of this dissertation are to assess the business case for microgrids that provide what industry analysts forecast as the two primary drivers of market growth: providing energy services (similar to an electric utility) as well as reliability service to customers within. Prototypical first adopters are modeled, using an existing model to analyze energy services and a new model that couples that analysis with one of reliability, to explore interactions between technology choice, reliability, costs, and benefits. The new model has a bi-level hierarchy; it uses heuristic optimization to select and size DERs and analytical optimization to schedule them. It further embeds Monte Carlo simulation to evaluate reliability as well as regression models for customer damage functions to monetize reliability. It provides least-cost microgrid configurations for utility customers who seek to reduce interruption and operating costs. Lastly, the model is used to explore the impact of such adoption on system-wide greenhouse gas emissions in California.
Results indicate that there are, at present, co-benefits for emissions reductions when customers adopt and operate microgrids for private benefit, though future analysis is needed as the bulk grid continues to transition toward a less carbon intensive system.
Analysis of Shuttle Orbiter Reliability and Maintainability Data for Conceptual Studies
NASA Technical Reports Server (NTRS)
Morris, W. D.; White, N. H.; Ebeling, C. E.
1996-01-01
In order to provide a basis for estimating the expected support required of new systems during their conceptual design phase, Langley Research Center has recently collected Shuttle Orbiter reliability and maintainability data from the various data base sources at Kennedy Space Center. This information was analyzed to provide benchmarks, trends, and distributions to aid in the analysis of new designs. This paper presents a summation of those results and an initial interpretation of the findings.
NASA Technical Reports Server (NTRS)
Wilson, Larry W.
1989-01-01
The long-term goal of this research is to identify or create a model for use in analyzing the reliability of flight control software. The immediate tasks addressed are the creation of data useful to the study of software reliability and the production of results pertinent to software reliability through the analysis of existing reliability models and data. The completed data creation portion of this research consists of a Generic Checkout System (GCS) design document created in cooperation with NASA and Research Triangle Institute (RTI) experimenters. This will lead to design and code reviews, with the resulting product being one of the versions used in the Terminal Descent Experiment being conducted by the Systems Validation Methods Branch (SVMB) of NASA/Langley. An appended paper details an investigation of the Jelinski-Moranda and Geometric models for software reliability. The models were given data from a process that they correctly simulate and asked to make predictions about the reliability of that process. It was found that either model will usually fail to make good predictions. These problems were attributed to randomness in the data, and replication of data was recommended.
Main propulsion system design recommendations for an advanced Orbit Transfer Vehicle
NASA Technical Reports Server (NTRS)
Redd, L.
1985-01-01
Various main propulsion system configurations of an advanced OTV are evaluated with respect to the probability of nonindependent failures, i.e., engine failures that disable the entire main propulsion system. Analysis of the life-cycle cost (LCC) indicates that LCC is sensitive to the main propulsion system reliability, vehicle dry weight, and propellant cost; it is relatively insensitive to the number of missions per overhaul, failures per mission, and EVA and IVA cost. In conclusion, two or three engines are recommended because they offer the highest reliability, minimum life-cycle cost, and fail-operational/fail-safe capability.
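The fail-operational capability of a multi-engine main propulsion system can be framed as k-of-n reliability, under the simplifying assumption of independent, identical engines. The per-engine reliability below is a made-up illustration, not a figure from the study:

```python
from math import comb

def k_of_n(n, k, p):
    """Probability that at least k of n independent engines, each with
    reliability p, operate successfully (binomial tail sum)."""
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k, n + 1))

# A three-engine configuration that can complete the mission on two engines,
# with a hypothetical per-engine mission reliability of 0.99:
r = k_of_n(3, 2, 0.99)   # ~ 0.9997, versus 0.99 for a single engine
```

Note that this simple model ignores the nonindependent (common-cause) failures the abstract highlights; those are precisely what degrade the benefit of adding engines in practice.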
NASA Technical Reports Server (NTRS)
Todling, Ricardo; Diniz, F. L. R.; Takacs, L. L.; Suarez, M. J.
2018-01-01
Many hybrid data assimilation systems currently used for NWP employ some form of dual-analysis system approach. Typically a hybrid variational analysis is responsible for creating initial conditions for high-resolution forecasts, and an ensemble analysis system is responsible for creating sample perturbations used to form the flow-dependent part of the background error covariance required in the hybrid analysis component. In many of these, the two analysis components employ different methodologies, e.g., variational and ensemble Kalman filter. In such cases, it is not uncommon to have observations treated rather differently between the two analysis components; recentering of the ensemble analysis around the hybrid analysis is used to compensate for such differences. Furthermore, in many cases, the hybrid variational high-resolution system implements some type of four-dimensional approach, whereas the underlying ensemble system relies on a three-dimensional approach, which again introduces discrepancies in the overall system. Connected to these is the expectation that one can reliably estimate observation impact on forecasts issued from hybrid analyses by using an ensemble approach based on the underlying ensemble strategy of dual-analysis systems. Just the realization that the ensemble analysis makes substantially different use of observations as compared to its hybrid counterpart should serve as enough evidence of the implausibility of such an expectation. This presentation assembles numerous pieces of anecdotal evidence to illustrate the fact that hybrid dual-analysis systems must, at the very minimum, strive for consistent use of the observations in both analysis sub-components. Simpler than that, this work suggests that hybrid systems can be reliably constructed without the need to employ a dual-analysis approach. In practice, the idea of relying on a single analysis system is appealing from a cost-maintenance perspective.
More generally, single-analysis systems avoid contradictions such as having one sub-component generate performance diagnostics for another, possibly not fully consistent, component.
Measurement of stain on extracted teeth using spectrophotometry and digital image analysis.
Lath, D L; Smith, R N; Guan, Y H; Karmo, M; Brook, A H
2007-08-01
The aim of this study was to assess the reliability of, and to validate, a customized image analysis system designed for use within clinical trials of general dental hygiene and whitening products, for the measurement of stain levels on extracted teeth, and to compare it with reflectance spectrophotometry. Twenty non-carious extracted teeth were soaked in an artificial saliva, brushed for 1 min using an electric toothbrush and a standard toothpaste, bleached using a 5.3% hydrogen peroxide solution and cycled for 6 h daily through a tea solution. CIE L* values were obtained after each treatment step using the customized image analysis system and a reflectance spectrophotometer. A statistical analysis was carried out in SPSS. Fleiss' coefficient of reliability for intra-operator repeatability of the image analysis system and spectrophotometry was 0.996 and 0.946 respectively. CIE L* values were consistently higher using the image analysis system compared with spectrophotometry, and t-tests for each treatment step showed significant differences (P < 0.05) between the two methods. Limits of agreement between the methods were -27.95 to +2.07, with the 95% confidence interval of the difference calculated as -14.26 to -11.84. The combined results for all treatment steps showed a significant difference between the methods for the CIE L* values (P < 0.05). The image analysis system has proven to be a reliable method for the assessment of changes in stain level on extracted teeth, and the method has been validated against reflectance spectrophotometry. This method may be used for pilot in vitro studies/trials of oral hygiene and whitening products, before expensive in vivo tests are carried out.
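The limits of agreement quoted above are the Bland-Altman statistic for comparing two measurement methods. This is a generic sketch with invented paired measurements, not the study's CIE L* data:

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement between paired measurements from
    two methods: returns (lower limit, mean bias, upper limit)."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)          # sample standard deviation of the differences
    return bias - 1.96 * sd, bias, bias + 1.96 * sd

# Invented paired readings from two hypothetical instruments:
method_a = [1.0, 2.0, 3.0, 4.0, 5.0]
method_b = [1.1, 2.0, 2.9, 4.2, 5.1]
lower, bias, upper = limits_of_agreement(method_a, method_b)
```

A consistent sign on the bias, as in the study's finding that image-analysis L* values ran higher than spectrophotometry, shows up directly as a nonzero mean difference.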
Assessing performance of an Electronic Health Record (EHR) using Cognitive Task Analysis.
Saitwal, Himali; Feng, Xuan; Walji, Muhammad; Patel, Vimla; Zhang, Jiajie
2010-07-01
Many Electronic Health Record (EHR) systems fail to provide user-friendly interfaces due to the lack of systematic consideration of human-centered computing issues. Such interfaces can be improved to provide easy to use, easy to learn, and error-resistant EHR systems to the users. To evaluate the usability of an EHR system and suggest areas of improvement in the user interface. The user interface of the AHLTA (Armed Forces Health Longitudinal Technology Application) was analyzed using the Cognitive Task Analysis (CTA) method called GOMS (Goals, Operators, Methods, and Selection rules) and an associated technique called KLM (Keystroke Level Model). The GOMS method was used to evaluate the AHLTA user interface by classifying each step of a given task into Mental (Internal) or Physical (External) operators. This analysis was performed by two analysts independently and the inter-rater reliability was computed to verify the reliability of the GOMS method. Further evaluation was performed using KLM to estimate the execution time required to perform the given task through application of its standard set of operators. The results are based on the analysis of 14 prototypical tasks performed by AHLTA users. The results show that on average a user needs to go through 106 steps to complete a task. To perform all 14 tasks, they would spend about 22 min (independent of system response time) for data entry, of which 11 min are spent on more effortful mental operators. The inter-rater reliability analysis performed for all 14 tasks was 0.8 (kappa), indicating good reliability of the method. This paper empirically reveals and identifies the following finding related to the performance of AHLTA: (1) large number of average total steps to complete common tasks, (2) high average execution time and (3) large percentage of mental operators. The user interface can be improved by reducing (a) the total number of steps and (b) the percentage of mental effort, required for the tasks. 
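A KLM estimate sums standard operator times over the steps of a task. The operator times below are the commonly cited Card, Moran, and Newell values; the operator sequence is a made-up illustration, not an actual AHLTA task:

```python
# Commonly cited KLM operator times in seconds:
# K = keystroke (average typist), P = point with mouse, H = home hands
# between devices, M = mental preparation, B = button press.
KLM_TIMES = {"K": 0.28, "P": 1.1, "H": 0.4, "M": 1.35, "B": 0.1}

def klm_execution_time(ops):
    """Estimate task execution time from a sequence of KLM operator codes."""
    return sum(KLM_TIMES[o] for o in ops)

# Hypothetical micro-task: point to a field, home to the keyboard,
# mentally prepare, then type five characters.
t = klm_execution_time("PHM" + "K" * 5)
```

Counting the share of M operators in such sequences is how the analysis above attributes half of the total execution time to mental effort.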
A comparative reliability analysis of free-piston Stirling machines
NASA Astrophysics Data System (ADS)
Schreiber, Jeffrey G.
2001-02-01
A free-piston Stirling power convertor is being developed for use in an advanced radioisotope power system to provide electric power for NASA deep space missions. These missions are typically long lived, lasting for up to 14 years. The Department of Energy (DOE) is responsible for providing the radioisotope power system for the NASA missions, and has managed the development of the free-piston power convertor for this application. The NASA Glenn Research Center has been involved in the development of Stirling power conversion technology for over 25 years and is currently providing support to DOE. Due to the nature of the potential missions, long life and high reliability are important features for the power system. Substantial resources have been spent on the development of long life Stirling cryocoolers for space applications. As a very general statement, free-piston Stirling power convertors have many features in common with free-piston Stirling cryocoolers, however there are also significant differences. For example, designs exist for both power convertors and cryocoolers that use the flexure bearing support system to provide noncontacting operation of the close-clearance moving parts. This technology and the operating experience derived from one application may be readily applied to the other application. This similarity does not pertain in the case of outgassing and contamination. In the cryocooler, the contaminants normally condense in the critical heat exchangers and foul the performance. In the Stirling power convertor just the opposite is true as contaminants condense on non-critical surfaces. A methodology was recently published that provides a relative comparison of reliability, and is applicable to systems. The methodology has been applied to compare the reliability of a Stirling cryocooler relative to that of a free-piston Stirling power convertor. 
The reliability analysis indicates that the power convertor should be able to achieve superior reliability compared to the cryocooler.
A Synthetic Vision Preliminary Integrated Safety Analysis
NASA Technical Reports Server (NTRS)
Hemm, Robert; Houser, Scott
2001-01-01
This report documents efforts to analyze a sample of aviation safety programs, using the LMI-developed integrated safety analysis tool to determine the change in system risk resulting from Aviation Safety Program (AvSP) technology implementation. Specifically, we have worked to modify existing system safety tools to address the safety impact of synthetic vision (SV) technology. Safety metrics include reliability, availability, and resultant hazard. This analysis of SV technology is intended to be part of a larger effort to develop a model that is capable of "providing further support to the product design and development team as additional information becomes available". The reliability analysis portion of the effort is complete and is fully documented in this report. The simulation analysis is still underway; it will be documented in a subsequent report. The specific goal of this effort is to apply the integrated safety analysis to SV technology. This report also contains a brief discussion of data necessary to expand the human performance capability of the model, as well as a discussion of human behavior and its implications for system risk assessment in this modeling environment.
Design Strategy for a Formally Verified Reliable Computing Platform
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Caldwell, James L.; DiVito, Ben L.
1991-01-01
This paper presents a high-level design for a reliable computing platform for real-time control applications. The design tradeoffs and analyses related to the development of a formally verified reliable computing platform are discussed. The design strategy advocated in this paper requires the use of techniques that can be completely characterized mathematically as opposed to more powerful or more flexible algorithms whose performance properties can only be analyzed by simulation and testing. The need for accurate reliability models that can be related to the behavior models is also stressed. Tradeoffs between reliability and voting complexity are explored. In particular, the transient recovery properties of the system are found to be fundamental to both the reliability analysis as well as the "correctness" models.
Reliability of Beam Loss Monitors System for the Large Hadron Collider
NASA Astrophysics Data System (ADS)
Guaglio, G.; Dehning, B.; Santoni, C.
2004-11-01
The employment of superconducting magnets in high energy colliders opens challenging failure scenarios and brings new criticalities for the whole system protection. For the LHC beam loss protection system, the failure rate and the availability requirements have been evaluated using the Safety Integrity Level (SIL) approach. A downtime cost evaluation is used as input for the SIL approach. The most critical systems, which contribute to the final SIL value, are the dump system, the interlock system, the beam loss monitors system and the energy monitor system. The Beam Loss Monitors System (BLMS) is critical for short and intense particle losses, while for medium and longer loss durations it is assisted by other systems, such as the quench protection system and the cryogenic system. For BLMS, hardware and software have been evaluated in detail. The reliability input figures have been collected using historical data from the SPS, using temperature and radiation damage experimental data as well as using standard databases. All the data have been processed by reliability software (Isograph). The analysis ranges from the component data to the system configuration.
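The SIL assignment step can be sketched as summing the dangerous-failure rates of a series protection chain and mapping the total onto the IEC 61508 continuous-mode bands. The per-subsystem rates below are invented placeholders, not the LHC figures:

```python
# IEC 61508 continuous/high-demand mode bands: probability of a dangerous
# failure per hour (PFH) mapped to a Safety Integrity Level.
SIL_BANDS = [(1e-9, 1e-8, 4),
             (1e-8, 1e-7, 3),
             (1e-7, 1e-6, 2),
             (1e-6, 1e-5, 1)]

def sil_level(failure_rates):
    """Sum per-subsystem dangerous failure rates (per hour, series chain,
    independent failures assumed) and map the total onto a SIL band."""
    pfh = sum(failure_rates)
    for lo, hi, sil in SIL_BANDS:
        if lo <= pfh < hi:
            return pfh, sil
    return pfh, 0   # outside the SIL 1-4 range

# Hypothetical subsystem rates (per hour), purely illustrative:
pfh, sil = sil_level([2e-9, 5e-9, 1e-9])
```

Because the chain is in series, the weakest subsystem dominates the total PFH, which is why the analysis above singles out the most critical contributors to the final SIL value.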
NASA Astrophysics Data System (ADS)
Siddiqi, A.; Muhammad, A.; Wescoat, J. L., Jr.
2017-12-01
Large-scale, legacy canal systems, such as the irrigation infrastructure in the Indus Basin in Punjab, Pakistan, have been primarily conceived, constructed, and operated with a techno-centric approach. The emerging socio-hydrological approaches provide a new lens for studying such systems to potentially identify fresh insights for addressing contemporary challenges of water security. In this work, using the partial definition of water security as "the reliable availability of an acceptable quantity and quality of water", supply reliability is construed as a partial measure of water security in irrigation systems. A set of metrics is used to quantitatively study the reliability of surface supply in the canal systems of Punjab, Pakistan, using an extensive dataset of 10-daily surface water deliveries over a decade (2007-2016) and of high-frequency (10-minute) flow measurements over one year. The reliability quantification is based on a comparison of actual deliveries and entitlements, which are a combination of hydrological and social constructs. The socio-hydrological lens highlights critical issues of how flows are measured, monitored, perceived, and experienced from the perspectives of operators (government officials) and users (farmers). The analysis reveals varying levels of reliability (and by extension security) of supply when data are examined across multiple temporal and spatial scales. The results shed new light on the evolution of water security (as partially measured by supply reliability) for surface irrigation in the Punjab province of Pakistan and demonstrate that "information security" (defined as the reliable availability of sufficiently detailed data) is vital for enabling water security. It is found that forecasting and management (which are social processes) lead to differences between entitlements and actual deliveries, and that there is significant potential to positively affect supply reliability through interventions in the social realm.
Reliability of a Parallel Pipe Network
NASA Technical Reports Server (NTRS)
Herrera, Edgar; Chamis, Christopher (Technical Monitor)
2001-01-01
The goal of this NASA-funded research is to advance research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction methods for improved aerospace and aircraft propulsion system components. Reliability methods are used to quantify response uncertainties due to inherent uncertainties in design variables. In this report, several reliability methods are applied to a parallel pipe network. The observed responses are the head delivered by a main pump and the head values of two parallel lines at certain flow rates. The probability that the flow rates in the lines will be less than their specified minimums will be discussed.
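The question posed above, the probability that line flows fall below specified minimums, can be sketched with a small Monte Carlo simulation. The head-loss relation, distributions, and minimum flows are invented for illustration, not taken from the report:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Illustrative distributions (assumed, not from the report): pump head H [m]
# and line resistance coefficients K1, K2 [s^2/m^5] treated as normal.
H  = rng.normal(50.0, 2.0, n)
K1 = rng.normal(0.030, 0.002, n)
K2 = rng.normal(0.045, 0.003, n)

# For a line whose head loss obeys H = K*Q^2, the delivered flow is Q = sqrt(H/K).
Q1 = np.sqrt(H / K1)
Q2 = np.sqrt(H / K2)

# Probability that either parallel line delivers less than its specified minimum.
q1_min, q2_min = 38.0, 31.0
p_fail = np.mean((Q1 < q1_min) | (Q2 < q2_min))
print(f"P(under-delivery) ~ {p_fail:.4f}")
```

First-order reliability methods (FORM) estimate the same probability from a linearization at the design point instead of sampling.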
Donahoe, Laura; McDonald, Ellen; Kho, Michelle E; Maclennan, Margaret; Stratford, Paul W; Cook, Deborah J
2009-01-01
Given their clinical, research, and administrative purposes, scores on the Acute Physiology and Chronic Health Evaluation (APACHE) II should be reliable, whether calculated by health care personnel or a clinical information system. To determine reliability of APACHE II scores calculated by a clinical information system and by health care personnel before and after a multifaceted quality improvement intervention. APACHE II scores of 37 consecutive patients admitted to a closed, 15-bed, university-affiliated intensive care unit were collected by a research coordinator, a database clerk, and a clinical information system. After a quality improvement intervention focused on health care personnel and the clinical information system, the same methods were used to collect data on 32 consecutive patients. The research coordinator and the clerk did not know each other's scores or the information system's score. The data analyst did not know the source of the scores until analysis was complete. APACHE II scores obtained by the clerk and the research coordinator were highly reliable (intraclass correlation coefficient, 0.88 before vs 0.80 after intervention; P = .25). No significant changes were detected after the intervention; however, compared with scores of the research coordinator, the overall reliability of APACHE II scores calculated by the clinical information system improved (intraclass correlation coefficient, 0.24 before intervention vs 0.91 after intervention, P < .001). After completion of a quality improvement intervention, health care personnel and a computerized clinical information system calculated sufficiently reliable APACHE II scores for clinical, research, and administrative purposes.
Biomechanical analysis using Kinovea for sports application
NASA Astrophysics Data System (ADS)
Muaza Nor Adnan, Nor; Patar, Mohd Nor Azmi Ab; Lee, Hokyoo; Yamamoto, Shin-Ichiroh; Jong-Young, Lee; Mahmud, Jamaluddin
2018-04-01
This paper assesses the reliability of HD VideoCam–Kinovea as an alternative tool for conducting motion analysis and measuring the knee relative angle during the drop jump movement. The motion capture and analysis procedure was conducted in the Biomechanics Lab, Shibaura Institute of Technology, Omiya Campus, Japan. A healthy subject without any gait disorder (BMI of 28.60 ± 1.40) was recruited. The volunteer was asked to perform the drop jump movement on a preset platform, and the motion was simultaneously recorded, in the sagittal plane only, using an established infrared motion capture system (Hawk–Cortex) and a HD VideoCam. The capture was repeated five times. The outputs (video recordings) from the HD VideoCam were input into Kinovea (an open-source software package) and the drop jump pattern was tracked and analysed. These data were compared with the drop jump pattern tracked and analysed earlier using the Hawk–Cortex system. In general, the results obtained (drop jump pattern) using the HD VideoCam–Kinovea are close to the results obtained using the established motion capture system. Basic statistical analyses show that most average variances are less than 10%, demonstrating the repeatability of the protocol and the reliability of the results. It can be concluded that the integration of HD VideoCam–Kinovea has the potential to become a reliable motion capture–analysis system; moreover, it is low cost, portable and easy to use. The current study and its findings are found useful and have contributed to knowledge pertaining to motion capture–analysis, the drop jump movement and HD VideoCam–Kinovea integration.
Reliability and Productivity Modeling for the Optimization of Separated Spacecraft Interferometers
NASA Technical Reports Server (NTRS)
Kenny, Sean (Technical Monitor); Wertz, Julie
2002-01-01
As technological systems grow in capability, they also grow in complexity. Due to this complexity, it is no longer possible for a designer to use engineering judgement to identify the components that have the largest impact on system life cycle metrics, such as reliability, productivity, cost, and cost effectiveness. One way of identifying these key components is to build quantitative models and analysis tools that can be used to aid the designer in making high level architecture decisions. Once these key components have been identified, two main approaches to improving a system using these components exist: add redundancy or improve the reliability of the component. In reality, the most effective approach for almost any system will be some combination of these two approaches, in varying proportions for each component. Therefore, this research tries to answer the question of how to divide funds between adding redundancy and improving the reliability of components so as to most cost effectively improve the life cycle metrics of a system. While this question is relevant to any complex system, this research focuses on one type of system in particular: Separated Spacecraft Interferometers (SSI). Quantitative models are developed to analyze the key life cycle metrics of different SSI system architectures. Next, tools are developed to compare a given set of architectures in terms of total performance, by coupling different life cycle metrics together into one performance metric. Optimization tools, such as simulated annealing and genetic algorithms, are then used to search the entire design space to find the "optimal" architecture design. Sensitivity analysis tools have been developed to determine how sensitive the results of these analyses are to uncertain user defined parameters. Finally, several possibilities for future work in this area of research are presented.
To the systematization of failure analysis for perturbed systems (in German)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haller, U.
1974-01-01
The paper investigates the reliable functioning of complex technical systems. The central question is how the functioning of technical systems that may fail, or whose design still contains faults, can be assessed in the earliest planning stages. The paper develops a functioning schedule and surveys possible methods for the systematic failure analysis of systems with stochastic failures. (RW/AK)
Site-specific landslide assessment in Alpine area using a reliable integrated monitoring system
NASA Astrophysics Data System (ADS)
Romeo, Saverio; Di Matteo, Lucio; Kieffer, Daniel Scott
2016-04-01
Rockfalls are one of the major causes of landslide fatalities around the world. The present work discusses the reliability of integrated displacement monitoring of a rockfall in the Alpine region (Salzburg Land, Austria), also taking into account the effect of ongoing climate change. Because the frequency and magnitude of events that threaten human lives and infrastructure are unpredictable, it is frequently necessary to implement an efficient monitoring system. For this reason, during the last decades, integrated monitoring systems for unstable slopes have been widely developed and used (e.g., extensometers, cameras, remote sensing, etc.). In this framework, remote sensing techniques such as GBInSAR (Ground-Based Interferometric Synthetic Aperture Radar) have emerged as efficient and powerful tools for deformation monitoring. GBInSAR measurements can support an early warning system using surface deformation parameters such as ground displacement or inverse velocity (for semi-empirical forecasting methods). In order to check the reliability of GBInSAR and to monitor the evolution of the landslide, it is very important to integrate different techniques. Indeed, a multi-instrumental approach is essential to investigate movements both at the surface and at depth, and the use of different monitoring techniques makes it possible to cross-check the data, minimize errors, verify data quality, and improve the monitoring system. During 2013, an intense and complete monitoring campaign was conducted on the Ingelsberg landslide. Analysis of both the historical temperature series (HISTALP) recorded during the last century and data from local weather stations shows that temperature values (autumn-winter, winter and spring) have clearly increased in the Bad Hofgastein area, as in the Alpine region generally. As a consequence, in recent decades rockfall events have shifted from spring to summer due to warmer winters.
It is interesting to point out that temperature values recorded in the valley and on the slope show a good relationship, indicating that the climatic monitoring is reliable. The landslide displacement monitoring is reliable as well: the comparison between displacements at depth (extensometers) and at the surface (GBInSAR), referred to March-December 2013, shows a high reliability, as confirmed by the inter-rater reliability analysis (Pearson correlation coefficient higher than 0.9). In conclusion, the reliability of the monitoring system confirms that the data can be used to improve knowledge of rockfall kinematics and to develop an accurate early warning system useful for civil protection purposes.
Optimization Based Efficiencies in First Order Reliability Analysis
NASA Technical Reports Server (NTRS)
Peck, Jeffrey A.; Mahadevan, Sankaran
2003-01-01
This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank one updating technique. In problems that use a commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
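The BFORM idea can be sketched as follows: one finite-difference gradient at the start, then Broyden secant updates during the HL-RF iteration. The limit state function here is a classic textbook example (with reliability index beta = 2.5), not one of the paper's problems:

```python
import numpy as np

def g(u):
    # Classic FORM test function in standard normal space (beta = 2.5).
    return 0.1 * (u[0] - u[1])**2 - (u[0] + u[1]) / np.sqrt(2) + 2.5

def fd_grad(f, u, h=1e-6):
    # Finite-difference gradient: one extra function call per dimension.
    g0 = f(u)
    return np.array([(f(u + h * e) - g0) / h for e in np.eye(len(u))])

def bform(f, u0, tol=1e-6, max_iter=50):
    """HL-RF iteration; the gradient is evaluated by finite differences once,
    then maintained by Broyden rank-one (secant) updates -- the BFORM idea."""
    u = np.asarray(u0, float)
    gu, grad = f(u), fd_grad(f, u)
    for _ in range(max_iter):
        u_new = (grad @ u - gu) / (grad @ grad) * grad   # HL-RF step
        s = u_new - u
        if np.linalg.norm(s) < tol:
            break
        g_new = f(u_new)
        # Broyden update: make grad consistent with the observed change in g.
        grad = grad + (g_new - gu - grad @ s) / (s @ s) * s
        u, gu = u_new, g_new
    return np.linalg.norm(u)   # reliability index beta

beta = bform(g, [0.0, 0.0])
print("reliability index beta ~", beta)   # ~2.5 for this test function
```

Each iteration after the first costs a single function evaluation, versus n+1 for a finite-difference FORM step.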
Krause, David A; Boyd, Michael S; Hager, Allison N; Smoyer, Eric C; Thompson, Anthony T; Hollman, John H
2015-02-01
The squat is a fundamental movement of many athletic and daily activities. Methods to clinically assess the squat maneuver range from simple observation to the use of sophisticated equipment. The purpose of this study was to examine the reliability of Coach's Eye (TechSmith Corp), a 2-dimensional (2D) motion analysis mobile device application (app), for assessing maximal sagittal plane hip, knee, and ankle motion during a functional movement screen deep squat, and to compare range of motion values generated by it to those from a Vicon (Vicon Motion Systems Ltd) 3-dimensional (3D) motion analysis system. Twenty-six healthy subjects performed three functional movement screen deep squats recorded simultaneously by both the app (on an iPad [Apple Inc]) and the 3D motion analysis system. Joint angle data were calculated with Vicon Nexus software (Vicon Motion Systems Ltd). The app video was analyzed frame by frame to determine, and freeze on the screen, the deepest position of the squat. With a capacitive stylus, reference lines were then drawn on the iPad screen to determine joint angles. Procedures were repeated with approximately 48 hours between sessions. Test-retest intrarater reliability (ICC3,1) for the app at the hip, knee, and ankle was 0.98, 0.98, and 0.79, respectively. Minimum detectable change was hip 6°, knee 6°, and ankle 7°. Hip joint angles measured with the 2D app exceeded measurements obtained with the 3D motion analysis system by approximately 40°. Differences at the knee and ankle were of lower magnitude, with mean differences of 5° and 3°, respectively. Bland-Altman analysis demonstrated a systematic bias in the hip range-of-motion measurement. No such bias was demonstrated at the knee or ankle. The 2D app demonstrated excellent reliability and appeared to be a responsive means to assess for clinical change, with minimum detectable change values ranging from 6° to 7°. 
These results also suggest that the 2D app may be used as an alternative to a sophisticated 3D motion analysis system for assessing sagittal plane knee and ankle motion; however, it does not appear to be a comparable alternative for assessing hip motion. Level of evidence: 3.
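The test-retest statistics used above, ICC(3,1) and minimum detectable change, can be computed from an ANOVA decomposition as in this sketch. The angle data are invented, and in practice a statistics package would typically be used:

```python
import numpy as np

def icc_3_1(X):
    """ICC(3,1): two-way mixed effects, consistency, single measures.
    X is an (n subjects x k sessions/raters) matrix."""
    n, k = X.shape
    grand = X.mean()
    ms_rows = k * np.sum((X.mean(axis=1) - grand) ** 2) / (n - 1)
    sse = np.sum((X - X.mean(axis=1, keepdims=True)
                    - X.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_err = sse / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

def mdc95(X, icc):
    # Minimum detectable change from the standard error of measurement (SEM).
    sem = X.std(ddof=1) * np.sqrt(1.0 - icc)
    return 1.96 * np.sqrt(2.0) * sem

# Hypothetical test-retest knee angles (degrees): 6 subjects, 2 sessions.
X = np.array([[118., 120.], [125., 123.], [131., 133.],
              [110., 112.], [140., 138.], [127., 129.]])
icc = icc_3_1(X)
print(round(icc, 3), round(mdc95(X, icc), 1))
```

A change smaller than the MDC cannot be distinguished from measurement noise at the 95% level, which is why the paper reports both numbers together.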
A Bayesian approach to reliability and confidence
NASA Technical Reports Server (NTRS)
Barnes, Ron
1989-01-01
The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted along with the development of the Extended Orbiter Duration--Weakest Link study which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor, Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component and hence by a simple extension for a system of components in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property since the reliability usually decreases markedly as the parts degrade over time. While they have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case needs to be attacked by the computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.
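For the constant-failure-rate case, the Bayesian estimate described above has a simple conjugate form. This sketch uses a uniform Beta(1, 1) prior, the "worst case" noted in the abstract, with hypothetical test counts rather than any NASA data:

```python
import numpy as np

# Conjugate Beta-Binomial update: uniform Beta(1, 1) prior ("worst case"),
# then s successes in n trials. The counts are hypothetical.
a0, b0 = 1.0, 1.0
n, s = 20, 19                      # invented component test record
a, b = a0 + s, b0 + (n - s)

post_mean = a / (a + b)            # Bayes estimate of component reliability

# 95% lower credible bound by Monte Carlo sampling from the posterior.
rng = np.random.default_rng(1)
samples = rng.beta(a, b, 100_000)
lower95 = np.quantile(samples, 0.05)
print(f"posterior mean {post_mean:.3f}, 95% lower bound {lower95:.3f}")

# For a series system of independent components, reliabilities multiply,
# so posterior samples for the system are products of component samples:
system_samples = samples * rng.beta(30, 2, 100_000)   # second component
print(f"P(system reliability > 0.8) ~ {np.mean(system_samples > 0.8):.2f}")
```

The time-varying failure-rate case treated in the abstract replaces the Binomial likelihood and generally has no closed form, which is where the tabulated numerical solution comes in.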
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj; Gage, Peter; Wright, Michael J.
2017-01-01
Mars Sample Return is our Grand Challenge for the coming decade. TPS (Thermal Protection System) nominal performance is not the key challenge. The main difficulty for designers is the need to verify unprecedented reliability for the entry system: current guidelines for prevention of backward contamination require that the probability of spores larger than 1 micron diameter escaping into the Earth environment be lower than 1 in 1,000,000 for the entire system, and the allocation to TPS would be more stringent than that. For reference, the reliability allocation for Orion TPS is closer to 1 in 1000, and the demonstrated reliability for previous human Earth return systems was closer to 1 in 100. Improving reliability by more than 3 orders of magnitude is a grand challenge indeed. The TPS community must embrace the possibility of new architectures that are focused on reliability above thermal performance and mass efficiency. The MSR (Mars Sample Return) EEV (Earth Entry Vehicle) may be hit by MMOD (Micrometeoroid and Orbital Debris) prior to reentry. A chute-less aero-shell design that allows for a self-righting shape was baselined in prior MSR studies, with the assumption that a passive system will maximize EEV robustness. Hence the aero-shell, along with the TPS, has to survive ground impact without breaking apart. System verification will require testing to establish ablative performance and thermal failure, but also testing of damage from MMOD and of structural performance at ground impact. Mission requirements will demand analysis, testing and verification that are focused on establishing reliability of the design. In this proposed talk, we will focus on the grand challenge of MSR EEV TPS and the need for innovative approaches to address challenges in modeling, testing, manufacturing and verification.
Integrated Safety Risk Reduction Approach to Enhancing Human-Rated Spaceflight Safety
NASA Astrophysics Data System (ADS)
Mikula, J. F. Kip
2005-12-01
This paper explores and defines the currently accepted concept and philosophy of safety improvement based on Reliability enhancement (called here Reliability Enhancement Based Safety Theory [REBST]). In this theory a Reliability calculation is used as a measure of the safety achieved on the program. This calculation may be based on a math model or a Fault Tree Analysis (FTA) of the system, or on an Event Tree Analysis (ETA) of the system's operational mission sequence. In each case, the numbers used in this calculation are hardware failure rates gleaned from past similar programs. As part of this paper, a fictional but representative case study is provided that helps to illustrate the problems and inaccuracies of this approach to safety determination. Then a safety determination and enhancement approach based on hazard, worst case analysis, and safety risk determination (called here Worst Case Based Safety Theory [WCBST]) is included. This approach is defined and detailed using the same example case study as the REBST case study. In the end it is concluded that an approach combining the two theories works best to reduce Safety Risk.
NASA Technical Reports Server (NTRS)
Holanda, R.; Frause, L. M.
1977-01-01
The reliability of 45 state-of-the-art strain gage systems under full scale engine testing was investigated. The flame spray process was used to install 23 systems on the first fan rotor of a YF-100 engine; the others were epoxy cemented. A total of 56 percent of the systems failed in 11 hours of engine operation. Flame spray system failures were primarily due to high gage resistance, probably caused by high stress levels. Epoxy system failures were principally erosion failures, but only on the concave side of the blade. Lead-wire failures between the blade-to-disk jump and the control room could not be analyzed.
Missile and Space Systems Reliability versus Cost Trade-Off Study
1983-01-01
Robert C. Schneider; Boeing Aerospace Company. [Abstract garbled in extraction from the report documentation page; recoverable fragments:] "… reliability problems, which has the real bearing on program effectiveness. A well planned and funded reliability effort can prevent or ferret out … failure analysis, and the incorporation and verification of design corrections to prevent recurrence of failures."
RADC thermal guide for reliability engineers
NASA Astrophysics Data System (ADS)
Morrison, G. N.; Kallis, J. M.; Strattan, L. A.; Jones, I. R.; Lena, A. L.
1982-06-01
This guide was developed to provide a reliability engineer, who is not proficient in thermal design and analysis techniques, with the tools for managing and evaluating the thermal design and production of electronic equipment. It defines the requirements and tasks that should be addressed in system equipment specifications and statements of work, and describes how to evaluate performance.
Fatigue criterion to system design, life and reliability
NASA Technical Reports Server (NTRS)
Zaretsky, E. V.
1985-01-01
A generalized methodology for structural life prediction, design, and reliability based upon a fatigue criterion is advanced. The life prediction methodology is based in part on the work of W. Weibull and of G. Lundberg and A. Palmgren. The approach incorporates the computed lives of elemental stress volumes of a complex machine element to predict system life. The results of coupon fatigue testing can be incorporated into the analysis, allowing for life prediction and component or structural renewal rates with reasonable statistical certainty.
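The weakest-link combination of elemental lives can be sketched as follows. For a common Weibull slope, elemental survival probabilities multiply, which gives the closed-form life combination below; the slope and lives are invented for illustration:

```python
import numpy as np

# Weakest-link (Lundberg-Palmgren / Weibull) sketch: system survival is the
# product of elemental survivals, so for a common Weibull slope e the
# elemental lives combine as  L_sys = (sum_i L_i^-e)^(-1/e).
# The numbers below are illustrative, not from the report.
e = 1.5                                   # Weibull slope from coupon tests
L = np.array([900.0, 1200.0, 2000.0])     # lives of stressed volumes, hours

L_sys = np.sum(L ** -e) ** (-1.0 / e)
print(f"system life ~ {L_sys:.0f} h")     # shorter than the weakest element
```

Note the system life is shorter than the shortest elemental life: every additional stressed volume adds a way to fail.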
1988-09-01
[Abstract garbled in extraction; recoverable fragments:] "… applies to a one Air Transport Rack (ATR) volume LRU in an airborne, uninhabited, fighter environment. The goal is to have a 2000 hour mean time between … benefits of applying reliability and maintainability improvements to these weapon systems or components. Examples will be given in this research of … where the Pareto Principle applies. The Pareto analysis applies to field failure types as well as to shop defect types. …"
A Framework for Reliability and Safety Analysis of Complex Space Missions
NASA Technical Reports Server (NTRS)
Evans, John W.; Groen, Frank; Wang, Lui; Austin, Rebekah; Witulski, Art; Mahadevan, Nagabhushan; Cornford, Steven L.; Feather, Martin S.; Lindsey, Nancy
2017-01-01
Long duration and complex mission scenarios are characteristics of NASA's human exploration of Mars, and will provide unprecedented challenges. Systems reliability and safety will become increasingly demanding and management of uncertainty will be increasingly important. NASA's current pioneering strategy recognizes and relies upon assurance of crew and asset safety. In this regard, flexibility to develop and innovate in the emergence of new design environments and methodologies, encompassing modeling of complex systems, is essential to meet the challenges.
Benchmark analysis of forecasted seasonal temperature over different climatic areas
NASA Astrophysics Data System (ADS)
Giunta, G.; Salerno, R.; Ceppi, A.; Ercolani, G.; Mancini, M.
2015-12-01
From a long-term perspective, an improvement of seasonal forecasting, which is often exclusively based on climatology, could provide a new capability for the management of energy resources on a time scale of just a few months. This paper presents a benchmark analysis of long-term temperature forecasts over Italy for the year 2010, comparing the eni-kassandra meteo forecast (e-kmf®) model, the Climate Forecast System-National Centers for Environmental Prediction (CFS-NCEP) model, and the climatological reference (based on 25 years of data) with observations. Statistical indexes are used to assess the reliability of predictions of 2-m monthly air temperature up to 12 weeks ahead. The results show that the best performance is achieved by the e-kmf® system, which improves the reliability of long-term forecasts compared to climatology and the CFS-NCEP model. By using a reliable high-performance forecast system, it is possible to optimize the natural gas portfolio and management operations, thereby obtaining a competitive advantage in the European energy market.
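The benchmark idea, scoring a model against observations with climatology as the reference, can be illustrated with a simple skill score. All temperatures here are invented:

```python
import numpy as np

# Monthly 2-m temperatures (degrees C): observations, a model forecast, and
# the climatological reference. All numbers are invented for illustration.
obs   = np.array([4.1, 6.8, 11.2, 15.0, 19.6, 23.4])
model = np.array([3.5, 7.4, 10.6, 15.9, 18.8, 24.1])
clim  = np.array([5.0, 8.0, 12.0, 16.0, 20.0, 24.0])

mae = lambda f, o: np.mean(np.abs(f - o))

# Skill score > 0 means the model beats the climatological reference.
skill = 1.0 - mae(model, obs) / mae(clim, obs)
print(f"MAE model {mae(model, obs):.2f}, MAE clim {mae(clim, obs):.2f}, "
      f"skill {skill:.2f}")
```

A forecast system is only useful for portfolio planning if this skill stays positive at the lead times of interest.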
Cushion, Christopher; Harvey, Stephen; Muir, Bob; Nelson, Lee
2012-01-01
We outline the evolution of a computerised systematic observation tool and describe the process for establishing the validity and reliability of this new instrument. The Coach Analysis and Interventions System (CAIS) has 23 primary behaviours related to physical behaviour, feedback/reinforcement, instruction, verbal/non-verbal, questioning and management. The instrument also analyses secondary coach behaviour related to performance states, recipient, timing, content and questioning/silence. The CAIS is a multi-dimensional and multi-level mechanism able to provide detailed and contextualised data about specific coaching behaviours occurring in complex and nuanced coaching interventions and environments that can be applied to both practice sessions and competition.
NASA Astrophysics Data System (ADS)
Liu, Yuan; Wang, Mingqiang; Ning, Xingyao
2018-02-01
Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, two tradeoffs, a primary tradeoff and a secondary tradeoff between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model, are explored and analyzed in a small system and in the IEEE-RTS system. The analysis of the two tradeoffs can help in establishing new efficient simplified LOLP formulations and new SR optimization models.
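For small systems, the LOLP at the heart of the constraint can be computed exactly with a capacity outage probability table built by convolution over units. The unit capacities, forced outage rates, and load below are invented:

```python
def outage_table(capacities, fors):
    """Capacity outage probability table by convolution: the probability of
    each total available-capacity level, given unit forced outage rates."""
    dist = {0.0: 1.0}                      # available capacity -> probability
    for cap, q in zip(capacities, fors):
        new = {}
        for avail, p in dist.items():
            new[avail + cap] = new.get(avail + cap, 0.0) + p * (1.0 - q)
            new[avail] = new.get(avail, 0.0) + p * q
        dist = new
    return dist

def lolp(dist, load):
    # Loss-of-load probability: total available capacity falls below the load.
    return sum(p for avail, p in dist.items() if avail < load)

# Small illustrative system (assumed data): three units, 2% outage rate each.
dist = outage_table([100.0, 100.0, 50.0], [0.02, 0.02, 0.02])
print(lolp(dist, 160.0))   # 160 MW needs both 100 MW units: 1 - 0.98^2
```

The table grows combinatorially with the number of units, which is exactly why UC formulations resort to simplified LOLP expressions.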
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.
2002-01-01
Brittle materials are being used, or considered, for a wide variety of high tech applications that operate in harsh environments, including static and rotating turbine parts, thermal protection systems, dental prosthetics, fuel cells, oxygen transport membranes, radomes, and MEMS. Designing components to sustain repeated load without fracturing while using the minimum amount of material requires the use of a probabilistic design methodology. The CARES/Life code provides a general-purpose analysis tool that predicts the probability of failure of a ceramic component as a function of its time in service. For this presentation an overview of the CARES/Life program will be provided. Emphasis will be placed on describing the latest enhancements to the code for reliability analysis with time varying loads and temperatures (fully transient reliability analysis). Also, early efforts in investigating the validity of using Weibull statistics, the basis of the CARES/Life program, to characterize the strength of MEMS structures will be described, as well as the version of CARES/Life for MEMS (CARES/MEMS) being prepared, which incorporates single crystal and edge flaw reliability analysis capability. It is hoped this talk will open a dialog for potential collaboration in the area of MEMS testing and life prediction.
Application Research of Fault Tree Analysis in Grid Communication System Corrective Maintenance
NASA Astrophysics Data System (ADS)
Wang, Jian; Yang, Zhenwei; Kang, Mei
2018-01-01
This paper attempts to apply the fault tree analysis method to the corrective maintenance of grid communication systems. Through the establishment of a fault tree model of a typical system, combined with engineering experience, fault tree analysis theory is used to analyze the model, covering structure functions, probability importance, and so on. The results show that fault tree analysis enables fast fault localization and effective repair of the system. Meanwhile, it is found that the fault tree analysis method has some guiding significance for reliability research and upgrading of the system.
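The two analysis steps named above, evaluating the structure function and computing probability importance, can be sketched on a toy fault tree. The tree topology and failure probabilities are invented, not taken from a grid communication system:

```python
from itertools import product

# Invented fault tree: the top event "link down" fires if the power supply
# fails OR both redundant ports fail (structure function of the tree).
def top_event(power, port_a, port_b):
    return power or (port_a and port_b)

p = {"power": 0.001, "port_a": 0.01, "port_b": 0.01}

def top_probability(fail_prob):
    """Exact top-event probability by enumerating basic-event states."""
    names = list(fail_prob)
    total = 0.0
    for states in product([False, True], repeat=len(names)):
        s = dict(zip(names, states))
        if top_event(s["power"], s["port_a"], s["port_b"]):
            pr = 1.0
            for name, failed in s.items():
                pr *= fail_prob[name] if failed else 1.0 - fail_prob[name]
            total += pr
    return total

print(top_probability(p))   # 0.001 + 0.0001 - 0.001*0.0001

# Birnbaum (probability) importance of a basic event: dP(top)/dp_i,
# estimated by a central difference -- it ranks repair priorities.
def birnbaum(fail_prob, name, h=1e-6):
    hi = dict(fail_prob); hi[name] += h
    lo = dict(fail_prob); lo[name] -= h
    return (top_probability(hi) - top_probability(lo)) / (2 * h)

print(birnbaum(p, "power"))   # highest importance -> inspect power first
```

Enumeration is exponential in the number of basic events; production tools use minimal cut sets or binary decision diagrams instead.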
An economic analysis of a commercial approach to the design and fabrication of a space power system
NASA Technical Reports Server (NTRS)
Putney, Z.; Been, J. F.
1979-01-01
A commercial approach to the design and fabrication of an economical space power system is presented. Cost reductions are projected through the conceptual design of a 2 kW space power system built with provision for serviceability. The approach to system costing that is used takes into account both the constraints of operation in space and commercial production engineering approaches. The cost of this power system reflects a variety of cost/benefit tradeoffs that would reduce system cost as a function of system reliability requirements, complexity, and the impact of rigid specifications. A breakdown of the system design, documentation, fabrication, and reliability and quality assurance cost estimates is detailed.
Boe, S G; Dalton, B H; Harwood, B; Doherty, T J; Rice, C L
2009-05-01
To establish the inter-rater reliability of decomposition-based quantitative electromyography (DQEMG) derived motor unit number estimates (MUNEs) and quantitative motor unit (MU) analysis. Using DQEMG, two examiners independently obtained a sample of needle and surface-detected motor unit potentials (MUPs) from the tibialis anterior muscle from 10 subjects. Coupled with a maximal M wave, surface-detected MUPs were used to derive a MUNE for each subject and each examiner. Additionally, size-related parameters of the individual MUs were obtained following quantitative MUP analysis. Test-retest MUNE values were similar with high reliability observed between examiners (ICC=0.87). Additionally, MUNE variability from test-retest as quantified by a 95% confidence interval was relatively low (+/-28 MUs). Lastly, quantitative data pertaining to MU size, complexity and firing rate were similar between examiners. MUNEs and quantitative MU data can be obtained with high reliability by two independent examiners using DQEMG. Establishing the inter-rater reliability of MUNEs and quantitative MU analysis using DQEMG is central to the clinical applicability of the technique. In addition to assessing response to treatments over time, multiple clinicians may be involved in the longitudinal assessment of the MU pool of individuals with disorders of the central or peripheral nervous system.
Interobserver Reliability of the Total Body Score System for Quantifying Human Decomposition.
Dabbs, Gretchen R; Connor, Melissa; Bytheway, Joan A
2016-03-01
Several authors have tested the accuracy of the Total Body Score (TBS) method for quantifying decomposition, but none have examined the reliability of the method as a scoring system by testing interobserver error rates. Sixteen participants used the TBS system to score 59 observation packets including photographs and written descriptions of 13 human cadavers in different stages of decomposition (postmortem interval: 2-186 days). Data analysis used a two-way random model intraclass correlation in SPSS (v. 17.0). The TBS method showed "almost perfect" agreement between observers, with average absolute correlation coefficients of 0.990 and average consistency correlation coefficients of 0.991. While the TBS method may have sources of error, scoring reliability is not one of them. Individual component scores were examined, and the influences of education and experience levels were investigated. Overall, the trunk component scores were the least concordant. Suggestions are made to improve the reliability of the TBS method. © 2016 American Academy of Forensic Sciences.
NASA Technical Reports Server (NTRS)
Motyka, P.
1983-01-01
A methodology is developed and applied for quantitatively analyzing the reliability of a dual, fail-operational redundant strapdown inertial measurement unit (RSDIMU). A Markov evaluation model is defined in terms of the operational states of the RSDIMU to predict system reliability. A 27-state model is defined based upon a candidate redundancy management system that can detect and isolate a spectrum of failure magnitudes. The results of parametric studies are presented which show the effect on reliability of the gyro failure rate; the gyro and accelerometer failure rates together; false alarms; the probabilities of failure detection, failure isolation, and damage effects; and mission time. A technique is developed and evaluated for generating dynamic thresholds for detecting and isolating failures of the dual, separated IMU. Special emphasis is given to the detection of multiple, nonconcurrent failures. Digital simulation time histories are presented which show the thresholds obtained and their effectiveness in detecting and isolating sensor failures.
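A much smaller stand-in for the 27-state model can illustrate the Markov evaluation step. The three states, rates, and detection coverage below are invented, not the RSDIMU's:

```python
import numpy as np

# Toy Markov reliability model (a stand-in for the 27-state RSDIMU model):
# states {0: all good, 1: one sensor failed and detected, 2: system failed}.
lam, cov = 1e-4, 0.98          # failure rate per hour, detection coverage

# Generator matrix Q: row i holds the transition rates out of state i.
Q = np.array([
    [-2 * lam,  2 * lam * cov,  2 * lam * (1 - cov)],
    [0.0,       -lam,           lam                ],
    [0.0,       0.0,            0.0                ],   # absorbing failure
])

def reliability(t):
    """P(not in the failed state at time t), from p(t) = p0 expm(Q t)."""
    p0 = np.array([1.0, 0.0, 0.0])
    # Matrix exponential via eigendecomposition (Q here is diagonalizable).
    w, V = np.linalg.eig(Q.T)
    p_t = (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)) @ p0
    return 1.0 - p_t.real[2]

print(f"R(1000 h) ~ {reliability(1000.0):.4f}")
```

Parametric studies like those in the abstract amount to sweeping `lam` or `cov` and re-solving this model at the mission time.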
An overview of the phase-modular fault tree approach to phased mission system analysis
NASA Technical Reports Server (NTRS)
Meshkat, L.; Xing, L.; Donohue, S. K.; Ou, Y.
2003-01-01
This paper looks at how fault tree analysis (FTA), a primary means of performing reliability analysis of phased mission systems (PMS), can meet the challenges of such systems by presenting an overview of the phase-modular approach to solving fault trees that represent PMS.
Myneni, Sahiti; Cobb, Nathan K; Cohen, Trevor
2016-01-01
Analysis of user interactions in online communities could improve our understanding of health-related behaviors and inform the design of technological solutions that support behavior change. However, to achieve this we would need methods that provide granular perspective, yet are scalable. In this paper, we present a methodology for high-throughput semantic and network analysis of large social media datasets, combining semi-automated text categorization with social network analytics. We apply this method to derive content-specific network visualizations of 16,492 user interactions in an online community for smoking cessation. Performance of the categorization system was reasonable (average F-measure of 0.74, with system-rater reliability approaching rater-rater reliability). The resulting semantically specific network analysis of user interactions reveals content- and behavior-specific network topologies. Implications for socio-behavioral health and wellness platforms are also discussed.
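The reported F-measure is the standard harmonic mean of precision and recall; a minimal sketch with invented counts (not the study's confusion matrix):

```python
def f_measure(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall, as used to
    evaluate the text-categorization system against rater labels."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for one message category:
print(round(f_measure(tp=60, fp=20, fn=20), 2))  # → 0.75
```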
Reliability analysis of laminated CMC components through shell subelement techniques
NASA Technical Reports Server (NTRS)
Starlinger, Alois; Duffy, Stephen F.; Gyekenyesi, John P.
1992-01-01
An updated version of the integrated design program Composite Ceramics Analysis and Reliability Evaluation of Structures (C/CARES) was developed for the reliability evaluation of ceramic matrix composites (CMC) laminated shell components. The algorithm is now split into two modules: a finite-element data interface program and a reliability evaluation algorithm. More flexibility is achieved, allowing for easy implementation with various finite-element programs. The interface program creates a neutral data base which is then read by the reliability module. This neutral data base concept allows easy data transfer between different computer systems. The new interface program from the finite-element code Matrix Automated Reduction and Coupling (MARC) also includes the option of using hybrid laminates (a combination of plies of different materials or different layups) and allows for variations in temperature fields throughout the component. In the current version of C/CARES, a subelement technique was implemented, enabling stress gradients within an element to be taken into account. The noninteractive reliability function is now evaluated at each Gaussian integration point instead of using averaging techniques. As a result of the increased number of stress evaluation points, considerable improvements in the accuracy of reliability analyses were realized.
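The per-Gauss-point reliability bookkeeping can be illustrated with a two-parameter Weibull survival law; this is an assumed simplification, not the actual C/CARES noninteractive formulation, and the stresses and material parameters below are invented.

```python
import math

def point_survival(stress, volume, sigma0, m):
    """Two-parameter Weibull survival probability for one integration
    point: exp(-V * (sigma/sigma0)^m). Hypothetical form and units."""
    return math.exp(-volume * (stress / sigma0) ** m)

# Invented (stress MPa, volume) pairs standing in for Gauss-point data;
# component reliability is the product of point survival probabilities.
points = [(180.0, 0.02), (220.0, 0.02), (150.0, 0.02)]
reliability = 1.0
for stress, vol in points:
    reliability *= point_survival(stress, vol, sigma0=300.0, m=10.0)
print(f"component reliability ≈ {reliability:.4f}")
```

Evaluating at every integration point, rather than at an element average, is what lets stress gradients within an element affect the result.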
Design for Reliability and Safety Approach for the NASA New Launch Vehicle
NASA Technical Reports Server (NTRS)
Safie, Fayssal, M.; Weldon, Danny M.
2007-01-01
The United States National Aeronautics and Space Administration (NASA) is in the midst of a space exploration program intended to send crew and cargo to the International Space Station (ISS), to the moon, and beyond. This program is called Constellation. As part of the Constellation program, NASA is developing new launch vehicles aimed at significantly increasing safety and reliability, reducing the cost of accessing space, and providing a growth path for manned space exploration. Achieving these goals requires a rigorous process that addresses reliability, safety, and cost upfront and throughout all the phases of the life cycle of the program. This paper discusses the "Design for Reliability and Safety" approach for the new NASA crew launch vehicle called ARES I. The ARES I is being developed by NASA Marshall Space Flight Center (MSFC) in support of the Constellation program. The ARES I consists of three major Elements: a solid First Stage (FS), an Upper Stage (US), and a liquid Upper Stage Engine (USE). Stacked on top of the ARES I is the Crew Exploration Vehicle (CEV). The CEV consists of a Launch Abort System (LAS), Crew Module (CM), Service Module (SM), and a Spacecraft Adapter (SA). The CEV development is being led by NASA Johnson Space Center (JSC). Designing for high reliability and safety requires a well-integrated working environment and a sound technical design approach. The "Design for Reliability and Safety" approach addressed in this paper covers both the environment and the technical process put in place to support the ARES I design. To address the integrated working environment, the ARES I project office has established a risk-based design group called the "Operability Design and Analysis" (OD&A) group. This integrated group is intended to bring the engineering, design, and safety organizations together to optimize the system design for safety, reliability, and cost.
On the technical side, the ARES I project has, through the OD&A environment, implemented a probabilistic approach to analyze and evaluate design uncertainties and understand their impact on safety, reliability, and cost. This paper focuses on the various probabilistic approaches that have been pursued by the ARES I project. Specifically, the paper discusses an integrated functional probabilistic analysis approach that addresses upfront some key areas to support the ARES I Design Analysis Cycle (DAC) pre-Preliminary Design (PD) Phase. This functional approach is a probabilistic, physics-based approach that combines failure probabilities with system dynamics and engineering failure impact models to identify key system risk drivers and potential system design requirements. The paper also discusses other probabilistic risk assessment approaches planned by the ARES I project to support the PD phase and beyond.
Mouthon, L; Rannou, F; Bérezné, A; Pagnoux, C; Arène, J‐P; Foïs, E; Cabane, J; Guillevin, L; Revel, M; Fermanian, J; Poiraudeau, S
2007-01-01
Objective To develop and assess the reliability and construct validity of a scale assessing disability involving the mouth in systemic sclerosis (SSc). Methods We generated a 34‐item provisional scale from mailed responses of patients (n = 74), expert consensus (n = 10) and literature analysis. A total of 71 other SSc patients were recruited. The test–retest reliability was assessed using the intraclass correlation coefficient and divergent validity using the Spearman correlation coefficient. Factor analysis followed by varimax rotation was performed to assess the factorial structure of the scale. Results The item reduction process retained 12 items with 5 levels of answers (total score range 0–48). The mean total score of the scale was 20.3 (SD 9.7). The test–retest reliability was 0.96. Divergent validity was confirmed for global disability (Health Assessment Questionnaire (HAQ), r = 0.33), hand function (Cochin Hand Function Scale, r = 0.37), inter‐incisor distance (r = −0.34), handicap (McMaster‐Toronto Arthritis questionnaire (MACTAR), r = 0.24), depression (Hospital Anxiety and Depression (HAD); HADd, r = 0.26) and anxiety (HADa, r = 0.17). Factor analysis extracted 3 factors with eigenvalues of 4.26, 1.76 and 1.47, explaining 63% of the variance. These 3 factors could be clinically characterised. The first factor (5 items) represents handicap induced by the reduction in mouth opening, the second (5 items) handicap induced by sicca syndrome and the third (2 items) aesthetic concerns. Conclusion We propose a new scale, the Mouth Handicap in Systemic Sclerosis (MHISS) scale, which has excellent reliability and good construct validity, and assesses specifically disability involving the mouth in patients with SSc. PMID:17502364
Digital avionics design and reliability analyzer
NASA Technical Reports Server (NTRS)
1981-01-01
The description and specifications for a digital avionics design and reliability analyzer are given. Its basic function is to provide for the simulation and emulation of the various fault-tolerant digital avionic computer designs that are developed. It has been established that hardware emulation at the gate level will be utilized. The primary benefit of emulation to reliability analysis is that it provides the capability to model a system at a very detailed level. Emulation allows the direct insertion of faults into the system, rather than waiting for actual hardware failures to occur. This allows for controlled and accelerated testing of system reaction to hardware failures. A trade study led to the decision to specify a two-machine system, consisting of an emulation computer connected to a general-purpose computer. Potential computers to serve as the emulation computer are also evaluated.
Reliability analysis of multicellular system architectures for low-cost satellites
NASA Astrophysics Data System (ADS)
Erlank, A. O.; Bridges, C. P.
2018-06-01
Multicellular system architectures are proposed as a solution to the problem of low reliability currently seen amongst small, low cost satellites. In a multicellular architecture, a set of independent k-out-of-n systems mimic the cells of a biological organism. In order to be beneficial, a multicellular architecture must provide more reliability per unit of overhead than traditional forms of redundancy. The overheads include power consumption, volume and mass. This paper describes the derivation of an analytical model for predicting a multicellular system's lifetime. The performance of such architectures is compared against that of several common forms of redundancy and proven to be beneficial under certain circumstances. In addition, the problem of peripheral interfaces and cross-strapping is investigated using a purpose-developed, multicellular simulation environment. Finally, two case studies are presented based on a prototype cell implementation, which demonstrate the feasibility of the proposed architecture.
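The cell clusters described above are k-out-of-n:G systems, whose reliability is the usual binomial tail sum over surviving cells; a minimal sketch (the paper's analytical lifetime model adds overheads and time dependence on top of this):

```python
from math import comb

def k_out_of_n_reliability(k, n, r):
    """Probability that at least k of n identical, independent cells
    survive, each with reliability r (the k-out-of-n:G model)."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

# A 2-out-of-3 cell cluster vs. a single unit of the same reliability:
r = 0.9
print(round(k_out_of_n_reliability(2, 3, r), 6))  # → 0.972
```

The comparison the paper makes is precisely whether gains like 0.972 vs. 0.9 justify the extra power, mass, and volume of the additional cells.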
Thermoelectric Outer Planets Spacecraft (TOPS)
NASA Technical Reports Server (NTRS)
1973-01-01
Research and advanced development work on a ballistic-mode, outer-planet spacecraft using radioisotope thermoelectric generator (RTG) power is reported. The Thermoelectric Outer Planet Spacecraft (TOPS) project was established to provide the advanced systems technology that would allow the realistic estimates of performance, cost, reliability, and scheduling that are required for an actual flight mission. A system design of the complete RTG-powered outer planet spacecraft was made; major technical innovations of certain hardware elements were designed, developed, and tested; and reliability and quality assurance concepts were developed for long-life requirements. At the conclusion of its active phase, the TOPS project reached its principal objectives: a development and experience base was established for project definition and for estimating cost, performance, and reliability, and an understanding of system and subsystem capabilities for successful outer planets missions was achieved. The system design answered long-life requirements with massive redundancy, controlled by on-board analysis of spacecraft performance data.
Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer
NASA Technical Reports Server (NTRS)
Goldberg, J.; Kautz, W. H.; Melliar-Smith, P. M.; Green, M. W.; Levitt, K. N.; Schwartz, R. L.; Weinstock, C. B.
1984-01-01
SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor to processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.
Reliability Constrained Priority Load Shedding for Aerospace Power System Automation
NASA Technical Reports Server (NTRS)
Momoh, James A.; Zhu, Jizhong; Kaddah, Sahar S.; Dolce, James L. (Technical Monitor)
2000-01-01
The need for improving load shedding on board the space station is one of the goals of aerospace power system automation. To accelerate the optimum load-shedding functions, several constraints must be involved. These constraints include the congestion margin determined by weighted probability contingency, the component/system reliability index, and generation rescheduling. The impact of different faults and the indices for computing reliability were defined before optimization. The optimum load schedule is determined based on the priority, value, and location of loads. An optimization strategy capable of handling discrete decision making, such as Everett optimization, is proposed. We extended the Everett method to handle the expected congestion margin and reliability index as constraints. To make it effective for the real-time load dispatch process, a rule-based scheme is incorporated in the optimization method. It assists in selecting which feeder load to shed, along with the location, value, and priority of the load; a cost-benefit analysis of the load profile is also included in the scheme. The scheme is tested using a benchmark NASA system consisting of generators, loads, and a network.
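As a toy illustration of priority- and value-driven shedding, the sketch below greedily drops the least valuable kilowatts first until demand fits the post-contingency capacity. This is a hypothetical stand-in for the Everett (generalized Lagrange multiplier) selection described above; the load names and numbers are invented.

```python
def shed_loads(loads, capacity):
    """loads: list of (name, power_kW, value); returns (kept, shed) names.
    Greedy rule: shed loads in increasing order of value per kW."""
    total = sum(p for _, p, _ in loads)
    order = sorted(loads, key=lambda l: l[2] / l[1])  # least valuable kW first
    kept, shed = list(loads), []
    for load in order:
        if total <= capacity:
            break
        kept.remove(load)
        shed.append(load[0])
        total -= load[1]
    return [l[0] for l in kept], shed

# Invented loads: (name, power in kW, priority value)
kept, shed = shed_loads(
    [("life_support", 3.0, 100), ("experiment", 2.0, 10), ("heater", 1.5, 6)],
    capacity=4.0,
)
print(kept, shed)  # → ['life_support'] ['heater', 'experiment']
```

In the Everett formulation the sort key becomes a Lagrange multiplier threshold, which lets the same selection respect the congestion-margin and reliability-index constraints.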
Sociotechnical attributes of safe and unsafe work systems
Kleiner, Brian M.; Hettinger, Lawrence J.; DeJoy, David M.; Huang, Yuang-Hsiang; Love, Peter E.D.
2015-01-01
Theoretical and practical approaches to safety based on sociotechnical systems principles place heavy emphasis on the intersections between social–organisational and technical–work process factors. Within this perspective, work system design emphasises factors such as the joint optimisation of social and technical processes, a focus on reliable human–system performance and safety metrics as design and analysis criteria, the maintenance of a realistic and consistent set of safety objectives and policies, and regular access to the expertise and input of workers. We discuss three current approaches to the analysis and design of complex sociotechnical systems: human–systems integration, macroergonomics and safety climate. Each approach emphasises key sociotechnical systems themes, and each prescribes a more holistic perspective on work systems than do traditional theories and methods. We contrast these perspectives with historical precedents such as system safety and traditional human factors and ergonomics, and describe potential future directions for their application in research and practice. Practitioner Summary: The identification of factors that can reliably distinguish between safe and unsafe work systems is an important concern for ergonomists and other safety professionals. This paper presents a variety of sociotechnical systems perspectives on intersections between social–organisational and technology–work process factors as they impact work system analysis, design and operation. PMID:25909756
3-Dimensional Root Cause Diagnosis via Co-analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Ziming; Lan, Zhiling; Yu, Li
2012-01-01
With the growth of system size and complexity, reliability has become a major concern for large-scale systems. Upon the occurrence of failure, system administrators typically trace the events in Reliability, Availability, and Serviceability (RAS) logs for root cause diagnosis. However, the RAS log contains only limited diagnosis information. Moreover, the manual processing is time-consuming, error-prone, and not scalable. To address the problem, in this paper we present an automated root cause diagnosis mechanism for large-scale HPC systems. Our mechanism examines multiple logs to provide a 3-D fine-grained root cause analysis. Here, 3-D means that our analysis will pinpoint the failure layer, the time, and the location of the event that causes the problem. We evaluate our mechanism by means of real logs collected from a production IBM Blue Gene/P system at Oak Ridge National Laboratory. It successfully identifies failure layer information for 219 failures during a 23-month period. Furthermore, it effectively identifies the triggering events with time and location information, even when the triggering events occur hundreds of hours before the resulting failures.
Thermal Energy Storage using PCM for Solar Domestic Hot Water Systems: A Review
NASA Astrophysics Data System (ADS)
Khot, S. A.; Sane, N. K.; Gawali, B. S.
2012-06-01
Thermal energy storage using phase change materials (PCM) has received considerable attention in the past two decades for time-dependent energy sources such as solar energy. Several experimental and theoretical analyses made to assess the performance of thermal energy storage systems have demonstrated that PCM-based systems are reliable and viable options. This paper covers such information on PCMs and PCM-based systems developed for the application of solar domestic hot water systems. In addition, an economic analysis of a thermal storage system using PCM in comparison with a conventional storage system helps to validate its commercial possibility. From the economic analysis, it is found that a PCM-based solar domestic hot water system (SWHS) provides 23% more cumulative and life-cycle savings than a conventional SWHS and will continue to perform efficiently even after 15 years due to the application of a non-metallic tank. The payback period of the PCM-based system is also less than that of the conventional system. In conclusion, PCM-based solar water heating systems can meet the requirements of the Indian climatic situation in a cost-effective and reliable manner.
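The payback claim rests on simple-payback arithmetic: extra capital cost divided by annual savings. A sketch with assumed figures (`extra_cost` and `annual_saving` are invented, not the paper's data):

```python
def simple_payback(extra_cost, annual_saving):
    """Years to recover the extra capital cost of the PCM system
    from its annual operating savings (ignores discounting)."""
    return extra_cost / annual_saving

# Hypothetical: PCM tank costs 150 currency units more, saves 60/year.
pcm_payback = simple_payback(extra_cost=150.0, annual_saving=60.0)
print(f"{pcm_payback:.1f} years")  # → 2.5 years
```

A life-cycle comparison like the paper's would additionally discount future savings, but the ordering of the two systems is already visible in this simple form.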
NDE reliability and probability of detection (POD) evolution and paradigm shift
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Surendra
2014-02-18
The subject of NDE Reliability and POD has gone through multiple phases since its humble beginning in the late 1960s. This was followed by several programs, including the important one nicknamed “Have Cracks – Will Travel”, or in short “Have Cracks”, by Lockheed Georgia Company for the US Air Force during 1974–1978. This and other studies ultimately led to a series of developments in the field of reliability and POD, starting from the introduction of fracture mechanics and Damage Tolerant Design (DTD), to the statistical framework by Berens and Hovey in 1981 for POD estimation, to MIL-HDBK-1823 (1999) and 1823A (2009). During the last decade, various groups and researchers have further studied reliability and POD using Model Assisted POD (MAPOD), Simulation Assisted POD (SAPOD), and Bayesian statistics. Each of these developments had one objective, i.e., improving the accuracy of life prediction in components, which to a large extent depends on the reliability and capability of NDE methods. Therefore, it is essential to have reliable detection and sizing of large flaws in components. Currently, POD is used for studying the reliability and capability of NDE methods, though POD data offers no absolute truth regarding NDE reliability, i.e., system capability, effects of flaw morphology, and quantifying the human factors. Furthermore, reliability and POD have been treated as alike in meaning, but POD is not NDE reliability. POD is a subset of reliability, which consists of six phases: 1) sample selection using DOE, 2) NDE equipment setup and calibration, 3) System Measurement Evaluation (SME) including Gage Repeatability and Reproducibility (Gage R and R) and Analysis Of Variance (ANOVA), 4) NDE system capability and electronic and physical saturation, 5) acquiring and fitting data to a model, and data analysis, and 6) POD estimation.
This paper provides an overview of all major POD milestones for the last several decades and discusses the rationale for using Integrated Computational Materials Engineering (ICME), MAPOD, SAPOD, and Bayesian statistics for studying controllable and non-controllable variables, including human factors, for estimating POD. Another objective is to list gaps between “hoped for” versus validated or fielded failed hardware.
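A common POD model family in MIL-HDBK-1823A is log-logistic in flaw size. The sketch below evaluates such a curve and its a90 point (the flaw size detected with 90% probability); `MU` and `SIGMA` are assumed fitted parameters, not real inspection data.

```python
import math

MU, SIGMA = math.log(0.05), 0.3   # assumed fitted parameters (log-size units)

def pod(a):
    """Log-logistic POD curve: probability of detecting a flaw of size a."""
    return 1.0 / (1.0 + math.exp(-(math.log(a) - MU) / SIGMA))

# a90 from the logistic quantile: POD = 0.9 when (ln a - MU)/SIGMA = ln 9
a90 = math.exp(MU + SIGMA * math.log(9.0))
print(f"a90 = {a90:.4f}, POD(a90) = {pod(a90):.2f}")
```

In practice an a90/95 value (the 95% lower confidence bound on a90) is quoted, which requires the covariance of the fitted parameters as well.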
Ringdal, Kjetil G; Skaga, Nils Oddvar; Steen, Petter Andreas; Hestnes, Morten; Laake, Petter; Jones, J Mary; Lossius, Hans Morten
2013-01-01
Pre-injury comorbidities can influence the outcomes of severely injured patients. Pre-injury comorbidity status, graded according to the American Society of Anesthesiologists Physical Status (ASA-PS) classification system, is an independent predictor of survival in trauma patients and is recommended as a comorbidity score in the Utstein Trauma Template for Uniform Reporting of Data. Little is known about the reliability of pre-injury ASA-PS scores. The objective of this study was to examine whether the pre-injury ASA-PS system was a reliable scale for grading comorbidity in trauma patients. Nineteen Norwegian trauma registry coders were invited to participate in a reliability study in which 50 real but anonymised patient medical records were distributed. Reliability was analysed using quadratic weighted kappa (κw) analysis with 95% CI as the primary outcome measure and unweighted kappa (κ) analysis, which included unknown values, as a secondary outcome measure. Fifteen of the invitees responded to the invitation, and ten participated. We found moderate (κw = 0.77 [95% CI: 0.64-0.87]) to substantial (κw = 0.95 [95% CI: 0.89-0.99]) rater-against-reference standard reliability using κw, and fair (κ = 0.46 [95% CI: 0.29-0.64]) to substantial (κ = 0.83 [95% CI: 0.68-0.94]) reliability using κ. The inter-rater reliability ranged from moderate (κw = 0.66 [95% CI: 0.45-0.81]) to substantial (κw = 0.96 [95% CI: 0.88-1.00]) for κw, and from slight (κ = 0.36 [95% CI: 0.21-0.54]) to moderate (κ = 0.75 [95% CI: 0.62-0.89]) for κ. The rater-against-reference standard reliability varied from moderate to substantial for the primary outcome measure and from fair to substantial for the secondary outcome measure. The study findings indicate that the pre-injury ASA-PS scale is a reliable score for classifying comorbidity in trauma patients. Copyright © 2012 Elsevier Ltd. All rights reserved.
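Quadratic weighted kappa, the study's primary outcome measure, penalizes disagreements by the squared distance between ordinal categories; a minimal sketch with toy ratings (not the trauma registry data):

```python
import numpy as np

def quadratic_weighted_kappa(a, b, n_cat):
    """Quadratic weighted kappa between two raters' category assignments
    (categories coded 0..n_cat-1)."""
    obs = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):
        obs[i, j] += 1            # observed joint rating counts
    obs /= obs.sum()
    # Expected matrix under rater independence (outer product of marginals)
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    idx = np.arange(n_cat)
    w = (idx[:, None] - idx[None, :]) ** 2 / (n_cat - 1) ** 2  # quadratic weights
    return 1 - (w * obs).sum() / (w * exp).sum()

# Identical ratings give kappa = 1.
print(quadratic_weighted_kappa([0, 1, 2, 3, 2], [0, 1, 2, 3, 2], 4))  # → 1.0
```

Because the weights grow with the square of the category distance, κw rewards near-misses on an ordinal scale like ASA-PS, which is why it typically exceeds the unweighted κ reported alongside it.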
CRAX/Cassandra Reliability Analysis Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, D.
1999-02-10
Over the past few years Sandia National Laboratories has been moving toward an increased dependence on model- or physics-based analyses as a means to assess the impact of long-term storage on the nuclear weapons stockpile. These deterministic models have also been used to evaluate replacements for aging systems, often involving commercial off-the-shelf (COTS) components. In addition, the models have been used to assess the performance of replacement components manufactured via unique, small-lot production runs. In either case, the limited amount of available test data dictates that the only logical course of action to characterize the reliability of these components is to specifically consider the uncertainties in material properties, operating environment, etc. within the physics-based (deterministic) model. This not only provides the ability to statistically characterize the expected performance of the component or system, but also provides direction regarding the benefits of additional testing on specific components within the system. An effort was therefore initiated to evaluate the capabilities of existing probabilistic methods and, if required, to develop new analysis methods to support the inclusion of uncertainty in the classical design tools used by analysts and design engineers at Sandia. The primary result of this effort is the CRAX (Cassandra Exoskeleton) reliability analysis software.
Computing Lives And Reliabilities Of Turboprop Transmissions
NASA Technical Reports Server (NTRS)
Coy, J. J.; Savage, M.; Radil, K. C.; Lewicki, D. G.
1991-01-01
Computer program PSHFT calculates lifetimes of a variety of aircraft transmissions. It consists of a main program, a series of subroutines applying to specific configurations, generic subroutines for analysis of the properties of components, subroutines for analysis of the system, and a common block. The main program selects the routines used in the analysis and causes them to operate in the desired sequence. The configuration-specific subroutines read in the configuration data, perform force and life analyses for the components (with the help of the generic component-property-analysis subroutines), fill the property array, call the system-analysis routines, and finally print out the results of the analysis for the system and components. Written in FORTRAN 77(IV).
Towards automatic Markov reliability modeling of computer architectures
NASA Technical Reports Server (NTRS)
Liceaga, C. A.; Siewiorek, D. P.
1986-01-01
The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
Addressing Uniqueness and Unison of Reliability and Safety for a Better Integration
NASA Technical Reports Server (NTRS)
Huang, Zhaofeng; Safie, Fayssal
2016-01-01
Over time, it has been observed that Safety and Reliability have not been clearly differentiated, which leads to confusion, inefficiency, and, sometimes, counter-productive practices in executing each of these two disciplines. It is imperative to address this situation to help the Reliability and Safety disciplines improve their effectiveness and efficiency. The paper poses an important question: "Safety and Reliability - Are they unique or unisonous?" To answer the question, the paper reviews several of the most commonly used analyses from each of the disciplines, namely, FMEA, reliability allocation and prediction, reliability design involvement, system safety hazard analysis, Fault Tree Analysis, and Probabilistic Risk Assessment. The paper points out the uniqueness and unison of Safety and Reliability in their respective roles, requirements, approaches, and tools, and presents some suggestions for enhancing and improving the individual disciplines, as well as promoting the integration of the two. The paper concludes that Safety and Reliability are unique, but complement each other in many aspects, and need to be integrated. In particular, the individual roles of Safety and Reliability need to be differentiated: Safety is to ensure and assure the product meets safety requirements, goals, or desires, and Reliability is to ensure and assure maximum achievability of intended design functions. With the integration of Safety and Reliability, personnel can be shared, tools and analyses have to be integrated, and skill sets can be possessed by the same person, with the purpose of providing the best value to a product development.
A design for a new catalog manager and associated file management for the Land Analysis System (LAS)
NASA Technical Reports Server (NTRS)
Greenhagen, Cheryl
1986-01-01
Due to the large number of different types of files used in an image processing system, a mechanism for file management beyond the bounds of typical operating systems is necessary. The Transportable Applications Executive (TAE) Catalog Manager was written to meet this need. Land Analysis System (LAS) users at the EROS Data Center (EDC) encountered some problems in using the TAE catalog manager, including catalog corruption, networking difficulties, and lack of a reliable tape storage and retrieval capability. These problems, coupled with the complexity of the TAE catalog manager, led to the decision to design a new file management system for LAS, tailored to the needs of the EDC user community. This design effort, which addressed catalog management, label services, associated data management, and enhancements to LAS applications, is described. The new file management design will provide many benefits, including improved system integration, increased flexibility, enhanced reliability, enhanced portability, improved performance, and improved maintainability.
A Probabilistic System Analysis of Intelligent Propulsion System Technologies
NASA Technical Reports Server (NTRS)
Tong, Michael T.
2007-01-01
NASA's Intelligent Propulsion System Technology (Propulsion 21) project focuses on developing adaptive technologies that will enable commercial gas turbine engines to produce fewer emissions and less noise while increasing reliability. It features adaptive technologies that have included active tip-clearance control for the turbine and compressor, active combustion control, turbine aero-thermal and flow control, and enabling technologies such as sensors which are reliable at high operating temperatures and are minimally intrusive. A probabilistic system analysis is performed to evaluate the impact of these technologies on aircraft CO2 (directly proportional to fuel burn) and LTO (landing and takeoff) NOx reductions. A 300-passenger aircraft, with two 396-kN thrust (85,000-pound) engines, is chosen for the study. The results show that NASA's Intelligent Propulsion System technologies have the potential to significantly reduce CO2 and NOx emissions. The results are used to support informed decision-making on the development of the intelligent propulsion system technology portfolio for CO2 and NOx reductions.
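A probabilistic system analysis of this kind is typically a Monte Carlo propagation of input uncertainties to an output distribution. The sketch below uses invented benefit distributions for two of the technologies named above, not NASA's actual engine models.

```python
import random

# Toy Monte Carlo propagation: total fuel-burn (CO2) benefit from two
# uncertain technology contributions. All distributions are illustrative.
random.seed(1)
N = 100_000
samples = []
for _ in range(N):
    tip_clearance = random.gauss(2.0, 0.5)          # % benefit, assumed normal
    combustion = random.triangular(0.5, 2.5, 1.0)   # % benefit, assumed triangular
    samples.append(tip_clearance + combustion)

samples.sort()
mean = sum(samples) / N
p5, p95 = samples[int(0.05 * N)], samples[int(0.95 * N)]
print(f"mean benefit {mean:.2f}%, 90% interval [{p5:.2f}, {p95:.2f}]%")
```

Reporting a percentile interval rather than a single point estimate is what makes the result useful for the risk-informed portfolio decisions the abstract mentions.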
Best Practices for Reliable and Robust Spacecraft Structures
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.; Murthy, P. L. N.; Patel, Naresh R.; Bonacuse, Peter J.; Elliott, Kenny B.; Gordon, S. A.; Gyekenyesi, J. P.; Daso, E. O.; Aggarwal, P.; Tillman, R. F.
2007-01-01
A study was undertaken to capture the best practices for the development of reliable and robust spacecraft structures for NASA's next-generation cargo and crewed launch vehicles. In this study, NASA heritage programs such as Mercury, Gemini, Apollo, and the Space Shuttle program were examined. A series of lessons learned during the NASA and DoD heritage programs are captured. The processes that "make the right structural system" are examined along with the processes to "make the structural system right". The impact of technology advancements in materials and in analysis and testing methods on the reliability and robustness of spacecraft structures is studied. The best practices and lessons learned are extracted from these studies. Since the first human space flight, the best practices for reliable and robust spacecraft structures appear to be well established, understood, and articulated by each generation of designers and engineers. However, these best practices apparently have not always been followed. When best practices are ignored or short cuts are taken, risks accumulate, and reliability suffers. Thus program managers need to be vigilant of circumstances and situations that tend to violate best practices. Adherence to the best practices may help develop spacecraft systems with high reliability and robustness against certain anomalies and unforeseen events.
Doğramac, Sera N; Watsford, Mark L; Murphy, Aron J
2011-03-01
Subjective notational analysis can be used to track players and analyse movement patterns during match-play of team sports such as futsal. The purpose of this study was to establish the validity and reliability of the Event Recorder for subjective notational analysis. A course replicating ten minutes of futsal match-play movement patterns was designed, and ten participants completed it. The course allowed a comparison of data derived from subjective notational analysis to the known distances of the course and to GPS data. The study analysed six locomotor activity categories, focusing on total distance covered, total duration of activities and total frequency of activities. The values between the known measurements and the Event Recorder were similar, whereas the majority of significant differences were found between the Event Recorder and GPS values. The reliability of subjective notational analysis was established by analysing all ten participants on two occasions, as well as analysing five random futsal players twice during match-play. Subjective notational analysis is a valid and reliable method of tracking player movements, and may be a preferred and more effective method than GPS, particularly for indoor sports such as futsal, and for field sports where short distances and changes in direction are observed.
Advanced techniques in reliability model representation and solution
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Nicol, David M.
1992-01-01
The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.
Laidoune, Abdelbaki; Rahal Gharbi, Med El Hadi
2016-09-01
The influence of sociocultural factors on human reliability within open sociotechnical systems is highlighted. The design of such systems is enhanced by experience feedback. The study focused on a survey based on the observation of working cases, the processing of incident/accident statistics, and semistructured interviews in the qualitative part. To consolidate the study approach, we used a schedule designed for standard statistical measurement. To avoid bias, the schedule covered an exhaustive list of worker categories, including age, sex, educational level, prescribed task, and accountability level. The survey was reinforced by a schedule distributed to 300 workers belonging to two oil companies. This schedule comprises 30 items related to six main factors that influence human reliability. Qualitative observations and schedule data processing showed that sociocultural factors can influence operator behaviors both negatively and positively, and that they influence human reliability in both qualitative and quantitative ways. The proposed model shows how reliability can be enhanced by measures such as experience feedback based on, for example, safety improvements, training, and information, together with continuous system improvements that address sociocultural realities and reduce negative behaviors.
Inertial Measurement Units for Clinical Movement Analysis: Reliability and Concurrent Validity
Nicholas, Kevin; Sparkes, Valerie; Sheeran, Liba; Davies, Jennifer L
2018-01-01
The aim of this study was to investigate the reliability and concurrent validity of a commercially available Xsens MVN BIOMECH inertial-sensor-based motion capture system during clinically relevant functional activities. A clinician with no prior experience of motion capture technologies and an experienced clinical movement scientist each assessed 26 healthy participants within each of two sessions using a camera-based motion capture system and the MVN BIOMECH system. Participants performed overground walking, squatting, and jumping. Sessions were separated by 4 ± 3 days. Reliability was evaluated using intraclass correlation coefficient and standard error of measurement, and validity was evaluated using the coefficient of multiple correlation and the linear fit method. Day-to-day reliability was generally fair-to-excellent in all three planes for hip, knee, and ankle joint angles in all three tasks. Within-day (between-rater) reliability was fair-to-excellent in all three planes during walking and squatting, and poor-to-high during jumping. Validity was excellent in the sagittal plane for hip, knee, and ankle joint angles in all three tasks and acceptable in frontal and transverse planes in squat and jump activity across joints. Our results suggest that the MVN BIOMECH system can be used by a clinician to quantify lower-limb joint angles in clinically relevant movements. PMID:29495600
NASA Astrophysics Data System (ADS)
Mullin, Daniel Richard
2013-09-01
The majority of space programs, whether manned or unmanned, for science or exploration, require that a Failure Modes, Effects and Criticality Analysis (FMECA) be performed as part of their safety and reliability activities. This comes as no surprise given that FMECAs have been an integral part of the reliability engineer's toolkit since the 1950s. The reasons for performing a FMECA are well known, including fleshing out system single-point failures, system hazards, and critical components and functions. However, the author's ten years of experience as a space systems safety and reliability engineer show that the FMECA is often performed as an afterthought, simply to meet contract deliverable requirements, and is often started long after the system requirements allocation and preliminary design have been completed. Important qualitative and quantitative components that can provide useful data to all project stakeholders are also often missing: probability of occurrence, probability of detection, time to effect, time to detect and, finally, the Risk Priority Number. This is unfortunate, as the FMECA is a powerful system design tool that, when used effectively, can help optimize system function while minimizing the risk of failure. When performed as early as possible, in conjunction with writing the top-level system requirements, the FMECA can provide instant feedback on the viability of the requirements while providing a valuable sanity check early in the design process. It can indicate which areas of the system will require redundancy and which areas are inherently the most risky from the onset.
Based on historical and practical examples, it is this author's contention that FMECAs are an immense source of important information for all stakeholders in a given project and can provide several benefits, including efficient project management with respect to cost and schedule, system engineering and requirements management, assembly, integration and test (AI&T), and operations, if applied early, performed to completion, and updated along with the system design.
Review and critical analysis: Rolling-element bearings for system life and reliability
NASA Technical Reports Server (NTRS)
Irwin, A. S.; Anderson, W. J.; Derner, W. J.
1985-01-01
A ball and cylindrical roller bearing technical specification which incorporates the latest state-of-the-art advancements was prepared for the purpose of improving bearing reliability in U.S. Army aircraft. The current U.S. Army aviation bearing designs and applications, including life analyses, were analyzed. A bearing restoration and refurbishment specification was prepared to improve bearing availability.
Probabilistic Prediction of Lifetimes of Ceramic Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Gyekenyesi, John P.; Jadaan, Osama M.; Palfi, Tamas; Powers, Lynn; Reh, Stefan; Baker, Eric H.
2006-01-01
ANSYS/CARES/PDS is a software system that combines the ANSYS Probabilistic Design System (PDS) software with a modified version of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) Version 6.0 software. [A prior version of CARES/Life was reported in Program for Evaluation of Reliability of Ceramic Parts (LEW-16018), NASA Tech Briefs, Vol. 20, No. 3 (March 1996), page 28.] CARES/Life models effects of stochastic strength, slow crack growth, and stress distribution on the overall reliability of a ceramic component. The essence of the enhancement in CARES/Life 6.0 is the capability to predict the probability of failure using results from transient finite-element analysis. ANSYS PDS models the effects of uncertainty in material properties, dimensions, and loading on the stress distribution and deformation. ANSYS/CARES/PDS accounts for the effects of probabilistic strength, probabilistic loads, probabilistic material properties, and probabilistic tolerances on the lifetime and reliability of the component. Even failure probability becomes a stochastic quantity that can be tracked as a response variable. ANSYS/CARES/PDS enables tracking of all stochastic quantities in the design space, thereby enabling more precise probabilistic prediction of lifetimes of ceramic components.
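The stochastic-strength modelling that CARES-type analysis builds on can be illustrated with a minimal sketch. This is not the CARES/Life implementation, and every parameter below (Weibull modulus, characteristic strength, volumes, stress) is hypothetical; the point is only the size effect that makes a brittle component's failure probability grow with its stressed volume:

```python
import math

# Minimal sketch (hypothetical parameters) of the Weibull strength model
# underlying CARES-type reliability analysis: for a brittle component under
# a uniform stress sigma over an effective volume V,
#   P_f = 1 - exp(-(V/V0) * (sigma/sigma0)**m)
def failure_probability(sigma, V, m=10.0, sigma0=400.0, V0=1.0):
    """Two-parameter Weibull failure probability with volume scaling."""
    return 1.0 - math.exp(-(V / V0) * (sigma / sigma0) ** m)

# Size effect: at the same stress, a larger stressed volume is less reliable.
for V in (1.0, 10.0, 100.0):
    print(f"V = {V:6.1f} mm^3 -> P_f = {failure_probability(300.0, V):.4f}")
```

In a full CARES-style analysis the stress field comes from finite-element results rather than being uniform, and the exponent is an integral over the component's volume.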
Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network
NASA Astrophysics Data System (ADS)
Li, Zhiqiang; Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu
2018-04-01
This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely the conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model and subsequently extended to a DBN. The results show the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM, obtained with an absorbing set by solving differential equations and then verified. Through forward inference, the reliability of the control unit is determined under the different modes. Finally, weak nodes in the control unit are identified.
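As a minimal illustration of the Markov part of such an analysis (not the authors' model; the element, its states and all transition rates below are hypothetical), a multi-state element with an absorbing failure state can be evaluated by integrating the state equations dp/dt = pQ, with reliability read off as the probability of not being in the absorbing state:

```python
import numpy as np

# Minimal sketch (hypothetical rates): a three-state degrading element
# (good -> degraded -> failed) without repair. The failed state is
# absorbing, so reliability is R(t) = 1 - P(failed at t).
Q = np.array([
    [-0.02,  0.02,  0.00],   # good:     degrades at 0.02 per hour
    [ 0.00, -0.05,  0.05],   # degraded: fails at 0.05 per hour
    [ 0.00,  0.00,  0.00],   # failed:   absorbing
])

def state_probs(p0, Q, t, steps=10000):
    """Integrate the state equations dp/dt = p Q with forward Euler."""
    p = np.array(p0, dtype=float)
    dt = t / steps
    for _ in range(steps):
        p = p + dt * (p @ Q)
    return p

p = state_probs([1.0, 0.0, 0.0], Q, t=50.0)
reliability = 1.0 - p[2]
print(f"state probabilities at t = 50 h: {p.round(4)}, R(50) = {reliability:.4f}")
```

A DBN discretizes the same dynamics in time slices, with the one-step transition probabilities playing the role of the conditional probability tables.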
Human reliability in petrochemical industry: an action research.
Silva, João Alexandre Pinheiro; Camarotto, João Alberto
2012-01-01
This paper aims to identify conflicts and gaps between the operators' strategies and actions and the organizational managerial approach for human reliability. In order to achieve these goals, the research approach adopted encompasses literature review, mixing action research methodology and Ergonomic Workplace Analysis in field research. The result suggests that the studied company has a classical and mechanistic point of view focusing on error identification and building barriers through procedures, checklists and other prescription alternatives to improve performance in reliability area. However, it was evident the fundamental role of the worker as an agent of maintenance and construction of system reliability during the action research cycle.
Modeling Imperfect Generator Behavior in Power System Operation Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krad, Ibrahim
A key component in power system operations is the use of computer models to quickly study and analyze different operating conditions and futures in an efficient manner. The output of these models is sensitive to the data used in them as well as the assumptions made during their execution. One typical assumption is that generators and load assets perfectly follow operator control signals. While this is a valid simulation assumption, generators may not always accurately follow control signals. This imperfect response of generators could impact cost and reliability metrics. This paper proposes a generator model that captures this imperfect behavior and examines its impact on production costs and reliability metrics using a steady-state power system operations model. Preliminary analysis shows that while costs remain relatively unchanged, there could be significant impacts on reliability metrics.
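The imperfect-following idea can be sketched with a toy model (not the paper's model; the ramp-rate limit and deadband values are hypothetical): the unit's actual output chases the dispatch setpoint but is constrained by a ramp-rate limit and ignores small errors inside a control deadband, so it deviates from the signal it is supposed to track:

```python
# Minimal sketch (hypothetical parameters) of imperfect generator response:
# the unit follows a dispatch signal subject to a ramp-rate limit and a
# control deadband, so its actual output lags and deviates from the setpoint.
def respond(setpoint, output, ramp_limit=2.0, deadband=0.5):
    """One time step of imperfect signal following (MW, MW per step)."""
    error = setpoint - output
    if abs(error) <= deadband:                      # small errors are ignored
        return output
    step = max(-ramp_limit, min(ramp_limit, error)) # ramp-rate limited move
    return output + step

signal = [100, 105, 110, 110, 104, 100]             # dispatch setpoints (MW)
out = 100.0
trace = []
for sp in signal:
    out = respond(sp, out)
    trace.append(out)
print(trace)  # the gap between trace and signal is the imperfect behavior
```

In a production-cost model this residual tracking error is what would be charged against regulation reserves and reliability metrics.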
MSIX - A general and user-friendly platform for RAM analysis
NASA Astrophysics Data System (ADS)
Pan, Z. J.; Blemel, Peter
The authors present a CAD (computer-aided design) platform supporting RAM (reliability, availability, and maintainability) analysis with efficient system description and alternative evaluation. The design concepts, implementation techniques, and application results are described. This platform is user-friendly because of its graphic environment, drawing facilities, object orientation, self-tutoring, and access to the operating system. The programs' independence and portability make them generally applicable to various analysis tasks.
A hierarchical approach to reliability modeling of fault-tolerant systems. M.S. Thesis
NASA Technical Reports Server (NTRS)
Gossman, W. E.
1986-01-01
A methodology for performing fault-tolerant system reliability analysis is presented. The method decomposes a system into its subsystems, evaluates event rates derived from each subsystem's conditional state probability vector, and incorporates those results into a hierarchical Markov model of the system. This is done in a manner that addresses the failure sequence dependence associated with the system's redundancy management strategy. The method is derived for application to a specific system definition. Results are presented that compare the hierarchical model's unreliability prediction to that of a more complicated standard Markov model of the system. The results for the example given indicate that the hierarchical method predicts system unreliability to a desirable level of accuracy while achieving significant computational savings relative to a component-level Markov model of the system.
Cuchna, Jennifer W; Hoch, Matthew C; Hoch, Johanna M
2016-05-01
To synthesize the literature and perform a meta-analysis of both the interrater and intrarater reliability of the FMS™, the Academic Search Complete, CINAHL, Medline and SportsDiscus databases were systematically searched from inception to March 2015. Studies were included if the primary purpose was to determine the interrater or intrarater reliability of the FMS™, all 7 items were assessed and scored using the standard scoring criteria, a composite score was provided, and intraclass correlation coefficients (ICCs) were employed. Studies were excluded if reliability was not the primary aim, participants were injured at data collection, or a modified FMS™ or scoring system was utilized. Seven papers were included: 6 assessing interrater and 6 assessing intrarater reliability. There was moderate evidence of good interrater reliability, with a summary ICC of 0.843 (95% CI = 0.640, 0.936; Q7 = 84.915, p < 0.0001). There was moderate evidence of good intrarater reliability, with a summary ICC of 0.869 (95% CI = 0.785, 0.921; Q12 = 60.763, p < 0.0001). There was moderate evidence for both forms of reliability. The sensitivity assessments revealed that this interpretation is stable and not influenced by any one study. Overall, the FMS™ is a reliable tool for clinical practice. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Gerke, R. David; Sandor, Mike; Agarwal, Shri; Moor, Andrew F.; Cooper, Kim A.
2000-01-01
Engineers within the commercial and aerospace industries are using trade-off and risk analysis to aid in reducing spacecraft system cost while increasing performance and maintaining high reliability. In many cases, Commercial Off-The-Shelf (COTS) components, which include Plastic Encapsulated Microcircuits (PEMs), are candidate packaging technologies for spacecraft due to their lower cost, lower weight and enhanced functionality. Establishing and implementing a parts program that effectively and reliably makes use of these potentially less reliable, but state-of-the-art, devices has become a significant portion of the job for the parts engineer. Assembling a reliable high-performance electronic system that includes COTS components requires that the end user assume a risk. To minimize the risk involved, companies have developed methodologies by which they use accelerated stress testing to assess the product and reduce the risk to the total system. Currently, there are no industry-standard procedures for accomplishing this risk mitigation. This paper presents the approaches for reducing the risk of using PEMs devices in space flight systems as developed by two independent laboratories. The JPL procedure involves primarily a tailored screening with an accelerated stress philosophy, while the APL procedure is primarily a lot qualification procedure. Both laboratories have successfully reduced the risk of using the particular devices for their respective systems and mission requirements.
Kunkel, Amber; McLay, Laura A
2013-03-01
Emergency medical services (EMS) provide life-saving care and hospital transport to patients with severe trauma or medical conditions. Severe weather events, such as snow events, may lead to adverse patient outcomes by increasing call volumes and service times. Adequate staffing levels during such weather events are critical for ensuring that patients receive timely care. To determine staffing levels that depend on weather, we propose a model that uses a discrete event simulation of a reliability model to identify minimum staffing levels that provide timely patient care, with regression used to provide the input parameters. The system is said to be reliable if there is a high degree of confidence that ambulances can immediately respond to a given proportion of patients (e.g., 99 %). Four weather scenarios capture varying levels of snow falling and snow on the ground. An innovative feature of our approach is that we evaluate the mitigating effects of different extrinsic response policies and intrinsic system adaptation. The models use data from Hanover County, Virginia to quantify how snow reduces EMS system reliability and necessitates increasing staffing levels. The model and its analysis can assist in EMS preparedness by providing a methodology to adjust staffing levels during weather events. A key observation is that when it is snowing, intrinsic system adaptation has similar effects on system reliability as one additional ambulance.
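A stripped-down version of such a simulation (not the authors' model; the arrival rate, mean service time and the 99% target below are hypothetical) can search for the smallest fleet size meeting the reliability target: simulate calls arriving to s ambulances and count a call as covered only if a unit is free at the moment it arrives:

```python
import heapq
import random

# Minimal sketch (hypothetical rates) of the staffing idea: simulate calls
# arriving to s ambulances; a call is "covered" if a unit is free on
# arrival. Find the smallest s giving at least 99% coverage.
def coverage(s, rate=4.0, mean_service=1.0, horizon=20000, seed=1):
    """Fraction of calls that find a free unit (Poisson arrivals, exp. service)."""
    rng = random.Random(seed)
    busy = []                                   # heap of unit free-up times
    t, served, total = 0.0, 0, 0
    while t < horizon:
        t += rng.expovariate(rate)              # next call arrives
        while busy and busy[0] <= t:
            heapq.heappop(busy)                 # units that have finished
        total += 1
        if len(busy) < s:                       # a unit is free: respond
            served += 1
            heapq.heappush(busy, t + rng.expovariate(1 / mean_service))
    return served / total

s = 1
while coverage(s) < 0.99:
    s += 1
print(f"minimum staffing for 99% coverage: {s} ambulances")
```

Re-running the search with weather-scenario parameters (higher call rates, longer service times) is how the staffing level would be made weather-dependent.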
Reviewing Reliability and Validity of Information for University Educational Evaluation
NASA Astrophysics Data System (ADS)
Otsuka, Yusaku
To better utilize evaluations in higher education, it is necessary to share the methods of reviewing reliability and validity of examination scores and grades, and to accumulate and share data for confirming results. Before the GPA system is first introduced into a university or college, the reliability of examination scores and grades, especially for essay examinations, must be assured. Validity is a complicated concept, so should be assured in various ways, including using professional audits, theoretical models, and statistical data analysis. Because individual students and teachers are continually improving, using evaluations to appraise their progress is not always compatible with using evaluations in appraising the implementation of accountability in various departments or the university overall. To better utilize evaluations and improve higher education, evaluations should be integrated into the current system by sharing the vision of an academic learning community and promoting interaction between students and teachers based on sufficiently reliable and validated evaluation tools.
NASA Astrophysics Data System (ADS)
Park, Joon-Sang; Lee, Uichin; Oh, Soon Young; Gerla, Mario; Lun, Desmond Siumen; Ro, Won Woo; Park, Joonseok
Vehicular ad hoc networks (VANETs) aim to enhance vehicle navigation safety by providing an early warning system: any chance of an accident is communicated through wireless communication between vehicles. For the warning system to work, it is crucial that safety messages be reliably delivered to the target vehicles in a timely manner, and thus a reliable and timely data dissemination service is the key building block of a VANET. A data mulling technique combined with three strategies, network coding, erasure coding and repetition coding, is proposed for the reliable and timely data dissemination service. In particular, vehicles travelling in the opposite direction on a highway are exploited as data mules, mobile nodes that physically deliver data to destinations, to overcome the intermittent network connectivity caused by sparse vehicle traffic. Using analytic models, we show that in such a highway data mulling scenario the network coding based strategy outperforms the erasure coding and repetition based strategies.
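The advantage of coding over plain repetition in this setting can be sketched with a simple delivery-probability comparison (not the paper's analytic model; all numbers are hypothetical). With rateless/erasure-style coding, any k of the n transmitted packets reconstruct the message; with repetition, every one of the k original blocks must get through at least once:

```python
from math import comb

# Minimal sketch (hypothetical numbers) of why coding helps data mules:
# a message is split into k blocks and n transmissions are made, each
# received independently with probability p.
def p_coding(n, k, p):
    """Coded: success if ANY k of the n packets are received."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def p_repetition(n, k, p):
    """Repetition: each block sent n//k times; every block needs >= 1 success."""
    r = n // k
    return (1 - (1 - p)**r) ** k

n, k, p = 20, 5, 0.5
print(f"coding:     {p_coding(n, k, p):.4f}")
print(f"repetition: {p_repetition(n, k, p):.4f}")
```

The gap widens as losses grow, which is why coding suits intermittently connected data-mule links.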
Fatigue reliability of deck structures subjected to correlated crack growth
NASA Astrophysics Data System (ADS)
Feng, G. Q.; Garbatov, Y.; Guedes Soares, C.
2013-12-01
The objective of this work is to analyse the fatigue reliability of deck structures subjected to correlated crack growth. The stress intensity factors of the correlated cracks are obtained by finite element analysis, and based on these the geometry correction functions are derived. Monte Carlo simulations are applied to predict the statistical descriptors of correlated cracks based on the Paris-Erdogan equation. A probabilistic model of crack growth as a function of time is used to analyse the fatigue reliability of deck structures accounting for crack propagation correlation. A deck structure is modelled as a series system of stiffened panels, where a stiffened panel is regarded as a parallel system composed of plates and longitudinal stiffeners. It has been proven that the method developed here can be conveniently applied to perform the fatigue reliability assessment of structures subjected to correlated crack growth.
Monitoring Energy Balance in Breast Cancer Survivors Using a Mobile App: Reliability Study
Lozano-Lozano, Mario; Galiano-Castillo, Noelia; Martín-Martín, Lydia; Pace-Bedetti, Nicolás; Fernández-Lao, Carolina; Cantarero-Villanueva, Irene
2018-01-01
Background The majority of breast cancer survivors do not meet recommendations in terms of diet and physical activity. To address this problem, we developed a mobile health (mHealth) app for assessing and monitoring healthy lifestyles in breast cancer survivors, called the Energy Balance on Cancer (BENECA) mHealth system. The BENECA mHealth system is a novel and interactive mHealth app, which allows breast cancer survivors to engage themselves in their energy balance monitoring. BENECA was designed to facilitate adherence to healthy lifestyles in an easy and intuitive way. Objective The objective of the study was to assess the concurrent validity and test-retest reliability between the BENECA mHealth system and the gold standard assessment methods for diet and physical activity. Methods A reliability study was conducted with 20 breast cancer survivors. In the study, triaxial accelerometers (ActiGraph GT3X+) were used as the gold standard for 8 consecutive days, in addition to two 24-hour dietary recalls, 4 dietary records, and sociodemographic questionnaires. Two-way random-effects intraclass correlation coefficients, a linear regression analysis, and a Passing-Bablok regression were calculated. Results The reliability estimates were very high for all variables (alpha≥.90). The lowest reliability was found in fruit and vegetable intakes (alpha=.94). The reliability between the accelerometer and the dietary assessment instruments against the BENECA system was very high (intraclass correlation coefficient=.90). We found a mean match rate of 93.51% between instruments and a mean phantom rate of 3.35%. The Passing-Bablok regression analysis did not show considerable bias in fat percentage, portions of fruits and vegetables, or minutes of moderate to vigorous physical activity. Conclusions The BENECA mHealth app could be a new tool to measure energy balance in breast cancer survivors in a reliable and simple way.
Our results support the use of this technology to not only to encourage changes in breast cancer survivors' lifestyles, but also to remotely monitor energy balance. Trial Registration ClinicalTrials.gov NCT02817724; https://clinicaltrials.gov/ct2/show/NCT02817724 (Archived by WebCite at http://www.webcitation.org/6xVY1buCc) PMID:29588273
Reliability of Beam Loss Monitor Systems for the Large Hadron Collider
NASA Astrophysics Data System (ADS)
Guaglio, G.; Dehning, B.; Santoni, C.
2005-06-01
The increase of beam energy and beam intensity, together with the use of superconducting magnets, opens new failure scenarios and brings new criticalities for the whole accelerator protection system. For the LHC beam loss protection system, the failure rate and the availability requirements have been evaluated using the Safety Integrity Level (SIL) approach. A downtime cost evaluation is used as input for the SIL approach. The most critical systems, which contribute to the final SIL value, are the dump system, the interlock system, the beam loss monitors system, and the energy monitor system. The Beam Loss Monitors System (BLMS) is critical for short and intense particle losses at 7 TeV and is assisted by the Fast Beam Current Decay Monitors at 450 GeV. At medium and longer loss durations it is assisted by other systems, such as the quench protection system and the cryogenic system. For the BLMS, hardware and software have been evaluated in detail. The reliability input figures have been collected using historical data from the SPS, temperature and radiation damage experimental data, as well as standard databases. All the data have been processed by reliability software (Isograph). The analysis spans from the component data to the system configuration.
14 CFR 417.307 - Support systems.
Code of Federal Regulations, 2014 CFR
2014-01-01
... subsystem, component, and part that can affect the reliability of the support system must have written...) Data processing, display, and recording. A flight safety system must include one or more subsystems... accordance with the flight safety analysis required by subpart C of this part; (5) Display and record raw...
NASA Technical Reports Server (NTRS)
Teper, G. L.; Hon, R. H.; Smyth, R. K.
1977-01-01
Specifications which define the system functional requirements, the subsystem and interface needs, and other requirements such as maintainability, modularity, and reliability are summarized. A design definition of all required avionics functions and a system risk analysis are presented.
Probabilistic structural analysis methods for improving Space Shuttle engine reliability
NASA Technical Reports Server (NTRS)
Boyce, L.
1989-01-01
Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.
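The kind of sensitivity ranking described above can be illustrated with a crude Monte Carlo sketch (not NESSUS, which uses fast probability integration on finite-element models; the cantilever formula and all values below are hypothetical): propagate variability in one input at a time through a tip-deflection model and compare the spread of the response:

```python
import random
import statistics

# Minimal sketch (hypothetical values) of probabilistic structural analysis:
# propagate variability in modulus E and thickness t through a cantilever
# tip-deflection model, delta = F L^3 / (3 E I) with I = b t^3 / 12, and
# see which random variable drives the response variability.
def tip_deflection(E, t, F=100.0, L=0.05, b=0.02):
    I = b * t**3 / 12.0
    return F * L**3 / (3.0 * E * I)

rng = random.Random(42)

def spread(vary_E, vary_t, n=20000):
    """Std. dev. of deflection with the selected inputs random (5% CoV)."""
    out = []
    for _ in range(n):
        E = rng.gauss(2.0e11, 1.0e10) if vary_E else 2.0e11
        t = rng.gauss(2e-3, 1e-4) if vary_t else 2e-3
        out.append(tip_deflection(E, t))
    return statistics.pstdev(out)

print(f"E only: {spread(True, False):.3e} m,  t only: {spread(False, True):.3e} m")
```

Because deflection scales as 1/t^3 but only as 1/E, the same 5% input scatter produces roughly three times the response scatter when it sits on thickness; ranking inputs by exactly this kind of response contribution is what the NESSUS blade study does, far more efficiently, with fast probability integration.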
Development of a versatile user-friendly IBA experimental chamber
NASA Astrophysics Data System (ADS)
Kakuee, Omidreza; Fathollahi, Vahid; Lamehi-Rachti, Mohammad
2016-03-01
Reliable performance of Ion Beam Analysis (IBA) techniques depends on the accurate geometry of the experimental setup, the use of reliable nuclear data, and the implementation of dedicated analysis software for each IBA technique. It has already been shown that geometrical imperfections lead to significant uncertainties in the quantification of IBA measurements. To minimize these uncertainties, a user-friendly experimental chamber with a heuristic sample positioning system for IBA was recently developed in the Van de Graaff laboratory in Tehran. This system enhances IBA capabilities, in particular the Nuclear Reaction Analysis (NRA) and Elastic Recoil Detection Analysis (ERDA) techniques. The newly developed sample manipulator provides the possibility of both controlling the tilt angle of the sample and analyzing samples of different thicknesses. Moreover, a reasonable number of samples can be loaded in the sample wheel. A comparison of the measured cross-section data of the 16O(d,p1)17O reaction with the data reported in the literature confirms the performance and capability of the newly developed experimental chamber.
NASA Astrophysics Data System (ADS)
Gan, Luping; Li, Yan-Feng; Zhu, Shun-Peng; Yang, Yuan-Jian; Huang, Hong-Zhong
2014-06-01
Failure mode, effects and criticality analysis (FMECA) and fault tree analysis (FTA) are powerful tools for evaluating system reliability. Although the single-failure-mode case can be efficiently addressed by traditional FMECA, multiple failure modes and component correlations in complex systems cannot be effectively evaluated. In addition, correlated variables and parameters are often assumed to be precisely known in quantitative analysis. In fact, due to the lack of information, epistemic uncertainty commonly exists in engineering design. To solve these problems, the advantages of FMECA, FTA, fuzzy theory, and Copula theory are integrated into a unified hybrid method called the fuzzy probability weighted geometric mean (FPWGM) risk priority number (RPN) method. The epistemic uncertainty of risk variables and parameters is characterized by fuzzy numbers to obtain a fuzzy weighted geometric mean (FWGM) RPN for a single failure mode. Multiple failure modes are connected using minimum cut sets (MCS), and Boolean logic is used to combine the fuzzy risk priority numbers (FRPN) of each MCS. Moreover, Copula theory is applied to analyze the correlation of multiple failure modes in order to derive the failure probabilities of each MCS. Compared to the case where dependency among multiple failure modes is not considered, the Copula modeling approach eliminates the error of reliability analysis. Furthermore, for the purpose of quantitative analysis, probability importance weights derived from the failure probabilities are assigned to the FWGM RPN to reassess the risk priority, which generalizes the definitions of probability weight and FRPN, resulting in a more accurate estimation than that of the traditional models. Finally, a basic fatigue analysis case drawn from turbine and compressor blades in an aeroengine is used to demonstrate the effectiveness and robustness of the presented method.
The result provides some important insights on fatigue reliability analysis and risk priority assessment of structural system under failure correlations.
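A minimal sketch of the weighted-geometric-mean RPN idea on triangular fuzzy ratings; the factor weights, ratings, and centroid defuzzification below are illustrative assumptions, not the paper's exact FPWGM formulation:

```python
def fwgm_rpn(ratings, weights):
    # ratings: factor -> (low, mode, high) triangular fuzzy score;
    # weights: factor -> weight, summing to 1. The weighted geometric
    # mean is applied component-wise to the fuzzy bounds.
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    l = m = u = 1.0
    for factor, (lo, mid, hi) in ratings.items():
        w = weights[factor]
        l *= lo ** w
        m *= mid ** w
        u *= hi ** w
    return (l, m, u)

def defuzzify(tfn):
    # Centroid of a triangular fuzzy number.
    return sum(tfn) / 3.0

# Hypothetical failure mode: severe, fairly rare, easy to detect.
frpn = fwgm_rpn(
    {"severity": (7, 8, 9), "occurrence": (3, 4, 5), "detection": (2, 2, 3)},
    {"severity": 0.4, "occurrence": 0.35, "detection": 0.25},
)
```

With equal weights and crisp (degenerate) ratings, the result collapses to the ordinary geometric mean of the three factors.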
The engine fuel system fault analysis
NASA Astrophysics Data System (ADS)
Zhang, Yong; Song, Hanqiang; Yang, Changsheng; Zhao, Wei
2017-05-01
To improve the reliability of the engine fuel system, the typical fault factors of the engine fuel system were analyzed from structural and functional points of view. The fault characteristics were obtained by building the fuel system fault tree. By applying the failure mode and effects analysis (FMEA) method, several attributes of the key component, the fuel regulator, were obtained, including the fault modes, the fault causes, and the fault influences. All of this lays the foundation for subsequent development of a fault diagnosis system.
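Once the fault tree is built, basic-event probabilities can be propagated through its gates; a minimal sketch assuming independent basic events (the fuel-regulator sub-tree and all probabilities are hypothetical):

```python
def or_gate(probs):
    # P(at least one of several independent events occurs).
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(probs):
    # P(all independent events occur).
    q = 1.0
    for p in probs:
        q *= p
    return q

# Hypothetical fuel-regulator sub-tree: the regulator fails if the
# spring breaks OR both the primary and backup diaphragms leak.
p_regulator = or_gate([0.001, and_gate([0.01, 0.02])])
```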
The welfare effects of integrating renewable energy into electricity markets
NASA Astrophysics Data System (ADS)
Lamadrid, Alberto J.
The challenges of deploying more renewable energy sources on an electric grid are caused largely by their inherent variability. In this context, energy storage can help make the electric delivery system more reliable by mitigating this variability. This thesis analyzes a series of models for procuring electricity and ancillary services for both individuals and social planners with high penetrations of stochastic wind energy. The results obtained for an individual decision maker using stochastic optimization are ambiguous, with closed-form solutions dependent on technological parameters, and no consideration of the system reliability. The social planner models correctly reflect the effect of system reliability, and in the case of a Stochastic, Security Constrained Optimal Power Flow (S-SC-OPF or SuperOPF), determine reserve capacity endogenously so that system reliability is maintained. A single-period SuperOPF shows that including ramping costs in the objective function leads to more wind spilling and increased capacity requirements for reliability. However, this model does not reflect the intertemporal tradeoffs of using Energy Storage Systems (ESS) to improve reliability and mitigate wind variability. The results with the multiperiod SuperOPF determine the optimum use of storage for a typical day, and compare the effects of collocating ESS at wind sites with the same amount of storage (deferrable demand) located at demand centers. The collocated ESS has slightly lower operating costs and spills less wind generation compared to deferrable demand, but the total amount of conventional generating capacity needed for system adequacy is higher. In terms of the total system costs, which include the capital cost of conventional generating capacity, the costs with deferrable demand are substantially lower because the daily demand profile is flattened and less conventional generation capacity is then needed for reliability purposes.
The analysis also demonstrates that the optimum daily pattern of dispatch and reserves is seriously distorted if the stochastic characteristics of wind generation are ignored.
The SURE reliability analysis program
NASA Technical Reports Server (NTRS)
Butler, R. W.
1986-01-01
The SURE program is a new reliability tool for ultrareliable computer system architectures. The program is based on computational methods recently developed at the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values, directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
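SURE computes algebraic upper and lower bounds for semi-Markov models; as a much simpler illustration of the quantity being bounded, here is a forward-Euler transient solve of the death-state probability for a hypothetical pure-Markov reconfiguration model (two active units with failure rate `lam`, one recovery transition with rate `mu`; all rates are invented):

```python
def death_state_prob(lam, mu, T, steps=20000):
    # States: both units up (p2), one up (p1), system failed (pf).
    # 2-up -> 1-up at rate 2*lam; 1-up -> failed at rate lam;
    # 1-up -> 2-up (successful reconfiguration) at rate mu.
    dt = T / steps
    p2, p1, pf = 1.0, 0.0, 0.0
    for _ in range(steps):
        d2 = -2.0 * lam * p2 + mu * p1
        d1 = 2.0 * lam * p2 - (lam + mu) * p1
        df = lam * p1
        p2 += d2 * dt
        p1 += d1 * dt
        pf += df * dt
    return pf
```

The failure probability grows with mission time T, which is the kind of dependence a SURE sensitivity sweep over a model parameter would expose.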
Reliability Quantification of Advanced Stirling Convertor (ASC) Components
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward
2010-01-01
The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency while requiring no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system and component level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationship among the design variables based on physics, mechanics, material behavior models, interaction of different components and their respective disciplines such as structures, materials, fluid, thermal, mechanical, electrical, etc. In addition, these models are based on the available test data, which can be updated, and the analysis refined as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has demonstrated itself to be a superior technique to conventional reliability approaches based on failure rates derived from similar equipment or simply expert judgment.
Chen, J D; Sun, H L
1999-04-01
Objective. To assess and predict the reliability of an equipment dynamically by making full use of the various test information generated in the development of products. Method. A new reliability growth assessment method based on the Army Materiel Systems Analysis Activity (AMSAA) model was developed. The method is composed of the AMSAA model and test data conversion technology. Result. The assessment and prediction results for a space-borne equipment conform to expectations. Conclusion. It is suggested that this method should be further researched and popularized.
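The core of the AMSAA (Crow) model is a power-law nonhomogeneous Poisson process; a sketch of its standard maximum-likelihood fit for time-truncated test data follows (the failure times are invented, and the paper's test-data conversion step is not shown):

```python
import math

def amsaa_fit(failure_times, T):
    # MLEs for the power-law intensity lam * beta * t**(beta - 1)
    # from a time-truncated test ending at time T.
    n = len(failure_times)
    beta = n / sum(math.log(T / t) for t in failure_times)
    lam = n / T ** beta
    return beta, lam

def instantaneous_mtbf(beta, lam, t):
    # beta < 1 indicates reliability growth (MTBF increasing in t).
    return 1.0 / (lam * beta * t ** (beta - 1.0))
```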
Removing Barriers for Effective Deployment of Intermittent Renewable Generation
NASA Astrophysics Data System (ADS)
Arabali, Amirsaman
The stochastic nature of intermittent renewable resources is the main barrier to effective integration of renewable generation. This problem can be studied from feeder-scale and grid-scale perspectives. Two new stochastic methods are proposed to meet the feeder-scale controllable load with a hybrid renewable generation (including wind and PV) and energy storage system. For the first method, an optimization problem is developed whose objective function is the cost of the hybrid system including the cost of renewable generation and storage subject to constraints on energy storage and shifted load. A smart-grid strategy is developed to shift the load and match the renewable energy generation and controllable load. Minimizing the cost function guarantees minimum PV and wind generation installation, as well as storage capacity selection for supplying the controllable load. A confidence coefficient is allocated to each stochastic constraint which shows to what degree the constraint is satisfied. In the second method, a stochastic framework is developed for optimal sizing and reliability analysis of a hybrid power system including renewable resources (PV and wind) and energy storage system. The hybrid power system is optimally sized to satisfy the controllable load with a specified reliability level. A load-shifting strategy is added to provide more flexibility for the system and decrease the installation cost. Load shifting strategies and their potential impacts on the hybrid system reliability/cost analysis are evaluated through different scenarios. Using a compromise-solution method, the best compromise between the reliability and cost will be realized for the hybrid system. For the second problem, a grid-scale stochastic framework is developed to examine the storage application and its optimal placement for the social cost and transmission congestion relief of wind integration.
Storage systems are optimally placed and adequately sized to minimize the sum of operation and congestion costs over a scheduling period. A technical assessment framework is developed to enhance the efficiency of wind integration and evaluate the economics of storage technologies and conventional gas-fired alternatives. The proposed method is used to carry out a cost-benefit analysis for the IEEE 24-bus system and determine the most economical technology. In order to mitigate the financial and technical concerns of renewable energy integration into the power system, a stochastic framework is proposed for transmission grid reinforcement studies in a power system with wind generation. A multi-stage multi-objective transmission network expansion planning (TNEP) methodology is developed which considers the investment cost, absorption of private investment and reliability of the system as the objective functions. A Non-dominated Sorting Genetic Algorithm (NSGA-II) optimization approach is used in combination with a probabilistic optimal power flow (POPF) to determine the Pareto optimal solutions considering the power system uncertainties. Using a compromise-solution method, the best final plan is then realized based on the decision maker preferences. The proposed methodology is applied to the IEEE 24-bus Reliability Test System (RTS) to evaluate the feasibility and practicality of the developed planning strategy.
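The compromise-solution step (choose the Pareto point closest to the ideal point in normalized objective space) can be sketched for two minimized objectives; the two-objective reduction and the min-max normalization below are assumptions, not the thesis's exact formulation:

```python
def best_compromise(pareto):
    # pareto: list of (cost, unreliability) points, both minimized.
    costs = [p[0] for p in pareto]
    unrel = [p[1] for p in pareto]
    c_min, c_max = min(costs), max(costs)
    u_min, u_max = min(unrel), max(unrel)

    def dist_to_ideal(p):
        # Normalize each objective to [0, 1]; the ideal point is (0, 0).
        dc = (p[0] - c_min) / (c_max - c_min) if c_max > c_min else 0.0
        du = (p[1] - u_min) / (u_max - u_min) if u_max > u_min else 0.0
        return (dc * dc + du * du) ** 0.5

    return min(pareto, key=dist_to_ideal)
```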
NASA Astrophysics Data System (ADS)
Xu, Jun; Dang, Chao; Kong, Fan
2017-10-01
This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
NASA Astrophysics Data System (ADS)
LI, Y.; Yang, S. H.
2017-05-01
The Antarctic astronomical telescopes operate year-round at the unattended South Pole, with only one maintenance opportunity each year. Due to the complexity of their optical, mechanical, and electrical systems, the telescopes are hard to maintain and require multi-skilled expedition teams, so heightened attention to the reliability of the Antarctic telescopes is essential. Based on the fault mechanisms and fault modes of the main-axis control system of the equatorial Antarctic astronomical telescope AST3-3 (Antarctic Schmidt Telescopes 3-3), the method of fault tree analysis is introduced in this article, and we obtain the importance degree of the top event from the structural importance of the bottom events. From these results, hidden problems and weak links can be effectively identified, indicating directions for improving the stability of the system and optimizing its design.
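One common fault-tree importance measure is structural importance: the fraction of states of the other components in which a basic event is critical to the top event. A brute-force sketch for small trees follows (the example top-event function is hypothetical, not the AST3-3 tree):

```python
from itertools import product

def structural_importance(top, n):
    # top: function mapping a tuple of n 0/1 basic-event states to the
    # top-event state. Returns one importance value per basic event.
    imps = []
    for i in range(n):
        critical = 0
        for state in product((0, 1), repeat=n):
            hi, lo = list(state), list(state)
            hi[i], lo[i] = 1, 0
            if top(tuple(hi)) != top(tuple(lo)):
                critical += 1
        # Each state of the other components is visited twice above,
        # so dividing by 2**n gives the fraction over 2**(n - 1) states.
        imps.append(critical / 2 ** n)
    return imps
```

For `top = lambda s: (s[0] and s[1]) or s[2]`, event 2 (a single-point failure) scores 0.75 while events 0 and 1 each score 0.25, singling out event 2 as the weak link.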
NASA Technical Reports Server (NTRS)
Bien, D. D.
1973-01-01
This analysis considers the optimum allocation of redundancy in a system of serially connected subsystems in which each subsystem is of the k-out-of-n type. Redundancy is optimally allocated when: (1) reliability is maximized for given costs; or (2) costs are minimized for given reliability. Several techniques are presented for achieving optimum allocation and their relative merits are discussed. Approximate solutions in closed form were attainable only for the special case of series-parallel systems and the efficacy of these approximations is discussed.
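The k-out-of-n building block and its series combination can be computed directly for independent, identical components; a sketch (the example numbers are illustrative, and the cost-optimization itself is not shown):

```python
from math import comb

def k_out_of_n(k, n, p):
    # Reliability of a subsystem that needs at least k of its n
    # components working, each independently with probability p.
    return sum(comb(n, i) * p**i * (1.0 - p) ** (n - i) for i in range(k, n + 1))

def series_reliability(subsystems):
    # Serially connected k-out-of-n subsystems: product of reliabilities.
    r = 1.0
    for k, n, p in subsystems:
        r *= k_out_of_n(k, n, p)
    return r
```

An optimizer would then search over the `n` values subject to a cost budget; the special series-parallel case corresponds to k = 1 throughout.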
2007 Beyond SBIR Phase II: Bringing Technology Edge to the Warfighter
2007-08-23
Systems Trade-Off Analysis and Optimization; Verification and Validation; On-Board Diagnostics and Self-Healing; Security and Anti-Tampering; Rapid...verification; Safety and reliability analysis of flight and mission critical systems; On-Board Diagnostics and Self-Healing; Model-based monitoring and... self-healing; On-board diagnostics and self-healing; Autonomic computing; Network intrusion detection and prevention; Anti-Tampering and Trust
NASA Technical Reports Server (NTRS)
Phillips, D. T.; Manseur, B.; Foster, J. W.
1982-01-01
Alternate definitions of system failure create complex analysis for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-11
.../Exposure Analysis Modeling System and Screening Concentration in Ground Water (SCI-GROW) models, the... Classification System (NAICS) codes have been provided to assist you and others in determining whether this... reliable information.'' This includes exposure through drinking water and in residential settings, but does...
75 FR 40745 - Cyazofamid; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-14
... Model/Exposure Analysis Modeling System (PRZM/EXAMS) model for surface water and the Screening... listed in this unit could also be affected. The North American Industrial Classification System (NAICS... there is reliable information.'' This includes exposure through drinking water and in residential...
Statistical Analysis Tools for Learning in Engineering Laboratories.
ERIC Educational Resources Information Center
Maher, Carolyn A.
1990-01-01
Described are engineering programs that have used automated data acquisition systems to implement data collection and analyze experiments. Applications include a biochemical engineering laboratory, heat transfer performance, engineering materials testing, mechanical system reliability, statistical control laboratory, thermo-fluid laboratory, and a…
A Performance Appraisal System for School Principals.
ERIC Educational Resources Information Center
Knoop, Robert; Common, Ronald W.
The Performance Review, Analysis, and Improvement System for Educators (PRAISE) is a formative evaluation instrument designed to improve the performance of school principals. The system appears to be reliable and valid and is flexible enough to accommodate the needs of a variety of schools. Sample items and categories of the instrument include…
Analysis of off-grid hybrid wind turbine/solar PV water pumping systems
USDA-ARS?s Scientific Manuscript database
While many remote water pumping systems exist (e.g. mechanical windmills, solar photovoltaic, wind-electric, diesel-powered), very few combine both the wind and solar energy resources to possibly improve the reliability and the performance of the system. In this paper, off-grid wind turbine (WT) a...
MTL distributed magnet measurement system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nogiec, J.M.; Craker, P.A.; Garbarini, J.P.
1993-04-01
The Magnet Test Laboratory (MTL) at the Superconducting Super Collider Laboratory will be required to precisely and reliably measure properties of magnets in a production environment. The extensive testing of the superconducting magnets comprises several types of measurements whose main purpose is to evaluate some basic parameters characterizing magnetic, mechanical and cryogenic properties of magnets. The measurement process will produce a significant amount of data which will be subjected to complex analysis. Such massive measurements require a careful design of both the hardware and software of computer systems, having in mind a reliable, maximally automated system. In order to fulfill this requirement a dedicated Distributed Magnet Measurement System (DMMS) is being developed.
An Online Risk Monitor System (ORMS) to Increase Safety and Security Levels in Industry
NASA Astrophysics Data System (ADS)
Zubair, M.; Rahman, Khalil Ur; Hassan, Mehmood Ul
2013-12-01
The main idea of this research is to develop an Online Risk Monitor System (ORMS) based on Living Probabilistic Safety Assessment (LPSA). The article highlights the essential features and functions of ORMS. The basic models and modules, such as the Reliability Data Update Model (RDUM), running time update, redundant system unavailability update, Engineered Safety Features (ESF) unavailability update, and general system update, are described in this study. ORMS not only provides quantitative analysis but also highlights qualitative aspects of risk measures. ORMS is capable of automatically updating the online risk models and reliability parameters of equipment. ORMS can support the decision-making process of operators and managers in nuclear power plants.
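A common way such online parameter updates are implemented in living PSA is a conjugate gamma-Poisson Bayesian update of component failure rates; this sketch illustrates the general technique and is an assumption, not ORMS's actual RDUM internals:

```python
def update_failure_rate(alpha_prior, beta_prior, failures, exposure_hours):
    # Gamma(alpha, beta) prior on a Poisson failure rate (per hour).
    # Observing `failures` events over `exposure_hours` of operation
    # yields a Gamma(alpha + failures, beta + exposure_hours) posterior.
    alpha_post = alpha_prior + failures
    beta_post = beta_prior + exposure_hours
    return alpha_post / beta_post  # posterior mean rate
```

As running time and failure counts stream in, repeated calls keep the plant risk model's failure rates current.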
Numerical aerodynamic simulation facility. Preliminary study extension
NASA Technical Reports Server (NTRS)
1978-01-01
The production of an optimized design of key elements of the candidate facility was the primary objective of this report. This was accomplished by effort in the following tasks: (1) to further develop, optimize and describe the functional description of the custom hardware; (2) to delineate trade-off areas between performance, reliability, availability, serviceability, and programmability; (3) to develop metrics and models for validation of the candidate system's performance; (4) to conduct a functional simulation of the system design; (5) to perform a reliability analysis of the system design; and (6) to develop the software specifications to include a user-level high-level programming language, a correspondence between the programming language and instruction set, and an outline of the operating system requirements.
Mass and Reliability System (MaRS)
NASA Technical Reports Server (NTRS)
Barnes, Sarah
2016-01-01
The Safety and Mission Assurance (S&MA) Directorate is responsible for mitigating risk, providing system safety, and lowering risk for space programs from ground to space. The S&MA is divided into 4 divisions: The Space Exploration Division (NC), the International Space Station Division (NE), the Safety & Test Operations Division (NS), and the Quality and Flight Equipment Division (NT). The interns, myself and Arun Aruljothi, will be working with the Risk & Reliability Analysis Branch under the NC Division. The mission of this division is to identify, characterize, diminish, and communicate risk by implementing an efficient and effective assurance model. The team utilizes Reliability and Maintainability (R&M) and Probabilistic Risk Assessment (PRA) to ensure decisions concerning risks are informed, vehicles are safe and reliable, and program/project requirements are realistic and realized. This project pertains to the Orion mission, so it is geared toward long-duration human space flight programs. For space missions, payload is a critical concept; balancing which hardware can be replaced at the component level versus by Orbital Replacement Units (ORUs) or subassemblies is key. For this effort a database was created that combines mass and reliability data, called the Mass and Reliability System, or MaRS. The International Space Station (ISS) components are used as reference parts in the MaRS database. Using ISS components as a platform is beneficial because of the historical context and the environmental similarities to a space flight mission. MaRS uses a combination of systems: the International Space Station PART for failure data, the Vehicle Master Database (VMDB) for ORUs and components, the Maintenance & Analysis Data Set (MADS) for operation hours and other pertinent data, and the Hardware History Retrieval System (HHRS) for unit weights. MaRS is populated using a Visual Basic Application.
Once populated, the Excel spreadsheet comprises information on ISS components including: operation hours, random/nonrandom failures, software/hardware failures, quantity, orbital replaceable units (ORUs), date of placement, unit weight, frequency of part, etc. The motivation for creating such a database is the development of a mass/reliability parametric model to estimate the mass required for replacement parts. Once complete, engineers working on future space flight missions will have access to mean-time-to-failure data on parts along with their mass, which will be used to make proper decisions for long-duration space flight missions.
Reliability Analysis of a Green Roof Under Different Storm Scenarios
NASA Astrophysics Data System (ADS)
William, R. K.; Stillwell, A. S.
2015-12-01
Urban environments continue to face the challenges of localized flooding and decreased water quality brought on by the increasing amount of impervious area in the built environment. Green infrastructure provides an alternative to conventional storm sewer design by using natural processes to filter and store stormwater at its source. However, there are currently few consistent standards available in North America to ensure that installed green infrastructure is performing as expected. This analysis offers a method for characterizing green roof failure using a visual aid commonly used in earthquake engineering: fragility curves. We adapted the concept of the fragility curve based on the efficiency in runoff reduction provided by a green roof compared to a conventional roof under different storm scenarios. We then used the 2D distributed surface water-groundwater coupled model MIKE SHE to model the impact that a real green roof might have on runoff in different storm events. We then employed a multiple regression analysis to generate an algebraic demand model that was input into the Matlab-based reliability analysis model FERUM, which was then used to calculate the probability of failure. The use of reliability analysis as a part of green infrastructure design code can provide insights into green roof weaknesses and areas for improvement. It also supports the design of code that is more resilient than current standards and is easily testable for failure. Finally, the understanding of reliability of a single green roof module under different scenarios can support holistic testing of system reliability.
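A green-roof fragility curve of the kind described is just P(failure | storm intensity); the lognormal parameterization common in earthquake engineering can be sketched as follows (the median capacity and dispersion are placeholders, not values fitted from the MIKE SHE/FERUM study):

```python
import math

def fragility(im, median, beta):
    # Lognormal fragility: probability of failure given an intensity
    # measure `im` (e.g. rainfall depth in mm), the median capacity,
    # and the log-standard-deviation (dispersion) beta.
    return 0.5 * (1.0 + math.erf(math.log(im / median) / (beta * math.sqrt(2.0))))
```

At the median intensity the failure probability is exactly 0.5, and the curve rises monotonically with storm intensity, which is what makes it a convenient pass/fail summary for design code.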
Berger, Aaron J; Momeni, Arash; Ladd, Amy L
2014-04-01
Trapeziometacarpal, or thumb carpometacarpal (CMC), arthritis is a common problem with a variety of treatment options. Although widely used, the Eaton radiographic staging system for CMC arthritis is of questionable clinical utility, as disease severity does not predictably correlate with symptoms or treatment recommendations. A possible reason for this is that the classification itself may not be reliable, but the literature on this has not, to our knowledge, been systematically reviewed. We therefore performed a systematic review to determine the intra- and interobserver reliability of the Eaton staging system. We systematically reviewed English-language studies published between 1973 and 2013 to assess the degree of intra- and interobserver reliability of the Eaton classification for determining the stage of trapeziometacarpal joint arthritis and pantrapezial arthritis based on plain radiographic imaging. Search engines included: PubMed, Scopus(®), and CINAHL. Four studies, which included a total of 163 patients, met our inclusion criteria and were evaluated. The level of evidence of the studies included in this analysis was determined using the Oxford Centre for Evidence Based Medicine Levels of Evidence Classification by two independent observers. A limited number of studies have been performed to assess intra- and interobserver reliability of the Eaton classification system. The four studies included were determined to be Level 3b. These studies collectively indicate that the Eaton classification demonstrates poor to fair interobserver reliability (kappa values: 0.11-0.56) and fair to moderate intraobserver reliability (kappa values: 0.54-0.657). Review of the literature demonstrates that radiographs assist in the assessment of CMC joint disease, but there is not a reliable system for classification of disease severity. 
Currently, diagnosis and treatment of thumb CMC arthritis are based on the surgeon's qualitative assessment combining history, physical examination, and radiographic evaluation. Inconsistent agreement using the current common radiographic classification system suggests a need for better radiographic tools to quantify disease severity.
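The agreement figures quoted above are kappa statistics; a minimal two-rater Cohen's kappa over nominal Eaton stages looks like this (the example ratings in the test are invented):

```python
def cohens_kappa(rater_a, rater_b, labels):
    # Observed agreement corrected for agreement expected by chance.
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_chance = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_obs - p_chance) / (1.0 - p_chance)
```

Values near 0 indicate chance-level agreement and 1 indicates perfect agreement; the 0.11-0.56 interobserver range reported above therefore spans poor to fair-moderate agreement.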
Learning from Trending, Precursor Analysis, and System Failures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Youngblood, R. W.; Duffey, R. B.
2015-11-01
Models of reliability growth relate current system unreliability to currently accumulated experience. But “experience” comes in different forms. Looking back after a major accident, one is sometimes able to identify previous events or measurable performance trends that were, in some sense, signaling the potential for that major accident: potential that could have been recognized and acted upon, but was not recognized until the accident occurred. This could be a previously unrecognized cause of accidents, or underestimation of the likelihood that a recognized potential cause would actually operate. Despite improvements in the state of practice of modeling of risk and reliability, operational experience still has a great deal to teach us, and work has been going on in several industries to try to do a better job of learning from experience before major accidents occur. It is not enough to say that we should review operating experience; there is too much “experience” for such general advice to be considered practical. The paper discusses the following: 1. The challenge of deciding what to focus on in analysis of operating experience. 2. Comparing what different models of learning and reliability growth imply about trending and precursor analysis.
Systematic review of methods for quantifying teamwork in the operating theatre
Marshall, D.; Sykes, M.; McCulloch, P.; Shalhoub, J.; Maruthappu, M.
2018-01-01
Background Teamwork in the operating theatre is becoming increasingly recognized as a major factor in clinical outcomes. Many tools have been developed to measure teamwork. Most fall into two categories: self‐assessment by theatre staff and assessment by observers. A critical and comparative analysis of the validity and reliability of these tools is lacking. Methods MEDLINE and Embase databases were searched following PRISMA guidelines. Content validity was assessed using measurements of inter‐rater agreement, predictive validity and multisite reliability, and interobserver reliability using statistical measures of inter‐rater agreement and reliability. Quantitative meta‐analysis was deemed unsuitable. Results Forty‐eight articles were selected for final inclusion; self‐assessment tools were used in 18 and observational tools in 28, and there were two qualitative studies. Self‐assessment of teamwork by profession varied with the profession of the assessor. The most robust self‐assessment tool was the Safety Attitudes Questionnaire (SAQ), although this failed to demonstrate multisite reliability. The most robust observational tool was the Non‐Technical Skills (NOTECHS) system, which demonstrated both test–retest reliability (P > 0.09) and interobserver reliability (Rwg = 0.96). Conclusion Self‐assessment of teamwork by the theatre team was influenced by professional differences. Observational tools, when used by trained observers, circumvented this.
The implementation and use of Ada on distributed systems with high reliability requirements
NASA Technical Reports Server (NTRS)
Knight, J. C.
1984-01-01
The use and implementation of Ada in distributed environments in which reliability is the primary concern is investigated. Emphasis is placed on the possibility that a distributed system may be programmed entirely in Ada, so that the individual tasks of the system are unconcerned with which processors they are executing on, and that failures may occur in the software or underlying hardware. The primary activities are: (1) continued development and testing of our fault-tolerant Ada testbed; (2) consideration of desirable language changes to allow Ada to provide useful semantics for failure; (3) analysis of the inadequacies of existing software fault tolerance strategies.
System Analysis and Performance Benefits of an Optimized Rotorcraft Propulsion System
NASA Technical Reports Server (NTRS)
Bruckner, Robert J.
2007-01-01
The propulsion system of rotorcraft vehicles is the most critical system to the vehicle in terms of safety and performance. The propulsion system must provide both vertical lift and forward flight propulsion during the entire mission. Whereas propulsion is a critical element for all flight vehicles, it is particularly critical for rotorcraft due to their limited safe, un-powered landing capability. This unparalleled reliability requirement has led rotorcraft power plants down a certain evolutionary path in which the system looks and performs quite similarly to those of the 1960s. By and large the advancements in rotorcraft propulsion have come in terms of safety and reliability and not in terms of performance. The concept of the optimized propulsion system is a means by which both reliability and performance can be improved for rotorcraft vehicles. The optimized rotorcraft propulsion system, which couples an oil-free turboshaft engine to a highly loaded gearbox that provides axial load support for the power turbine, can be designed with current laboratory-proven technology. Such a system can provide up to 60% weight reduction of the propulsion system of rotorcraft vehicles. Several technical challenges are apparent at the conceptual design level and should be addressed with current research.
Availability Estimation for Facilities in Extreme Geographical Locations
NASA Technical Reports Server (NTRS)
Fischer, Gerd M.; Omotoso, Oluseun; Chen, Guangming; Evans, John W.
2012-01-01
A value-added analysis of the Reliability, Availability and Maintainability of McMurdo Ground Station was developed, which will be a useful tool for system managers in sparing, maintenance planning and determining vital performance metrics needed for readiness assessment of the upgrades to the McMurdo system. Output of this study can also be used as inputs and recommendations for the application of Reliability Centered Maintenance (RCM) to the system. ReliaSoft's BlockSim, a commercial reliability analysis software package, has been used to model the availability of the system upgrade to the National Aeronautics and Space Administration (NASA) Near Earth Network (NEN) Ground Station at McMurdo Station in Antarctica. The logistics challenges due to the closure of access to McMurdo Station during the Antarctic winter were modeled using a weighted composite of four Weibull distributions, one of the statistical distributions available throughout the software package and commonly used to account for failure rates of components supplied by different manufacturers. The year-round inaccessibility of the antenna site on a hill outside McMurdo Station due to severe weather was modeled with a Weibull distribution for the repair crew availability. The Weibull distribution is based on an analysis of the available weather data for the antenna site for 2007, in combination with the rules for travel restrictions due to severe weather imposed by the administrating agency, the National Science Foundation (NSF). The simulations resulted in an upper bound for the system availability and allowed for identification of components that would improve availability based on a higher on-site spare count than initially planned.
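The Monte Carlo availability estimate described in this record can be approximated outside BlockSim. The sketch below is a minimal, hypothetical stand-in: exponential up-times (constant failure rate) and Weibull-distributed down-times representing repair-crew access delays. All parameter values are illustrative, not the study's inputs.

```python
import random

def simulate_availability(mtbf_h, repair_shape, repair_scale,
                          horizon_h, n_runs=2000, seed=1):
    """Estimate the availability of a single repairable unit whose up-times
    are exponential (constant failure rate) and whose down-times follow a
    Weibull distribution (repair-crew access delays), by averaging the
    fraction of up-time over many simulated operating horizons."""
    rng = random.Random(seed)
    up_total = 0.0
    for _ in range(n_runs):
        t, up = 0.0, 0.0
        while t < horizon_h:
            ttf = rng.expovariate(1.0 / mtbf_h)       # time to next failure
            up += min(ttf, horizon_h - t)
            t += ttf
            if t >= horizon_h:
                break
            # downtime: Weibull(alpha=scale, beta=shape) per Python's API
            t += rng.weibullvariate(repair_scale, repair_shape)
        up_total += up / horizon_h
    return up_total / n_runs
```

With shape 1 the repair distribution reduces to an exponential with mean equal to the scale, so the estimate should approach the textbook MTBF/(MTBF+MTTR) value.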
Speech-driven environmental control systems--a qualitative analysis of users' perceptions.
Judge, Simon; Robertson, Zoë; Hawley, Mark; Enderby, Pam
2009-05-01
To explore users' experiences and perceptions of speech-driven environmental control systems (SPECS) as part of a larger project aiming to develop a new SPECS. The motivation for this part of the project was to add to the evidence base for the use of SPECS and to determine the key design specifications for a new speech-driven system from a user's perspective. Semi-structured interviews were conducted with 12 users of SPECS from around the United Kingdom. These interviews were transcribed and analysed using a qualitative method based on framework analysis. Reliability was the main influence on the use of SPECS. All the participants gave examples of occasions when their speech-driven system was unreliable; in some instances, this unreliability was reported as not being a problem (e.g., for changing television channels); however, it was perceived as a problem for more safety-critical functions (e.g., opening a door). Reliability was cited by participants as the reason for using a switch-operated system as a backup. Reported benefits of speech-driven systems focused on speech operation enabling access when other methods were not possible, quicker operation, and aesthetic considerations. Overall, there was a perception of increased independence from the use of speech-driven environmental control. In general, speech was considered a useful method of operating environmental controls by the participants interviewed; however, their perceptions regarding reliability often influenced their decision to have backup or alternative systems for certain functions.
Reliability and Maintainability Analysis for the Amine Swingbed Carbon Dioxide Removal System
NASA Technical Reports Server (NTRS)
Dunbar, Tyler
2016-01-01
I have performed a reliability and maintainability analysis for the Amine Swingbed payload system. The Amine Swingbed is a carbon dioxide removal technology that has gone through 2,400 hours of International Space Station on-orbit use between 2013 and 2016. While the Amine Swingbed is currently an experimental payload system, it may be converted to system hardware. If the Amine Swingbed becomes system hardware, it will supplement the Carbon Dioxide Removal Assembly (CDRA) as the primary CO2 removal technology on the International Space Station. NASA is also considering using the Amine Swingbed as the primary carbon dioxide removal technology for future extravehicular mobility units and for the Orion, which will be used for the Asteroid Redirect and Journey to Mars missions. The qualitative component of the reliability and maintainability analysis is a Failure Modes and Effects Analysis (FMEA). In the FMEA, I have investigated how individual components in the Amine Swingbed may fail, and what the worst-case scenario is should a failure occur. The significant failure effects are the loss of the ability to remove carbon dioxide, the formation of ammonia due to chemical degradation of the amine, and loss of atmosphere, because the Amine Swingbed uses the vacuum of space to regenerate its amine beds. In the quantitative component of the reliability and maintainability analysis, I have assumed a constant failure rate for both electronic and nonelectronic parts. Using these data, I have created a Poisson distribution to predict the failure rate of the Amine Swingbed as a whole. I have determined the mean time to failure for the Amine Swingbed to be approximately 1,400 hours. The observed mean time to failure for the system is between 600 and 1,200 hours. This range includes initial testing of the Amine Swingbed, as well as software faults that are understood to be non-critical.
If many of the commercial parts were switched to military-grade parts, the expected mean time to failure would be 2,300 hours. Both calculated mean times to failure for the Amine Swingbed use conservative failure rate models. The observed mean time to failure for CDRA is 2,500 hours. Working on this project and for NASA in general has helped me gain insight into current aeronautics missions, reliability engineering, circuit analysis, and different cultures. Prior to my internship, I did not have much knowledge about the work being performed at NASA. As a chemical engineer, I had not really considered working for NASA as a career path. By engaging in interactions with civil servants, contractors, and other interns, I have learned a great deal about modern challenges that NASA is addressing. My work has helped me develop a knowledge base in safety and reliability that would be difficult to find elsewhere. Prior to this internship, I had not thought about reliability engineering. Now, I have gained a skillset in performing reliability analyses and understanding the inner workings of a large mechanical system. I have also gained experience in understanding how electrical systems work while analyzing the electrical components of the Amine Swingbed. I did not expect to be exposed to as many different cultures as I have while working at NASA, both within NASA and in the Houston area. NASA employs individuals with a broad range of backgrounds. It has been great to learn from individuals who have highly diverse experiences and outlooks on the world. In the Houston area, I have come across individuals from different parts of the world. Interacting with so many individuals with significantly different backgrounds has helped me to grow as a person in ways that I did not expect. My time at NASA has opened a window into the field of aeronautics.
After earning a bachelor's degree in chemical engineering, I plan to go to graduate school for a PhD in engineering. Prior to coming to NASA, I was not aware of the graduate Pathways program. I intend to apply for the graduate Pathways program as positions are opened up. I would like to pursue future opportunities with NASA, especially as my engineering career progresses.
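The constant-failure-rate calculation described in this record (summing part failure rates for a series system, taking the reciprocal for the MTTF, and using the exponential/Poisson model for failure counts) can be sketched as follows. The part names and rates are hypothetical, not the actual Amine Swingbed data.

```python
import math

# Hypothetical constant failure rates (failures per hour); in a series
# model, any single part failure fails the system, so the rates add.
part_rates = {
    "valve_assembly": 1.2e-4,
    "blower": 2.0e-4,
    "controller_board": 1.5e-4,
    "co2_sensor": 2.5e-4,
}

lam_system = sum(part_rates.values())  # system failure rate (per hour)
mttf_hours = 1.0 / lam_system          # mean time to failure

def reliability(t_hours):
    """Probability of zero failures in t hours: the exponential survival
    function, equivalently a Poisson count of zero events in [0, t]."""
    return math.exp(-lam_system * t_hours)
```

With these illustrative rates the system MTTF comes out near 1,400 hours, the same order as the record's quoted prediction, but the correspondence is coincidental to the chosen values.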
NASA Astrophysics Data System (ADS)
Shi, J. T.; Han, X. T.; Xie, J. F.; Yao, L.; Huang, L. T.; Li, L.
2013-03-01
A Pulsed High Magnetic Field Facility (PHMFF) has been established at the Wuhan National High Magnetic Field Center (WHMFC), and various protection measures are applied in its control system. In order to improve the reliability and robustness of the control system, a safety analysis of the PHMFF is carried out based on the Fault Tree Analysis (FTA) technique. The function and realization of five protection systems, which include the sequence experiment operation system, safety assistant system, emergency stop system, fault detecting and processing system and accident isolating protection system, are given. The tests and operation indicate that these measures improve the safety of the facility and ensure the safety of people.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaCommare, Kristina; Larsen, Peter; Eto, Joseph
Policymakers and regulatory agencies are expressing renewed interest in the reliability and resilience of the U.S. electric power system, in large part due to growing recognition of the challenges posed by climate change, extreme weather events, and other emerging threats. Unfortunately, there has been little or no consolidated information in the public domain describing how public utility/service commission (PUC) staff evaluate the economics of proposed investments in the resilience of the power system. Having more consolidated information would give policymakers a better understanding of how different state regulatory entities across the U.S. make economic decisions pertaining to reliability/resiliency. To help address this, Lawrence Berkeley National Laboratory (LBNL) was tasked by the U.S. Department of Energy Office of Energy Policy and Systems Analysis (EPSA) to conduct an initial set of interviews with PUC staff to learn more about how proposed utility investments in reliability/resilience are being evaluated from an economics perspective. LBNL conducted structured interviews in late May-early June 2016 with staff from the following PUCs: Washington D.C. (DCPSC), Florida (FPSC), and California (CPUC).
Tools and techniques for developing policies for complex and uncertain systems.
Bankes, Steven C
2002-05-14
Agent-based models (ABM) are examples of complex adaptive systems, which can be characterized as those systems for which no model less complex than the system itself can accurately predict in detail how the system will behave at future times. Consequently, the standard tools of policy analysis, based as they are on devising policies that perform well on some best estimate model of the system, cannot be reliably used for ABM. This paper argues that policy analysis by using ABM requires an alternative approach to decision theory. The general characteristics of such an approach are described, and examples are provided of its application to policy analysis.
European Workshop on Industrial Computer Systems approach to design for safety
NASA Technical Reports Server (NTRS)
Zalewski, Janusz
1992-01-01
This paper presents guidelines on designing systems for safety, developed by the Technical Committee 7 on Reliability and Safety of the European Workshop on Industrial Computer Systems. The focus is on complementing the traditional development process by adding the following four steps: (1) overall safety analysis; (2) analysis of the functional specifications; (3) designing for safety; (4) validation of design. Quantitative assessment of safety is possible by means of a modular questionnaire covering various aspects of the major stages of system development.
NASA Astrophysics Data System (ADS)
Tamura, Yoshinobu; Yamada, Shigeru
OSS (open source software) systems, which serve as key components of critical infrastructures in our social life, are still expanding. In particular, embedded OSS systems have been gaining a lot of attention in the embedded system area, e.g., Android, BusyBox, TRON, etc. However, poor handling of quality problems and customer support hinders the progress of embedded OSS. Also, it is difficult for developers to assess the reliability and portability of embedded OSS on a single-board computer. In this paper, we propose a method of software reliability assessment based on flexible hazard rates for embedded OSS. Also, we analyze actual data of software failure-occurrence time-intervals to show numerical examples of software reliability assessment for embedded OSS. Moreover, we compare the proposed hazard rate model for embedded OSS with typical conventional hazard rate models by using goodness-of-fit comparison criteria. Furthermore, we discuss the optimal software release problem for the porting phase based on the total expected software maintenance cost.
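As a concrete illustration of the kind of hazard-rate comparison this record describes, the sketch below uses the classic Jelinski-Moranda model (one of the typical conventional hazard rate models) and a mean-squared-error goodness-of-fit criterion. The paper's proposed flexible hazard rate model is not reproduced here; this is only a generic example of comparing a model's expected failure time-intervals against observed ones.

```python
def jm_hazard(phi, N, k):
    """Jelinski-Moranda hazard rate for the k-th failure interval (k >= 1):
    proportional to the number of faults remaining, N - (k - 1)."""
    return phi * (N - (k - 1))

def mse_of_fit(intervals, phi, N):
    """One simple goodness-of-fit comparison criterion: mean squared error
    between observed failure time-intervals and the model's expected
    intervals (the reciprocal of the hazard rate for each interval)."""
    errs = [(x - 1.0 / jm_hazard(phi, N, k + 1)) ** 2
            for k, x in enumerate(intervals)]
    return sum(errs) / len(errs)
```

Fitting competing hazard rate models to the same failure data and ranking them by such a criterion is the shape of the comparison the abstract refers to.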
Composing, Analyzing and Validating Software Models
NASA Astrophysics Data System (ADS)
Sheldon, Frederick T.
1998-10-01
This research has been conducted at the Computational Sciences Division of the Information Sciences Directorate at Ames Research Center (Automated Software Engineering Group). The principal work this summer has been to review and refine the agenda that was carried forward from last summer. Formal specifications provide good support for designing a functionally correct system; however, they are weak at incorporating non-functional performance requirements (like reliability). Techniques which utilize stochastic Petri nets (SPNs) are good for evaluating the performance and reliability of a system, but they may be too abstract and cumbersome from the standpoint of specifying and evaluating functional behavior. Therefore, one major objective of this research is to provide an integrated approach to assist the user in specifying both functionality (qualitative: mutual exclusion and synchronization) and performance requirements (quantitative: reliability and execution deadlines). In this way, the merits of a powerful modeling technique for performability analysis (using SPNs) can be combined with a well-defined formal specification language. In doing so, we can come closer to providing a formal approach to designing a functionally correct system that meets reliability and performance goals.
76 FR 70890 - Fenamidone; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-16
.../models/water/index.htm . Based on the Pesticide Root Zone Model/Exposure Analysis Modeling System (PRZM... listed in this unit could also be affected. The North American Industrial Classification System (NAICS... there is reliable information.'' This includes exposure through drinking water and in residential...
76 FR 64330 - Advanced Scientific Computing Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-18
... talks on HPC Reliability, Diffusion on Complex Networks, and Reversible Software Execution Systems Report from Applied Math Workshop on Mathematics for the Analysis, Simulation, and Optimization of Complex Systems Report from ASCR-BES Workshop on Data Challenges from Next Generation Facilities Public...
A rainwater harvesting system reliability model based on nonparametric stochastic rainfall generator
NASA Astrophysics Data System (ADS)
Basinger, Matt; Montalto, Franco; Lall, Upmanu
2010-10-01
The reliability with which harvested rainwater can be used for flushing toilets, irrigating gardens, and topping off air-conditioning units serving multifamily residential buildings in New York City is assessed using a new rainwater harvesting (RWH) system reliability model. Although demonstrated with a specific case study, the model is portable because it is based on a nonparametric rainfall generation procedure utilizing a bootstrapped Markov chain. Precipitation occurrence is simulated using transition probabilities derived for each day of the year based on the historical probability of wet and dry day state changes. Precipitation amounts are selected from a matrix of historical values within a moving 15-day window that is centered on the target day. RWH system reliability is determined for user-specified catchment area and tank volume ranges using precipitation ensembles generated with the described stochastic procedure. The reliability with which NYC backyard gardens can be irrigated and air-conditioning units supplied with water harvested from local roofs exceeds 80% and 90%, respectively, for the entire range of catchment areas and tank volumes considered in the analysis. For RWH systems installed on the most commonly occurring rooftop catchment areas found in NYC (51-75 m²), toilet flushing demand can be met with 7-40% reliability, with the lower end of the range representing buildings with high-flow toilets and no storage elements, and the upper end representing buildings that feature low-flow fixtures and storage tanks of up to 5 m³. When the reliability curves developed are used to size RWH systems to flush the low-flow toilets of all multifamily buildings found in a typical residential neighborhood in the Bronx, rooftop runoff inputs to the sewer system are reduced by approximately 28% over an average rainfall year, and potable water demand is reduced by approximately 53%.
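The occurrence-and-amount generator described in this record can be sketched compactly: day-of-year wet/dry transition probabilities drive a first-order Markov chain, and wet-day amounts are bootstrapped from a moving 15-day window. This is a simplified, hypothetical reimplementation under the assumption that `records[d]` holds the historical depths for day-of-year `d` aligned by year; it is not the authors' code.

```python
import random

def transition_probs(records, d):
    """Estimate day-specific wet/dry transition probabilities from paired
    observations of day d-1 and day d across the record years.
    Returns (P(wet | previous dry), P(wet | previous wet))."""
    prev, cur = records[d - 1], records[d]  # d = 0 wraps to the last day
    n_wet = sum(1 for a in prev if a > 0)
    n_dry = len(prev) - n_wet
    wet_after_wet = sum(1 for a, b in zip(prev, cur) if a > 0 and b > 0)
    wet_after_dry = sum(1 for a, b in zip(prev, cur) if a == 0 and b > 0)
    p11 = wet_after_wet / n_wet if n_wet else 0.0
    p01 = wet_after_dry / n_dry if n_dry else 0.0
    return p01, p11

def simulate_year(records, seed=0):
    """Generate one synthetic year: Markov-chain occurrence, amounts
    bootstrapped from the +/- 7-day window centred on the target day."""
    rng = random.Random(seed)
    n = len(records)
    series, wet = [], False  # assume the year starts dry
    for d in range(n):
        p01, p11 = transition_probs(records, d)
        wet = rng.random() < (p11 if wet else p01)
        if wet:
            pool = [x for k in range(d - 7, d + 8)
                    for x in records[k % n] if x > 0]
            series.append(rng.choice(pool) if pool else 0.0)
        else:
            series.append(0.0)
    return series
```

Feeding many such synthetic years through a tank water-balance model is what yields the reliability curves the abstract reports.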
Technical Analysis Feasibility Study on Smart Microgrid System in Sekolah Tinggi Teknik PLN
NASA Astrophysics Data System (ADS)
Suyanto, Heri
2018-02-01
Nowadays the application of new and renewable energy as the main resource for power plants has greatly increased. High penetration of renewable energy into the grid will influence the quality and reliability of the electricity system, due to the intermittent characteristic of new and renewable energy resources. Smart grid or microgrid technology has the ability to deal with this intermittent characteristic, especially if these renewable energy resources are integrated into the grid at large scale, so it can improve the reliability and efficiency of the grid. We plan to implement a smart microgrid system at Sekolah Tinggi Teknik PLN as a pilot project. Before the pilot project starts, a feasibility study must be conducted. In this feasibility study, the renewable energy resources and load characteristics at the site will be measured, and then the technical aspects will be analyzed. This paper presents the technical analysis of this feasibility study.
Qualitative Importance Measures of Systems Components - A New Approach and Its Applications
NASA Astrophysics Data System (ADS)
Chybowski, Leszek; Gawdzińska, Katarzyna; Wiśnicki, Bogusz
2016-12-01
The paper presents an improved methodology of analysing the qualitative importance of components in the functional and reliability structures of the system. We present basic importance measures, i.e. the Birnbaum's structural measure, the order of the smallest minimal cut-set, the repetition count of an i-th event in the Fault Tree and the streams measure. A subsystem of circulation pumps and fuel heaters in the main engine fuel supply system of a container vessel illustrates the qualitative importance analysis. We constructed a functional model and a Fault Tree which we analysed using qualitative measures. Additionally, we compared the calculated measures and introduced corrected measures as a tool for improving the analysis. We proposed scaled measures and a common measure taking into account the location of the component in the reliability and functional structures. Finally, we proposed an area where the measures could be applied.
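The Birnbaum structural/reliability importance mentioned above can be computed exactly for small systems by enumerating component states. The fuel-supply subsystem itself is not reproduced in the record, so the sketch below assumes a generic structure function supplied by the caller.

```python
import itertools

def birnbaum_importance(structure, p):
    """Birnbaum importance I_B(i) = h(1_i, p) - h(0_i, p): the change in
    system reliability when component i is forced up versus forced down.
    structure: maps a tuple of 0/1 component states to 0/1 (system up/down).
    p: list of component reliabilities. Exhaustive enumeration over 2^n
    states; fine for small systems like the subsystem example."""
    n = len(p)

    def h(fixed_idx, fixed_val):
        total = 0.0
        for states in itertools.product((0, 1), repeat=n):
            if states[fixed_idx] != fixed_val:
                continue
            prob = 1.0
            for i, s in enumerate(states):
                if i == fixed_idx:
                    continue
                prob *= p[i] if s else 1.0 - p[i]
            total += prob * structure(states)
        return total

    return [h(i, 1) - h(i, 0) for i in range(n)]
```

For a two-component series system the Birnbaum importance of each component equals the reliability of the other, so the less reliable component carries the higher importance; in a parallel pair the ranking reverses.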
Life Prediction Issues in Thermal/Environmental Barrier Coatings in Ceramic Matrix Composites
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Brewer, David N.; Murthy, Pappu L. N.
2001-01-01
Issues and design requirements for environmental barrier coating (EBC)/thermal barrier coating (TBC) life that are general, and those specific to the NASA Ultra-Efficient Engine Technology (UEET) development program, have been described. The current state and trend of the research, methods in vogue related to failure analysis, and the long-term behavior and life prediction of EBC/TBC systems are reported. Also, the perceived failure mechanisms, variables, and related uncertainties governing EBC/TBC system life are summarized. A combined heat transfer and structural analysis approach based on oxidation kinetics using the Arrhenius theory is proposed to develop a life prediction model for EBC/TBC systems. A stochastic process-based reliability approach that includes physical variables such as gas pressure, temperature, velocity, moisture content, crack density, oxygen content, etc., is suggested. Benefits of the reliability-based approach are also discussed in the report.
Developments in Cylindrical Shell Stability Analysis
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Starnes, James H., Jr.
1998-01-01
Today high-performance computing systems and new analytical and numerical techniques enable engineers to explore the use of advanced materials for shell design. This paper reviews some of the historical developments of shell buckling analysis and design. The paper concludes by identifying key research directions for reliable and robust methods development in shell stability analysis and design.
Karanikola, Maria N K; Papathanassoglou, Elizabeth D E
2015-02-01
The Index of Work Satisfaction (IWS) is a comprehensive scale assessing nurses' professional satisfaction. The aim of the present study was to explore: a) the applicability, reliability and validity of the Greek version of the IWS and b) contrasts among the factors addressed by IWS against the main themes emerging from a qualitative phenomenological investigation of nurses' professional experiences. A descriptive correlational design was applied using a sample of 246 emergency and critical care nurses. Internal consistency and test-retest reliability were tested. Construct and content validity were assessed by factor analysis, and through qualitative phenomenological analysis with a purposive sample of 12 nurses. Scale factors were contrasted to qualitative themes to assure that IWS embraces all aspects of Greek nurses' professional satisfaction. The internal consistency (α = 0.81) and test-retest (tau = 1, p < 0.0001) reliability were adequate. Following appropriate modifications, factor analysis confirmed the construct validity of the scale and subscales. The qualitative data partially clarified the low reliability of one subscale. The Greek version of the IWS scale is supported for use in acute care. The mixed methods approach constitutes a powerful tool for transferring scales to different cultures and healthcare systems. Copyright © 2014 Elsevier Inc. All rights reserved.
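The internal-consistency coefficient reported above (Cronbach's alpha = 0.81) is computed from item-score and total-score variances. A minimal sketch follows; the scores used in the test are hypothetical, not the study's data.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for internal consistency.
    scores: list of respondents, each a list of item scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = len(scores[0])  # number of items

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

When items move in lockstep the item variances sum to half the total variance (for two items), driving alpha to 1; uncorrelated items drive it toward 0.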
IDHEAS – A NEW APPROACH FOR HUMAN RELIABILITY ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
G. W. Parry; J.A Forester; V.N. Dang
2013-09-01
This paper describes a method, IDHEAS (Integrated Decision-Tree Human Event Analysis System), that has been developed jointly by the US NRC and EPRI as an improved approach to Human Reliability Analysis (HRA) that is based on an understanding of the cognitive mechanisms and performance influencing factors (PIFs) that affect operator responses. The paper describes the various elements of the method, namely the performance of a detailed cognitive task analysis that is documented in a crew response tree (CRT), and the development of the associated time-line to identify the critical tasks, i.e. those whose failure results in a human failure event (HFE), and an approach to quantification that is based on explanations of why the HFE might occur.
Design and reliability analysis of DP-3 dynamic positioning control architecture
NASA Astrophysics Data System (ADS)
Wang, Fang; Wan, Lei; Jiang, Da-Peng; Xu, Yu-Ru
2011-12-01
As the exploration and exploitation of oil and gas proliferate throughout deepwater areas, the requirements on the reliability of dynamic positioning systems become increasingly stringent. The control objective of ensuring safe operation in deep water cannot be met by a single controller for dynamic positioning. In order to increase the availability and reliability of the dynamic positioning control system, triple-redundancy hardware and software control architectures were designed and developed according to the safety specifications of the DP-3 classification notation for dynamically positioned ships and rigs. The hardware redundant configuration takes the form of a triple-redundant hot-standby configuration including three identical operator stations and three real-time control computers which connect to each other through dual networks. The functions of motion control and redundancy management of the control computers were implemented in software on the real-time operating system VxWorks. The software realization of task loose synchronization, majority voting and fault detection is presented in detail. A hierarchical software architecture was planned during the development of the software, consisting of an application layer, a real-time layer and a physical layer. The behavior of the DP-3 dynamic positioning control system was modeled by a Markov model to analyze its reliability. The effects of variation in parameters on the reliability measures were investigated. A time-domain dynamic simulation was carried out on a deepwater drilling rig to prove the feasibility of the proposed control architecture.
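The reliability gain from the triple-redundant, majority-voting configuration described above can be illustrated with standard redundancy formulas. This is a sketch assuming independent computers with a constant failure rate and no repair; the paper's Markov model with repair is more detailed.

```python
import math

def triple_redundant_reliability(lam, t):
    """Mission reliability of a triple hot-standby set: the system survives
    as long as at least one of three identical computers, each with constant
    failure rate lam, survives to time t."""
    r = math.exp(-lam * t)  # single-computer reliability
    return 1.0 - (1.0 - r) ** 3

def two_of_three_reliability(lam, t):
    """Mission reliability under 2-out-of-3 majority voting: at least two
    of the three computers must survive for the vote to be trustworthy."""
    r = math.exp(-lam * t)
    return 3 * r**2 - 2 * r**3
```

With single-computer reliability r = e^(-lam*t), 1-of-3 hot standby gives 1-(1-r)^3 and 2-of-3 voting gives 3r^2-2r^3; voting gives up some raw survival probability in exchange for masking a faulty computer's outputs.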
Software Fault Tolerance: A Tutorial
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2000-01-01
Because of our present inability to produce error-free software, software fault tolerance is and will continue to be an important consideration in software systems. The root cause of software design errors is the complexity of the systems. Compounding the problems in building correct software is the difficulty in assessing the correctness of software for highly complex systems. After a brief overview of the software development processes, we note how hard-to-detect design faults are likely to be introduced during development and how software faults tend to be state-dependent and activated by particular input sequences. Although component reliability is an important quality measure for system level analysis, software reliability is hard to characterize and the use of post-verification reliability estimates remains a controversial issue. For some applications software safety is more important than reliability, and fault tolerance techniques used in those applications are aimed at preventing catastrophes. Single version software fault tolerance techniques discussed include system structuring and closure, atomic actions, inline fault detection, exception handling, and others. Multiversion techniques are based on the assumption that software built differently should fail differently and thus, if one of the redundant versions fails, it is expected that at least one of the other versions will provide an acceptable output. Recovery blocks, N-version programming, and other multiversion techniques are reviewed.
Validity and reliability of the Ergomopro powermeter.
Kirkland, A; Coleman, D; Wiles, J D; Hopker, J
2008-11-01
The aim of this investigation was to assess the validity and reliability of the Ergomopro powermeter. Nine participants completed trials on a Monark ergometer fitted with Ergomopro and SRM powermeters simultaneously recording power output. Each participant completed multiple trials at power outputs ranging from 50 to 450 W. The work stages recorded were 60 s in duration and were repeated three times. Participants also completed a single trial on a cycle ergometer designed to assess bilateral contributions to work output (Lode Excalibur Sport PFM). The power output during the trials was significantly different between all three systems (p < 0.01): 231.2 +/- 114.2 W, 233.0 +/- 112.4 W and 227.8 +/- 108.8 W for the Monark, SRM and Ergomopro systems, respectively. When the bilateral contributions were factored into the analysis, there were no significant differences between the powermeters (p = 0.58). The reliability of the Ergomopro system (CV%) was 2.31% (95% CI 2.13-2.52%) compared to 1.59% (95% CI 1.47-1.74%) for the Monark, and 1.37% (95% CI 1.26-1.50%) for the SRM powermeter. These results indicate that the Ergomopro system has acceptable accuracy under these conditions. However, based on the reliability data, the increased variability of the Ergomopro system and bilateral balance issues have to be considered when using this device.
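The reliability statistic reported above (CV%) is the coefficient of variation across repeated trials, expressed as a percentage of the mean. A minimal sketch, with the trial values in the test being hypothetical:

```python
def cv_percent(values):
    """Coefficient of variation across repeated trials, as a percentage of
    the mean, using the sample standard deviation (n - 1 denominator)."""
    n = len(values)
    m = sum(values) / n
    sd = (sum((x - m) ** 2 for x in values) / (n - 1)) ** 0.5
    return 100.0 * sd / m
```

A lower CV% means the device reproduces the same power reading more tightly across repeats, which is how the SRM (1.37%) outperforms the Ergomopro (2.31%) in the study.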
A Risk-Based Approach for Aerothermal/TPS Analysis and Testing
2007-07-01
RTO-EN-AVT-142 17 - 1 A Risk-Based Approach for Aerothermal/TPS Analysis and Testing Michael J. Wright and Jay H. Grinstead, NASA Ames ... of the thermal protection system (TPS) is to protect the payload (crew, cargo, or science) from this entry heating environment. The performance of ... the TPS is determined by the efficiency and reliability of this system, typically measured
Edouard, Pascal; Junge, Astrid; Kiss-Polauf, Marianna; Ramirez, Christophe; Sousa, Monica; Timpka, Toomas; Branco, Pedro
2018-03-01
The quality of epidemiological injury data depends on the reliability of reporting to an injury surveillance system. Ascertaining whether all physicians/physiotherapists report the same information for the same injury case is of major interest to determine data validity. The aim of this study was therefore to analyse data collection reliability through analysis of inter-rater reliability. Cross-sectional survey. During the 2016 European Athletics Advanced Athletics Medicine Course in Amsterdam, all national medical teams were asked to complete seven virtual case reports on a standardised injury report form using the same definitions and classifications of injuries as the international athletics championships injury surveillance protocol. The completeness of data and the Fleiss' kappa coefficients for inter-rater reliability were calculated for: sex, age, event, circumstance, location, type, assumed cause and estimated time-loss. Forty-one team physicians and physiotherapists of national medical teams participated in the study (response rate 89.1%). Data completeness was 96.9%. The Fleiss' kappa coefficients were: almost perfect for sex (k=1), injury location (k=0.991), event (k=0.953), circumstance (k=0.942) and age (k=0.870); moderate for type (k=0.507); fair for assumed cause (k=0.394); and poor for estimated time-loss (k=0.155). The injury surveillance system used during international athletics championships provided reliable data for "sex", "location", "event", "circumstance" and "age". More caution should be taken for "assumed cause" and "type", and even more for "estimated time-loss". This injury surveillance system displays satisfactory data quality (reliable data and high data completeness) and thus can be recommended as a tool to collect epidemiological information on injuries during international athletics championships. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
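The agreement statistic used in the study, Fleiss' kappa for multiple raters, can be computed from a subject-by-category count table. A sketch with hypothetical counts (the study's own rating tables are not reproduced in the record):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for agreement among a fixed number of raters.
    counts[i][j] = number of raters assigning subject i to category j;
    every subject must be rated by the same number of raters."""
    n = len(counts)        # subjects
    m = sum(counts[0])     # raters per subject
    k = len(counts[0])     # categories
    # overall proportion of assignments falling in each category
    p_j = [sum(row[j] for row in counts) / (n * m) for j in range(k)]
    # per-subject observed agreement among rater pairs
    P_i = [(sum(c * c for c in row) - m) / (m * (m - 1)) for row in counts]
    P_bar = sum(P_i) / n                  # mean observed agreement
    P_e = sum(p * p for p in p_j)         # chance-expected agreement
    return (P_bar - P_e) / (1 - P_e)
```

Kappa of 1 corresponds to every rater agreeing on every subject (as for "sex" in the study), while values near 0 mean agreement no better than chance, which is why "estimated time-loss" (k=0.155) is flagged as unreliable.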
Composite power system well-being analysis
NASA Astrophysics Data System (ADS)
Aboreshaid, Saleh Abdulrahman Saleh
The evaluation of composite system reliability is extremely complex as it is necessary to include detailed modeling of both generation and transmission facilities and their auxiliary elements. The most significant quantitative indices in composite power system adequacy evaluation are those which relate to load curtailment. Many utilities have difficulty in interpreting the expected load curtailment indices as the existing models are based on adequacy analysis and in many cases do not consider realistic operating conditions in the system under study. This thesis presents a security based approach which alleviates this difficulty and provides the ability to evaluate the well-being of customer load points and the overall composite generation and transmission power system. Acceptable deterministic criteria are included in the probabilistic evaluation of the composite system reliability indices to monitor load point well-being. The degree of load point well-being is quantified in terms of the healthy and marginal state indices in addition to the traditional risk indices. The individual well-being indices of the different system load points are aggregated to produce system indices. This thesis presents new models and techniques to quantify the well-being of composite generation and direct- and alternating-current transmission systems. Security constraints are basically the operating limits which must be satisfied for normal system operation. These constraints depend mainly on the purpose behind the study. The constraints which govern the practical operation of a power system are divided, in this thesis, into three sets, namely steady-state, voltage-stability, and transient-stability constraints. The inclusion of an appropriate transient stability constraint will lead to a more accurate appraisal of the overall power system well-being.
This thesis illustrates the utilization of a bisection method in the analytical evaluation of the critical clearing time, which forms the basis of most existing stability assessments. The effect of employing high-speed simultaneous or adaptive reclosing schemes is presented in this thesis. An effective and fast technique to incorporate voltage stability considerations in composite generation and transmission system reliability evaluation is also presented. The proposed technique can be easily incorporated in an existing composite power system reliability program using voltage stability constraints that are constructed for individual load points based on a relatively simple risk index. It is believed that the concepts, procedures and indices presented in this thesis will provide useful tools for power system designers, planners and operators and assist them in performing composite system well-being analysis in addition to traditional risk assessment.
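The bisection search for the critical clearing time can be sketched as follows. This is a generic illustration, not the thesis's implementation: `is_stable` is a stand-in predicate (in practice a transient-stability simulation) assumed to be True below the critical time and False above it.

```python
def critical_clearing_time(is_stable, t_lo=0.0, t_hi=1.0, tol=1e-4):
    """Bisection for the largest fault-clearing time (in seconds) that
    keeps the system transiently stable, assuming is_stable(t) is True
    for t below the critical time and False above it."""
    assert is_stable(t_lo) and not is_stable(t_hi), "bracket must straddle the boundary"
    while t_hi - t_lo > tol:
        mid = 0.5 * (t_lo + t_hi)
        if is_stable(mid):
            t_lo = mid      # still stable: critical time lies above mid
        else:
            t_hi = mid      # unstable: critical time lies below mid
    return 0.5 * (t_lo + t_hi)

# illustration with a stand-in stability boundary at 0.35 s
cct = critical_clearing_time(lambda t: t < 0.35)
print(round(cct, 3))  # converges near 0.35
```

Each iteration halves the bracket, so the cost is logarithmic in the required tolerance; a relay or reclosing scheme is then judged against this critical time.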
Preliminary design and analysis of an advanced rotorcraft transmission
NASA Technical Reports Server (NTRS)
Henry, Z. S.
1990-01-01
Future rotorcraft transmissions of the 1990s and beyond the year 2000 require the incorporation of key emerging material and component technologies using advanced and innovative design practices in order to meet the requirements for a reduced weight-to-power ratio, a decreased noise level, and a substantially increased reliability. The specific goals for future rotorcraft transmissions when compared with current state-of-the-art transmissions are a 25 percent weight reduction, a 10-dB reduction in the transmitted noise level, and a system reliability of 5000 hours mean-time-between-removal for the transmission. This paper presents the results of the design studies conducted to meet the stated goals for an advanced rotorcraft transmission. These design studies include system configuration, planetary gear train selection, and reliability prediction methods.
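A mean-time-between-removal goal like the 5000 hours above translates into mission reliability under the common constant-failure-rate (exponential) model, R(t) = exp(-t/MTBF). A minimal sketch (the 100-hour mission length is an illustrative assumption, not from the paper):

```python
import math

def mission_reliability(mtbf_hours, mission_hours):
    """Probability of completing a mission without removal under a
    constant-failure-rate (exponential) model: R(t) = exp(-t / MTBF)."""
    return math.exp(-mission_hours / mtbf_hours)

# 5000 h mean-time-between-removal goal, hypothetical 100 h mission
print(round(mission_reliability(5000, 100), 4))  # → 0.9802
```

The exponential model is only a first approximation for gearboxes, whose wear-out failures are often better captured by a Weibull distribution, but it shows how the 5000-hour goal bounds per-mission risk.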
Reliability Prediction Analysis: Airborne System Results and Best Practices
NASA Astrophysics Data System (ADS)
Silva, Nuno; Lopes, Rui
2013-09-01
This article presents the results of several reliability prediction analyses for aerospace components, performed with both methodologies, the 217F and the 217Plus. Supporting and complementary activities are described, as well as the differences between the results and applications of the two methodologies, summarized in a set of lessons learned that are very useful for RAMS and Safety Prediction practitioners. The effort that these activities require is also an important point that is discussed, as are the end results and their interpretation/impact on the system design. The article concludes by positioning these activities and methodologies in an overall process for space and aeronautics equipment/component certification, and by highlighting their advantages. Some good practices have also been summarized and some reuse rules have been laid down.
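Both 217F and 217Plus are, at their core, parts-count/parts-stress methods: the system failure rate is the sum of adjusted part failure rates. A minimal parts-count sketch; the part names, base rates, and quality factors below are illustrative assumptions, not values from the actual handbook tables:

```python
# Hypothetical parts list: (name, base failure rate lambda_b in
# failures per million hours, quality factor pi_Q). Values are
# illustrative only, not taken from the MIL-HDBK-217F tables.
parts = [
    ("capacitor", 0.012, 1.0),
    ("resistor",  0.002, 3.0),
    ("ic",        0.050, 2.0),
]

# Parts-count method: system failure rate is the sum of the adjusted
# part failure rates, lambda = sum(lambda_b * pi_Q).
lambda_system = sum(lam_b * pi_q for _, lam_b, pi_q in parts)
mtbf_hours = 1e6 / lambda_system   # convert from failures per 1e6 h
print(round(lambda_system, 3), round(mtbf_hours))
```

Real handbook analyses add environment and stress factors (pi_E, pi_T, ...) per part category, which is where most of the effort discussed in the article goes.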
2016-03-14
flows, or continuous state changes, with feedback loops and lags modeled in the flow system. Agent-based simulations operate using a discrete event...DeLand, S. M., Rutherford, B. M., Diegert, K. V., & Alvin, K. F. (2002). Error and uncertainty in modeling and simulation. Reliability Engineering...intrinsic complexity of the underlying social systems fundamentally limits the ability to make
Coder Drift: A Reliability Problem for Teacher Observations.
ERIC Educational Resources Information Center
Marston, Paul T.; And Others
The results of two experiments support the hypothesis of "coder drift" which is defined as change that takes place while trained coders are using a system for a number of classroom observation sessions. The coding system used was a modification of the low-inference Flanders System of Interaction Analysis which calls for assigning…
Validity and Reliability of Chemistry Systemic Multiple Choices Questions (CSMCQs)
ERIC Educational Resources Information Center
Priyambodo, Erfan; Marfuatun
2016-01-01
Nowadays, Rasch model analysis is used widely in social research, and moreover in educational research. In this research, the Rasch model is used to determine the validity and the reliability of systemic multiple choice questions in chemistry teaching and learning. There were 30 multiple choice questions with a systemic approach for high school student…
NASA Technical Reports Server (NTRS)
1979-01-01
Contractor information requirements necessary to support the power extension package project of the space shuttle program are specified for the following categories of data: project management; configuration management; systems engineering and test; manufacturing; reliability, quality assurance and safety; logistics; training; and operations.
Research Staff | Advanced Manufacturing Research | NREL
SYSTEMS CENTER Kevin Bennion leads NREL's Thermal Sciences and Systems research task, focused on vehicle thermal management and vehicle systems analysis. He came to NREL from Ford Motor Company, where he focused on thermal management and reliability for power electronics and electric machines for several
Glenn, Jordan M; Galey, Madeline; Edwards, Abigail; Rickert, Bradley; Washington, Tyrone A
2015-07-01
Ability to generate force from the core musculature is a critical factor for sports and general activities, with insufficiencies predisposing individuals to injury. This study evaluated isometric force production as a valid and reliable method of assessing abdominal force using the abdominal test and evaluation systems tool (ABTEST). Secondary analysis estimated 1-repetition maximum on a commercially available abdominal machine compared to maximum force and average power on the ABTEST system. This study utilized test-retest reliability and comparative analysis for validity. Reliability was measured using a test-retest design on ABTEST. Validity was measured via comparison to estimated 1-repetition maximum on a commercially available abdominal device. Participants applied isometric, abdominal force against a transducer and muscular activation was evaluated measuring normalized electromyographic activity at the rectus-abdominus, rectus-femoris, and erector-spinae. Test, re-test force production on ABTEST was significantly correlated (r=0.84; p<0.001). Mean electromyographic activity for the rectus-abdominus (72.93% and 75.66%), rectus-femoris (6.59% and 6.51%), and erector-spinae (6.82% and 5.48%) were observed for trial-1 and trial-2, respectively. Significant correlations for the estimated 1-repetition maximum were found for average power (r=0.70, p=0.002) and maximum force (r=0.72, p<0.001). Data indicate the ABTEST can accurately measure rectus-abdominus force isolated from hip-flexor involvement. Negligible activation of the erector-spinae substantiates little subjective effort among participants in the lower back. Results suggest ABTEST is a valid and reliable method of evaluating abdominal force. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
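The test-retest coefficients above (e.g. r=0.84) are Pearson correlations between paired trials. A minimal sketch of the statistic; the force readings below are hypothetical, not the study's data:

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation between paired measurements, the statistic
    underlying test-retest reliability coefficients."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# hypothetical trial-1 / trial-2 isometric force readings (N)
trial1 = [410, 520, 365, 480, 455]
trial2 = [400, 535, 350, 470, 465]
print(round(pearson_r(trial1, trial2), 3))
```

Note that Pearson r captures consistency of rank and spread but not systematic bias between trials; intraclass correlation is often preferred for absolute agreement.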
Reliable Video Analysis Helps Security Company Grow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meurer, Dave; Furgal, Dave; Hobson, Rick
Armed Response Team (ART) has grown to become the largest locally owned security company in New Mexico. With technical assistance from Sandia National Laboratories through the New Mexico Small Business Assistance (NMSBA) Program, ART was able to quickly bring workable video security solutions to market. By offering a reliable video analytic camera system, they have been able to reduce theft, add hundreds of clients, and increase their number of employees.