NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Boyd, Mark A.; Geist, Robert M.; Smotherman, Mark D.
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. Volume 1 provides an introduction to the HARP program. Comprehensive information on HARP mathematical models can be found in the references.
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Rothmann, Elizabeth; Mittal, Nitin; Koppen, Sandra Howell
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical preprocessor Graphics Oriented (GO) program. GO is a graphical user interface for the HARP engine that enables the drawing of reliability/availability models on a monitor. A mouse is used to select fault tree gates or Markov graphical symbols from a menu for drawing.
Design of fuel cell powered data centers for sufficient reliability and availability
NASA Astrophysics Data System (ADS)
Ritchie, Alexa J.; Brouwer, Jacob
2018-04-01
It is challenging to design a sufficiently reliable fuel cell electrical system for use in data centers, which require 99.9999% uptime. Such a system could lower emissions and increase data center efficiency, but the reliability and availability of such a system must be analyzed and understood. Currently, extensive backup equipment is used to ensure electricity availability. The proposed design alternative uses multiple fuel cell systems, each supporting a small number of servers, to eliminate backup power equipment, provided the fuel cell design has sufficient reliability and availability. Potential system designs are explored for the entire data center and for individual fuel cells. Reliability block diagram analysis of the fuel cell systems was performed to understand the reliability of the systems without repair or redundant technologies. From this analysis, it was apparent that redundant components would be necessary. A program was written in MATLAB to show that the desired system reliability could be achieved by a combination of parallel components, regardless of the number of additional components needed. Having shown that the desired reliability was achievable through some combination of components, a dynamic programming analysis was undertaken to assess the ideal allocation of parallel components.
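The MATLAB program described in the abstract is not available here, but the core calculation it performs (how many parallel fuel cell systems a redundant group needs before it reaches a target availability) can be sketched in Python as below. This is a minimal illustration assuming independent units and an invented per-unit availability; only the 99.9999% ("six nines") target comes from the abstract.

```python
def parallel_availability(unit_availability: float, n_units: int) -> float:
    """Availability of n independent units in parallel (at least one must work)."""
    return 1.0 - (1.0 - unit_availability) ** n_units

def units_needed(unit_availability: float, target: float = 0.999999) -> int:
    """Smallest number of parallel units whose combined availability meets the target."""
    n = 1
    while parallel_availability(unit_availability, n) < target:
        n += 1
    return n

if __name__ == "__main__":
    a_unit = 0.98  # assumed availability of one fuel cell system, for illustration only
    print(units_needed(a_unit))  # -> 4 parallel units reach six nines in this example
```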
System reliability approaches for advanced propulsion system structures
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Mahadevan, S.
1991-01-01
This paper identifies significant issues that pertain to the estimation and use of system reliability in the design of advanced propulsion system structures. Linkages between the reliabilities of individual components and their effect on system design issues such as performance, cost, availability, and certification are examined. The need for system reliability computation to address the continuum nature of propulsion system structures and synergistic progressive damage modes has been highlighted. Available system reliability models are observed to apply only to discrete systems. Therefore a sequential structural reanalysis procedure is formulated to rigorously compute the conditional dependencies between various failure modes. The method is developed in a manner that supports both top-down and bottom-up analyses in system reliability.
NASA Technical Reports Server (NTRS)
Sproles, Darrell W.; Bavuso, Salvatore J.
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical postprocessor program HARPO (HARP Output). HARPO reads ASCII files generated by HARP. It provides an interactive plotting capability that can be used to display alternate model data for trade-off analyses. File data can also be imported to other commercial software programs.
A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp
High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present preliminary results.
18 CFR 40.3 - Availability of Reliability Standards.
Code of Federal Regulations, 2010 CFR
2010-04-01
Title 18, Conservation of Power and Water Resources; Federal Energy Regulatory Commission; ... the Bulk-Power System; § 40.3 Availability of Reliability Standards. The Electric Reliability...
A study on reliability of power customer in distribution network
NASA Astrophysics Data System (ADS)
Liu, Liyuan; Ouyang, Sen; Chen, Danling; Ma, Shaohua; Wang, Xin
2017-05-01
The existing power supply reliability index system is oriented to the power system without considering the actual electricity availability on the customer side. In addition, it is unable to reflect outages or customer equipment shutdowns caused by instantaneous interruptions and power quality problems. This paper therefore makes a systematic study of the reliability of power customers. By comparison with power supply reliability, the reliability of power customers is defined and its evaluation requirements are extracted. An index system, consisting of seven customer indexes and two contrast indexes, is designed to describe the reliability of power customers in terms of continuity and availability. In order to comprehensively and quantitatively evaluate the reliability of power customers in distribution networks, a reliability evaluation method is proposed based on an improved entropy method and the punishment weighting principle. Practical application has proved that the reliability index system and evaluation method for power customers are reasonable and effective.
Optimizing preventive maintenance policy: A data-driven application for a light rail braking system.
Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel
2017-10-01
This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and get further insights by a dedicated sensitivity analysis. The model is then used in a sequential optimization framework determining preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine the optimal maintenance policy: the same system availability and reliability can be achieved with a 30% maintenance cost reduction by prolonging the intervals and re-grouping maintenance actions.
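The study's Weibull degradation model and cost data are not reproduced here; the sketch below only illustrates, with invented shape, scale, and cost parameters, the kind of trade-off the abstract describes: longer preventive maintenance intervals lower the yearly maintenance cost but reduce the reliability reached at the end of each interval.

```python
import math

def weibull_reliability(t_hours: float, beta: float, eta: float) -> float:
    """R(t) = exp(-(t/eta)**beta) for a Weibull-distributed time to failure."""
    return math.exp(-((t_hours / eta) ** beta))

def annual_pm_cost(interval_hours: float, cost_per_action: float,
                   operating_hours_per_year: float = 6000.0) -> float:
    """Yearly preventive-maintenance cost if one action is performed every interval."""
    return (operating_hours_per_year / interval_hours) * cost_per_action

# Assumed shape/scale parameters (wear-out, beta > 1) and costs, for illustration only.
beta, eta = 2.0, 10_000.0
for interval in (1000, 2000, 3000):  # candidate preventive maintenance intervals in hours
    r = weibull_reliability(interval, beta, eta)
    c = annual_pm_cost(interval, cost_per_action=500.0)
    print(f"interval={interval:5d} h  R(interval)={r:.3f}  PM cost/yr={c:7.0f}")
```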
ETARA PC version 3.3 user's guide: Reliability, availability, maintainability simulation model
NASA Technical Reports Server (NTRS)
Hoffman, David J.; Viterna, Larry A.
1991-01-01
A user's manual describing an interactive, menu-driven, personal computer based Monte Carlo reliability, availability, and maintainability simulation program called event time availability reliability (ETARA) is discussed. Given a reliability block diagram representation of a system, ETARA simulates the behavior of the system over a specified period of time using Monte Carlo methods to generate block failure and repair intervals as a function of exponential and/or Weibull distributions. Availability parameters such as equivalent availability, state availability (percentage of time as a particular output state capability), continuous state duration and number of state occurrences can be calculated. Initial spares allotment and spares replenishment on a resupply cycle can be simulated. The number of block failures is tabulated both individually and by block type, as are the total downtime, repair time, and time waiting for spares. Also, maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can be calculated over a cumulative period of time or at specific points in time.
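ETARA itself is not reproduced here; the following is a minimal sketch of the simplest case it handles, a single repairable block with exponentially distributed failure and repair times, simulated by Monte Carlo to estimate availability. The MTBF, MTTR, and mission time values are assumptions for illustration.

```python
import random

def simulate_block_availability(mtbf: float, mttr: float,
                                mission_time: float, n_runs: int = 10_000) -> float:
    """Monte Carlo estimate of availability for one repairable block.

    Failure and repair intervals are drawn from exponential distributions,
    the simplest of the cases a tool of this kind handles.
    """
    total_up = 0.0
    for _ in range(n_runs):
        t, up = 0.0, 0.0
        while t < mission_time:
            ttf = random.expovariate(1.0 / mtbf)   # time to next failure
            up += min(ttf, mission_time - t)
            t += ttf
            if t >= mission_time:
                break
            t += random.expovariate(1.0 / mttr)    # repair time (downtime)
        total_up += up
    return total_up / (n_runs * mission_time)

# Assumed MTBF, MTTR and mission time, for illustration only.
print(simulate_block_availability(mtbf=500.0, mttr=20.0, mission_time=8760.0))
# The estimate should land near the steady-state value 500 / (500 + 20), about 0.962.
```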
Systems Issues In Terrestrial Fiber Optic Link Reliability
NASA Astrophysics Data System (ADS)
Spencer, James L.; Lewin, Barry R.; Lee, T. Frank S.
1990-01-01
This paper reviews fiber optic system reliability issues from three different viewpoints - availability, operating environment, and evolving technologies. Present availability objectives for interoffice links and for the distribution loop must be re-examined for applications such as the Synchronous Optical Network (SONET), Fiber-to-the-Home (FTTH), and analog services. The hostile operating environments of emerging applications (such as FTTH) must be carefully considered in system design as well as reliability assessments. Finally, evolving technologies might require the development of new reliability testing strategies.
HiRel - Reliability/availability integrated workstation tool
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Dugan, Joanne B.
1992-01-01
The HiRel software tool is described and demonstrated by application to the mission avionics subsystem of the Advanced System Integration Demonstrations (ASID) system that utilizes the PAVE PILLAR approach. HiRel marks another accomplishment toward the goal of producing a totally integrated computer-aided design (CAD) workstation design capability. Since a reliability engineer generally represents a reliability model graphically before it can be solved, the use of a graphical input description language increases productivity and decreases the incidence of error. The graphical postprocessor module HARPO makes it possible for reliability engineers to quickly analyze huge amounts of reliability/availability data to observe trends due to exploratory design changes. The addition of several powerful HARP modeling engines provides the user with a reliability/availability modeling capability for a wide range of system applications all integrated under a common interactive graphical input-output capability.
NASA Astrophysics Data System (ADS)
Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram
2017-03-01
The binary-state (success or failed) assumptions used in conventional reliability analysis are inappropriate for the reliability analysis of complex industrial systems due to a lack of sufficient probabilistic information. For large complex systems, the uncertainty of each individual parameter enhances the uncertainty of the system reliability. In this paper, the concept of fuzzy reliability has been used for reliability analysis of the system, and the effect of the coverage factor and of subsystem failure and repair rates on the fuzzy availability of a fault-tolerant crystallization system of a sugar plant is analyzed. Mathematical modeling of the system is carried out using the mnemonic rule to derive the Chapman-Kolmogorov differential equations. These governing differential equations are solved with the fourth-order Runge-Kutta method.
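The crystallization-system model itself is not given in the abstract, so the sketch below solves the Chapman-Kolmogorov equations of a generic two-state (up/down) Markov availability model with the classical fourth-order Runge-Kutta method, which is the numerical scheme the abstract names. The failure and repair rates are assumed values.

```python
def rk4_step(f, t, p, h):
    """One classical fourth-order Runge-Kutta step for dp/dt = f(t, p)."""
    k1 = f(t, p)
    k2 = f(t + h / 2, [pi + h / 2 * ki for pi, ki in zip(p, k1)])
    k3 = f(t + h / 2, [pi + h / 2 * ki for pi, ki in zip(p, k2)])
    k4 = f(t + h, [pi + h * ki for pi, ki in zip(p, k3)])
    return [pi + h / 6 * (a + 2 * b + 2 * c + d)
            for pi, a, b, c, d in zip(p, k1, k2, k3, k4)]

# Chapman-Kolmogorov equations of a two-state (up/down) Markov availability model:
#   dP_up/dt   = -lam * P_up + mu * P_down
#   dP_down/dt =  lam * P_up - mu * P_down
lam, mu = 0.01, 0.5  # assumed failure and repair rates (per hour)

def ck_equations(t, p):
    p_up, p_down = p
    return [-lam * p_up + mu * p_down, lam * p_up - mu * p_down]

p, t, h = [1.0, 0.0], 0.0, 0.1
while t < 200.0:
    p = rk4_step(ck_equations, t, p, h)
    t += h
print(f"availability after {t:.0f} h is roughly {p[0]:.4f}")  # tends to mu/(lam+mu), about 0.980
```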
How reliable are clinical systems in the UK NHS? A study of seven NHS organisations
Franklin, Bryony Dean; Moorthy, Krishna; Cooke, Matthew W; Vincent, Charles
2012-01-01
Background It is well known that many healthcare systems have poor reliability; however, the size and pervasiveness of this problem and its impact has not been systematically established in the UK. The authors studied four clinical systems: clinical information in surgical outpatient clinics, prescribing for hospital inpatients, equipment in theatres, and insertion of peripheral intravenous lines. The aim was to describe the nature, extent and variation in reliability of these four systems in a sample of UK hospitals, and to explore the reasons for poor reliability. Methods Seven UK hospital organisations were involved; each system was studied in three of these. The authors took delivery of the systems' intended outputs to be a proxy for the reliability of the system as a whole. For example, for clinical information, 100% reliability was defined as all patients having an agreed list of clinical information available when needed during their appointment. Systems factors were explored using semi-structured interviews with key informants. Common themes across the systems were identified. Results Overall reliability was found to be between 81% and 87% for the systems studied, with significant variation between organisations for some systems: clinical information in outpatient clinics ranged from 73% to 96%; prescribing for hospital inpatients 82–88%; equipment availability in theatres 63–88%; and availability of equipment for insertion of peripheral intravenous lines 80–88%. One in five reliability failures were associated with perceived threats to patient safety. Common factors causing poor reliability included lack of feedback, lack of standardisation, and issues such as access to information out of working hours. Conclusions Reported reliability was low for the four systems studied, with some common factors behind each. However, this hides significant variation between organisations for some processes, suggesting that some organisations have managed to create more reliable systems. Standardisation of processes would be expected to have significant benefit. PMID:22495099
RELAV - RELIABILITY/AVAILABILITY ANALYSIS PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
RELAV (Reliability/Availability Analysis Program) is a comprehensive analytical tool to determine the reliability or availability of any general system which can be modeled as embedded k-out-of-n groups of items (components) and/or subgroups. Both ground and flight systems at NASA's Jet Propulsion Laboratory have utilized this program. RELAV can assess current system performance during the later testing phases of a system design, as well as model candidate designs/architectures or validate and form predictions during the early phases of a design. Systems are commonly modeled as System Block Diagrams (SBDs). RELAV calculates the success probability of each group of items and/or subgroups within the system assuming k-out-of-n operating rules apply for each group. The program operates on a folding basis; i.e. it works its way towards the system level from the most embedded level by folding related groups into single components. The entire folding process involves probabilities; therefore, availability problems are performed in terms of the probability of success, and reliability problems are performed for specific mission lengths. An enhanced cumulative binomial algorithm is used for groups where all probabilities are equal, while a fast algorithm based upon "Computing k-out-of-n System Reliability", Barlow & Heidtmann, IEEE TRANSACTIONS ON RELIABILITY, October 1984, is used for groups with unequal probabilities. Inputs to the program include a description of the system and any one of the following: 1) availabilities of the items, 2) mean time between failures and mean time to repairs for the items from which availabilities are calculated, 3) mean time between failures and mission length(s) from which reliabilities are calculated, or 4) failure rates and mission length(s) from which reliabilities are calculated. The results are probabilities of success of each group and the system in the given configuration. RELAV assumes exponential failure distributions for reliability calculations and infinite repair resources for availability calculations. No more than 967 items or groups can be modeled by RELAV. If larger problems can be broken into subsystems of 967 items or less, the subsystem results can be used as item inputs to a system problem. The calculated availabilities are steady-state values. Group results are presented in the order in which they were calculated (from the most embedded level out to the system level). This provides a good mechanism to perform trade studies. Starting from the system result and working backwards, the granularity gets finer; therefore, system elements that contribute most to system degradation are detected quickly. RELAV is a C-language program originally developed under the UNIX operating system on a MASSCOMP MC500 computer. It has been modified, as necessary, and ported to an IBM PC compatible with a math coprocessor. The current version of the program runs in the DOS environment and requires a Turbo C vers. 2.0 compiler. RELAV has a memory requirement of 103 KB and was developed in 1989. RELAV is a copyrighted work with all copyright vested in NASA.
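RELAV's source is not shown here, but the k-out-of-n computation it describes for groups with unequal component probabilities can be sketched with the standard dynamic-programming recursion over the number of working components (in the spirit of the Barlow and Heidtmann reference cited above). The component reliabilities in the example are assumptions.

```python
def k_out_of_n_reliability(k: int, probs: list[float]) -> float:
    """Probability that at least k of the n independent components are working.

    Uses the standard O(n*k) recursion over the distribution of the number of
    working components, which handles unequal component probabilities.
    """
    n = len(probs)
    dist = [1.0] + [0.0] * n  # dist[j]: P(exactly j of the components seen so far work)
    for p in probs:
        for j in range(n, 0, -1):
            dist[j] = dist[j] * (1 - p) + dist[j - 1] * p
        dist[0] *= (1 - p)
    return sum(dist[k:])

# 2-out-of-3 group with assumed, unequal component reliabilities:
print(k_out_of_n_reliability(2, [0.95, 0.90, 0.85]))  # about 0.974
```

In a folding scheme like the one the abstract describes, a group result computed this way would then be fed back in as the success probability of a single equivalent component at the next level up.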
USDA-ARS?s Scientific Manuscript database
Armillaria mellea is a serious pathogen of horticultural and agricultural systems in Europe and North America. The lack of a reliable in vitro fruiting system has hindered research, and necessitated dependence on intermittently available wild-collected basidiospores. Here we describe a reliable, rep...
Applicability and Limitations of Reliability Allocation Methods
NASA Technical Reports Server (NTRS)
Cruz, Jose A.
2016-01-01
The reliability allocation process may be described as the process of assigning reliability requirements to individual components within a system to attain the specified system reliability. For large systems, the allocation process is often performed at different stages of system design. The allocation process often begins at the conceptual stage. As the system design develops and more information about components and the operating environment becomes available, different allocation methods can be considered. Reliability allocation methods are usually divided into two categories: weighting factors and optimal reliability allocation. When properly applied, these methods can produce reasonable approximations. Reliability allocation techniques have limitations and implied assumptions that need to be understood by system engineers. Applying reliability allocation techniques without understanding their limitations and assumptions can produce unrealistic results. This report addresses weighting factors and optimal reliability allocation techniques, and identifies the applicability and limitations of each reliability allocation technique.
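As an illustration of the weighting-factor category, the sketch below apportions the allowable system failure rate among series components in proportion to their predicted failure rates (an ARINC-style allocation). The failure rates, mission time, and system target are invented, and exponential failure distributions are assumed; this is not the report's own procedure.

```python
import math

def weighted_allocation(system_reliability_target: float,
                        predicted_failure_rates: list[float],
                        mission_time: float) -> list[float]:
    """Allocate component reliability targets for a series system.

    Weighting-factor idea: the allowable system failure rate is apportioned to
    each component in proportion to its predicted (e.g. historical) failure rate.
    Exponential failure distributions are assumed.
    """
    lam_system = -math.log(system_reliability_target) / mission_time
    total = sum(predicted_failure_rates)
    allocated_rates = [lam / total * lam_system for lam in predicted_failure_rates]
    return [math.exp(-lam * mission_time) for lam in allocated_rates]

# Assumed example: three components in series, 100 h mission, system target R >= 0.95.
targets = weighted_allocation(0.95, [2e-4, 5e-4, 3e-4], mission_time=100.0)
print([round(r, 4) for r in targets])  # their product recovers the 0.95 system target
```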
NASA Technical Reports Server (NTRS)
Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.
2010-01-01
Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, in a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used for legacy subsystems or comparative components that will support Constellation. The results of the verification analysis will be used to verify compliance with requirements and to highlight design or performance shortcomings for further decision-making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability and maintainability analysis, and present findings and observations based on analysis leading to the Ground Systems Preliminary Design Review milestone.
Verification of Triple Modular Redundancy Insertion for Reliable and Trusted Systems
NASA Technical Reports Server (NTRS)
Berg, Melanie; LaBel, Kenneth
2016-01-01
If a system is required to be protected using triple modular redundancy (TMR), improper insertion can jeopardize the reliability and security of the system. Due to the complexity of the verification process and the complexity of digital designs, there are currently no available techniques that can provide complete and reliable confirmation of TMR insertion. We propose a method for TMR insertion verification that satisfies the process for reliable and trusted systems.
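For readers unfamiliar with the structure being verified, the sketch below shows the 2-of-3 majority vote at the heart of TMR and the textbook reliability expression for a perfect-voter TMR arrangement. It is illustrative only and is not the verification method proposed in the abstract.

```python
def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority, the voting function at the heart of TMR."""
    return (a & b) | (a & c) | (b & c)

def tmr_reliability(r_module: float) -> float:
    """Textbook reliability of a perfect-voter TMR arrangement: 3R^2 - 2R^3."""
    return 3 * r_module ** 2 - 2 * r_module ** 3

# A single upset in one copy is masked; the two matching copies win the vote.
assert majority_vote(0b1010, 0b1010, 0b0110) == 0b1010
assert majority_vote(0b1111, 0b0000, 0b1111) == 0b1111
print(tmr_reliability(0.95))  # 0.99275: better than a single 0.95 module over this interval
```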
NASA Technical Reports Server (NTRS)
Turnquist, S. R.; Twombly, M.; Hoffman, D.
1989-01-01
A preliminary reliability, availability, and maintainability (RAM) analysis of the proposed Space Station Freedom electric power system (EPS) was performed using the unit reliability, availability, and maintainability (UNIRAM) analysis methodology. Orbital replacement units (ORUs) having the most significant impact on EPS availability measures were identified. Also, the sensitivity of the EPS to variations in ORU RAM data was evaluated for each ORU. Estimates were made of average EPS power output levels and availability of power to the core area of the space station. The results of assessments of the availability of EPS power and power to load distribution points in the space station are given. Some highlights of continuing studies being performed to understand EPS availability considerations are presented.
Reliability analysis and initial requirements for FC systems and stacks
NASA Astrophysics Data System (ADS)
Åström, K.; Fontell, E.; Virtanen, S.
In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system with respect to reliability and safety as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (a series of 5 sets of 5 parallel stacks), is analysed with respect to stack reliability requirements as a function of the predictability of critical failures and the Weibull shape factor of the failure rate distributions.
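The three-state stack model described above is qualitative and is not reproduced here; the sketch below evaluates only the simpler binary-state reliability of the example 5 × 5 configuration (five series-connected sets of five parallel stacks), with an assumed per-stack reliability and a k-of-5 requirement per set to reflect partial redundancy within each set.

```python
from math import comb

def k_of_n_identical(k: int, n: int, r: float) -> float:
    """Probability that at least k of n identical, independent stacks are working."""
    return sum(comb(n, j) * r ** j * (1 - r) ** (n - j) for j in range(k, n + 1))

def series_of_parallel(sets: int, k: int, n: int, r_stack: float) -> float:
    """Reliability of `sets` series-connected groups, each needing k of its n stacks."""
    return k_of_n_identical(k, n, r_stack) ** sets

r_stack = 0.90  # assumed per-stack reliability over the period of interest
for k in (5, 4, 3):  # how many of the 5 stacks each set must keep running
    print(f"k={k}: system reliability is roughly {series_of_parallel(5, k, 5, r_stack):.4f}")
```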
NASA Technical Reports Server (NTRS)
Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Bavuso, Salvatore J.
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. The Hybrid Automated Reliability Predictor (HARP) tutorial provides insight into HARP modeling techniques and the interactive textual prompting input language via a step-by-step explanation and demonstration of HARP's fault occurrence/repair model and the fault/error handling models. Example applications are worked in their entirety and the HARP tabular output data are presented for each. Simple models are presented at first with each succeeding example demonstrating greater modeling power and complexity. This document is not intended to present the theoretical and mathematical basis for HARP.
High Available COTS Based Computer for Space
NASA Astrophysics Data System (ADS)
Hartmann, J.; Magistrati, Giorgio
2015-09-01
The availability and reliability factors of a system are central requirements of a target application. From a simple fuel injection system used in cars up to the flight control system of an autonomously navigating spacecraft, each application defines its specific availability factor under the target application boundary conditions. Increasing quality requirements on data processing systems used in space flight applications call for new architectures to fulfill the availability and reliability demands as well as the increase in required data processing power. In contrast to the increased quality requirements, simplification and the use of COTS components to decrease costs, while keeping interface compatibility with currently used system standards, are clear customer needs. Data processing system design is mostly dominated by strict fulfillment of the customer requirements, and reuse of available computer systems was not always possible because of obsolescence of EEE parts, insufficient I/O capabilities, or the fact that available data processing systems did not provide the required scalability and performance.
Reliability issues of free-space communications systems and networks
NASA Astrophysics Data System (ADS)
Willebrand, Heinz A.
2003-04-01
Free space optics (FSO) is a high-speed point-to-point connectivity solution traditionally used in the enterprise campus networking market for building-to-building LAN connectivity. However, more recently some wire line and wireless carriers started to deploy FSO systems in their networks. The requirements on FSO system reliability, meaning both system availability and component reliability, are far more stringent in the carrier market when compared to the requirements in the enterprise market segment. This paper tries to outline some of the aspects that are important to ensure carrier-class system reliability.
NASA Technical Reports Server (NTRS)
Gillespie, Amanda M.
2012-01-01
The future of Space Exploration includes missions to the moon, asteroids, Mars, and beyond. To get there, the mission concept is to launch multiple launch vehicles months, even years apart. In order to achieve this, launch vehicles, payloads (satellites and crew capsules), and ground systems must be highly reliable and/or available, to include maintenance concepts and procedures in the event of a launch scrub. In order to achieve this high probability of mission success, Ground Systems Development and Operations (GSDO) has allocated Reliability, Maintainability, and Availability (RMA) requirements to all hardware and software required for both launch operations and, in the event of a launch scrub, required to support a repair of the ground systems, launch vehicle, or payload. This is done concurrently with the design process (30/60/90 reviews).
DOE Office of Scientific and Technical Information (OSTI.GOV)
CARLSON, A.B.
The document presents updated results of the preliminary reliability, availability, and maintainability analysis performed for delivery of waste feed from tanks 241-AZ-101 and 241-AN-105 to British Nuclear Fuels Limited, Inc. under the Tank Waste Remediation System Privatization Contract. The operational schedule delay risk is estimated and contributing factors are discussed.
NASA Technical Reports Server (NTRS)
Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.
2010-01-01
Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, within a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability, and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used to calculate failure rates for legacy subsystems or comparative components that will support Constellation. The results of the verification analysis will be used to assess compliance with requirements and to highlight design or performance shortcomings for further decision making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability, and maintainability analysis, and present findings and observation based on analysis leading to the Ground Operations Project Preliminary Design Review milestone.
Space Transportation System Availability Relationships to Life Cycle Cost
NASA Technical Reports Server (NTRS)
Rhodes, Russel E.; Donahue, Benjamin B.; Chen, Timothy T.
2009-01-01
Future space transportation architectures and designs must be affordable. Consequently, their Life Cycle Cost (LCC) must be controlled. For the LCC to be controlled, it is necessary to identify all the requirements and elements of the architecture at the beginning of the concept phase. Controlling LCC requires the establishment of the major operational cost drivers. Two of these major cost drivers are reliability and maintainability, in other words, the system's availability (responsiveness). Potential reasons that may drive the inherent availability requirement are the need to control the number of unique parts and the spare parts required to support the transportation system's operation. For more typical space transportation systems used to place satellites in space, the productivity of the system will drive the launch cost. This system productivity is the resultant output of the system availability. Availability is equal to the mean uptime divided by the sum of the mean uptime plus the mean downtime. Since many operational factors cannot be projected early in the definition phase, the focus will be on inherent availability, which is equal to the mean time between failures (MTBF) divided by the MTBF plus the mean time to repair (MTTR) the system. The MTBF is a function of reliability or the expected frequency of failures. When the system experiences failures, the result is added operational flow time, parts consumption, and increased labor with an impact to responsiveness resulting in increased LCC. The other function of availability is the MTTR, or maintainability. In other words, how accessible is the failed hardware that requires replacement and what operational functions are required before and after change-out to make the system operable. This paper will describe how the MTTR can be equated to additional labor, additional operational flow time, and additional structural access capability, all of which drive up the LCC. A methodology will be presented that provides the decision makers with the understanding necessary to place constraints on the design definition. This methodology for the major drivers will determine the inherent availability, safety, reliability, maintainability, and the life cycle cost of the fielded system. This methodology will focus on the achievement of an affordable, responsive space transportation system. It is the intent of this paper to not only provide the visibility of the relationships of these major attribute drivers (variables) to each other and the resultant system inherent availability, but also to provide the capability to bound the variables, thus providing the insight required to control the system's engineering solution. An example of this visibility is the need to provide integration of similar discipline functions to allow control of the total parts count of the space transportation system. Also, selecting a reliability requirement will place a constraint on parts count to achieve a given inherent availability requirement, or require accepting a larger parts count with the resulting higher individual part reliability requirements. This paper will provide an understanding of the relationship of mean repair time (mean downtime) to maintainability (accessibility for repair), and both mean time between failure (reliability of hardware) and the system inherent availability.
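The inherent-availability relation stated in the abstract is easy to exercise numerically; the sketch below uses invented MTBF and MTTR values purely to illustrate how the relation bounds the allowable mean downtime for a given availability target.

```python
def inherent_availability(mtbf_hours: float, mttr_hours: float) -> float:
    """A_i = MTBF / (MTBF + MTTR), the inherent-availability relation in the paper."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Invented numbers: a 200 h MTBF with an 8 h mean repair time ...
print(inherent_availability(200.0, 8.0))  # about 0.962
# ... and the mean downtime a 0.99 inherent-availability target would allow at that MTBF:
target = 0.99
print(200.0 * (1 - target) / target)      # about 2.02 h allowable MTTR
```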
NASA Astrophysics Data System (ADS)
Launch vehicle propulsion system reliability considerations during the design and verification processes are discussed. The tools available for predicting and minimizing anomalies or failure modes are described and objectives for validating advanced launch system propulsion reliability are listed. Methods for ensuring vehicle/propulsion system interface reliability are examined and improvements in the propulsion system development process are suggested to improve reliability in launch operations. Also, possible approaches to streamline the specification and procurement process are given. It is suggested that government and industry should define reliability program requirements and manage production and operations activities in a manner that provides control over reliability drivers. Also, it is recommended that sufficient funds should be invested in design, development, test, and evaluation processes to ensure that reliability is not inappropriately subordinated to other management considerations.
Towards cost-effective reliability through visualization of the reliability option space
NASA Technical Reports Server (NTRS)
Feather, Martin S.
2004-01-01
In planning a complex system's development, there can be many options to improve its reliability. Typically their total cost exceeds the available budget, so it is necessary to select judiciously from among them. Reliability models can be employed to calculate the cost and reliability implications of a candidate selection.
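One simple way to explore the option space the abstract refers to is to treat the choice of reliability-improvement options under a fixed budget as a small subset-selection problem over (cost, risk-reduction) pairs, as sketched below. The option names, costs, and risk-reduction values are invented for illustration and are not from the cited work.

```python
def best_selection(options: list[tuple[str, float, float]], budget: float):
    """Exhaustive search over subsets of (name, cost, risk_reduction) options.

    Returns the subset with the largest total risk reduction that fits the budget;
    fine for the handful of options typical of a design trade study.
    """
    best_names, best_cost, best_gain = [], 0.0, 0.0
    n = len(options)
    for mask in range(1 << n):
        chosen = [options[i] for i in range(n) if mask & (1 << i)]
        cost = sum(c for _, c, _ in chosen)
        gain = sum(g for _, _, g in chosen)
        if cost <= budget and gain > best_gain:
            best_names, best_cost, best_gain = [name for name, _, _ in chosen], cost, gain
    return best_names, best_cost, best_gain

# Invented option data, for illustration only.
opts = [("extra testing", 40.0, 0.30), ("redundant sensor", 70.0, 0.45),
        ("design review", 25.0, 0.20), ("radiation shielding", 60.0, 0.35)]
print(best_selection(opts, budget=100.0))
```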
Charlton, Paula C; Mentiplay, Benjamin F; Pua, Yong-Hao; Clark, Ross A
2015-05-01
Traditional methods of assessing joint range of motion (ROM) involve specialized tools that may not be widely available to clinicians. This study assesses the reliability and validity of a custom Smartphone application for assessing hip joint range of motion. Intra-tester reliability with concurrent validity. Passive hip joint range of motion was recorded for seven different movements in 20 males on two separate occasions. Data from a Smartphone, bubble inclinometer and a three dimensional motion analysis (3DMA) system were collected simultaneously. Intraclass correlation coefficients (ICCs), coefficients of variation (CV) and standard error of measurement (SEM) were used to assess reliability. To assess validity of the Smartphone application and the bubble inclinometer against the three dimensional motion analysis system, intraclass correlation coefficients and fixed and proportional biases were used. The Smartphone demonstrated good to excellent reliability (ICCs>0.75) for four out of the seven movements, and moderate to good reliability for the remaining three movements (ICC=0.63-0.68). Additionally, the Smartphone application displayed comparable reliability to the bubble inclinometer. The Smartphone application displayed excellent validity when compared to the three dimensional motion analysis system for all movements (ICCs>0.88) except one, which displayed moderate to good validity (ICC=0.71). Smartphones are portable and widely available tools that are mostly reliable and valid for assessing passive hip range of motion, with potential for large-scale use when a bubble inclinometer is not available. However, caution must be taken in its implementation as some movement axes demonstrated only moderate reliability.
FMEA and RAM Analysis for the Multi Canister Overpack (MCO) Handling Machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
SWENSON, C.E.
2000-06-01
The Failure Modes and Effects Analysis and the Reliability, Availability, and Maintainability Analysis performed for the Multi-Canister Overpack Handling Machine (MHM) have shown that the current design provides for a safe system, but the reliability of the system (primarily due to the complexity of the interlocks and permissive controls) is relatively low. No specific failure modes were identified where significant consequences to the public occurred, or where significant impact to nearby workers should be expected. The overall reliability calculation for the MHM shows a 98.1 percent probability of operating for eight hours without failure, and an availability of the MHM of 90 percent. The majority of the reliability issues are found in the interlocks and controls. The availability of appropriate spare parts and maintenance personnel, coupled with well written operating procedures, will play a more important role in successful mission completion for the MHM than for other, less complicated systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Divan, Deepak; Brumsickle, William; Eto, Joseph
2003-04-01
This report describes a new approach for collecting information on power quality and reliability and making it available in the public domain. Making this information readily available in a form that is meaningful to electricity consumers is necessary for enabling more informed private and public decisions regarding electricity reliability. The system dramatically reduces the cost (and expertise) needed for customers to obtain information on the most significant power quality events, called voltage sags and interruptions. The system also offers widespread access to information on power quality collected from multiple sites and the potential for capturing information on the impacts of power quality problems, together enabling a wide variety of analysis and benchmarking to improve system reliability. Six case studies demonstrate selected functionality and capabilities of the system, including: Linking measured power quality events to process interruption and downtime; Demonstrating the ability to correlate events recorded by multiple monitors to narrow and confirm the causes of power quality events; and Benchmarking power quality and reliability on a firm and regional basis.
RELIABILITY, AVAILABILITY, AND SERVICEABILITY FOR PETASCALE HIGH-END COMPUTING AND BEYOND
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chokchai "Box" Leangsuksun
2011-05-31
Our project is a multi-institutional research effort that adopts the interplay of reliability, availability, and serviceability (RAS) aspects for solving resilience issues in high-end scientific computing on the next generation of supercomputers. Results lie in the following tracks: failure prediction in large-scale HPC; investigation of reliability issues and mitigation techniques, including in GPGPU-based HPC systems; and HPC resilience runtime and tools.
Modeling and simulation of reliability of unmanned intelligent vehicles
NASA Astrophysics Data System (ADS)
Singh, Harpreet; Dixit, Arati M.; Mustapha, Adam; Singh, Kuldip; Aggarwal, K. K.; Gerhart, Grant R.
2008-04-01
Unmanned ground vehicles have a large number of scientific, military and commercial applications. A convoy of such vehicles can have collaboration and coordination. For the movement of such a convoy, it is important to predict the reliability of the system. A number of approaches are available in the literature that describe techniques for determining the reliability of a system. Graph theoretic approaches are popular in determining terminal reliability and system reliability. In this paper we propose to exploit fuzzy and neuro-fuzzy approaches for predicting the node and branch reliability of the system, while Boolean algebra approaches are used to determine terminal reliability and system reliability. Hence a combination of intelligent approaches (fuzzy, neuro-fuzzy, and Boolean) is used to predict the overall system reliability of a convoy of vehicles. The node reliabilities may correspond to the collaboration of vehicles while branch reliabilities will determine the terminal reliabilities between different nodes. An algorithm is proposed for determining the system reliabilities of a convoy of vehicles. The simulation of the overall system is proposed. Such simulation should be helpful to the commander in taking an appropriate action depending on the predicted reliability in different terrain and environmental conditions. It is hoped that the results of this paper will lead to more important techniques for obtaining a reliable convoy of vehicles in a battlefield.
Reliability and Normative Data for the Dynamic Visual Acuity Test for Vestibular Screening.
Riska, Kristal M; Hall, Courtney D
2016-06-01
The purpose of this study was to determine reliability of computerized dynamic visual acuity (DVA) testing and to determine reference values for younger and older adults. A primary function of the vestibular system is to maintain gaze stability during head motion. The DVA test quantifies gaze stabilization with the head moving versus stationary. Commercially available computerized systems allow clinicians to incorporate DVA into their assessment; however, information regarding reliability and normative values of these systems is sparse. Forty-six healthy adults, grouped by age, with normal vestibular function were recruited. Each participant completed computerized DVA testing including static visual acuity, minimum perception time, and DVA using the NeuroCom inVision System. Testing was performed by two examiners in the same session and then repeated at a follow-up session 3 to 14 days later. Intraclass correlation coefficients (ICCs) were used to determine inter-rater and test-retest reliability. ICCs for inter-rater reliability ranged from 0.323 to 0.937 and from 0.434 to 0.909 for horizontal and vertical head movements, respectively. ICCs for test-retest reliability ranged from 0.154 to 0.856 and from 0.377 to 0.9062 for horizontal and vertical head movements, respectively. Overall, raw scores (left/right DVA and up/down DVA) were more reliable than DVA loss scores. Reliability of a commercially available DVA system has poor-to-fair reliability for DVA loss scores. The use of a convergence paradigm and not incorporating the forced choice paradigm may contribute to poor reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klise, Geoffrey T.; Hill, Roger; Walker, Andy
The use of the term 'availability' to describe a photovoltaic (PV) system and power plant has been fraught with confusion for many years. A term that is meant to describe equipment operational status is often omitted, misapplied or inaccurately combined with PV performance metrics due to attempts to measure performance and reliability through the lens of traditional power plant language. This paper discusses three areas where current research in standards, contract language and performance modeling is improving the way availability is used with regards to photovoltaic systems and power plants.
ERIC Educational Resources Information Center
Park, Bitnara Jasmine; Irvin, P. Shawn; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald
2012-01-01
In this technical report, we present the results of a reliability study of the fifth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
ERIC Educational Resources Information Center
Lai, Cheng-Fei; Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Tindal, Gerald
2012-01-01
In this technical report, we present the results of a reliability study of the second-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
ERIC Educational Resources Information Center
Park, Bitnara Jasmine; Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald
2012-01-01
In this technical report, we present the results of a reliability study of the fourth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
ERIC Educational Resources Information Center
Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Lai, Cheng-Fei; Tindal, Gerald
2012-01-01
In this technical report, we present the results of a reliability study of the sixth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
ERIC Educational Resources Information Center
Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Park, Bitnara Jasmine; Tindal, Gerald
2012-01-01
In this technical report, we present the results of a reliability study of the seventh-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
ERIC Educational Resources Information Center
Lai, Cheng-Fei; Irvin, P. Shawn; Park, Bitnara Jasmine; Alonzo, Julie; Tindal, Gerald
2012-01-01
In this technical report, we present the results of a reliability study of the third-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
Evaluation methodologies for an advanced information processing system
NASA Technical Reports Server (NTRS)
Schabowsky, R. S., Jr.; Gai, E.; Walker, B. K.; Lala, J. H.; Motyka, P.
1984-01-01
The system concept and requirements for an Advanced Information Processing System (AIPS) are briefly described, but the emphasis of this paper is on the evaluation methodologies being developed and utilized in the AIPS program. The evaluation tasks include hardware reliability, maintainability and availability, software reliability, performance, and performability. Hardware RMA and software reliability are addressed with Markov modeling techniques. The performance analysis for AIPS is based on queueing theory. Performability is a measure of merit which combines system reliability and performance measures. The probability laws of the performance measures are obtained from the Markov reliability models. Scalar functions of this law such as the mean and variance provide measures of merit in the AIPS performability evaluations.
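As a concrete reading of "performability", the sketch below weights an assumed per-state performance level (reward) by the transient state probabilities of a two-state Markov reliability model, yielding the expected performance at a given time. The rates and reward values are assumptions, not AIPS figures, and the model is far simpler than the program's actual evaluations.

```python
from math import exp

def state_probabilities(lam: float, mu: float, t: float) -> tuple[float, float]:
    """Transient state probabilities of a two-state (up/degraded) Markov model."""
    s = lam + mu
    p_up = mu / s + (lam / s) * exp(-s * t)
    return p_up, 1.0 - p_up

def performability(lam: float, mu: float, t: float,
                   reward_up: float, reward_degraded: float) -> float:
    """Expected performance at time t: per-state reward weighted by state probability."""
    p_up, p_deg = state_probabilities(lam, mu, t)
    return reward_up * p_up + reward_degraded * p_deg

# Assumed rates (per hour) and per-state throughput levels, for illustration only.
print(performability(lam=0.02, mu=0.5, t=24.0, reward_up=100.0, reward_degraded=40.0))
```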
Flight control electronics reliability/maintenance study
NASA Technical Reports Server (NTRS)
Dade, W. W.; Edwards, R. H.; Katt, G. T.; Mcclellan, K. L.; Shomber, H. A.
1977-01-01
Collection and analysis of data are reported that concern the reliability and maintenance experience of flight control system electronics currently in use on passenger carrying jet aircraft. Two airlines' B-747 airplane fleets were analyzed to assess the component reliability, system functional reliability, and achieved availability of the CAT II configuration flight control system. Also assessed were the costs generated by this system in the categories of spare equipment, schedule irregularity, and line and shop maintenance. The results indicate that although there is a marked difference in the geographic location and route pattern between the airlines studied, there is a close similarity in the reliability and the maintenance costs associated with the flight control electronics.
TIGER reliability analysis in the DSN
NASA Technical Reports Server (NTRS)
Gunn, J. M.
1982-01-01
The TIGER algorithm, the inputs to the program and the output are described. TIGER is a computer program designed to simulate a system over a period of time to evaluate system reliability and availability. Results can be used in the Deep Space Network for initial spares provisioning and system evaluation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peters, Valerie A.; Ogilvie, Alistair B.
2012-01-01
This report addresses the general data requirements for reliability analysis of fielded wind turbines and other wind plant equipment. The report provides a rationale for why this data should be collected, a list of the data needed to support reliability and availability analysis, and specific data recommendations for a Computerized Maintenance Management System (CMMS) to support automated analysis. This data collection recommendations report was written by Sandia National Laboratories to address the general data requirements for reliability analysis of operating wind turbines. This report is intended to help develop a basic understanding of the data needed for reliability analysis from a Computerized Maintenance Management System (CMMS) and other data systems. The report provides a rationale for why this data should be collected, a list of the data needed to support reliability and availability analysis, and specific recommendations for a CMMS to support automated analysis. Though written for reliability analysis of wind turbines, much of the information is applicable to a wider variety of equipment and analysis and reporting needs. The 'Motivation' section of this report provides a rationale for collecting and analyzing field data for reliability analysis. The benefits of this type of effort can include increased energy delivered, decreased operating costs, enhanced preventive maintenance schedules, solutions to issues with the largest payback, and identification of early failure indicators.
Reliability and availability evaluation of Wireless Sensor Networks for industrial applications.
Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco
2012-01-01
Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger or lead to environmental damages. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach enabling the evaluation of permanent faults prevents system designers from optimizing decisions that minimize these occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize its reliability and availability requirements.
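The automatic fault-tree generation proposed in the paper is not reproduced here; the sketch below merely evaluates, by hand, a small fault tree of the kind such a methodology would produce for a tiny network, assuming independent basic events. The topology and failure probabilities are invented for illustration.

```python
def or_gate(probs):   # the event occurs if ANY input event occurs
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(probs):  # the event occurs only if ALL input events occur (redundancy)
    q = 1.0
    for p in probs:
        q *= p
    return q

# Assumed failure probabilities over one year for a tiny network: a sensor's data
# path fails if the gateway fails, or if both redundant routers feeding it fail.
p_gateway, p_router_a, p_router_b = 0.02, 0.10, 0.10
p_no_route = and_gate([p_router_a, p_router_b])      # both routers down
p_path_failure = or_gate([p_gateway, p_no_route])    # gateway down OR no route
print(f"probability of losing the data path: {p_path_failure:.4f}")  # about 0.0298
```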
Reliability and Availability Evaluation of Wireless Sensor Networks for Industrial Applications
Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco
2012-01-01
Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach for evaluating permanent faults prevents system designers from optimizing decisions that would minimize these occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements. PMID:22368497
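The two records above (the same study indexed twice) describe automatically generating a fault tree to evaluate WSN reliability and availability under permanent device faults. As a generic illustration of the kind of calculation a fault tree supports, and not the authors' generator or model, a minimal sketch assuming independent failure probabilities:

    # Minimal fault-tree evaluation sketch; gate formulas assume independent events.
    def or_gate(probs):
        # top/intermediate event occurs if ANY input event occurs
        q = 1.0
        for p in probs:
            q *= (1.0 - p)
        return 1.0 - q

    def and_gate(probs):
        # event occurs only if ALL input events occur (e.g., loss of all redundant paths)
        q = 1.0
        for p in probs:
            q *= p
        return q

    # Hypothetical WSN example: the field device is unreachable if the sink fails,
    # or both redundant routers fail, or the device itself fails.
    p_sink, p_router, p_device = 0.01, 0.05, 0.02
    p_no_path = and_gate([p_router, p_router])
    p_top = or_gate([p_sink, p_no_path, p_device])
    print(f"P(device unreachable) = {p_top:.4f}")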
Evolving Reliability and Maintainability Allocations for NASA Ground Systems
NASA Technical Reports Server (NTRS)
Munoz, Gisela; Toon, T.; Toon, J.; Conner, A.; Adams, T.; Miranda, D.
2016-01-01
This paper describes the methodology and value of modifying allocations to reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program's subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. This iterative process provided an opportunity for the reliability engineering team to reevaluate allocations as systems moved beyond their conceptual and preliminary design phases. These new allocations are based on updated designs and maintainability characteristics of the components. It was found that trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper discusses the results of reliability and maintainability reallocations made for the GSDO subsystems as the program nears the end of its design phase.
Evolving Reliability and Maintainability Allocations for NASA Ground Systems
NASA Technical Reports Server (NTRS)
Munoz, Gisela; Toon, Troy; Toon, Jamie; Conner, Angelo C.; Adams, Timothy C.; Miranda, David J.
2016-01-01
This paper describes the methodology and value of modifying allocations to reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program’s subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. This iterative process provided an opportunity for the reliability engineering team to reevaluate allocations as systems moved beyond their conceptual and preliminary design phases. These new allocations are based on updated designs and maintainability characteristics of the components. It was found that trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper discusses the results of reliability and maintainability reallocations made for the GSDO subsystems as the program nears the end of its design phase.
Evolving Reliability and Maintainability Allocations for NASA Ground Systems
NASA Technical Reports Server (NTRS)
Munoz, Gisela; Toon, Jamie; Toon, Troy; Adams, Timothy C.; Miranda, David J.
2016-01-01
This paper describes the methodology that was developed to allocate reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program's subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. Allocating is an iterative process; as systems moved beyond their conceptual and preliminary design phases, this provided an opportunity for the reliability engineering team to reevaluate allocations based on updated designs and maintainability characteristics of the components. Trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper will discuss the value of modifying reliability and maintainability allocations made for the GSDO subsystems as the program nears the end of its design phase.
Parts and Components Reliability Assessment: A Cost Effective Approach
NASA Technical Reports Server (NTRS)
Lee, Lydia
2009-01-01
System reliability assessment is a methodology which incorporates reliability analyses performed at parts and components level such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA) and Fault Tree Analysis (FTA) to assess risks, perform design tradeoffs, and therefore, to ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standard-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems based on failure rate estimates published by the United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages when the system design is still in development and hard failure data is not yet available or manufacturers are not contractually obliged by their customers to publish the reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success in an efficient manner, at low cost, and on a tight schedule.
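As a generic illustration of the standards-based (parts-count) prediction described above, the sketch below sums per-part failure rates into a series-system rate and converts it to an MTBF and a mission reliability. The part names and rates are placeholders, not values from any particular military or commercial handbook.

    from math import exp

    # Hypothetical parts list: (part, quantity, failure rate in failures per 1e6 hours)
    parts = [("microcontroller", 2, 0.35),
             ("power MOSFET",    4, 0.10),
             ("connector",       6, 0.02)]

    # Series assumption: the system failure rate is the sum of the part failure rates.
    lam_system = sum(qty * lam for _, qty, lam in parts) / 1e6   # failures per hour
    mtbf_hours = 1.0 / lam_system
    mission_hours = 5000.0
    reliability = exp(-lam_system * mission_hours)               # constant-rate model

    print(f"System failure rate: {lam_system:.2e}/h, MTBF: {mtbf_hours:.0f} h")
    print(f"R({mission_hours:.0f} h) = {reliability:.4f}")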
Liu, Zengkai; Liu, Yonghong; Cai, Baoping
2014-01-01
Reliability analysis of the electrical control system of a subsea blowout preventer (BOP) stack is carried out based on the Markov method. For the subsea BOP electrical control system used in the current work, the 3-2-1-0 and 3-2-0 input voting schemes are available. The effects of the voting schemes on system performance are evaluated based on Markov models. In addition, the effects of the failure rates of the modules and of repair time on system reliability indices are also investigated. PMID:25409010
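The study above evaluates the voting schemes with Markov models. A minimal, generic sketch of the underlying computation, solving a continuous-time Markov chain for its steady-state availability, is shown below; the two-state up/down chain and its rates are illustrative only and are not the authors' 3-2-1-0 or 3-2-0 models.

    import numpy as np

    # Generator matrix for a simple repairable up/down chain: state 0 = up, state 1 = down.
    lam, mu = 1e-4, 1e-2                     # failure and repair rates per hour (illustrative)
    Q = np.array([[-lam,  lam],
                  [  mu,  -mu]])

    # The steady-state vector pi solves pi @ Q = 0 with sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(2)])
    b = np.array([0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(f"Steady-state availability: {pi[0]:.6f}")   # equals mu / (lam + mu) here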
NASA Astrophysics Data System (ADS)
Vainshtein, Igor; Baruch, Shlomi; Regev, Itai; Segal, Victor; Filis, Avishai; Riabzev, Sergey
2018-05-01
The growing demand for EO applications that work around the clock, 24 hours a day and 7 days a week, such as border surveillance systems, emphasizes the need for a highly reliable cryocooler with increased operational availability and optimized system-level Integrated Logistic Support (ILS). In order to meet this need, RICOR developed linear and rotary cryocoolers which successfully achieve this goal. Cryocooler MTTF was analyzed by theoretical reliability evaluation methods, demonstrated by normal and accelerated life tests at the cryocooler level, and finally verified by field data analysis derived from cryocoolers operating at the system level. The following paper reviews theoretical reliability analysis methods together with reliability test results derived from standard and accelerated life demonstration tests performed at RICOR's advanced reliability laboratory. As a summary of the work process, reliability verification data from fielded systems are presented as feedback.
Verification of Triple Modular Redundancy (TMR) Insertion for Reliable and Trusted Systems
NASA Technical Reports Server (NTRS)
Berg, Melanie; LaBel, Kenneth A.
2016-01-01
We propose a method for TMR insertion verification that satisfies the process for reliable and trusted systems. If a system is expected to be protected using TMR, improper insertion can jeopardize the reliability and security of the system. Due to the complexity of the verification process, there are currently no available techniques that can provide complete and reliable confirmation of TMR insertion. This manuscript addresses the challenge of confirming that TMR has been inserted without corruption of functionality and with correct application of the expected TMR topology. The proposed verification method combines the use of existing formal analysis tools with a novel search-detect-and-verify tool. Keywords: Field Programmable Gate Array (FPGA), Triple Modular Redundancy (TMR), Verification, Trust, Reliability.
Reliability modeling of fault-tolerant computer based systems
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1987-01-01
Digital fault-tolerant computer-based systems have become commonplace in military and commercial avionics. These systems hold the promise of increased availability, reliability, and maintainability over conventional analog-based systems through the application of replicated digital computers arranged in fault-tolerant configurations. Three tightly coupled factors of paramount importance, ultimately determining the viability of these systems, are reliability, safety, and profitability. Reliability, the major driver, affects virtually every aspect of design, packaging, and field operations, and eventually produces profit for commercial applications or increased national security. However, the utilization of digital computer systems makes the task of producing credible reliability assessments a formidable one for the reliability engineer. The root of the problem lies in the digital computer's unique adaptability to changing requirements, computational power, and ability to test itself efficiently. Addressed here are the nuances of modeling the reliability of systems with large state sizes, in the Markov sense, which result from replicated redundant hardware, and the modeling of factors which can reduce reliability without a concomitant depletion of hardware. Advanced fault-handling models are described, and methods of acquiring and measuring parameters for these models are delineated.
Redundancy management of inertial systems.
NASA Technical Reports Server (NTRS)
Mckern, R. A.; Musoff, H.
1973-01-01
The paper reviews developments in failure detection and isolation techniques applicable to gimballed and strapdown systems. It examines basic redundancy management goals of improved reliability, performance and logistic costs, and explores mechanizations available for both input and output data handling. The meaning of redundant system reliability in terms of available coverage, system MTBF, and mission time is presented and the practical hardware performance limitations of failure detection and isolation techniques are explored. Simulation results are presented illustrating implementation coverages attainable considering IMU performance models and mission detection threshold requirements. The implications of a complete GN&C redundancy management method on inertial techniques are also explored.
Expert system for UNIX system reliability and availability enhancement
NASA Astrophysics Data System (ADS)
Xu, Catherine Q.
1993-02-01
Highly reliable and available systems are critical to the airline industry. However, most off-the-shelf computer operating systems and hardware do not have built-in fault-tolerant mechanisms; the UNIX workstation is one example. In this research effort, we have developed a rule-based Expert System (ES) to monitor, command, and control a UNIX workstation system with hot-standby redundancy. The ES on each workstation acts as an on-line system administrator to diagnose, report, correct, and prevent certain types of hardware and software failures. If a primary station is approaching failure, the ES coordinates the switch-over to a hot-standby secondary workstation. The goal is to discover and solve certain fatal problems early enough to prevent complete system failure from occurring and therefore to enhance system reliability and availability. Test results show that the ES can diagnose all targeted faulty scenarios and take desired actions in a consistent manner regardless of the sequence of the faults. The ES can perform designated system administration tasks about ten times faster than an experienced human operator. Compared with a single workstation system, our hot-standby redundancy system's downtime is predicted to be reduced by more than 50 percent by using the ES to command and control the system.
Expert System for UNIX System Reliability and Availability Enhancement
NASA Technical Reports Server (NTRS)
Xu, Catherine Q.
1993-01-01
Highly reliable and available systems are critical to the airline industry. However, most off-the-shelf computer operating systems and hardware do not have built-in fault-tolerant mechanisms; the UNIX workstation is one example. In this research effort, we have developed a rule-based Expert System (ES) to monitor, command, and control a UNIX workstation system with hot-standby redundancy. The ES on each workstation acts as an on-line system administrator to diagnose, report, correct, and prevent certain types of hardware and software failures. If a primary station is approaching failure, the ES coordinates the switch-over to a hot-standby secondary workstation. The goal is to discover and solve certain fatal problems early enough to prevent complete system failure from occurring and therefore to enhance system reliability and availability. Test results show that the ES can diagnose all targeted faulty scenarios and take desired actions in a consistent manner regardless of the sequence of the faults. The ES can perform designated system administration tasks about ten times faster than an experienced human operator. Compared with a single workstation system, our hot-standby redundancy system's downtime is predicted to be reduced by more than 50 percent by using the ES to command and control the system.
Reliability culture at La Silla Paranal Observatory
NASA Astrophysics Data System (ADS)
Gonzalez, Sergio
2010-07-01
The Maintenance Department at the La Silla Paranal Observatory has been an important foundation for keeping observatory operations at a good level of reliability and availability. Several strategies have been implemented and improved in order to meet these requirements and keep systems and equipment working properly when required. For that reason, one of the latest improvements has been the introduction of the concept of reliability, which involves much more than simply speaking about reliability concepts. It involves the use of technologies, data collection, data analysis, decision making, committees focused on analyzing failure modes and how they can be eliminated, aligning the results with the requirements of our internal partners, and establishing steps to achieve success. Some of these steps have already been implemented: data collection, use of technologies, analysis of data, development of priority tools, committees dedicated to analyzing data, and people dedicated to reliability analysis. This has permitted us to optimize our processes, analyze where we can improve, avoid functional failures, and reduce the range of failures in several systems and subsystems; all this has had a positive impact in terms of results for our Observatory. All these tools are part of the reliability culture that allows our system to operate with a high level of reliability and availability.
Issues and Methods for Assessing COTS Reliability, Maintainability, and Availability
NASA Technical Reports Server (NTRS)
Schneidewind, Norman F.; Nikora, Allen P.
1998-01-01
Many vendors produce products that are not domain specific (e.g., network server) and have limited functionality (e.g., mobile phone). In contrast, many customers of COTS develop systems that are domain specific (e.g., target tracking system) and have great variability in functionality (e.g., corporate information system). This discussion takes the viewpoint of how the customer can ensure the quality of COTS components. In evaluating the benefits and costs of using COTS, we must consider the environment in which COTS will operate. Thus we must distinguish between using a non-mission critical application like a spreadsheet program to produce a budget and a mission critical application like military strategic and tactical operations. Whereas customers will tolerate an occasional bug in the former, zero tolerance is the rule in the latter. We emphasize the latter because this is the arena where there are major unresolved problems in the application of COTS. Furthermore, COTS components may be embedded in the larger customer system. We refer to these as embedded systems. These components must be reliable, maintainable, and available, and must be compatible with the larger system in order for the customer to benefit from the advertised advantages of lower development and maintenance costs. Interestingly, when the claims of COTS advantages are closely examined, one finds that to a great extent these COTS components consist of hardware and office products, not mission critical software [1]. Obviously, COTS components are different from custom components with respect to one or more of the following attributes: source, development paradigm, safety, reliability, maintainability, availability, security, and other attributes. However, the important question is whether they should be treated differently when deciding to deploy them for operational use; we suggest the answer is no. We use reliability as an example to justify our answer. In order to demonstrate its reliability, a COTS component must pass the same reliability evaluations as the custom components; otherwise the COTS components will be the weakest link in the chain of components and will be the determinant of software system reliability. The challenge is that there will be less information available for evaluating COTS components than for custom components, but this does not mean we should despair and do nothing. Actually, there is a lot we can do even in the absence of documentation on COTS components because the customer will have information about how COTS components are to be used in the larger system. To illustrate our approach, we will consider the reliability, maintainability, and availability (RMA) of COTS components as used in larger systems. Finally, COTS suppliers might consider increasing visibility into their products to assist customers in determining the components' fitness for use in a particular application. We offer ideas of information that would be useful to customers, and what vendors might do to provide it.
Hawaii electric system reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva Monroy, Cesar Augusto; Loose, Verne William
2012-09-01
This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.
Hawaii Electric System Reliability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loose, Verne William; Silva Monroy, Cesar Augusto
2012-08-01
This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.
System reliability analysis through corona testing
NASA Technical Reports Server (NTRS)
Lalli, V. R.; Mueller, L. A.; Koutnik, E. A.
1975-01-01
In the Reliability and Quality Engineering Test Laboratory at the NASA Lewis Research Center a nondestructive, corona-vacuum test facility for testing power system components was developed using commercially available hardware. The test facility was developed to simulate operating temperature and vacuum while monitoring corona discharges with residual gases. This facility is being used to test various high voltage power system components.
NASA Astrophysics Data System (ADS)
Siddiqi, A.; Muhammad, A.; Wescoat, J. L., Jr.
2017-12-01
Large-scale, legacy canal systems, such as the irrigation infrastructure in the Indus Basin in Punjab, Pakistan, have been primarily conceived, constructed, and operated with a techno-centric approach. The emerging socio-hydrological approaches provide a new lens for studying such systems to potentially identify fresh insights for addressing contemporary challenges of water security. In this work, using the partial definition of water security as "the reliable availability of an acceptable quantity and quality of water", supply reliability is construed as a partial measure of water security in irrigation systems. A set of metrics is used to quantitatively study the reliability of surface supply in the canal systems of Punjab, Pakistan using an extensive dataset of 10-daily surface water deliveries over a decade (2007-2016) and of high frequency (10-minute) flow measurements over one year. The reliability quantification is based on comparison of actual deliveries and entitlements, which are a combination of hydrological and social constructs. The socio-hydrological lens highlights critical issues of how flows are measured, monitored, perceived, and experienced from the perspective of operators (government officials) and users (farmers). The analysis reveals varying levels of reliability (and by extension security) of supply when data is examined across multiple temporal and spatial scales. The results shed new light on the evolution of water security (as partially measured by supply reliability) for surface irrigation in the Punjab province of Pakistan and demonstrate that "information security" (defined as reliable availability of sufficiently detailed data) is vital for enabling water security. It is found that forecasting and management (which are social processes) lead to differences between entitlements and actual deliveries, and there is significant potential to positively affect supply reliability through interventions in the social realm.
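As a small illustration of a supply-reliability metric of the kind described above, and not the paper's exact definition or data, the sketch below computes the fraction of 10-daily periods in which the actual delivery reaches at least a chosen share of the entitlement; the volumes and tolerance are hypothetical.

    # Illustrative reliability-of-supply metric (hypothetical numbers).
    deliveries   = [420, 380, 510, 0, 450, 390]     # hypothetical 10-daily delivered volumes
    entitlements = [450, 400, 500, 400, 450, 400]   # corresponding entitlements
    tolerance = 0.90                                # a period counts as reliable at >= 90% of entitlement

    reliable_periods = sum(1 for d, e in zip(deliveries, entitlements) if d >= tolerance * e)
    reliability = reliable_periods / len(deliveries)
    print(f"Supply reliability over {len(deliveries)} periods: {reliability:.2f}")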
Reliability and Maintainability Data for Lead Lithium Cooling Systems
Cadwallader, Lee
2016-11-16
This article presents component failure rate data for use in assessment of lead lithium cooling systems. Best estimate data applicable to this liquid metal coolant is presented. Repair times for similar components are also referenced in this work. These data support probabilistic safety assessment and reliability, availability, maintainability and inspectability analyses.
User-Perceived Reliability of M-for-N (M: N) Shared Protection Systems
NASA Astrophysics Data System (ADS)
Ozaki, Hirokazu; Kara, Atsushi; Cheng, Zixue
In this paper we investigate the reliability of general-type shared protection systems, i.e., M-for-N (M:N), that can typically be applied to various telecommunication network devices. We focus on the reliability that is perceived by an end user of one of the N units. We assume that any failed unit is instantly replaced by one of the M units (if available). We describe the effectiveness of such a protection system in a quantitative manner. The mathematical analysis gives the closed-form solution of the availability, and a recursive computing algorithm for the MTTFF (Mean Time to First Failure) and the MTTF (Mean Time to Failure) perceived by an arbitrary end user. We also show that, under a certain condition, the probability distribution of the TTFF (Time to First Failure) can be approximated by a simple exponential distribution. The analysis provides useful information for the analysis and the design of not only telecommunication network devices but also other general shared protection systems that are subject to service level agreements (SLA) involving user-perceived reliability measures.
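The paper derives exact closed-form and recursive results; as a simpler, hedged illustration (not the authors' derivation), the sketch below computes the steady-state probability that at least N of the N+M units are up, assuming every unit fails at rate lam and is repaired independently at rate mu, so that each unit is independently down with probability lam/(lam+mu).

    from math import comb

    def shared_protection_availability(n_active, m_spares, lam, mu):
        # P(at least n_active of n_active + m_spares units up) in steady state,
        # assuming independent exponential failures (lam) and repairs (mu).
        total = n_active + m_spares
        p_down = lam / (lam + mu)                  # per-unit steady-state unavailability
        return sum(comb(total, k) * p_down**k * (1.0 - p_down)**(total - k)
                   for k in range(m_spares + 1))   # at most m_spares units failed

    print(shared_protection_availability(n_active=8, m_spares=2, lam=1e-4, mu=1e-2))

Note that this system-level figure is not identical to the user-perceived availability analyzed in the article, which tracks a specific end user's unit.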
A Best Practice for Developing Availability Guarantee Language in Photovoltaic (PV) O&M Agreements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klise, Geoffrey Taylor; Balfour, John
This document outlines the foundation for developing language that can be utilized in an Equipment Availability Guarantee, typically included in an O&M services agreement between a PV system or plant owner and an O&M services provider, or operator. Many of the current PV O&M service agreement Availability Guarantees are based on contracts used for traditional power generation, which create challenges for owners and operators due to the variable nature of grid-tied photovoltaic generating technologies. This report documents language used in early PV availability guarantees and presents best practices and equations that can be used to more openly communicate how the reliability of the PV system and plant equipment can be expressed in an availability guarantee. This work will improve the bankability of PV systems by providing greater transparency into the equipment reliability state to all parties involved in an O&M services contract.
The Case for Modular Redundancy in Large-Scale High Performance Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L
2009-01-01
Recent investigations into resilience of large-scale high-performance computing (HPC) systems showed a continuous trend of decreasing reliability and availability. Newly installed systems have a lower mean-time to failure (MTTF) and a higher mean-time to recover (MTTR) than their predecessors. Modular redundancy is being used in many mission critical systems today to provide for resilience, such as for aerospace and command & control systems. The primary argument against modular redundancy for resilience in HPC has always been that the capability of a HPC system, and respective return on investment, would be significantly reduced. We argue that modular redundancy can significantly increase compute node availability as it removes the impact of scale from single compute node MTTR. We further argue that single compute nodes can be much less reliable, and therefore less expensive, and still be highly available, if their MTTR/MTTF ratio is maintained.
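The argument above, that duplexing nodes keeps node availability high even when individual nodes are less reliable, can be sketched with back-of-the-envelope arithmetic under an independence assumption; the MTTF/MTTR values below are illustrative, not measurements from any installed system.

    def availability(mttf, mttr):
        # steady-state availability of a repairable unit
        return mttf / (mttf + mttr)

    n_nodes = 10_000
    mttf, mttr = 5_000.0, 24.0                     # hours, illustrative single-node values

    a_node = availability(mttf, mttr)
    a_system_simplex = a_node ** n_nodes           # every node must be up
    a_node_duplex = 1.0 - (1.0 - a_node) ** 2      # node pair with independent repair
    a_system_duplex = a_node_duplex ** n_nodes

    print(f"single node:              {a_node:.5f}")
    print(f"10k nodes, no redundancy: {a_system_simplex:.3e}")
    print(f"10k nodes, duplex nodes:  {a_system_duplex:.5f}")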
NASA Astrophysics Data System (ADS)
Bogachkov, I. V.; Lutchenko, S. S.
2018-05-01
The article deals with a method for the assessment of fiber optic communication line (FOCL) reliability, taking into account the effect of optical fiber tension, the temperature influence, and errors of the first kind of the built-in diagnostic equipment. The reliability is assessed in terms of the availability factor using the theory of Markov chains and probabilistic mathematical modeling. To obtain a mathematical model, the following steps are performed: the FOCL states are defined and validated; the state graph and system transitions are described; the system transitions between states occurring at a certain point are specified; and the real and the observed times of system presence in the considered states are identified. According to the permissible value of the availability factor, it is possible to determine the limiting frequency of FOCL maintenance.
Availability Estimation for Facilities in Extreme Geographical Locations
NASA Technical Reports Server (NTRS)
Fischer, Gerd M.; Omotoso, Oluseun; Chen, Guangming; Evans, John W.
2012-01-01
A value-added analysis for the Reliability, Availability and Maintainability of the McMurdo Ground Station was developed, which will be a useful tool for system managers in sparing, maintenance planning and determining vital performance metrics needed for readiness assessment of the upgrades to the McMurdo System. Output of this study can also be used as inputs and recommendations for the application of Reliability Centered Maintenance (RCM) for the system. ReliaSoft's BlockSim, a commercial reliability analysis software package, has been used to model the availability of the system upgrade to the National Aeronautics and Space Administration (NASA) Near Earth Network (NEN) Ground Station at McMurdo Station in Antarctica. The logistics challenges due to the closure of access to McMurdo Station during the Antarctic winter were modeled using a weighted composite of four Weibull distributions, one of the possible choices for statistical distributions in the software program and usually used to account for failure rates of components supplied by different manufacturers. The inaccessibility of the antenna site on a hill outside McMurdo Station throughout one year due to severe weather was modeled with a Weibull distribution for the repair crew availability. The Weibull distribution is based on an analysis of the available weather data for the antenna site for 2007 in combination with the rules for travel restrictions due to severe weather imposed by the administering agency, the National Science Foundation (NSF). The simulations resulted in an upper bound for the system availability and allowed for identification of components that would improve availability based on a higher on-site spare count than initially planned.
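The weighted composite of Weibull distributions used above for weather- and logistics-driven repair delays can be illustrated with a short sampling sketch; the weights, shapes, and scales below are placeholders rather than the study's fitted 2007 values.

    import random

    # Hypothetical Weibull mixture for repair-crew access delay, in hours.
    # Each component: (weight, shape, scale); weights sum to 1.
    mixture = [(0.6, 1.2,  12.0),    # routine conditions: short delays
               (0.3, 1.5,  72.0),    # storms: multi-day delays
               (0.1, 0.9, 400.0)]    # winter access restrictions: very long delays

    def sample_delay(rng):
        r, acc = rng.random(), 0.0
        for weight, shape, scale in mixture:
            acc += weight
            if r <= acc:
                return rng.weibullvariate(scale, shape)   # alpha = scale, beta = shape
        return rng.weibullvariate(mixture[-1][2], mixture[-1][1])

    rng = random.Random(42)
    samples = [sample_delay(rng) for _ in range(100_000)]
    print(f"Mean simulated access delay: {sum(samples) / len(samples):.1f} h")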
A Passive System Reliability Analysis for a Station Blackout
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunett, Acacia; Bucknor, Matthew; Grabaskas, David
2015-05-03
The latest iterations of advanced reactor designs have included increased reliance on passive safety systems to maintain plant integrity during unplanned sequences. While these systems are advantageous in reducing the reliance on human intervention and availability of power, the phenomenological foundations on which these systems are built require a novel approach to a reliability assessment. Passive systems possess the unique ability to fail functionally without failing physically, a result of their explicit dependency on existing boundary conditions that drive their operating mode and capacity. Argonne National Laboratory is performing ongoing analyses that demonstrate various methodologies for the characterization of passive system reliability within a probabilistic framework. Two reliability analysis techniques are utilized in this work. The first approach, the Reliability Method for Passive Systems, provides a mechanistic technique employing deterministic models and conventional static event trees. The second approach, a simulation-based technique, utilizes discrete dynamic event trees to treat time-dependent phenomena during scenario evolution. For this demonstration analysis, both reliability assessment techniques are used to analyze an extended station blackout in a pool-type sodium fast reactor (SFR) coupled with a reactor cavity cooling system (RCCS). This work demonstrates the entire process of a passive system reliability analysis, including identification of important parameters and failure metrics, treatment of uncertainties and analysis of results.
NASA Technical Reports Server (NTRS)
Morehouse, Dennis V.
2006-01-01
In order to perform public risk analyses for vehicles containing Flight Termination Systems (FTS), it is necessary for the analyst to know the reliability of each of the components of the FTS. These systems are typically divided into two segments: a transmitter system and associated equipment, typically in a ground station or on a support aircraft, and a receiver system and associated equipment on the target vehicle. This analysis assesses the reliability of the NASA DFRC flight termination system ground transmitter segment for use in the larger risk analysis and compares the results against two established Department of Defense availability standards for such equipment.
NASA Aerospace Flight Battery Systems Program Update
NASA Technical Reports Server (NTRS)
Manzo, Michelle; ODonnell, Patricia
1997-01-01
The objectives of NASA's Aerospace Flight Battery Systems Program are to: develop, maintain and provide tools for the validation and assessment of aerospace battery technologies; accelerate the readiness of technology advances and provide infusion paths for emerging technologies; provide NASA projects with the required database and validation guidelines for technology selection of hardware and processes relating to aerospace batteries; disseminate validation and assessment tools, quality assurance, reliability, and availability information to the NASA and aerospace battery communities; and ensure that safe, reliable batteries are available for NASA's future missions.
NASA Astrophysics Data System (ADS)
Ozaki, Hirokazu; Kara, Atsushi; Cheng, Zixue
2012-05-01
In this article, we investigate the reliability of M-for-N (M:N) shared protection systems. We focus on the reliability that is perceived by an end user of one of N units. We assume that any failed unit is instantly replaced by one of the M units (if available). We describe the effectiveness of such a protection system in a quantitative manner under the condition that the failed units are not repairable. Mathematical analysis gives the closed-form solution of the reliability and mean time to failure (MTTF). We also analyse several numerical examples of the reliability and MTTF. This result can be applied, for example, to the analysis and design of an integrated circuit consisting of redundant backup components. In such a device, repairing a failed component is unrealistic. The analysis provides useful information for the design for general shared protection systems in which the failed units are not repaired.
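For the non-repairable case above, one standard closed form (assuming all N+M units are powered and fail independently with identical exponential lifetimes, which may be simpler than the article's model) is sketched below for both the reliability at time t and the MTTF.

    from math import comb, exp

    def r_shared(n_active, m_spares, lam, t):
        # P(at least n_active of n_active + m_spares iid exponential(lam) units survive to t)
        total = n_active + m_spares
        p = exp(-lam * t)                        # single-unit survival probability
        return sum(comb(total, j) * p**j * (1.0 - p)**(total - j)
                   for j in range(n_active, total + 1))

    def mttf_shared(n_active, m_spares, lam):
        # iid exponential k-out-of-n result: MTTF = (1/lam) * sum over j = k..n of 1/j
        return sum(1.0 / (j * lam) for j in range(n_active, n_active + m_spares + 1))

    print(r_shared(4, 2, 1e-4, 1000.0), mttf_shared(4, 2, 1e-4))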
ETARA - EVENT TIME AVAILABILITY, RELIABILITY ANALYSIS
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
The ETARA system was written to evaluate the performance of the Space Station Freedom Electrical Power System, but the methodology and software can be modified to simulate any system that can be represented by a block diagram. ETARA is an interactive, menu-driven reliability, availability, and maintainability (RAM) simulation program. Given a Reliability Block Diagram representation of a system, the program simulates the behavior of the system over a specified period of time using Monte Carlo methods to generate block failure and repair times as a function of exponential and/or Weibull distributions. ETARA can calculate availability parameters such as equivalent availability, state availability (percentage of time at a particular output state capability), continuous state duration and number of state occurrences. The program can simulate initial spares allotment and spares replenishment for a resupply cycle. The number of block failures are tabulated both individually and by block type. ETARA also records total downtime, repair time, and time waiting for spares. Maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can also be calculated. The key to using ETARA is the development of a reliability or availability block diagram. The block diagram is a logical graphical illustration depicting the block configuration necessary for a function to be successfully accomplished. Each block can represent a component, a subsystem, or a system. The function attributed to each block is considered for modeling purposes to be either available or unavailable; there are no degraded modes of block performance. A block does not have to represent physically connected hardware in the actual system to be connected in the block diagram. The block needs only to have a role in contributing to an available system function. ETARA can model the RAM characteristics of systems represented by multilayered, nesting block diagrams. There are no restrictions on the number of total blocks or on the number of blocks in a series, parallel, or M-of-N parallel subsystem. In addition, the same block can appear in more than one subsystem if such an arrangement is necessary for an accurate model. ETARA 3.3 is written in APL2 for IBM PC series computers or compatibles running MS-DOS and the APL2 interpreter. Hardware requirements for the APL2 system include 640K of RAM, 2Mb of extended memory, and an 80386 or 80486 processor with an 80x87 math co-processor. The standard distribution medium for this package is a set of two 5.25 inch 360K MS-DOS format diskettes. A sample executable is included. The executable contains licensed material from the APL2 for the IBM PC product which is program property of IBM; Copyright IBM Corporation 1988 - All rights reserved. It is distributed with IBM's permission. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. ETARA was developed in 1990 and last updated in 1991.
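ETARA itself is an APL2 program, but the Monte Carlo idea it implements can be sketched briefly in Python; the sketch below handles only a single repairable block with exponential failure and repair times and omits the Weibull options, spares, and state tabulation, so it is an illustration rather than ETARA code.

    import random

    def simulate_availability(mttf, mttr, horizon, n_runs=2000, seed=1):
        # Monte Carlo estimate of availability for one repairable block.
        rng = random.Random(seed)
        uptime_total = 0.0
        for _ in range(n_runs):
            t = 0.0
            up = 0.0
            while t < horizon:
                ttf = rng.expovariate(1.0 / mttf)     # time to next failure
                up += min(ttf, horizon - t)
                t += ttf
                if t >= horizon:
                    break
                t += rng.expovariate(1.0 / mttr)      # repair time (downtime)
            uptime_total += up
        return uptime_total / (n_runs * horizon)

    # Expected result is close to MTTF / (MTTF + MTTR) = 1000 / 1050.
    print(simulate_availability(mttf=1000.0, mttr=50.0, horizon=8760.0))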
Summary of NASA Aerospace Flight Battery Systems Program activities
NASA Technical Reports Server (NTRS)
Manzo, Michelle; Odonnell, Patricia
1994-01-01
A summary of NASA Aerospace Flight Battery Systems Program Activities is presented. The NASA Aerospace Flight Battery Systems Program represents a unified NASA wide effort with the overall objective of providing NASA with the policy and posture which will increase the safety, performance, and reliability of space power systems. The specific objectives of the program are to: enhance cell/battery safety and reliability; maintain current battery technology; increase fundamental understanding of primary and secondary cells; provide a means to bring forth advanced technology for flight use; assist flight programs in minimizing battery technology related flight risks; and ensure that safe, reliable batteries are available for NASA's future missions.
Remote Energy Monitoring System via Cellular Network
NASA Astrophysics Data System (ADS)
Yunoki, Shoji; Tamaki, Satoshi; Takada, May; Iwaki, Takashi
Recently, improving power saving and cost efficiency by monitoring the operational status of various facilities over a network has gained attention. Wireless networks, especially cellular networks, have advantages in mobility, coverage, and scalability. On the other hand, they have the disadvantage of low reliability due to rapid changes in the available bandwidth. We propose a transmission control scheme based on data priority and instantaneous available bandwidth to realize a highly reliable remote monitoring system over a cellular network. We have developed the proposed monitoring system, evaluated the effectiveness of our scheme, and shown that it reduces the maximum transmission delay of sensor status to one tenth of that of best-effort transmission.
Reliability Driven Space Logistics Demand Analysis
NASA Technical Reports Server (NTRS)
Knezevic, J.
1995-01-01
Accurate selection of the quantity of logistic support resources has a strong influence on mission success, system availability and the cost of ownership. At the same time the accurate prediction of these resources depends on the accurate prediction of the reliability measures of the items involved. This paper presents a method for the advanced and accurate calculation of the reliability measures of complex space systems which are the basis for the determination of the demands for logistics resources needed during the operational life or mission of space systems. The applicability of the method presented is demonstrated through several examples.
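One common way to turn predicted failure rates into a spares demand, in the spirit of the paper though not necessarily its exact method, is to model demand over a resupply interval as Poisson and stock the smallest number of spares that meets a target fill probability; a sketch under those assumptions:

    from math import exp, factorial

    def spares_needed(failure_rate, n_units, interval_hours, target_prob):
        # Smallest spare count s with P(Poisson demand <= s) >= target_prob.
        mean_demand = failure_rate * n_units * interval_hours
        s, cumulative = 0, exp(-mean_demand)
        while cumulative < target_prob:
            s += 1
            cumulative += exp(-mean_demand) * mean_demand**s / factorial(s)
        return s

    # Illustrative numbers: 20 identical units, 1e-4 failures/hour, 6-month resupply cycle.
    print(spares_needed(failure_rate=1e-4, n_units=20, interval_hours=4380, target_prob=0.95))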
Lee, James
2009-01-01
The Long-Term Mechanical Circulatory Support (MCS) System Reliability Recommendation was published in the American Society for Artificial Internal Organs (ASAIO) Journal and the Annals of Thoracic Surgery in 1998. At that time, it was stated that the document would be periodically reviewed to assess its timeliness and appropriateness within 5 years. Given the wealth of clinical experience in MCS systems, a new recommendation has been drafted by consensus of a group of representatives from the medical community, academia, industry, and government. The new recommendation describes a reliability test methodology and provides detailed reliability recommendations. In addition, the new recommendation provides additional information and clinical data in appendices that are intended to assist the reliability test engineer in the development of a reliability test that is expected to give improved predictions of clinical reliability compared with past test methods. The appendices are available for download at the ASAIO journal web site at www.asaiojournal.com.
Comparing the reliability of related populations with the probability of agreement
Stevens, Nathaniel T.; Anderson-Cook, Christine M.
2016-07-26
Combining information from different populations to improve precision, simplify future predictions, or improve underlying understanding of relationships can be advantageous when considering the reliability of several related sets of systems. Using the probability of agreement to help quantify the similarities of populations can help to give a realistic assessment of whether the systems have reliabilities that are sufficiently similar for practical purposes to be treated as a homogeneous population. In addition, the new method is described and illustrated with an example involving two generations of a complex system where the reliability is modeled using either a logistic or probit regression model. Note that supplementary materials including code, datasets, and added discussion are available online.
Comparing the reliability of related populations with the probability of agreement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, Nathaniel T.; Anderson-Cook, Christine M.
Combining information from different populations to improve precision, simplify future predictions, or improve underlying understanding of relationships can be advantageous when considering the reliability of several related sets of systems. Using the probability of agreement to help quantify the similarities of populations can help to give a realistic assessment of whether the systems have reliabilities that are sufficiently similar for practical purposes to be treated as a homogeneous population. In addition, the new method is described and illustrated with an example involving two generations of a complex system where the reliability is modeled using either a logistic or probit regression model. Note that supplementary materials including code, datasets, and added discussion are available online.
Fault diagnostic instrumentation design for environmental control and life support systems
NASA Technical Reports Server (NTRS)
Yang, P. Y.; You, K. C.; Wynveen, R. A.; Powell, J. D., Jr.
1979-01-01
As a development phase moves toward flight hardware, system availability becomes an important design aspect which requires high reliability and maintainability. As part of continuous development efforts, a program to evaluate, design, and demonstrate advanced instrumentation fault diagnostics was successfully completed. Fault tolerance designs for reliability and other instrumentation capabilities to increase maintainability were evaluated and studied.
Object-oriented fault tree evaluation program for quantitative analyses
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1988-01-01
Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor which was modified to display and edit the fault trees.
A Simple and Reliable Method of Design for Standalone Photovoltaic Systems
NASA Astrophysics Data System (ADS)
Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.
2017-06-01
Standalone photovoltaic (SAPV) systems are seen as a promising method of electrifying areas of the developing world that lack power grid infrastructure. Proliferation of these systems requires a design procedure that is simple and reliable and that exhibits good performance over the system's lifetime. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, array size with energy storage. After arriving at the different array sizes (areas), performance curves are obtained for the optimal design of the SAPV system with a high degree of reliability in terms of autonomy at a specified value of loss of load probability (LOLP). Based on the array-to-load ratio (ALR) and levelized energy cost (LEC) through life cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data and is more reliable when compared with a conventional design using monthly average daily load and insolation.
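A loss-of-load-probability check of the kind the paper's performance curves capture can be illustrated with a toy daily energy balance; the generation model, derating, and insolation statistics below are placeholder assumptions and not the authors' empirical formulae.

    import random

    def lolp(array_kw, battery_kwh, daily_load_kwh, n_days=365, seed=7):
        # Loss-of-load probability from a simplified daily energy balance.
        rng = random.Random(seed)
        soc = battery_kwh                                   # start fully charged
        loss_days = 0
        for _ in range(n_days):
            sun_hours = max(0.0, rng.gauss(5.0, 1.8))       # hypothetical daily insolation
            generation = array_kw * sun_hours * 0.8         # assumed 80% derating
            soc = min(battery_kwh, soc + generation)
            if soc >= daily_load_kwh:
                soc -= daily_load_kwh
            else:
                loss_days += 1
                soc = 0.0
        return loss_days / n_days

    for alr_like in (1.0, 1.25, 1.5):                       # crude stand-in for the array-to-load ratio
        print(alr_like, lolp(array_kw=2.0 * alr_like, battery_kwh=10.0, daily_load_kwh=8.0))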
Fast gas spectroscopy using pulsed quantum cascade lasers
NASA Astrophysics Data System (ADS)
Beyer, T.; Braun, M.; Lambrecht, A.
2003-03-01
Laser spectroscopy has found many industrial applications, e.g., control of automotive exhaust and process monitoring. The midinfrared region is of special interest because it has stronger absorption lines compared to the near infrared (NIR). However, in the NIR high quality reliable laser sources, detectors, and passive optical components are available. A quantum cascade laser could change this situation if fundamental advantages can be exploited with compact and reliable systems. It will be shown that, using pulsed lasers and available fast detectors, lower residual sensitivity levels than in corresponding NIR systems can be achieved. The stability is sufficient for industrial applications.
Oak Ridge Leadership Computing Facility Position Paper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oral, H Sarp; Hill, Jason J; Thach, Kevin G
This paper discusses the business, administration, reliability, and usability aspects of storage systems at the Oak Ridge Leadership Computing Facility (OLCF). The OLCF has developed key competencies in architecting and administration of large-scale Lustre deployments as well as HPSS archival systems. Additionally, as these systems are architected, deployed, and expanded over time, reliability and availability factors are a primary driver. This paper focuses on the implementation of the Spider parallel Lustre file system as well as the implementation of the HPSS archive at the OLCF.
Time-Tagged Risk/Reliability Assessment Program for Development and Operation of Space System
NASA Astrophysics Data System (ADS)
Kubota, Yuki; Takegahara, Haruki; Aoyagi, Junichiro
We have investigated a new method of risk/reliability assessment for the development and operation of space systems. It is difficult to evaluate the risk of spacecraft because of long operation times, maintenance-free requirements, and the difficulty of testing under ground conditions. Conventional methods include FMECA, FTA, ETA and others. These are not sufficient to assess chronological anomalies, and sharing information during R&D is problematic. A new method of risk and reliability assessment, T-TRAP (Time-tagged Risk/Reliability Assessment Program), is proposed as a management tool for the development and operation of space systems. T-TRAP, consisting of time-resolved Fault Tree and Criticality Analyses, enables the responsible personnel, upon occurrence of an anomaly in the system, to quickly identify the failure cause and decide on corrective actions. This paper describes the T-TRAP method and its availability.
NASA Technical Reports Server (NTRS)
Stiffler, J. J.; Bryant, L. A.; Guccione, L.
1979-01-01
A computer program to aid in assessing the reliability of fault-tolerant avionics systems was developed. A simple mathematical expression was used to evaluate the reliability of any redundant configuration over any interval during which the failure rates and coverage parameters remained unaffected by configuration changes. Provision was made for convolving such expressions in order to evaluate the reliability of a dual mode system. A coverage model was also developed to determine the various relevant coverage coefficients as a function of the available hardware and software fault detector characteristics, and subsequent isolation and recovery delay statistics.
Toro A, Richard; Campos, Claudia; Molina, Carolina; Morales S, Raul G E; Leiva-Guzmán, Manuel A
2015-09-01
A critical analysis of Chile's National Air Quality Information System (NAQIS) is presented, focusing on particulate matter (PM) measurement. This paper examines the complexity, availability and reliability of monitoring station information, the implementation of control systems, the quality assurance protocols of the monitoring station data and the reliability of the measurement systems in areas highly polluted by particulate matter. From information available on the NAQIS website, it is possible to confirm that the PM2.5 (PM10) data available on the site correspond to 30.8% (69.2%) of the total information available from the monitoring stations. There is a lack of information regarding the measurement systems used to quantify air pollutants, most of the available data registers contain gaps, almost all of the information is categorized as "preliminary information" and neither standard operating procedures (operational and validation) nor assurance audits or quality control of the measurements are reported. In contrast, events that cause saturation of the monitoring detectors located in northern and southern Chile have been observed using beta attenuation monitoring. In these cases, it can only be concluded that the PM content is equal to or greater than the saturation concentration registered by the monitors and that the air quality indexes obtained from these measurements are underestimated. This occurrence has been observed in 12 (20) public and private stations where PM2.5 (PM10) is measured. The shortcomings of the NAQIS data have important repercussions for the conclusions obtained from the data and for how the data are used. However, these issues represent opportunities for improving the system to widen its use, incorporate comparison protocols between equipment, install new stations and standardize the control system and quality assurance. Copyright © 2015 Elsevier Ltd. All rights reserved.
Space Transportation System Availability Requirements and Its Influencing Attributes Relationships
NASA Technical Reports Server (NTRS)
Rhodes, Russel E.; Adams, TImothy C.
2008-01-01
It is essential that management and engineering understand the need for an availability requirement for the customer's space transportation system as it enables the meeting of his needs, goal, and objectives. There are three types of availability, e.g., operational availability, achieved availability, or inherent availability. The basic definition of availability is equal to the mean uptime divided by the sum of the mean uptime plus the mean downtime. The major difference is the inclusiveness of the functions within the mean downtime and the mean uptime. This paper will address tIe inherent availability which only addresses the mean downtime as that mean time to repair or the time to determine the failed article, remove it, install a replacement article and verify the functionality of the repaired system. The definitions of operational availability include the replacement hardware supply or maintenance delays and other non-design factors in the mean downtime. Also with inherent availability the mean uptime will only consider the mean time between failures (other availability definitions consider this as mean time between maintenance - preventive and corrective maintenance) that requires the repair of the system to be functional. It is also essential that management and engineering understand all influencing attributes relationships to each other and to the resultant inherent availability requirement. This visibility will provide the decision makers with the understanding necessary to place constraints on the design definition for the major drivers that will determine the inherent availability, safety, reliability, maintainability, and the life cycle cost of the fielded system provided the customer. This inherent availability requirement may be driven by the need to use a multiple launch approach to placing humans on the moon or the desire to control the number of spare parts required to support long stays in either orbit or on the surface of the moon or mars. It is the intent of this paper to provide the visibility of relationships of these major attribute drivers (variables) to each other and the resultant system inherent availability, but also provide the capability to bound the variables providing engineering the insight required to control the system's engineering solution. An example of this visibility will be the need to provide integration of similar discipline functions to allow control of the total parts count of the space transportation system. Also the relationship visibility of selecting a reliability requirement will place a constraint on parts count to achieve a given inherent availability requirement or accepting a larger parts count with the resulting higher reliability requirement. This paper will provide an understanding for the relationship of mean repair time (mean downtime) to maintainability, e.g., accessibility for repair, and both mean time between failure, e.g., reliability of hardware and the system inherent availability. Having an understanding of these relationships and resulting requirements before starting the architectural design concept definition will avoid considerable time and money required to iterate the design to meet the redesign and assessment process required to achieve the results required of the customer's space transportation system. 
In fact, the schedule impact on being able to deliver a system that meets the customer's needs, goals, and objectives may cause the customer to compromise the desired operational goals and objectives, resulting in considerably increased life cycle cost of the fielded space transportation system.
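As a brief illustration of the availability relationship described above, the following minimal Python sketch computes inherent availability from mean uptime (MTBF) and mean downtime (MTTR) and shows how a growing parts count, under a simple series constant-failure-rate assumption, pulls the system MTBF and hence the availability down. All numbers are hypothetical and are not taken from the paper.
```python
# Illustrative sketch only: inherent availability A_i = mean uptime / (mean uptime + mean downtime),
# with mean uptime taken as MTBF and mean downtime as MTTR. Values are hypothetical.

def inherent_availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Inherent availability from mean time between failures and mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Assume identical parts with a 2,000,000-hour MTBF in series, so the system MTBF
# scales inversely with the parts count (a simplifying assumption, not the paper's data).
for parts in (1_000, 5_000, 10_000):
    system_mtbf = 2.0e6 / parts
    a_i = inherent_availability(system_mtbf, mttr_hours=24.0)
    print(f"parts count {parts:6d}: system MTBF {system_mtbf:7.1f} h, inherent availability {a_i:.4f}")
```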
Technology Overview for Advanced Aircraft Armament System Program.
1981-05-01
availability of methods or systems for improving stores and armament safety. Of particular importance are aspects of safety involving hazards analysis ... flutter virtually insensitive to inertia and center-of-gravity location of store - Simplifies and reduces analysis and testing required to flutter-clear ... status. Nearly every existing reliability analysis and discipline that promised a positive return on reliability performance was drawn out, dusted
1983-10-05
battle damage. Others are local electrical power and cooling disruptions. Again, a highly critical function is lost if its computer site is destroyed. A ... formalized design of the test bed to meet the requirements of the functional description and goals of the program. AMTEC ... TASKS: 610, 710, 810
Malt, U F
1986-01-01
Experiences from teaching DSM-III to more than three hundred Norwegian psychiatrists and clinical psychologists suggest that reliable DSM-III diagnoses can be achieved within a few hours of training with reference to the decision trees and the diagnostic criteria only. The diagnoses provided are more reliable than the corresponding ICD diagnoses, with which the participants were more familiar. The three main sources of reduced reliability of the DSM-III diagnoses are: poor knowledge of the criteria, which is often connected with failure to obtain key diagnostic information during the clinical interview; unfamiliar concepts; and vague or ambiguous criteria. The first two issues are related to the quality of the teaching of DSM-III. The third source of reduced reliability reflects unsolved validity issues. Using the classification of five affective case stories as examples, these sources of diagnostic pitfalls, the ways they reduce reliability, and ways to overcome these problems when teaching the DSM-III system are discussed. It is concluded that the DSM-III system of classification is easy to teach and that the system is superior to other available classification systems from a reliability point of view. The current version of the DSM-III system, however, partly owes its high degree of reliability to broad and heterogeneous diagnostic categories, such as the concept of major depression, which may have questionable validity. Thus, future revisions of the DSM-III system should, above all, address the issue of validity.
Reliability and economy -- Hydro electricity for Iran
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jahromi-Shirazi, M.J.; Zarbakhsh, M.H.
1998-12-31
Reliability is the probability that a device or system will perform its function adequately, for the period of time intended, under the operating conditions intended. Reliability and economy are two important factors in operating any system, especially in power generation. Due to the high rate of population growth in Iran, experts have estimated that the demand for electricity will be about 63,000 MW in the next 25 years, while the installed capacity is now about 26,000 MW. Therefore, the energy policy decision made in Iran is to pursue power generation by hydroelectric plants because of their reliability, the availability of water resources, and the economics of hydroelectric power.
Is the medical justice system broken?
Howard, Philip K
2003-09-01
The current lawsuit culture is creating a crisis in US health care. The broad perception that anyone can sue for almost anything has fundamentally altered the practice of medicine, eroding the quality and availability of health care. Current reform proposals to "cap" one category of damages are not nearly ambitious enough. Providing relief to doctors squeezed by insurance premiums is important but will not heal the deep distrust that skews daily decisions, nor will it provide incentives to overhaul outdated practices. The United States needs an entirely new system of medical justice. Its first goal is to be reliable: reliable in protecting patients against bad practices, reliable in protecting physicians who act reasonably, and reliable in interpreting standards of care.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, A.D.; Ayoub, A.K.; Singh, C.
1982-07-01
Existing methods for generating capacity reliability evaluation do not explicitly recognize a number of operating considerations which may have important effects on system reliability performance. Thus, current methods may yield estimates of system reliability which differ appreciably from actual observed reliability. Further, current methods offer no means of accurately studying or evaluating alternatives which may differ in one or more operating considerations. Operating considerations which are considered important in generating capacity reliability evaluation include: unit duty cycles as influenced by load cycle shape, reliability performance of other units, unit commitment policy, and operating reserve policy; unit start-up failures distinct from unit running failures; unit start-up times; and unit outage postponability and the management of postponable outages. A detailed Monte Carlo simulation computer model called GENESIS and two analytical models called OPCON and OPPLAN have been developed which are capable of incorporating the effects of many operating considerations, including those noted above. These computer models have been used to study a variety of actual and synthetic systems and are available from EPRI. The new models are shown to produce system reliability indices which differ appreciably from index values computed using traditional models which do not recognize operating considerations.
Reliability Impacts in Life Support Architecture and Technology Selection
NASA Technical Reports Server (NTRS)
Lange, Kevin E.; Anderson, Molly S.
2011-01-01
Equivalent System Mass (ESM) and reliability estimates were performed for different life support architectures based primarily on International Space Station (ISS) technologies. The analysis was applied to a hypothetical 1-year deep-space mission. High-level fault trees were initially developed relating loss of life support functionality to the Loss of Crew (LOC) top event. System reliability was then expressed as the complement (nonoccurrence) of this event and was increased through the addition of redundancy and spares, which added to the ESM. The reliability analysis assumed constant failure rates and used current projected values of the Mean Time Between Failures (MTBF) from an ISS database where available. Results were obtained showing the dependence of ESM on system reliability for each architecture. Although the analysis employed numerous simplifications and many of the input parameters are considered to have high uncertainty, the results strongly suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support system mass. As a point of reference, the reliability for a single-string architecture using the most regenerative combination of ISS technologies without unscheduled replacement spares was estimated to be less than 1%. The results also demonstrate how adding technologies in a serial manner to increase system closure forces the reliability of other life support technologies to increase in order to meet the system reliability requirement. This increase in reliability results in increased mass for multiple technologies through the need for additional spares. Alternative parallel architecture approaches and approaches with the potential to do more with less are discussed. The tall poles in life support ESM are also reexamined in light of estimated reliability impacts.
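The dependence of mission reliability on MTBF and spares described above can be sketched with a constant-failure-rate model. The following Python snippet is a hypothetical illustration (the assembly names, MTBF values, and spares counts are invented, not ISS database values): single-string reliability is the product of exponential survival probabilities, and spares are credited with a simple Poisson model that counts a function as lost only when failures exceed the available spares.
```python
import math

MISSION_HOURS = 8760.0  # hypothetical 1-year deep-space mission

def reliability(mtbf_hours: float, t: float = MISSION_HOURS) -> float:
    """Constant-failure-rate survival probability R(t) = exp(-t / MTBF)."""
    return math.exp(-t / mtbf_hours)

def with_spares(mtbf_hours: float, n_spares: int, t: float = MISSION_HOURS) -> float:
    """Poisson sparing model: the function survives if failures do not exceed spares."""
    lam = t / mtbf_hours
    return sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(n_spares + 1))

# Invented assembly MTBFs in hours (placeholders, not ISS data).
assemblies = {"CO2 removal": 20_000, "O2 generation": 15_000, "water processor": 10_000}

single_string = math.prod(reliability(m) for m in assemblies.values())
spared = math.prod(with_spares(m, n_spares=2) for m in assemblies.values())
print(f"single string: {single_string:.3f}, with 2 spares per assembly: {spared:.3f}")
```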
Designing for Reliability and Robustness
NASA Technical Reports Server (NTRS)
Svetlik, Randall G.; Moore, Cherice; Williams, Antony
2017-01-01
Long duration spaceflight has a negative effect on the human body, and exercise countermeasures are used on board the International Space Station (ISS) to minimize bone and muscle loss and combat these effects. Given the importance of these hardware systems to the health of the crew, this equipment must remain readily available. Designing spaceflight exercise hardware to meet high reliability and availability standards has proven challenging throughout the period crewmembers have been living on the ISS, beginning in 2000. Furthermore, restoring operational capability after a failure is clearly time-critical, but can be problematic given the challenges of troubleshooting the problem from 220 miles away. Several best practices have been leveraged to maximize the availability of these exercise systems, including designing for robustness, implementing diagnostic instrumentation, relying on user feedback, and providing ample maintenance and sparing. These factors have enhanced the reliability of the hardware systems and therefore have contributed to keeping the crewmembers healthy upon return to Earth. This paper reviews the failure history for three spaceflight exercise countermeasure systems, identifying lessons learned that can help improve future systems. Specifically, the Treadmill with Vibration Isolation and Stabilization System (TVIS), the Cycle Ergometer with Vibration Isolation and Stabilization System (CEVIS), and the Advanced Resistive Exercise Device (ARED) are reviewed and analyzed, and conclusions are identified so as to provide guidance for improving future exercise hardware designs. These lessons learned, paired with thorough testing, offer a path toward reduced system downtime.
High-reliability gas-turbine combined-cycle development program: Phase II. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hecht, K.G.; Sanderson, R.A.; Smith, M.J.
This three-volume report presents the results of Phase II of the multiphase EPRI-sponsored High-Reliability Gas Turbine Combined-Cycle Development Program, whose goal is to achieve a highly reliable gas turbine combined-cycle power plant, available by the mid-1980s, that would be an economically attractive baseload generation alternative for the electric utility industry. The Phase II program objective was to prepare the preliminary design of this power plant. This volume presents information on the reliability, availability, and maintainability (RAM) analysis of a representative plant and the preliminary design of the gas turbine, the gas turbine ancillaries, and the balance of plant, including the steam turbine generator. To achieve the program goals, a gas turbine was incorporated which combined proven reliability characteristics with improved performance features. This gas turbine, designated the V84.3, is the result of a cooperative effort between Kraftwerk Union AG and United Technologies Corporation. Gas turbines of similar design operating in Europe under baseload conditions have demonstrated a mean time between failures in excess of 40,000 hours. The reliability characteristics of the gas turbine ancillaries and balance-of-plant equipment were improved through system simplification and component redundancy and by selection of components with inherently high reliability. A digital control system was included with logic, communications, sensor redundancy, and manual backup. An independent condition monitoring and diagnostic system was also included. Program results provide the preliminary design of a gas turbine combined-cycle baseload power plant. This power plant has a predicted mean time between failures of nearly twice the 3000-hour EPRI goal. The cost of the added reliability features is offset by improved performance, which results in a comparable specific cost and an 8% lower cost of electricity compared to present market offerings.
Reliability of intracerebral hemorrhage classification systems: A systematic review.
Rannikmäe, Kristiina; Woodfield, Rebecca; Anderson, Craig S; Charidimou, Andreas; Chiewvit, Pipat; Greenberg, Steven M; Jeng, Jiann-Shing; Meretoja, Atte; Palm, Frederic; Putaala, Jukka; Rinkel, Gabriel Je; Rosand, Jonathan; Rost, Natalia S; Strbian, Daniel; Tatlisumak, Turgut; Tsai, Chung-Fen; Wermer, Marieke Jh; Werring, David; Yeh, Shin-Joe; Al-Shahi Salman, Rustam; Sudlow, Cathie Lm
2016-08-01
Accurately distinguishing non-traumatic intracerebral hemorrhage (ICH) subtypes is important since they may have different risk factors, causal pathways, management, and prognosis. We systematically assessed the inter- and intra-rater reliability of ICH classification systems. We sought all available reliability assessments of anatomical and mechanistic ICH classification systems from electronic databases and personal contacts until October 2014. We assessed included studies' characteristics, reporting quality and potential for bias; summarized reliability with kappa value forest plots; and performed meta-analyses of the proportion of cases classified into each subtype. We included 8 of 2152 studies identified. Inter- and intra-rater reliabilities were substantial to perfect for anatomical and mechanistic systems (inter-rater kappa values: anatomical 0.78-0.97 [six studies, 518 cases], mechanistic 0.89-0.93 [three studies, 510 cases]; intra-rater kappas: anatomical 0.80-1 [three studies, 137 cases], mechanistic 0.92-0.93 [two studies, 368 cases]). Reporting quality varied but no study fulfilled all criteria and none was free from potential bias. All reliability studies were performed with experienced raters in specialist centers. Proportions of ICH subtypes were largely consistent with previous reports suggesting that included studies are appropriately representative. Reliability of existing classification systems appears excellent but is unknown outside specialist centers with experienced raters. Future reliability comparisons should be facilitated by studies following recently published reporting guidelines. © 2016 World Stroke Organization.
Graphical workstation capability for reliability modeling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.
1992-01-01
In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the graphical kernel system (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
Komal
2018-05-01
Power consumption is increasing day by day. To meet the requirement for failure-free power, planning and implementation of an effective and reliable power management system is essential. The phasor measurement unit (PMU) is one of the key devices in wide-area measurement and control systems, and its reliable performance helps assure failure-free power supply for any power system. The purpose of the present study is therefore to analyse the reliability of a PMU used for controllability and observability of power systems utilizing available uncertain data. In this paper, a generalized fuzzy lambda-tau (GFLT) technique has been proposed for this purpose. In GFLT, the system components' uncertain failure and repair rates are fuzzified using fuzzy numbers of different shapes, such as triangular, normal, Cauchy, sharp gamma and trapezoidal. To select a suitable fuzzy number for quantifying data uncertainty, system experts' opinions have been considered. The GFLT technique applies a fault tree, the lambda-tau method, data fuzzified using different membership functions, and alpha-cut-based fuzzy arithmetic operations to compute some important reliability indices. Furthermore, ranking of the system's critical components using the RAM-Index and a sensitivity analysis have also been performed. The developed technique may help improve system performance significantly and can be applied to analyse the fuzzy reliability of other engineering systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
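The alpha-cut arithmetic at the heart of the lambda-tau approach can be illustrated with a single repairable component whose failure and repair rates are triangular fuzzy numbers. The sketch below is a simplified, hypothetical example (the rates and the steady-state availability formula mu/(lambda + mu) are assumptions for illustration, not the paper's GFLT formulation).
```python
# Minimal alpha-cut sketch for triangular fuzzy failure/repair rates (hypothetical values).

def alpha_cut(tri, alpha):
    """Interval [lo, hi] of a triangular fuzzy number (a, m, b) at membership level alpha."""
    a, m, b = tri
    return a + alpha * (m - a), b - alpha * (b - m)

fuzzy_lambda = (0.8e-4, 1.0e-4, 1.2e-4)  # failure rate per hour
fuzzy_mu = (0.04, 0.05, 0.06)            # repair rate per hour

for alpha in (0.0, 0.5, 1.0):
    (l_lo, l_hi), (m_lo, m_hi) = alpha_cut(fuzzy_lambda, alpha), alpha_cut(fuzzy_mu, alpha)
    # Steady-state availability mu / (lambda + mu) increases in mu and decreases in lambda,
    # so the interval bounds come from opposite ends of the two cuts.
    avail_lo = m_lo / (l_hi + m_lo)
    avail_hi = m_hi / (l_lo + m_hi)
    print(f"alpha = {alpha:.1f}: availability in [{avail_lo:.5f}, {avail_hi:.5f}]")
```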
Automated Sneak Circuit Analysis Technique
1990-06-01
the OrCAD/SDT module Port facility. 2. The terminals of all in-circuit voltage sources (e.g., batteries) must be labeled using the OrCAD/SDT module port ... Automated Sneak Circuit Analysis Technique, RADC, June 1990, Systems Reliability & Engineering Division, Rome Air Development Center.
Space Transportation System Availability Requirement and Its Influencing Attributes Relationships
NASA Technical Reports Server (NTRS)
Rhodes, Russell E.; Adams, Timothy C.; McCleskey, Carey M.
2008-01-01
It is important that engineering and management accept the need for an availability requirement that is derived together with its influencing attributes. The intent of this paper is to provide visibility of the relationships of these major attribute drivers (variables) to each other and to the resultant system inherent availability. It is also important to provide bounds on the variables, giving engineering the insight required to control the system's engineering solution; these influencing attributes become design requirements as well. These variables drive the need to integrate similar discipline functions, or to select technology, so as to allow control of the total parts count. Selecting a reliability requirement places a constraint on parts count to achieve a given availability requirement; if the parts count is allowed to increase, the system reliability requirement is driven higher. These relationships also provide an understanding of how mean repair time (or mean downtime) relates to maintainability, e.g., accessibility for repair, and of how mean time between failures, e.g., hardware reliability, relates to availability. The concern for and importance of achieving a strong availability requirement are driven by the need for affordability, by the choice of a two-launch solution for a single space application, or by the need to control the spare parts count needed to support a long stay in either orbit or on the surface of the moon. Understanding the requirements before starting the architectural design concept avoids the considerable time and money required to iterate the design through the redesign and assessment process needed to achieve the results required of the customer's space transportation system. In fact, the schedule impact on being able to deliver a system that meets the customer's needs, goals, and objectives may cause the customer to compromise the desired operational goals and objectives, resulting in considerably increased life cycle cost of the fielded space transportation system.
Solar-Powered Supply Is Light and Reliable
NASA Technical Reports Server (NTRS)
Willis, A. E.; Garrett, H.; Matheney, J.
1982-01-01
DC supply originally intended for use in solar-powered spacecraft propulsion is lightweight and very reliable. Operates from 100-200 volt output of solar panels to produce 11 different dc voltages, with total demand of 3,138 watts. With exception of specially wound inductors and transformers, system uses readily available components.
MOLAR: Modular Linux and Adaptive Runtime Support for HEC OS/R Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank Mueller
2009-02-05
MOLAR is a multi-institution research effort that concentrates on adaptive, reliable, and efficient operating and runtime system solutions for ultra-scale high-end scientific computing on the next generation of supercomputers. This research addresses the challenges outlined by the FAST-OS (forum to address scalable technology for runtime and operating systems) and HECRTF (high-end computing revitalization task force) activities by providing a modular Linux and adaptable runtime support for high-end computing operating and runtime systems. The MOLAR research has the following goals to address these issues. (1) Create a modular and configurable Linux system that allows customized changes based on the requirements of the applications, runtime systems, and cluster management software. (2) Build runtime systems that leverage the OS modularity and configurability to improve efficiency, reliability, scalability, and ease-of-use, and provide support to legacy and promising programming models. (3) Advance computer reliability, availability and serviceability (RAS) management systems to work cooperatively with the OS/R to identify and preemptively resolve system issues. (4) Explore the use of advanced monitoring and adaptation to improve application performance and predictability of system interruptions. The overall goal of the research conducted at NCSU is to develop scalable algorithms for high availability without single points of failure and without single points of control.
Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant
NASA Astrophysics Data System (ADS)
Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.
2015-12-01
This paper deals with the Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. The system was modeled using a Markov birth-death process with the assumption that the failure and repair rates of each subsystem follow exponential distributions. The first-order Chapman-Kolmogorov differential equations are developed with the use of the mnemonic rule, and these equations are solved with the Runge-Kutta fourth-order method. The long-run availability, reliability and mean time between failures are computed for various choices of failure and repair rates of the subsystems of the system. The findings of the paper are discussed with the plant personnel to adopt and practice suitable maintenance policies/strategies to enhance the performance of the urea synthesis system of the fertilizer plant.
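As a toy version of the solution approach described above, the sketch below integrates the Chapman-Kolmogorov equations of a two-state (up/down) Markov model with a hand-written fourth-order Runge-Kutta step and compares the result with the analytic steady-state availability. The rates are hypothetical and the model is far simpler than the multi-subsystem urea synthesis system of the paper.
```python
# Two-state Markov availability model solved by fourth-order Runge-Kutta (hypothetical rates).

LAMBDA, MU = 0.01, 0.5  # failure and repair rates per hour

def derivs(p):
    p_up, p_down = p
    return (-LAMBDA * p_up + MU * p_down, LAMBDA * p_up - MU * p_down)

def rk4_step(p, h):
    k1 = derivs(p)
    k2 = derivs(tuple(pi + 0.5 * h * ki for pi, ki in zip(p, k1)))
    k3 = derivs(tuple(pi + 0.5 * h * ki for pi, ki in zip(p, k2)))
    k4 = derivs(tuple(pi + h * ki for pi, ki in zip(p, k3)))
    return tuple(pi + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for pi, a, b, c, d in zip(p, k1, k2, k3, k4))

p, h = (1.0, 0.0), 0.1   # start in the up state, 0.1-hour step
for _ in range(10_000):  # integrate to roughly t = 1000 hours
    p = rk4_step(p, h)
print(f"long-run availability ~ {p[0]:.5f} (analytic {MU / (LAMBDA + MU):.5f})")
```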
Reliability analysis of a robotic system using hybridized technique
NASA Astrophysics Data System (ADS)
Kumar, Naveen; Komal; Lather, J. S.
2017-09-01
In this manuscript, the reliability of a robotic system has been analyzed using the available data (containing vagueness, uncertainty, etc.). Quantification of the involved uncertainties is done through data fuzzification using triangular fuzzy numbers with known spreads, as suggested by system experts. With fuzzified data, if the existing fuzzy lambda-tau (FLT) technique is employed, the computed reliability parameters have a wide range of predictions. Therefore, the decision-maker cannot suggest any specific and influential managerial strategy to prevent unexpected failures and consequently to improve complex system performance. To overcome this problem, the present study utilizes a hybridized technique. In this technique, fuzzy set theory is utilized to quantify uncertainties, a fault tree is utilized for system modeling, the lambda-tau method is utilized to formulate mathematical expressions for the failure/repair rates of the system, and a genetic algorithm is utilized to solve the established nonlinear programming problem. Different reliability parameters of a robotic system are computed and the results are compared with the existing technique. The components of the robotic system follow exponential distributions, i.e., constant failure rates. Sensitivity analysis is also performed and the impact on system mean time between failures (MTBF) is addressed by varying other reliability parameters. Based on the analysis, some influential suggestions are given to improve the system performance.
Highly Survivable Avionics Systems for Long-Term Deep Space Exploration
NASA Technical Reports Server (NTRS)
Alkalai, L.; Chau, S.; Tai, A. T.
2001-01-01
The design of highly survivable avionics systems for long-term (> 10 years) exploration of space is an essential technology for all current and future missions in the Outer Planets roadmap. Long-term exposure to extreme environmental conditions such as high radiation and low temperatures makes survivability in space a major challenge. Moreover, current and future missions are increasingly using commercial technology such as deep sub-micron (0.25 microns) fabrication processes with specialized circuit designs, commercial interfaces, processors, memory, and other commercial off-the-shelf components that were not designed for long-term survivability in space. Therefore, the design of highly reliable and available systems for the exploration of Europa, Pluto and other destinations in deep space requires a comprehensive and fresh approach to this problem. This paper summarizes work in progress in three areas: a framework for the design of highly reliable and highly available space avionics systems, a distributed reliable computing architecture, and Guarded Software Upgrading (GSU) techniques for software upgrading during long-term missions. Additional information is contained in the original extended abstract.
Hybrid automated reliability predictor integrated work station (HiREL)
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1991-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated reliability (HiREL) workstation tool system marks another step toward the goal of producing a totally integrated computer aided design (CAD) workstation design capability. Since a reliability engineer must generally graphically represent a reliability model before he can solve it, the use of a graphical input description language increases productivity and decreases the incidence of error. The captured image displayed on a cathode ray tube (CRT) screen serves as a documented copy of the model and provides the data for automatic input to the HARP reliability model solver. The introduction of dependency gates to a fault tree notation allows the modeling of very large fault tolerant system models using a concise and visually recognizable and familiar graphical language. In addition to aiding in the validation of the reliability model, the concise graphical representation presents company management, regulatory agencies, and company customers a means of expressing a complex model that is readily understandable. The graphical postprocessor computer program HARPO (HARP Output) makes it possible for reliability engineers to quickly analyze huge amounts of reliability/availability data to observe trends due to exploratory design changes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Godwin, Aaron
The scope will be limited to analyzing the effect of the EFC within the system and how one improperly installed coupling affects the rest of the HPFL system. The discussion will include normal operations, impaired flow, and service interruptions. Normal operations are defined as two-way flow to buildings. Impaired operations are defined as a building having only one-way flow provided to it. Service interruptions occur when a building does not have water available to it. The project will look at the following aspects of the reliability of the HPFL system: mean time to failure (MTTF) of EFCs, mean time between failures (MTBF), series system models, and parallel system models. These calculations will then be used to discuss the reliability of the system when one of the couplings fails and to compare the reliability of two-way feeds versus one-way feeds.
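The series and parallel models mentioned above reduce, for exponential components, to a few lines of arithmetic. The sketch below uses an invented coupling MTTF and evaluation time (not values from the project) to contrast a one-way feed, where two couplings in series must both work, with a two-way feed, where either leg suffices.
```python
import math

MTTF = 50_000.0          # hypothetical coupling mean time to failure, hours
t = 10_000.0             # hypothetical evaluation time, hours
r = math.exp(-t / MTTF)  # single-coupling reliability at time t (exponential assumption)

series = r * r               # one-way feed: two couplings in series
parallel = 1 - (1 - r) ** 2  # two-way feed: fails only if both legs fail
print(f"single coupling {r:.4f}, series feed {series:.4f}, parallel feed {parallel:.4f}")
```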
Small and Rural Wastewater Systems
Many tools, training, technical assistance, and funding resources are available to develop and maintain reliable and affordable wastewater treatment systems in small and rural communities, including tribal communities and the U.S.-Mexico Border area.
Monteiro-Soares, M; Martins-Mendes, D; Vaz-Carneiro, A; Sampaio, S; Dinis-Ribeiro, M
2014-10-01
We systematically review the available systems used to classify diabetic foot ulcers in order to synthesize their methodological qualitative issues and their accuracy in predicting lower extremity amputation, as this may represent a critical point in these patients' care. Two investigators searched the EBSCO, ISI, PubMed and SCOPUS databases and independently selected studies published until May 2013 that reported the prognostic accuracy and/or reliability of specific systems for patients with diabetic foot ulcer in order to predict lower extremity amputation. We included 25 studies reporting a prevalence of lower extremity amputation between 6% and 78%. Eight different diabetic foot ulcer descriptions and seven prognostic stratification classification systems were addressed, with a variable number (1-9) of factors included, especially peripheral arterial disease (n = 12), infection at the ulcer site (n = 10) and ulcer depth (n = 10). The Meggitt-Wagner, S(AD)SAD and Texas University classification systems were the most extensively validated, whereas ten classifications were derived or validated only once. Reliability was reported in a single study, and accuracy measures were reported in five studies, with another eight allowing their calculation. Pooled accuracy ranged from 0.65 (for gangrene) to 0.74 (for infection). There are numerous classification systems for diabetic foot ulcer outcome prediction, but only a few studies evaluated their reliability or external validity. Studies rarely validated several systems simultaneously and only a few reported accuracy measures. Further studies assessing the reliability and accuracy of the available systems and their composing variables are needed. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Gupta, R. K.; Bhunia, A. K.; Roy, D.
2009-10-01
In this paper, we have considered the problem of constrained redundancy allocation in a series system with interval-valued reliability of components. For maximizing the overall system reliability under limited resource constraints, the problem is formulated as an unconstrained integer programming problem with interval coefficients by the penalty function technique and solved by an advanced GA for integer variables with an interval fitness function, tournament selection, uniform crossover, uniform mutation and elitism. As a special case, considering the lower and upper bounds of the interval-valued reliabilities of the components to be the same, the corresponding problem has been solved. The model has been illustrated with some numerical examples, and the results of the series redundancy allocation problem with fixed component reliability values have been compared with existing results available in the literature. Finally, sensitivity analyses have been presented graphically to study the stability of the developed GA with respect to the different GA parameters.
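A greatly simplified version of the allocation problem can be brute-forced for small cases, which helps make the interval formulation concrete. The sketch below (invented interval reliabilities, costs, and budget; exhaustive search standing in for the paper's GA and penalty function) ranks allocations by the midpoint of the interval-valued system reliability of a series system with parallel redundancy at each stage.
```python
from itertools import product

rel_intervals = [(0.80, 0.85), (0.70, 0.80), (0.90, 0.93)]  # [lower, upper] per stage (hypothetical)
costs, budget = [4, 5, 6], 40                               # hypothetical unit costs and budget

def system_interval(alloc):
    """Interval-valued reliability of a series system with n parallel units per stage."""
    lo = hi = 1.0
    for (r_lo, r_hi), n in zip(rel_intervals, alloc):
        lo *= 1 - (1 - r_lo) ** n
        hi *= 1 - (1 - r_hi) ** n
    return lo, hi

feasible = (a for a in product(range(1, 5), repeat=len(costs))
            if sum(c * n for c, n in zip(costs, a)) <= budget)
best = max(feasible, key=lambda a: sum(system_interval(a)) / 2)  # rank by interval midpoint
print("best allocation:", best, "system reliability interval:", system_interval(best))
```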
Structural Probability Concepts Adapted to Electrical Engineering
NASA Technical Reports Server (NTRS)
Steinberg, Eric P.; Chamis, Christos C.
1994-01-01
Through the use of equivalent variable analogies, the authors demonstrate how an electrical subsystem can be modeled by an equivalent structural subsystem. This allows the electrical subsystem to be probabilistically analyzed by using available structural reliability computer codes such as NESSUS. With the ability to analyze the electrical subsystem probabilistically, we can evaluate the reliability of systems that include both structural and electrical subsystems. Common examples of such systems are a structural subsystem integrated with a health-monitoring subsystem, and smart structures. Since these systems have electrical subsystems that directly affect the operation of the overall system, probabilistically analyzing them could lead to improved reliability and reduced costs. The direct effect of the electrical subsystem on the structural subsystem is of secondary order and is not considered in the scope of this work.
Deterministic Ethernet for Space Applications
NASA Astrophysics Data System (ADS)
Fidi, C.; Wolff, B.
2015-09-01
Typical spacecraft systems are distributed in order to achieve the required reliability and availability targets of the mission. However, the requirements on these systems differ for launchers, satellites, human space flight and exploration missions. Launchers typically require high reliability over very short mission times, whereas satellites or space exploration missions require very high availability over very long mission times. Comparing distributed systems in launchers with those in satellites shows very fast reaction times in launchers versus much slower ones in satellite applications. Human space flight missions are perhaps the most challenging with respect to reliability and availability, since human lives are involved and the mission times can be very long, e.g. on the ISS. The reaction times of these vehicles can also become challenging during mission scenarios such as landing or re-entry, leading to very fast control loops. In these different applications more and more autonomous functions are required to fulfil the needs of current and future missions. This autonomy leads to new requirements with respect to increased performance, determinism, reliability and availability. On the other hand, the pressure to reduce the cost of electronic components in space applications is increasing, leading to the use of more and more COTS components, especially for launchers and LEO satellites. This requires a technology that can provide a cost-competitive solution for both the highly reliable and available deep-space market and the low-cost "new space" market. Future spacecraft communication standards therefore have to be much more flexible, scalable and modular to be able to deal with these upcoming challenges. The only way to fulfil these requirements is to base them on open standards that are used across industries, leading to a reduction of lifecycle costs and an increase in performance. The use of a communication network that fulfils these requirements will be essential for such spacecraft, allowing use in launcher, satellite, human space flight and exploration missions. Using one technology and the related infrastructure for these different applications will lead to a significant reduction of complexity and to significant savings in size, weight and power while increasing the performance of the overall system. The paper focuses on the use of the TTEthernet technology for launchers, satellites and human spaceflight and demonstrates the scalability of the technology for the different applications. The data used are derived from the ESA TRP 7594 on "Reliable High-Speed Data Bus/Network for Safety-Oriented Missions".
Maximally reliable Markov chains under energy constraints.
Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam
2009-07-01
Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generations are necessary features appear to have reduced their signal variabilities by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
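The core result above has a compact numerical illustration: for an irreversible linear chain with equal rates and a fixed mean signal duration, the completion time is Erlang-distributed, so its coefficient of variation (one simple proxy for signal variability) falls as one over the square root of the number of states. The numbers below are illustrative only and are not from the paper.
```python
import math

total_mean_time = 1.0  # hold the mean signal duration fixed while adding states
for n_states in (1, 4, 16, 64):
    rate = n_states / total_mean_time  # equal rates: each state carries 1/n of the signal
    mean = n_states / rate             # Erlang(n, rate) mean = total_mean_time
    std = math.sqrt(n_states) / rate   # Erlang(n, rate) standard deviation
    print(f"n = {n_states:3d}: CV = {std / mean:.3f} (1/sqrt(n) = {1 / math.sqrt(n_states):.3f})")
```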
NASA Astrophysics Data System (ADS)
Alsyouf, Imad
2018-05-01
Reliability and availability of critical systems play an important role in achieving the stated objectives of engineering assets. Preventive replacement time affects the reliability of the components, and thus the number of system failures encountered and the resulting downtime expenses. On the other hand, the spare parts inventory level is a very critical factor that affects the availability of the system. Usually, the decision maker has many conflicting objectives that should be considered simultaneously for the selection of the optimal maintenance policy. The purpose of this research was to develop a bi-objective model to determine the preventive replacement time for three maintenance policies (age, block good-as-new, block bad-as-old) with consideration of spare parts availability. A weighted comprehensive criterion method with two objectives, cost and availability, was used. The model was tested with a typical numerical example. The results demonstrated its effectiveness in enabling the decision maker to select the optimal maintenance policy under different scenarios, taking into account preferences with respect to conflicting objectives such as cost and availability.
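The weighted comprehensive criterion can be sketched as a simple weighted sum of normalized cost and availability scores over candidate replacement intervals. Everything in the snippet below (the candidate intervals, their costs and availabilities, and the weights) is hypothetical; it only illustrates the ranking mechanics, not the paper's model.
```python
# Candidate preventive replacement intervals (hours) -> (expected cost, availability); hypothetical.
candidates = {500: (12_000, 0.990), 1000: (9_500, 0.985), 2000: (8_000, 0.975)}
w_cost, w_avail = 0.4, 0.6  # decision-maker weights (assumed)

costs = [c for c, _ in candidates.values()]
avails = [a for _, a in candidates.values()]

def score(cost, avail):
    cost_score = (max(costs) - cost) / (max(costs) - min(costs))       # lower cost is better
    avail_score = (avail - min(avails)) / (max(avails) - min(avails))  # higher availability is better
    return w_cost * cost_score + w_avail * avail_score

best = max(candidates, key=lambda k: score(*candidates[k]))
print(f"preferred replacement interval: {best} h (score {score(*candidates[best]):.3f})")
```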
Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, B.G.; Richards, R.E.; Reece, W.J.
1992-10-01
This Reference Guide contains instructions on how to install and use Version 3.5 of the NRC-sponsored Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR). The NUCLARR data management system is contained in compressed files on the floppy diskettes that accompany this Reference Guide. NUCLARR is comprised of hardware component failure data (HCFD) and human error probability (HEP) data, both of which are available via a user-friendly, menu-driven retrieval system. The data may be saved to a file in a format compatible with IRRAS 3.0 and commercially available statistical packages, or used to formulate log-plots and reports of data retrieval and aggregation findings.
Lockheed L-1011 avionic flight control redundant systems
NASA Technical Reports Server (NTRS)
Throndsen, E. O.
1976-01-01
The Lockheed L-1011 automatic flight control systems - yaw stability augmentation and automatic landing - are described in terms of their redundancies. The reliability objectives for these systems are discussed and related to in-service experience. In general, the availability of the stability augmentation system is higher than the original design requirement, but is commensurate with early estimates. The in-service experience with automatic landing is not sufficient to provide verification of Category 3 automatic landing system estimated availability.
AGUACLARA: CLEAN WATER FOR SMALL COMMUNITIES
We will systematically evaluate commercially available solar thermal collectors and thermal storage systems for use in residential scale co-generative heat and electrical power systems. Currently, reliable data is unavailable over the range of conditions and installations thes...
NASA Technical Reports Server (NTRS)
Thomas, J. M.; Hanagud, S.
1974-01-01
The design criteria and test options for aerospace structural reliability were investigated. A decision methodology was developed for selecting a combination of structural tests and structural design factors. The decision method involves the use of Bayesian statistics and statistical decision theory. Procedures are discussed for obtaining and updating data-based probabilistic strength distributions for aerospace structures when test information is available and for obtaining subjective distributions when data are not available. The techniques used in developing the distributions are explained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graves, Todd L; Hamada, Michael S
2008-01-01
Good estimates of the reliability of a system make use of test data and expert knowledge at all available levels. Furthermore, by integrating all these information sources, one can determine how best to allocate scarce testing resources to reduce uncertainty. Both of these goals are facilitated by modern Bayesian computational methods. We apply these tools to examples that were previously solvable only through the use of ingenious approximations, and use genetic algorithms to guide resource allocation.
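A minimal flavor of combining expert knowledge with test data is the conjugate beta-binomial update shown below. This is only a sketch under strong simplifying assumptions (a single pass/fail component, an invented expert prior and invented test counts); the multilevel Bayesian models and genetic-algorithm resource allocation described in the abstract are far richer.
```python
# Conjugate beta-binomial update of a component reliability (hypothetical prior and data).
prior_alpha, prior_beta = 9.0, 1.0  # expert prior with mean 0.9
successes, failures = 47, 1         # hypothetical test results

post_alpha = prior_alpha + successes
post_beta = prior_beta + failures
post_mean = post_alpha / (post_alpha + post_beta)
print(f"posterior reliability: Beta({post_alpha:.0f}, {post_beta:.0f}), mean {post_mean:.3f}")
```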
DOT National Transportation Integrated Search
1993-05-01
The Maglev control computer system should be designed to verifiably possess high reliability and safety as well as high availability to make Maglev a dependable and attractive transportation alternative to the public. A Maglev computer system has bee...
Feasibility and demonstration of a cloud-based RIID analysis system
NASA Astrophysics Data System (ADS)
Wright, Michael C.; Hertz, Kristin L.; Johnson, William C.; Sword, Eric D.; Younkin, James R.; Sadler, Lorraine E.
2015-06-01
A significant limitation in the operational utility of handheld and backpack radioisotope identifiers (RIIDs) is the inability of their onboard algorithms to accurately and reliably identify the isotopic sources of the measured gamma-ray energy spectrum. A possible solution is to move the spectral analysis computations to an external device, the cloud, where significantly greater capabilities are available. The implementation and demonstration of a prototype cloud-based RIID analysis system have shown this type of system to be feasible with currently available communication and computational technology. A system study has shown that the potential user community could derive significant benefits from an appropriately implemented cloud-based analysis system and has identified the design and operational characteristics required by the users and stakeholders for such a system. A general description of the hardware and software necessary to implement reliable cloud-based analysis, the value of the cloud expressed by the user community, and the aspects of the cloud implemented in the demonstrations are discussed.
Koo, Henry; Leveridge, Mike; Thompson, Charles; Zdero, Rad; Bhandari, Mohit; Kreder, Hans J; Stephen, David; McKee, Michael D; Schemitsch, Emil H
2008-07-01
The purpose of this study was to measure interobserver reliability of 2 classification systems of pelvic ring fractures and to determine whether computed tomography (CT) improves reliability. The reliability of several radiographic findings was also tested. Thirty patients taken from a database at a Level I trauma facility were reviewed. For each patient, 3 radiographs (AP pelvis, inlet, and outlet) and CT scans were available. Six different reviewers (pelvic and acetabular specialist, orthopaedic traumatologist, or orthopaedic trainee) classified the injury according to Young-Burgess and Tile classification systems after reviewing plain radiographs and then after CT scans. The Kappa coefficient was used to determine interobserver reliability of these classification systems before and after CT scan. For plain radiographs, overall Kappa values for the Young-Burgess and Tile classification systems were 0.72 and 0.30, respectively. For CT scan and plain radiographs, the overall Kappa values for the Young-Burgess and Tile classification systems were 0.63 and 0.33, respectively. The pelvis/acetabular surgeons demonstrated the highest level of agreement using both classification systems. For individual questions, the addition of CT did significantly improve reviewer interpretation of fracture stability. The pre-CT and post-CT Kappa values for fracture stability were 0.59 and 0.93, respectively. The CT scan can improve the reliability of assessment of pelvic stability because of its ability to identify anatomical features of injury. The Young-Burgess system may be optimal for the learning surgeon. The Tile classification system is more beneficial for specialists in pelvic and acetabular surgery.
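The kappa coefficient reported above corrects observed agreement for the agreement expected by chance. The short sketch below computes Cohen's kappa for a hypothetical two-rater, three-category contingency table; the counts are invented and do not come from the study.
```python
def cohens_kappa(table):
    """Cohen's kappa for a square two-rater contingency table of counts."""
    n = sum(sum(row) for row in table)
    p_obs = sum(table[i][i] for i in range(len(table))) / n
    p_exp = sum((sum(table[i]) / n) * (sum(row[i] for row in table) / n)
                for i in range(len(table)))
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical counts: rows are rater 1's categories, columns are rater 2's.
table = [[12, 2, 1],
         [1, 8, 1],
         [0, 1, 4]]
print(f"kappa = {cohens_kappa(table):.2f}")
```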
Jackson, Brian A; Faith, Kay Sullivan
2013-02-01
Although significant progress has been made in measuring public health emergency preparedness, system-level performance measures are lacking. This report examines a potential approach to such measures for Strategic National Stockpile (SNS) operations. We adapted an engineering analytic technique used to assess the reliability of technological systems, failure mode and effects analysis, to assess preparedness. That technique, which includes systematic mapping of the response system and identification of possible breakdowns that affect performance, provides a path to use data from existing SNS assessment tools to estimate the likely future performance of the system overall. Systems models of SNS operations were constructed and failure mode analyses were performed for each component. Linking data from existing assessments, including the technical assistance review and functional drills, to reliability assessment was demonstrated using publicly available information. The use of failure mode and effects estimates to assess overall response system reliability was demonstrated with a simple simulation example. Reliability analysis appears to be an attractive way to integrate information from the substantial investment in detailed assessments of stockpile delivery and dispensing to provide a view of likely future response performance.
Loss of Load Probability Calculation for West Java Power System with Nuclear Power Plant Scenario
NASA Astrophysics Data System (ADS)
Azizah, I. D.; Abdullah, A. G.; Purnama, W.; Nandiyanto, A. B. D.; Shafii, M. A.
2017-03-01
The Loss of Load Probability (LOLP) index indicates the quality and performance of an electrical system. The LOLP value is affected by load growth, the load duration curve, the forced outage rates of the plants, and the number and capacity of generating units. This reliability index calculation begins with load forecasting to 2018 using the multiple regression method. Scenario 1, with a composition of conventional plants, produces the largest LOLP, 71.609 days/year in 2017, while the best reliability index is generated in scenario 2 with the nuclear power plant (NPP), 6.941 days/year in 2015. Improving system reliability using nuclear power is more efficient than using conventional plants because nuclear power also has advantages such as being emission-free, having inexpensive fuel costs, and offering a high level of plant availability.
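Loss-of-load probability can be illustrated by enumerating unit outage states for a tiny generating system and summing the probability of states whose available capacity falls short of the load. The sketch below uses invented unit sizes, forced outage rates, and a single peak load; the index reported in the paper, expressed in days/year, is instead accumulated over a load duration curve.
```python
from itertools import product

units = [(400, 0.05), (400, 0.05), (600, 0.08), (1000, 0.10)]  # (MW, forced outage rate); hypothetical
peak_load = 1600  # MW, hypothetical

lolp = 0.0
for state in product((0, 1), repeat=len(units)):  # 1 means the unit is on forced outage
    prob, capacity = 1.0, 0
    for (cap, forc), out in zip(units, state):
        prob *= forc if out else (1 - forc)
        capacity += 0 if out else cap
    if capacity < peak_load:
        lolp += prob
print(f"loss-of-load probability = {lolp:.4f}")
```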
Identification of reliable gridded reference data for statistical downscaling methods in Alberta
NASA Astrophysics Data System (ADS)
Eum, H. I.; Gupta, A.
2017-12-01
Climate models provide essential information to assess impacts of climate change at regional and global scales, and statistical downscaling methods are applied to prepare climate model data for applications such as hydrologic and ecologic modelling at the watershed scale. Because the reliability and the spatial and temporal resolution of statistically downscaled climate data depend mainly on the reference data, identifying the most reliable reference data is crucial for statistical downscaling. A growing number of gridded climate products are available for key climate variables that are the main input data to regional modelling systems. However, inconsistencies in these climate products, for example, different combinations of climate variables, varying data domains and data lengths, and data accuracy varying with the physiographic characteristics of the landscape, have caused significant challenges in selecting the most suitable reference climate data for environmental studies and modelling. Employing various observation-based daily gridded climate products available in the public domain, i.e., thin plate spline regression products (ANUSPLIN and TPS), an inverse distance method (Alberta Townships), a numerical climate model (North American Regional Reanalysis) and an optimum interpolation technique (Canadian Precipitation Analysis), this study evaluates the accuracy of the climate products at each grid point by comparing them with the Adjusted and Homogenized Canadian Climate Data (AHCCD) observations for precipitation and minimum and maximum temperature over the province of Alberta. Based on the performance of the climate products at AHCCD stations, we ranked the reliability of these publicly available climate products with respect to station elevations discretized into several classes. According to the rank of climate products for each elevation class, we identified the most reliable climate products based on the elevation of target points. A web-based system was developed to allow users to easily select the most reliable reference climate data at each target point based on the elevation of the grid cell. By constructing the best combination of reference data for the study domain, the accuracy and reliability of statistically downscaled climate projections could be significantly improved.
Reliability of Beam Loss Monitors System for the Large Hadron Collider
NASA Astrophysics Data System (ADS)
Guaglio, G.; Dehning, B.; Santoni, C.
2004-11-01
The employment of superconducting magnets in high energy colliders opens challenging failure scenarios and brings new criticalities for the whole system protection. For the LHC beam loss protection system, the failure rate and the availability requirements have been evaluated using the Safety Integrity Level (SIL) approach. A downtime cost evaluation is used as input for the SIL approach. The most critical systems, which contribute to the final SIL value, are the dump system, the interlock system, the beam loss monitors system and the energy monitor system. The Beam Loss Monitors System (BLMS) is critical for short and intense particle losses, while at medium and higher loss time it is assisted by other systems, such as the quench protection system and the cryogenic system. For BLMS, hardware and software have been evaluated in detail. The reliability input figures have been collected using historical data from the SPS, using temperature and radiation damage experimental data as well as using standard databases. All the data have been processed by reliability software (Isograph). The analysis ranges from the components data to the system configuration.
Review and critical analysis: Rolling-element bearings for system life and reliability
NASA Technical Reports Server (NTRS)
Irwin, A. S.; Anderson, W. J.; Derner, W. J.
1985-01-01
A ball and cylindrical roller bearing technical specification which incorporates the latest state-of-the-art advancements was prepared for the purpose of improving bearing reliability in U.S. Army aircraft. The current U.S. Army aviation bearing designs and applications, including life analyses, were analyzed. A bearing restoration and refurbishment specification was prepared to improve bearing availability.
The embedded operating system project
NASA Technical Reports Server (NTRS)
Campbell, R. H.
1984-01-01
This progress report describes research towards the design and construction of embedded operating systems for real-time advanced aerospace applications. The applications concerned require reliable operating system support that must accommodate networks of computers. The report addresses the problems of constructing such operating systems, the communications media, reconfiguration, consistency and recovery in a distributed system, and the issues of realtime processing. A discussion is included on suitable theoretical foundations for the use of atomic actions to support fault tolerance and data consistency in real-time object-based systems. In particular, this report addresses: atomic actions, fault tolerance, operating system structure, program development, reliability and availability, and networking issues. This document reports the status of various experiments designed and conducted to investigate embedded operating system design issues.
The Role of Demand Response in Reducing Water-Related Power Plant Vulnerabilities
NASA Astrophysics Data System (ADS)
Macknick, J.; Brinkman, G.; Zhou, E.; O'Connell, M.; Newmark, R. L.; Miara, A.; Cohen, S. M.
2015-12-01
The electric sector depends on readily available water supplies for reliable and efficient operation. Elevated water temperatures or low water levels can trigger regulatory or plant-level decisions to curtail power generation, which can affect system cost and reliability. In the past decade, dozens of power plants in the U.S. have curtailed generation due to water temperatures and water shortages. Curtailments occur during the summer, when temperatures are highest and there is greatest demand for electricity. Climate change could alter the availability and temperature of water resources, exacerbating these issues. Constructing alternative cooling systems to address vulnerabilities can be capital intensive and can also affect power plant efficiencies. Demand response programs are being implemented by electric system planners and operators to reduce and shift electricity demands from peak usage periods to other times of the day. Demand response programs can also play a role in reducing water-related power sector vulnerabilities during summer months. Traditionally, production cost modeling and demand response analyses do not include water resources. In this effort, we integrate an electricity production cost modeling framework with water-related impacts on power plants in a test system to evaluate the impacts of demand response measures on power system costs and reliability. Specifically, we i) quantify the cost and reliability implications of incorporating water resources into production cost modeling, ii) evaluate the impacts of demand response measures on reducing system costs and vulnerabilities, and iii) consider sensitivity analyses with cooling systems to highlight a range of potential benefits of demand response measures. Impacts from climate change on power plant performance and water resources are discussed. Results provide key insights to policymakers and practitioners for reducing water-related power plant vulnerabilities via lower cost methods.
Simplified Phased-Mission System Analysis for Systems with Independent Component Repairs
NASA Technical Reports Server (NTRS)
Somani, Arun K.
1996-01-01
Accurate reliability analysis of a system requires accounting for all major variations in the system's operation. Most reliability analyses assume that the system configuration, success criteria, and component behavior remain the same throughout the mission; however, missions naturally consist of multiple phases. We present a new, computationally efficient technique for the analysis of phased-mission systems in which the operational states of a system can be described by combinations of component states (such as fault trees or assertions). Moreover, individual components may be repaired, if failed, as part of system operation, but repairs are independent of the system state. For repairable systems, Markov analysis techniques are typically used, but they suffer from state space explosion, which limits the size of systems that can be analyzed and is computationally expensive. Our approach avoids the state space explosion. A phase algebra is used to account for the effects of variable configurations, repairs, and success criteria from phase to phase. The technique yields exact (as opposed to approximate) results. We demonstrate it by means of several examples and present numerical results to show the effects of phases and repairs on system reliability/availability.
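As a rough, illustrative cross-check of the quantity this technique computes (not the phase-algebra method itself), the sketch below estimates phased-mission reliability by coarse time-stepped Monte Carlo for a hypothetical two-component system with independent repairs; the failure rates, repair rates, phase durations, and phase success criteria are invented.

```python
# Hypothetical illustration only: coarse Monte Carlo estimate of phased-mission
# reliability for two independently repairable components A and B.
import random

LAMBDA = {"A": 1e-3, "B": 2e-3}   # failure rates [1/h], invented
MU     = {"A": 1e-1, "B": 1e-1}   # repair rates [1/h], invented
# (phase duration in hours, phase success criterion over component states)
PHASES = [
    (10.0, lambda up: up["A"] and up["B"]),   # phase 1: both required (series)
    (50.0, lambda up: up["A"] or  up["B"]),   # phase 2: either suffices (parallel)
]
DT, N = 0.2, 5000                 # time step [h], number of simulated missions

def mission_ok():
    up = {"A": True, "B": True}
    for duration, criterion in PHASES:
        t = 0.0
        while t < duration:
            for c in up:
                if up[c]:
                    if random.random() < LAMBDA[c] * DT:
                        up[c] = False          # component fails
                elif random.random() < MU[c] * DT:
                    up[c] = True               # independent repair completes
            if not criterion(up):
                return False                   # phase success criterion violated
            t += DT
    return True

reliability = sum(mission_ok() for _ in range(N)) / N
print(f"Estimated mission reliability: {reliability:.4f}")
```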
NASA Astrophysics Data System (ADS)
Miara, A.; Macknick, J.; Vorosmarty, C. J.; Corsi, F.; Fekete, B. M.; Newmark, R. L.; Tidwell, V. C.; Cohen, S. M.
2016-12-01
Thermoelectric plants supply 85% of electricity generation in the United States. Under a warming climate, the performance of these power plants may be reduced, as thermoelectric generation is dependent upon cool ambient temperatures and sufficient water supplies at adequate temperatures. In this study, we assess the vulnerability and reliability of 1,100 operational power plants (2015) across the contiguous United States under a comprehensive set of climate scenarios (five Global Circulation Models, each with four Representative Concentration Pathways). We model individual power plant capacities using the Thermoelectric Power and Thermal Pollution model (TP2M) coupled with the Water Balance Model (WBM) at a daily temporal resolution and 5x5 km spatial resolution. Together, these models calculate power plant capacity losses that account for geophysical constraints and river network dynamics. Potential losses at the single-plant level are put into a regional energy security context by assessing collective system-level reliability for the North American Electric Reliability Corporation (NERC) regions. Results show that the thermoelectric sector at the national level has low vulnerability under the contemporary climate and that system-level reliability, in terms of available thermoelectric resources relative to thermoelectric demand, is sufficient. Under future climate scenarios, changes in water availability and warmer ambient temperatures lead to constraints on operational capacity and increased vulnerability at individual power plant sites across all regions in the United States. However, there is a strong disparity in regional vulnerability trends and magnitudes that arises from each region's climate, hydrology, and technology mix. Despite increases in vulnerabilities at the individual power plant level, regional energy systems may still be reliable (with no system failures) due to sufficient back-up reserve capacities.
Dynamic user data analysis and web composition technique using big data
NASA Astrophysics Data System (ADS)
Soundarya, P.; Vanitha, M.; Sumaiya Thaseen, I.
2017-11-01
Building a reliable service-oriented system is more important, and more challenging, than building a traditional standalone system because internet services are unpredictable. In the proposed system, fault tolerance is provided by a heuristic algorithm using two kinds of strategies, active and passive. User requirements are formulated as local and global constraints. Several services are deployed in the modification process: two bus reservation services and two train reservation services, along with a hotel reservation service. The user can choose either bus reservation service and specify a destination. If the destination is not available, an automatic backup to the other bus reservation service is carried out; if that service is also unavailable, a parallel train reservation service is initiated. An automatic hotel reservation is also initiated based on the user's mode and type of travel.
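The sequential-failover behaviour described above can be illustrated with a minimal sketch; the service names and the simulated outage probability are hypothetical, and a real deployment would call remote web services and apply the paper's heuristic active/passive strategies and user constraints.

```python
# Illustrative sketch of sequential failover across reservation services.
# Service names and the outage probability are invented stand-ins.
import random

class ServiceUnavailable(Exception):
    pass

def reserve(service, destination):
    """Stand-in for a remote reservation call; randomly unavailable."""
    if random.random() < 0.3:                      # simulated outage
        raise ServiceUnavailable(service)
    return f"{service} booked to {destination}"

def book_trip(destination):
    # Primary bus service, backup bus service, then the parallel train service.
    for service in ("bus_service_1", "bus_service_2", "train_service_1"):
        try:
            return reserve(service, destination)
        except ServiceUnavailable:
            continue                               # passive strategy: try the next service
    raise ServiceUnavailable("no transport service available")

print(book_trip("destination_city"))
```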
Estimating the Reliability of Electronic Parts in High Radiation Fields
NASA Technical Reports Server (NTRS)
Everline, Chester; Clark, Karla; Man, Guy; Rasmussen, Robert; Johnston, Allan; Kohlhase, Charles; Paulos, Todd
2008-01-01
Radiation effects on materials and electronic parts constrain the lifetime of flight systems visiting Europa. Understanding mission lifetime limits is critical to the design and planning of such a mission. Therefore, the operational aspects of radiation dose are a mission success issue. To predict and manage mission lifetime in a high radiation environment, system engineers need capable tools to trade radiation design choices against system design, reliability, and science achievements. Conventional tools and approaches provided past missions with conservative designs without the ability to predict their lifetime beyond the baseline mission. This paper describes a more systematic approach to understanding spacecraft design margin, allowing better prediction of spacecraft lifetime. This is possible because of newly available electronic parts radiation effects statistics and an enhanced spacecraft system reliability methodology. This new approach can be used in conjunction with traditional approaches for mission design. This paper describes the fundamentals of the new methodology.
A graphical language for reliability model generation
NASA Technical Reports Server (NTRS)
Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.
1990-01-01
A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.
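For readers unfamiliar with the final solution step, the toy sketch below solves a small continuous-time Markov chain for a two-component parallel system using a matrix exponential; it is not HARP's algorithm, and the failure rate and mission time are invented.

```python
# Toy continuous-time Markov chain solve for a two-component parallel system
# (not HARP itself): states 0 = both up, 1 = one up, 2 = system failed.
import numpy as np
from scipy.linalg import expm

lam = 1e-3                       # per-component failure rate [1/h], invented
Q = np.array([[-2*lam, 2*lam, 0.0],
              [0.0,   -lam,   lam],
              [0.0,    0.0,   0.0]])   # generator matrix (rows sum to zero)

t = 1000.0                       # mission time [h]
p0 = np.array([1.0, 0.0, 0.0])   # start with both components up
pt = p0 @ expm(Q * t)            # state probabilities at time t
print(f"Unreliability F(t) = {pt[2]:.4e}, Reliability R(t) = {1 - pt[2]:.6f}")
```

For this simple chain the result matches the closed form R(t) = 1 - (1 - e^(-lam*t))^2, which is a convenient sanity check.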
Thermoelectric-Driven Sustainable Sensing and Actuation Systems for Fault-Tolerant Nuclear Incidents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Longtin, Jon
2016-02-08
The Fukushima Daiichi nuclear incident in March 2011 represented an unprecedented stress test on the safety and backup systems of a nuclear power plant. The lack of reliable information from key components due to station blackout was a serious setback, leaving sensing, actuation, and reporting systems unable to communicate, and safety was compromised. Although there were several independent backup power sources for required safety functions on site, ultimately the batteries were drained and the systems stopped working. If, however, key system components were instrumented with self-powered sensing and actuation packages that could report indefinitely on the status of the system, then critical system information could be obtained while providing core actuation and control during off-normal status for as long as needed. This research project focused on the development of such a self-powered sensing and actuation system. The electrical power is derived from intrinsic heat in the reactor components, which is both reliable and plentiful. The key concept was based on thermoelectric generators that can be integrated directly onto key nuclear components, including pipes, pump housings, heat exchangers, reactor vessels, and shielding structures, as well as secondary-side components. Thermoelectric generators are solid-state devices capable of converting heat directly into electricity. They are a commercially available technology; they are compact, have no moving parts, are silent, and have excellent reliability. The key components of the sensor package include a thermoelectric generator (TEG), a microcontroller, signal processing, and a wireless radio package, with environmental hardening to survive radiation, flooding, vibration, mechanical shock (explosions), corrosion, and excessive temperature. The energy harvested from the intrinsic heat of reactor components can then be made available to power sensors, provide bi-directional communication, recharge batteries for other safety systems, and so on. Such an approach is intrinsically fault tolerant: in the event that system temperatures increase, the amount of available energy will increase, which will make more power available for applications. The system can also be used during normal conditions to provide enhanced monitoring of key system components.
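As a back-of-envelope illustration of the energy-harvesting idea, the sketch below computes the matched-load electrical output of a thermoelectric module; the Seebeck coefficient, internal resistance, and temperatures are assumed values, not data from the project.

```python
# Back-of-envelope thermoelectric generator output at matched load.
# All numbers are assumed for illustration, not project data.
def teg_matched_load_power(seebeck_v_per_k, t_hot_k, t_cold_k, internal_ohm):
    """Maximum electrical power when the load resistance equals the internal resistance."""
    v_open_circuit = seebeck_v_per_k * (t_hot_k - t_cold_k)
    return v_open_circuit**2 / (4.0 * internal_ohm)

# e.g. an effective module Seebeck coefficient of 0.05 V/K, 2 ohm internal
# resistance, and a 150 K temperature difference across the module:
print(f"{teg_matched_load_power(0.05, 473.0, 323.0, 2.0):.2f} W")
```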
Capacity and reliability analyses with applications to power quality
NASA Astrophysics Data System (ADS)
Azam, Mohammad; Tu, Fang; Shlapak, Yuri; Kirubarajan, Thiagalingam; Pattipati, Krishna R.; Karanam, Rajaiah
2001-07-01
The deregulation of energy markets, the ongoing advances in communication networks, the proliferation of intelligent metering and protective power devices, and the standardization of software/hardware interfaces are creating a dramatic shift in the way facilities acquire and utilize information about their power usage. Currently available power management systems gather a vast amount of information in the form of power usage, voltages, currents, and their time-dependent waveforms from a variety of devices (for example, circuit breakers, transformers, energy and power quality meters, protective relays, programmable logic controllers, and motor control centers). What is lacking is an information processing and decision support infrastructure to turn this voluminous information into usable operational and management knowledge: to manage equipment health and power quality, minimize downtime and outages, and optimize operations to improve productivity. This paper considers the problem of evaluating the capacity and reliability of power systems with very high availability requirements (e.g., systems providing energy to data centers and communication networks with desired availability of up to 0.9999999). The real-time capacity and margin analysis helps operators plan for additional loads and schedule repair/replacement activities. The reliability analysis, based on a computationally efficient sum of disjoint products, enables analysts to decide the optimum levels of redundancy and aids operators in prioritizing maintenance options for a given budget and in monitoring the system's capacity margin. The resulting analytical and software tool is demonstrated on a sample data center.
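The paper's reliability analysis uses a computationally efficient sum of disjoint products; as a simpler (exponentially scaling) illustration of the same quantity, the sketch below computes system availability from minimal path sets by inclusion-exclusion, with invented component availabilities and path sets.

```python
# Illustration only: system availability from minimal path sets by
# inclusion-exclusion, assuming independent components.  The paper uses a more
# efficient sum-of-disjoint-products method; this brute force is fine for
# small, invented examples like the bridge-like network below.
from itertools import combinations

availability = {"a": 0.999, "b": 0.995, "c": 0.99, "d": 0.999, "e": 0.98}
min_paths = [{"a", "b"}, {"d", "e"}, {"a", "c", "e"}, {"d", "c", "b"}]

def system_availability(paths, avail):
    total = 0.0
    for k in range(1, len(paths) + 1):
        for subset in combinations(paths, k):
            union = set().union(*subset)          # components in this union of paths
            term = 1.0
            for comp in union:
                term *= avail[comp]
            total += (-1) ** (k + 1) * term       # inclusion-exclusion sign
    return total

print(f"System availability: {system_availability(min_paths, availability):.6f}")
```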
SSME component assembly and life management expert system
NASA Technical Reports Server (NTRS)
Ali, M.; Dietz, W. E.; Ferber, H. J.
1989-01-01
The space shuttle utilizes several rocket engine systems, all of which must function with a high degree of reliability for successful mission completion. The space shuttle main engine (SSME) is by far the most complex of the rocket engine systems and is designed to be reusable. The reusability of spacecraft systems introduces many problems related to testing, reliability, and logistics. Components must be assembled from parts inventories in a manner which will most effectively utilize the available parts. Assembly must be scheduled to efficiently utilize available assembly benches while still maintaining flight schedules. Assembled components must be assigned to as many contiguous flights as possible, to minimize component changes. Each component must undergo a rigorous testing program prior to flight. In addition, testing and assembly of flight engines and components must be done in conjunction with the assembly and testing of developmental engines and components. The development, testing, manufacture, and flight assignments of the engine fleet involve the satisfaction of many logistical and operational requirements, subject to many constraints. The purpose of the SSME Component Assembly and Life Management Expert System (CALMES) is to assist the engine assembly and scheduling process, and to ensure that these activities utilize available resources as efficiently as possible.
Reliability and Maintainability Engineering - A Major Driver for Safety and Affordability
NASA Technical Reports Server (NTRS)
Safie, Fayssal M.
2011-01-01
The United States National Aeronautics and Space Administration (NASA) is in the midst of an effort to design and build a safe and affordable heavy lift vehicle to go to the moon and beyond. To achieve that, NASA is seeking more innovative and efficient approaches to reduce cost while maintaining an acceptable level of safety and mission success. One area that has the potential to contribute significantly to achieving NASA safety and affordability goals is Reliability and Maintainability (R&M) engineering. Inadequate reliability or failure of critical safety items may directly jeopardize the safety of the user(s) and result in a loss of life. Inadequate reliability of equipment may directly jeopardize mission success. Systems designed to be more reliable (fewer failures) and maintainable (fewer resources needed) can lower the total life cycle cost. The Department of Defense (DOD) and industry experience has shown that optimized and adequate levels of R&M are critical for achieving a high level of safety and mission success, and low sustainment cost. Also, lessons learned from the Space Shuttle program clearly demonstrated the importance of R&M engineering in designing and operating safe and affordable launch systems. The Challenger and Columbia accidents are examples of the severe impact of design unreliability and process induced failures on system safety and mission success. These accidents demonstrated the criticality of reliability engineering in understanding component failure mechanisms and integrated system failures across the system elements interfaces. Experience from the shuttle program also shows that insufficient Reliability, Maintainability, and Supportability (RMS) engineering analyses upfront in the design phase can significantly increase the sustainment cost and, thereby, the total life cycle cost. Emphasis on RMS during the design phase is critical for identifying the design features and characteristics needed for time efficient processing, improved operational availability, and optimized maintenance and logistic support infrastructure. This paper discusses the role of R&M in a program acquisition phase and the potential impact of R&M on safety, mission success, operational availability, and affordability. This includes discussion of the R&M elements that need to be addressed and the R&M analyses that need to be performed in order to support a safe and affordable system design. The paper also provides some lessons learned from the Space Shuttle program on the impact of R&M on safety and affordability.
Proximal humeral fracture classification systems revisited.
Majed, Addie; Macleod, Iain; Bull, Anthony M J; Zyto, Karol; Resch, Herbert; Hertel, Ralph; Reilly, Peter; Emery, Roger J H
2011-10-01
This study evaluated several classification systems and expert surgeons' anatomic understanding of these complex injuries based on a consecutive series of patients. We hypothesized that current proximal humeral fracture classification systems, regardless of imaging methods, are not sufficiently reliable to aid clinical management of these injuries. Complex fractures in 96 consecutive patients were investigated by generation of rapid sequence prototyping models from computed tomography Digital Imaging and Communications in Medicine (DICOM) imaging data. Four independent senior observers were asked to classify each model using 4 classification systems: Neer, AO, Codman-Hertel, and a prototype classification system by Resch. Interobserver and intraobserver κ coefficient values were calculated for the overall classification systems and for selected classification items. The κ coefficient values for interobserver reliability were 0.33 for Neer, 0.11 for AO, 0.44 for Codman-Hertel, and 0.15 for Resch. Interobserver reliability κ coefficient values were 0.32 for the number of fragments and 0.30 for the anatomic segment involved using the Neer system, 0.30 for the AO type (A, B, C), and 0.53, 0.48, and 0.08 for the Resch impaction/distraction, varus/valgus, and flexion/extension subgroups, respectively. Three-part fractures showed low reliability for the Neer and AO systems. Currently available evidence suggests that the fracture classifications in use have poor intra- and interobserver reliability regardless of the imaging modality used, which makes treatment of these injuries difficult and weakens scientific research as well. This study was undertaken to evaluate the reliability of several systems using rapid sequence prototype models. Overall interobserver κ values represented slight to moderate agreement. The most reliable interobserver scores were found with the Codman-Hertel classification, followed by elements of Resch's trial system. The AO system had the lowest values. The higher interobserver reliability values for the Codman-Hertel system showed that it is the only comprehensive fracture description studied, whereas the novel classification by Resch showed clear definition with respect to varus/valgus and impaction/distraction angulation. Copyright © 2011 Journal of Shoulder and Elbow Surgery Board of Trustees. All rights reserved.
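For reference, the κ statistic reported above can be computed as follows; the sketch uses Cohen's kappa for two observers with made-up category labels, whereas the study used four observers and the classification systems named above.

```python
# Minimal Cohen's kappa for two observers rating the same cases.
# The labels below are invented; they are not the study's data.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    p_expected = sum(count_a[c] * count_b[c] for c in categories) / n**2
    return (p_observed - p_expected) / (1.0 - p_expected)

obs1 = ["3-part", "2-part", "4-part", "3-part", "2-part", "3-part"]
obs2 = ["3-part", "2-part", "3-part", "3-part", "2-part", "4-part"]
print(f"kappa = {cohens_kappa(obs1, obs2):.2f}")
```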
Big data analytics for the Future Circular Collider reliability and availability studies
NASA Astrophysics Data System (ADS)
Begy, Volodimir; Apollonio, Andrea; Gutleber, Johannes; Martin-Marquez, Manuel; Niemi, Arto; Penttinen, Jussi-Pekka; Rogova, Elena; Romero-Marin, Antonio; Sollander, Peter
2017-10-01
Responding to the European Strategy for Particle Physics update 2013, the Future Circular Collider study explores scenarios of circular frontier colliders for the post-LHC era. One branch of the study assesses industrial approaches to model and simulate the reliability and availability of the entire particle collider complex based on the continuous monitoring of CERN’s accelerator complex operation. The modelling is based on an in-depth study of the CERN injector chain and LHC, and is carried out as a cooperative effort with the HL-LHC project. The work so far has revealed that a major challenge is obtaining accelerator monitoring and operational data with sufficient quality, to automate the data quality annotation and calculation of reliability distribution functions for systems, subsystems and components where needed. A flexible data management and analytics environment that permits integrating the heterogeneous data sources, the domain-specific data quality management algorithms and the reliability modelling and simulation suite is a key enabler to complete this accelerator operation study. This paper describes the Big Data infrastructure and analytics ecosystem that has been put in operation at CERN, serving as the foundation on which reliability and availability analysis and simulations can be built. This contribution focuses on data infrastructure and data management aspects and presents case studies chosen for its validation.
Ghirardelli, Alyssa; Quinn, Valerie; Sugerman, Sharon
2011-01-01
To develop a retail grocery instrument with weighted scoring to be used as an indicator of the food environment. Twenty-six retail food stores in low-income areas in California. Observational. Inter-rater reliability for the grocery store survey instrument. Description of store scoring methodology weighted to emphasize availability of healthful food. Type A intra-class correlation coefficients (ICC) with an absolute agreement definition, or a κ test for measures using ranges as categories. Measures of availability and price of fruits and vegetables performed well in reliability testing (κ = 0.681-0.800). Items for vegetable quality were better than for fruit (ICC 0.708 vs 0.528). Kappa scores indicated low to moderate agreement (0.372-0.674) on external store marketing measures and higher scores for internal store marketing. "Next to" the checkout counter was more reliable than "within 6 feet." Health departments using the store scoring system reported it as the most useful communication of neighborhood findings. There was good reliability of the measures among the research pairs. The local store scores can show the need to bring in resources and to provide access to fruits and vegetables and other healthful food. Copyright © 2011 Society for Nutrition Education. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Lin, Yi-Kuei; Huang, Cheng-Fu; Yeh, Cheng-Ta
2016-04-01
In supply chain management, satisfying customer demand is the manager's foremost concern. However, goods may rot or be spoilt during delivery owing to natural disasters, inclement weather, traffic accidents, collisions, and so on, such that the intact goods may not meet market demand. This paper concentrates on a stochastic-flow distribution network (SFDN), in which a node denotes a supplier, a transfer station, or a market, while a route denotes a carrier providing the delivery service for a pair of nodes. The available capacity of a carrier is stochastic because the capacity may be partially reserved by other customers. The addressed problem is to evaluate the system reliability, the probability that the SFDN can satisfy the market demand with the spoilage rate under the budget constraint from multiple suppliers to the customer. An algorithm is developed in terms of minimal paths to evaluate the system reliability, along with a numerical example to illustrate the solution procedure. A practical case of fruit distribution is presented accordingly to emphasise the management implications of the system reliability.
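A rough Monte Carlo alternative to the paper's minimal-path algorithm is sketched below: sample stochastic carrier capacities, compute the maximum flow, discount spoilage, and estimate the probability that intact delivered goods meet demand. The network topology, capacity states, spoilage rate, and demand are invented, and the budget constraint is omitted.

```python
# Monte Carlo estimate of P(intact delivered goods >= demand) for a tiny,
# invented stochastic-flow distribution network; not the paper's algorithm.
import random
import networkx as nx

EDGES = [("supplier", "hub", [0, 5, 10]),       # possible capacity states per carrier
         ("hub", "market", [0, 5, 10]),
         ("supplier", "market", [0, 5])]
SPOILAGE, DEMAND, N = 0.05, 8, 5000

def one_trial():
    g = nx.DiGraph()
    for u, v, states in EDGES:
        g.add_edge(u, v, capacity=random.choice(states))   # uniform over states for simplicity
    flow_value, _ = nx.maximum_flow(g, "supplier", "market")
    return flow_value * (1.0 - SPOILAGE) >= DEMAND          # intact goods meet demand?

reliability = sum(one_trial() for _ in range(N)) / N
print(f"Estimated system reliability: {reliability:.3f}")
```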
Critical issues in assuring long lifetime and fail-safe operation of optical communications network
NASA Astrophysics Data System (ADS)
Paul, Dilip K.
1993-09-01
Major factors in assuring long lifetime and fail-safe operation in optical communications networks are reviewed in this paper. Reliable functionality to design specifications, complexity of implementation, and cost are the most critical issues. As economics is the driving force to set the goals as well as priorities for the design, development, safe operation, and maintenance schedules of reliable networks, a balance is sought between the degree of reliability enhancement, cost, and acceptable outage of services. Protecting both the link and the network with high reliability components, hardware duplication, and diversity routing can ensure the best network availability. Case examples include both fiber optic and lasercom systems. Also, the state-of-the-art reliability of photonics in space environment is presented.
ERIC Educational Resources Information Center
Clifford, Matthew; Menon, Roshni; Gangi, Tracy; Condon, Christopher; Hornung, Katie
2012-01-01
This policy brief provides principal evaluation system designers information about the technical soundness and cost (i.e., time requirements) of publicly available school climate surveys. The authors focus on the technical soundness of school climate surveys because they believe that using validated and reliable surveys as an outcomes measure can…
Electronic Warfare and Radar Systems Engineering Handbook
2012-06-01
Acronym-list fragments: ...Airframe Missile, or Reliability, Availability, and Maintainability; R&M, Reliability and Maintainability; RAT, Ram Air Turbine; RBOC, Rapid Blooming... the Doppler-shifted return (see Figure 10). Reflections off rotating jet engine compressor blades, aircraft propellers, and the ram air turbine (RAT)... predict aircraft ground speed and direction of motion. Wind influences are taken into account, such that the radar can also be used to update the aircraft
PERFORMANCE OF SOLAR HOT WATER COLLECTORS FOR ELECTRICITY PRODUCTION AND CLIMATE CONTROL
We will systematically evaluate commercially available solar thermal collectors and thermal storage systems for use in residential scale co-generative heat and electrical power systems. Currently, reliable data is unavailable over the range of conditions and installations thes...
Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.
Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander
2018-04-10
Many models have been developed for predicting software reliability, but each is restricted to particular methodologies and a limited number of parameters. A number of techniques and methodologies may be used for reliability prediction, and careful attention to parameter selection is needed when estimating reliability, because the estimated reliability of a system may increase or decrease depending on the parameters used. There is therefore a need to identify the factors that most heavily affect system reliability. Reusability, now widely studied, is the basis of Component-Based Systems (CBS); Component-Based Software Engineering (CBSE) concepts can save cost, time, and human effort, and CBSE metrics may be used to assess which techniques are most suitable for estimating system reliability. Soft computing is used for small as well as large-scale problems in which accurate results are difficult to obtain because of uncertainty or randomness. Soft computing techniques have also been applied to medical problems: clinical medicine makes significant use of fuzzy logic and neural network methods, basic medical science most frequently uses neural-network and genetic-algorithm approaches, and medical scientists have shown strong interest in applying soft computing in genetics, physiology, radiology, cardiology, and neurology. CBSE encourages users to reuse past and existing software when building new products, providing quality while saving time, memory space, and money. This paper assesses commonly used soft computing techniques, including the Genetic Algorithm (GA), Neural Networks (NN), Fuzzy Logic, Support Vector Machines (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and the Artificial Bee Colony (ABC). It presents how these soft computing techniques work, assesses their use for predicting reliability, and discusses the parameters to consider when estimating and predicting reliability. The study can be used in estimating and predicting the reliability of instruments in medical systems, software engineering, computer engineering, and mechanical engineering, and the concepts can be applied to both software and hardware to predict reliability using CBSE.
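As one concrete, simplified instance of the techniques surveyed, the sketch below uses a small particle swarm optimisation to fit the two-parameter Goel-Okumoto software reliability growth model m(t) = a(1 - e^(-b t)); the failure data and PSO settings are invented.

```python
# Simplified illustration of one surveyed technique: particle swarm
# optimisation fitting the Goel-Okumoto model m(t) = a*(1 - exp(-b*t)).
# Failure data and PSO settings are invented for demonstration.
import math
import random

t_obs = [10, 20, 30, 40, 50, 60, 70, 80]            # test time (hours)
m_obs = [12, 21, 28, 33, 37, 40, 42, 43]            # cumulative failures observed

def sse(params):
    a, b = params
    return sum((m - a * (1.0 - math.exp(-b * t))) ** 2 for t, m in zip(t_obs, m_obs))

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    x = [[random.uniform(*bounds[d]) for d in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]
    pbest_val = [objective(p) for p in x]
    gbest = min(zip(pbest_val, pbest))[1][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d] + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] = min(max(x[i][d] + v[i][d], bounds[d][0]), bounds[d][1])
            val = objective(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val < objective(gbest):
                    gbest = x[i][:]
    return gbest

a_fit, b_fit = pso(sse, bounds=[(1.0, 200.0), (1e-4, 1.0)])
print(f"fitted a = {a_fit:.1f} (expected total faults), b = {b_fit:.4f}")
```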
Power transfer systems for future navy helicopters. Final report 25 Jun 70--28 Jun 72
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bossler, R.B. Jr.
1972-11-01
The purpose of this program was to conduct an analysis of helicopter power transfer systems (pts), both conventional and advanced concept type, with the objective of reducing specific weights and improving reliability beyond present values. The analysis satisfied requirements specified for a 200,000 pound cargo transport helicopter (CTH), a 70,000 pound heavy assault helicopter, and a 15,000 pound non-combat search and rescue helicopter. Four selected gearing systems (out of seven studied), optimized for lightest weight and equal reliability for the CTH, using component proportioning via stress and stiffness equations, had no significant difference between their aircraft payloads. All optimized pts were approximately 70% of statistically predicted weight. Reliability increase is predicted via gearbox derating using Weibull relationships. Among advanced concepts, the Turbine Integrated Geared Rotor was competitive for weight, technology availability and reliability increase but handicapped by a special engine requirement. The warm cycle system was found not competitive. Helicopter parametric weight analysis is shown. Advanced development plans are presented for the pts for the CTH, including the total pts system, selected pts components, and scale model flight testing in a Kaman HH2 helicopter.
Reliability demonstration test for load-sharing systems with exponential and Weibull components.
Xu, Jianyu; Hu, Qingpei; Yu, Dan; Xie, Min
2017-01-01
Conducting a Reliability Demonstration Test (RDT) is a crucial step in production. Products are tested under certain schemes to demonstrate whether their reliability indices reach pre-specified thresholds. Test schemes for RDT have been studied in different situations, e.g., lifetime testing, degradation testing and accelerated testing. Systems designed with several structures are also investigated in many RDT plans. Despite the availability of a range of test plans for different systems, RDT planning for load-sharing systems hasn't yet received the attention it deserves. In this paper, we propose a demonstration method for two specific types of load-sharing systems with components subject to two distributions: exponential and Weibull. Based on the assumptions and interpretations made in several previous works on such load-sharing systems, we set the mean time to failure (MTTF) of the total system as the demonstration target. We represent the MTTF as a summation of mean time between successive component failures. Next, we introduce generalized test statistics for both the underlying distributions. Finally, RDT plans for the two types of systems are established on the basis of these test statistics.
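For the exponential case, the demonstration target above reduces to a very small calculation; the sketch below computes the MTTF of a 1-out-of-2 load-sharing system as a sum of mean times between successive failures, with invented shared-load and full-load failure rates.

```python
# MTTF of a two-component load-sharing system with exponential lifetimes,
# written as a sum of mean times between successive failures.  Rates invented.
lam_shared = 5e-4     # per-component failure rate while both share the load [1/h]
lam_full   = 9e-4     # failure rate of the survivor carrying the full load [1/h]

# Stage 1: two components active, so the first failure occurs at rate 2*lam_shared.
# Stage 2: the surviving component runs at the elevated rate until it fails.
mttf = 1.0 / (2.0 * lam_shared) + 1.0 / lam_full
print(f"System MTTF ~ {mttf:.0f} hours")
```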
Numerical aerodynamic simulation facility. Preliminary study extension
NASA Technical Reports Server (NTRS)
1978-01-01
The production of an optimized design of key elements of the candidate facility was the primary objective of this report. This was accomplished through the following tasks: (1) further develop, optimize, and describe the functional description of the custom hardware; (2) delineate trade-off areas between performance, reliability, availability, serviceability, and programmability; (3) develop metrics and models for validation of the candidate system's performance; (4) conduct a functional simulation of the system design; (5) perform a reliability analysis of the system design; and (6) develop the software specifications, including a user-level high-level programming language, a correspondence between the programming language and the instruction set, and an outline of the operating system requirements.
Fuel Cell Balance-of-Plant Reliability Testbed Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sproat, Vern; LaHurd, Debbie
Reliability of the fuel cell system balance-of-plant (BoP) components is a critical factor that needs to be addressed prior to fuel cells becoming fully commercialized. Failure or performance degradation of BoP components has been identified as a life-limiting factor in fuel cell systems. The goal of this project is to develop a series of test beds that will test system components such as pumps, valves, sensors, fittings, etc., under operating conditions anticipated in real Polymer Electrolyte Membrane (PEM) fuel cell systems. Results will be made generally available to begin removing reliability as a roadblock to the growth of the PEM fuel cell industry. Stark State College students participating in the project, in conjunction with their coursework, have been exposed to technical knowledge and training in the handling and maintenance of hydrogen, fuel cells and system components as well as component failure modes and mechanisms. Three test beds were constructed. Testing was completed on gas flow pumps, tubing, and pressure and temperature sensors and valves.
Continuous analysis of nitrogen dioxide in gas streams of plants
NASA Technical Reports Server (NTRS)
Durkin, W. T.; Kispert, R. C.
1969-01-01
Analyzer and sampling system continuously monitors nitrogen dioxide concentrations in the feed and tail gas streams of a facility recovering nitric acid. The system, using a direct calorimetric approach, makes use of readily available equipment and is flexible and reliable in operation.
ERIC Educational Resources Information Center
Howe, Robert L.
The California Education Information System (CEIS) was developed to provide integrated information processing for educators at every level of operation. The objectives of CEIS are to make available through a state-wide system for local district use, complete, current and reliable information about education at the local and state level. CEIS…
Inertial Measurement Units for Clinical Movement Analysis: Reliability and Concurrent Validity
Nicholas, Kevin; Sparkes, Valerie; Sheeran, Liba; Davies, Jennifer L
2018-01-01
The aim of this study was to investigate the reliability and concurrent validity of a commercially available Xsens MVN BIOMECH inertial-sensor-based motion capture system during clinically relevant functional activities. A clinician with no prior experience of motion capture technologies and an experienced clinical movement scientist each assessed 26 healthy participants within each of two sessions using a camera-based motion capture system and the MVN BIOMECH system. Participants performed overground walking, squatting, and jumping. Sessions were separated by 4 ± 3 days. Reliability was evaluated using intraclass correlation coefficient and standard error of measurement, and validity was evaluated using the coefficient of multiple correlation and the linear fit method. Day-to-day reliability was generally fair-to-excellent in all three planes for hip, knee, and ankle joint angles in all three tasks. Within-day (between-rater) reliability was fair-to-excellent in all three planes during walking and squatting, and poor-to-high during jumping. Validity was excellent in the sagittal plane for hip, knee, and ankle joint angles in all three tasks and acceptable in frontal and transverse planes in squat and jump activity across joints. Our results suggest that the MVN BIOMECH system can be used by a clinician to quantify lower-limb joint angles in clinically relevant movements. PMID:29495600
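For reference, an ICC of the kind reported above can be computed as in the sketch below, which follows the Shrout and Fleiss ICC(2,1) formulation (two-way random effects, absolute agreement, single measure) together with one common SEM convention; the subjects-by-sessions matrix is invented, not the study's data.

```python
# Minimal ICC(2,1) and SEM following Shrout & Fleiss.  The 5 subjects x 2
# sessions of joint-angle values below are invented illustration data.
import numpy as np

data = np.array([[52.1, 53.4],
                 [47.8, 46.9],
                 [60.2, 58.7],
                 [55.0, 55.9],
                 [49.3, 50.1]])          # rows: subjects, columns: sessions
n, k = data.shape
grand = data.mean()
ss_subjects = k * ((data.mean(axis=1) - grand) ** 2).sum()
ss_sessions = n * ((data.mean(axis=0) - grand) ** 2).sum()
ss_error = ((data - grand) ** 2).sum() - ss_subjects - ss_sessions

msr = ss_subjects / (n - 1)              # between-subjects mean square
msc = ss_sessions / (k - 1)              # between-sessions mean square
mse = ss_error / ((n - 1) * (k - 1))     # residual mean square

icc21 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
sem = data.std(ddof=1) * np.sqrt(1 - icc21)   # one common SEM convention
print(f"ICC(2,1) = {icc21:.3f}, SEM = {sem:.2f} deg")
```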
Digital Avionics Information System (DAIS): Development and Demonstration.
1981-09-01
advances in technology. The DAIS architecture results in improved reliability and availability of avionics systems while at the same time reducing life... (DAIS) represents a significant advance in the technology of avionics system architecture. DAIS is a total systems concept, exploiting standardization... configurations and fully capable of accommodating new advances in technology. These fundamental system characteristics are described in this report; the
76 FR 66220 - Automatic Underfrequency Load Shedding and Load Shedding Plans Reliability Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-26
..., EPRI Power Systems Dynamics Tutorial, Chapter 4 at page 4-78 (2009), available at http://www.epri.com.... Power systems consist of static components (e.g., transformers and transmission lines) and dynamic... decisions on simulations, both static and dynamic, using area power system models to meet requirements in...
Reliability of Beam Loss Monitor Systems for the Large Hadron Collider
NASA Astrophysics Data System (ADS)
Guaglio, G.; Dehning, B.; Santoni, C.
2005-06-01
The increase of beam energy and beam intensity, together with the use of superconducting magnets, opens new failure scenarios and brings new criticalities for the whole accelerator protection system. For the LHC beam loss protection system, the failure rate and availability requirements have been evaluated using the Safety Integrity Level (SIL) approach, with a downtime cost evaluation used as input. The most critical systems, which contribute to the final SIL value, are the dump system, the interlock system, the beam loss monitor system, and the energy monitor system. The Beam Loss Monitors System (BLMS) is critical for short and intense particle losses at 7 TeV and is assisted by the Fast Beam Current Decay Monitors at 450 GeV. For medium and longer loss durations it is assisted by other systems, such as the quench protection system and the cryogenic system. For the BLMS, hardware and software have been evaluated in detail. The reliability input figures have been collected using historical data from the SPS, temperature and radiation damage experimental data, and standard databases. All the data have been processed with reliability software (Isograph). The analysis spans from component data to the system configuration.
Li, Tuan; Zhang, Hongping; Niu, Xiaoji; Gao, Zhouzheng
2017-10-27
Dual-frequency Global Positioning System (GPS) Real-time Kinematics (RTK) has been proven in the past few years to be a reliable and efficient technique to obtain high accuracy positioning. However, there are still challenges for GPS single-frequency RTK, such as low reliability and ambiguity resolution (AR) success rate, especially in kinematic environments. Recently, multi-Global Navigation Satellite System (multi-GNSS) has been applied to enhance the RTK performance in terms of availability and reliability of AR. In order to further enhance the multi-GNSS single-frequency RTK performance in terms of reliability, continuity and accuracy, a low-cost micro-electro-mechanical system (MEMS) inertial measurement unit (IMU) is adopted in this contribution. We tightly integrate the single-frequency GPS/BeiDou/GLONASS and MEMS-IMU through the extended Kalman filter (EKF), which directly fuses the ambiguity-fixed double-differenced (DD) carrier phase observables and IMU data. A field vehicular test was carried out to evaluate the impacts of the multi-GNSS and IMU on the AR and positioning performance in different system configurations. Test results indicate that the empirical success rate of single-epoch AR for the tightly-coupled single-frequency multi-GNSS RTK/INS integration is over 99% even at an elevation cut-off angle of 40°, and the corresponding position time series is much more stable in comparison with the GPS solution. Besides, GNSS outage simulations show that continuous positioning with certain accuracy is possible due to the INS bridging capability when GNSS positioning is not available.
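The tightly coupled filter described above is built on the standard extended Kalman filter predict/update cycle; a generic sketch of that cycle is shown below with a tiny invented two-state example, not the paper's multi-state GNSS/INS error model or its double-differenced carrier-phase measurements.

```python
# Generic extended-Kalman-filter predict/update step of the kind a tightly
# coupled GNSS/INS integration is built on.  State, matrices and the single
# pseudo-measurement below are invented.
import numpy as np

def ekf_step(x, P, F, Q, z, h, H, R):
    # Predict with the (linearised) dynamics model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the measurement: innovation, gain, corrected state.
    y = z - h(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Tiny 2-state example (position, velocity) with one range-like measurement.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = np.diag([1e-4, 1e-3])
H = np.array([[1.0, 0.0]])
R = np.array([[0.05]])
x, P = np.array([0.0, 1.0]), np.eye(2)
x, P = ekf_step(x, P, F, Q, z=np.array([0.12]), h=lambda s: H @ s, H=H, R=R)
print(x)
```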
A Simulation Model for Setting Terms for Performance Based Contract Terms
2010-05-01
torpedo self-noise and the use of ruggedized, embedded, digital microprocessors. The latter capability made it possible for digitally controlled... inventories are: System Reliability, Product Reliability, Operational Availability, Mean Time to Repair (MTTR), Mean Time to Failure (MTTF), Mean Logistics Delay Time (MLDT), Mean Supply Response Time (MSRT), and (listed as dependent metrics) Mean Accumulated Down Time (MADT
Application of an Integrated HPC Reliability Prediction Framework to HMMWV Suspension System
2010-09-13
model number M966 (TOW Missile Carrier, Basic Armor without weapons), since they were available. Tires used for all simulations were the bias type... vehicle fleet, including consideration of all kinds of uncertainty, especially model uncertainty. The end result will be a tool to use... building an adequate vehicle reliability prediction framework for military vehicles is the accurate modeling of the integration of various types of
Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven
2015-01-01
Bayesian reliability requires the development of a prior distribution to represent the degree of belief about the value of a parameter (such as a component's failure rate) before system-specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable whose possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions, mainly from reliability predictions.
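One standard way to turn a generic point estimate into a prior and update it with system-specific evidence is the conjugate gamma-Poisson model for a failure rate, sketched below with illustrative numbers; the presentation's heuristic adjustments for data-source applicability are not reproduced.

```python
# Conjugate gamma-Poisson update of a failure-rate prior built from generic
# data.  Numbers are illustrative; the applicability heuristics discussed in
# the presentation (which widen the prior) are not shown here.
prior_mean = 1e-5                      # generic failure-rate estimate [1/h]
alpha0 = 0.5                           # small shape parameter => diffuse prior
beta0 = alpha0 / prior_mean            # hours, so that alpha0/beta0 = prior mean

failures, exposure_hours = 2, 150000.0 # system-specific evidence
alpha_post = alpha0 + failures
beta_post = beta0 + exposure_hours

print(f"prior mean     = {alpha0 / beta0:.2e} /h")
print(f"posterior mean = {alpha_post / beta_post:.2e} /h")
```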
Preliminary candidate advanced avionics system for general aviation
NASA Technical Reports Server (NTRS)
Mccalla, T. M.; Grismore, F. L.; Greatline, S. E.; Birkhead, L. M.
1977-01-01
An integrated avionics system design was carried out to a level that indicates subsystem function and the methods of overall system integration. Sufficient detail was included to allow identification of possible system component technologies and to perform reliability, modularity, maintainability, cost, and risk analyses of the system design. Retrofit to older aircraft and the availability of this system to single-engine, two-place aircraft were also considered.
NASA Astrophysics Data System (ADS)
Riggs, William R.
1994-05-01
SHARP is a Navy-wide logistics technology development effort aimed at reducing the acquisition costs, support costs, and risks of military electronic weapon systems while increasing the performance capability, reliability, maintainability, and readiness of these systems. Lower life cycle costs for electronic hardware are achieved through technology transition, standardization, and reliability enhancement to improve system affordability and availability as well as enhancing fleet modernization. Advanced technology is transferred into the fleet through hardware specifications for weapon system building blocks of standard electronic modules, standard power systems, and standard electronic systems. The product lines are all defined with respect to their size, weight, I/O, environmental performance, and operational performance. This method of defining the standard is very conducive to inserting new technologies into systems using the standard hardware. This is the approach taken thus far in inserting photonic technologies into SHARP hardware. All of the efforts have been related to module packaging, i.e., interconnects, component packaging, and module developments. Fiber optic interconnects are discussed in this paper.
Innovative safety valve selection techniques and data.
Miller, Curt; Bredemyer, Lindsey
2007-04-11
The new valve data resources and modeling tools that are available today are instrumental in verifying that safety levels are being met in both current installations and project designs. If the new ISA 84 functional safety practices are followed closely, good industry-validated data are used, and a user's maintenance integrity program is strictly enforced, plants should feel confident that their design has been quantitatively reinforced. After 2 years of exhaustive reliability studies, there are now techniques and data available to address this safety system component data deficiency. Everyone who has gone through the process of safety integrity level (SIL) verification (i.e., reliability math) will appreciate the progress made in this area. The benefits of these advancements are improved safety with lower lifecycle costs, such as lower capital investment and/or longer testing intervals. This discussion will start with a review of the different valve, actuator, and solenoid/positioner combinations that can be used and their associated application restraints. Failure rate reliability studies (i.e., FMEDA) and the data associated with the final combinations will then be discussed. Finally, the impact of the selections on each safety system's SIL verification will be reviewed.
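For low-demand safety functions, SIL verification ultimately reduces to an average probability of failure on demand; the sketch below uses the widely quoted simplified 1oo1 formula PFDavg = lambda_DU * TI / 2 and the IEC 61508 low-demand SIL bands. The failure rate and proof-test interval are hypothetical, and a real verification would include the full valve/actuator/solenoid combination, common-cause factors, and proof-test coverage.

```python
# Simplified SIL verification arithmetic for a single (1oo1) valve assembly in
# low-demand mode.  The failure rate and proof-test interval are hypothetical;
# real calculations add common-cause factors, proof-test coverage and the full
# valve/actuator/solenoid combination.
lambda_du = 2e-6            # dangerous undetected failure rate [1/h], assumed
test_interval_h = 8760.0    # yearly proof test, assumed

pfd_avg = lambda_du * test_interval_h / 2.0

def sil_band(pfd):
    """IEC 61508 low-demand SIL bands."""
    if   1e-2 <= pfd < 1e-1: return "SIL 1"
    elif 1e-3 <= pfd < 1e-2: return "SIL 2"
    elif 1e-4 <= pfd < 1e-3: return "SIL 3"
    elif 1e-5 <= pfd < 1e-4: return "SIL 4"
    return "below SIL 1" if pfd >= 1e-1 else "better than SIL 4"

print(f"PFDavg = {pfd_avg:.2e} -> {sil_band(pfd_avg)}")
```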
Implementation of Integrated System Fault Management Capability
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Schmalzel, John; Morris, Jon; Smith, Harvey; Turowski, Mark
2008-01-01
Fault management to support the rocket engine test mission with highly reliable and accurate measurements while improving availability and lifecycle costs. Core elements: an architecture, taxonomy, and ontology (ATO) for DIaK management; intelligent sensor processes; intelligent element processes; intelligent controllers; intelligent subsystem processes; intelligent system processes; and intelligent component processes.
NASA Technical Reports Server (NTRS)
Joseph, T. A.; Birman, Kenneth P.
1989-01-01
A number of broadcast protocols that are reliable subject to a variety of ordering and delivery guarantees are considered. Developing applications that are distributed over a number of sites and/or must tolerate the failures of some of them becomes a considerably simpler task when such protocols are available for communication. Without such protocols the kinds of distributed applications that can reasonably be built will have a very limited scope. As the trend towards distribution and decentralization continues, it will not be surprising if reliable broadcast protocols have the same role in distributed operating systems of the future that message passing mechanisms have in the operating systems of today. On the other hand, the problems of engineering such a system remain large. For example, deciding which protocol is the most appropriate to use in a certain situation or how to balance the latency-communication-storage costs is not an easy question.
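One of the ordering guarantees such protocols provide is causal delivery; the classic vector-clock delivery test (in the style of the Birman-Schiper-Stephenson protocol) is sketched below as an illustration, not as the implementation described in the report.

```python
# Illustrative causal-delivery check using vector clocks; not the report's
# implementation.  'local' is the receiver's vector clock, 'msg_vc' the clock
# stamped on the message, and 'sender' the index of the sending process.
def can_deliver(local, msg_vc, sender):
    """Deliver only if this is the next message from 'sender' and no causally
    earlier message from any other process is still missing."""
    if msg_vc[sender] != local[sender] + 1:
        return False
    return all(msg_vc[k] <= local[k] for k in range(len(local)) if k != sender)

local = [2, 1, 0]                                # seen 2 msgs from p0, 1 from p1
print(can_deliver(local, [3, 1, 0], sender=0))   # True: next from p0, no gaps
print(can_deliver(local, [3, 2, 0], sender=0))   # False: depends on an unseen p1 message
```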
Wireless and Powerless Sensing Node System Developed for Monitoring Motors.
Lee, Dasheng
2008-08-27
Reliability and maintainability of tooling systems can be improved through condition monitoring of motors. However, it is difficult to deploy sensor nodes due to the harsh environment of industrial plants. Sensor cables are easily damaged, which renders the monitoring system deployed to assure the machine's reliability itself unreliable. A wireless and powerless sensing node integrated with a MEMS (Micro Electro-Mechanical System) sensor, a signal processor, a communication module, and a self-powered generator was developed in this study for implementation of an easily mounted network sensor for monitoring motors. A specially designed communication module transmits a sequence of electromagnetic (EM) pulses in response to the sensor signals. The EM pulses can penetrate the machine's metal case and deliver signals from the sensor inside the motor to the external data acquisition center. By using induction power, which is generated by the motor's shaft rotation, the sensor node is self-sustaining; therefore, no power line is required. A monitoring system, equipped with the novel sensing nodes, was constructed to test its performance. The test results illustrate that the novel sensing node developed in this study can effectively enhance the reliability of the motor monitoring system, and it is expected to be a valuable technology, which will be available to the plant for implementation in a reliable motor management program.
Reliability, Availability and Maintainability Design Practices Guide. Volume 1,
1981-03-01
Experience 7-3-3 Air Force RIV - Avionics 7-3-4 RIW-S Army 7-3-5a The Application of Availability to Linear 7-3-6 Indifference Contracting Improvement... acceptance of the maintainability of Air Force ground electronic systems and equipment. Although the notebook is directed at ground electronic systems... conformal coating standardization, a lack of written instructions, and no standardization between fleet activities. The Naval Air Development Center
Glenn, Jordan M; Galey, Madeline; Edwards, Abigail; Rickert, Bradley; Washington, Tyrone A
2015-07-01
The ability to generate force from the core musculature is a critical factor for sports and general activities, with insufficiencies predisposing individuals to injury. This study evaluated isometric force production as a valid and reliable method of assessing abdominal force using the abdominal test and evaluation systems tool (ABTEST). A secondary analysis estimated the 1-repetition maximum on a commercially available abdominal machine compared to maximum force and average power on the ABTEST system. This study utilized test-retest reliability and comparative analysis for validity. Reliability was measured using a test-retest design on ABTEST. Validity was measured via comparison to the estimated 1-repetition maximum on a commercially available abdominal device. Participants applied isometric abdominal force against a transducer, and muscular activation was evaluated by measuring normalized electromyographic activity at the rectus-abdominus, rectus-femoris, and erector-spinae. Test-retest force production on ABTEST was significantly correlated (r=0.84; p<0.001). Mean electromyographic activity for the rectus-abdominus (72.93% and 75.66%), rectus-femoris (6.59% and 6.51%), and erector-spinae (6.82% and 5.48%) was observed for trial-1 and trial-2, respectively. Significant correlations for the estimated 1-repetition maximum were found for average power (r=0.70, p=0.002) and maximum force (r=0.72, p<0.001). Data indicate the ABTEST can accurately measure rectus-abdominus force isolated from hip-flexor involvement. Negligible activation of the erector-spinae substantiates little subjective effort by participants in the lower back. Results suggest ABTEST is a valid and reliable method of evaluating abdominal force. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Interactive Image Analysis System Design,
1982-12-01
This report describes a design for an interactive image analysis system (IIAS), which implements terrain data extraction techniques. The design... analysis system. Additionally, the system is fully capable of supporting many generic types of image analysis and data processing, and is modularly... employs commercially available, state-of-the-art minicomputers and image display devices with proven software to achieve a cost-effective, reliable image
Systems Engineering of Electric and Hybrid Vehicles
NASA Technical Reports Server (NTRS)
Kurtz, D. W.; Levin, R. R.
1986-01-01
Technical paper notes systems engineering principles applied to development of electric and hybrid vehicles such that system performance requirements support the overall program goal of reduced petroleum consumption. Paper discusses iterative design approach dictated by systems analyses. In addition to the obvious performance parameters of range, acceleration rate, and energy consumption, systems engineering also considers such major factors as cost, safety, reliability, comfort, necessary supporting infrastructure, and availability of materials.
Operational modes, health, and status monitoring
NASA Astrophysics Data System (ADS)
Taljaard, Corrie
2016-08-01
System Engineers must fully understand the system, its support system and operational environment to optimise the design. Operations and Support Managers must also identify the correct metrics to measure the performance and to manage the operations and support organisation. Reliability Engineering and Support Analysis provide methods to design a Support System and to optimise the Availability of a complex system. Availability modelling and Failure Analysis during the design is intended to influence the design and to develop an optimum maintenance plan for a system. The remote site locations of the SKA Telescopes place emphasis on availability, failure identification and fault isolation. This paper discusses the use of Failure Analysis and a Support Database to design a Support and Maintenance plan for the SKA Telescopes. It also describes the use of modelling to develop an availability dashboard and performance metrics.
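A dashboard metric of the kind described typically rolls up inherent availability figures computed from MTBF and MTTR; a minimal sketch with invented subsystem numbers follows.

```python
# Minimal availability arithmetic of the kind an availability dashboard rolls
# up.  MTBF/MTTR figures for the two hypothetical subsystems are invented.
def inherent_availability(mtbf_h, mttr_h):
    return mtbf_h / (mtbf_h + mttr_h)

subsystems = {"dish_drive": (4000.0, 8.0), "receiver": (10000.0, 24.0)}
series_availability = 1.0
for name, (mtbf, mttr) in subsystems.items():
    a = inherent_availability(mtbf, mttr)
    series_availability *= a          # both subsystems needed (series logic)
    print(f"{name:11s}: A = {a:.4f}")
print(f"combined    : A = {series_availability:.4f}")
```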
Statistical Aspects of Reliability, Maintainability, and Availability.
1987-10-01
A total of 33 research reports were issued, and 35 papers were published in scientific journals or are in press. Research topics included optimal assembly of systems, multistate system theory, testing whether new is better than used, nonparametric survival function estimation, measuring information in censored models, generalizations of total positivity, and...
Human Reliability Assessments: Using the Past (Shuttle) to Predict the Future (Orion)
NASA Technical Reports Server (NTRS)
DeMott, Diana L.; Bigler, Mark A.
2017-01-01
NASA (National Aeronautics and Space Administration) Johnson Space Center (JSC) Safety and Mission Assurance (S&MA) uses two human reliability analysis (HRA) methodologies. The first is a simplified method which is based on how much time is available to complete the action, with consideration included for environmental and personal factors that could influence the human's reliability. This method is expected to provide a conservative value or placeholder as a preliminary estimate. This preliminary estimate or screening value is used to determine which placeholder needs a more detailed assessment. The second methodology is used to develop a more detailed human reliability assessment on the performance of critical human actions. This assessment needs to consider more than the time available; it includes factors such as the importance of the action, the context, environmental factors, potential human stresses, previous experience, training, physical design interfaces, available procedures/checklists and internal human stresses. The more detailed assessment is expected to be more realistic than that based primarily on time available. When performing an HRA on a system or process that has an operational history, we have information specific to the task based on this history and experience. In the case of a Probabilistic Risk Assessment (PRA) that is based on a new design and has no operational history, providing a "reasonable" assessment of potential crew actions becomes more challenging. To determine what is expected of future operational parameters, the experience from individuals who had relevant experience and were familiar with the system and process previously implemented by NASA was used to provide the "best" available data. Personnel from Flight Operations, Flight Directors, Launch Test Directors, Control Room Console Operators, and Astronauts were all interviewed to provide a comprehensive picture of previous NASA operations. Verification of the assumptions and expectations expressed in the assessments will be needed when the procedures, flight rules, and operational requirements are developed and then finalized.
Human Reliability Assessments: Using the Past (Shuttle) to Predict the Future (Orion)
NASA Technical Reports Server (NTRS)
DeMott, Diana; Bigler, Mark
2016-01-01
NASA (National Aeronautics and Space Administration) Johnson Space Center (JSC) Safety and Mission Assurance (S&MA) uses two human reliability analysis (HRA) methodologies. The first is a simplified method which is based on how much time is available to complete the action, with consideration included for environmental and personal factors that could influence the human's reliability. This method is expected to provide a conservative value or placeholder as a preliminary estimate. This preliminary estimate or screening value is used to determine which placeholder needs a more detailed assessment. The second methodology is used to develop a more detailed human reliability assessment on the performance of critical human actions. This assessment needs to consider more than the time available; it includes factors such as the importance of the action, the context, environmental factors, potential human stresses, previous experience, training, physical design interfaces, available procedures/checklists and internal human stresses. The more detailed assessment is expected to be more realistic than that based primarily on time available. When performing an HRA on a system or process that has an operational history, we have information specific to the task based on this history and experience. In the case of a Probabilistic Risk Assessment (PRA) that is based on a new design and has no operational history, providing a "reasonable" assessment of potential crew actions becomes more challenging. In order to determine what is expected of future operational parameters, the experience from individuals who had relevant experience and were familiar with the system and process previously implemented by NASA was used to provide the "best" available data. Personnel from Flight Operations, Flight Directors, Launch Test Directors, Control Room Console Operators and Astronauts were all interviewed to provide a comprehensive picture of previous NASA operations. Verification of the assumptions and expectations expressed in the assessments will be needed when the procedures, flight rules and operational requirements are developed and then finalized.
Pre-Proposal Assessment of Reliability for Spacecraft Docking with Limited Information
NASA Technical Reports Server (NTRS)
Brall, Aron
2013-01-01
This paper addresses the problem of estimating the reliability of a critical system function as well as its impact on the system reliability when limited information is available. The approach addresses the basic function reliability, and then the impact of multiple attempts to accomplish the function. The dependence of subsequent attempts on prior failure to accomplish the function is also addressed. The autonomous docking of two spacecraft was the specific example that generated the inquiry, and the resultant impact on total reliability generated substantial interest in presenting the results due to the relative insensitivity of overall performance to basic function reliability and moderate degradation given sufficient attempts to try and accomplish the required goal. The application of the methodology allows proper emphasis on the characteristics that can be estimated with some knowledge, and to insulate the integrity of the design from those characteristics that can't be properly estimated with any rational value of uncertainty. The nature of NASA's missions contains a great deal of uncertainty due to the pursuit of new science or operations. This approach can be applied to any function where multiple attempts at success, with or without degradation, are allowed.
HTGR plant availability and reliability evaluations. Volume I. Summary of evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cadwallader, G.J.; Hannaman, G.W.; Jacobsen, F.K.
1976-12-01
The report (1) describes a reliability assessment methodology for systematically locating and correcting areas which may contribute to unavailability of new and uniquely designed components and systems, (2) illustrates the methodology by applying it to such components in a high-temperature gas-cooled reactor (Public Service Company of Colorado's Fort St. Vrain 330-MW(e) HTGR), and (3) compares the results of the assessment with actual experience. The methodology can be applied to any component or system; however, it is particularly valuable for assessments of components or systems which provide essential functions, or the failure or mishandling of which could result in relatively large economic losses.
A quantitative analysis of the F18 flight control system
NASA Technical Reports Server (NTRS)
Doyle, Stacy A.; Dugan, Joanne B.; Patterson-Hine, Ann
1993-01-01
This paper presents an informal quantitative analysis of the F18 flight control system (FCS). The analysis technique combines a coverage model with a fault tree model. To demonstrate the method's extensive capabilities, we replace the fault tree with a digraph model of the F18 FCS, the only model available to us. The substitution shows that while digraphs have primarily been used for qualitative analysis, they can also be used for quantitative analysis. Based on our assumptions and the particular failure rates assigned to the F18 FCS components, we show that coverage does have a significant effect on the system's reliability and thus it is important to include coverage in the reliability analysis.
30 CFR 285.429 - What criteria will MMS consider in deciding whether to renew a lease or grant?
Code of Federal Regulations, 2011 CFR
2011-07-01
... existing technology. (b) Availability and feasibility of new technology. (c) Environmental and safety... generation capacity and reliability within the regional electrical distribution and transmission system. ...
Design and reliability analysis of DP-3 dynamic positioning control architecture
NASA Astrophysics Data System (ADS)
Wang, Fang; Wan, Lei; Jiang, Da-Peng; Xu, Yu-Ru
2011-12-01
As the exploration and exploitation of oil and gas proliferate throughout deepwater areas, the requirements on the reliability of dynamic positioning systems become increasingly stringent. The control objective of ensuring safe operation in deep water cannot be met by a single dynamic positioning controller. In order to increase the availability and reliability of the dynamic positioning control system, triple-redundant hardware and software control architectures were designed and developed according to the safety specifications of the DP-3 classification notation for dynamically positioned ships and rigs. The hardware redundancy takes the form of a triple-redundant hot-standby configuration comprising three identical operator stations and three real-time control computers connected to each other through dual networks. The motion control and redundancy management functions of the control computers were implemented in software on the real-time operating system VxWorks. The software realization of loose task synchronization, majority voting and fault detection is presented in detail. A hierarchical software architecture was planned during software development, consisting of an application layer, a real-time layer and a physical layer. The behavior of the DP-3 dynamic positioning control system was modeled by a Markov model to analyze its reliability, and the effects of parameter variations on the reliability measures were investigated. A time-domain dynamic simulation was carried out on a deepwater drilling rig to prove the feasibility of the proposed control architecture.
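For illustration only (the paper does not publish its code), the 2-out-of-3 majority voting that a triple-redundant controller of this kind typically applies to its command outputs can be sketched in a few lines of Python; the channel names and the agreement tolerance below are hypothetical, not values from the DP-3 design.

    # Minimal sketch of 2-out-of-3 majority voting over redundant controller outputs.
    # Assumption: three channels each produce a float command, and a channel whose
    # output differs from both others by more than TOL is suspected faulty.

    TOL = 0.05  # hypothetical agreement tolerance

    def vote(a: float, b: float, c: float, tol: float = TOL):
        """Return (voted_value, suspected_faulty_channels)."""
        agree_ab = abs(a - b) <= tol
        agree_ac = abs(a - c) <= tol
        agree_bc = abs(b - c) <= tol
        if agree_ab and agree_ac and agree_bc:
            return (a + b + c) / 3.0, []          # all channels healthy: average
        if agree_ab:
            return (a + b) / 2.0, ["C"]           # C is the outlier
        if agree_ac:
            return (a + c) / 2.0, ["B"]
        if agree_bc:
            return (b + c) / 2.0, ["A"]
        return None, ["A", "B", "C"]              # no majority: escalate to the operator

    if __name__ == "__main__":
        print(vote(1.00, 1.01, 1.72))   # -> (1.005, ['C'])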
A new communication protocol family for a distributed spacecraft control system
NASA Technical Reports Server (NTRS)
Baldi, Andrea; Pace, Marco
1994-01-01
In this paper we describe the concepts behind and architecture of a communication protocol family, which was designed to fulfill the communication requirements of ESOC's new distributed spacecraft control system SCOS 2. A distributed spacecraft control system needs a data delivery subsystem to be used for telemetry (TLM) distribution, telecommand (TLC) dispatch and inter-application communication, characterized by the following properties: reliability, so that any operational workstation is guaranteed to receive the data it needs to accomplish its role; efficiency, so that the telemetry distribution, even for missions with high telemetry rates, does not cause a degradation of the overall control system performance; scalability, so that the network is not the bottleneck both in terms of bandwidth and reconfiguration; flexibility, so that it can be efficiently used in many different situations. The new protocol family which satisfies the above requirements is built on top of widely used communication protocols (UDP and TCP), provides reliable point-to-point and broadcast communication (UDP+) and is implemented in C++. Reliability is achieved using a retransmission mechanism based on a sequence numbering scheme. Such a scheme offers cost-effective performance compared to traditional protocols, because retransmission is only triggered by applications which explicitly need reliability. This flexibility enables applications with different profiles to take advantage of the available protocols, so that the best balance between speed and reliability can be achieved case by case.
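The abstract gives no implementation detail beyond the sequence-numbering idea, so the sketch below is only a toy in-memory simulation of such a retransmission scheme, not SCOS 2 code; the loss probability and retry limit are invented.

    import random

    # Toy sketch of sequence-numbered retransmission over an unreliable channel.
    # Not the SCOS 2 implementation; loss probability and retry limit are made up.

    LOSS_PROB = 0.3
    MAX_TRIES = 10

    def lossy_deliver(message, received):
        """Deliver a (seq, payload) pair unless the simulated network drops it."""
        if random.random() > LOSS_PROB:
            received[message[0]] = message[1]
            return True          # receiver acknowledges this sequence number
        return False             # packet (or its acknowledgement) was lost

    def reliable_send(payloads):
        received = {}
        for seq, payload in enumerate(payloads):
            for _attempt in range(MAX_TRIES):
                if lossy_deliver((seq, payload), received):
                    break        # acknowledged, move on to the next message
            else:
                raise RuntimeError(f"seq {seq} undeliverable after {MAX_TRIES} tries")
        # The receiver reorders by sequence number, so ordering is preserved.
        return [received[s] for s in sorted(received)]

    if __name__ == "__main__":
        print(reliable_send(["TLM frame 1", "TLM frame 2", "TC acknowledgement"]))

Because retransmission is only attempted for traffic that asks for it, applications that prefer speed over delivery guarantees can skip the acknowledgement path entirely, which is the trade-off the abstract describes.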
On the reliable use of satellite-derived surface water products for global flood monitoring
NASA Astrophysics Data System (ADS)
Hirpa, F. A.; Revilla-Romero, B.; Thielen, J.; Salamon, P.; Brakenridge, R.; Pappenberger, F.; de Groeve, T.
2015-12-01
Early flood warning and real-time monitoring systems play a key role in flood risk reduction and disaster response management. To this end, real-time flood forecasting and satellite-based detection systems have been developed at global scale. However, due to the limited availability of up-to-date ground observations, the reliability of these systems for real-time applications has not been assessed in large parts of the globe. In this study, we performed comparative evaluations of commonly used satellite-based global flood detection and operational flood forecasting systems using 10 major flood cases reported over three years (2012-2014). Specifically, we assessed the flood detection capabilities of the near real-time global flood maps from the Global Flood Detection System (GFDS), and from the Moderate Resolution Imaging Spectroradiometer (MODIS), and the operational forecasts from the Global Flood Awareness System (GloFAS) for the major flood events recorded in global flood databases. We present the evaluation results of the global flood detection and forecasting systems in terms of correctly indicating the reported flood events and highlight the existing limitations of each system. Finally, we propose possible ways forward to improve the reliability of large scale flood monitoring tools.
Context-aided sensor fusion for enhanced urban navigation.
Martí, Enrique David; Martín, David; García, Jesús; de la Escalera, Arturo; Molina, José Manuel; Armingol, José María
2012-12-06
The deployment of Intelligent Vehicles in urban environments requires reliable estimation of positioning for urban navigation. The inherent complexity of this kind of environments fosters the development of novel systems which should provide reliable and precise solutions to the vehicle. This article details an advanced GNSS/IMU fusion system based on a context-aided Unscented Kalman filter for navigation in urban conditions. The constrained non-linear filter is here conditioned by a contextual knowledge module which reasons about sensor quality and driving context in order to adapt it to the situation, while at the same time it carries out a continuous estimation and correction of INS drift errors. An exhaustive analysis has been carried out with available data in order to characterize the behavior of available sensors and take it into account in the developed solution. The performance is then analyzed with an extensive dataset containing representative situations. The proposed solution suits the use of fusion algorithms for deploying Intelligent Transport Systems in urban environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, Jack; Lilianstrom, Al; Pasetes, Ray
2004-10-01
The FNAL Email System is the primary point of entry for email destined for an employee or user at Fermilab. This centrally supported system is designed for reliability and availability. It uses multiple layers of protection to help ensure that: (1) SPAM messages are tagged properly; (2) All mail is inspected for viruses; and (3) Valid mail gets delivered. This system employs numerous redundant subsystems to accomplish these tasks.
Propagation Impact on Modern HF (High Frequency) Communications System Design
1986-03-01
...received SNR is maximised and interference avoided. As a general principle, system availability and reliability should be improved by the use of... Lecture Series No. 145, Propagation Impact on Modern HF Communications System Design, North Atlantic Treaty Organization. ...civil and military communities for high frequency communications. It will discuss concepts of real-time channel evaluation, system design, as well as...
Reliability analysis of a utility-scale solar power plant
NASA Astrophysics Data System (ADS)
Kolb, G. J.
1992-10-01
This paper presents the results of a reliability analysis for a solar central receiver power plant that employs a salt-in-tube receiver. Because reliability data for a number of critical plant components have only recently been collected, this is the first time a credible analysis can be performed. This type of power plant will be built by a consortium of western US utilities led by the Southern California Edison Company. The 10 MW plant is known as Solar Two and is scheduled to be on-line in 1994. It is a prototype which should lead to the construction of 100 MW commercial-scale plants by the year 2000. The availability calculation was performed with the UNIRAM computer code. The analysis predicted a forced outage rate of 5.4 percent and an overall plant availability, including scheduled outages, of 91 percent. The code also identified the most important contributors to plant unavailability. Control system failures were identified as the most important cause of forced outages. Receiver problems were rated second with turbine outages third. The overall plant availability of 91 percent exceeds the goal identified by the US utility study. This paper discusses the availability calculation and presents evidence why the 91 percent availability is a credible estimate.
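As a rough consistency check of the figures quoted above (not a reproduction of the UNIRAM calculation), forced and scheduled outage fractions can be combined multiplicatively; the scheduled-outage fraction below is an assumed value chosen for illustration, not a number from the paper.

    # Back-of-the-envelope combination of forced and scheduled outage fractions.
    # The 5.4% forced outage rate is quoted in the abstract; the scheduled-outage
    # fraction is a hypothetical value, used only to illustrate the arithmetic.

    forced_outage_rate = 0.054        # from the abstract
    scheduled_outage_fraction = 0.04  # assumed, not from the paper

    availability = (1.0 - forced_outage_rate) * (1.0 - scheduled_outage_fraction)
    print(f"Overall availability ~ {availability:.1%}")   # ~90.8%, close to the quoted 91%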
The Importance of Human Reliability Analysis in Human Space Flight: Understanding the Risks
NASA Technical Reports Server (NTRS)
Hamlin, Teri L.
2010-01-01
HRA is a method used to describe, qualitatively and quantitatively, the occurrence of human failures in the operation of complex systems that affect availability and reliability. Modeling human actions with their corresponding failure in a PRA (Probabilistic Risk Assessment) provides a more complete picture of the risk and risk contributions. A high quality HRA can provide valuable information on potential areas for improvement, including training, procedural, equipment design and need for automation.
Designing robots for industrial environments. [economic factors and vulnerability
NASA Technical Reports Server (NTRS)
1975-01-01
Environmental hazards to industrial robots are summarized. The inherent reliability of the design of the Unimate robot is assessed and the data used in a management system to bring the reliability performance up to a level nearing what is theoretically available. The design is shown to be capable of a mean time between failure of 400 hours and an average up time of 98%. Specific design decisions made in view of application requirements are explored.
High-rate multi-GNSS: what does it mean to seismology?
NASA Astrophysics Data System (ADS)
Geng, J.
2017-12-01
GNSS precise point positioning (PPP) is capable of measuring centimeter-level positions epoch by epoch at a single station, and is thus treasured in tsunami/earthquake early warning where static displacements in the near field are critical to rapidly and reliably determining the magnitude of destructive events. However, most operational real-time PPP systems at present rely on only GPS data. The deficiency of such systems is that the high reliability and availability of precise displacements cannot be maintained continuously in real time, which is however a crucial requirement for disaster resistance and response. Multi-GNSS, including GLONASS, BeiDou, Galileo and QZSS in addition to GPS, can be a solution to this problem because many more satellites per epoch (e.g. 30-40) will be available. In this case, positioning failure due to data loss or blunders can be minimized, and on the other hand, positioning initializations can be accelerated to a great extent since the satellite geometry for each epoch will be enhanced enormously. We established a prototype real-time multi-GNSS PPP service based on an Asia-Pacific real-time network which can collect and stream high-rate data from all five navigation systems above. We estimated high-rate satellite clock corrections and enabled undifferenced ambiguity fixing for multi-GNSS, which therefore ensures high availability and reliability of precise displacement estimates in contrast to GPS-only systems. We will report how we can benefit from multi-GNSS for seismology, especially the noise characteristics of high-rate and sub-daily displacements. We will also use storm surge loading events to demonstrate the contribution of multi-GNSS to sub-daily transient signals.
A REVIEW OF EFFORTS TO ORGANIZE INFORMATION ABOUT HUMAN LEARNING, TRANSFER, AND RETENTION.
ERIC Educational Resources Information Center
GINSBERG, ROSE; AND OTHERS
FOURTEEN EFFORTS TO ORGANIZE AVAILABLE INFORMATION ON HUMAN LEARNING, TRANSFER, AND RETENTION ARE SUMMARIZED AND EVALUATED ON SIX CRITERIA--BEHAVIORAL SIGNIFICANCE OF CATEGORIES, SCOPE, OBJECTIVITY AND RELIABILITY OF CATEGORIES, PROGNOSIS FOR THE SYSTEM, LOGICAL STRUCTURE, AND HEURISTIC VALUE OF THE SYSTEM. ATTENTION IS GIVEN TO OTHER SOURCES OF…
Therese M. Poland; Robert A. Haack; Toby R. Petrice; Deborah L. Miller; Leah S. Bauer; Ruitong Gao
2006-01-01
Anoplophora glabripennis (Motschulsky) (Coleoptera: Cerambycidae), a pest native to China and Korea, was discovered in North America in 1996. Currently, the only reliable strategy available for eradication and control is to cut and chip all infested trees. We evaluated various doses of the systemic insecticides azadirachtin, emamectin benzoate,...
Machine learning approach for automatic quality criteria detection of health web pages.
Gaudinat, Arnaud; Grabar, Natalia; Boyer, Célia
2007-01-01
The number of medical websites is constantly growing [1]. Owing to the open nature of the Web, the reliability of information available on the Web is uneven. Internet users are overwhelmed by the quantity of information available on the Web. The situation is even more critical in the medical area, as the content proposed by health websites can have a direct impact on the users' well-being. One way to control the reliability of health websites is to assess their quality and to make this assessment available to users. The HON Foundation has defined a set of eight ethical principles. HON's experts work to manually determine whether a given website complies with the required principles. As the number of medical websites is constantly growing, manual expertise becomes insufficient and automatic systems should be used in order to help medical experts. In this paper we present the design and the evaluation of an automatic system conceived for the categorisation of medical and health documents according to the HONcode ethical principles. A first evaluation shows promising results. Currently the system shows 0.78 micro precision and 0.73 F-measure, with 0.06 errors.
Key issues for determining the exploitable water resources in a Mediterranean river basin.
Pedro-Monzonís, María; Ferrer, Javier; Solera, Abel; Estrela, Teodoro; Paredes-Arquiola, Javier
2015-01-15
One of the major difficulties in water planning is to determine the water availability in a water resource system in order to distribute water sustainably. In this paper, we analyze the key issues for determining the exploitable water resources as an indicator of water availability in a Mediterranean river basin. Historically, these territories are characterized by heavily regulated water resources and the extensive use of unconventional resources (desalination and wastewater reuse); hence, emulating the hydrological cycle is not enough. This analysis considers the Jucar River Basin as a case study. We have analyzed the different possible combinations between the streamflow time series, the length of the simulation period and the reliability criteria. As expected, the results show a wide dispersion, proving the great influence of the reliability criteria used for the quantification and localization of the exploitable water resources in the system. Therefore, it is considered risky to provide a single value to represent the water availability in the Jucar water resource system. In this sense, it is necessary that policymakers and stakeholders make a decision about the methodology used to determine the exploitable water resources in a river basin.
Application of the Systematic Sensor Selection Strategy for Turbofan Engine Diagnostics
NASA Technical Reports Server (NTRS)
Sowers, T. Shane; Kopasakis, George; Simon, Donald L.
2008-01-01
The data acquired from available system sensors forms the foundation upon which any health management system is based, and the available sensor suite directly impacts the overall diagnostic performance that can be achieved. While additional sensors may provide improved fault diagnostic performance, there are other factors that also need to be considered such as instrumentation cost, weight, and reliability. A systematic sensor selection approach is desired to perform sensor selection from a holistic system-level perspective as opposed to performing decisions in an ad hoc or heuristic fashion. The Systematic Sensor Selection Strategy is a methodology that optimally selects a sensor suite from a pool of sensors based on the system fault diagnostic approach, with the ability of taking cost, weight, and reliability into consideration. This procedure was applied to a large commercial turbofan engine simulation. In this initial study, sensor suites tailored for improved diagnostic performance are constructed from a prescribed collection of candidate sensors. The diagnostic performance of the best performing sensor suites in terms of fault detection and identification are demonstrated, with a discussion of the results and implications for future research.
Application of the Systematic Sensor Selection Strategy for Turbofan Engine Diagnostics
NASA Technical Reports Server (NTRS)
Sowers, T. Shane; Kopasakis, George; Simon, Donald L.
2008-01-01
The data acquired from available system sensors forms the foundation upon which any health management system is based, and the available sensor suite directly impacts the overall diagnostic performance that can be achieved. While additional sensors may provide improved fault diagnostic performance there are other factors that also need to be considered such as instrumentation cost, weight, and reliability. A systematic sensor selection approach is desired to perform sensor selection from a holistic system-level perspective as opposed to performing decisions in an ad hoc or heuristic fashion. The Systematic Sensor Selection Strategy is a methodology that optimally selects a sensor suite from a pool of sensors based on the system fault diagnostic approach, with the ability of taking cost, weight and reliability into consideration. This procedure was applied to a large commercial turbofan engine simulation. In this initial study, sensor suites tailored for improved diagnostic performance are constructed from a prescribed collection of candidate sensors. The diagnostic performance of the best performing sensor suites in terms of fault detection and identification are demonstrated, with a discussion of the results and implications for future research.
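The strategy itself is not reproduced in these abstracts, but the underlying idea of picking a suite from a candidate pool while weighing diagnostic benefit against cost and weight can be sketched with a simple greedy heuristic; every sensor name, score and budget below is invented for illustration.

    # Greedy sketch of sensor-suite selection: favour diagnostic benefit per unit
    # cost while staying inside cost and weight budgets. This is an illustrative
    # heuristic, not the Systematic Sensor Selection Strategy itself.

    candidates = {
        # name: (diagnostic_benefit, cost, weight) -- all values hypothetical
        "T48": (0.90, 4.0, 1.2),
        "P25": (0.70, 2.5, 0.8),
        "Nf":  (0.85, 3.0, 0.5),
        "Wf":  (0.60, 5.0, 1.0),
        "P50": (0.40, 1.0, 0.3),
    }

    def select_suite(candidates, cost_budget, weight_budget):
        suite, cost, weight = [], 0.0, 0.0
        # Rank by benefit-to-cost ratio (one of many possible figures of merit).
        ranked = sorted(candidates.items(),
                        key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
        for name, (benefit, c, w) in ranked:
            if cost + c <= cost_budget and weight + w <= weight_budget:
                suite.append(name)
                cost += c
                weight += w
        return suite, cost, weight

    if __name__ == "__main__":
        print(select_suite(candidates, cost_budget=8.0, weight_budget=2.5))

A true system-level optimization would score whole suites against the fault diagnostic approach rather than ranking sensors independently, which is why the strategy described above is framed holistically.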
Investigation of Desiccants and CO2 Sorbents for Exploration Systems 2016-2017
NASA Technical Reports Server (NTRS)
Knox, James C.; Watson, David W.; Giesy, Timothy J.; Cmarik, Gregory E.; Miller, Lee A.
2017-01-01
NASA has embarked on the mission to enable humans to explore deep space, including the goal of sending humans to Mars. This journey will require significant developments in a wide range of technical areas, as resupply and early return are not possible. Additionally, mass, power, and volume must be minimized for all phases to maximize propulsion availability. Among the critical areas identified for development are life support systems, which will require increases in reliability as well as reductions in resource usage. Two primary points for reliability are the mechanical stability of sorbent pellets and recovery of CO2 sorbent productivity after off-nominal events. In this paper, we discuss the present efforts towards screening and characterizing commercially-available sorbents for extended operation in desiccant and CO2 removal beds. With minimized dusting as the primary criterion, a commercial 13X zeolite was selected and tested for performance and risk.
Basis for the power supply reliability study of the 1 MW neutron source
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGhee, D.G.; Fathizadeh, M.
1993-07-01
The Intense Pulsed Neutron Source (IPNS) upgrade to 1 MW requires new power supply designs. This paper describes the tools and the methodology needed to assess the reliability of the power supplies. Both the design and operation of the power supplies in the synchrotron will be taken into account. To develop a reliability budget, the experiments to be conducted with this accelerator are reviewed, and data is collected on the number and duration of interruptions possible before an experiment is required to start over. Once the budget is established, several accelerators of this type will be examined. The budget is allocated to the different accelerator systems based on their operating experience. The accelerator data is usually in terms of machine availability and system down time. It takes into account mean time to failure (MTTF), time to diagnose, time to repair or replace the failed components, and time to get the machine back online. These estimated times are used as baselines for the design. Even though we are in the early stage of design, available data can be analyzed to estimate the MTTF for the power supplies.
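The budget-allocation step described above can be pictured numerically: a total allowed downtime is split among subsystems in proportion to their historically observed contribution to downtime. All of the numbers below are illustrative placeholders, not IPNS or power-supply data.

    # Sketch of allocating an overall downtime budget to subsystems in proportion
    # to their historically observed share of downtime. Numbers are illustrative.

    total_downtime_budget_hours = 200.0      # allowed per run period (hypothetical)

    observed_downtime_share = {              # from operating experience (hypothetical)
        "magnet power supplies": 0.35,
        "RF systems": 0.25,
        "injection/extraction": 0.20,
        "controls": 0.12,
        "other": 0.08,
    }

    for system, share in observed_downtime_share.items():
        allocation = share * total_downtime_budget_hours
        print(f"{system:25s} budget = {allocation:6.1f} h")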
A Closed Network Queue Model of Underground Coal Mining Production, Failure, and Repair
NASA Technical Reports Server (NTRS)
Lohman, G. M.
1978-01-01
Underground coal mining system production, failures, and repair cycles were mathematically modeled as a closed network of two queues in series. The model was designed to better understand the technological constraints on availability of current underground mining systems, and to develop guidelines for estimating the availability of advanced mining systems and their associated needs for spares as well as production and maintenance personnel. It was found that mine performance is theoretically limited by the maintainability ratio; that significant gains in availability appear possible by means of small improvements in the time between failures; that the number of crews and sections should be properly balanced for any given maintainability ratio; and that main haulage systems closest to the mine mouth require the most attention to reliability.
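The maintainability-ratio limit mentioned above corresponds to the standard steady-state availability relation, which can be illustrated in a few lines; the hour figures are hypothetical, not values from the study.

    # Steady-state availability as limited by the ratio of mean time between
    # failures (MTBF) to mean time to repair (MTTR). Figures are illustrative.

    def availability(mtbf_hours, mttr_hours):
        return mtbf_hours / (mtbf_hours + mttr_hours)

    base = availability(mtbf_hours=10.0, mttr_hours=2.0)       # ~0.833
    improved = availability(mtbf_hours=12.0, mttr_hours=2.0)   # ~0.857

    print(f"base availability = {base:.3f}")
    print(f"after +20% MTBF   = {improved:.3f}")
    # A modest improvement in time between failures yields a visible availability
    # gain, consistent with the finding summarised above.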
Composite shade guides and color matching.
Paolone, Gaetano; Orsini, Giovanna; Manauta, Jordi; Devoto, Walter; Putignano, Angelo
2014-01-01
Finding reliable systems that can help the clinician match the color of direct composite restorations is often an issue. After reviewing several composite shade guides available on the market and outlining their main characteristics and limits (unrealistic specimen thickness, not made with the same material the clinician will use, only a few allowing enamel tabs to be overlapped on dentin ones), the authors evaluated the reliability of a system designed to produce self-made standardized "tooth-shaped" shade guide specimens. Small changes in composite enamel thickness may produce large differences in esthetic outcomes. In conclusion, the results showed that all the specimens demonstrated comparable enamel thickness in all the examined areas (cervical, middle, incisal).
Reliability Quantification of Advanced Stirling Convertor (ASC) Components
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward
2010-01-01
The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17 year mission, the ASC must provide power with desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system and component level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationship among the design variables based on physics, mechanics, material behavior models, interaction of different components and their respective disciplines such as structures, materials, fluid, thermal, mechanical, electrical, etc. In addition, these models are based on the available test data, which can be updated, and analysis refined as more data and information becomes available. The failure mechanisms and causes of failure are included in the analysis, especially in light of the new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on fundamental physical behavior of components and their relationship with other components has demonstrated itself to be a superior technique to conventional reliability approaches based on utilizing failure rates derived from similar equipment or simply expert judgment.
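As one hedged illustration of the physics-of-failure style of quantification described above (not the actual ASC models), a Monte Carlo stress-strength calculation estimates the probability that a component's strength exceeds the applied stress; the distributions and all parameter values below are invented.

    import random

    # Monte Carlo stress-strength interference sketch: reliability is estimated as
    # P(strength > stress) under assumed (hypothetical) normal distributions.

    random.seed(1)

    def estimate_reliability(n_samples=100_000,
                             stress_mean=300.0, stress_sd=30.0,      # invented units
                             strength_mean=450.0, strength_sd=40.0):
        survive = 0
        for _ in range(n_samples):
            stress = random.gauss(stress_mean, stress_sd)
            strength = random.gauss(strength_mean, strength_sd)
            if strength > stress:
                survive += 1
        return survive / n_samples

    if __name__ == "__main__":
        print(f"Estimated reliability ~ {estimate_reliability():.4f}")

In the approach the abstract describes, the distributions themselves would come from material behavior models and component interactions, and would be refined as test data accumulate.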
Ultrareliable PACS: design and clinical evaluation
NASA Astrophysics Data System (ADS)
Goble, John C.; Kronander, Torbjorn; Wilske, Nils-Olof; Yngvesson, Jonas T.; Ejderholm, Henrik; Ekstrom, Marie
1999-07-01
We describe our experience in the design, installation and clinical evaluation of an ultra-reliable PACS - a system in which the fundamental design constraint was system availability. This system has been constructed using commercial, off-the-shelf hardware and software, using an open system, standards-based approach. The system is deployed in the film-free Department of Pediatric Radiology at the Astrid Lindgren Barnsjukhus, a unit of the Karolinska Institute in Stockholm, Sweden.
An Integrated Fault Tolerant Robotic Controller System for High Reliability and Safety
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Tso, Kam S.; Hecht, Myron
1994-01-01
This paper describes the concepts and features of a fault-tolerant intelligent robotic control system being developed for applications that require high dependability (reliability, availability, and safety). The system consists of two major elements: a fault-tolerant controller and an operator workstation. The fault-tolerant controller uses a strategy which allows for detection and recovery of hardware, operating system, and application software failures. The fault-tolerant controller can be used by itself in a wide variety of applications in industry, process control, and communications. The controller in combination with the operator workstation can be applied to robotic applications such as spaceborne extravehicular activities, hazardous materials handling, inspection and maintenance of high value items (e.g., space vehicles, reactor internals, or aircraft), medicine, and other tasks where a robot system failure poses a significant risk to life or property.
Shift level analysis of cable yarder availability, utilization, and productive time
James R. Sherar; Chris B. LeDoux
1989-01-01
Decision makers, loggers, managers, and planners need to understand and have methods for estimating utilization and productive time of cable logging systems. In making an accurate prediction of how much area and volume a machine will log per unit time and the associated cable yarding costs, a reliable estimate of the availability, utilization, and productive time of...
Reliability assessment and improvement for a fast corrector power supply in TPS
NASA Astrophysics Data System (ADS)
Liu, Kuo-Bin; Liu, Chen-Yao; Wang, Bao-Sheng; Wong, Yong Seng
2018-07-01
A Fast Orbit Feedback System (FOFB) can be installed in a synchrotron light source to eliminate undesired disturbances and to improve the stability of the beam orbit. The design and implementation of an accurate and reliable Fast Corrector Power Supply (FCPS) is essential to realize the effectiveness and availability of the FOFB. A reliability assessment for the FCPSs in the FOFB of the Taiwan Photon Source (TPS), considering MOSFET temperatures, is presented in this paper. The FCPS is composed of a full-bridge topology and a low-pass filter. A Hybrid Pulse Width Modulation (HPWM) scheme, which requires two MOSFETs in the full-bridge circuit to be operated at high frequency and the other two to be operated at the output frequency, is adopted to control the implemented FCPS. Due to this characteristic of HPWM, the conduction and switching losses of the MOSFETs in the FCPS are not the same. Two of the MOSFETs in the full-bridge circuit suffer higher temperatures, and the circuit reliability of the FCPS is therefore reduced. A Modified PWM Scheme (MPWMS), designed to even out the MOSFET temperatures and to improve circuit reliability, is proposed in this paper. The MOSFET temperatures of the FCPS controlled by the HPWM and by the proposed MPWMS are measured experimentally, and the reliability indices under the different PWM controls are then assessed. From the experimental results, it can be observed that the reliability of the FCPS using the proposed MPWMS is improved because the MOSFET temperatures are closer together. Since the reliability of the FCPS is enhanced, the availability of the FOFB can also be improved.
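The benefit of evening out device temperatures can be made concrete with an Arrhenius-type failure-rate model: because the failure rate grows roughly exponentially with junction temperature, two hot and two cool switches in series tend to be less reliable than four switches at the averaged temperature. The activation energy, reference failure rate and temperatures below are hypothetical, not measurements from the paper.

    import math

    # Arrhenius-style illustration: series reliability of four switches is hurt
    # more by two hot devices than it is helped by two cool ones.
    # Activation energy and reference failure rate are hypothetical.

    K_BOLTZ = 8.617e-5          # eV/K
    EA = 0.7                    # eV, assumed activation energy
    LAMBDA_REF = 1e-6           # failures/hour at the reference temperature
    T_REF = 273.15 + 50.0       # K

    def failure_rate(temp_c):
        t = 273.15 + temp_c
        return LAMBDA_REF * math.exp((EA / K_BOLTZ) * (1.0 / T_REF - 1.0 / t))

    def series_reliability(temps_c, hours=10_000):
        total_rate = sum(failure_rate(t) for t in temps_c)
        return math.exp(-total_rate * hours)

    hpwm_like = [80.0, 80.0, 50.0, 50.0]    # two hot, two cool (hypothetical)
    mpwms_like = [65.0, 65.0, 65.0, 65.0]   # temperatures evened out

    print(f"unbalanced temperatures R = {series_reliability(hpwm_like):.4f}")
    print(f"balanced temperatures   R = {series_reliability(mpwms_like):.4f}")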
78 FR 36093 - Fenpyroximate; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-17
... pesticide manufacturer. The following list of North American Industrial Classification System (NAICS) codes... there is reliable information.'' This includes exposure through drinking water and in residential... the available scientific data and other relevant information in support of this action. EPA has...
Reliability of the quench protection system for the LHC superconducting elements
NASA Astrophysics Data System (ADS)
Vergara Fernández, A.; Rodríguez-Mateos, F.
2004-06-01
The Quench Protection System (QPS) is the sole system in the Large Hadron Collider machine monitoring the signals from the superconducting elements (bus bars, current leads, magnets) which form the cold part of the electrical circuits. The basic functions to be accomplished by the QPS during the machine operation will be briefly presented. With more than 4000 internal trigger channels (quench detectors and others), the final QPS design is the result of an optimised balance between on-demand availability and false quench reliability. The built-in redundancy for the different equipment will be presented, focusing on the calculated, expected number of missed quenches and false quenches. Maintenance strategies in order to improve the performance over the years of operation will be addressed.
NASA Technical Reports Server (NTRS)
Moore, Cherice; Svetlik, Randall; Williams, Antony
2017-01-01
As spaceflight durations have increased over the last four decades, the effects of microgravity on the human body have become far better understood, as have the exercise countermeasures. Through use of a combination of aerobic and resistive exercise devices, today's astronauts and cosmonauts are able to partially counter the losses in muscle strength, aerobic fitness, and bone strength that otherwise might occur during their missions on the International Space Station (ISS). Since 2000, the ISS has employed a variety of exercise equipment used as countermeasures to these risks. Providing reliable and available exercise systems has presented significant challenges due to the unique environment. In solving these, lessons have been learned that can inform development of future systems.
Human Reliability Assessments: Using the Past (Shuttle) to Predict the Future (ORION)
NASA Technical Reports Server (NTRS)
Mott, Diana L.; Bigler, Mark A.
2017-01-01
NASA uses two HRA assessment methodologies. The first is a simplified method which is based on how much time is available to complete the action, with consideration included for environmental and personal factors that could influence the human's reliability. This method is expected to provide a conservative value or placeholder as a preliminary estimate. This preliminary estimate is used to determine which placeholder needs a more detailed assessment. The second methodology is used to develop a more detailed human reliability assessment on the performance of critical human actions. This assessment needs to consider more than the time available; it includes factors such as the importance of the action, the context, environmental factors, potential human stresses, previous experience, training, physical design interfaces, available procedures/checklists and internal human stresses. The more detailed assessment is still expected to be more realistic than that based primarily on time available. When performing an HRA on a system or process that has an operational history, we have information specific to the task based on this history and experience. In the case of a PRA model that is based on a new design and has no operational history, providing a "reasonable" assessment of potential crew actions becomes more problematic. In order to determine what is expected of future operational parameters, the experience from individuals who had relevant experience and were familiar with the system and process previously implemented by NASA was used to provide the "best" available data. Personnel from Flight Operations, Flight Directors, Launch Test Directors, Control Room Console Operators and Astronauts were all interviewed to provide a comprehensive picture of previous NASA operations. Verification of the assumptions and expectations expressed in the assessments will be needed when the procedures, flight rules and operational requirements are developed and then finalized.
NASA Technical Reports Server (NTRS)
Kleinhammer, Roger K.; Graber, Robert R.; DeMott, D. L.
2016-01-01
Reliability practitioners advocate getting reliability involved early in a product development process. However, when assigned to estimate or assess the (potential) reliability of a product or system early in the design and development phase, they are faced with a lack of reasonable models or methods for useful reliability estimation. Developing specific data is costly and time consuming. Instead, analysts rely on available data to assess reliability. Finding data relevant to the specific use and environment for any project is difficult, if not impossible. Instead, analysts attempt to develop the "best" or composite analog data to support the assessments. Industries, consortia and vendors across many areas have spent decades collecting, analyzing and tabulating fielded item and component reliability performance in terms of observed failures and operational use. This data resource provides a huge compendium of information for potential use, but it can also be compartmented by industry and difficult to find out about, access, or manipulate. One method used incorporates processes for reviewing these existing data sources and identifying the available information based on similar equipment, then using that generic data to derive an analog composite. Dissimilarities in equipment descriptions, environment of intended use, quality and even failure modes impact the "best" data incorporated in an analog composite. Once developed, this composite analog data provides a "better" representation of the reliability of the equipment or component. It can be used to support early risk or reliability trade studies, or analytical models to establish the predicted reliability data points. It also establishes a baseline prior that may be updated based on test data or observed operational constraints and failures, i.e., using Bayesian techniques. This tutorial presents a descriptive compilation of historical data sources across numerous industries and disciplines, along with examples of contents and data characteristics. It then presents methods for combining failure information from different sources and mathematical use of this data in early reliability estimation and analyses.
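One common way to use such a composite analog value as a prior and then refine it with observed data is a gamma-Poisson (conjugate) Bayesian update, in the spirit of the Bayesian techniques the tutorial mentions; the prior parameters and observed counts below are illustrative only.

    # Gamma-Poisson Bayesian update of a failure rate.
    # Prior: composite analog data expressed as a gamma(alpha0, beta0) distribution
    # over the failure rate (failures per hour). All numbers are hypothetical.

    alpha0 = 2.0          # "equivalent" prior failures from analog sources
    beta0 = 2.0e5         # "equivalent" prior exposure hours

    observed_failures = 1     # from test or operational data
    observed_hours = 5.0e4

    alpha_post = alpha0 + observed_failures
    beta_post = beta0 + observed_hours

    print(f"prior mean failure rate     = {alpha0 / beta0:.2e} per hour")
    print(f"posterior mean failure rate = {alpha_post / beta_post:.2e} per hour")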
Orbiter Autoland reliability analysis
NASA Technical Reports Server (NTRS)
Welch, D. Phillip
1993-01-01
The Space Shuttle Orbiter is the only space reentry vehicle in which the crew is seated upright. This position presents some physiological effects requiring countermeasures to prevent a crewmember from becoming incapacitated. This also introduces a potential need for automated vehicle landing capability. Autoland is a primary procedure that was identified as a requirement for landing following an extended duration orbiter mission. This report documents the results of the reliability analysis performed on the hardware required for an automated landing. A reliability block diagram was used to evaluate system reliability. The analysis considers the manual and automated landing modes currently available on the Orbiter. (Autoland is presently a backup system only.) Results of this study indicate a +/- 36 percent probability of successfully extending a nominal mission to 30 days. Enough variations were evaluated to verify that the reliability could be altered with mission planning and procedures. If the crew is modeled as being fully capable after 30 days, the probability of a successful manual landing is comparable to that of Autoland because much of the hardware is used for both manual and automated landing modes. The analysis indicates that the reliability for the manual mode is limited by the hardware and depends greatly on crew capability. Crew capability for a successful landing after 30 days has not been determined yet.
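A reliability block diagram of the kind used in this analysis reduces to products for series blocks and complements of products of unreliabilities for parallel blocks; a generic evaluator is sketched below with invented block reliabilities, not the Orbiter values.

    # Minimal reliability block diagram (RBD) evaluator: series and parallel
    # composition of block reliabilities. The numbers are illustrative only.

    def series(*rs):
        out = 1.0
        for r in rs:
            out *= r
        return out

    def parallel(*rs):
        failed = 1.0
        for r in rs:
            failed *= (1.0 - r)     # probability that every redundant block fails
        return 1.0 - failed

    # Hypothetical landing chain: sensors -> redundant guidance computers -> actuators
    sensors = 0.995
    computers = parallel(0.99, 0.99, 0.99)   # triple-redundant set
    actuators = 0.998

    print(f"system reliability ~ {series(sensors, computers, actuators):.6f}")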
The development of a reliable amateur boxing performance analysis template.
Thomson, Edward; Lamb, Kevin; Nicholas, Ceri
2013-01-01
The aim of this study was to devise a valid performance analysis system for the assessment of the movement characteristics associated with competitive amateur boxing and assess its reliability using analysts of varying experience of the sport and performance analysis. Key performance indicators to characterise the demands of an amateur contest (offensive, defensive and feinting) were developed and notated using a computerised notational analysis system. Data were subjected to intra- and inter-observer reliability assessment using median sign tests and calculating the proportion of agreement within predetermined limits of error. For all performance indicators, intra-observer reliability revealed non-significant differences between observations (P > 0.05) and high agreement was established (80-100%) regardless of whether exact or the reference value of ±1 was applied. Inter-observer reliability was less impressive for both analysts (amateur boxer and experienced analyst), with the proportion of agreement ranging from 33-100%. Nonetheless, there was no systematic bias between observations for any indicator (P > 0.05), and the proportion of agreement within the reference range (±1) was 100%. A reliable performance analysis template has been developed for the assessment of amateur boxing performance and is available for use by researchers, coaches and athletes to classify and quantify the movement characteristics of amateur boxing.
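The agreement statistic described above (the proportion of paired observations that differ by no more than a reference value) is straightforward to compute; the counts in the sketch below are invented, not the study's data.

    # Proportion of inter-observer agreement within a tolerance of +/- 1 count,
    # as used for the notational analysis above. Data values are invented.

    observer_a = [12, 8, 15, 4, 9, 11]   # e.g. offensive actions per round
    observer_b = [11, 8, 17, 4, 10, 11]

    def agreement(a, b, tolerance=1):
        hits = sum(1 for x, y in zip(a, b) if abs(x - y) <= tolerance)
        return hits / len(a)

    print(f"exact agreement : {agreement(observer_a, observer_b, 0):.0%}")
    print(f"within +/- 1    : {agreement(observer_a, observer_b, 1):.0%}")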
Choosing the optimal wind turbine variant using the ”ELECTRE” method
NASA Astrophysics Data System (ADS)
Ţişcă, I. A.; Anuşca, D.; Dumitrescu, C. D.
2017-08-01
This paper presents a method of choosing the "optimal" alternative, both under certainty and under uncertainty, based on relevant analysis criteria. Given that a product can be regarded as a system and that the reliability of the system depends on the reliability of its components, the choice of product (the appropriate system decision) can be made using the "ELECTRE" method according to the reliability level of each product. In the paper, the "ELECTRE" method is used to choose the optimal version of a wind turbine required to equip a wind farm in western Romania; the problems to be solved stem from the reliability issues experienced by currently installed wind turbines. A set of criteria is proposed to compare two or more products from a range of available products: operating conditions, environmental conditions during operation, and time requirements. Using the hierarchical ELECTRE method, the optimal wind turbine variant and the order of preference of the variants are determined on the basis of the computed concordance coefficients, with the threshold values chosen being arbitrary.
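A minimal sketch of the concordance step of an ELECTRE-style comparison is given below; the turbine variants, criterion scores and weights are invented, and the discordance indices and outranking thresholds that a full ELECTRE analysis requires are omitted.

    # Concordance matrix sketch for an ELECTRE-style comparison of alternatives.
    # Alternatives, criterion scores and weights are hypothetical; discordance
    # and threshold handling are omitted for brevity.

    criteria_weights = [0.4, 0.35, 0.25]   # operating, environmental, time criteria

    alternatives = {
        "Turbine A": [7, 6, 8],
        "Turbine B": [8, 5, 6],
        "Turbine C": [6, 8, 7],
    }

    def concordance(a_scores, b_scores, weights):
        """Sum of the weights of criteria on which alternative a is at least as good as b."""
        return sum(w for sa, sb, w in zip(a_scores, b_scores, weights) if sa >= sb)

    names = list(alternatives)
    for a in names:
        for b in names:
            if a != b:
                c = concordance(alternatives[a], alternatives[b], criteria_weights)
                print(f"C({a} outranks {b}) = {c:.2f}")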
Constraining uncertainties in water supply reliability in a tropical data scarce basin
NASA Astrophysics Data System (ADS)
Kaune, Alexander; Werner, Micha; Rodriguez, Erasmo; de Fraiture, Charlotte
2015-04-01
Assessing the water supply reliability in river basins is essential for adequate planning and development of irrigated agriculture and urban water systems. In many cases hydrological models are applied to determine the surface water availability in river basins. However, surface water availability and variability are often not appropriately quantified due to epistemic uncertainties, leading to water supply insecurity. The objective of this research is to determine the water supply reliability in order to support planning and development of irrigated agriculture in a tropical, data scarce environment. The approach proposed uses a simple hydrological model, but explicitly includes model parameter uncertainty. A transboundary river basin in the tropical region of Colombia and Venezuela with an area of approximately 2100 km² was selected as a case study. The Budyko hydrological framework was extended to consider climatological input variability and model parameter uncertainty, and through this the reliability of the surface water supply to satisfy the irrigation and urban demand was estimated. This provides a spatial estimate of the water supply reliability across the basin. For the middle basin the reliability was found to be less than 30% for most of the months when the water is extracted from an upstream source. Conversely, the monthly water supply reliability was high (r>98%) in the lower basin irrigation areas when water was withdrawn from a source located further downstream. Including model parameter uncertainty provides a complete estimate of the water supply reliability, but that estimate is influenced by the uncertainty in the model. Reducing the uncertainty in the model through improved data and perhaps improved model structure will improve the estimate of the water supply reliability, allowing better planning of irrigated agriculture and dependable water allocation decisions.
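The general idea of propagating parameter uncertainty through a Budyko-type model into a supply-reliability figure can be sketched as follows. Fu's single-parameter form of the Budyko curve is used here as an assumption about the model family, and the climate inputs, parameter range and demand value are all invented rather than taken from the Colombian-Venezuelan case study.

    import random

    # Sketch: propagate uncertainty in the Budyko (Fu) parameter w into annual
    # runoff, then report the fraction of realisations in which runoff meets a
    # demand -- a crude supply-reliability estimate. All numbers are invented.

    random.seed(0)

    P = 1400.0        # mean annual precipitation, mm (hypothetical)
    PET = 1200.0      # mean annual potential evapotranspiration, mm (hypothetical)
    DEMAND = 600.0    # demand expressed as mm over the basin area (hypothetical)

    def runoff_fu(p, pet, w):
        """Fu's form of the Budyko curve: E/P = 1 + PET/P - (1 + (PET/P)**w)**(1/w)."""
        phi = pet / p
        evap_ratio = 1.0 + phi - (1.0 + phi ** w) ** (1.0 / w)
        return p * (1.0 - evap_ratio)

    realisations = [runoff_fu(P, PET, random.uniform(1.5, 3.0)) for _ in range(5000)]
    reliability = sum(r >= DEMAND for r in realisations) / len(realisations)

    print(f"median runoff ~ {sorted(realisations)[len(realisations) // 2]:.0f} mm")
    print(f"fraction of realisations meeting demand ~ {reliability:.0%}")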
The effect of density gradients on hydrometers
NASA Astrophysics Data System (ADS)
Heinonen, Martti; Sillanpää, Sampo
2003-05-01
Hydrometers are simple but effective instruments for measuring the density of liquids. In this work, we studied the effect of non-uniform density of liquid on a hydrometer reading. The effect induced by vertical temperature gradients was investigated theoretically and experimentally. A method for compensating for the effect mathematically was developed and tested with experimental data obtained with the MIKES hydrometer calibration system. In the tests, the method was found reliable. However, the reliability depends on the available information on the hydrometer dimensions and density gradients.
NASA Technical Reports Server (NTRS)
Wiener, Earl L.
1988-01-01
The aims and methods of aircraft cockpit automation are reviewed from a human-factors perspective. Consideration is given to the mixed pilot reception of increased automation, government concern with the safety and reliability of highly automated aircraft, the formal definition of automation, and the ground-proximity warning system and accidents involving controlled flight into terrain. The factors motivating automation include technology availability; safety; economy, reliability, and maintenance; workload reduction and two-pilot certification; more accurate maneuvering and navigation; display flexibility; economy of cockpit space; and military requirements.
Validity and Reliability of the 8-Item Work Limitations Questionnaire.
Walker, Timothy J; Tullar, Jessica M; Diamond, Pamela M; Kohl, Harold W; Amick, Benjamin C
2017-12-01
Purpose To evaluate factorial validity, scale reliability, test-retest reliability, convergent validity, and discriminant validity of the 8-item Work Limitations Questionnaire (WLQ) among employees from a public university system. Methods A secondary analysis using de-identified data from employees who completed an annual Health Assessment between the years 2009-2015 tested research aims. Confirmatory factor analysis (CFA) (n = 10,165) tested the latent structure of the 8-item WLQ. Scale reliability was determined using a CFA-based approach while test-retest reliability was determined using the intraclass correlation coefficient. Convergent/discriminant validity was tested by evaluating relations between the 8-item WLQ with health/performance variables for convergent validity (health-related work performance, number of chronic conditions, and general health) and demographic variables for discriminant validity (gender and institution type). Results A 1-factor model with three correlated residuals demonstrated excellent model fit (CFI = 0.99, TLI = 0.99, RMSEA = 0.03, and SRMR = 0.01). The scale reliability was acceptable (0.69, 95% CI 0.68-0.70) and the test-retest reliability was very good (ICC = 0.78). Low-to-moderate associations were observed between the 8-item WLQ and the health/performance variables while weak associations were observed between the demographic variables. Conclusions The 8-item WLQ demonstrated sufficient reliability and validity among employees from a public university system. Results suggest the 8-item WLQ is a usable alternative for studies when the more comprehensive 25-item WLQ is not available.
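For readers unfamiliar with the test-retest statistic reported above, a two-way random-effects ICC(2,1) in the Shrout-Fleiss form can be computed from first principles as below; the two-occasion scores are invented, not the study's data.

    # ICC(2,1): two-way random effects, absolute agreement, single measurement,
    # computed from scratch for a test-retest design. Data values are invented.

    scores = [  # (occasion 1, occasion 2) per participant
        (10, 11), (14, 13), (8, 9), (16, 15), (12, 12), (9, 11), (15, 16), (7, 8),
    ]

    n = len(scores)          # participants
    k = 2                    # measurement occasions ("raters")
    grand = sum(sum(row) for row in scores) / (n * k)

    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]

    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    icc_2_1 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    print(f"ICC(2,1) = {icc_2_1:.2f}")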
Woodham, W.M.
1982-01-01
This report provides results of reliability and cost-effectiveness studies of the GOES satellite data-collection system used to operate a small hydrologic data network in west-central Florida. The GOES system, in its present state of development, was found to be about as reliable as conventional methods of data collection. Benefits of using the GOES system include some cost and manpower reduction, improved data accuracy, near real-time data availability, and direct computer storage and analysis of data. The GOES system could allow annual manpower reductions of 19 to 23 percent, with costs decreasing for some and increasing for other single-parameter sites, such as streamflow, rainfall, and ground-water monitoring stations. Manpower reductions of 46 percent or more appear possible for multiple-parameter sites. Implementation of expected improvements in instrumentation and data handling procedures should further reduce costs. (USGS)
Are Education Cost Functions Ready for Prime Time? An Examination of Their Validity and Reliability
ERIC Educational Resources Information Center
Duncombe, William; Yinger, John
2011-01-01
This article makes the case that cost functions are the best available methodology for ensuring consistency between a state's educational accountability system and its education finance system. Because they are based on historical data and well-known statistical methods, cost functions are a particularly flexible and low-cost way to forecast what…
Design of testbed and emulation tools
NASA Technical Reports Server (NTRS)
Lundstrom, S. F.; Flynn, M. J.
1986-01-01
The research summarized was concerned with the design of testbed and emulation tools suitable to assist in projecting, with reasonable accuracy, the expected performance of highly concurrent computing systems on large, complete applications. Such testbed and emulation tools are intended for the eventual use of those exploring new concurrent system architectures and organizations, either as users or as designers of such systems. While a range of alternatives was considered, a software-based set of hierarchical tools was chosen to provide maximum flexibility, to ease the move to new computers as technology improves, and to take advantage of the inherent reliability and availability of commercially available computing systems.
Simple, Script-Based Science Processing Archive
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Hegde, Mahabaleshwara; Barth, C. Wrandle
2007-01-01
The Simple, Scalable, Script-based Science Processing (S4P) Archive (S4PA) is a disk-based archival system for remote sensing data. It is based on the data-driven framework of S4P and is used for data transfer, data preprocessing, metadata generation, data archive, and data distribution. New data are automatically detected by the system. S4P provides services such as data access control, data subscription, metadata publication, data replication, and data recovery. It comprises scripts that control the data flow. The system detects the availability of data on an FTP (file transfer protocol) server, initiates data transfer, preprocesses data if necessary, and archives it on readily available disk drives with FTP and HTTP (Hypertext Transfer Protocol) access, allowing instantaneous data access. There are options for plug-ins for data preprocessing before storage. Publication of metadata to external applications such as the Earth Observing System Clearinghouse (ECHO) is also supported. S4PA includes a graphical user interface for monitoring the system operation and a tool for deploying the system. To ensure reliability, S4P continuously checks stored data for integrity. Further reliability is provided by tape backups of disks, made once a disk partition is full and closed. The system is designed for low maintenance, requiring minimal operator oversight.
NASA Astrophysics Data System (ADS)
Zammouri, Mounira; Ribeiro, Luis
2017-05-01
A groundwater flow model of the transboundary Saharan aquifer system was developed in 2003 and is used for management and decision-making by Algeria, Tunisia and Libya. In decision-making processes, reliability plays a decisive role. This paper examines the reliability of the Saharan aquifer model and aims to detect the shortcomings of a model that is considered properly calibrated. After presenting the calibration results of the 2003 modelling effort, the uncertainty in the model arising from the lack of groundwater level and transmissivity data is analyzed using kriging and a stochastic approach. The structural analysis of the steady-state piezometry and of the logarithms of transmissivity was carried out for the Continental Intercalaire (CI) and the Complexe Terminal (CT) aquifers. The available data (piezometry and transmissivity) were compared to the calculated values using a geostatistical approach. Using a stochastic approach, 2500 realizations of a log-normal random transmissivity field of the CI aquifer were generated to assess the errors in the model output due to the uncertainty in transmissivity. Two types of poor calibration are shown. In some regions, calibration should be improved using the available data. In other areas, model refinement requires gathering new data to enhance knowledge of the aquifer system. The stochastic simulation results showed that the calculated drawdowns in 2050 could be higher than the values predicted by the calibrated model.
Bru, Juan; Berger, Christopher A
2012-01-01
Background Point-of-care electronic medical records (EMRs) are a key tool to manage chronic illness. Several EMRs have been developed for use in treating HIV and tuberculosis, but their applicability to primary care, technical requirements and clinical functionalities are largely unknown. Objectives This study aimed to address the needs of clinicians from resource-limited settings without reliable internet access who are considering adopting an open-source EMR. Study eligibility criteria Open-source point-of-care EMRs suitable for use in areas without reliable internet access. Study appraisal and synthesis methods The authors conducted a comprehensive search of all open-source EMRs suitable for sites without reliable internet access. The authors surveyed clinician users and technical implementers from a single site and technical developers of each software product. The authors evaluated availability, cost and technical requirements. Results The hardware and software for all six systems is easily available, but they vary considerably in proprietary components, installation requirements and customisability. Limitations This study relied solely on self-report from informants who developed and who actively use the included products. Conclusions and implications of key findings Clinical functionalities vary greatly among the systems, and none of the systems yet meet minimum requirements for effective implementation in a primary care resource-limited setting. The safe prescribing of medications is a particular concern with current tools. The dearth of fully functional EMR systems indicates a need for a greater emphasis by global funding agencies to move beyond disease-specific EMR systems and develop a universal open-source health informatics platform. PMID:22763661
Nodal failure index approach to groundwater remediation design
Lee, J.; Reeves, H.W.; Dowding, C.H.
2008-01-01
Computer simulations often are used to design and to optimize groundwater remediation systems. We present a new computationally efficient approach that calculates the reliability of remedial design at every location in a model domain with a single simulation. The estimated reliability and other model information are used to select a best remedial option for given site conditions, conceptual model, and available data. To evaluate design performance, we introduce the nodal failure index (NFI) to determine the number of nodal locations at which the probability of success is below the design requirement. The strength of the NFI approach is that selected areas of interest can be specified for analysis and the best remedial design determined for this target region. An example application of the NFI approach using a hypothetical model shows how the spatial distribution of reliability can be used for a decision support system in groundwater remediation design. © 2008 ASCE.
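As a rough illustration of the nodal failure index described in the preceding abstract (not the authors' code), the sketch below counts the nodes whose Monte Carlo success probability falls below a design requirement; the function name, node counts, and reliability target are hypothetical.

import numpy as np

def nodal_failure_index(success_counts, n_realizations, required_reliability, target_nodes=None):
    """Count nodes whose estimated probability of success falls below the design requirement.

    success_counts      : number of Monte Carlo realizations in which each node met the
                          cleanup criterion (e.g., concentration below the regulatory limit)
    n_realizations      : total number of Monte Carlo realizations
    required_reliability: design requirement, e.g., 0.95
    target_nodes        : optional boolean mask restricting the NFI to an area of interest
    """
    p_success = np.asarray(success_counts, dtype=float) / n_realizations
    below = p_success < required_reliability
    if target_nodes is not None:
        below = below & np.asarray(target_nodes, dtype=bool)
    return int(below.sum())

# Hypothetical example: 500 nodes, 200 realizations, 95% reliability requirement
rng = np.random.default_rng(0)
counts = rng.integers(150, 201, size=500)          # successes per node out of 200 runs
print("NFI (nodes below requirement):", nodal_failure_index(counts, 200, 0.95))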
Data reliability in complex directed networks
NASA Astrophysics Data System (ADS)
Sanz, Joaquín; Cozzo, Emanuele; Moreno, Yamir
2013-12-01
The availability of data from many different sources and fields of science has made it possible to map out an increasing number of networks of contacts and interactions. However, quantifying how reliable these data are remains an open problem. From Biology to Sociology and Economics, the identification of false and missing positives has become a problem that calls for a solution. In this work we extend one of the newest, best performing models—due to Guimerá and Sales-Pardo in 2009—to directed networks. The new methodology is able to identify missing and spurious directed interactions with more precision than previous approaches, which renders it particularly useful for analyzing data reliability in systems like trophic webs, gene regulatory networks, communication patterns and several social systems. We also show, using real-world networks, how the method can be employed to help search for new interactions in an efficient way.
A Filing System for Medical Literature
Cumming, Millie
1988-01-01
The author reviews the types of systems available for personal literature files and makes specific recommendations for filing systems for family physicians. A personal filing system can be an integral part of family practice, and need not require time out of proportion to the worth of the system. Because it is a personal system, different types will suit different users; some systems, however, are more reliable than others for use in family practice. (Can Fam Physician 1988; 34:425-433.) PMID:21253062
ERIC Educational Resources Information Center
Amrein-Beardsley, Audrey; Collins, Clarin
2012-01-01
The SAS Educational Value-Added Assessment System (SAS[R] EVAAS[R]) is the most widely used value-added system in the country. It is also self-proclaimed as "the most robust and reliable" system available, with its greatest claimed benefit being to help educators improve their teaching practices. This study critically examined the effects of SAS[R] EVAAS[R] as…
Reliability-Productivity Curve, a Tool for Adaptation Measures Identification
NASA Astrophysics Data System (ADS)
Chávez-Jiménez, A.; Granados, A.; Garrote, L. M.
2015-12-01
Due to climate change effects, water scarcity problems will intensify in several regions. These problems will negatively affect low-priority water demands, since these will be reduced in favor of high-priority ones; an example is the reduction of agricultural water allocations in favor of urban ones. The evaluation of adaptation measures is therefore important for better water resources management. An important tool to face this challenge is the economic valuation of the impact on water demands within a water resources system. In agriculture, this valuation is usually performed through the evaluation of water productivity. The water productivity evaluation requires detailed information on the different crops, such as the applied technology, the management of agricultural inputs, and the water availability. This is a restriction for an evaluation at basin scale due to the difficulty of gathering this level of detailed information. Besides, only the water availability is usually taken into account, and not the timing with which the water is distributed (i.e., water resources reliability). Water resources reliability is one of the most important variables in water resources management. This research proposes a methodology to determine agricultural water productivity at basin scale, using as variables the crop information, the crop prices, the water resources availability, and the water resources reliability. This methodology would allow general water resources adaptation measures to be identified, providing the basis for further detailed studies in critical regions.
NASA Technical Reports Server (NTRS)
Smith, R. M.
1991-01-01
Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the behavior of the system with a continuous-time Markov chain, where a reward rate is associated with each state. In a reliability/availability model, up states may have a reward rate of 1 and down states a reward rate of zero associated with them. In a queueing model, the number of jobs of a certain type in a given state may be the reward rate attached to that state. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Expected steady-state reward rate and expected instantaneous reward rate are clearly useful measures of the Markov reward model. More generally, the distribution of accumulated reward or time-averaged reward over a finite time interval may be determined from the solution of the Markov reward model. This information is of great practical significance in situations where the workload can be well characterized (deterministically, or by continuous functions, e.g., distributions). The design process in the development of a computer system is an expensive and long-term endeavor. For aerospace applications the reliability of the computer system is essential, as is the ability to complete critical workloads in a well defined real time interval. Consequently, effective modeling of such systems must take into account both performance and reliability. This fact motivates our use of Markov reward models to aid in the development and evaluation of fault tolerant computer systems.
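To make the reward-rate idea concrete, the following minimal sketch (not from the report) builds a small continuous-time Markov chain for a hypothetical two-unit repairable system, attaches reward rates of 1 to the up states and 0 to the down state, and computes the expected steady-state reward rate, which here is the steady-state availability; the failure and repair rates are assumed for illustration.

import numpy as np

lam, mu = 1e-3, 1e-1          # assumed failure and repair rates (per hour)
# States: 0 = both units up, 1 = one unit down, 2 = both units down
Q = np.array([
    [-2 * lam,      2 * lam,  0.0],
    [      mu, -(mu + lam),   lam],
    [     0.0,           mu,  -mu],
])
reward = np.array([1.0, 1.0, 0.0])   # system is up unless both units are down

# Steady-state probabilities pi solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(Q))])
b = np.append(np.zeros(len(Q)), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady-state probabilities:", pi)
print("expected steady-state reward rate (availability):", pi @ reward)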
NASA Astrophysics Data System (ADS)
Aguilar, Mariela C.; Gonzalez, Alex; Rowaan, Cornelis; De Freitas, Carolina; Rosa, Potyra R.; Alawa, Karam; Lam, Byron L.; Parel, Jean-Marie A.
2016-03-01
As there is no clinically available instrument to systematically and reliably determine the photosensitivity thresholds of patients with dry eyes, blepharospasms, migraines, traumatic brain injuries, and genetic disorders such as achromatopsia, retinitis pigmentosa and other retinal dysfunctions, a computer-controlled optoelectronic system was designed. The BPEI Photosensitivity System provides light stimuli emitted from a concave bi-cupola array of 210 white LEDs, with intensity varying from 1 to 32,000 lux. The system can utilize either a normal or an enhanced testing mode for subjects with low light tolerance. The automated instrument adjusts the intensity of each light stimulus. The subject is instructed to indicate discomfort by pressing a hand-held button. Reliability of the responses is tracked during the test. The photosensitivity threshold is then calculated after 10 response reversals. In a preliminary study, we demonstrated that subjects suffering from achromatopsia had lower photosensitivity thresholds than normal subjects. Hence, the system can safely and reliably determine the photosensitivity thresholds of healthy and light-sensitive subjects by detecting and quantifying the individual differences. Future studies will be performed with this system to determine the photosensitivity threshold differences between normal subjects and subjects suffering from other conditions that affect light sensitivity.
A Synthetic Vision Preliminary Integrated Safety Analysis
NASA Technical Reports Server (NTRS)
Hemm, Robert; Houser, Scott
2001-01-01
This report documents efforts to analyze a sample of aviation safety programs, using the LMI-developed integrated safety analysis tool to determine the change in system risk resulting from Aviation Safety Program (AvSP) technology implementation. Specifically, we have worked to modify existing system safety tools to address the safety impact of synthetic vision (SV) technology. Safety metrics include reliability, availability, and resultant hazard. This analysis of SV technology is intended to be part of a larger effort to develop a model that is capable of "providing further support to the product design and development team as additional information becomes available". The reliability analysis portion of the effort is complete and is fully documented in this report. The simulation analysis is still underway; it will be documented in a subsequent report. The specific goal of this effort is to apply the integrated safety analysis to SV technology. This report also contains a brief discussion of data necessary to expand the human performance capability of the model, as well as a discussion of human behavior and its implications for system risk assessment in this modeling environment.
75 FR 62893 - Draft Regulatory Guide: Issuance, Availability
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-13
... for using portland cement grout to protect prestressing steel from corrosion. The prestressing tendon system of a prestressed concrete containment structure is a principal strength element of the structure... of the structure depends on the functional reliability of the structure's principal strength elements...
State-of-the-practice in freight data : a review of available freight data in the U.S.
DOT National Transportation Integrated Search
2004-02-01
State and regional transportation planning agencies are increasingly recognizing the need for : policies and programs addressing freight issues to ensure an efficient and reliable freight transportation : system. A major challenge, however, remains t...
System-reliability studies for wave-energy generation
NASA Astrophysics Data System (ADS)
Dawson, J. M.; Din, S.; Mytton, M. G.; Shore, N. L.; Stansfield, H. B.
1980-06-01
A study is reported that is being undertaken in the United Kingdom to determine means of developing the potential of the large wave-energy resource around the coast, in particular the resource to the west, facing the Atlantic. It is shown that derivation of the expected mean annual energy requires knowledge not only of the wave climates, the conversion efficiency characteristics of the proposed devices, and the power transmission system, but also of factors reflecting the overall availability. Attention is given to a simplified approach to quantifying reliability for each stage of the process. An appropriate method of analysis is established and a summary of the results obtained is given.
Limitations of Reliability for Long-Endurance Human Spaceflight
NASA Technical Reports Server (NTRS)
Owens, Andrew C.; de Weck, Olivier L.
2016-01-01
Long-endurance human spaceflight - such as missions to Mars or its moons - will present a never-before-seen maintenance logistics challenge. Crews will be in space for longer and be farther away from Earth than ever before. Resupply and abort options will be heavily constrained, and will have timescales much longer than current and past experience. Spare parts and/or redundant systems will have to be included to reduce risk. However, the high cost of transportation means that this risk reduction must be achieved while also minimizing mass. The concept of increasing system and component reliability is commonly discussed as a means to reduce risk and mass by reducing the probability that components will fail during a mission. While increased reliability can reduce maintenance logistics mass requirements, the rate of mass reduction decreases over time. In addition, reliability growth requires increased test time and cost. This paper assesses trends in test time requirements, cost, and maintenance logistics mass savings as a function of increase in Mean Time Between Failures (MTBF) for some or all of the components in a system. In general, reliability growth results in superlinear growth in test time requirements, exponential growth in cost, and sublinear benefits (in terms of logistics mass saved). These trends indicate that it is unlikely that reliability growth alone will be a cost-effective approach to maintenance logistics mass reduction and risk mitigation for long-endurance missions. This paper discusses these trends as well as other options to reduce logistics mass such as direct reduction of part mass, commonality, or In-Space Manufacturing (ISM). Overall, it is likely that some combination of all available options - including reliability growth - will be required to reduce mass and mitigate risk for future deep space missions.
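The sublinear logistics benefit can be illustrated with a simple spares calculation (an assumption-laden sketch, not the paper's model): if component failures are treated as a Poisson process, the spares needed to cover a mission at a given confidence level fall off more slowly than the MTBF grows. The mission duration, MTBF values, and confidence level below are hypothetical.

from scipy.stats import poisson

mission_hours = 3 * 365 * 24          # assumed ~3-year Mars-class mission
confidence = 0.99

for mtbf in [5_000, 10_000, 20_000, 40_000, 80_000]:
    expected_failures = mission_hours / mtbf          # Poisson mean over the mission
    spares = int(poisson.ppf(confidence, expected_failures))  # spares for the target confidence
    print(f"MTBF {mtbf:>6} h: expected failures {expected_failures:5.2f}, "
          f"spares for {confidence:.0%} confidence: {spares}")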
High-reliability gas-turbine combined-cycle development program: Phase II, Volume 3. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hecht, K.G.; Sanderson, R.A.; Smith, M.J.
This three-volume report presents the results of Phase II of the multiphase EPRI-sponsored High-Reliability Gas Turbine Combined-Cycle Development Program, whose goal is to achieve a highly reliable gas turbine combined-cycle power plant, available by the mid-1980s, which would be an economically attractive baseload generation alternative for the electric utility industry. The Phase II program objective was to prepare the preliminary design of this power plant. The power plant was addressed in three areas: (1) the gas turbine, (2) the gas turbine ancillaries, and (3) the balance of plant including the steam turbine generator. To achieve the program goals, a gas turbine was incorporated which combined proven reliability characteristics with improved performance features. This gas turbine, designated the V84.3, is the result of a cooperative effort between Kraftwerk Union AG and United Technologies Corporation. Gas turbines of similar design operating in Europe under baseload conditions have demonstrated mean times between failures in excess of 40,000 hours. The reliability characteristics of the gas turbine ancillaries and balance-of-plant equipment were improved through system simplification and component redundancy and by selection of components with inherently high reliability. A digital control system was included with logic, communications, sensor redundancy, and manual backup. An independent condition monitoring and diagnostic system was also included. Program results provide the preliminary design of a gas turbine combined-cycle baseload power plant. This power plant has a predicted mean time between failures of nearly twice the 3000-h EPRI goal. The cost of added reliability features is offset by improved performance, which results in a comparable specific cost and an 8% lower cost of electricity compared to present market offerings.
Maximizing Statistical Power When Verifying Probabilistic Forecasts of Hydrometeorological Events
NASA Astrophysics Data System (ADS)
DeChant, C. M.; Moradkhani, H.
2014-12-01
Hydrometeorological events (e.g., floods, droughts, precipitation) are increasingly being forecasted probabilistically, owing to the uncertainties in the underlying causes of the phenomenon. In these forecasts, the probability of the event, over some lead time, is estimated based on some model simulations or predictive indicators. By issuing probabilistic forecasts, agencies may communicate the uncertainty in the event occurring. Assuming that the assigned probability of the event is correct, which is referred to as a reliable forecast, the end user may perform some risk management based on the potential damages resulting from the event. Alternatively, an unreliable forecast may give false impressions of the actual risk, leading to improper decision making when protecting resources from extreme events. Due to this requisite for reliable forecasts to perform effective risk management, this study takes a renewed look at reliability assessment in event forecasts. Illustrative experiments will be presented, showing deficiencies in the commonly available approaches (Brier Score, Reliability Diagram). Overall, it is shown that the conventional reliability assessment techniques do not maximize the ability to distinguish between a reliable and unreliable forecast. In this regard, a theoretical formulation of the probabilistic event forecast verification framework will be presented. From this analysis, hypothesis testing with the Poisson-Binomial distribution is the most exact model available for the verification framework, and therefore maximizes one's ability to distinguish between a reliable and unreliable forecast. Application of this verification system was also examined within a real forecasting case study, highlighting the additional statistical power provided with the use of the Poisson-Binomial distribution.
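As a hedged sketch of the kind of exact test the abstract refers to (not the authors' implementation), the code below builds the Poisson-Binomial distribution of the number of observed events implied by a set of forecast probabilities and returns an exact p-value for the observed count; the forecast probabilities and observed count are made up for illustration.

import numpy as np

def poisson_binomial_pmf(probs):
    """PMF of the number of successes among independent Bernoulli trials with
    possibly different probabilities (dynamic-programming convolution)."""
    pmf = np.array([1.0])
    for p in probs:
        pmf = np.convolve(pmf, [1.0 - p, p])
    return pmf

def reliability_p_value(forecast_probs, observed_count):
    """Exact two-sided p-value: probability of outcomes no more likely than the
    observed event count, under the hypothesis that the forecasts are reliable."""
    pmf = poisson_binomial_pmf(forecast_probs)
    p_obs = pmf[observed_count]
    return pmf[pmf <= p_obs + 1e-12].sum()

# Hypothetical example: 30 forecast probabilities and 12 observed events
rng = np.random.default_rng(1)
probs = rng.uniform(0.05, 0.6, size=30)
print("p-value:", reliability_p_value(probs, observed_count=12))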
Availability of quality vaccines: policies of a non-government organization.
Poore, P
1992-01-01
The availability of vaccines, or any other health service, depends, first, upon the existence of a reliable system of delivery and the effective management of this system to reach the target population and, second, upon the acceptance by parents or guardians of the value of the vaccine in preventing death and disability in young children and their mothers. This system must be fully funded and resourced for the foreseeable future if the service is to be sustainable. Today the major obstacles to effective immunization of young children in developing countries are the inadequate, insecure and unpredictable availability of funds and their management. Unless these problems are addressed and solved, the immunization targets set by the World Health Assembly (WHA) will not be met.
Electric system restructuring and system reliability
NASA Astrophysics Data System (ADS)
Horiuchi, Catherine Miller
In 1996 the California legislature passed AB 1890, explicitly defining economic benefits and detailing specific mechanisms for initiating a partial restructuring of the state's electric system. Critics have since sought re-regulation and proponents have asked for patience as the new institutions and markets take shape. Other states' electric system restructuring activities have been tempered by real and perceived problems in the California model. This study examines the reduced regulatory controls and new constraints introduced in California's limited restructuring model, using utility and regulatory agency records from the 1990s to investigate the effects of new institutions and practices on system reliability for the state's five largest public and private utilities. Logit and negative binomial regressions indicate a negative impact of the California model of restructuring on system reliability as measured by customer interruptions. Time series analysis of outage data could not predict the wholesale power market collapse and the subsequent rolling blackouts in early 2001; inclusion of near-outage reliability disturbances---load shedding and energy emergencies---provided a measure of forewarning. Analysis of system disruptions, generation capacity and demand, and the role of purchased power challenge conventional wisdom on the causes of California's power problems. The quantitative analysis was supplemented by a targeted survey of electric system restructuring participants. Findings suggest each utility and the organization controlling the state's electric grid provided protection from power outages comparable to pre-restructuring operations through 2000; however, this reliability has come at an inflated cost, resulting in reduced system purchases and decreased marginal protection. The historic margin of operating safety has fully eroded, increasing mandatory load shedding and emergency declarations for voluntary and mandatory conservation. Proposed remedies focused on state-funded contracts and government-managed power authorities may not help, as the findings suggest pricing models, market uncertainty, interjurisdictional conflict and an inability to respond to market perturbations are more significant contributors to reduced regional generation availability than the particular contract mechanisms and funding sources used for power purchases.
Dhital, Anup; Bancroft, Jared B; Lachapelle, Gérard
2013-11-07
In natural and urban canyon environments, Global Navigation Satellite System (GNSS) signals suffer from various challenges such as signal multipath, limited or lack of signal availability and poor geometry. Inertial sensors are often employed to improve the solution continuity under poor GNSS signal quality and availability conditions. Various fault detection schemes have been proposed in the literature to detect and remove biased GNSS measurements to obtain a more reliable navigation solution. However, many of these methods are found to be sub-optimal and often lead to unavailability of reliability measures, mostly because of the improper characterization of the measurement errors. A robust filtering architecture is thus proposed which assumes a heavy-tailed distribution for the measurement errors. Moreover, the proposed filter is capable of adapting to the changing GNSS signal conditions such as when moving from open sky conditions to deep canyons. Results obtained by processing data collected in various GNSS challenged environments show that the proposed scheme provides a robust navigation solution without having to excessively reject usable measurements. The tests reported herein show improvements of nearly 15% and 80% for position accuracy and reliability, respectively, when applying the above approach.
Dhital, Anup; Bancroft, Jared B.; Lachapelle, Gérard
2013-01-01
In natural and urban canyon environments, Global Navigation Satellite System (GNSS) signals suffer from various challenges such as signal multipath, limited or lack of signal availability and poor geometry. Inertial sensors are often employed to improve the solution continuity under poor GNSS signal quality and availability conditions. Various fault detection schemes have been proposed in the literature to detect and remove biased GNSS measurements to obtain a more reliable navigation solution. However, many of these methods are found to be sub-optimal and often lead to unavailability of reliability measures, mostly because of the improper characterization of the measurement errors. A robust filtering architecture is thus proposed which assumes a heavy-tailed distribution for the measurement errors. Moreover, the proposed filter is capable of adapting to the changing GNSS signal conditions such as when moving from open sky conditions to deep canyons. Results obtained by processing data collected in various GNSS challenged environments show that the proposed scheme provides a robust navigation solution without having to excessively reject usable measurements. The tests reported herein show improvements of nearly 15% and 80% for position accuracy and reliability, respectively, when applying the above approach. PMID:24212120
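The two abstracts above describe a robust filter built around a heavy-tailed measurement-error model; as a loose illustration of that idea (not the authors' algorithm), the scalar Kalman measurement update below down-weights large standardized innovations Huber-style rather than hard-rejecting them. All states, variances, and the tuning constant are hypothetical.

import math

def robust_kalman_update(x, P, z, H, R, k=1.345):
    """One scalar Kalman measurement update with a Huber-style reweighting of the
    measurement noise: large standardized innovations (as expected under a
    heavy-tailed error distribution) have their effective R inflated instead of
    being either fully trusted or hard-rejected."""
    v = z - H * x                      # innovation
    S = H * P * H + R                  # innovation variance
    t = abs(v) / math.sqrt(S)          # standardized innovation
    w = 1.0 if t <= k else k / t       # Huber weight in (0, 1]
    R_eff = R / w                      # down-weight suspect measurement
    K = P * H / (H * P * H + R_eff)    # Kalman gain with inflated noise
    return x + K * v, (1.0 - K * H) * P

# Hypothetical example: prior state 0 with variance 4 m^2, a 25 m outlier, nominal R = 9 m^2
print(robust_kalman_update(x=0.0, P=4.0, z=25.0, H=1.0, R=9.0))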
Data Used in Quantified Reliability Models
NASA Technical Reports Server (NTRS)
DeMott, Diana; Kleinhammer, Roger K.; Kahn, C. J.
2014-01-01
Data is the crux of developing quantitative risk and reliability models; without the data there is no quantification. The means to find and identify reliability data or failure numbers with which to quantify fault tree models during conceptual and design phases is often the quagmire that precludes early decision makers' consideration of potential risk drivers that will influence the design. The analyst tasked with addressing system or product reliability depends on the availability of data. But where does that data come from, and what does it really apply to? Commercial industries, government agencies, and other international sources might have data available similar to what you are looking for. In general, internal and external technical reports and data based on similar and dissimilar equipment are often the first and only places checked. A common philosophy is "I have a number - that is good enough". But is it? Have you ever considered the difference in reported data from various federal datasets and technical reports when compared to similar sources from national and/or international datasets? Just how well does your data compare? Understanding how the reported data was derived, and interpreting the information and details associated with the data, is as important as the data itself.
Reliability and validity of the Microsoft Kinect for evaluating static foot posture
2013-01-01
Background The evaluation of foot posture in a clinical setting is useful to screen for potential injury, however disagreement remains as to which method has the greatest clinical utility. An inexpensive and widely available imaging system, the Microsoft Kinect™, may possess the characteristics to objectively evaluate static foot posture in a clinical setting with high accuracy. The aim of this study was to assess the intra-rater reliability and validity of this system for assessing static foot posture. Methods Three measures were used to assess static foot posture; traditional visual observation using the Foot Posture Index (FPI), a 3D motion analysis (3DMA) system and software designed to collect and analyse image and depth data from the Kinect. Spearman’s rho was used to assess intra-rater reliability and concurrent validity of the Kinect to evaluate foot posture, and a linear regression was used to examine the ability of the Kinect to predict total visual FPI score. Results The Kinect demonstrated moderate to good intra-rater reliability for four FPI items of foot posture (ρ = 0.62 to 0.78) and moderate to good correlations with the 3DMA system for four items of foot posture (ρ = 0.51 to 0.85). In contrast, intra-rater reliability of visual FPI items was poor to moderate (ρ = 0.17 to 0.63), and correlations with the Kinect and 3DMA systems were poor (absolute ρ = 0.01 to 0.44). Kinect FPI items with moderate to good reliability predicted 61% of the variance in total visual FPI score. Conclusions The majority of the foot posture items derived using the Kinect were more reliable than the traditional visual assessment of FPI, and were valid when compared to a 3DMA system. Individual foot posture items recorded using the Kinect were also shown to predict a moderate degree of variance in the total visual FPI score. Combined, these results support the future potential of the Kinect to accurately evaluate static foot posture in a clinical setting. PMID:23566934
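The study's reliability and validity statistics are Spearman rank correlations; a minimal example of computing one (with made-up FPI item scores, not the study's data) might look like this:

from scipy.stats import spearmanr

# Hypothetical FPI item scores (-2 to +2) for the same feet rated with a Kinect-based
# method and with a 3D motion analysis system; rho quantifies concurrent validity.
kinect_scores = [1, 0, -1, 2, 1, 0, -2, 1, 0, 2]
mocap_scores  = [1, 1, -1, 2, 0, 0, -1, 1, 0, 2]

rho, p_value = spearmanr(kinect_scores, mocap_scores)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3f}")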
NASA Technical Reports Server (NTRS)
Phillips, D. T.; Manseur, B.; Foster, J. W.
1982-01-01
Alternate definitions of system failure create complex analyses for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of such problems, in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
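In the spirit of the simulation approach described above (a sketch under assumed rates, not the GRASP code), the following Monte Carlo estimates point availability at a mission time for a hypothetical 1-out-of-2 redundant pair whose failure and repair events are both exponentially distributed:

import random

def component_up_at(t_end, mtbf, mttr, rng):
    """Simulate one component's alternating up/down (exponential) intervals and
    return whether it is up at time t_end."""
    t, up = 0.0, True
    while t < t_end:
        t += rng.expovariate(1.0 / (mtbf if up else mttr))
        if t < t_end:
            up = not up
    return up

def point_availability(t_end, mtbf, mttr, n_components=2, k_required=1, trials=20_000, seed=0):
    """Monte Carlo estimate of the probability that at least k_required of
    n_components identical, independently repaired components are up at t_end."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        ups = sum(component_up_at(t_end, mtbf, mttr, rng) for _ in range(n_components))
        ok += ups >= k_required
    return ok / trials

# Hypothetical 1-out-of-2 system: MTBF 1000 h, MTTR 20 h, evaluated at 5000 h
print("estimated point availability:", point_availability(5000.0, 1000.0, 20.0))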
Constructing the "Best" Reliability Data for the Job
NASA Technical Reports Server (NTRS)
DeMott, D. L.; Kleinhammer, R. K.
2014-01-01
Modern business and technical decisions are based on the results of analyses. When considering assessments using "reliability data", the concern is how long a system will continue to operate as designed. Generally, the results are only as good as the data used. Ideally, a large set of pass/fail tests or observations to estimate the probability of failure of the item under test would produce the best data. However, this is a costly endeavor if used for every analysis and design. Developing specific data is costly and time consuming. Instead, analysts rely on available data to assess reliability. Finding data relevant to the specific use and environment for any project is difficult, if not impossible. Instead, we attempt to develop the "best" or composite analog data to support our assessments. One method used incorporates processes for reviewing existing data sources and identifying the available information based on similar equipment, then using that generic data to derive an analog composite. Dissimilarities in equipment descriptions, environment of intended use, quality and even failure modes impact the "best" data incorporated in an analog composite. Once developed, this composite analog data provides a "better" representation of the reliability of the equipment or component and can be used to support early risk or reliability trade studies, or analytical models to establish the predicted reliability data points. Data that is more representative of reality and more project specific would provide more accurate analysis, and hopefully a better final decision.
Constructing the Best Reliability Data for the Job
NASA Technical Reports Server (NTRS)
Kleinhammer, R. K.; Kahn, J. C.
2014-01-01
Modern business and technical decisions are based on the results of analyses. When considering assessments using "reliability data", the concern is how long a system will continue to operate as designed. Generally, the results are only as good as the data used. Ideally, a large set of pass/fail tests or observations to estimate the probability of failure of the item under test would produce the best data. However, this is a costly endeavor if used for every analysis and design. Developing specific data is costly and time consuming. Instead, analysts rely on available data to assess reliability. Finding data relevant to the specific use and environment for any project is difficult, if not impossible. Instead, we attempt to develop the "best" or composite analog data to support our assessments. One method used incorporates processes for reviewing existing data sources and identifying the available information based on similar equipment, then using that generic data to derive an analog composite. Dissimilarities in equipment descriptions, environment of intended use, quality and even failure modes impact the "best" data incorporated in an analog composite. Once developed, this composite analog data provides a "better" representation of the reliability of the equipment or component and can be used to support early risk or reliability trade studies, or analytical models to establish the predicted reliability data points. Data that is more representative of reality and more project specific would provide more accurate analysis, and hopefully a better final decision.
Market assessment of photovoltaic power systems for agricultural applications in Nigeria
NASA Technical Reports Server (NTRS)
Staples, D.; Steingass, H.; Nolfi, J.
1981-01-01
The market potential for stand-alone photovoltaic systems in agriculture was studied. Information is presented on technical and economically feasible applications, and assessments of the business, government and financial climate for photovoltaic sales. It is concluded that the market for stand-alone systems will be large because of the availability of capital and the high premium placed on high-reliability, low-maintenance power systems. Various specific applications are described, mostly related to agriculture.
Market assessment of photovoltaic power systems for agricultural applications in Nigeria
NASA Astrophysics Data System (ADS)
Staples, D.; Steingass, H.; Nolfi, J.
1981-10-01
The market potential for stand-alone photovoltaic systems in agriculture was studied. Information is presented on technical and economically feasible applications, and assessments of the business, government and financial climate for photovoltaic sales. It is concluded that the market for stand-alone systems will be large because of the availability of capital and the high premium placed on high-reliability, low-maintenance power systems. Various specific applications are described, mostly related to agriculture.
Sensor validation and fusion for gas turbine vibration monitoring
NASA Astrophysics Data System (ADS)
Yan, Weizhong; Goebel, Kai F.
2003-08-01
Vibration monitoring is an important practice throughout regular operation of gas turbine power systems and, even more so, during characterization tests. Vibration monitoring relies on accurate and reliable sensor readings. To obtain accurate readings, sensors are placed such that the signal is maximized. In the case of characterization tests, strain gauges are placed at the location of vibration modes on blades inside the gas turbine. Due to the prevailing harsh environment, these sensors have a limited life and decaying accuracy, both of which impair vibration assessment. At the same time, bandwidth limitations may restrict data transmission, which in turn limits the number of sensors that can be used for assessment. Knowing the sensor status (normal or faulty), and more importantly, knowing the true vibration level of the system at all times is essential for successful gas turbine vibration monitoring. This paper investigates a dynamic sensor validation and system health reasoning scheme that addresses the issues outlined above by considering only the information required to reliably assess system health status. In particular, if abnormal system health is suspected or if the primary sensor is determined to be faulted, information from available "sibling" sensors is dynamically integrated. A confidence measure expresses the complex interactions of sensor health and system health, their reliabilities, any conflicting information, and the resulting health assessment. Effectiveness of the scheme in achieving accurate and reliable vibration evaluation is then demonstrated using a combination of simulated data and a small sample of real-world application data, where the vibration of compressor blades during a real-time characterization test of a new gas turbine power system is monitored.
Quantifying the Economic and Grid Reliability Impacts of Improved Wind Power Forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qin; Martinez-Anido, Carlo Brancucci; Wu, Hongyu
Wind power forecasting is an important tool in power system operations to address variability and uncertainty. Accurately doing so is important to reducing the occurrence and length of curtailment, enhancing market efficiency, and improving the operational reliability of the bulk power system. This research quantifies the value of wind power forecasting improvements in the IEEE 118-bus test system as modified to emulate the generation mixes of Midcontinent, California, and New England independent system operator balancing authority areas. To measure the economic value, a commercially available production cost modeling tool was used to simulate the multi-timescale unit commitment (UC) and economic dispatch process for calculating the cost savings and curtailment reductions. To measure the reliability improvements, an in-house tool, FESTIV, was used to calculate the system's area control error and the North American Electric Reliability Corporation Control Performance Standard 2. The approach allowed scientific reproducibility of results and cross-validation of the tools. A total of 270 scenarios were evaluated to accommodate the variation of three factors: generation mix, wind penetration level, and wind forecasting improvements. The modified IEEE 118-bus systems utilized 1 year of data at multiple timescales, including the day-ahead UC, 4-hour-ahead UC, and 5-min real-time dispatch. The value of improved wind power forecasting was found to be strongly tied to the conventional generation mix, existence of energy storage devices, and the penetration level of wind energy. The simulation results demonstrate that wind power forecasting brings clear benefits to power system operations.
NASA Astrophysics Data System (ADS)
Newmark, R. L.; Cohen, S. M.; Averyt, K.; Macknick, J.; Meldrum, J.; Sullivan, P.
2014-12-01
Climate change has the potential to exacerbate reliability concerns for the power sector through changes in water availability and air temperatures. The power sector is responsible for 41% of U.S. freshwater withdrawals, primarily for power plant cooling needs, and any changes in the water available for the power sector, given increasing competition among water users, could affect decisions about new power plant builds and reliable operations for existing generators. Similarly, increases in air temperatures can reduce power plant efficiencies, which in turn increases fuel consumption as well as water withdrawal and consumption rates. This analysis describes an initial link between climate, water, and electricity systems using the National Renewable Energy Laboratory's (NREL) Regional Energy Deployment System (ReEDS) electricity system capacity expansion model. Average surface water runoff projections from Coupled Model Intercomparison Project 5 (CMIP5) data are applied to surface water available to generating capacity in ReEDS, and electric sector growth is compared with and without climate-influenced water availability for the 134 electricity balancing regions in the ReEDS model. In addition, air temperature changes are considered for their impacts on electricity load, transmission capacity, and power plant efficiencies and water use rates. Mean climate projections have only a small impact on national or regional capacity growth and water use because most regions have sufficient unappropriated or previously retired water access to offset climate impacts. Climate impacts are notable in southwestern states, which experience reduced water access purchases and a greater share of water acquired from wastewater and other higher-cost water resources. The electric sector climate impacts demonstrated herein establish a methodology to be later exercised with more extreme climate scenarios and a more rigorous representation of legal and physical water availability.
Studies on the interaction of water with three granular biopesticide formulations
USDA-ARS?s Scientific Manuscript database
Two obstacles for biopesticide commercialization, long shelf-life and reliable efficacy, are both affected by moisture availability or more specifically, water activity. In the present study, the moisture sorption isotherms of three clay-based biopesticide delivery systems denoted as TRE-G, Pesta, ...
OUTDOOR BIOMASS GASIFIER HYDRONIC HEATER (OBGHH) - PHASE I
America needs a clean, affordable, reliable and sustainable product or system to obtain heat for residences in cold climates using renewable, carbon-neutral, plentiful, low-cost biomass fuels of diverse types found close to the location of usage. The available biomass could be...
Wireless Sensors Network (Sensornet)
NASA Technical Reports Server (NTRS)
Perotti, J.
2003-01-01
The Wireless Sensor Network System presented in this paper provides a flexible reconfigurable architecture that could be used in a broad range of applications. It also provides a sensor network with increased reliability, decreased maintenance costs, and assured data availability by autonomously and automatically reconfiguring to overcome communication interference.
Product component genealogy modeling and field-failure prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Caleb; Hong, Yili; Meeker, William Q.
Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.
Product component genealogy modeling and field-failure prediction
King, Caleb; Hong, Yili; Meeker, William Q.
2016-04-13
Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.
ACARA - AVAILABILITY, COST AND RESOURCE ALLOCATION
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
ACARA is a program for analyzing availability, lifecycle cost, and resource scheduling. It uses a statistical Monte Carlo method to simulate a system's capacity states as well as component failure and repair. Component failures are modelled using a combination of exponential and Weibull probability distributions. ACARA schedules component replacement to achieve optimum system performance. The scheduling will comply with any constraints on component production, resupply vehicle capacity, on-site spares, or crew manpower and equipment. ACARA is capable of many types of analyses and trade studies because of its integrated approach. It characterizes the system performance in terms of both state availability and equivalent availability (a weighted average of state availability). It can determine the probability of exceeding a capacity state to assess reliability and loss of load probability. It can also evaluate the effect of resource constraints on system availability and lifecycle cost. ACARA interprets the results of a simulation and displays tables and charts for: (1) performance, i.e., availability and reliability of capacity states, (2) frequency of failure and repair, (3) lifecycle cost, including hardware, transportation, and maintenance, and (4) usage of available resources, including mass, volume, and maintenance man-hours. ACARA incorporates a user-friendly, menu-driven interface with full screen data entry. It provides a file management system to store and retrieve input and output datasets for system simulation scenarios. ACARA is written in APL2 using the APL2 interpreter for IBM PC compatible systems running MS-DOS. Hardware requirements for the APL2 system include 640K of RAM, 2Mb of extended memory, and an 80386 or 80486 processor with an 80x87 math co-processor. A dot matrix printer is required if the user wishes to print a graph from a results table. A sample MS-DOS executable is provided on the distribution medium. The executable contains licensed material from the APL2 for the IBM PC product which is program property of IBM; Copyright IBM Corporation 1988 - All rights reserved. It is distributed with IBM's permission. The standard distribution medium for this program is a set of three 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. ACARA was developed in 1992.
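As a small illustration of the two ACARA performance measures named above, the sketch below computes equivalent availability (the capacity-weighted average of state availabilities) and the probability of exceeding a capacity threshold from a set of assumed capacity-state probabilities; the numbers are hypothetical, not ACARA output.

# Hypothetical capacity states and their simulated state availabilities (sum to 1).
capacity_kw = [100.0, 50.0, 0.0]          # full power, degraded, failed
state_prob  = [0.92, 0.06, 0.02]
full_capacity = capacity_kw[0]

equivalent_availability = sum(p * c / full_capacity for p, c in zip(state_prob, capacity_kw))
prob_at_least_half = sum(p for p, c in zip(state_prob, capacity_kw) if c >= 0.5 * full_capacity)

print("equivalent availability:", equivalent_availability)   # 0.95 for these numbers
print("P(capacity >= 50%):     ", prob_at_least_half)         # 0.98 for these numbers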
Izumi, Betty T; Findholt, Nancy E; Pickus, Hayley A; Nguyen, Thuan; Cuneo, Monica K
2014-06-01
Food stores have gained attention as potential intervention targets for improving children's eating habits. There is a need for valid and reliable instruments to evaluate changes in food store snack and beverage availability secondary to intervention. The aim of this study was to develop a valid, reliable, and resource-efficient instrument to evaluate the healthfulness of food store environments faced by children. The SNACZ food store checklist was developed to assess availability of healthier alternatives to the energy-dense snacks and beverages commonly consumed by children. After pretesting, two trained observers independently assessed the availability of 48 snack and beverage items in 50 food stores located near elementary and middle schools in Portland, Oregon, over a 2-week period in summer 2012. Inter-rater reliability was calculated using the kappa statistic. Overall, the instrument had mostly high inter-rater reliability. Seventy-three percent of items assessed had almost perfect or substantial reliability. Two items had moderate reliability (0.41-0.60), and no items had a reliability score less than 0.41. Eleven items occurred too infrequently to generate a kappa score. The SNACZ food store checklist is a first step toward developing a valid and reliable tool to evaluate the healthfulness of food store environments faced by children. The tool can be used to compare availability of healthier snack and beverage alternatives across communities and measure change secondary to intervention. As a wider variety of healthier snack and beverage alternatives become available in food stores, the checklist should be updated.
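Inter-rater reliability in the checklist study is the kappa statistic; a minimal example of computing it for two observers' presence/absence codings (hypothetical data, not the study's) is shown below.

from sklearn.metrics import cohen_kappa_score

# Hypothetical codings (1 = item available) from two observers visiting the same stores;
# kappa corrects raw agreement for the agreement expected by chance.
observer_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
observer_b = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1]

print("kappa:", cohen_kappa_score(observer_a, observer_b))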
NASA Astrophysics Data System (ADS)
Karki, Rajesh
Renewable energy application in electric power systems is growing rapidly worldwide due to enhanced public concerns for adverse environmental impacts and escalation in energy costs associated with the use of conventional energy sources. Photovoltaics and wind energy sources are being increasingly recognized as cost effective generation sources. A comprehensive evaluation of reliability and cost is required to analyze the actual benefits of utilizing these energy sources. The reliability aspects of utilizing renewable energy sources have largely been ignored in the past due to the relatively insignificant contribution of these sources in major power systems, and consequently due to the lack of appropriate techniques. Renewable energy sources have the potential to play a significant role in the electrical energy requirements of small isolated power systems which are primarily supplied by costly diesel fuel. A relatively high renewable energy penetration can significantly reduce the system fuel costs but can also have considerable impact on the system reliability. Small isolated systems routinely plan their generating facilities using deterministic adequacy methods that cannot incorporate the highly erratic behavior of renewable energy sources. The utilization of a single probabilistic risk index has not been generally accepted in small isolated system evaluation despite its use in most large power utilities. Deterministic and probabilistic techniques are combined in this thesis using a system well-being approach to provide useful adequacy indices for small isolated systems that include renewable energy. This thesis presents an evaluation model for small isolated systems containing renewable energy sources by integrating simulation models that generate appropriate atmospheric data, evaluate chronological renewable power outputs and combine total available energy and load to provide useful system indices. A software tool SIPSREL+ has been developed which generates risk, well-being and energy based indices to provide realistic cost/reliability measures of utilizing renewable energy. The concepts presented and the examples illustrated in this thesis will help system planners to decide on appropriate installation sites, the types and mix of different energy generating sources, the optimum operating policies, and the optimum generation expansion plans required to meet increasing load demands in small isolated power systems containing photovoltaic and wind energy sources.
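As a rough illustration of the well-being framework mentioned above (not SIPSREL+), the sketch below enumerates unit-outage states of a hypothetical small isolated system and classifies each as healthy (the load is met even after losing the largest available unit), marginal (the load is met but without that reserve), or at risk (the load is not met); unit capacities, availabilities, and load are assumed.

from itertools import product

# Hypothetical small isolated system: (capacity in kW, availability) per unit, and the load.
units = [(30.0, 0.95), (30.0, 0.95), (20.0, 0.90)]
load = 45.0

p_healthy = p_marginal = p_risk = 0.0
for states in product([1, 0], repeat=len(units)):     # 1 = unit available, 0 = on outage
    prob = 1.0
    for (cap, avail), s in zip(units, states):
        prob *= avail if s else (1.0 - avail)
    available = [cap for (cap, _), s in zip(units, states) if s]
    total = sum(available)
    if total < load:
        p_risk += prob                                 # cannot meet the load
    elif available and total - max(available) >= load:
        p_healthy += prob                              # meets load even losing the largest unit
    else:
        p_marginal += prob                             # meets load, but without that reserve

print(f"healthy {p_healthy:.4f}, marginal {p_marginal:.4f}, at risk {p_risk:.4f}")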
Bae, Sungwoo; Kim, Myungchin
2016-01-01
In order to realize a true WoT environment, a reliable power circuit is required to ensure interconnections among a range of WoT devices. This paper presents research on sensors and their effects on the reliability and response characteristics of power circuits in WoT devices. The presented research can be used in various power circuit applications, such as energy harvesting interfaces, photovoltaic systems, and battery management systems for the WoT devices. As power circuits rely on the feedback from voltage/current sensors, the system performance is likely to be affected by the sensor failure rates, sensor dynamic characteristics, and their interface circuits. This study investigated how the operational availability of the power circuits is affected by the sensor failure rates by performing a quantitative reliability analysis. In the analysis process, this paper also includes the effects of various reconstruction and estimation techniques used in power processing circuits (e.g., energy harvesting circuits and photovoltaic systems). This paper also reports how the transient control performance of power circuits is affected by sensor interface circuits. With the frequency domain stability analysis and circuit simulation, it was verified that the interface circuit dynamics may affect the transient response characteristics of power circuits. The verification results in this paper showed that the reliability and control performance of the power circuits can be affected by the sensor types, fault tolerant approaches against sensor failures, and the response characteristics of the sensor interfaces. The analysis results were also verified by experiments using a power circuit prototype. PMID:27608020
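As a hedged illustration of the kind of quantitative comparison described above (not the paper's analysis), the sketch below compares the steady-state operational availability of a power circuit whose sensor fault shuts the converter down against one with an idealized estimation-based fallback, using the single-unit formula A = MTBF / (MTBF + MTTR) and assumed failure and repair figures.

# Assumed independent repairable elements with availability MTBF / (MTBF + MTTR).
# If a sensor fault shuts the converter down, the sensor is effectively in series with
# the power stage; with a sensorless estimation fallback, a sensor failure no longer
# interrupts operation (an idealized fault-tolerant case).

def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

a_power_stage = availability(200_000, 24)     # assumed converter hardware
a_sensor      = availability(50_000, 24)      # assumed current/voltage sensor

a_without_tolerance = a_power_stage * a_sensor   # series: sensor fault stops the circuit
a_with_tolerance    = a_power_stage              # idealized estimation-based fallback

print(f"availability, sensor fault stops circuit : {a_without_tolerance:.6f}")
print(f"availability, estimation fallback        : {a_with_tolerance:.6f}")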
Effectiveness of glucose monitoring systems modified for the visually impaired.
Bernbaum, M; Albert, S G; Brusca, S; McGinnis, J; Miller, D; Hoffmann, J W; Mooradian, A D
1993-10-01
To compare three glucose meters modified for use by individuals with diabetes and visual impairment regarding accuracy, precision, and clinical reliability. Ten subjects with diabetes and visual impairment performed self-monitoring of blood glucose using each of three commercially available blood glucose meters modified for visually impaired users (the AccuChek Freedom [Boehringer Mannheim, Indianapolis, IN], the Diascan SVM [Home Diagnostics, Eatontown, NJ], and the One Touch [Lifescan, Milpitas, CA]). The meters were independently evaluated by a laboratory technologist for precision and accuracy determinations. Only two meters, the AccuChek and the One Touch, were acceptable with regard to laboratory precision (coefficient of variation < 10%). The AccuChek and the One Touch did not differ significantly with regard to laboratory estimates of accuracy. A great discrepancy in clinical reliability was observed between these two meters. The AccuChek maintained a high degree of reliability (y = 0.99X + 0.44, r = 0.97, P = 0.001). The visually impaired subjects were unable to perform reliable testing using the One Touch system because of a lack of appropriate tactile landmarks and auditory signals. In addition to laboratory assessments of glucose meters, monitoring systems designed for the visually impaired must include adequate tactile and audible feedback features to allow for the acquisition and placement of appropriate blood samples.
GPS/Optical/Inertial Integration for 3D Navigation Using Multi-Copter Platforms
NASA Technical Reports Server (NTRS)
Dill, Evan T.; Young, Steven D.; Uijt De Haag, Maarten
2017-01-01
In concert with the continued advancement of a UAS traffic management system (UTM), the proposed uses of autonomous unmanned aerial systems (UAS) have become more prevalent in both the public and private sectors. To facilitate this anticipated growth, a reliable three-dimensional (3D) positioning, navigation, and mapping (PNM) capability will be required to enable operation of these platforms in challenging environments where global navigation satellite systems (GNSS) may not be available continuously, especially when the platform's mission requires maneuvering through different and difficult environments such as outdoor open-sky, outdoor under foliage, outdoor urban, and indoor, and may include transitions between these environments. There may not be a single method to solve the PNM problem for all environments. The research presented in this paper is a subset of a broader research effort, described in [1]. The research is focused on combining data from dissimilar sensor technologies to create an integrated navigation and mapping method that can enable reliable operation in both outdoor and structured indoor environments. The integrated navigation and mapping design utilizes a Global Positioning System (GPS) receiver, an Inertial Measurement Unit (IMU), a monocular digital camera, and three short- to medium-range laser scanners. This paper specifically describes the techniques necessary to effectively integrate the monocular camera data within the established mechanization. To evaluate the developed algorithms, a hexacopter was built, equipped with the discussed sensors, and both hand-carried and flown through representative environments. This paper highlights the effect that the monocular camera has on the aforementioned sensor integration scheme's reliability, accuracy, and availability.
A Novel Concept for the Rapid Deployment of Electric Power Cables. Phase 1.
1987-04-30
cable toward the tactical position that requires power. The approach effectively neutralizes both man-made and naturally occurring deployment...guided system with a reputation for extreme accuracy, it is anticipated that the cable can be delivered to a user located within a 1000 foot range...thus readily available, because it is an effective and reliable weapon system. The system has been upgraded several times which indicates that its
NASA Astrophysics Data System (ADS)
Pasam, Gopi Krishna; Manohar, T. Gowri
2016-09-01
Determination of available transfer capability (ATC) requires the use of experience, intuition, and exact judgment in order to meet several significant aspects of the deregulated environment. Based on these points, this paper proposes two heuristic approaches to compute ATC. The first proposed heuristic algorithm integrates five methods, namely continuation repeated power flow, repeated optimal power flow, radial basis function neural network, back propagation neural network, and adaptive neuro-fuzzy inference system, to obtain ATC. The second proposed heuristic model is used to obtain multiple ATC values. Out of these, a specific ATC value is selected based on a number of social, economic, and deregulated environmental constraints and on specific applications such as optimization, on-line monitoring, and ATC forecasting, an approach referred to as multi-objective decision based optimal ATC. The validity of the results obtained through these proposed methods is rigorously verified on various buses of the IEEE 24-bus reliable test system. The results presented and the conclusions derived in this paper are useful for the planning, operation, and maintenance of reliable power in any power system and for its monitoring in an on-line deregulated environment. In this way, the proposed heuristic methods offer the best possible approach to assessing multi-objective ATC using integrated methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milligan, Michael; Frew, Bethany A.; Bloom, Aaron
This paper discusses challenges that relate to assessing and properly incentivizing the resources necessary to ensure a reliable electricity system with growing penetrations of variable generation (VG). The output of VG (primarily wind and solar generation) varies over time and cannot be predicted precisely. Therefore, the energy from VG is not always guaranteed to be available at times when it is most needed. This means that its contribution towards resource adequacy can be significantly less than the contribution from traditional resources. Variable renewable resources also have near-zero variable costs, and with production-based subsidies they may even have negative offer costs. Because variable costs drive the spot price of energy, this can lead to reduced prices, sales, and therefore revenue for all resources within the energy market. The characteristics of VG can also result in increased price volatility as well as the need for more flexibility in the resource fleet in order to maintain system reliability. We explore both traditional and evolving electricity market designs in the United States that aim to ensure resource adequacy and sufficient revenues to recover costs when those resources are needed for long-term reliability. We also investigate how reliability needs may be evolving and discuss how VG may affect future electricity market designs.
High Reliability and the Evaluation of ATC System Configuration by Communizing Resources
NASA Astrophysics Data System (ADS)
Yamamoto, Masanori
Automatic Train Control (ATC) in railway signalling systems is required to provide high safety and high availability while also reducing unit count, saving energy, and lowering cost. This paper describes a resource-communizing redundancy scheme for the ATC system, in which redundant units are shared as common-use spares, addressing these requirements while keeping safety and availability at the same level as a conventional ATC. The scheme was evaluated on an N+2 redundant system that provides two spares for N common-use units in the transmission division. The safety of the N+2 redundant system was evaluated through hazard analysis using the FTA method, and the safety findings were confirmed by FMEA. The new redundant configuration is estimated to allow a 19% reduction in equipment size and a 36% energy saving.
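A k-out-of-n calculation gives a quick feel for why pooling two spares across N common-use units can beat dedicated duplication; the sketch below uses assumed unit availabilities and is not derived from the paper's FTA/FMEA models.

```python
from math import comb

def k_out_of_n_availability(n_total, n_required, unit_avail):
    """Probability that at least n_required of n_total identical,
    independent units are up (binomial k-out-of-n:G model)."""
    return sum(comb(n_total, k) * unit_avail**k * (1.0 - unit_avail)**(n_total - k)
               for k in range(n_required, n_total + 1))

N, A_UNIT = 8, 0.999          # assumed: 8 transmission units, 99.9% availability each

dedicated_pairs = k_out_of_n_availability(2, 1, A_UNIT) ** N   # dedicated 1+1 per unit
shared_n_plus_2 = k_out_of_n_availability(N + 2, N, A_UNIT)    # N units + 2 shared spares

print(f"N dedicated 1+1 pairs : {dedicated_pairs:.9f}")
print(f"Shared N+2 pool       : {shared_n_plus_2:.9f}")
```

With the assumed numbers, the shared N+2 pool achieves higher availability than N dedicated 1+1 pairs while using fewer units, which is consistent with the downsizing argument.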
NASA Astrophysics Data System (ADS)
Tan, Zhukui; Xie, Baiming; Zhao, Yuanliang; Dou, Jinyue; Yan, Tong; Liu, Bin; Zeng, Ming
2018-06-01
This paper presents a new integrated planning framework for effectively accommodating electric vehicles (EVs) in smart distribution systems (SDS). The proposed method collectively incorporates the various investment options available to the utility, including distributed generation (DG), capacitors, and network reinforcement. Using a back-propagation algorithm combined with cost-benefit analysis, the optimal network upgrade plan and the allocation and sizing of the selected components are determined, with the purpose of minimizing the total system capital and operating costs of DG and EV accommodation. Furthermore, a new iterative reliability test method is proposed. It checks the optimization results by subsequently simulating the reliability level of the planning scheme and modifies the generation reserve margin to guarantee acceptable adequacy levels for each year of the planning horizon. Numerical results based on a 32-bus distribution system verify the effectiveness of the proposed method.
Analysis of key thresholds leading to upstream dependencies in global transboundary water bodies
NASA Astrophysics Data System (ADS)
Munia, Hafsa Ahmed; Guillaume, Joseph; Kummu, Matti; Mirumachi, Naho; Wada, Yoshihide
2017-04-01
Transboundary water bodies supply 60% of global fresh water flow and are home to about one third of the world's population, creating hydrological, social, and economic interdependencies between countries. Trade-offs between water users are delimited by certain thresholds that, when crossed, result in changes in system behavior, often related to undesirable impacts. A wide variety of thresholds are potentially related to water availability and scarcity. Scarcity can occur because of a country's own water use, and it is potentially intensified by upstream water use. In general, increased water scarcity escalates the reliance on shared water resources, which increases interdependencies between riparian states. In this paper the upstream dependencies of global transboundary river basins are examined at the scale of sub-basin areas. We aim to assess how upstream water withdrawals cause changes in the scarcity categories, such that crossing thresholds is interpreted in terms of downstream dependency on upstream water availability. The thresholds are defined for different types of water availability on which a sub-basin relies: reliable local runoff (available even in a dry year), less reliable local water (available only in a wet year), reliable dry-year inflows from possible upstream areas, and less reliable wet-year inflows from upstream. Possible upstream withdrawals reduce available water downstream, influencing the latter two water availability types. Upstream dependencies have then been categorized by comparing a sub-basin's scarcity category across the different water availability types. When population (or water consumption) grows, the sub-basin satisfies its needs using less reliable water. Thus, the factors affecting the type of water availability being used differ not only for each type of dependency category, but also possibly for every sub-basin. Our results show that, in the case of stress (impacts from high use of water), 104 (12%) of 886 sub-basins are dependent on upstream water, while in the case of shortage (impacts from insufficient water availability per person), 79 (9%) of 886 sub-basins are dependent on upstream water. Categorization of the upstream dependency of the sub-basins helps to differentiate between areas where (i) there is currently no dependency on upstream water, (ii) upstream water withdrawals are sufficiently high that they alter the scarcity and dependency status, and (iii) the area is always dependent on upstream water regardless of upstream water withdrawals. Our dependency assessment is expected to considerably support studies and discussions of hydro-political power relations and management practices in transboundary basins.
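To make the threshold idea concrete, the toy sketch below classifies a hypothetical sub-basin twice, once on local runoff alone and once including upstream inflows, and flags a dependency when the scarcity class changes. The per-capita bands follow common Falkenmark-style thresholds; the sub-basin figures are invented, and none of this reproduces the study's actual classification.

```python
# Toy illustration of the threshold idea: a sub-basin's scarcity class is computed
# twice, once with only local runoff and once including upstream inflows; a change
# of class marks an upstream dependency.  Thresholds (m3/person/yr) follow common
# Falkenmark-style bands; the sub-basin numbers are invented.

SHORTAGE_BANDS = [(500, "absolute scarcity"), (1000, "scarcity"),
                  (1700, "stress"), (float("inf"), "no shortage")]

def shortage_class(water_m3_per_capita):
    for limit, label in SHORTAGE_BANDS:
        if water_m3_per_capita <= limit:
            return label

def upstream_dependency(local_runoff_m3, upstream_inflow_m3, population):
    local_only = shortage_class(local_runoff_m3 / population)
    with_inflow = shortage_class((local_runoff_m3 + upstream_inflow_m3) / population)
    if local_only == with_inflow:
        return f"{with_inflow} (class unchanged: not dependent on upstream water)"
    return (f"{with_inflow} with upstream inflows, but {local_only} on local runoff "
            f"alone: upstream-dependent")

# Example sub-basin: 1.2 km3/yr local runoff, 0.9 km3/yr upstream inflow, 1.5 M people
print(upstream_dependency(1.2e9, 0.9e9, 1.5e6))
```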
Modular photovoltaic stand-alone systems: Phase 1
NASA Technical Reports Server (NTRS)
Naff, G. J.; Marshall, N. A.
1983-01-01
A family of modular stand-alone power systems covering the range in power level from 1 kW to 14 kW was developed. Products within this family were required to be easily adaptable to different environments and applications, and were to be both reliable and cost effective. Additionally, true commonality in hardware was to be exploited, and unnecessary recurrence of design and development costs was to be minimized, thus improving hardware availability. Assurance of compatibility with large production runs was also an underlying program goal. A secondary objective was to compile, evaluate, and determine the economic and technical status of available, and potentially available, technology options associated with the balance of systems (BOS) for stand-alone photovoltaic (PV) power systems. The secondary objective not only directly supported the primary objective but also contributed to the definition and implementation of the BOS cost reduction plan.
SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. 
Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
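The SURE and ASSIST input languages are not reproduced here; the fragment below only illustrates the kind of computation PAWS/STEM perform, evaluating the matrix exponential of a small Markov model, using scipy and invented failure and reconfiguration rates for a simple processor triad.

```python
# Minimal illustration of the computation PAWS/STEM perform: solving p'(t) = p(t) Q
# via the matrix exponential for a tiny reconfigurable triad.  States and rates are
# assumed for the example; the actual tools take SURE-language models, not Python.
import numpy as np
from scipy.linalg import expm

LAM   = 1e-4     # per-processor failure rate (per hour), assumed
DELTA = 3.6e3    # reconfiguration (recovery) rate (per hour), assumed

# States: 0 = 3 good, 1 = 1 faulty (not yet reconfigured), 2 = 2 good (degraded),
#         3 = system failure (absorbing).
Q = np.array([
    [-3*LAM,  3*LAM,           0.0,    0.0],
    [0.0,    -(2*LAM + DELTA), DELTA,  2*LAM],   # second fault before recovery fails the system
    [0.0,     0.0,            -2*LAM,  2*LAM],
    [0.0,     0.0,             0.0,    0.0],
])

p0 = np.array([1.0, 0.0, 0.0, 0.0])
for t in (1.0, 10.0):                            # mission times in hours
    p_t = p0 @ expm(Q * t)
    print(f"t={t:5.1f} h  P(system failure) = {p_t[3]:.3e}")
```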
SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. 
Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
Increased Reliability of Gas Turbine Components by Robust Coatings Manufacturing
NASA Astrophysics Data System (ADS)
Sharma, A.; Dudykevych, T.; Sansom, D.; Subramanian, R.
2017-08-01
The expanding operational windows of advanced gas turbine components demand increasing performance capability from protective coating systems. This demand has led to the development of novel multi-functional, multi-material coating system architectures over recent years. In addition, the increasing dependency of components exposed to extreme environments on protective coatings results in more severe penalties in the case of a coating system failure. This emphasizes that the reliability and consistency of protective coating systems are equally as important as their superior performance. By means of examples, this paper describes the effects of scatter in material properties resulting from manufacturing variations on coating life predictions. A strong foundation in process-property-performance correlations, as well as regular monitoring and control of the coating process, is essential for a robust and well-controlled coating process. Proprietary and/or commercially available diagnostic tools can help in achieving these goals, but their usage in industrial settings is still limited. Various key contributors to process variability are briefly discussed along with the limitations of existing process and product control methods. Other aspects that are important for product reliability and consistency in serial manufacturing, as well as advanced testing methodologies to simplify and enhance product inspection and improve objectivity, are briefly described.
RICOR K527 highly reliable linear cooler: applications and model overview
NASA Astrophysics Data System (ADS)
Riabzev, Sergey; Nachman, Ilan; Levin, Eli; Perach, Adam; Vainshtein, Igor; Gover, Dan
2017-05-01
The K527 linear cooler was developed to meet the requirements of reliability, cooling power, and versatility for a wide range of applications such as hand-held, 24/7, and MWS use. In recent years the cooler has been incorporated in a variety of systems. Some of these systems can be sensitive to vibrations induced by the cooler; to reduce those vibrations significantly, a Tuned Dynamic Absorber (TDA) was added to the cooler. Other systems, such as the MWS type, are not sensitive to vibrations but require a robust cooler to withstand demanding environmental vibration and temperature conditions. Therefore, various mounting interfaces were designed to meet system requirements. The latest K527 version was designed to be integrated with the K508 cold finger, making it compatible with standard detectors that are already designed and available for the K508 cooler type. The reliability of the cooler is a high priority. In order to meet the 30,000 working hour target, special design features were implemented. Eight K527 coolers have passed 19,360 working hours without degradation and are still running according to expectations.
Reliability and Availability Evaluation Program Manual.
1982-11-01
research and development. The manual's purpose was to provide a practical method for making reliability measurements, measurements directly related to... Research, Development, Test and Evaluation. RMA: Reliability, Maintainability and Availability. R&R: Repair and Refurbishment, Repair and Replacement, etc... phenomena such as mechanical wear and chemical deterioration. A number of researchers in the reliability field...
NASA Technical Reports Server (NTRS)
Alexander, R. H. (Principal Investigator); Mcginty, H. K., III
1975-01-01
The author has identified the following significant results. Recommendations resulting from the CARETS evaluation reflect the need to establish a flexible and reliable system for providing more detailed raw and processed land resource information as well as the need to improve the methods of making information available to users.
Contribution potential of glaciers to water availability in different climate regimes
Kaser, Georg; Großhauser, Martin; Marzeion, Ben
2010-01-01
Although reliable figures are often missing, considerable detrimental changes due to shrinking glaciers are universally expected for water availability in river systems under the influence of ongoing global climate change. We estimate the contribution potential of seasonally delayed glacier melt water to total water availability in large river systems. We find that the seasonally delayed glacier contribution is largest where rivers enter seasonally arid regions and negligible in the lowlands of river basins governed by monsoon climates. By comparing monthly glacier melt contributions with population densities in different altitude bands within each river basin, we demonstrate that strong human dependence on glacier melt is not collocated with highest population densities in most basins. PMID:21059938
NASA Technical Reports Server (NTRS)
Shooman, Martin L.
1991-01-01
Many of the most challenging reliability problems of the present decade involve complex distributed systems such as interconnected telephone switching computers, air traffic control centers, aircraft and space vehicles, and local area and wide area computer networks. In addition to the challenge of complexity, modern fault-tolerant computer systems require very high levels of reliability, e.g., avionic computers with MTTF goals of one billion hours. Most analysts find that it is too difficult to model such complex systems without computer-aided design programs. In response to this need, NASA has developed a suite of computer-aided reliability modeling programs beginning with CARE 3 and including a group of new programs such as: HARP, HARP-PC, the Reliability Analysts Workbench (a combination of the model solvers SURE, STEM, and PAWS with the common front-end model ASSIST), and the Fault Tree Compiler. This report studies the HARP program and investigates how well the user can model systems with it. One of the important objectives was to study how user-friendly the program is, e.g., how easy it is to model the system, provide the input information, and interpret the results. The experiences of the author and his graduate students, who used HARP in two graduate courses, are described. Some brief comparisons were made with the ARIES program, which the students also used. Theoretical studies of the modeling techniques used in HARP are also included. Of course, no answer can be more accurate than the fidelity of the model; thus, an appendix is included which discusses modeling accuracy. A broad viewpoint is taken, and all problems which occurred in the use of HARP are discussed. Such problems include: computer system problems, installation manual problems, user manual problems, program inconsistencies, program limitations, confusing notation, long run times, accuracy problems, etc.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Divito, Ben L.; Holloway, C. Michael
1994-01-01
In this paper the design and formal verification of the lower levels of the Reliable Computing Platform (RCP), a fault-tolerant computing system for digital flight control applications, are presented. The RCP uses NMR-style redundancy to mask faults and internal majority voting to flush the effects of transient faults. Two new layers of the RCP hierarchy are introduced: the Minimal Voting refinement (DA_minv) of the Distributed Asynchronous (DA) model and the Local Executive (LE) Model. Both the DA_minv model and the LE model are specified formally and have been verified using the Ehdm verification system. All specifications and proofs are available electronically via the Internet using anonymous FTP or World Wide Web (WWW) access.
Navy applications experience with small wind power systems
NASA Astrophysics Data System (ADS)
Pal, D.
1985-05-01
This report describes the experience gained and lessons learned from the ongoing field evaluations of seven small, 2- to 20-kW wind energy conversion systems (WECS) at Navy installations located in the Southern California desert, on San Nicolas Island in California, and in Kaneohe Bay, Hawaii. The field tests show that the WECS bearings and yaw slip-rings are prone to failure. The failures were attributed to the corrosive environment and poor design practices. Based upon the field tests, it is concluded that a reliable WECS must use a permanent-magnet alternator driven directly by a fixed-pitch wind turbine rotor, without a gearbox or yaw slip-rings. The present state of the art in small WECS technology, including environmental concerns, is reviewed. The report also describes how the technology is advancing to improve reliability and availability for effective use of wind power at Navy bases. The field evaluations of the small WECS are continuing in order to develop operation, maintenance, and reliability data.
USDA-ARS?s Scientific Manuscript database
Traditional plating methods are reliable means of Campylobacter identification from poultry samples, but automated gene-based detection systems now available can reduce the time needed for assays, data collection, and analysis. Bio-Rad and DuPont Qualicon recently introduced Campylobacter assays for their real-time ...
Pressure-Height Properties of Water with Automated Data Collection
ERIC Educational Resources Information Center
Bates, Alan
2013-01-01
Instrumentation available for teachers and students has changed considerably during the last 20 years. The data logger-sensor system has the advantage of taking reliable measurements over time with suitable sample rates. This experiment is not an open-ended investigation but an opportunity to explore the established relationship between the…
Ocean color - Availability of the global data set
NASA Technical Reports Server (NTRS)
Feldman, Gene; Kuring, Norman; Ng, Carolyn; Esaias, Wayne; Mcclain, Chuck; Elrod, Jane; Maynard, Nancy; Endres, Dan
1989-01-01
The use of satellite observations of ocean color to provide reliable estimates of marine phytoplankton biomass on synoptic scales is examined. An overview is given of the Coastal Zone Color Scanner data processing system. The archiving and distribution of ocean color data are discussed, and NASA-sponsored archive sites are listed.
Complexation by dissolved humic substances has an important influence on trace metal behavior in natural systems. Unfortunately, few analytical techniques are available with adequate sensitivity and selectivity to measure free metal ions reliably at the low concent...
Advanced gas turbines breathe new life into vintage reheat units
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-04-01
This article describes the repowering of reheat units with advanced gas turbines. The topics of the article include a project overview, plant configuration including heat recovery steam generators and the plant-wide distributed control system, upgrade of existing steam turbines, gas turbine technology, reliability, availability, maintenance features, and training.
77 FR 64935 - Reliability Standards for Geomagnetic Disturbances
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-24
... Ridge Study") on the effects of electromagnetic pulses on the Bulk-Power System. Available at http... Oak Ridge National Laboratory, Electromagnetic Pulse: Effects on the U.S. Power Grid: Meta-R-319... issued reports assessing the threat to the United States from Electromagnetic Pulse (EMP) attack in 2004...
NASA Technical Reports Server (NTRS)
1992-01-01
The purpose of QASE RT is to enable system analysts and software engineers to evaluate the performance and reliability implications of design alternatives. The program resulted from two Small Business Innovation Research (SBIR) projects. After receiving a description of the system architecture and workload from the user, QASE RT translates the system description into simulation models and executes them. Simulation provides detailed performance evaluation. The results of the evaluations are service and response times, offered load, device utilizations, and functional availability.
NASA Technical Reports Server (NTRS)
Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)
1991-01-01
An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
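A minimal sketch of the aggregation idea, restricted to series/parallel composition and exponential low-level models, is shown below; the component names, rates, and architecture description format are illustrative assumptions rather than the patented generator's actual representation.

```python
# Minimal sketch of aggregating low-level reliability models according to an
# architecture description.  Component names, failure rates, and the series/
# parallel-only composition are illustrative simplifications.
import math

def exp_reliability(failure_rate):
    """Low-level model: exponential time-to-failure."""
    return lambda t: math.exp(-failure_rate * t)

LOW_LEVEL = {
    "cpu":    exp_reliability(2e-5),
    "bus":    exp_reliability(5e-6),
    "sensor": exp_reliability(1e-4),
}

# Architecture description: nested series/parallel structure.
ARCHITECTURE = ("series", "bus",
                ("parallel", "cpu", "cpu"),
                ("parallel", "sensor", "sensor", "sensor"))

def aggregate(node, t):
    """Recursively combine low-level models per the architecture description."""
    if isinstance(node, str):                        # leaf -> low-level model
        return LOW_LEVEL[node](t)
    kind, *children = node
    r = [aggregate(child, t) for child in children]
    if kind == "series":
        return math.prod(r)
    if kind == "parallel":
        return 1.0 - math.prod(1.0 - ri for ri in r)
    raise ValueError(f"unknown structure: {kind}")

print(f"R(1000 h) = {aggregate(ARCHITECTURE, 1000.0):.6f}")
```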
DOE Office of Scientific and Technical Information (OSTI.GOV)
Touati, Said; Chennai, Salim; Souli, Aissa
The increased requirements on supervision, control, and performance in modern power systems make power quality monitoring a common practice for utilities. Large databases are created, and automatic processing of the data is required for fast and effective use of the available information. The aim of the work presented in this paper is the development of tools for the analysis of power quality monitoring data, in particular measurements of voltages and currents at various levels of electrical power distribution. The study is extended to evaluate the reliability of the electrical system in a nuclear plant. Power quality is a measure of how well a system supports reliable operation of its loads. A power disturbance or event can involve voltage, current, or frequency. Power disturbances can originate in consumer power systems, consumer loads, or the utility. The effect of power quality problems is the loss of power supply, leading to severe damage to equipment, so we try to track and improve system reliability. The assessment focuses on the study of the impact of short circuits on the system, harmonic distortion, power factor improvement, and the effects of transient disturbances on the electrical system during motor starting and power system fault conditions. We also focus on the review of the electrical system design against the Nuclear Directorate Safety Assessment principles, including those extended after the Fukushima nuclear accident. The simplified configuration of the required system can be extended from this simple scheme. To carry out these studies, we used a demo version of the ETAP power station software for several simulations. (authors)
Alanis-Lobato, Gregorio
2015-01-01
High-throughput detection of protein interactions has had a major impact on our understanding of the intricate molecular machinery underlying the living cell, and has permitted the construction of very large protein interactomes. The protein networks that are currently available are incomplete, and a significant percentage of their interactions are false positives. Fortunately, the structural properties observed in good quality social or technological networks are also present in biological systems. This has encouraged the development of tools to improve the reliability of protein networks and predict new interactions based merely on the topological characteristics of their components. Since diseases are rarely caused by the malfunction of a single protein, having a more complete and reliable interactome is crucial in order to identify groups of inter-related proteins involved in disease etiology. These system components can then be targeted with minimal collateral damage. In this article, a considerable number of network mining tools are reviewed, together with resources from which reliable protein interactomes can be constructed. In addition to the review, a few representative examples of how molecular and clinical data can be integrated to deepen our understanding of pathogenesis are discussed.
Characterizing the reliability of a bioMEMS-based cantilever sensor
NASA Astrophysics Data System (ADS)
Bhalerao, Kaustubh D.
2004-12-01
The cantilever-based BioMEMS sensor represents one instance from many competing ideas of biosensor technology based on Micro Electro Mechanical Systems. The advancement of BioMEMS from laboratory-scale experiments to applications in the field will require standardization of their components and manufacturing procedures as well as frameworks to evaluate their performance. Reliability, the likelihood with which a system performs its intended task, is a compact mathematical description of its performance. The mathematical and statistical foundation of systems reliability has been applied to the cantilever-based BioMEMS sensor. The sensor is designed to detect one aspect of human ovarian cancer, namely the over-expression of the folate receptor surface protein (FR-alpha). Although the application chosen is clinically motivated, the objective of this study was to demonstrate the underlying systems-based methodology used to design, develop, and evaluate the sensor. The framework development can be readily extended to other BioMEMS-based devices for disease detection and will have an impact on the rapidly growing $30 bn industry. The Unified Modeling Language (UML) is a systems-based framework for the design and development of object-oriented information systems which has potential application for use in systems designed to interact with biological environments. The UML has been used to abstract and describe the application of the biosensor, to identify key components of the biosensor, and to identify the technology needed to link them together in a coherent manner. The use of the framework is also demonstrated in the computation of system reliability from first principles as a function of the structure and materials of the biosensor. The outcomes of applying the systems-based framework to the study are the following: (1) characterizing the cantilever-based MEMS device for disease (cell) detection; (2) development of a novel chemical interface between the analyte and the sensor that provides a degree of selectivity towards the disease; (3) demonstrating the performance and measuring the reliability of the biosensor prototype; and (4) identification of opportunities for technological development in order to further refine the proposed biosensor. Application of the methodology to design, develop, and evaluate the reliability of BioMEMS devices will be beneficial in streamlining the growth of the BioMEMS industry, while providing a decision-support tool for comparing and adopting suitable technologies from available competing options.
Taheriyoun, Masoud; Moradinejad, Saber
2015-01-01
The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are the variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment failures, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the reliability problem was studied for the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator mistakes, physical damage, and design problems. The analytical methods are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves the insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
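The toy fault tree below, with invented basic events and probabilities rather than the study's data, shows how a top-event probability can be approximated from minimal cut sets and cross-checked by Monte Carlo simulation.

```python
# Toy fault tree for "effluent BOD limit violated": the top event occurs if the
# aeration subsystem fails (blower OR its power feed) AND either an operator error
# or a clarifier fault is also present.  Events and probabilities are invented,
# not figures from the Tehran West Town study.
import random

P = {"blower": 0.02, "power": 0.01, "operator": 0.05, "clarifier": 0.03}

def top_event(x):
    return (x["blower"] or x["power"]) and (x["operator"] or x["clarifier"])

# Minimal cut sets of this tree: {blower,operator}, {blower,clarifier},
# {power,operator}, {power,clarifier}.  Rare-event (first-order) approximation:
cut_sets = [("blower", "operator"), ("blower", "clarifier"),
            ("power", "operator"), ("power", "clarifier")]
p_cut_sets = sum(P[a] * P[b] for a, b in cut_sets)

# Monte Carlo estimate of the same top-event probability.
N = 200_000
hits = sum(top_event({e: random.random() < p for e, p in P.items()}) for _ in range(N))

print(f"minimal-cut-set approximation: {p_cut_sets:.5f}")
print(f"Monte Carlo estimate:          {hits / N:.5f}")
```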
Space Station man-machine automation trade-off analysis
NASA Technical Reports Server (NTRS)
Zimmerman, W. F.; Bard, J.; Feinberg, A.
1985-01-01
The man-machine automation tradeoff methodology presented here is one of four research tasks comprising the Autonomous Spacecraft System Technology (ASST) project. ASST was established to identify and study system-level design problems for autonomous spacecraft. Using the Space Station as an example spacecraft system requiring a certain level of autonomous control, a system-level, man-machine automation tradeoff methodology is presented that: (1) optimizes man-machine mixes for different ground and on-orbit crew functions subject to cost, safety, weight, power, and reliability constraints, and (2) plots the best incorporation plan for new, emerging technologies by weighing cost, relative availability, reliability, safety, importance to out-year missions, and ease of retrofit. Although the methodology takes a fairly straightforward approach to valuing human productivity, it is still sensitive to the important subtleties associated with designing a well-integrated man-machine system. These subtleties include considerations such as crew preference to retain certain spacecraft control functions, or valuing human integration/decision capabilities over equivalent hardware/software where appropriate.
MSIX - A general and user-friendly platform for RAM analysis
NASA Astrophysics Data System (ADS)
Pan, Z. J.; Blemel, Peter
The authors present a CAD (computer-aided design) platform supporting RAM (reliability, availability, and maintainability) analysis with efficient system description and alternative evaluation. The design concepts, implementation techniques, and application results are described. This platform is user-friendly because of its graphic environment, drawing facilities, object orientation, self-tutoring, and access to the operating system. The programs' independence and portability make them generally applicable to various analysis tasks.
User's guide to the Reliability Estimation System Testbed (REST)
NASA Technical Reports Server (NTRS)
Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam
1992-01-01
The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.
Spanu, Teresa; Posteraro, Brunella; Fiori, Barbara; D'Inzeo, Tiziana; Campoli, Serena; Ruggeri, Alberto; Tumbarello, Mario; Canu, Giulia; Trecarichi, Enrico Maria; Parisi, Gabriella; Tronci, Mirella; Sanguinetti, Maurizio; Fadda, Giovanni
2012-01-01
We evaluated the reliability of the Bruker Daltonik's MALDI Biotyper system in species-level identification of yeasts directly from blood culture bottles. Identification results were concordant with those of the conventional culture-based method for 95.9% of Candida albicans (187/195) and 86.5% of non-albicans Candida species (128/148). Results were available in 30 min (median), suggesting that this approach is a reliable, time-saving tool for routine identification of Candida species causing bloodstream infection.
Feasibility of interactive biking exercise system for telemanagement in elderly.
Finkelstein, Joseph; Jeong, In Cheol
2013-01-01
Inexpensive cycling equipment is widely available for home exercise; however, its use is hampered by a lack of tools supporting real-time monitoring of cycling exercise in the elderly and coordination with a clinical care team. To address these barriers, we developed a low-cost mobile system aimed at facilitating safe and effective home-based cycling exercise. The system used a miniature wireless 3-axis accelerometer that transmitted the cycling acceleration data to a tablet PC integrated with a multi-component disease management system. An exercise dashboard was presented to the patient, allowing real-time graphical visualization of exercise progress. The system was programmed to alert patients when exercise intensity exceeded the levels recommended by the patient's care providers and to exchange information with a central server. The feasibility of the system was assessed by testing the accuracy of cycling speed monitoring and the reliability of alerts generated by the system. Our results demonstrated high validity of the system for both upper and lower extremity exercise monitoring, as well as reliable data transmission between the home unit and the central server.
NASA Astrophysics Data System (ADS)
Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.
2012-12-01
Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
Recent developments of the NESSUS probabilistic structural analysis computer program
NASA Technical Reports Server (NTRS)
Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.
1992-01-01
The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability and risk analysis of structures considering cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.
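NESSUS couples its probabilistic algorithms with full finite element and boundary element models; the sketch below uses a far simpler closed-form limit state, with invented distributions, just to contrast a sampling estimate of the failure probability with a first-order, mean-value style approximation.

```python
# Toy probabilistic analysis in the spirit of (but far simpler than) NESSUS:
# limit state g = strength - stress, both lognormally distributed.
# The distributions are invented for illustration only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 1_000_000

strength = rng.lognormal(mean=np.log(400.0), sigma=0.08, size=N)   # MPa
stress   = rng.lognormal(mean=np.log(300.0), sigma=0.12, size=N)   # MPa

g = strength - stress                 # failure when g < 0
pf_mc = np.mean(g < 0.0)

# First-order estimate from the first two moments of g (mean-value style).
beta = g.mean() / g.std()
pf_fo = norm.cdf(-beta)

print(f"Monte Carlo  Pf = {pf_mc:.4e}")
print(f"First-order  Pf = {pf_fo:.4e}  (reliability index beta = {beta:.2f})")
```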
NASA Technical Reports Server (NTRS)
1989-01-01
This document establishes electrical, electronic, and electromechanical (EEE) parts management and control requirements for contractors providing and maintaining space flight and mission-essential or critical ground support equipment for NASA space flight programs. Although the text is worded 'the contractor shall,' the requirements are also to be used by NASA Headquarters and field installations for developing program/project parts management and control requirements for in-house and contracted efforts. This document places increased emphasis on parts programs to ensure that reliability and quality are considered through adequate consideration of the selection, control, and application of parts. It is the intent of this document to identify disciplines that can be implemented to obtain reliable parts which meet mission needs. The parts management and control requirements described in this document are to be selectively applied, based on equipment class and mission needs. Individual equipment needs should be evaluated to determine the extent to which each requirement should be implemented on a procurement. Utilization of this document does not preclude the usage of other documents. The entire process of developing and implementing requirements is referred to as 'tailoring' the program for a specific project. Some factors that should be considered in this tailoring process include program phase, equipment category and criticality, equipment complexity, and mission requirements. Parts management and control requirements advocated by this document directly support the concept of 'reliability by design' and are an integral part of system reliability and maintainability. Achieving the required availability and mission success objectives during operation depends on the attention given reliability and maintainability in the design phase. Consequently, it is intended that the requirements described in this document are consistent with those of NASA publications, 'Reliability Program Requirements for Aeronautical and Space System Contractors,' NHB 5300.4(1A-l); 'Maintainability Program Requirements for Space Systems,' NHB 5300.4(1E); and 'Quality Program Provisions for Aeronautical and Space System Contractors,' NHB 5300.4(1B).
NASA Astrophysics Data System (ADS)
Miao, Yongchun; Kang, Rongxue; Chen, Xuefeng
2017-12-01
In recent years, with the gradual extension of reliability research, the study of production system reliability has become a hot topic in various industries. The man-machine-environment system is a complex system composed of human factors, machinery and equipment, and the environment. The reliability of each individual factor must be analyzed before gradually transitioning to the study of three-factor reliability. Meanwhile, the dynamic relationships among man, machine, and environment should be considered in order to establish an effective fuzzy evaluation mechanism that can truly and effectively analyze the reliability of such systems. In this paper, based on systems engineering, fuzzy theory, reliability theory, and theories of human error, environmental impact, and machinery equipment failure, the reliabilities of the human factors, machinery equipment, and environment of a chemical production system were studied by the method of fuzzy evaluation. Finally, the reliability of the man-machine-environment system was calculated as a weighted result, which indicated a reliability value of 86.29 for this chemical production system. Within the given evaluation domain, this shows that the reliability of the integrated man-machine-environment system is in good standing, and effective measures for further improvement were proposed based on the fuzzy calculation results.
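A simplified weighted fuzzy-evaluation sketch is given below to show how a composite reliability score of this kind can be formed; the weights, membership degrees, and grade scale are invented and do not reproduce the paper's membership functions or its 86.29 result.

```python
# Simplified weighted fuzzy-evaluation sketch: each factor gets membership degrees
# over a comment set, a weighted composite is formed, then defuzzified to a score.
# Weights, membership degrees, and the grade scale are assumed for illustration.
import numpy as np

GRADES = np.array([95.0, 80.0, 65.0, 40.0])        # excellent / good / fair / poor

# Membership of each factor in each grade (each row sums to 1).
memberships = {
    "human":       np.array([0.30, 0.45, 0.20, 0.05]),
    "machinery":   np.array([0.50, 0.35, 0.10, 0.05]),
    "environment": np.array([0.40, 0.40, 0.15, 0.05]),
}
weights = {"human": 0.40, "machinery": 0.35, "environment": 0.25}

composite = sum(weights[f] * memberships[f] for f in memberships)
score = float(composite @ GRADES)                  # centroid-style defuzzification

print("composite membership:", np.round(composite, 3))
print(f"overall reliability score: {score:.2f} / 100")
```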
Projecting technology change to improve space technology planning and systems management
NASA Astrophysics Data System (ADS)
Walk, Steven Robert
2011-04-01
Projecting technology performance evolution has been improving over the years. Reliable quantitative forecasting methods have been developed that project the growth, diffusion, and performance of technology in time, including projecting technology substitutions, saturation levels, and performance improvements. These forecasts can be applied at the early stages of space technology planning to better predict available future technology performance, assure the successful selection of technology, and improve technology systems management strategy. Often what is published as a technology forecast is simply scenario planning, usually made by extrapolating current trends into the future, with perhaps some subjective insight added. Typically, the accuracy of such predictions falls rapidly with distance in time. Quantitative technology forecasting (QTF), on the other hand, includes the study of historic data to identify one of or a combination of several recognized universal technology diffusion or substitution patterns. In the same manner that quantitative models of physical phenomena provide excellent predictions of system behavior, so do QTF models provide reliable technological performance trajectories. In practice, a quantitative technology forecast is completed to ascertain with confidence when the projected performance of a technology or system of technologies will occur. Such projections provide reliable time-referenced information when considering cost and performance trade-offs in maintaining, replacing, or migrating a technology, component, or system. This paper introduces various quantitative technology forecasting techniques and illustrates their practical application in space technology and technology systems management.
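One widely used QTF model of technology substitution is the Fisher-Pry logistic curve; the abstract does not state which patterns were fitted, so the sketch below simply shows a logit-linearized least-squares fit to an illustrative market-share history (the data and the choice of model are assumptions).

```python
import numpy as np

# Illustrative market-share history of a substituting technology
# (fraction f of the market it has captured each year).
years = np.array([2000, 2002, 2004, 2006, 2008, 2010, 2012])
share = np.array([0.03, 0.07, 0.15, 0.30, 0.50, 0.70, 0.85])

# Fisher-Pry model: f(t) / (1 - f(t)) = exp(b * (t - t0)), i.e. the
# logit of the share is linear in time.  Fit by ordinary least squares.
logit = np.log(share / (1.0 - share))
b, a = np.polyfit(years, logit, 1)          # slope, intercept
t_half = -a / b                             # year at which f = 0.5

def projected_share(t):
    """Projected substitution fraction at time t under the fitted model."""
    return 1.0 / (1.0 + np.exp(-(a + b * t)))

print(f"takeover midpoint ~ {t_half:.1f}, growth rate b = {b:.3f} per year")
print(f"projected share in 2016: {projected_share(2016):.2f}")
```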
Reliability and coverage analysis of non-repairable fault-tolerant memory systems
NASA Technical Reports Server (NTRS)
Cox, G. W.; Carroll, B. D.
1976-01-01
A method was developed for the construction of probabilistic state-space models for nonrepairable systems. Models were developed for several systems which achieved reliability improvement by means of error-coding, modularized sparing, massive replication and other fault-tolerant techniques. From the models developed, sets of reliability and coverage equations for the systems were developed. Comparative analyses of the systems were performed using these equation sets. In addition, the effects of varying subunit reliabilities on system reliability and coverage were described. The results of these analyses indicated that a significant gain in system reliability may be achieved by use of combinations of modularized sparing, error coding, and software error control. For sufficiently reliable system subunits, this gain may far exceed the reliability gain achieved by use of massive replication techniques, yet result in a considerable saving in system cost.
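As a hedged illustration of the kind of comparison described (not the paper's actual state-space models), the sketch below contrasts massive replication (triple modular redundancy) with modularized sparing under an imperfect-coverage switch, assuming exponential module lifetimes.

```python
import math

def R_module(lmbda, t):
    """Reliability of a single non-repairable module with constant failure rate."""
    return math.exp(-lmbda * t)

def R_tmr(lmbda, t):
    """Triple modular redundancy with perfect voting: at least 2 of 3 survive."""
    r = R_module(lmbda, t)
    return 3 * r**2 * (1 - r) + r**3

def R_standby(lmbda, t, coverage):
    """One active module plus one cold standby spare; switch-over to the spare
    succeeds with probability `coverage` (imperfect fault coverage)."""
    r = math.exp(-lmbda * t)
    # With perfect coverage this is the 2-stage Erlang survival e^{-lt}(1 + lt).
    return r + coverage * lmbda * t * math.exp(-lmbda * t)

lmbda, t = 1e-4, 5000.0   # illustrative failure rate (per hour) and mission time
print(f"simplex        : {R_module(lmbda, t):.4f}")
print(f"TMR            : {R_tmr(lmbda, t):.4f}")
print(f"standby c=0.95 : {R_standby(lmbda, t, 0.95):.4f}")
```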
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milligan, Michael; Frew, Bethany A.; Bloom, Aaron
This paper discusses challenges that relate to assessing and properly incentivizing the resources necessary to ensure a reliable electricity system with growing penetrations of variable generation (VG). The output of VG (primarily wind and solar generation) varies over time and cannot be predicted precisely. Therefore, the energy from VG is not always guaranteed to be available at times when it is most needed. This means that its contribution towards resource adequacy can be significantly less than the contribution from traditional resources. Variable renewable resources also have near-zero variable costs, and with production-based subsidies they may even have negative offer costs. Because variable costs drive the spot price of energy, this can lead to reduced prices, sales, and therefore revenue for all resources within the energy market. The characteristics of VG can also result in increased price volatility as well as the need for more flexibility in the resource fleet in order to maintain system reliability. Furthermore, we explore both traditional and evolving electricity market designs in the United States that aim to ensure resource adequacy and sufficient revenues to recover costs when those resources are needed for long-term reliability. We also investigate how reliability needs may be evolving and discuss how VG may affect future electricity market designs.
NASA Astrophysics Data System (ADS)
Brunner, Siegfried; Kargel, Christian
2011-06-01
The conservation and efficient use of natural and especially strategic resources like oil and water have become global issues, which increasingly initiate environmental and political activities for comprehensive recycling programs. To effectively reutilize oil-based materials necessary in many industrial fields (e.g. chemical and pharmaceutical industry, automotive, packaging), appropriate methods for a fast and highly reliable automated material identification are required. One non-contacting, color- and shape-independent new technique that eliminates the shortcomings of existing methods is to label materials like plastics with certain combinations of fluorescent markers ("optical codes", "optical fingerprints") incorporated during manufacture. Since time-resolved measurements are complex (and expensive), fluorescent markers must be designed that possess unique spectral signatures. The number of identifiable materials increases with the number of fluorescent markers that can be reliably distinguished within the limited wavelength band available. In this article we shall investigate the reliable detection and classification of fluorescent markers with specific fluorescence emission spectra. These simulated spectra are modeled based on realistic fluorescence spectra acquired from material samples using a modern VNIR spectral imaging system. In order to maximize the number of materials that can be reliably identified, we evaluate the performance of 8 classification algorithms based on different spectral similarity measures. The results help guide the design of appropriate fluorescent markers, optical sensors and the overall measurement system.
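The specific similarity measures evaluated are not listed in the abstract; a common choice for this kind of problem is the spectral angle between a measured emission spectrum and a library of marker signatures. The sketch below classifies a noisy synthetic spectrum against three hypothetical marker codes (all spectra and names are made up for illustration).

```python
import numpy as np

wavelengths = np.linspace(400, 1000, 301)      # nm, VNIR range

def gaussian_band(center, width, amplitude=1.0):
    """Synthetic fluorescence emission band (stand-in for measured signatures)."""
    return amplitude * np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Reference library: three hypothetical marker codes (combinations of bands).
library = {
    "code_A": gaussian_band(520, 20) + 0.6 * gaussian_band(610, 25),
    "code_B": gaussian_band(560, 20) + 0.8 * gaussian_band(700, 30),
    "code_C": gaussian_band(480, 15) + 0.5 * gaussian_band(650, 25),
}

def spectral_angle(s, r):
    """Spectral angle (radians) between two spectra; smaller = more similar."""
    cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify(spectrum):
    """Assign the measured spectrum to the closest reference marker code."""
    return min(library, key=lambda name: spectral_angle(spectrum, library[name]))

# A noisy measurement of code_B, e.g. from one pixel of a VNIR spectral imager.
rng = np.random.default_rng(0)
measured = library["code_B"] + rng.normal(0, 0.05, wavelengths.size)
print(classify(measured))   # expected: code_B
```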
Modeling integrated water user decisions in intermittent supply systems
NASA Astrophysics Data System (ADS)
Rosenberg, David E.; Tarawneh, Tarek; Abdel-Khaleq, Rania; Lund, Jay R.
2007-07-01
We apply systems analysis to estimate household water use in an intermittent supply system considering numerous interdependent water user behaviors. Some 39 household actions include conservation; improving local storage or water quality; and accessing sources having variable costs, availabilities, reliabilities, and qualities. A stochastic optimization program with recourse decisions identifies the infrastructure investments and short-term coping actions a customer can adopt to cost-effectively respond to a probability distribution of piped water availability. Monte Carlo simulations show effects for a population of customers. Model calibration reproduces the distribution of billed residential water use in Amman, Jordan. Parametric analyses suggest economic and demand responses to increased availability and alternative pricing. It also suggests potential market penetration for conservation actions, associated water savings, and subsidies to entice further adoption. We discuss new insights to size, target, and finance conservation.
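A minimal sketch of the two-stage "investment plus recourse" structure described above, assuming hypothetical demands, prices, and supply scenarios (the actual model covers 39 actions and is solved as a stochastic program, not by enumeration):

```python
# Stage 1: choose rooftop storage size.  Stage 2 (recourse): for each scenario
# of weekly piped-water availability, buy tanker water to cover any shortfall.
# All figures below are illustrative assumptions.

DEMAND = 3.0                 # m^3 per household per week (assumed)
TANKER_PRICE = 4.0           # $/m^3 for vended/tanker water (assumed)
STORAGE_COST = 0.5           # $ per m^3 of storage capacity per week (assumed)

# Probability distribution of piped supply actually delivered in a week (m^3).
scenarios = [(0.2, 1.0), (0.5, 2.0), (0.3, 3.0)]   # (probability, piped supply)

def expected_cost(storage_m3):
    """Expected weekly cost for a given first-stage storage investment."""
    cost = STORAGE_COST * storage_m3
    for prob, piped in scenarios:
        usable = min(piped, storage_m3)          # can only capture what fits
        shortfall = max(0.0, DEMAND - usable)    # recourse: buy tanker water
        cost += prob * TANKER_PRICE * shortfall
    return cost

candidates = [1.0, 2.0, 3.0, 4.0]
for s in candidates:
    print(f"storage {s:.0f} m^3 -> expected cost ${expected_cost(s):.2f}/week")
print(f"best storage size: {min(candidates, key=expected_cost):.0f} m^3")
```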
Psychometric properties of the Transitions from Foster Care Key Leader Survey.
Salazar, Amy M; Brown, Eric C; Monahan, Kathryn C; Catalano, Richard F
2016-04-01
This study summarizes the development and piloting of the Transitions from Foster Care Key Leader Survey (TFC-KLS), an instrument designed to measure change in systems serving young people transitioning from foster care to adulthood. The Jim Casey Youth Opportunity Initiative's logic model was used as a basis for instrument development. The instrument was piloted with 119 key leaders in six communities. Seven of eight latent scales performed well in psychometric testing. The relationships among the 24 measures of system change were explored. A CFA testing overall model fit was satisfactory following slight modifications. Finally, a test of inter-rater reliability between two raters did not find reliable reporting of service availability in a supplemental portion of the survey. The findings were generally positive and supported the validity and utility of the instrument for measuring system change, following some adaptations. Implications for the field are discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.
Automatic non-destructive system for quality assurance of welded elements in the aircraft industry
NASA Astrophysics Data System (ADS)
Chady, Tomasz; Waszczuk, Paweł; Szydłowski, Michał; Szwagiel, Mariusz
2018-04-01
Flaws that might result from the welding process have to be detected in order to assure the high quality, and thus the reliability, of elements used in the aircraft industry. Currently the inspection stage is conducted manually by a qualified workforce. There are no commercially available systems that could support or replace humans in the flaw detection process. In this paper the authors present a novel non-destructive system developed for quality assurance of welded elements utilized in the aircraft industry.
Online Cable Tester and Rerouter
NASA Technical Reports Server (NTRS)
Lewis, Mark; Medelius, Pedro
2012-01-01
Hardware and algorithms have been developed to transfer electrical power and data connectivity safely, efficiently, and automatically from an identified damaged/defective wire in a cable to an alternate wire path. The combination of online cable testing capabilities, along with intelligent signal rerouting algorithms, allows the user to overcome the inherent difficulty of maintaining system integrity and configuration control, while autonomously rerouting signals and functions without introducing new failure modes. The incorporation of this capability will increase the reliability of systems by ensuring system availability during operations.
NASA Technical Reports Server (NTRS)
Dalton, Penni; Cohen, Fred
2004-01-01
The ISS currently uses Ni-H2 batteries in the main power system. Although Ni-H2 is a robust and reliable system, recent advances in battery technology have paved the way for future replacement batteries to be constructed using Li-ion technology. This technology will provide lower launch weight as well as increased ISS electric power system (EPS) efficiency. The result of incorporating this technology in future resupply hardware will be greater power availability and reduced program cost. This presentation discusses the incorporation of the new technology.
Llorens, Roberto; Latorre, Jorge; Noé, Enrique; Keshner, Emily A
2016-01-01
Posturography systems that incorporate force platforms are considered to assess balance and postural control with greater sensitivity and objectivity than conventional clinical tests. The Wii Balance Board (WBB) system has been shown to have similar performance characteristics as other force platforms, but with lower cost and size. To determine the validity and reliability of a freely available WBB-based posturography system that combined the WBB with several traditional balance assessments, and to assess the performance of a cohort of stroke individuals with respect to healthy individuals. Healthy subjects and individuals with stroke were recruited. Both groups were assessed using the WBB-based posturography system. Individuals with stroke were also assessed using a laboratory grade posturography system and a battery of clinical tests to determine the concurrent validity of the system. A group of subjects were assessed twice with the WBB-based system to determine its reliability. A total of 144 healthy individuals and 53 individuals with stroke participated in the study. Concurrent validity with another posturography system was moderate to high. Correlations with clinical scales were consistent with previous research. The reliability of the system was excellent in almost all measures. In addition, the system successfully characterized individuals with stroke with respect to the healthy population. The WBB-based posturography system exhibited excellent psychometric properties and sensitivity for identifying balance performance of individuals with stroke in comparison with healthy subjects, which supports feasibility of the system as a clinical tool. Copyright © 2015 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radder, J.A.; Cramer, D.S.
1997-06-01
In order to meet Department of Energy (DOE) Defense Program requirements for tritium in the 2005-2007 time frame, new production capability must be made available. The Accelerator Production of Tritium (APT) Plant is being considered as an alternative to nuclear reactor production of tritium, which has been the preferred method in the past. The proposed APT plant will use a high-power proton accelerator to generate thermal neutrons that will be captured in {sup 3}He to produce tritium (3H). It is expected that the APT Plant will be built and operated at the DOE's Savannah River Site (SRS) in Aiken, South Carolina. Discussion is focused on Reliability, Availability, Maintainability, and Inspectability (RAMI) modeling of recent conceptual designs for balance of plant (BOP) systems in the proposed APT Plant. In the conceptual design phase, system RAMI estimates are necessary to identify the best possible system alternative and to provide a valid picture of the cost effectiveness of the proposed system for comparison with other system alternatives. RAMI estimates in this phase must necessarily be based on generic data. The objective of the RAMI analyses at the conceptual design stage is to assist the designers in achieving an optimum design which balances the reliability and maintainability requirements among the subsystems and components.
Custom LSI plus hybrid equals cost effectiveness
NASA Astrophysics Data System (ADS)
Friedman, S. N.
The ability to combine various technologies, such as bipolar linear and CMOS digital, makes it feasible to create systems with a tailored performance not available on a single monolithic circuit. The custom LSI 'BLOCK', especially if it is universal in nature, is proving to be a cost-effective way for the developer to improve his product. The custom LSI represents a low-price part in contrast to the discrete components it will replace. In addition, the hybrid assembly can realize a savings in labor as a result of the reduced parts handling and associated wire bonds. The use of automated system manufacturing techniques leads to greater reliability, as the human factor is partly eliminated. Attention is given to reliability predictions, cost considerations, and a product comparison study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beretta, D.; Lanzani, G.; Dipartimento di Fisica, P.zza Leonardo da Vinci 32, Politecnico di Milano, 20133 Milano
2015-07-15
A new experimental setup for reliable measurement of the in-plane Seebeck coefficient of organic and inorganic thin films and bulk materials is reported. The system is based on the “Quasi-Static” approach and can measure the thermopower in the range of temperature between 260 K and 460 K. The system has been tested on a pure nickel bulk sample and on a thin film of commercially available PEDOT:PSS deposited by spin coating on glass. Repeatability within 1.5% for the nickel sample is demonstrated, while accuracy in the measurement of both organic and inorganic samples is guaranteed by time interpolation of data and by operating with a temperature difference over the sample of less than 1 K.
77 FR 11515 - Application To Export Electric Energy; Pilot Power Group, Inc.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-27
... reliability of the U.S. electric power supply system. Copies of this application will be made available, upon... DEPARTMENT OF ENERGY [OE Docket No. EA-383] Application To Export Electric Energy; Pilot Power... application. SUMMARY: Pilot Power Group, Inc. (Pilot Power) has applied for authority to transmit electric...
1980-01-01
specified condition. 10. Mean-Time-Between-Maintenance (MTBM). The mean of the distribution of the time intervals between all maintenance actions... which requires unscheduled maintenance action. 3. MTBM: Mean-Time-Between-Maintenance. 4. MMH/EOH: Maintenance Man-Hours per Engine Operating Hour. 5. CLASS III FAILURE
30 CFR 585.429 - What criteria will BOEM consider in deciding whether to renew a lease or grant?
Code of Federal Regulations, 2012 CFR
2012-07-01
... criteria in deciding whether to renew a lease or grant: (a) Design life of existing technology. (b) Availability and feasibility of new technology. (c) Environmental and safety record of the lessee or grantee... reliability within the regional electrical distribution and transmission system. ...
30 CFR 585.429 - What criteria will BOEM consider in deciding whether to renew a lease or grant?
Code of Federal Regulations, 2014 CFR
2014-07-01
... criteria in deciding whether to renew a lease or grant: (a) Design life of existing technology. (b) Availability and feasibility of new technology. (c) Environmental and safety record of the lessee or grantee... reliability within the regional electrical distribution and transmission system. ...
30 CFR 585.429 - What criteria will BOEM consider in deciding whether to renew a lease or grant?
Code of Federal Regulations, 2013 CFR
2013-07-01
... criteria in deciding whether to renew a lease or grant: (a) Design life of existing technology. (b) Availability and feasibility of new technology. (c) Environmental and safety record of the lessee or grantee... reliability within the regional electrical distribution and transmission system. ...
Characteristics of gaps and natural regeneration in mature longleaf pine flatwoods ecosystems
Jennifer L. Gagnon; Eric J. Jokela; W.K. Moser; Dudley A. Huber
2004-01-01
Developing uneven-aged structure in mature stands of longleaf pine requires scientifically based silvicultural systems that are reliable, productive and sustainable. Understanding seedling responses to varying levels of site resource availability within forest gaps is essential for effectively converting even-aged stands to uneven-aged stands. A project was initiated...
Are Handheld Computers Dependable? A New Data Collection System for Classroom-Based Observations
ERIC Educational Resources Information Center
Adiguzel, Tufan; Vannest, Kimberly J.; Parker, Richard I.
2009-01-01
Very little research exists on the dependability of handheld computers used in public school classrooms. This study addresses four dependability criteria--reliability, maintainability, availability, and safety--to evaluate a data collection tool on a handheld computer. Data were collected from five sources: (1) time-use estimations by 19 special…
Tutorial: Performance and reliability in redundant disk arrays
NASA Technical Reports Server (NTRS)
Gibson, Garth A.
1993-01-01
A disk array is a collection of physically small magnetic disks that is packaged as a single unit but operates in parallel. Disk arrays capitalize on the availability of small-diameter disks from a price-competitive market to provide the cost, volume, and capacity of current disk systems but many times their performance. Unfortunately, relative to current disk systems, the larger number of components in disk arrays leads to higher rates of failure. To tolerate failures, redundant disk arrays devote a fraction of their capacity to an encoding of their information. This redundant information enables the contents of a failed disk to be recovered from the contents of non-failed disks. The simplest and least expensive encoding for this redundancy, known as N+1 parity is highlighted. In addition to compensating for the higher failure rates of disk arrays, redundancy allows highly reliable secondary storage systems to be built much more cost-effectively than is now achieved in conventional duplicated disks. Disk arrays that combine redundancy with the parallelism of many small-diameter disks are often called Redundant Arrays of Inexpensive Disks (RAID). This combination promises improvements to both the performance and the reliability of secondary storage. For example, IBM's premier disk product, the IBM 3390, is compared to a redundant disk array constructed of 84 IBM 0661 3 1/2-inch disks. The redundant disk array has comparable or superior values for each of the metrics given and appears likely to cost less. In the first section of this tutorial, I explain how disk arrays exploit the emergence of high performance, small magnetic disks to provide cost-effective disk parallelism that combats the access and transfer gap problems. The flexibility of disk-array configurations benefits manufacturer and consumer alike. In contrast, I describe in this tutorial's second half how parallelism, achieved through increasing numbers of components, causes overall failure rates to rise. Redundant disk arrays overcome this threat to data reliability by ensuring that data remains available during and after component failures.
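The N+1 parity scheme highlighted above stores the bitwise XOR of the N data blocks in each stripe, so the contents of any single failed disk can be rebuilt from the survivors; a minimal sketch (toy block sizes, not a real array controller):

```python
from functools import reduce

def parity(blocks):
    """N+1 parity: bitwise XOR of the N data blocks in a stripe."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

def rebuild(surviving_blocks, parity_block):
    """Recover the single missing block from the survivors and the parity."""
    return parity(surviving_blocks + [parity_block])

# A stripe across N = 4 data disks (toy 8-byte blocks).
stripe = [b"blk0blk0", b"blk1blk1", b"blk2blk2", b"blk3blk3"]
p = parity(stripe)

# Simulate losing disk 2 and rebuilding its contents from the other disks.
survivors = [stripe[i] for i in range(4) if i != 2]
recovered = rebuild(survivors, p)
assert recovered == stripe[2]
print("disk 2 rebuilt:", recovered)
```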
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reason, J.
Transmission terminations available today are very reliable, but they need to be. In the field, they are continually exposed to pollution and extremes of ambient temperature. In many cases, they are in the rifle sights of vandals. In contrast, cable joints - often cited as the weakest links from an electrical viewpoint - are generally protected from physical damage underground and many of the short cable systems being installed in the US today can be built without joints. All cable systems need terminations - mostly to air-insulated equipment. At 69 through 138 kV, there is intense competition among manufacturers to supply terminations for solid-dielectric cable that are low in cost, reliable, and require a minimum of skill to install. Some utilities are looking also for terminations that fit a range of cable sizes; terminations that do not contain liquid that can leak out; and terminations that are shatter-proof. All of these improvements are available in the US up to 69 kV. For higher voltages, they are on the horizon, if not already in use, overseas. 16 figs.
Integration of RAMS in LCC analysis for linear transport infrastructures. A case study for railways.
NASA Astrophysics Data System (ADS)
Calle-Cordón, Álvaro; Jiménez-Redondo, Noemi; Morales-Gámiz, F. J.; García-Villena, F. A.; Garmabaki, Amir H. S.; Odelius, Johan
2017-09-01
Life-cycle cost (LCC) analysis is an economic technique used to assess the total costs associated with the lifetime of a system in order to support decision making in long term strategic planning. For complex systems, such as railway and road infrastructures, the cost of maintenance plays an important role in the LCC analysis. Costs associated with maintenance interventions can be more reliably estimated by integrating the probabilistic nature of the failures associated to these interventions in the LCC models. Reliability, Maintainability, Availability and Safety (RAMS) parameters describe the maintenance needs of an asset in a quantitative way by using probabilistic information extracted from registered maintenance activities. Therefore, the integration of RAMS in the LCC analysis allows obtaining reliable predictions of system maintenance costs and the dependencies of these costs with specific cost drivers through sensitivity analyses. This paper presents an innovative approach for a combined RAMS & LCC methodology for railway and road transport infrastructures being developed under the on-going H2020 project INFRALERT. Such RAMS & LCC analysis provides relevant probabilistic information to be used for condition and risk-based planning of maintenance activities as well as for decision support in long term strategic investment planning.
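As a hedged sketch of how RAMS parameters feed an LCC estimate (the figures and cost structure below are illustrative assumptions, not INFRALERT values), one can derive availability and expected intervention counts from MTBF/MTTR and discount the resulting maintenance cost stream:

```python
# Hedged sketch: feeding RAMS parameters into a life-cycle cost estimate for
# one asset class (e.g. a switch or a track section).  All figures are assumed.

MTBF_H = 8000.0              # mean time between failures, hours
MTTR_H = 6.0                 # mean time to repair, hours
HOURS_PER_YEAR = 8760.0
CORRECTIVE_COST = 2500.0     # $ per corrective intervention
PREVENTIVE_COST = 400.0      # $ per preventive intervention
PM_PER_YEAR = 4              # preventive interventions per year
ACQUISITION = 120000.0       # $ up-front
YEARS = 30
DISCOUNT = 0.04              # real discount rate

availability = MTBF_H / (MTBF_H + MTTR_H)            # steady-state availability
failures_per_year = HOURS_PER_YEAR / MTBF_H          # expected failure count

annual_maintenance = (failures_per_year * CORRECTIVE_COST
                      + PM_PER_YEAR * PREVENTIVE_COST)

# Present value of the recurring maintenance cost over the service life.
npv_maintenance = sum(annual_maintenance / (1 + DISCOUNT) ** y
                      for y in range(1, YEARS + 1))
lcc = ACQUISITION + npv_maintenance

print(f"availability         : {availability:.4f}")
print(f"failures per year    : {failures_per_year:.2f}")
print(f"LCC over {YEARS} years  : ${lcc:,.0f}")
```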
Okundamiya, Michael S; Emagbetere, Joy O; Ogujor, Emmanuel A
2014-01-01
The rapid growth of the mobile telecommunication sectors of many emerging countries creates a number of problems such as network congestion and poor service delivery for network operators. This results primarily from the lack of a reliable and cost-effective power solution within such regions. This study presents a comprehensive review of the underlying principles of the renewable energy technology (RET) with the objective of ensuring a reliable and cost-effective energy solution for a sustainable development in the emerging world. The grid-connected hybrid renewable energy system incorporating a power conversion and battery storage unit has been proposed based on the availability, dynamism, and technoeconomic viability of energy resources within the region. The proposed system's performance validation applied a simulation model developed in MATLAB, using a practical load data for different locations with varying climatic conditions in Nigeria. Results indicate that, apart from being environmentally friendly, the increase in the overall energy throughput of about 4 kWh/$ of the proposed system would not only improve the quality of mobile services, by making the operations of GSM base stations more reliable and cost effective, but also better the living standards of the host communities.
Scheduling and Pricing for Expected Ramp Capability in Real-Time Power Markets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ela, Erik; O'Malley, Mark
2016-05-01
Higher variable renewable generation penetrations are occurring throughout the world on different power systems. These resources increase the variability and uncertainty on the system which must be accommodated by an increase in the flexibility of the system resources in order to maintain reliability. Many scheduling strategies have been discussed and introduced to ensure that this flexibility is available at multiple timescales. To meet variability, that is, the expected changes in system conditions, two recent strategies have been introduced: time-coupled multi-period market clearing models and the incorporation of ramp capability constraints. To appropriately evaluate these methods, it is important to assess both efficiency and reliability. But it is also important to assess the incentive structure to ensure that resources asked to perform in different ways have the proper incentives to follow these directions, which is a step often ignored in simulation studies. We find that there are advantages and disadvantages to both approaches. We also find that look-ahead horizon length in multi-period market models can impact incentives. This paper proposes scheduling and pricing methods that ensure expected ramps are met reliably, efficiently, and with associated prices based on true marginal costs that incentivize resources to do as directed by the market. Case studies show improvements of the new method.
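A miniature of a time-coupled multi-period clearing with ramp-rate limits, posed as an LP over two generators and three periods (costs, limits, and loads are illustrative assumptions, not the paper's case studies):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative 2-generator, 3-period economic dispatch with ramp-rate limits.
load = [100.0, 140.0, 110.0]           # MW demand per period
cost = [20.0, 50.0]                    # $/MWh: gen 0 cheap/slow, gen 1 dear/fast
pmax = [150.0, 80.0]                   # MW
ramp = [10.0, 60.0]                    # MW allowed change between periods

G, T = 2, 3
idx = lambda g, t: g * T + t           # flatten p[g, t] into a vector

c = np.array([cost[g] for g in range(G) for _ in range(T)])

# Power balance in every period: sum_g p[g, t] == load[t]
A_eq = np.zeros((T, G * T)); b_eq = np.array(load)
for t in range(T):
    for g in range(G):
        A_eq[t, idx(g, t)] = 1.0

# Ramp limits: -ramp[g] <= p[g, t] - p[g, t-1] <= ramp[g]
rows, rhs = [], []
for g in range(G):
    for t in range(1, T):
        up = np.zeros(G * T); up[idx(g, t)] = 1.0; up[idx(g, t - 1)] = -1.0
        rows += [up, -up]; rhs += [ramp[g], ramp[g]]
A_ub, b_ub = np.array(rows), np.array(rhs)

bounds = [(0.0, pmax[g]) for g in range(G) for _ in range(T)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
assert res.success

p = res.x.reshape(G, T)
print("dispatch (MW):\n", p.round(1))
print("total cost: $", round(res.fun, 2))
```

Because the cheap unit can only ramp 10 MW per period, the look-ahead keeps the fast, expensive unit in the schedule for the ramp between periods, which is the coupling effect the paper examines.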
Designing a reliable leak bio-detection system for natural gas pipelines.
Batzias, F A; Siontorou, C G; Spanidis, P-M P
2011-02-15
Monitoring of natural gas (NG) pipelines is an important task for economical/safety operation, loss prevention and environmental protection. Timely and reliable leak detection of gas pipelines, therefore, plays a key role in the overall integrity management of the pipeline system. Owing to the various limitations of the currently available techniques and the surveillance area that needs to be covered, research on new detector systems is still thriving. Biosensors are widely regarded as a niche technology in the environmental market, since they afford the desired detector capabilities at low cost, provided they have been properly designed/developed and rationally placed/networked/maintained with the aid of operational research techniques. This paper addresses NG leakage surveillance through a robust cooperative/synergistic scheme between biosensors and conventional detector systems; the network is validated in situ and optimized in order to provide reliable information at the required granularity level. The proposed scheme is substantiated through a knowledge-based approach and relies on Fuzzy Multicriteria Analysis (FMCA) for selecting the biosensor design that best suits both the target analyte and the operational micro-environment. This approach is illustrated in the design of leak surveying over a pipeline network in Greece. Copyright © 2010 Elsevier B.V. All rights reserved.
Impact of Distributed Energy Resources on the Reliability of a Critical Telecommunications Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, D.; Atcitty, C.; Zuffranieri, J.
2006-03-01
Telecommunications has been identified by the Department of Homeland Security as a critical infrastructure to the United States. Failures in the power systems supporting major telecommunications service nodes are a main contributor to major telecommunications outages, as documented by analyses of Federal Communications Commission (FCC) outage reports by the National Reliability Steering Committee (under auspices of the Alliance for Telecommunications Industry Solutions). There are two major issues that are having increasing impact on the sensitivity of the power distribution to telecommunication facilities: deregulation of the power industry, and changing weather patterns. A logical approach to improve the robustness of telecommunication facilities would be to increase the depth and breadth of technologies available to restore power in the face of power outages. Distributed energy resources such as fuel cells and gas turbines could provide one more onsite electric power source to provide backup power, if batteries and diesel generators fail. But does the diversity in power sources actually increase the reliability of offered power to the office equipment, or does the complexity of installing and managing the extended power system induce more potential faults and higher failure rates? This report analyzes a system involving a telecommunications facility consisting of two switch-bays and a satellite reception system.
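As a rough illustration of the question posed above (much simpler than the report's analysis), the sketch below compares the probability that at least one independent backup source carries the load during a grid outage, with and without an added fuel cell, and shows how a shared switchgear element can erode the gain; all probabilities are assumed:

```python
# Assumed probabilities that each backup source starts and carries the load
# for the duration of a grid outage (illustrative, not from the report).
start_prob = {"battery": 0.98, "diesel": 0.92, "fuel_cell": 0.95}

def p_at_least_one(sources):
    """P(at least one source available), assuming independent failures."""
    p_all_fail = 1.0
    for s in sources:
        p_all_fail *= 1.0 - start_prob[s]
    return 1.0 - p_all_fail

baseline = p_at_least_one(["battery", "diesel"])
with_der = p_at_least_one(["battery", "diesel", "fuel_cell"])
print(f"battery + diesel            : {baseline:.6f}")
print(f"battery + diesel + fuel cell: {with_der:.6f}")

# The report's central question: added complexity can offset the gain.  A
# crude way to represent it is a common switchgear/controls element that all
# sources depend on, placed in series with the redundant group.
for p_switch in (0.999, 0.99, 0.95):
    print(f"with switchgear reliability {p_switch}: {p_switch * with_der:.6f}")
```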
Advanced Self-Calibrating, Self-Repairing Data Acquisition System
NASA Technical Reports Server (NTRS)
Medelius, Pedro J. (Inventor); Eckhoff, Anthony J. (Inventor); Angel, Lucena R. (Inventor); Perotti, Jose M. (Inventor)
2002-01-01
An improved self-calibrating and self-repairing Data Acquisition System (DAS) for use in inaccessible areas, such as onboard spacecraft, capable of autonomously performing required system health checks and failure detection. When required, self-repair is implemented utilizing a "spare parts/tool box" system. The available number of spare components primarily depends upon each component's predicted reliability, which may be determined using Mean Time Between Failures (MTBF) analysis. Failing or degrading components are electronically removed and disabled to reduce power consumption, before being electronically replaced with spare components.
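A standard way to size such a "spare parts/tool box" from predicted MTBF, consistent with the description above though not necessarily the inventors' exact method, is to pick the smallest spare count whose Poisson probability of sufficiency meets a target; the numbers below are hypothetical:

```python
import math

def prob_enough_spares(k, expected_failures):
    """P(number of failures <= k) for a Poisson process with the given mean."""
    return sum(math.exp(-expected_failures) * expected_failures**i / math.factorial(i)
               for i in range(k + 1))

def spares_needed(mtbf_hours, mission_hours, n_units, target=0.99):
    """Smallest spare count giving at least `target` probability of sufficiency."""
    mean_failures = n_units * mission_hours / mtbf_hours
    k = 0
    while prob_enough_spares(k, mean_failures) < target:
        k += 1
    return k, mean_failures

# Hypothetical DAS channel card: 20 installed units, 50,000 h MTBF, 5-year mission.
k, mu = spares_needed(mtbf_hours=50_000, mission_hours=5 * 8760, n_units=20)
print(f"expected failures over mission: {mu:.2f}")
print(f"spares for 99% sufficiency    : {k}")
```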
Ecological and economical efficiency of monitoring systems for oil and gas production on the shelf
NASA Astrophysics Data System (ADS)
Kurakin, A. L.; Lobkovsky, L. I.
2014-02-01
Requirements for the reliability of monitoring-system signals (with respect to errors of the first and second kinds, i.e., false alarms and missed detections of danger) are deduced from the ratio of expenditures of different kinds (operating expenses and losses due to accidents). The expressions obtained in the research may be used for the economic justification (and optimization) of specifications for monitoring systems. In cases when optimal parameters are not available, a sufficient condition for the economic efficiency of monitoring systems is presented.
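A minimal sketch of the cost trade-off behind such requirements, assuming hypothetical event probabilities and costs (the paper derives general expressions; this only illustrates the comparison of expected annual costs):

```python
# Type I error (false alarm) triggers an unnecessary shutdown/inspection;
# Type II error (missed danger) lets an accident happen.  Figures are assumed.

P_EVENT = 0.02            # annual probability of a real dangerous event
C_FALSE_ALARM = 50_000.0  # $ per false alarm (lost production, inspection)
C_ACCIDENT = 20_000_000.0 # $ loss if a real event is missed
C_SYSTEM = 300_000.0      # $ annual cost of owning/operating the monitors

def expected_annual_cost(p_false_alarm, p_missed):
    """Expected yearly cost for given error probabilities of the monitor."""
    return (C_SYSTEM
            + (1 - P_EVENT) * p_false_alarm * C_FALSE_ALARM
            + P_EVENT * p_missed * C_ACCIDENT)

# Compare candidate monitoring systems (error rates are design parameters).
candidates = {"loose":  (0.001, 0.20),
              "middle": (0.010, 0.05),
              "strict": (0.100, 0.01)}
no_monitoring = P_EVENT * C_ACCIDENT

print(f"no monitoring: ${no_monitoring:,.0f}/yr expected loss")
for name, (pfa, pmiss) in candidates.items():
    print(f"{name:6s}: ${expected_annual_cost(pfa, pmiss):,.0f}/yr")
# Monitoring is economically justified only if its expected annual cost is
# below the expected loss without it -- the sufficient condition formalized
# in the paper.
```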
An approach to operating system testing
NASA Technical Reports Server (NTRS)
Sum, R. N., Jr.; Campbell, R. H.; Kubitz, W. J.
1984-01-01
To ensure the reliability and performance of a new system, it must be verified or validated in some manner. Currently, testing is the only reasonable technique available for doing this. Part of this testing process is the high-level system test. System testing is considered with respect to operating systems, and in particular UNIX. This consideration results in the development and presentation of a method for performing the system test. The method includes derivations from the system specifications and ideas for management of the system testing project. Results of applying the method to the IBM System/9000 XENIX operating system test and the development of a UNIX test suite are presented.
A particle swarm model for estimating reliability and scheduling system maintenance
NASA Astrophysics Data System (ADS)
Puzis, Rami; Shirtz, Dov; Elovici, Yuval
2016-05-01
Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate reliability of systems implemented according to the model view controller paradigm. Simulations based on data collected from an online system of a large financial institute are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need in system-wide maintenance.
Optimal maintenance of a multi-unit system under dependencies
NASA Astrophysics Data System (ADS)
Sung, Ho-Joon
The availability, or reliability, of an engineering component greatly influences the operational cost and safety characteristics of a modern system over its life cycle. Until recently, reliance on past empirical data has been the industry-standard practice for developing maintenance policies that provide the minimum level of system reliability. Because such empirically derived policies are vulnerable to unforeseen or fast-changing external factors, recent advancements in the study of maintenance, known as the optimal maintenance problem, have gained considerable interest as a legitimate area of research. An extensive body of applicable work is available, ranging from work concerned with identifying maintenance policies aimed at providing required system availability at minimum possible cost, to topics on imperfect maintenance of multi-unit systems under dependencies. Nonetheless, these existing mathematical approaches to solving for optimal maintenance policies must be treated with caution when considered for broader applications, as they are accompanied by specialized treatments to ease the mathematical derivation of unknown functions in both the objective function and the constraints of a given optimal maintenance problem. These unknown functions are defined as reliability measures in this thesis, and these measures (e.g., expected number of failures, system renewal cycle, expected system up time, etc.) often do not lend themselves to closed-form formulas. It is thus quite common to impose simplifying assumptions on the input probability distributions of components' lifetimes or repair policies. Simplifying the complex structure of a multi-unit system to a k-out-of-n system by neglecting any sources of dependencies is another commonly practiced technique intended to increase the mathematical tractability of a particular model. This dissertation proposes an alternative methodology for solving optimal maintenance problems that aims to achieve the same end goals as Reliability Centered Maintenance (RCM). RCM was first introduced in the aircraft industry in an attempt to bridge the gap between the empirically driven and theory-driven approaches to establishing optimal maintenance policies. Under RCM, qualitative processes that enable the prioritizing of functions based on criticality and influence are combined with mathematical modeling to obtain optimal maintenance policies. Where this thesis work deviates from RCM is its proposal to directly apply quantitative processes to model the reliability measures in the optimal maintenance problem. First, Monte Carlo (MC) simulation, in conjunction with a pre-determined Design of Experiments (DOE) table, can be used as a numerical means of obtaining the corresponding discrete simulated outcomes of the reliability measures based on combinations of the decision variables (e.g., periodic preventive maintenance interval, trigger age for opportunistic maintenance, etc.). These discrete simulation results can then be regressed as Response Surface Equations (RSEs) with respect to the decision variables. Such an approach to representing the reliability measures with continuous surrogate functions (i.e., the RSEs) not only enables the application of numerical optimization techniques to solve for optimal maintenance policies, but also obviates the need to make mathematical assumptions or impose over-simplifications on the structure of a multi-unit system for the sake of mathematical tractability.
The applicability of the proposed methodology to a real-world optimal maintenance problem is showcased through its application to the Time Limited Dispatch (TLD) of a Full Authority Digital Engine Control (FADEC) system. In broader terms, this proof-of-concept exercise can be described as a constrained optimization problem whose objective is to identify the optimal system inspection interval that guarantees a certain level of availability for a multi-unit system. A variety of reputable numerical techniques were used to model the problem as accurately as possible, including algorithms for the MC simulation, an imperfect maintenance model from quasi-renewal processes, repair time simulation, and state transition rules. Variance Reduction Techniques (VRTs) were also used in an effort to enhance MC simulation efficiency. After accurate MC simulation results are obtained, the RSEs are generated based on goodness-of-fit measures to yield as parsimonious a model as possible for constructing the optimization problem. Under the assumption of a constant failure rate for the lifetime distributions, the inspection interval from the proposed methodology was found to be consistent with the one from the common approach used in industry that leverages a Continuous Time Markov Chain (CTMC). While the latter does not consider maintenance cost settings, the proposed methodology enables an operator to consider different types of maintenance cost settings, e.g., inspection cost, system corrective maintenance cost, etc., resulting in more flexible maintenance policies. When the proposed methodology was applied to the same TLD of FADEC example, but under the more generalized assumption of a strictly Increasing Failure Rate (IFR) for the lifetime distribution, it was shown to successfully capture component wear-out as well as the economic dependencies among the system components.
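As a hedged miniature of the Monte Carlo step described above (single component, perfect preventive renewal, no imperfect-maintenance model or RSE regression), the sketch below tabulates availability and cost per hour at a few candidate PM intervals, assuming Weibull lifetimes:

```python
import math
import random

# Miniature Monte Carlo of the kind used to tabulate reliability measures
# against a candidate decision variable (here the periodic PM interval).
# Assumptions: Weibull (increasing-failure-rate) lifetimes, preventive
# maintenance renews the component, fixed downtimes and costs -- far simpler
# than the thesis's quasi-renewal imperfect-maintenance model.

SHAPE, SCALE = 2.5, 1000.0            # Weibull parameters (hours), assumed
CM_DOWN, PM_DOWN = 48.0, 8.0          # downtime per corrective / preventive action
CM_COST, PM_COST = 12_000.0, 1_500.0  # cost per corrective / preventive action
HORIZON = 50_000.0                    # operating hours per replication

def weibull_life(rng):
    """Sample a Weibull lifetime by inverse-CDF transformation."""
    return SCALE * (-math.log(1.0 - rng.random())) ** (1.0 / SHAPE)

def simulate(pm_interval, replications=500, seed=1):
    """Estimate availability and cost per hour for a given PM interval."""
    rng = random.Random(seed)
    down, cost = 0.0, 0.0
    for _ in range(replications):
        t = 0.0
        while t < HORIZON:
            life = weibull_life(rng)
            if life < pm_interval:        # component fails before scheduled PM
                t += life + CM_DOWN; down += CM_DOWN; cost += CM_COST
            else:                         # PM happens first and renews the unit
                t += pm_interval + PM_DOWN; down += PM_DOWN; cost += PM_COST
    hours = replications * HORIZON
    return 1.0 - down / hours, cost / hours

for interval in (250.0, 500.0, 750.0, 1000.0):   # candidate DOE points
    a, c = simulate(interval)
    print(f"PM every {interval:6.0f} h -> availability {a:.4f}, ${c:.2f}/h")
```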
Dual-Use Aspects of System Health Management
NASA Technical Reports Server (NTRS)
Owens, P. R.; Jambor, B. J.; Eger, G. W.; Clark, W. A.
1994-01-01
System Health Management functionality is an essential part of any space launch system. Health management functionality is an integral part of mission reliability, since it is needed to verify the reliability before the mission starts. Health Management is also a key factor in life cycle cost reduction and in increasing system availability. The degree of coverage needed by the system and the degree of coverage made available at a reasonable cost are critical parameters of a successful design. These problems are not unique to the launch vehicle world. In particular, the Intelligent Vehicle Highway System, commercial aircraft systems, train systems, and many types of industrial production facilities require various degrees of system health management. In all of these applications, too, the designers must balance the benefits and costs of health management in order to optimize costs. The importance of an integrated system is emphasized. That is, we present the case for considering health management as an integral part of system design, rather than functionality to be added on at the end of the design process. The importance of maintaining the system viewpoint is discussed in making hardware and software tradeoffs and in arriving at design decisions. We describe an approach to determine the parameters to be monitored in any system health management application. This approach is based on Design of Experiments (DOE), prototyping, failure modes and effects analyses, cost modeling and discrete event simulation. The various computer-based tools that facilitate the approach are discussed. The approach described originally was used to develop a fault tolerant avionics architecture for launch vehicles that incorporated health management as an integral part of the system. Finally, we discuss generalizing the technique to apply it to other domains. Several illustrations are presented.
Li, Xingxing; Zhang, Xiaohong; Ren, Xiaodong; Fritsche, Mathias; Wickert, Jens; Schuh, Harald
2015-02-09
The world of satellite navigation is undergoing dramatic changes with the rapid development of multi-constellation Global Navigation Satellite Systems (GNSSs). At the moment more than 70 satellites are already in view, and about 120 satellites will be available once all four systems (BeiDou + Galileo + GLONASS + GPS) are fully deployed in the next few years. This will bring great opportunities and challenges for both scientific and engineering applications. In this paper we develop a four-system positioning model to make full use of all available observations from different GNSSs. The significant improvement of satellite visibility, spatial geometry, dilution of precision, convergence, accuracy, continuity and reliability that a combining utilization of multi-GNSS brings to precise positioning are carefully analyzed and evaluated, especially in constrained environments.
State-of-the-Art for Small Satellite Propulsion Systems
NASA Technical Reports Server (NTRS)
Parker, Khary I.
2016-01-01
The NASA/Goddard Space Flight Center (NASA/GSFC) is in the business of performing world-class, space-based scientific research on various spacecraft platforms, which now include small satellites (SmallSats). In order to perform world-class science on a SmallSat, NASA/GSFC requires that its components be highly reliable, high performing, low in power consumption, and as low in cost as possible. The Propulsion Branch (Code 597) at NASA/GSFC has conducted a survey of SmallSat propulsion systems to determine their availability and level of development. Based on publicly available information and unique features, this paper discusses some of the existing SmallSat propulsion systems. The systems described in this paper do not indicate or imply any endorsement by NASA or NASA/GSFC over those not included.
A Methodology for Quantifying Certain Design Requirements During the Design Phase
NASA Technical Reports Server (NTRS)
Adams, Timothy; Rhodes, Russel
2005-01-01
A methodology for developing and balancing quantitative design requirements for safety, reliability, and maintainability has been proposed. Conceived as the basis of a more rational approach to the design of spacecraft, the methodology would also be applicable to the design of automobiles, washing machines, television receivers, or almost any other commercial product. Heretofore, it has been common practice to start by determining the requirements for reliability of elements of a spacecraft or other system to ensure a given design life for the system. Next, safety requirements are determined by assessing the total reliability of the system and adding redundant components and subsystems necessary to attain safety goals. As thus described, common practice leaves the maintainability burden to fall to chance; therefore, there is no control of recurring costs or of the responsiveness of the system. The means that have been used in assessing maintainability have been oriented toward determining the logistical sparing of components so that the components are available when needed. The process established for developing and balancing quantitative requirements for safety (S), reliability (R), and maintainability (M) derives and integrates NASA's top-level safety requirements and the controls needed to obtain program key objectives for safety and recurring cost. Being quantitative, the process conveniently uses common mathematical models. Even though the process is shown as being worked from the top down, it can also be worked from the bottom up. This process uses three math models: (1) the binomial distribution (greater-than-or-equal-to case), (2) reliability for a series system, and (3) the Poisson distribution (less-than-or-equal-to case). The zero-fail case of the binomial distribution approximates the commonly known exponential distribution, or "constant failure rate" distribution. Either model can be used. The binomial distribution was selected for modeling flexibility because it conveniently addresses both the zero-fail and failure cases. The failure case is typically used for unmanned spacecraft, as with missiles.
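A short sketch of the three models named above, with illustrative numbers that are assumptions rather than NASA requirements:

```python
from math import comb, exp, factorial

def binomial_at_least(n, k, p):
    """P(at least k successes in n trials), the >= case of the binomial model."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def series_reliability(unit_reliabilities):
    """Series system: every unit must work, so reliabilities multiply."""
    r = 1.0
    for ri in unit_reliabilities:
        r *= ri
    return r

def poisson_at_most(k, mean):
    """P(at most k events), the <= case of the Poisson model (e.g. repairs)."""
    return sum(exp(-mean) * mean**i / factorial(i) for i in range(k + 1))

# 1) zero-fail case: probability that all 10 missions succeed at p = 0.99 each,
#    which approximates a constant-failure-rate (exponential) item.
print(f"zero-fail over 10 missions : {binomial_at_least(10, 10, 0.99):.4f}")
# 2) reliability of a 4-unit series string.
print(f"series of 4 units at 0.995 : {series_reliability([0.995] * 4):.4f}")
# 3) maintainability control: chance that no more than 2 unscheduled repairs
#    occur when 1.2 are expected over the operating period.
print(f"P(repairs <= 2 | mean 1.2) : {poisson_at_most(2, 1.2):.4f}")
```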
Iglesias-Parra, Maria Rosa; García-Guerrero, Alfonso; García-Mayor, Silvia; Kaknani-Uttumchandani, Shakira; León-Campos, Álvaro; Morales-Asencio, José Miguel
2015-07-01
To develop an evaluation system of clinical competencies for the practicum of nursing students based on the Nursing Interventions Classification (NIC). Psychometric validation study: the first two phases addressed definition and content validation, and the third phase consisted of a cross-sectional study for analyzing reliability. The study population was undergraduate nursing students and clinical tutors. Through the Delphi technique, 26 competencies and 91 interventions were isolated. Cronbach's α was 0.96. Factor analysis yielded 18 factors that explained 68.82% of the variance. Overall inter-item correlation was 0.26, and item-total correlation ranged between 0.19 and 0.66. A competency system for the nursing practicum, structured on the NIC, is a reliable method for assessing and evaluating clinical competencies. Further evaluations in other contexts are needed. The availability of standardized language systems in the nursing discipline provides an ideal framework for developing nursing curricula. © 2015 Sigma Theta Tau International.
Development of formula varsity race car chassis
NASA Astrophysics Data System (ADS)
Abdullah, M. A.; Mansur, M. R.; Tamaldin, N.; Thanaraj, K.
2013-12-01
Three chassis designs have been developed using commercial computer aided design (CAD) software. The design is based on the specifications of UTeM Formula Varsity 2012 (FV2012). The design selection is derived from a weighted matrix that considers reliability, cost, time consumption and weight. The scores in the matrix are formulated from relative weighting factors among the selection criteria. All three designs are then fabricated using selected available materials. The actual cost, time consumption and weight of the chassis are compared with the theoretical weighted scores. Standard cutting, fitting and welding processes are performed in chassis mock-up and fabrication. The chassis is later assembled together with the suspension system, steering linkages, brake system, engine system, and drive shaft system. Once the chassis is assembled, studies of driver ergonomics and part accessibility are performed. The completion of the final fitting and assembly of the race car, and its reliability, demonstrate sound design-for-manufacturing (DFM) practice for the chassis.
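A minimal sketch of the weighted selection matrix described above; the criteria weights and design scores are hypothetical, not the values used for the FV2012 chassis:

```python
# Weighted decision matrix: each design is scored against each criterion and
# the weighted sum ranks the candidates.  All numbers are illustrative.
criteria = {"reliability": 0.35, "cost": 0.25, "time": 0.20, "weight": 0.20}

# Scores (1 = poor ... 5 = excellent) assigned to each candidate design.
designs = {
    "design_1": {"reliability": 4, "cost": 3, "time": 3, "weight": 4},
    "design_2": {"reliability": 5, "cost": 2, "time": 2, "weight": 3},
    "design_3": {"reliability": 4, "cost": 4, "time": 4, "weight": 3},
}

def weighted_score(scores):
    """Sum of criterion scores multiplied by their relative weights."""
    return sum(criteria[c] * scores[c] for c in criteria)

ranked = sorted(designs, key=lambda d: weighted_score(designs[d]), reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(designs[name]):.2f}")
print(f"selected chassis: {ranked[0]}")
```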
Space Station Freedom electric power system availability study
NASA Technical Reports Server (NTRS)
Turnquist, Scott R.
1990-01-01
The results of follow-on availability analyses performed on the Space Station Freedom electric power system (EPS) are detailed. The scope includes analyses of several EPS design variations: the 4-photovoltaic (PV) module baseline EPS design, a 6-PV module EPS design, and a 3-solar dynamic module EPS design which included a 10 kW PV module. The analyses performed included: determining the discrete power levels that the EPS will operate at upon various component failures and the availability of each of these operating states; ranking EPS components by the relative contribution each component type makes to the power availability of the EPS; determining the availability impacts of including structural and long-life EPS components in the availability models used in the analyses; determining optimum sparing strategies, for storing spare EPS components on-orbit, to maintain high average power capability with low lift-mass requirements; and analyses to determine the sensitivity of EPS availability to uncertainties in the component reliability and maintainability data used.
Reliability of classification for post-traumatic ankle osteoarthritis.
Claessen, Femke M A P; Meijer, Diederik T; van den Bekerom, Michel P J; Gevers Deynoot, Barend D J; Mallee, Wouter H; Doornberg, Job N; van Dijk, C Niek
2016-04-01
The purpose of this study was to identify the most reliable classification system for clinical outcome studies to categorize post-traumatic fracture osteoarthritis. A total of 118 orthopaedic surgeons and residents-gathered in the Ankle Platform Study Collaborative Science of Variation Group-evaluated 128 anteroposterior and lateral radiographs of patients after a bi- or trimalleolar ankle fracture on a Web-based platform in order to rate post-traumatic osteoarthritis according to the classification systems coined by (1) van Dijk, (2) Kellgren, and (3) Takakura. Reliability was evaluated with the use of Siegel and Castellan's multirater kappa measure. Differences between classification systems were compared using the two-sample Z-test. Interobserver agreement of surgeons who participated in the survey was fair for the van Dijk osteoarthritis scale (k = 0.24), and poor for the Takakura (k = 0.19) and Kellgren systems (k = 0.18) according to the categorical rating of Landis and Koch. This difference of one categorical rating was found to be significant (p < 0.001, CI 0.046-0.053) with the high numbers of observers and cases available. This study documents fair interobserver agreement for the van Dijk osteoarthritis scale, and poor interobserver agreement for the Takakura and Kellgren osteoarthritis classification systems. Because of the low interobserver agreement for the van Dijk, Kellgren, and Takakura classification systems, these systems cannot be used for clinical decision-making. Development of diagnostic criteria on the basis of consecutive patients, Level II.
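For reference, a sketch of a Fleiss-type multirater kappa of the kind attributed to Siegel and Castellan above, applied to made-up rating counts (not the study's data):

```python
import numpy as np

def multirater_kappa(counts):
    """Fleiss-type multirater kappa.

    `counts[i, j]` is the number of raters who assigned case i to category j;
    every case must be rated by the same number of raters.
    """
    counts = np.asarray(counts, dtype=float)
    n_cases, _ = counts.shape
    n_raters = counts[0].sum()

    p_j = counts.sum(axis=0) / (n_cases * n_raters)        # category prevalence
    P_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1.0 - P_e)

# Toy example: 5 cases, 6 raters, 3 osteoarthritis grades.
ratings = [
    [6, 0, 0],
    [3, 3, 0],
    [1, 4, 1],
    [0, 2, 4],
    [2, 2, 2],
]
print(f"multirater kappa: {multirater_kappa(ratings):.3f}")
```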
75 FR 72664 - System Personnel Training Reliability Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-26
...Under section 215 of the Federal Power Act, the Commission approves two Personnel Performance, Training and Qualifications (PER) Reliability Standards, PER-004-2 (Reliability Coordination--Staffing) and PER-005-1 (System Personnel Training), submitted to the Commission for approval by the North American Electric Reliability Corporation, the Electric Reliability Organization certified by the Commission. The approved Reliability Standards require reliability coordinators, balancing authorities, and transmission operators to establish a training program for their system operators, verify each of their system operators' capability to perform tasks, and provide emergency operations training to every system operator. The Commission also approves NERC's proposal to retire two existing PER Reliability Standards that are replaced by the standards approved in this Final Rule.
Analysis Of The Effects Of Marine Corps M1A1 Abram’s Tank Age On Operational Availability
2014-06-01
effects of age, as measured by the time since the last depot-level rebuild, on equipment operational availability for the M1A1 MBT in the Marine Corps... prior M1A1 reliability studies. We reviewed depot- and unit-level maintenance records within the USMC's System Operational Effectiveness database to...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bucknor, Matthew; Grabaskas, David; Brunett, Acacia
2015-04-26
Advanced small modular reactor designs include many advantageous design features such as passively driven safety systems that are arguably more reliable and cost effective relative to conventional active systems. Despite their attractiveness, a reliability assessment of passive systems can be difficult using conventional reliability methods due to the nature of passive systems. Simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating various methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.
Source Data Impacts on Epistemic Uncertainty for Launch Vehicle Fault Tree Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven; Ring, Robert
2016-01-01
Launch vehicle systems are designed and developed using both heritage and new hardware. Design modifications to the heritage hardware to fit new functional system requirements can impact the applicability of heritage reliability data. Risk estimates for newly designed systems must be developed from generic data sources such as commercially available reliability databases using reliability prediction methodologies, such as those addressed in MIL-HDBK-217F. Failure estimates must be converted from the generic environment to the specific operating environment of the system in which it is used. In addition, some qualification of applicability for the data source to the current system should be made. Characterizing data applicability under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This paper will demonstrate a data-source applicability classification method for suggesting epistemic component uncertainty to a target vehicle based on the source and operating environment of the originating data. The source applicability is determined using heuristic guidelines while translation of operating environments is accomplished by applying statistical methods to MIL-HDBK-217F tables. The paper provides an example of assigning environmental-factor uncertainty when translating between operating environments for microelectronic part-type components. The heuristic guidelines will be followed by uncertainty-importance routines to assess the need for more applicable data to reduce model uncertainty.
Striped tertiary storage arrays
NASA Technical Reports Server (NTRS)
Drapeau, Ann L.
1993-01-01
Data striping is a technique for increasing the throughput and reducing the response time of large accesses to a storage system. In striped magnetic or optical disk arrays, a single file is striped or interleaved across several disks; in a striped tape system, files are interleaved across tape cartridges. Because a striped file can be accessed by several disk drives or tape recorders in parallel, the sustained bandwidth to the file is greater than in non-striped systems, where accesses to the file are restricted to a single device. It is argued that applying striping to tertiary storage systems will provide needed performance and reliability benefits. The performance benefits of striping for applications using large tertiary storage systems are discussed. The paper introduces commonly available tape drives and libraries and discusses their performance limitations, especially focusing on the long latency of tape accesses. This section also describes an event-driven tertiary storage array simulator that is being used to understand the best ways of configuring these storage arrays. The reliability problems of magnetic tape devices are discussed, and plans for modeling the overall reliability of striped tertiary storage arrays to identify the amount of error correction required are described. Finally, work being done by other members of the Sequoia group to address latency of accesses, optimizing tertiary storage arrays that perform mostly writes, and compression is discussed.
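A hedged sketch of the core idea of striping: logical blocks of a file are assigned round-robin across devices (disks or tape drives), so consecutive blocks can be read or written in parallel. The stripe unit and device count below are arbitrary example values, not parameters from the work described here.

    def locate_block(logical_block, num_devices, stripe_unit_blocks=1):
        """Map a logical file block to (device index, block offset on that device)."""
        stripe = logical_block // stripe_unit_blocks          # which stripe unit the block falls in
        device = stripe % num_devices                         # round-robin device assignment
        local_stripe = stripe // num_devices                  # stripe units already placed on that device
        offset = local_stripe * stripe_unit_blocks + logical_block % stripe_unit_blocks
        return device, offset

    # Example: blocks 0..7 of a file striped across 4 devices, one block per stripe unit.
    if __name__ == "__main__":
        for b in range(8):
            print(b, locate_block(b, num_devices=4))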
TFTR neutral beam control and monitoring for DT operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
O`Connor, T.; Kamperschroer, J.; Chu, J.
1995-12-31
Record fusion power output has recently been obtained in TFTR with the injection of deuterium and tritium neutral beams. This significant achievement was due in part to the controls, software, and data processing capabilities added to the neutral beam system for DT operations. Chief among these improvements was the addition of SUN workstations and large dynamic data storage to the existing Central Instrumentation Control and Data Acquisition (CICADA) system. Essentially instantaneous look back over the recent shot history has been provided for most beam waveforms and analysis results. Gas regulation controls allowing remote switchover between deuterium and tritium were also added. With these tools, comparison of the waveforms and data of deuterium and tritium for four test conditioning pulses quickly produced reliable tritium setpoints. Thereafter, all beam conditioning was performed with deuterium, thus saving the tritium supply for the important DT injection shots. The lookback capability also led to modifications of the gas system to improve reliability and to control ceramic valve leakage by backbiasing. Other features added to improve the reliability and availability of DT neutral beam operations included master beamline controls and displays, a beamline thermocouple interlock system, a peak thermocouple display, automatic gas inventory and cryo panel gas loading monitoring, beam notching controls, a display of beam/plasma interlocks, and a feedback system to control beam power based on plasma conditions.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. This distribution is very suitable for representing the lengths of life of many cases and is available in a simple statistical form. The characteristic of this distribution is a constant hazard rate. The exponential distribution is a special case of the Weibull distribution (shape parameter equal to one). In this paper our effort is to introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian analysis approach and to present the analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. The paper describes the likelihood function, followed by the posterior distribution and the point, interval, hazard function, and reliability estimates. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
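For orientation, the following display sketches the standard form such a model takes. The notation is generic, and the particular non-informative prior shown is a common choice used here for illustration; it is not necessarily the one adopted in the paper. With independent exponential causes of failure j = 1, ..., k having rates \lambda_j, d_j observed failures from cause j, and total exposure time T:

    S(t) = \exp\Big(-t \sum_{j=1}^{k} \lambda_j\Big), \qquad
    L(\lambda_1,\dots,\lambda_k \mid \text{data}) \propto \Big(\prod_{j=1}^{k} \lambda_j^{d_j}\Big) e^{-T \sum_j \lambda_j}

    \pi(\lambda_j) \propto \frac{1}{\lambda_j} \;\Rightarrow\; \lambda_j \mid \text{data} \sim \mathrm{Gamma}(d_j,\; T) \ \text{(independently)}

    \text{crude probability of failure from cause } j \text{ by time } t:\quad
    Q_j(t) = \frac{\lambda_j}{\sum_i \lambda_i}\Big(1 - e^{-t\sum_i \lambda_i}\Big), \qquad
    \text{net probability:}\quad q_j(t) = 1 - e^{-\lambda_j t}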
Reliability of CGA/LGA/HDI Package Board/Assembly (Revision A)
NASA Technical Reports Server (NTRS)
Ghaffarian, Reza
2013-01-01
This follow-up report presents reliability test results conducted by thermal cycling of five CGA assemblies evaluated under two extreme cycle profiles, representative of use for high-reliability applications. The thermal cycles ranged from a low temperature of -55 C to maximum temperatures of either 100 C or 125 C with a slow ramp-up rate (3 C/min) and dwell times of about 15 minutes at the two extremes. Optical photomicrographs that illustrate key inspection findings of up to 200 thermal cycles are presented. Other information presented includes an evaluation of the integrity of capacitors on the CGA substrate after thermal cycling as well as a process evaluation for direct assembly of an LGA onto a PCB. The qualification guidelines, which are based on the test results for CGA/LGA/HDI packages and board assemblies, will facilitate NASA projects' use of very dense and newly available FPGA area array packages with known reliability and mitigated risks, allowing greater processing power in a smaller board footprint and lower system weight.
Reliability of SNOMED-CT Coding by Three Physicians using Two Terminology Browsers
Chiang, Michael F.; Hwang, John C.; Yu, Alexander C.; Casper, Daniel S.; Cimino, James J.; Starren, Justin
2006-01-01
SNOMED-CT has been promoted as a reference terminology for electronic health record (EHR) systems. Many important EHR functions are based on the assumption that medical concepts will be coded consistently by different users. This study is designed to measure agreement among three physicians using two SNOMED-CT terminology browsers to encode 242 concepts from five ophthalmology case presentations in a publicly-available clinical journal. Inter-coder reliability, based on exact coding match by each physician, was 44% using one browser and 53% using the other. Intra-coder reliability testing revealed that a different SNOMED-CT code was obtained up to 55% of the time when the two browsers were used by one user to encode the same concept. These results suggest that the reliability of SNOMED-CT coding is imperfect, and may be a function of browsing methodology. A combination of physician training, terminology refinement, and browser improvement may help increase the reproducibility of SNOMED-CT coding. PMID:17238317
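A minimal sketch of how the kind of exact-match agreement reported here can be computed, assuming each coder's output is a list of codes aligned concept by concept (the alignment step itself is assumed already done); the code strings below are placeholders, not real SNOMED-CT concepts.

    from itertools import combinations

    def exact_match_agreement(codes_a, codes_b):
        """Fraction of concepts assigned the identical code by two coders."""
        assert len(codes_a) == len(codes_b)
        matches = sum(1 for a, b in zip(codes_a, codes_b) if a == b)
        return matches / len(codes_a)

    def mean_pairwise_agreement(coders):
        """Average exact-match agreement over all pairs of coders."""
        pairs = list(combinations(coders, 2))
        return sum(exact_match_agreement(a, b) for a, b in pairs) / len(pairs)

    # Hypothetical example: three coders, five concepts each (placeholder code strings).
    coder1 = ["111111", "222222", "333333", "444444", "555555"]
    coder2 = ["111111", "999999", "333333", "444444", "555555"]
    coder3 = ["111111", "222222", "333333", "444444", "777777"]
    print(mean_pairwise_agreement([coder1, coder2, coder3]))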
Distributed Computing for the Pierre Auger Observatory
NASA Astrophysics Data System (ADS)
Chudoba, J.
2015-12-01
The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been used successfully. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years VO auger has been among the top ten EGI users in terms of total computing time used. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with small and less reliable sites used for the bulk production. The new system can also use available resources in clouds. The Dirac File Catalog replaced LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we give a comparison of the old and the new production systems and report the experience of migrating to the new system.
Grooten, Wilhelmus Johannes Andreas; Sandberg, Lisa; Ressman, John; Diamantoglou, Nicolas; Johansson, Elin; Rasmussen-Barr, Eva
2018-01-08
Clinical examinations are subjective and often show low validity and reliability. Objective and highly reliable quantitative assessments are available in laboratory settings using 3D motion analysis, but these systems are too expensive to use for simple clinical examinations. Qinematic™ is an interactive movement analysis system based on the Kinect camera and is an easy-to-use clinical measurement system for assessing posture, balance and side-bending. The aim of the study was to test the test-retest reliability and construct validity of Qinematic™ in a healthy population, and to calculate the minimal clinical differences for the variables of interest. A further aim was to identify the discriminative validity of Qinematic™ in people with low-back pain (LBP). We performed a test-retest reliability study (n = 37) with around 1 week between the occasions, a construct validity study (n = 30) in which Qinematic™ was tested against a 3D motion capture system, and a discriminative validity study, in which a group of people with LBP (n = 20) was compared to healthy controls (n = 17). We tested a large range of psychometric properties of 18 variables in three sections: posture (head and pelvic position, weight distribution), balance (sway area and velocity in single- and double-leg stance), and side-bending. The majority of the variables in the posture and balance sections showed poor/fair reliability (ICC < 0.4) and poor/fair validity (Spearman <0.4), with significant differences between occasions and between Qinematic™ and the 3D motion capture system. In the clinical study, Qinematic™ did not differ between people with LBP and healthy controls for these variables. For one variable, side-bending to the left, there was excellent reliability (ICC =0.898), excellent validity (r = 0.943), and Qinematic™ could differentiate between LBP and healthy individuals (p = 0.012). This paper shows that a novel software program (Qinematic™) based on the Kinect camera for measuring balance, posture and side-bending has poor psychometric properties, indicating that the variables on balance and posture should not be used for monitoring individual changes over time or in research. Future research on the dynamic tasks of Qinematic™ is warranted.
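For readers unfamiliar with the reliability statistic used here, the sketch below computes a single-measure two-way intraclass correlation (ICC(2,1), in the Shrout and Fleiss convention) from a subjects-by-occasions table. It is a generic illustration with made-up data, not the authors' analysis code, and the exact ICC form they used may differ.

    def icc_2_1(table):
        """ICC(2,1): two-way random effects, absolute agreement, single measures.
        `table` is a list of rows; each row holds one subject's scores across occasions."""
        n = len(table)            # subjects
        k = len(table[0])         # occasions (or raters)
        grand = sum(sum(row) for row in table) / (n * k)
        row_means = [sum(row) / k for row in table]
        col_means = [sum(table[i][j] for i in range(n)) / n for j in range(k)]

        ss_rows = k * sum((m - grand) ** 2 for m in row_means)
        ss_cols = n * sum((m - grand) ** 2 for m in col_means)
        ss_total = sum((table[i][j] - grand) ** 2 for i in range(n) for j in range(k))
        ss_error = ss_total - ss_rows - ss_cols

        bms = ss_rows / (n - 1)                   # between-subjects mean square
        jms = ss_cols / (k - 1)                   # between-occasions mean square
        ems = ss_error / ((n - 1) * (k - 1))      # residual mean square
        return (bms - ems) / (bms + (k - 1) * ems + k * (jms - ems) / n)

    # Made-up test-retest data: 5 subjects measured on 2 occasions.
    print(icc_2_1([[10, 11], [12, 13], [9, 9], [15, 17], [11, 12]]))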
Space flight risk data collection and analysis project: Risk and reliability database
NASA Technical Reports Server (NTRS)
1994-01-01
The focus of the NASA 'Space Flight Risk Data Collection and Analysis' project was to acquire and evaluate space flight data with the express purpose of establishing a database containing measurements of specific risk assessment - reliability - availability - maintainability - supportability (RRAMS) parameters. The developed comprehensive RRAMS database will support the performance of future NASA and aerospace industry risk and reliability studies. One of the primary goals has been to acquire unprocessed information relating to the reliability and availability of launch vehicles and the subsystems and components thereof from the 45th Space Wing (formerly Eastern Space and Missile Command -ESMC) at Patrick Air Force Base. After evaluating and analyzing this information, it was encoded in terms of parameters pertinent to ascertaining reliability and availability statistics, and then assembled into an appropriate database structure.
Developing Reliable Life Support for Mars
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2017-01-01
A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, or directly supply water or oxygen, or if necessary bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human-tended closed-system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty in achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher reliability systems. The limitations of budget, schedule, and technology may suggest accepting lower and less certain expected reliability. A plan to develop reliable life support is needed to achieve the best possible reliability.
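One way to make the spares argument concrete is the standard Poisson spares calculation: if a component fails randomly at a constant rate and n spares are carried, the probability of completing the mission without exhausting the spares is the Poisson probability of at most n failures. The failure rate, mission duration, and goal below are illustrative assumptions, not values from the paper.

    import math

    def prob_spares_sufficient(failure_rate_per_hr, mission_hours, n_spares):
        """P(no more than n_spares failures) for a Poisson failure process."""
        mean_failures = failure_rate_per_hr * mission_hours
        return sum(math.exp(-mean_failures) * mean_failures ** i / math.factorial(i)
                   for i in range(n_spares + 1))

    def spares_needed(failure_rate_per_hr, mission_hours, goal=0.999):
        """Smallest number of spares meeting the reliability goal."""
        n = 0
        while prob_spares_sufficient(failure_rate_per_hr, mission_hours, n) < goal:
            n += 1
        return n

    # Example: a unit with an assumed 10,000 h MTBF on a roughly 22,000 h Mars mission.
    print(spares_needed(1 / 10_000, 22_000, goal=0.999))
    # Doubling the assumed failure rate shows how underestimating it leaves too few spares.
    print(spares_needed(2 / 10_000, 22_000, goal=0.999))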
Intelligent Chemistry Management System (ICMS)--A new approach to steam generator chemistry control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barto, R.J.; Farrell, D.M.; Noto, F.A.
1986-04-01
The Intelligent Chemistry Management System (ICMS) is a new tool which assists in steam generator chemistry control. Utilizing diagnostic capabilities, the ICMS will provide utility and industrial boiler operators, system chemists, and plant engineers with a tool for monitoring, diagnosing, and controlling steam generator system chemistry. By reducing the number of forced outages through early identification of potentially detrimental conditions, suggestion of possible causes, and execution of corrective actions, improvements in unit availability and reliability will result. The system monitors water and steam quality at a number of critical locations in the plant.
Advanced Crew Rescue Vehicle/Personnel Launch System
NASA Astrophysics Data System (ADS)
Craig, Jerry W.
1993-02-01
The Advanced Crew Rescue Vehicle (ACRV) will be an essential element of the Space Station to respond to three specific missions, all of which have occurred during the history of space exploration by the U.S. and the Soviets: (1) Mission DRM-1: Return of disabled crew members during medical emergencies; (2) Mission DRM-2: Return of crew members from accidents or as a result of failures of Space Station systems; and (3) Mission DRM-3: Return of crew members during interruption of Space Shuttle launches. The ACRV will have the ability to transport up to eight astronauts during a 24-hour mission. Not only would the ACRV serve as a lifeboat to provide transportation back to Earth, but it would also serve as an immediately available safe refuge in case the Space Station were severely damaged by space debris or other catastrophe. Upon return to Earth, existing world-wide search and rescue assets operated by the Coast Guard and Department of Defense would be able to retrieve personnel returned to Earth via the ACRV. The operational approach proposed for the ACRV is tailored to satisfying mission requirements for simplicity of operation (no piloting skills or specially trained personnel are required), continuous availability, high reliability and affordability. By using proven systems as the basis for many critical ACRV systems, the ACRV program is more likely to achieve each of these mission requirements. Nonetheless, the need for the ACRV to operate reliably with little preflight preparation after, perhaps, 5 to 10 years in orbit imposes challenges not faced by any previous space system of this complexity. Specific concerns exist regarding micrometeoroid impacts, battery life, and degradation of recovery parachutes while in storage.
Advanced Crew Rescue Vehicle/Personnel Launch System
NASA Technical Reports Server (NTRS)
Craig, Jerry W.
1993-01-01
The Advanced Crew Rescue Vehicle (ACRV) will be an essential element of the Space Station to respond to three specific missions, all of which have occurred during the history of space exploration by the U.S. and the Soviets: (1) Mission DRM-1: Return of disabled crew members during medical emergencies; (2) Mission DRM-2: Return of crew members from accidents or as a result of failures of Space Station systems; and (3) Mission DRM-3: Return of crew members during interruption of Space Shuttle launches. The ACRV will have the ability to transport up to eight astronauts during a 24-hour mission. Not only would the ACRV serve as a lifeboat to provide transportation back to Earth, but it would also serve as an immediately available safe refuge in case the Space Station were severely damaged by space debris or other catastrophe. Upon return to Earth, existing world-wide search and rescue assets operated by the Coast Guard and Department of Defense would be able to retrieve personnel returned to Earth via the ACRV. The operational approach proposed for the ACRV is tailored to satisfying mission requirements for simplicity of operation (no piloting skills or specially trained personnel are required), continuous availability, high reliability and affordability. By using proven systems as the basis for many critical ACRV systems, the ACRV program is more likely to achieve each of these mission requirements. Nonetheless, the need for the ACRV to operate reliably with little preflight preparation after, perhaps, 5 to 10 years in orbit imposes challenges not faced by any previous space system of this complexity. Specific concerns exist regarding micrometeoroid impacts, battery life, and degradation of recovery parachutes while in storage.
Calculating system reliability with SRFYDO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M
2010-01-01
SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
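As a rough sketch of the kind of Bayesian roll-up such a tool performs (not SRFYDO's actual algorithm, which also handles aging and usage covariates), component pass/fail test data can be given Beta posteriors and propagated to a series-system reliability estimate by Monte Carlo sampling. The priors and test counts below are illustrative assumptions.

    import random

    def posterior_system_reliability(component_tests, n_draws=50_000):
        """component_tests: list of (successes, failures) per component.
        Uses independent Beta(1 + s, 1 + f) posteriors (uniform priors) and a series structure."""
        draws = []
        for _ in range(n_draws):
            r_sys = 1.0
            for s, f in component_tests:
                r_sys *= random.betavariate(1 + s, 1 + f)   # draw one component reliability
            draws.append(r_sys)
        draws.sort()
        mean = sum(draws) / n_draws
        lo, hi = draws[int(0.05 * n_draws)], draws[int(0.95 * n_draws)]
        return mean, (lo, hi)   # point estimate and a 90% credible interval

    # Hypothetical test data: three components in series.
    print(posterior_system_reliability([(48, 2), (95, 1), (29, 0)]))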
Survey on Ranging Sensors and Cooperative Techniques for Relative Positioning of Vehicles
de Ponte Müller, Fabian
2017-01-01
Future driver assistance systems will rely on accurate, reliable and continuous knowledge of the position of other road participants, including pedestrians, bicycles and other vehicles. The usual approach to tackle this requirement is to use on-board ranging sensors inside the vehicle. Radar, laser scanners or vision-based systems are able to detect objects in their line-of-sight. In contrast to these non-cooperative ranging sensors, cooperative approaches follow a strategy in which other road participants actively support the estimation of the relative position. The limitations of on-board ranging sensors regarding their detection range and angle of view, and their susceptibility to blockage, can be addressed by using a cooperative approach based on vehicle-to-vehicle communication. The fusion of both cooperative and non-cooperative strategies seems to offer the largest benefits regarding accuracy, availability and robustness. This survey offers the reader a comprehensive review on different techniques for vehicle relative positioning. The reader will learn the important performance indicators when it comes to relative positioning of vehicles, the different technologies that are both commercially available and currently under research, their expected performance and their intrinsic limitations. Moreover, the latest research in the area of vision-based systems for vehicle detection, as well as the latest work on GNSS-based vehicle localization and vehicular communication for relative positioning of vehicles, are reviewed. The survey also includes the research work on the fusion of cooperative and non-cooperative approaches to increase the reliability and the availability. PMID:28146129
Methods and Costs to Achieve Ultra Reliable Life Support
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2012-01-01
A published Mars mission is used to explore the methods and costs to achieve ultra reliable life support. The Mars mission and its recycling life support design are described. The life support systems were made triply redundant, implying that each individual system will have fairly good reliability. Ultra reliable life support is needed for Mars and other long, distant missions. Current systems apparently have insufficient reliability. The life cycle cost of the Mars life support system is estimated. Reliability can be increased by improving the intrinsic system reliability, adding spare parts, or by providing technically diverse redundant systems. The costs of these approaches are estimated. Adding spares is least costly but may be defeated by common cause failures. Using two technically diverse systems is effective but doubles the life cycle cost. Achieving ultra reliability is worth its high cost because the penalty for failure is very high.
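The point about common-cause failures can be illustrated with the standard beta-factor approximation: identical redundant strings share a fraction beta of their failures, which caps the benefit of adding copies, whereas technically diverse systems are (ideally) closer to independent. The beta value and failure probabilities below are illustrative assumptions, not figures from the paper.

    def redundant_identical(q_single, n_copies, beta):
        """Beta-factor model: a fraction `beta` of failures is common cause and defeats all copies."""
        q_ccf = beta * q_single
        q_independent = ((1.0 - beta) * q_single) ** n_copies
        return 1.0 - (q_ccf + q_independent)          # approximate system reliability

    def redundant_diverse(q_a, q_b):
        """Two technically diverse systems assumed to fail independently."""
        return 1.0 - q_a * q_b

    q = 0.05   # assumed failure probability of one life support string over the mission
    print("1 string:              ", 1 - q)
    print("3 identical, beta=0.1: ", redundant_identical(q, 3, beta=0.1))
    print("2 diverse systems:     ", redundant_diverse(q, 0.08))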
NASA Astrophysics Data System (ADS)
Sembiring, N.; Ginting, E.; Darnello, T.
2017-12-01
A company that produces refined sugar faces the problem that its production floor has not reached the targeted availability of its critical machines because they frequently suffer breakdowns. This results in sudden losses of production time and production opportunities. The problem can be addressed with reliability engineering methods, in which a statistical approach to the historical failure data is used to identify the pattern of the distribution. The method provides the reliability, failure rate, and availability of a machine over the scheduled maintenance interval. Distribution tests of the time-between-failures (MTTF) data show that the flexible hose component follows a lognormal distribution, while the teflon cone lifting component follows a Weibull distribution. Distribution tests of the mean time to repair (MTTR) data show that the flexible hose component follows an exponential distribution, while the teflon cone lifting component follows a Weibull distribution. With a replacement schedule of every 720 hours, the flexible hose component has a reliability of 0.2451 and an availability of 0.9960; with a replacement schedule of every 1944 hours, the critical teflon cone lifting component has a reliability of 0.4083 and an availability of 0.9927.
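A small sketch of the calculations behind such numbers, under the stated distribution choices: reliability at the replacement interval from the fitted time-to-failure distribution, and steady-state availability from MTTF and MTTR. The shape, scale, MTTF, and MTTR values here are placeholders, not the fitted parameters from the study.

    import math

    def weibull_reliability(t, shape, scale):
        """R(t) = exp(-(t/scale)^shape) for a Weibull time-to-failure distribution."""
        return math.exp(-((t / scale) ** shape))

    def lognormal_reliability(t, mu, sigma):
        """R(t) = 1 - Phi((ln t - mu)/sigma) for a lognormal time-to-failure distribution."""
        z = (math.log(t) - mu) / sigma
        return 0.5 * math.erfc(z / math.sqrt(2.0))

    def steady_state_availability(mttf, mttr):
        return mttf / (mttf + mttr)

    # Placeholder parameters: evaluate at the 1944-hour replacement interval.
    print("R(1944 h) =", weibull_reliability(1944, shape=1.8, scale=2500))
    print("A =", steady_state_availability(mttf=2200, mttr=16))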
Zuck, T F; Cumming, P D; Wallace, E L
2001-12-01
The safety of blood for transfusion depends, in part, on the reliability of the health history given by volunteer blood donors. To improve reliability, a pilot study evaluated the use of an interactive computer-based audiovisual donor interviewing system at a typical midwestern blood center in the United States. An interactive video screening system was tested in a community donor center environment on 395 volunteer blood donors. Of the donors using the system, 277 completed surveys regarding their acceptance of and opinions about the system. The study showed that an interactive computer-based audiovisual donor screening system was an effective means of conducting the donor health history. The majority of donors found the system understandable and favored the system over a face-to-face interview. Further, most donors indicated that they would be more likely to return if they were to be screened by such a system. Interactive computer-based audiovisual blood donor screening is useful and well accepted by donors; it may prevent a majority of errors and accidents that are reportable to the FDA; and it may contribute to increased safety and availability of the blood supply.
Developing of an automation for therapy dosimetry systems by using labview software
NASA Astrophysics Data System (ADS)
Aydin, Selim; Kam, Erol
2018-06-01
Traceability, accuracy and consistency of radiation measurements are essential in radiation dosimetry, particularly in radiotherapy, where the outcome of treatments is highly dependent on the radiation dose delivered to patients. Therefore it is very important to provide reliable, accurate and fast calibration services for therapy dosimeters, since the radiation dose delivered to a radiotherapy patient is directly related to the accuracy and reliability of these devices. In this study, we report the performance of in-house developed, computer-controlled data acquisition and monitoring software for commercially available radiation therapy electrometers. The LabVIEW® software suite is used to provide reliable, fast and accurate calibration services. The software also collects environmental data such as temperature, pressure and humidity in order to use them in correction factor calculations. By using this software tool, a better control over the calibration process is achieved and the need for human intervention is reduced. This is the first software that can control dosimeter systems frequently used in the radiation therapy field at hospitals, such as Unidos Webline, Unidos E, Dose-1 and PC Electrometers.
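One environmental correction such software typically automates is the temperature-pressure correction applied to vented ionization chamber readings, k_TP = [(273.15 + T)/(273.15 + T0)] x (P0/P). The sketch below assumes the common reference conditions of 20 C and 101.325 kPa; the actual reference conditions and correction set used by the system described here are not stated in the abstract.

    def k_tp(temp_c, pressure_kpa, ref_temp_c=20.0, ref_pressure_kpa=101.325):
        """Temperature-pressure correction factor for a vented ionization chamber."""
        return ((273.15 + temp_c) / (273.15 + ref_temp_c)) * (ref_pressure_kpa / pressure_kpa)

    # Example: correct a raw electrometer reading for a warm, low-pressure lab environment.
    reading_nc = 15.42                       # raw reading in nC (placeholder value)
    corrected = reading_nc * k_tp(24.1, 99.8)
    print(round(corrected, 3))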
High Availability in Optical Networks
NASA Astrophysics Data System (ADS)
Grover, Wayne D.; Wosinska, Lena; Fumagalli, Andrea
2005-09-01
Call for Papers: High Availability in Optical Networks Submission Deadline: 1 January 2006 The Journal of Optical Networking (JON) is soliciting papers for a feature Issue pertaining to all aspects of reliable components and systems for optical networks and concepts, techniques, and experience leading to high availability of services provided by optical networks. Most nations now recognize that telecommunications in all its forms -- including voice, Internet, video, and so on -- are "critical infrastructure" for the society, commerce, government, and education. Yet all these services and applications are almost completely dependent on optical networks for their realization. "Always on" or apparently unbreakable communications connectivity is the expectation from most users and for some services is the actual requirement as well. Achieving the desired level of availability of services, and doing so with some elegance and efficiency, is a meritorious goal for current researchers. This requires development and use of high-reliability components and subsystems, but also concepts for active reconfiguration and capacity planning leading to high availability of service through unseen fast-acting survivability mechanisms. The feature issue is also intended to reflect some of the most important current directions and objectives in optical networking research, which include the aspects of integrated design and operation of multilevel survivability and realization of multiple Quality-of-Protection service classes. Dynamic survivable service provisioning, or batch re-provisioning is an important current theme, as well as methods that achieve high availability at far less investment in spare capacity than required by brute force service path duplication or 100% redundant rings, which is still the surprisingly prevalent practice. Papers of several types are envisioned in the feature issue, including outlook and forecasting types of treatments, optimization and analysis, new concepts for survivability, or papers on availability analysis methods or results. Customer, vendor, and researcher viewpoints and priorities will all be given consideration. Especially valuable to the community would be papers that include or provide measured data on actual reliability and availability performance of optical networking components or systems. The scope of the papers includes, but is not limited to, the following topics: Reliability and availability measurement techniques specific to optical network devices or services. Data on SRLG statistics and frequency of different actual failure causes. Real-life accounts or data on failure and repair rates or projected values for use in availability analysis. Availability analysis methods, especially for survivable networks with reconfigurable or adaptive failure-specific responses. Availability analysis and comparisons of basic schemes for survivability. Differentiated availability schemes. Design for Multiple Quality of Protection. Different schemes for on-demand survivable service provisioning. Basic comparisons or proposals of new survivability mechanisms and architectures. Concepts yielding higher than 1+1 protection switching availability at less than 100% redundancy. Survivable service provisioning in domains of optical transparency: dealing with signal impairments. To submit to this special issue, follow the normal procedure for submission to JON, indicating "Feature Issue: Optical Network Availability" in the "Comments" field of the online submission form. 
For all other questions relating to this feature issue, please send an e-mail to jon@osa.org, subject line "Feature Issue: Optical Network Availability." Additional information can be found on the JON website: http://www.osa-jon.org/submission/
Montone, Verona O; Fraisse, Clyde W; Peres, Natalia A; Sentelhas, Paulo C; Gleason, Mark; Ellis, Michael; Schnabel, Guido
2016-11-01
Leaf wetness duration (LWD) plays a key role in disease development and is often used as an input in disease-warning systems. LWD is often estimated using mathematical models, since measurement by sensors is rarely available and/or reliable. A strawberry disease-warning system called "Strawberry Advisory System" (SAS) is used by growers in Florida, USA, in deciding when to spray their strawberry fields to control anthracnose and Botrytis fruit rot. Currently, SAS is implemented at six locations, where reliable LWD sensors are deployed. A robust LWD model would facilitate SAS expansion from Florida to other regions where reliable LW sensors are not available. The objective of this study was to evaluate the use of mathematical models to estimate LWD and time of spray recommendations in comparison to on-site LWD measurements. Specific objectives were to (i) compare model-estimated and observed LWD and resulting differences in timing and number of fungicide spray recommendations, (ii) evaluate the effects of weather station sensor precision on LWD model performance, and (iii) compare LWD model performance across four states in the USA. The LWD models evaluated were the classification and regression tree (CART), dew point depression (DPD), number of hours with relative humidity equal or greater than 90 % (NHRH ≥90 %), and Penman-Monteith (P-M). The P-M model was expected to have the lowest errors, since it is a physically based and thus portable model. Indeed, the P-M model estimated LWD most accurately (MAE <2 h) at a weather station with high precision sensors but was the least accurate when lower precision sensors of relative humidity and estimated net radiation (based on solar radiation and temperature) were used (MAE = 3.7 h). The CART model was the most robust for estimating LWD and for advising growers on fungicide-spray timing for anthracnose and Botrytis fruit rot control and is therefore the model we recommend for expanding the strawberry disease-warning system beyond Florida, to other locations where weather stations may be deployed with lower precision sensors, and net radiation observations are not available.
NASA Astrophysics Data System (ADS)
Montone, Verona O.; Fraisse, Clyde W.; Peres, Natalia A.; Sentelhas, Paulo C.; Gleason, Mark; Ellis, Michael; Schnabel, Guido
2016-11-01
Leaf wetness duration (LWD) plays a key role in disease development and is often used as an input in disease-warning systems. LWD is often estimated using mathematical models, since measurement by sensors is rarely available and/or reliable. A strawberry disease-warning system called "Strawberry Advisory System" (SAS) is used by growers in Florida, USA, in deciding when to spray their strawberry fields to control anthracnose and Botrytis fruit rot. Currently, SAS is implemented at six locations, where reliable LWD sensors are deployed. A robust LWD model would facilitate SAS expansion from Florida to other regions where reliable LW sensors are not available. The objective of this study was to evaluate the use of mathematical models to estimate LWD and time of spray recommendations in comparison to on-site LWD measurements. Specific objectives were to (i) compare model-estimated and observed LWD and resulting differences in timing and number of fungicide spray recommendations, (ii) evaluate the effects of weather station sensor precision on LWD model performance, and (iii) compare LWD model performance across four states in the USA. The LWD models evaluated were the classification and regression tree (CART), dew point depression (DPD), number of hours with relative humidity equal or greater than 90 % (NHRH ≥90 %), and Penman-Monteith (P-M). The P-M model was expected to have the lowest errors, since it is a physically based and thus portable model. Indeed, the P-M model estimated LWD most accurately (MAE <2 h) at a weather station with high precision sensors but was the least accurate when lower precision sensors of relative humidity and estimated net radiation (based on solar radiation and temperature) were used (MAE = 3.7 h). The CART model was the most robust for estimating LWD and for advising growers on fungicide-spray timing for anthracnose and Botrytis fruit rot control and is therefore the model we recommend for expanding the strawberry disease-warning system beyond Florida, to other locations where weather stations may be deployed with lower precision sensors, and net radiation observations are not available.
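Of the four models compared, the simplest to state is the relative-humidity threshold model (NHRH >= 90%): leaf wetness is assumed whenever hourly RH is at or above 90%, and LWD is the count of such hours. A minimal sketch follows; the 90% threshold is the one named in the text, while the example data are generic placeholders.

    def lwd_nhrh(hourly_rh_percent, threshold=90.0):
        """Estimate leaf wetness duration (hours) as the number of hours with RH >= threshold."""
        return sum(1 for rh in hourly_rh_percent if rh >= threshold)

    # Hypothetical 24 hourly RH values for one night/day cycle.
    rh = [88, 91, 93, 95, 96, 97, 97, 96, 94, 90, 85, 78,
          70, 65, 62, 60, 63, 68, 75, 82, 87, 89, 92, 94]
    print(lwd_nhrh(rh), "h of estimated leaf wetness")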
Ruff, Jessica; Wang, Tiffany L; Quatman-Yates, Catherine C; Phieffer, Laura S; Quatman, Carmen E
2015-02-01
Commercially available gaming systems (CAGS) such as the Wii Balance Board (WBB) and Microsoft Xbox with Kinect (Xbox Kinect) are increasingly used as balance training and rehabilitation tools. The purpose of this review was to answer the question, "Are commercially available gaming systems valid and reliable instruments for use as clinical diagnostic and functional assessment tools in orthopaedic settings?" and provide a summary of relevant studies, identify their strengths and weaknesses, and generate conclusions regarding general validity/reliability of WBB and Xbox Kinect in orthopaedics. A systematic search was performed using MEDLINE (1996-2013) and Scopus (1996-2013). Inclusion criteria were minimum of 5 subjects, full manuscript provided in English or translated, and studies incorporating investigation of CAG measurement properties. Exclusion criteria included reviews, systematic reviews, summary/clinical commentaries, or case studies; conference proceedings/presentations; cadaveric studies; studies of non-reversible, non-orthopaedic-related musculoskeletal disease; non-human trials; and therapeutic studies not reporting comparative evaluation to already established functional assessment criteria. All studies meeting inclusion and exclusion criteria were appraised for quality by two independent reviewers. Evidence levels (I-V) were assigned to each study based on established methodological criteria. 3 Level II, 7 Level III, and 1 Level IV studies met inclusion criteria and provided information related to the use of the WBB and Xbox Kinect as clinical assessment tools in the field of orthopaedics. Studies have used the WBB in a variety of clinical applications, including the measurement of center of pressure (COP), measurement of medial-to-lateral (M/L) or anterior-to-posterior (A/P) symmetry, assessment of anatomic landmark positioning, and assessment of fall risk. However, no uniform protocols or outcomes were used to evaluate the quality of the WBB as a clinical assessment tool; therefore a wide range of sensitivities, specificities, accuracies, and validities were reported. Currently it is not possible to make a universal generalization about the clinical utility of CAGS in the field of orthopaedics. However, there is evidence to support using the WBB and the Xbox Kinect as tools to obtain reliable and valid COP measurements. The Wii Fit Game may specifically provide reliable and valid measurements for predicting fall risk. Copyright © 2014 Elsevier Ltd. All rights reserved.
USDA-ARS?s Scientific Manuscript database
High performance liquid chromatography of dabsyl derivatives of amino acids was employed for quantification of physiological amino acids in selected fruits and vegetables. This method was found to be particularly useful because the dabsyl derivatives of glutamine and citrulline were sufficiently se...
The Impact of Causality on Information-Theoretic Source and Channel Coding Problems
ERIC Educational Resources Information Center
Palaiyanur, Harikrishna R.
2011-01-01
This thesis studies several problems in information theory where the notion of causality comes into play. Causality in information theory refers to the timing of when information is available to parties in a coding system. The first part of the thesis studies the error exponent (or reliability function) for several communication problems over…
75 FR 14103 - Version One Regional Reliability Standard for Resource and Demand Balancing
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-24
... meant to maintain scheduled frequency and avoid loss of firm load following transmission or generation... capacity is available at all times to maintain scheduled frequency, and avoid loss of firm load following... the possibility that firm load could be shed due to the loss of a single element on the system.\\40...
75 FR 12737 - Application To Export Electric Energy; Integrys Energy Services, Inc.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-17
... reliability of the U.S. electric power supply system. Copies of this application will be made available, upon... Federal Power Act. DATES: Comments, protests, or requests to intervene must be submitted on or before... energy from the United States to Canada as a power marketer for a period of five years. That...
Materials Research Capabilities
NASA Technical Reports Server (NTRS)
Stofan, Andrew J.
1986-01-01
Lewis Research Center, in partnership with U.S. industry and academia, has long been a major force in developing advanced aerospace propulsion and power systems. One key aspect that made many of these systems possible has been the availability of high-performance, reliable, and long-life materials. To assure a continuing flow of new materials and processing concepts, basic understanding to guide such innovation, and technological support for development of major NASA systems, Lewis has supported a strong in-house materials research activity. Our researchers have discovered new alloys, polymers, metallic composites, ceramics, coatings, processing techniques, etc., which are now also in use by U.S. industry. This brochure highlights selected past accomplishments of our materials research and technology staff. It also provides many examples of the facilities available with which we can conduct materials research. The nation is now beginning to consider integrating technology for high-performance supersonic/hypersonic aircraft, nuclear space power systems, a space station, and new research areas such as materials processing in space. As we proceed, I am confident that our materials research staff will continue to provide important contributions which will help our nation maintain a strong technology position in these areas of growing world competition.
The Need for Intelligent Control of Space Power Systems
NASA Technical Reports Server (NTRS)
May, Ryan David; Soeder, James F.; Beach, Raymond F.; McNelis, Nancy B.
2013-01-01
As manned spacecraft venture farther from Earth, the need for reliable, autonomous control of vehicle subsystems becomes critical. This is particularly true for the electrical power system, which is critical to every other system. Autonomy cannot be achieved by simple scripting techniques due to communication latency times and the difficulty associated with failures (or combinations of failures) that need to be handled in as graceful a manner as possible to ensure system availability. Therefore an intelligent control system must be developed that can respond to disturbances and failures in a robust manner and ensure that critical system loads are served and all system constraints are respected.
Intra- and Interobserver Reliability of Three Classification Systems for Hallux Rigidus.
Dillard, Sarita; Schilero, Christina; Chiang, Sharon; Pham, Peter
2018-04-18
There are over ten classification systems currently used in the staging of hallux rigidus. This results in confusion and inconsistency with radiographic interpretation and treatment. The reliability of hallux rigidus classification systems has not yet been tested. The purpose of this study was to evaluate intra- and interobserver reliability using three commonly used classifications for hallux rigidus. Twenty-one plain radiograph sets were presented to ten ACFAS board-certified foot and ankle surgeons. Each physician classified each radiograph based on clinical experience and knowledge according to the Regnauld, Roukis, and Hattrup and Johnson classification systems. The two-way mixed single-measure consistency intraclass correlation was used to calculate intra- and interrater reliability. The intrarater reliability of individual sets for the Roukis and Hattrup and Johnson classification systems was "fair to good" (Roukis, 0.62±0.19; Hattrup and Johnson, 0.62±0.28), whereas the intrarater reliability of individual sets for the Regnauld system bordered between "fair to good" and "poor" (0.43±0.24). The interrater reliability of the mean classification was "excellent" for all three classification systems. In conclusion, reliable and reproducible classification systems are essential for treatment and prognostic implications in hallux rigidus. In our study, the Roukis classification system had the best intrarater reliability. Although there are various classification systems for hallux rigidus, our results indicate that all three of these classification systems show reliability and reproducibility.
NASA Technical Reports Server (NTRS)
Orr, James K.; Peltier, Daryl
2010-01-01
This slide presentation reviews the avionics software system on board the space shuttle, with particular emphasis on its quality and reliability. The Primary Avionics Software System (PASS) provides automatic and fly-by-wire control of critical shuttle systems and executes in redundant computers. Charts show the number of space shuttle flights versus time, PASS's development history, and other data that point to the reliability of the system's development. The achieved reliability of the system is also compared to predicted reliability.
Informatics in radiology (infoRAD): A complete continuous-availability PACS archive server.
Liu, Brent J; Huang, H K; Cao, Fei; Zhou, Michael Z; Zhang, Jianguo; Mogel, Greg
2004-01-01
The operational reliability of the picture archiving and communication system (PACS) server in a filmless hospital environment is always a major concern because server failure could cripple the entire PACS operation. A simple, low-cost, continuous-availability (CA) PACS archive server was designed and developed. The server makes use of a triple modular redundancy (TMR) system with a simple majority voting logic that automatically identifies a faulty module and removes it from service. The remaining two modules continue normal operation with no adverse effects on data flow or system performance. In addition, the server is integrated with two external mass storage devices for short- and long-term storage. Evaluation and testing of the server were conducted with laboratory experiments in which hardware failures were simulated to observe recovery time and the resumption of normal data flow. The server provides maximum uptime (99.999%) for end users while ensuring the transactional integrity of all clinical PACS data. Hardware failure has only minimal impact on performance, with no interruption of clinical data flow or loss of data. As hospital PACS become more widespread, the need for CA PACS solutions will increase. A TMR CA PACS archive server can reliably help achieve CA in this setting. Copyright RSNA, 2004
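The majority-voting idea behind the TMR server can be sketched in a few lines: three modules process the same transaction, and the voter accepts any value returned by at least two of them, flagging the dissenting module for removal from service. This is a generic illustration of triple modular redundancy, not the implementation described in the article.

    from collections import Counter

    def tmr_vote(results):
        """results: the three module outputs for one transaction.
        Returns (voted_value, indices_of_faulty_modules) or raises if no majority exists."""
        counts = Counter(results)
        value, votes = counts.most_common(1)[0]
        if votes < 2:
            raise RuntimeError("no majority -- transaction must be retried or escalated")
        faulty = [i for i, r in enumerate(results) if r != value]
        return value, faulty

    # Example: module 2 returns a corrupted archive acknowledgment (hypothetical values).
    print(tmr_vote(["stored:study-123", "stored:study-123", "stored:study-12X"]))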
Instrumentation for accelerated life tests of concentrator solar cells.
Núñez, N; Vázquez, M; González, J R; Jiménez, F J; Bautista, J
2011-02-01
Concentrator photovoltaic is an emergent technology that may be a good economical and efficient alternative for the generation of electricity at a competitive cost. However, the reliability of these new solar cells and systems is still an open issue due to the high-irradiation level they are subjected to as well as the electrical and thermal stresses that they are expected to endure. To evaluate the reliability in a short period of time, accelerated aging tests are essential. Thermal aging tests for concentrator photovoltaic solar cells and systems under illumination are not available because no technical solution to the problem of reaching the working concentration inside a climatic chamber has been available. This work presents an automatic instrumentation system that overcomes the aforementioned limitation. Working conditions have been simulated by forward biasing the solar cells to the current they would handle at the working concentration (in this case, 700 and 1050 times the irradiance at one standard sun). The instrumentation system has been deployed for more than 10 000 h in a thermal aging test for III-V concentrator solar cells, in which the generated power evolution at different temperatures has been monitored. As a result of this test, the acceleration factor has been calculated, thus allowing for the degradation evolution at any temperature in addition to normal working conditions to be obtained.
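The acceleration factor mentioned here is conventionally obtained from an Arrhenius model, AF = exp[(Ea/k)(1/T_use - 1/T_stress)], with temperatures in kelvin. The sketch below uses an assumed activation energy and example temperatures; the activation energy actually derived in the test is not given in the abstract.

    import math

    BOLTZMANN_EV_PER_K = 8.617e-5   # eV/K

    def arrhenius_af(t_use_c, t_stress_c, activation_energy_ev):
        """Acceleration factor of a stress temperature relative to use temperature."""
        t_use_k = t_use_c + 273.15
        t_stress_k = t_stress_c + 273.15
        return math.exp((activation_energy_ev / BOLTZMANN_EV_PER_K) *
                        (1.0 / t_use_k - 1.0 / t_stress_k))

    # Example: cells aged at 120 C, intended use near 65 C, assumed Ea = 0.7 eV.
    af = arrhenius_af(65.0, 120.0, 0.7)
    print("Acceleration factor:", round(af, 1))
    print("10,000 h at stress is roughly", round(10_000 * af), "h at use conditions")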
2015 NREL Photovoltaic Reliability Workshops
The 2015 NREL Photovoltaic Reliability Workshop was held February 24-27, 2015, in Golden, Colorado. Presentations from this event will be available for download as soon as possible.
Source Data Applicability Impacts on Epistemic Uncertainty for Launch Vehicle Fault Tree Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven D.; Ring, Robert W.
2016-01-01
Launch vehicle systems are designed and developed using both heritage and new hardware. Design modifications to the heritage hardware to fit new functional system requirements can impact the applicability of heritage reliability data. Risk estimates for newly designed systems must be developed from generic data sources such as commercially available reliability databases using reliability prediction methodologies, such as those addressed in MIL-HDBK-217F. Failure estimates must be converted from the generic environment to the specific operating environment of the system where it is used. In addition, some qualification of applicability for the data source to the current system should be made. Characterizing data applicability under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This paper will demonstrate a data-source applicability classification method for assigning uncertainty to a target vehicle based on the source and operating environment of the originating data. The source applicability is determined using heuristic guidelines while translation of operating environments is accomplished by applying statistical methods to MIL-HDBK-217F tables. The paper will provide a case study example by translating Ground Benign (GB) and Ground Mobile (GM) to the Airborne Uninhabited Fighter (AUF) environment for three electronic components often found in space launch vehicle control systems. The classification method will be followed by uncertainty-importance routines to assess the need for more applicable data to reduce uncertainty.
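The environment translation step described here can be sketched as scaling a source failure rate by the ratio of environment factors, lambda_target = lambda_source x (pi_E,target / pi_E,source), with the spread of plausible ratios carried along as epistemic uncertainty. The pi_E values below are placeholders for illustration only; they are not taken from the MIL-HDBK-217F tables.

    # Placeholder environment factors (NOT actual MIL-HDBK-217F values).
    PI_E = {"GB": 1.0, "GM": 4.0, "AUF": 9.0}

    def translate_failure_rate(lambda_source, env_source, env_target, ratio_uncertainty=0.3):
        """Scale a failure rate between operating environments and attach a crude
        +/- bound reflecting uncertainty in the environment-factor ratio."""
        ratio = PI_E[env_target] / PI_E[env_source]
        nominal = lambda_source * ratio
        return nominal, (nominal * (1 - ratio_uncertainty), nominal * (1 + ratio_uncertainty))

    # Example: a component failure rate of 2e-7 per hour observed in a Ground Benign source.
    print(translate_failure_rate(2e-7, "GB", "AUF"))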
NASA Technical Reports Server (NTRS)
Tapia, Moiez A.
1993-01-01
A comparative analysis of distinct multiplex and fault-tolerant configurations for a PLC-based safety system from a reliability point of view is presented. It considers simplex, duplex, and fault-tolerant triple redundancy configurations. The standby unit in the duplex configuration has a failure rate that is k times the failure rate of the main unit, with the value of k varying from 0 to 1. For distinct values of MTTR and MTTF of the main unit, the MTBF and availability of these configurations are calculated. The effect of duplexing only the PLC module, or only the sensor and actuator modules, on the MTBF of the configuration is also presented. The results are summarized, and the merits and demerits of the various configurations under distinct environments are discussed.
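A simplified three-state Markov sketch of the duplex case, assuming the main unit fails at rate lambda, the standby at rate k*lambda, a single repair resource with rate mu, and perfect switchover; steady-state availability is the probability of being in a state with at least one good unit. This is an illustrative model, not the formulation used in the study.

    import numpy as np

    def duplex_availability(lam, k, mu):
        """States: 2 = both units good, 1 = one good, 0 = system down (single repair crew)."""
        Q = np.array([
            [-(lam + k * lam),  lam + k * lam,  0.0],   # from state 2: either unit fails
            [mu,               -(mu + lam),     lam],   # from state 1: repair or second failure
            [0.0,               mu,            -mu],    # from state 0: repair brings one unit back
        ])
        # Solve pi Q = 0 with sum(pi) = 1.
        A = np.vstack([Q.T, np.ones(3)])
        b = np.array([0.0, 0.0, 0.0, 1.0])
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pi[0] + pi[1]          # probability that at least one unit is operating

    # Example: MTTF of 5,000 h (lambda = 2e-4/h), standby at k = 0.3, MTTR of 8 h (mu = 0.125/h).
    print(duplex_availability(lam=2e-4, k=0.3, mu=0.125))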
NASA Technical Reports Server (NTRS)
Donovan, William J.; Davis, John E.
1991-01-01
Rockwell International is conducting an ongoing program to develop avionics architectures that provide high intrinsic value while meeting all mission objectives. Studies are being conducted to determine alternative configurations that have low life-cycle cost and minimum development risk, and that minimize launch delays while providing the reliability level to assure a successful mission. This effort is based on four decades of providing ballistic missile avionics to the United States Air Force and has focused on the requirements of the NASA Cargo Transfer Vehicle (CTV) program in 1991. During the development of architectural concepts it became apparent that rendezvous strategy issues have an impact on the architecture of the avionics system. This is in addition to the expected impact on propulsion and electrical power duration, flight profiles, and trajectory during approach.
Abusive behavior is barrier to high-reliability health care systems, culture of patient safety.
Cassirer, C; Anderson, D; Hanson, S; Fraser, H
2000-11-01
Addressing abusive behavior in the medical workplace presents an important opportunity to deliver on the national commitment to improve patient safety. Fundamentally, the issue of patient safety and the issue of abusive behavior in the workplace are both about harm. Undiagnosed and untreated, abusive behavior is a barrier to creating high reliability service delivery systems that ensure patient safety. Health care managers and clinicians need to improve their awareness, knowledge, and understanding of the issue of workplace abuse. The available research suggests there is a high prevalence of workplace abuse in medicine. Both administrators at the blunt end and clinicians at the sharp end should consider learning new approaches to defining and treating the problem of workplace abuse. Eliminating abusive behavior has positive implications for preventing and controlling medical injury and improving organizational performance.
Three real-time architectures - A study using reward models
NASA Technical Reports Server (NTRS)
Sjogren, J. A.; Smith, R. M.
1990-01-01
Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the evolutionary behavior of the computer system by a continuous-time Markov chain, and a reward rate is associated with each state. In reliability/availability models, upstates have reward rate 1, and down states have reward rate zero associated with them. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Steady-state expected reward rate and expected instantaneous reward rate are clearly useful measures which can be extracted from the Markov reward model. The diversity of areas where Markov reward models may be used is illustrated with a comparative study of three examples of interest to the fault tolerant computing community.
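In the notation typically used for such models (a sketch with generic symbols): let Q be the generator of the continuous-time Markov chain, P_i(t) the state probabilities, pi the steady-state distribution, and r_i the reward rate attached to state i. Then

    \frac{d\mathbf{P}(t)}{dt} = \mathbf{P}(t)\,Q, \qquad \boldsymbol{\pi} Q = 0, \quad \sum_i \pi_i = 1

    E[X(t)] = \sum_i r_i P_i(t) \quad \text{(expected instantaneous reward rate)}, \qquad
    E[X] = \sum_i r_i \pi_i \quad \text{(steady-state expected reward rate)}

With r_i = 1 for up states and r_i = 0 for down states, E[X(t)] reduces to point availability and E[X] to steady-state availability; with r_i set to the computational capacity of state i, the same expressions give combined performance-reliability measures.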
Reliability and Cost Impacts for Attritable Systems
2017-03-23
and cost risk metrics to convey the value of reliability and reparability trades. Investigation of the benefit of trading system reparability ... illustrates the benefit that reliability engineering can have on total cost. Hogge (2012) identifies two distinct ... Investigation of the benefit of trading system reparability shows a marked increase in cost risk. Yet, trades in
Evaluation of reliability modeling tools for advanced fault tolerant systems
NASA Technical Reports Server (NTRS)
Baker, Robert; Scheper, Charlotte
1986-01-01
The Computer Aided Reliability Estimation (CARE III) and Automated Reliability Interactive Estimation System (ARIES 82) reliability tools were evaluated for application to advanced fault-tolerant aerospace systems. To determine reliability modeling requirements, the evaluation focused on the Draper Laboratories' Advanced Information Processing System (AIPS) architecture as an example architecture for fault-tolerant aerospace systems. Advantages and limitations were identified for each reliability evaluation tool. The CARE III program was designed primarily for analyzing ultrareliable flight control systems. The ARIES 82 program's primary use was to support university research and teaching. Neither CARE III nor ARIES 82 was suited for determining the reliability of complex nodal networks of the type used to interconnect processing sites in the AIPS architecture. It was concluded that ARIES was not suitable for modeling advanced fault-tolerant systems. It was further concluded that, subject to some limitations (difficulty in modeling systems with unpowered spare modules, systems where equipment maintenance must be considered, systems where failure depends on the sequence in which faults occurred, and systems where multiple faults beyond double near-coincident faults must be considered), CARE III is best suited for evaluating the reliability of advanced fault-tolerant systems for air transport.
Bulk electric system reliability evaluation incorporating wind power and demand side management
NASA Astrophysics Data System (ADS)
Huang, Dange
Electric power systems are experiencing dramatic changes with respect to structure, operation and regulation and are facing increasing pressure due to environmental and societal constraints. Bulk electric system reliability is an important consideration in power system planning, design and operation particularly in the new competitive environment. A wide range of methods have been developed to perform bulk electric system reliability evaluation. Theoretically, sequential Monte Carlo simulation can include all aspects and contingencies in a power system and can be used to produce an informative set of reliability indices. It has become a practical and viable technique for large system reliability assessment due to the development of computing power and is used in the studies described in this thesis. The well-being approach used in this research provides the opportunity to integrate an accepted deterministic criterion into a probabilistic framework. This research work includes the investigation of important factors that impact bulk electric system adequacy evaluation and security constrained adequacy assessment using the well-being analysis framework. Load forecast uncertainty is an important consideration in an electrical power system. This research includes load forecast uncertainty considerations in bulk electric system reliability assessment and the effects on system, load point and well-being indices and reliability index probability distributions are examined. There has been increasing worldwide interest in the utilization of wind power as a renewable energy source over the last two decades due to enhanced public awareness of the environment. Increasing penetration of wind power has significant impacts on power system reliability, and security analyses become more uncertain due to the unpredictable nature of wind power. The effects of wind power additions in generating and bulk electric system reliability assessment considering site wind speed correlations and the interactive effects of wind power and load forecast uncertainty on system reliability are examined. The concept of the security cost associated with operating in the marginal state in the well-being framework is incorporated in the economic analyses associated with system expansion planning including wind power and load forecast uncertainty. Overall reliability cost/worth analyses including security cost concepts are applied to select an optimal wind power injection strategy in a bulk electric system. The effects of the various demand side management measures on system reliability are illustrated using the system, load point, and well-being indices, and the reliability index probability distributions. The reliability effects of demand side management procedures in a bulk electric system including wind power and load forecast uncertainty considerations are also investigated. The system reliability effects due to specific demand side management programs are quantified and examined in terms of their reliability benefits.
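For readers unfamiliar with the technique, the following is a minimal, hedged sketch of a sequential (chronological) Monte Carlo adequacy run on a toy generating system; the unit data, load profile, and indices are invented and are not the test systems used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
units = [dict(cap=100.0, mttf=1000.0, mttr=50.0) for _ in range(6)]     # six 100 MW units
hourly_load = 450.0 + 50.0 * np.sin(np.arange(8760) * 2 * np.pi / 24)  # toy load curve (MW)

def simulate_year():
    """One chronological sample year: returns (hours of load loss, unserved energy in MWh)."""
    up = np.ones(len(units), dtype=bool)
    next_event = np.array([rng.exponential(u["mttf"]) for u in units])  # first failure times
    lole_h, eens = 0.0, 0.0
    for hour, load in enumerate(hourly_load):
        for i in np.flatnonzero(next_event <= hour):      # process due up/down transitions
            up[i] = not up[i]
            u = units[i]
            next_event[i] = hour + rng.exponential(u["mttf"] if up[i] else u["mttr"])
        margin = sum(u["cap"] for i, u in enumerate(units) if up[i]) - load
        if margin < 0:
            lole_h += 1.0
            eens += -margin
    return lole_h, eens

years = np.array([simulate_year() for _ in range(100)])
print("LOLE ~ %.2f h/yr, EENS ~ %.1f MWh/yr" % (years[:, 0].mean(), years[:, 1].mean()))
```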
3-Dimensional Root Cause Diagnosis via Co-analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Ziming; Lan, Zhiling; Yu, Li
2012-01-01
With the growth of system size and complexity, reliability has become a major concern for large-scale systems. Upon the occurrence of a failure, system administrators typically trace the events in Reliability, Availability, and Serviceability (RAS) logs for root cause diagnosis. However, the RAS log only contains limited diagnosis information. Moreover, the manual processing is time-consuming, error-prone, and not scalable. To address the problem, in this paper we present an automated root cause diagnosis mechanism for large-scale HPC systems. Our mechanism examines multiple logs to provide a 3-D fine-grained root cause analysis. Here, 3-D means that our analysis will pinpoint the failure layer, the time, and the location of the event that causes the problem. We evaluate our mechanism by means of real logs collected from a production IBM Blue Gene/P system at Oak Ridge National Laboratory. It successfully identifies failure layer information for 219 failures during a 23-month period. Furthermore, it effectively identifies the triggering events with time and location information, even when the triggering events occur hundreds of hours before the resulting failures.
[Pressure distribution measurements during use of wheelchairs].
Meiners, T; Friedrich, G; Krüger, A; Böhm, V
2001-04-01
There is a growing number of patients who are mobility-impaired and wheelchair-dependent because of diseases and injuries of the central nervous system. The risk of developing pressure sores is high due to disturbances of the motor, sensory, and autonomic nervous system. Numerous seating systems for prophylaxis and treatment of decubitus ulcers are available. To identify risk parameters, the literature on animal experiments regarding pressure ulcers was reviewed. A study on the reproducibility of the analysis method with capacitive sensors, tested in ten paraplegics with 470 measurements, is presented. It shows the reliability of the procedure.
Majuru, Batsirai; Jagals, Paul; Hunter, Paul R
2012-10-01
Although a number of studies have reported on water supply improvements, few have simultaneously taken into account the reliability of the water services. The study aimed to assess whether upgrading water supply systems in small rural communities improved access, availability and potability of water by assessing the water services against selected benchmarks from the World Health Organisation and South African Department of Water Affairs, and to determine the impact of unreliability on the services. These benchmarks were applied in three rural communities in Limpopo, South Africa where rudimentary water supply services were being upgraded to basic services. Data were collected through structured interviews, observations and measurement, and multi-level linear regression models were used to assess the impact of water service upgrades on key outcome measures of distance to source, daily per capita water quantity and Escherichia coli count. When the basic system was operational, 72% of households met the minimum benchmarks for distance and water quantity, but only 8% met both enhanced benchmarks. During non-operational periods of the basic service, daily per capita water consumption decreased by 5.19 L (p<0.001, 95% CI 4.06-6.31) and distances to water sources were 639 m further (p ≤ 0.001, 95% CI 560-718). Although both rudimentary and basic systems delivered water that met potability criteria at the sources, the quality of stored water sampled in the home was still unacceptable throughout the various service levels. These results show that basic water services can make substantial improvements to water access, availability, and potability, but only if such services are reliable. Copyright © 2012 Elsevier B.V. All rights reserved.
System and Software Reliability (C103)
NASA Technical Reports Server (NTRS)
Wallace, Dolores
2003-01-01
Within the last decade, better reliability models (hardware, software, system) than those currently used have been theorized and developed but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g., OO) have appeared, and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that the products meet NASA requirements for reliability measurement. For the new models for the software component of the last decade, there is a great need to bring them into a form in which they can be used on software-intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability modeling changes to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability. System reliability models could then be incorporated in a tool such as SMERFS'3. This tool with better models would greatly add value in assessing GSFC projects.
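As a hedged illustration of the kind of software reliability growth model such a tool might fit (not SMERFS'3 itself), the sketch below fits a Goel-Okumoto NHPP, m(t) = a(1 - e^(-bt)), to made-up cumulative failure counts.

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    # Expected cumulative failures by time t: a = total faults, b = detection rate.
    return a * (1.0 - np.exp(-b * t))

weeks = np.arange(1, 13)                                   # illustrative test weeks
cum_failures = np.array([5, 9, 13, 16, 18, 20, 21, 23, 24, 24, 25, 25])

(a, b), _ = curve_fit(goel_okumoto, weeks, cum_failures, p0=(30.0, 0.2))
print("estimated total faults a = %.1f, detection rate b = %.3f" % (a, b))
print("expected residual faults: %.1f" % (a - goel_okumoto(weeks[-1], a, b)))
```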
Reliability and availability analysis of a 10 kW@20 K helium refrigerator
NASA Astrophysics Data System (ADS)
Li, J.; Xiong, L. Y.; Liu, L. Q.; Wang, H. R.; Wang, B. M.
2017-02-01
A 10 kW@20 K helium refrigerator has been established at the Technical Institute of Physics and Chemistry, Chinese Academy of Sciences. To evaluate and improve this refrigerator's reliability and availability, a reliability and availability analysis is performed. According to the mission profile of this refrigerator, a functional analysis is performed. The failure data of the refrigerator components are collected and failure rate distributions are fitted with the software Weibull++ V10.0. A Failure Modes, Effects & Criticality Analysis (FMECA) is performed and the critical components with higher risks are pointed out. The software BlockSim V9.0 is used to calculate the reliability and the availability of this refrigerator. The result indicates that the compressors, turbine, and vacuum pump are the critical components and the key units of this refrigerator. Mitigation actions with respect to design, testing, maintenance, and operation are proposed to reduce the major and medium risks.
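The following hedged sketch mimics the life-data step described above (done in the study with Weibull++ and BlockSim) using open tools; the component and its failure times are invented.

```python
import numpy as np
from math import gamma
from scipy import stats

# Invented times-to-failure (hours) for one component class, e.g. a compressor.
ttf_hours = np.array([2100., 3400., 4700., 5200., 6100., 7300., 8800.])

shape, loc, scale = stats.weibull_min.fit(ttf_hours, floc=0)   # 2-parameter Weibull fit
print("Weibull shape (beta) = %.2f, scale (eta) = %.0f h" % (shape, scale))

mttf = scale * gamma(1.0 + 1.0 / shape)   # mean time to failure of the fitted Weibull
mttr = 24.0                               # assumed mean time to repair (hours)
print("steady-state availability ~ %.4f" % (mttf / (mttf + mttr)))
```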
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-19
... Reliability Operating Limits; System Restoration Reliability Standards AGENCY: Federal Energy Regulatory... data necessary to analyze and monitor Interconnection Reliability Operating Limits (IROL) within its... Interconnection Reliability Operating Limits, Order No. 748, 134 FERC ] 61,213 (2011). \\2\\ The term ``Wide-Area...
Li, Xingxing; Zhang, Xiaohong; Ren, Xiaodong; Fritsche, Mathias; Wickert, Jens; Schuh, Harald
2015-01-01
The world of satellite navigation is undergoing dramatic changes with the rapid development of multi-constellation Global Navigation Satellite Systems (GNSSs). At the moment more than 70 satellites are already in view, and about 120 satellites will be available once all four systems (BeiDou + Galileo + GLONASS + GPS) are fully deployed in the next few years. This will bring great opportunities and challenges for both scientific and engineering applications. In this paper we develop a four-system positioning model to make full use of all available observations from different GNSSs. The significant improvement of satellite visibility, spatial geometry, dilution of precision, convergence, accuracy, continuity and reliability that a combining utilization of multi-GNSS brings to precise positioning are carefully analyzed and evaluated, especially in constrained environments. PMID:25659949
A Method for Evaluating the Safety Impacts of Air Traffic Automation
NASA Technical Reports Server (NTRS)
Kostiuk, Peter; Shapiro, Gerald; Hanson, Dave; Kolitz, Stephan; Leong, Frank; Rosch, Gene; Bonesteel, Charles
1998-01-01
This report describes a methodology for analyzing the safety and operational impacts of emerging air traffic technologies. The approach integrates traditional reliability models of the system infrastructure with models that analyze the environment within which the system operates, and models of how the system responds to different scenarios. Products of the analysis include safety measures such as predicted incident rates, predicted accident statistics, and false alarm rates; and operational availability data. The report demonstrates the methodology with an analysis of the operation of the Center-TRACON Automation System at Dallas-Fort Worth International Airport.
Dual linear accelerator system for use in sterilization of medical disposable supplies
NASA Astrophysics Data System (ADS)
Sadat, Theo
1991-05-01
Accelerators can be used for sterilization or decontamination (medical disposables, food, plastics, hospital waste, etc.). Most of these accelerators are located in an industrial environment and must have a high availability. A dual accelerator system (composed of two accelerators) offers optimal flexibility and reliability. The main advantage of this system is "all-in all-out" because it does not need a turnover of products. Such a dual system, composed of two 10 MeV 20 kW linear accelerators (instead of one 40 kW linac), has been chosen by a Swedish company (Mölnlycke).
NASA Astrophysics Data System (ADS)
Murphy, K. L.; Rygalov, V. Ye.; Johnson, S. B.
2009-04-01
All artificial systems and components in space degrade at higher rates than on Earth, depending in part on environmental conditions, design approach, assembly technologies, and the materials used. This degradation involves not only the hardware and software systems but the humans that interact with those systems. All technological functions and systems can be expressed through a functional dependence: [Function] ~ [ERU] * [RUIS] * [ISR] / [DR], where [ERU] is the efficiency (rate) of environmental resource utilization, [RUIS] is the resource utilization infrastructure, [ISR] is the in situ resources, and [DR] is the degradation rate. The limited resources of spaceflight and open space for autonomous missions require a high reliability (maximum possible, approaching 100%) for system functioning and operation, and must minimize the rate of any system degradation. To date, only a continuous human presence with a system in the spaceflight environment can absolutely mitigate those degradations. This mitigation is based on environmental amelioration for both the technology systems, as repair of data and spare parts, and the humans, as exercise and psychological support. Such maintenance now requires huge infrastructures, including research and development complexes and management agencies, which currently cannot move beyond the Earth. When considering what is required to move manned spaceflight from near Earth stations to remote locations such as Mars, what are the minimal technologies and infrastructures necessary for autonomous restoration of a degrading system in space? In all of the known system factors of a mission to Mars that reduce the mass load, increase the reliability, and reduce the mission’s overall risk, the current common denominator is the use of undeveloped or untested technologies. None of the technologies required to significantly reduce the risk for critical systems are currently available at acceptable readiness levels. Long term interplanetary missions require that space programs produce a craft with all systems integrated so that they are of the highest reliability. Right now, with current technologies, we cannot guarantee this reliability for a crew of six for 1000 days to Mars and back. Investigation of the technologies to answer this need and a focus of resources and research on their advancement would significantly improve chances for a safe and successful mission.
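A cleaner typeset reading of the dependence quoted above, using the same symbols as the abstract:

```latex
\[
  \text{Function} \;\sim\; \frac{ERU \cdot RUIS \cdot ISR}{DR}
\]
\noindent where $ERU$ is the efficiency (rate) of environmental resource utilization,
$RUIS$ the resource utilization infrastructure, $ISR$ the in situ resources,
and $DR$ the degradation rate.
```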
Amini, Michael H; Sykes, Joshua B; Olson, Stephen T; Smith, Richard A; Mauck, Benjamin M; Azar, Frederick M; Throckmorton, Thomas W
2015-03-01
The severity of elbow arthritis is one of many factors that surgeons must evaluate when considering treatment options for a given patient. Elbow surgeons have historically used the Broberg and Morrey (BM) and Hastings and Rettig (HR) classification systems to radiographically stage the severity of post-traumatic arthritis (PTA) and primary osteoarthritis (OA). We proposed to compare the intraobserver and interobserver reliability between systems for patients with either PTA or OA. The radiographs of 45 patients were evaluated at least 2 weeks apart by 6 evaluators of different levels of training. Intraobserver and interobserver reliability were calculated by Spearman correlation coefficients with 95% confidence intervals. Agreement was considered almost perfect for coefficients >0.80 and substantial for coefficients of 0.61 to 0.80. In patients with both PTA and OA, intraobserver reliability and interobserver reliability were substantial, with no difference between classification systems. There were no significant differences in intraobserver or interobserver reliability between attending physicians and trainees for either classification system (all P > .10). The presence of fracture implants did not affect reliability in the BM system but did substantially worsen reliability in the HR system (intraobserver P = .04 and interobserver P = .001). The BM and HR classifications both showed substantial intraobserver and interobserver reliability for PTA and OA. Training level differences did not affect reliability for either system. Both trainees and fellowship-trained surgeons may easily and reliably apply each classification system to the evaluation of primary elbow OA and PTA, although the HR system was less reliable in the presence of fracture implants. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Reliable actuators for twin rotor MIMO system
NASA Astrophysics Data System (ADS)
Rao, Vidya S.; V. I, George; Kamath, Surekha; Shreesha, C.
2017-11-01
Twin Rotor MIMO System (TRMS) is a benchmark system used to test flight control algorithms. One of the perturbations on the TRMS that is likely to affect the control system is actuator failure. Therefore, there is a need for a reliable control system, which includes an H infinity controller along with redundant actuators. Reliable control refers to the design of a control system that tolerates failures of a certain set of actuators or sensors while retaining desired control system properties. The output of the reliable controller has to be transferred to the redundant actuator effectively to make the TRMS reliable even under actual actuator failure.
Development of an integrated control and measurement system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manges, W.W.
1984-03-01
This thesis presents a tutorial on the issues involved in the development of a minicomputer-based, distributed intelligence data acquisition and process control system to support complex experimental facilities. The particular system discussed in this thesis is under development for the Atomic Vapor Laser Isotope Separation (AVLIS) Program at the Oak Ridge Gaseous Diffusion Plant (ORGDP). In the AVLIS program, we were careful to integrate the computer sections of the implementation into the instrumentation system rather than adding them as an appendage. We then addressed the reliability and availability of the system as a separate concern. Thus, our concept of an integrated control and measurement (ICAM) system forms the basis for this thesis. This thesis details the logic and philosophy that went into the development of this system and explains why the commercially available turn-key systems generally are not suitable. Also, the issues involved in the specification of the components for such an integrated system are emphasized.
Sleeper, Mark D; Kenyon, Lisa K; Elliott, James M; Cheng, M Samuel
2016-12-01
Despite the availability of various field-tests for many competitive sports, a reliable and valid test specifically developed for use in men's gymnastics has not yet been developed. The Men's Gymnastics Functional Measurement Tool (MGFMT) was designed to assess sport-specific physical abilities in male competitive gymnasts. The purpose of this study was to develop the MGFMT by establishing a scoring system for individual test items and to initiate the process of establishing test-retest reliability and construct validity. A total of 83 competitive male gymnasts ages 7-18 underwent testing using the MGFMT. Thirty of these subjects underwent re-testing one week later in order to assess test-retest reliability. Construct validity was assessed using a simple regression analysis between total MGFMT scores and the gymnasts' USA-Gymnastics competitive level to calculate the coefficient of determination (r²). Test-retest reliability was analyzed using Model 1 intraclass correlation coefficients (ICC). Statistical significance was set at the p<0.05 level. The relationship between total MGFMT scores and subjects' current USA-Gymnastics competitive level was found to be good (r² = 0.63). Reliability testing of the MGFMT composite test score showed excellent test-retest reliability over a one-week period (ICC = 0.97). Test-retest reliability of the individual component tests ranged from good to excellent (ICC = 0.75-0.97). The results of this study provide initial support for the construct validity and test-retest reliability of the MGFMT. Level 3.
Time Triggered Protocol (TTP) for Integrated Modular Avionics
NASA Technical Reports Server (NTRS)
Motzet, Guenter; Gwaltney, David A.; Bauer, Guenther; Jakovljevic, Mirko; Gagea, Leonard
2006-01-01
Traditional avionics computing systems are federated, with each system provided on a number of dedicated hardware units. Federated applications are physically separated from one another and analysis of the systems is undertaken individually. Integrated Modular Avionics (IMA) takes these federated functions and integrates them on a common computing platform in a tightly deterministic distributed real-time network of computing modules in which the different applications can run. IMA supports different levels of criticality in the same computing resource and provides a platform for implementation of fault tolerance through hardware and application redundancy. Modular implementation has distinct benefits in design, testing and system maintainability. This paper covers the requirements for fault tolerant bus systems used to provide reliable communication between IMA computing modules. An overview of the Time Triggered Protocol (TTP) specification and implementation as a reliable solution for IMA systems is presented. Application examples in aircraft avionics and a development system for future space application are covered. The commercially available TTP controller can also be implemented in an FPGA, and the results from implementation studies are covered. Finally, future directions for the application of TTP and related development activities are presented.
The development of the Nucleus Freedom Cochlear implant system.
Patrick, James F; Busby, Peter A; Gibson, Peter J
2006-12-01
Cochlear Limited (Cochlear) released the fourth-generation cochlear implant system, Nucleus Freedom, in 2005. Freedom is based on 25 years of experience in cochlear implant research and development and incorporates advances in medicine, implantable materials, electronic technology, and sound coding. This article presents the development of Cochlear's implant systems, with an overview of the first 3 generations, and details of the Freedom system: the CI24RE receiver-stimulator, the Contour Advance electrode, the modular Freedom processor, the available speech coding strategies, the input processing options of Smart Sound to improve the signal before coding as electrical signals, and the programming software. Preliminary results from multicenter studies with the Freedom system are reported, demonstrating better levels of performance compared with the previous systems. The final section presents the most recent implant reliability data, with the early findings at 18 months showing improved reliability of the Freedom implant compared with the earlier Nucleus 3 System. Also reported are some of the findings of Cochlear's collaborative research programs to improve recipient outcomes. Included are studies showing the benefits from bilateral implants, electroacoustic stimulation using an ipsilateral and/or contralateral hearing aid, advanced speech coding, and streamlined speech processor programming.
Towards automatic Markov reliability modeling of computer architectures
NASA Technical Reports Server (NTRS)
Liceaga, C. A.; Siewiorek, D. P.
1986-01-01
The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore, model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
Theory of reliable systems. [systems analysis and design
NASA Technical Reports Server (NTRS)
Meyer, J. F.
1973-01-01
The analysis and design of reliable systems are discussed. The attributes of system reliability studied are fault tolerance, diagnosability, and reconfigurability. Objectives of the study include: to determine properties of system structure that are conducive to a particular attribute; to determine methods for obtaining reliable realizations of a given system; and to determine how properties of system behavior relate to the complexity of fault tolerant realizations. A list of 34 references is included.
CRAX/Cassandra Reliability Analysis Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, D.
1999-02-10
Over the past few years Sandia National Laboratories has been moving toward an increased dependence on model- or physics-based analyses as a means to assess the impact of long-term storage on the nuclear weapons stockpile. These deterministic models have also been used to evaluate replacements for aging systems, often involving commercial off-the-shelf components (COTS). In addition, the models have been used to assess the performance of replacement components manufactured via unique, small-lot production runs. In either case, the limited amount of available test data dictates that the only logical course of action to characterize the reliability of these components is to specifically consider the uncertainties in material properties, operating environment, etc. within the physics-based (deterministic) model. This not only provides the ability to statistically characterize the expected performance of the component or system, but also provides direction regarding the benefits of additional testing on specific components within the system. An effort was therefore initiated to evaluate the capabilities of existing probabilistic methods and, if required, to develop new analysis methods to support the inclusion of uncertainty in the classical design tools used by analysts and design engineers at Sandia. The primary result of this effort is the CMX (Cassandra Exoskeleton) reliability analysis software.
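A hedged sketch of the underlying idea, propagating parameter uncertainty through a deterministic model by Monte Carlo to obtain a statistical reliability estimate; the stress-strength model and distributions below are invented and are not the CMX implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
yield_strength = rng.normal(350.0, 20.0, n)               # MPa, assumed material scatter
applied_stress = rng.lognormal(np.log(250.0), 0.10, n)    # MPa, assumed loading scatter

margin = yield_strength - applied_stress                  # deterministic "physics" model
reliability = np.mean(margin > 0.0)
print("estimated reliability = %.5f" % reliability)

# Crude importance check: which uncertain input tracks the failures more closely?
failed = margin < 0.0
print("corr(failure, stress)   = %.2f" % np.corrcoef(failed, applied_stress)[0, 1])
print("corr(failure, strength) = %.2f" % np.corrcoef(failed, yield_strength)[0, 1])
```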
76 FR 64082 - Mandatory Reliability Standards for the Bulk-Power System; Notice of Staff Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-17
... Reliability Standards for the Bulk-Power System; Notice of Staff Meeting Take notice that the Federal Energy... reliability implications to the interconnected transmission system associated with a single point of failure... R1.3.10 of Commission-approved transmission planning Reliability Standard TPL-002- 0 (System...
NASA Technical Reports Server (NTRS)
Mcneely, J. B.; Negley, G. H.; Barnett, A. M.
1985-01-01
GaAsP on GaP top solar cells as an attachment to silicon bottom solar cells are being developed. The GaAsP on GaP system offers several advantages for this top solar cell. The most important is that the gallium phosphide substrate provides a rugged, transparent mechanical substrate which does not have to be removed or thinned during processing. Additional advantages are that: (1) gallium phosphide is more oxidation resistant than the III-V aluminum compounds, (2) a range of energy band gaps higher than 1.75 eV is readily available for system efficiency optimization, (3) reliable ohmic contact technology is available from the light-emitting diode industry, and (4) the system readily lends itself to graded band gap structures for additional increases in efficiency.
Report on Wind Turbine Subsystem Reliability - A Survey of Various Databases (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheng, S.
2013-07-01
The wind industry has been challenged by premature subsystem/component failures. Various reliability data collection efforts have demonstrated their value in supporting wind turbine reliability and availability research & development and industrial activities. However, most information on these data collection efforts is scattered and not in a centralized place. With the objective of obtaining updated reliability statistics of wind turbines and/or subsystems so as to benefit future wind reliability and availability activities, this report is put together based on a survey of various reliability databases that are accessible directly or indirectly by NREL. For each database, whenever feasible, a brief description summarizing database population, life span, and data collected is given along with its features & status. Then selected results deemed beneficial to the industry and generated based on the database are highlighted. This report concludes with several observations obtained throughout the survey and several reliability data collection opportunities in the future.
NASA Technical Reports Server (NTRS)
Traversi, M.; Piccolo, R.
1980-01-01
Tradeoff study activities and the analysis process used are described with emphasis on (1) review of the alternatives; (2) vehicle architecture; and (3) evaluation of the propulsion system alternatives; interim results are presented for the basic hybrid vehicle characterization; vehicle scheme development; propulsion system power and transmission ratios; vehicle weight; energy consumption and emissions; performance; production costs; reliability, availability and maintainability; life cycle costs, and operational quality. The final vehicle conceptual design is examined.
NASA Technical Reports Server (NTRS)
Thaller, L. H.
1981-01-01
The use of interactive computer graphics is suggested as an aid in battery system development. Mathematical representations of simplistic but fully representative functions of many electrochemical concepts of current practical interest will permit battery level charge and discharge phenomena to be analyzed in a qualitative manner prior to the assembly and testing of actual hardware. This technique is a useful addition to the variety of tools available to the battery system designer as he bridges the gap between interesting single cell life test data and reliable energy storage subsystems.
77 FR 7526 - Interpretation of Protection System Reliability Standard
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-13
... reh'g & compliance, 117 FERC ] 61,126 (2006), aff'd sub nom. Alcoa, Inc. v. FERC, 564 F.3d 1342 (D.C... opportunity to view and/or print the contents of this document via the Internet through FERC's Home Page... available on eLibrary in PDF and Microsoft Word format for viewing, printing, and/or downloading. To access...
Western Grid Can Handle High Renewables in Challenging Conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-11-01
Fact sheet outlining the key findings of Phase 3 of the Western Wind and Solar Integration Study (WWSIS-3). NREL and GE find that with good system planning, sound engineering practices, and commercially available technologies, the Western grid can maintain reliability and stability during the crucial first minute after grid disturbances with high penetrations of wind and solar power.
USDA Forest Service mobile satellite communications applications
NASA Technical Reports Server (NTRS)
Warren, John R.
1990-01-01
The airborne IR signal processing system being developed will require the use of mobile satellite communications to achieve its full capability and improvement in delivery timeliness of processed IR data to the Fire Staff. There are numerous other beneficial uses, both during wildland fire management operations or in daily routine tasks, which will also benefit from the availability of reliable communications from remote areas.
Roland: A Case for or Against NATO Standardization?
1980-05-01
with often competing, even opposing, objectives in testing, financial auditing, cost estimating, reliability, value engineering, maintenance, training ... supposedly mature system. Multilocation tests, early in the program when test beds and spare parts availability would be at a minimum, would require ... Similar institutionalized conflicts resided in the audit community, which, under the Armed Services Procurement Regulation, was required to audit and
Hong, Tran Thi; Phuong Hoa, Nguyen; Walker, Sue M; Hill, Peter S; Rao, Chalapati
2018-01-01
Mortality statistics form a crucial component of national Health Management Information Systems (HMIS). However, there are limitations in the availability and quality of mortality data at national level in Viet Nam. This study assessed the completeness of recorded deaths and the reliability of recorded causes of death (COD) in the A6 death registers in the national routine HMIS in Viet Nam. 1477 identified deaths in 2014 were reviewed in two provinces. A capture-recapture method was applied to assess the completeness of the A6 death registers. 1365 household verbal autopsy (VA) interviews were successfully conducted, and these were reviewed by physicians who assigned multiple and underlying cause of death (UCOD). These UCODs from VA were then compared with the CODs recorded in the A6 death registers, using kappa scores to assess the reliability of the A6 death register diagnoses. The overall completeness of the A6 death registers in the two provinces was 89.3% (95%CI: 87.8-90.8). No COD recorded in the A6 death registers demonstrated good reliability. There is very low reliability in recording of cardiovascular deaths (kappa for stroke = 0.47 and kappa for ischaemic heart diseases = 0.42) and diabetes (kappa = 0.33). The reporting of deaths due to road traffic accidents, HIV and some cancers are at a moderate level of reliability with kappa scores ranging between 0.57-0.69 (p<0.01). VA methods identify more specific COD than the A6 death registers, and also allow identification of multiple CODs. The study results suggest that data completeness in HMIS A6 death registers in the study sample of communes was relatively high (nearly 90%), but triangulation with death records from other sources would improve the completeness of this system. Further, there is an urgent need to enhance the reliability of COD recorded in the A6 death registers, for which VA methods could be effective. Focussed consultation among stakeholders is needed to develop a suitable mechanism and process for integrating VA methods into the national routine HMIS A6 death registers in Viet Nam.
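As a hedged sketch of the two analyses named above, the snippet below shows a two-source capture-recapture (Lincoln-Petersen) completeness estimate and a Cohen's kappa computation; all counts and labels are invented, not the study's data.

```python
import numpy as np

# Two-source capture-recapture (Lincoln-Petersen) completeness estimate.
n_register = 1200          # deaths found in the A6-style death register
n_other    = 1100          # deaths found by a second source
n_both     = 1000          # deaths matched in both sources
n_true_est = n_register * n_other / n_both
print("estimated completeness of the register: %.1f%%" % (100 * n_register / n_true_est))

# Cohen's kappa for agreement between two cause-of-death assignments.
def cohen_kappa(a, b):
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    p_o = np.mean(a == b)                                        # observed agreement
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in cats)   # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

register_cod = ["stroke", "ihd", "stroke", "cancer", "diabetes", "stroke"]
va_cod       = ["stroke", "stroke", "ihd",  "cancer", "diabetes", "stroke"]
print("kappa = %.2f" % cohen_kappa(register_cod, va_cod))
```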
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Dr. Li; Cui, Xiaohui; Cemerlic, Alma
Ad hoc networks are very helpful in situations when no fixed network infrastructure is available, such as natural disasters and military conflicts. In such a network, all wireless nodes are equal peers simultaneously serving as both senders and routers for other nodes. Therefore, how to route packets through reliable paths becomes a fundamental problem when behaviors of certain nodes deviate from wireless ad hoc routing protocols. We proposed a novel Dirichlet reputation model based on Bayesian inference theory which evaluates the reliability of each node in terms of packet delivery. Our system offers a way to predict and select a reliable path through a combination of first-hand observation and second-hand reputation reports. We also proposed a moving window mechanism which helps to adjust the responsiveness of our system to changes in node behaviors. We integrated the Dirichlet reputation into the routing protocol of wireless ad hoc networks. Our extensive simulation indicates that our proposed reputation system can improve the good throughput of the network and reduce negative impacts caused by misbehaving nodes.
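A hedged sketch of a Dirichlet-style reputation update with a moving window, loosely following the description above; the outcome categories, prior, window length, and discounting of second-hand reports are assumptions.

```python
from collections import deque

class DirichletReputation:
    """Track one node's behaviour over K outcomes (e.g. delivered / dropped / delayed)."""
    def __init__(self, n_outcomes=3, prior=1.0, window=50):
        self.prior = [prior] * n_outcomes           # Dirichlet prior pseudo-counts
        self.events = deque(maxlen=window)          # moving window of recent observations

    def observe(self, outcome, weight=1.0):
        self.events.append((outcome, weight))       # first- or (discounted) second-hand report

    def expectation(self):
        alpha = list(self.prior)
        for outcome, w in self.events:
            alpha[outcome] += w
        total = sum(alpha)
        return [a / total for a in alpha]           # posterior mean of each behaviour

node = DirichletReputation()
for _ in range(40):
    node.observe(0)             # 40 successful deliveries observed first-hand
node.observe(1, weight=0.5)     # one discounted second-hand "dropped" report
print("P(deliver), P(drop), P(delay) ~", [round(p, 3) for p in node.expectation()])
```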
Hsin, Kun-Yi; Ghosh, Samik; Kitano, Hiroaki
2013-01-01
Increased availability of bioinformatics resources is creating opportunities for the application of network pharmacology to predict drug effects and toxicity resulting from multi-target interactions. Here we present a high-precision computational prediction approach that combines two elaborately built machine learning systems and multiple molecular docking tools to assess binding potentials of a test compound against proteins involved in a complex molecular network. One of the two machine learning systems is a re-scoring function to evaluate binding modes generated by docking tools. The second is a binding mode selection function to identify the most predictive binding mode. Results from a series of benchmark validations and a case study show that this approach surpasses the prediction reliability of other techniques and that it also identifies either primary or off-targets of kinase inhibitors. Integrating this approach with molecular network maps makes it possible to address drug safety issues by comprehensively investigating network-dependent effects of a drug or drug candidate. PMID:24391846
Bader, Michael D. M.; Mooney, Stephen J.; Lee, Yeon Jin; Sheehan, Daniel; Neckerman, Kathryn M.; Rundle, Andrew G.; Teitler, Julien O.
2014-01-01
Public health research has shown that neighborhood conditions are associated with health behaviors and outcomes. Systematic neighborhood audits have helped researchers measure neighborhood conditions that they deem theoretically relevant but not available in existing administrative data. Systematic audits, however, are expensive to conduct and rarely comparable across geographic regions. We describe the development of an online application, the Computer Assisted Neighborhood Visual Assessment System (CANVAS), that uses Google Street View to conduct virtual audits of neighborhood environments. We use this system to assess the inter-rater reliability of 187 items related to walkability and physical disorder on a national sample of 150 street segments in the United States. We find that many items are reliably measured across auditors using CANVAS and that agreement between auditors appears to be uncorrelated with neighborhood demographic characteristics. Based on our results we conclude that Google Street View and CANVAS offer opportunities to develop greater comparability across neighborhood audit studies. PMID:25545769
Chapter 15: Reliability of Wind Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheng, Shuangwen; O'Connor, Ryan
The global wind industry has witnessed exciting developments in recent years. The future will be even brighter with further reductions in capital and operation and maintenance costs, which can be accomplished with improved turbine reliability, especially when turbines are installed offshore. One opportunity for the industry to improve wind turbine reliability is through the exploration of reliability engineering life data analysis based on readily available data or maintenance records collected at typical wind plants. If adopted and conducted appropriately, these analyses can quickly save operation and maintenance costs in a potentially impactful manner. This chapter discusses wind turbine reliability by highlighting the methodology of reliability engineering life data analysis. It first briefly discusses fundamentals for wind turbine reliability and the current industry status. Then, the reliability engineering method for life analysis, including data collection, model development, and forecasting, is presented in detail and illustrated through two case studies. The chapter concludes with some remarks on potential opportunities to improve wind turbine reliability. An owner and operator's perspective is taken and mechanical components are used to exemplify the potential benefits of reliability engineering analysis to improve wind turbine reliability and availability.
Ultra Reliable Closed Loop Life Support for Long Space Missions
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Ewert, Michael K.
2010-01-01
Spacecraft human life support systems can achieve ultra reliability by providing sufficient spares to replace all failed components. The additional mass of spares for ultra reliability is approximately equal to the original system mass, provided that the original system reliability is not too low. Acceptable reliability can be achieved for the Space Shuttle and Space Station by preventive maintenance and by replacing failed units. However, on-demand maintenance and repair requires a logistics supply chain in place to provide the needed spares. In contrast, a Mars or other long space mission must take along all the needed spares, since resupply is not possible. Long missions must achieve ultra reliability, a very low failure rate per hour, since they will take years rather than weeks and cannot be cut short if a failure occurs. Also, distant missions have a much higher mass launch cost per kilogram than near-Earth missions. Achieving ultra reliable spacecraft life support systems with acceptable mass will require a well-planned and extensive development effort. Analysis must determine the reliability requirement and allocate it to subsystems and components. Ultra reliability requires reducing the intrinsic failure causes, providing spares to replace failed components and having "graceful" failure modes. Technologies, components, and materials must be selected and designed for high reliability. Long duration testing is needed to confirm very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The system must be designed, developed, integrated, and tested with system reliability in mind. Maintenance and reparability of failed units must not add to the probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass should start soon since it must be a long term effort.
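A hedged numerical illustration of the sparing argument above: for each (invented) component, carry enough spares that the chance of running out over the mission stays small. With reasonably reliable components the spares mass comes out close to the base mass, echoing the claim in the abstract.

```python
from scipy.stats import poisson

mission_hours = 1000 * 24.0          # roughly a 1000-day mission
components = [                       # (name, assumed failure rate per hour, unit mass in kg)
    ("CO2 removal bed", 1e-6, 40.0),
    ("water pump",      5e-7,  8.0),
    ("fan",             2e-6,  3.0),
]
target = 0.999                        # per-component probability of having enough spares

total_base, total_spares = 0.0, 0.0
for name, lam, mass in components:
    mean_failures = lam * mission_hours
    k = 0
    while poisson.cdf(k, mean_failures) < target:   # smallest spare count meeting the target
        k += 1
    total_base += mass
    total_spares += k * mass
    print(f"{name}: expect {mean_failures:.3f} failures, carry {k} spare(s)")
print(f"spares mass / base mass = {total_spares / total_base:.2f}")
```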
NASA Technical Reports Server (NTRS)
Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.
2005-01-01
This paper is to address the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and sub-system reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters. We present sensitivity analysis results for a selected subset of the key reliability drivers using the model. Reliability drivers identified include: number of engines for the liquid propulsion stage, single engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction), propellant specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground start hold down with launch commit criteria, engine altitude start (1 in. start), Multiple altitude restart (less than 1 restart), component, subsystem and system design, manufacturing/ground operation support/pre and post flight check outs and inspection, extensiveness of the development program. We present some sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).
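A hedged parametric sketch in the spirit of the drivers listed above: mission reliability of an n-engine stage that can tolerate one benign engine-out failure. The closed form and the numbers are illustrative assumptions, not the paper's model.

```python
def stage_reliability(n_engines, engine_rel, catastrophic_fraction):
    """P(stage success) allowing at most one benign engine-out (only usable for n >= 2)."""
    q = 1.0 - engine_rel
    all_good = engine_rel ** n_engines
    if n_engines < 2:
        return all_good
    one_benign_out = n_engines * q * (1.0 - catastrophic_fraction) * engine_rel ** (n_engines - 1)
    return all_good + one_benign_out

# More engines are not automatically better once a catastrophic fraction is assumed.
for n in (1, 3, 5, 9):
    print(n, "engines:", round(stage_reliability(n, engine_rel=0.995, catastrophic_fraction=0.2), 5))
```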
Nateghi, Roshanak; Guikema, Seth D; Wu, Yue Grace; Bruss, C Bayan
2016-01-01
The U.S. federal government regulates the reliability of bulk power systems, while the reliability of power distribution systems is regulated at a state level. In this article, we review the history of regulating electric service reliability and study the existing reliability metrics, indices, and standards for power transmission and distribution networks. We assess the foundations of the reliability standards and metrics, discuss how they are applied to outages caused by large exogenous disturbances such as natural disasters, and investigate whether the standards adequately internalize the impacts of these events. Our reflections shed light on how existing standards conceptualize reliability, question the basis for treating large-scale hazard-induced outages differently from normal daily outages, and discuss whether this conceptualization maps well onto customer expectations. We show that the risk indices for transmission systems used in regulating power system reliability do not adequately capture the risks that transmission systems are prone to, particularly when it comes to low-probability high-impact events. We also point out several shortcomings associated with the way in which regulators require utilities to calculate and report distribution system reliability indices. We offer several recommendations for improving the conceptualization of reliability metrics and standards. We conclude that while the approaches taken in reliability standards have made considerable advances in enhancing the reliability of power systems and may be logical from a utility perspective during normal operation, existing standards do not provide a sufficient incentive structure for the utilities to adequately ensure high levels of reliability for end-users, particularly during large-scale events. © 2015 Society for Risk Analysis.
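For context, distribution reliability reporting of the kind discussed above typically rests on the IEEE 1366 indices; a minimal computation with invented outage records follows.

```python
outages = [                     # (customers interrupted, outage duration in minutes)
    (1200, 90), (300, 45), (5000, 240), (80, 30),
]
customers_served = 50_000

saifi = sum(c for c, _ in outages) / customers_served        # interruptions per customer
saidi = sum(c * d for c, d in outages) / customers_served    # customer-minutes per customer
caidi = saidi / saifi                                        # minutes per interruption
print(f"SAIFI = {saifi:.3f} interruptions/customer/yr")
print(f"SAIDI = {saidi:.1f} minutes/customer/yr")
print(f"CAIDI = {caidi:.1f} minutes/interruption")
```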
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emmanuel Ohene Opare, Jr.; Charles V. Park
The Next Generation Nuclear Plant (NGNP) Project, managed by the Idaho National Laboratory (INL), is authorized by the Energy Policy Act of 2005 to research, develop, design, construct, and operate a prototype fourth-generation nuclear reactor to meet the needs of the 21st Century. A section in this document proposes that the NGNP will provide heat for process heat applications. As with all large projects developing and deploying new technologies, the NGNP is expected to meet high performance and availability targets relative to current state-of-the-art systems and technology. One requirement for the NGNP is to provide heat for the generation of hydrogen for large-scale production, and this process heat application is required to be at least 90% available relative to other technologies currently on the market. To reach this goal, a RAM Roadmap was developed highlighting the actions to be taken to ensure that various milestones in system development and maturation concurrently meet the availability requirements. Integral to the RAM Roadmap was the use of a RAM analytical/simulation tool which was used to estimate the availability of the system when deployed based on the current design configuration and the maturation level of the system.
Monitoring of services with non-relational databases and map-reduce framework
NASA Astrophysics Data System (ADS)
Babik, M.; Souto, F.
2012-12-01
Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.
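A hedged toy of the map-reduce style aggregation discussed here, turning raw per-test monitoring results into per-site availability; the record layout and site/service names are illustrative and not the SAM/SWAT schema.

```python
from collections import defaultdict

raw_results = [                      # (site, service, test status)
    ("SITE-A", "CE", "OK"), ("SITE-A", "CE", "OK"), ("SITE-A", "SE", "CRITICAL"),
    ("SITE-B", "CE", "OK"), ("SITE-B", "SE", "OK"), ("SITE-B", "SE", "WARNING"),
]

def map_phase(record):
    site, _service, status = record
    return site, (1 if status == "OK" else 0, 1)     # emit (ok_count, total_count) per site

def reduce_phase(pairs):
    acc = defaultdict(lambda: [0, 0])
    for site, (ok, total) in pairs:
        acc[site][0] += ok
        acc[site][1] += total
    return {site: ok / total for site, (ok, total) in acc.items()}

availability = reduce_phase(map(map_phase, raw_results))
print(availability)        # both toy sites come out at ~0.67 with these records
```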
NASA Technical Reports Server (NTRS)
Migneault, G. E.
1979-01-01
Emulation techniques are proposed as a solution to a difficulty arising in the analysis of the reliability of highly reliable computer systems for future commercial aircraft. The difficulty, viz., the lack of credible precision in reliability estimates obtained by analytical modeling techniques, is established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible, (2) a complex system design technique, fault tolerance, (3) system reliability dominated by errors due to flaws in the system definition, and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. The technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. The use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques.
Beyond the buildingcentric approach: A vision for an integrated evaluation of sustainable buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conte, Emilia, E-mail: conte@poliba.it; Monno, Valeria, E-mail: vmonno@poliba.it
2012-04-15
The available sustainable building evaluation systems have produced a new environmental design paradigm. However, there is an increasing need to overcome the buildingcentric approach of these systems, in order to further exploit their innovative potential for sustainable building practices. The paper takes up this challenge by developing a cross-scale evaluation approach focusing on the reliability of sustainable building design solutions for the context in which the building is situated. An integrated building-urban evaluation model is proposed based on the urban matrix, which is a conceptualisation of the built environment as a social-ecological system. The model aims at evaluating the sustainability of a building, considering it as an active entity contributing to the resilience of the urban matrix. A few holistic performance indicators are used for evaluating this contribution, thus expressing the building's reliability. The discussion on the efficacy of the model shows that it works as a heuristic tool, supporting the acquisition of a better insight into the complexity which characterises the relationships between the building and the built environment sustainability. Shedding new light on the meaning of sustainable buildings, the model can play a positive role in innovating sustainable building design practices, thus complementing current evaluation systems. Highlights: • We model an integrated building-urban evaluation approach. • The urban matrix represents the social-ecological functioning of the urban context. • We introduce the concept of reliability to evaluate sustainable buildings. • Holistic indicators express the building reliability. • The evaluation model works as a heuristic tool and complements other tools.
NASA Astrophysics Data System (ADS)
Gobbato, Maurizio; Kosmatka, John B.; Conte, Joel P.
2014-04-01
Fatigue-induced damage is one of the most uncertain and highly unpredictable failure mechanisms for a large variety of mechanical and structural systems subjected to cyclic and random loads during their service life. A health monitoring system capable of (i) monitoring the critical components of these systems through non-destructive evaluation (NDE) techniques, (ii) assessing their structural integrity, (iii) recursively predicting their remaining fatigue life (RFL), and (iv) providing a cost-efficient reliability-based inspection and maintenance plan (RBIM) is therefore ultimately needed. In contribution to these objectives, the first part of the paper provides an overview and extension of a comprehensive reliability-based fatigue damage prognosis methodology — previously developed by the authors — for recursively predicting and updating the RFL of critical structural components and/or sub-components in aerospace structures. In the second part of the paper, a set of experimental fatigue test data, available in the literature, is used to provide a numerical verification and an experimental validation of the proposed framework at the reliability component level (i.e., single damage mechanism evolving at a single damage location). The results obtained from this study demonstrate (i) the importance and the benefits of a nearly continuous NDE monitoring system, (ii) the efficiency of the recursive Bayesian updating scheme, and (iii) the robustness of the proposed framework in recursively updating and improving the RFL estimations. This study also demonstrates that the proposed methodology can lead to either an extent of the RFL (with a consequent economical gain without compromising the minimum safety requirements) or an increase of safety by detecting a premature fault and therefore avoiding a very costly catastrophic failure.
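A heavily simplified, hedged sketch of the recursive Bayesian updating idea: an exponential crack-growth model with an unknown rate is updated on a grid each time an NDE measurement arrives, and the remaining fatigue life is re-estimated. The growth model, noise level, and inspection data are all invented and are not the framework validated in the paper.

```python
import numpy as np

a0, a_crit, sigma = 1.0, 10.0, 0.15           # initial/critical crack size (mm), NDE noise
k_grid = np.linspace(1e-5, 5e-5, 400)         # candidate growth rates (per cycle)
posterior = np.ones_like(k_grid) / k_grid.size    # flat prior over the rate

def update(posterior, n_cycles, measured_a):
    predicted = a0 * np.exp(k_grid * n_cycles)                    # a(n) = a0 * exp(k n)
    likelihood = np.exp(-0.5 * ((measured_a - predicted) / sigma) ** 2)
    posterior = posterior * likelihood
    return posterior / posterior.sum()

for n, a_meas in [(20_000, 1.8), (40_000, 3.1), (60_000, 5.4)]:   # simulated inspections
    posterior = update(posterior, n, a_meas)
    rfl = np.log(a_crit / a_meas) / k_grid     # remaining cycles to a_crit for each rate
    median_rfl = np.interp(0.5, np.cumsum(posterior), rfl)
    print(f"after {n} cycles: median remaining fatigue life ~ {median_rfl:,.0f} cycles")
```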
Development and Reliability Testing of a Fast-Food Restaurant Observation Form.
Rimkus, Leah; Ohri-Vachaspati, Punam; Powell, Lisa M; Zenk, Shannon N; Quinn, Christopher M; Barker, Dianne C; Pugach, Oksana; Resnick, Elissa A; Chaloupka, Frank J
2015-01-01
To develop a reliable observational data collection instrument to measure characteristics of the fast-food restaurant environment likely to influence consumer behaviors, including product availability, pricing, and promotion. The study used observational data collection. Restaurants were in the Chicago Metropolitan Statistical Area. A total of 131 chain fast-food restaurant outlets were included. Interrater reliability was measured for product availability, pricing, and promotion measures on a fast-food restaurant observational data collection instrument. Analysis was done with Cohen's κ coefficient and proportion of overall agreement for categorical variables and intraclass correlation coefficient (ICC) for continuous variables. Interrater reliability, as measured by average κ coefficient, was .79 for menu characteristics, .84 for kids' menu characteristics, .92 for food availability and sizes, .85 for beverage availability and sizes, .78 for measures on the availability of nutrition information, .75 for characteristics of exterior advertisements, and .62 and .90 for exterior and interior characteristics measures, respectively. For continuous measures, average ICC was .88 for food pricing measures, .83 for beverage prices, and .65 for counts of exterior advertisements. Over 85% of measures demonstrated substantial or almost perfect agreement. Although some measures required revision or protocol clarification, results from this study suggest that the instrument may be used to reliably measure the fast-food restaurant environment.
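As an illustrative aside, the kind of interrater agreement reported above can be computed with a few lines of code. The following sketch uses hypothetical ratings, not the study's data, and computes Cohen's κ for one categorical item rated by two observers; the ICC for continuous items would be computed analogously.

import numpy as np

def cohens_kappa(rater_a, rater_b):
    # Cohen's kappa for two raters coding the same categorical item
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    cats = np.union1d(a, b)
    # Confusion matrix of codings (rows: rater A, columns: rater B)
    table = np.array([[np.sum((a == i) & (b == j)) for j in cats] for i in cats], float)
    n = table.sum()
    p_obs = np.trace(table) / n                   # observed agreement
    p_exp = (table.sum(1) @ table.sum(0)) / n**2  # agreement expected by chance
    return (p_obs - p_exp) / (1 - p_exp)

# Two raters coding "kids' menu present" at ten hypothetical outlets
print(cohens_kappa([1, 1, 0, 1, 0, 1, 1, 0, 1, 1], [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]))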
Jiang, Yazhou; Liu, Chen -Ching; Xu, Yin
2016-04-19
The increasing importance of system reliability and resilience is changing the way distribution systems are planned and operated. To achieve a distribution system self-healing against power outages, emerging technologies and devices, such as remote-controlled switches (RCSs) and smart meters, are being deployed. The higher level of automation is transforming traditional distribution systems into the smart distribution systems (SDSs) of the future. The availability of data and remote control capability in SDSs provides distribution operators with an opportunity to optimize system operation and control. In this paper, the development of SDSs and resulting benefits of enhanced system capabilities are discussed. A comprehensive survey is conducted on the state-of-the-art applications of RCSs and smart meters in SDSs. Specifically, a new method, called Temporal Causal Diagram (TCD), is used to incorporate outage notifications from smart meters for enhanced outage management. To fully utilize the fast operation of RCSs, the spanning tree search algorithm is used to develop service restoration strategies. Optimal placement of RCSs and the resulting enhancement of system reliability are discussed. Distribution system resilience with respect to extreme events is presented. Furthermore, test cases are used to demonstrate the benefit of SDSs. Active management of distributed generators (DGs) is introduced. Future research in a smart distribution environment is proposed.
Development and Testing of a Prototype Grid-Tied Photovoltaic Power System
NASA Technical Reports Server (NTRS)
Eichenberg, Dennis J.
2009-01-01
The NASA Glenn Research Center (GRC) has developed and tested a prototype 2 kW DC grid-tied photovoltaic (PV) power system at the Center. The PV system has generated in excess of 6700 kWh since operation commenced in July 2006. The PV system is providing power to the GRC grid for use by all. Operation of the prototype PV system has been completely trouble free. A grid-tied PV power system is connected directly to the utility distribution grid. Facility power can be obtained from the utility system as normal. The PV system is synchronized with the utility system to provide power for the facility, and excess power is provided to the utility. The project transfers space technology to terrestrial use via nontraditional partners. GRC personnel glean valuable experience with PV power systems that are directly applicable to various space power systems, and provide valuable space program test data. PV power systems help to reduce harmful emissions and reduce the Nation's dependence on fossil fuels. Power generated by the PV system reduces the GRC utility demand, and the surplus power aids the community. Present global energy concerns reinforce the need for the development of alternative energy systems. Modern PV panels are readily available, reliable, efficient, and economical, with a life expectancy of at least 25 years. Modern electronics has been the enabling technology behind grid-tied power systems, making them safe, reliable, efficient, and economical, also with a life expectancy of at least 25 years. Based upon the success of the prototype PV system, additional PV power system expansion at GRC is under consideration. The prototype grid-tied PV power system was successfully designed and developed, serving to validate the basic principles described and the theoretical work that was performed. The report concludes that grid-tied photovoltaic power systems are reliable, maintenance-free, long-life power systems, and are of significant value to NASA and the community.
Systems Reliability Framework for Surface Water Sustainability and Risk Management
NASA Astrophysics Data System (ADS)
Myers, J. R.; Yeghiazarian, L.
2016-12-01
With microbial contamination posing a serious threat to the availability of clean water across the world, it is necessary to develop a framework that evaluates the safety and sustainability of water systems with respect to non-point source fecal microbial contamination. The concept of water safety is closely related to the concept of failure in reliability theory. In water quality problems, the event of failure can be defined as the concentration of microbial contamination exceeding a certain standard for usability of water. It is pertinent in watershed management to know the likelihood of such an event of failure occurring at a particular point in space and time. Microbial fate and transport are driven by environmental processes taking place in complex, multi-component, interdependent environmental systems that are dynamic and spatially heterogeneous, which means these processes and therefore their influences upon microbial transport must be considered stochastic and variable through space and time. A physics-based stochastic model of microbial dynamics is presented that propagates uncertainty using a unique sampling method based on artificial neural networks to produce a correlation between watershed characteristics and spatial-temporal probabilistic patterns of microbial contamination. These results are used to address the question of water safety through several sustainability metrics: reliability, vulnerability, resilience, and a composite sustainability index. System reliability is described uniquely through the temporal evolution of risk along watershed points or pathways. Probabilistic resilience describes how long the system is above a certain probability of failure, and the vulnerability metric describes how the temporal evolution of risk changes throughout a hierarchy of failure levels. Additionally, our approach allows for the identification of contributions in microbial contamination and uncertainty from specific pathways and sources. We expect that this framework will significantly improve the efficiency and precision of sustainable watershed management strategies by providing a better understanding of how watershed characteristics and environmental parameters affect surface water quality and sustainability.
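The sketch below gives a simplified sense of how such metrics could be computed from a time series of failure probability; the functional forms, the threshold, and the composite index are assumptions for illustration, not the authors' formulation.

import numpy as np

# Hypothetical daily probability that contamination exceeds the usability standard
rng = np.random.default_rng(0)
p_fail = np.clip(0.2 + 0.15 * np.sin(np.linspace(0, 6 * np.pi, 365))
                 + 0.05 * rng.standard_normal(365), 0, 1)
threshold = 0.3                                   # acceptable failure probability (assumed)

safe = p_fail <= threshold
reliability = safe.mean()                         # fraction of days considered safe

# Lengths of consecutive unsafe spells
edges = np.diff(np.r_[0, (~safe).astype(int), 0])
spells = np.flatnonzero(edges == -1) - np.flatnonzero(edges == 1)
resilience = 1.0 / spells.mean() if spells.size else 1.0      # shorter excursions imply more resilience
vulnerability = (p_fail[~safe] - threshold).mean() if (~safe).any() else 0.0
sustainability = reliability * resilience * (1 - vulnerability)   # assumed composite index

print(reliability, resilience, vulnerability, sustainability)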
Minimum Control Requirements for Advanced Life Support Systems
NASA Technical Reports Server (NTRS)
Boulange, Richard; Jones, Harry
2002-01-01
Advanced control technologies are not necessary for the safe, reliable, and continuous operation of Advanced Life Support (ALS) systems. ALS systems can be, and are, adequately controlled by simple, reliable, low-level methodologies and algorithms. The automation provided by advanced control technologies is claimed to decrease system mass and necessary crew time by reducing buffer size and minimizing crew involvement. In truth, these approaches increase control system complexity without clearly demonstrating an increase in reliability across the ALS system. Unless these control systems are as reliable as the hardware they control, there are no savings to be had. A baseline ALS system is presented with the minimal control system required for its continuous safe and reliable operation. This baseline control system uses simple algorithms and scheduling methodologies and relies on human intervention only in the event of failure of the redundant backup equipment. This ALS system architecture is designed for reliable operation, with minimal components and minimal control system complexity. The fundamental design precept followed is "If it isn't there, it can't fail."
A survey of quality measures for gray-scale image compression
NASA Technical Reports Server (NTRS)
Eskicioglu, Ahmet M.; Fisher, Paul S.
1993-01-01
Although a variety of techniques are available today for gray-scale image compression, a complete evaluation of these techniques cannot be made as there is no single reliable objective criterion for measuring the error in compressed images. The traditional subjective criteria are burdensome, and usually inaccurate or inconsistent. On the other hand, being the most common objective criterion, the mean square error (MSE) does not have a good correlation with the viewer's response. It is now understood that in order to have a reliable quality measure, a representative model of the complex human visual system is required. In this paper, we survey and give a classification of the criteria for the evaluation of monochrome image quality.
NASA Astrophysics Data System (ADS)
Shi, Yu-Fang; Ma, Yi-Yi; Song, Ping-Ping
2018-03-01
System reliability theory has been a research focus of management science and system engineering in recent years, and construction reliability is useful for quantitative evaluation of the project management level. Based on reliability theory and the target system of engineering project management, a definition of construction reliability is given. Using fuzzy mathematics theory and language operators, the value space of construction reliability is divided into seven fuzzy subsets; correspondingly, seven membership functions and fuzzy evaluation intervals are obtained through the operation of the language operators, which provides the method and parameters for the evaluation of construction reliability. The method is shown to be scientific and reasonable for construction conditions and is a useful contribution to the theory and methods of engineering project system reliability.
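A minimal sketch of the idea, assuming triangular membership functions and evenly spaced subsets (the paper's actual operators and intervals may differ):

import numpy as np

LEVELS = ["very low", "low", "fairly low", "medium", "fairly high", "high", "very high"]
CENTERS = np.linspace(0.0, 1.0, 7)   # peak of each fuzzy subset on the reliability axis
HALF_WIDTH = 1.0 / 6                 # adjacent subsets cross at membership 0.5

def memberships(x):
    # Membership degree of a construction-reliability value x in each subset
    return {lvl: float(np.clip(1 - abs(x - c) / HALF_WIDTH, 0, 1))
            for lvl, c in zip(LEVELS, CENTERS)}

print(memberships(0.72))   # partly "fairly high", partly "high"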
Reliability models applicable to space telescope solar array assembly system
NASA Technical Reports Server (NTRS)
Patil, S. A.
1986-01-01
A complex system may consist of a number of subsystems with several components in series, parallel, or a combination of both series and parallel. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of reliability which are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to parallel and series models; hence, the models can be specialized to parallel, series, and combination systems. The models are developed by assuming the failure rates of the components to be functions of time and, as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) System. The STSA consists of 20 identical solar panel assemblies (SPA's). The reliabilities of the SPA's are determined by the reliabilities of solar cell strings, interconnects, and diodes. The estimates of the reliability of the system for one to five years are calculated by using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
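A brief sketch of the k-out-of-n idea, assuming n identical components and an illustrative time-varying hazard rate (the numbers are placeholders, not STSA values); the subsystem is taken to survive as long as fewer than k of its n components have failed, so k = 1 recovers series behaviour and k = n recovers parallel behaviour.

import numpy as np
from math import comb
from scipy.integrate import quad

def component_reliability(t, hazard=lambda u: 1e-6 * (1 + 1e-5 * u)):
    # R(t) = exp(-integral of a (possibly aging) hazard rate); the hazard here is an assumed example
    H, _ = quad(hazard, 0.0, t)
    return np.exp(-H)

def subsystem_reliability(t, n, k):
    # Probability that fewer than k of the n components have failed by time t
    R = component_reliability(t)
    return sum(comb(n, j) * (1 - R) ** j * R ** (n - j) for j in range(k))

t = 5 * 8760.0                                # five years, in hours
print(subsystem_reliability(t, n=20, k=1))    # series-like: any failure fails the subsystem
print(subsystem_reliability(t, n=20, k=20))   # parallel-like: all components must fail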
NASA Astrophysics Data System (ADS)
Black, Christopher; Voigts, Jakob; Agrawal, Uday; Ladow, Max; Santoyo, Juan; Moore, Christopher; Jones, Stephanie
2017-06-01
Objective. Electroencephalography (EEG) offers a unique opportunity to study human neural activity non-invasively with millisecond resolution using minimal equipment in or outside of a lab setting. EEG can be combined with a number of techniques for closed-loop experiments, where external devices are driven by specific neural signals. However, reliable, commercially available EEG systems are expensive, often making them impractical for individual use and research development. Moreover, by design, a majority of these systems cannot be easily altered to the specification needed by the end user. We focused on mitigating these issues by implementing open-source tools to develop a new EEG platform to drive down research costs and promote collaboration and innovation. Approach. Here, we present methods to expand the open-source electrophysiology system, Open Ephys (www.openephys.org), to include human EEG recordings. We describe the equipment and protocol necessary to interface various EEG caps with the Open Ephys acquisition board, and detail methods for processing data. We present applications of Open Ephys + EEG as a research tool and discuss how this innovative EEG technology lays a framework for improved closed-loop paradigms and novel brain-computer interface experiments. Main results. The Open Ephys + EEG system can record reliable human EEG data, as well as human EMG data. A side-by-side comparison of eyes closed 8-14 Hz activity between the Open Ephys + EEG system and the Brainvision ActiCHamp EEG system showed similar average power and signal to noise. Significance. Open Ephys + EEG enables users to acquire high-quality human EEG data comparable to that of commercially available systems, while maintaining the price point and extensibility inherent to open-source systems.
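For readers unfamiliar with such comparisons, the sketch below shows one simple way to estimate 8-14 Hz band power and a crude signal-to-noise figure from a recording; the data here are synthetic stand-ins, and the analysis pipeline is an assumption, not the authors' exact processing.

import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 500.0                                    # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
alpha = np.sin(2 * np.pi * 10 * t)            # synthetic 10 Hz "alpha" rhythm
eeg_a = 20 * alpha + 5 * rng.standard_normal(t.size)   # hypothetical system A, microvolts
eeg_b = 20 * alpha + 6 * rng.standard_normal(t.size)   # hypothetical system B

def band_power_and_snr(x, lo=8.0, hi=14.0):
    f, pxx = welch(x, fs=fs, nperseg=int(4 * fs))
    band = (f >= lo) & (f <= hi)
    signal = trapezoid(pxx[band], f[band])        # power in the 8-14 Hz band
    noise = trapezoid(pxx[~band], f[~band])       # power everywhere else
    return signal, signal / noise

print(band_power_and_snr(eeg_a))
print(band_power_and_snr(eeg_b))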
Software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1993-01-01
Strategies and tools for the testing, risk assessment, and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage, and time-based models are being developed to provide additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-05
..., RM13-14-000 and RM13-15-000] Monitoring System Conditions--Transmission Operations Reliability...) 502-6817, [email protected] . Robert T. Stroh (Legal Information), Office of the General... Reliability Standards ``address the important reliability goal of ensuring that the transmission system is...
Space station electrical power system availability study
NASA Technical Reports Server (NTRS)
Turnquist, Scott R.; Twombly, Mark A.
1988-01-01
ARINC Research Corporation performed a preliminary reliability, availability, and maintainability (RAM) analysis of the NASA space station Electric Power System (EPS). The analysis was performed using the ARINC Research developed UNIRAM RAM assessment methodology and software program. The analysis was performed in two phases: EPS modeling and EPS RAM assessment. The EPS was modeled in four parts: the insolar power generation system, the eclipse power generation system, the power management and distribution system (both ring and radial power distribution control unit (PDCU) architectures), and the power distribution to the inner keel PDCUs. The EPS RAM assessment was conducted in five steps: the use of UNIRAM to perform baseline EPS model analyses and to determine the orbital replacement unit (ORU) criticalities; the determination of EPS sensitivity to on-orbit sparing of ORUs and the provision of an indication of which ORUs may need to be spared on-orbit; the determination of EPS sensitivity to changes in ORU reliability; the determination of the expected annual number of ORU failures; and the integration of the power generation system model results with the distribution system model results to assess the full EPS. Conclusions were drawn and recommendations were made.
NASA Astrophysics Data System (ADS)
Kwok, Yu Fat
The main objective of this study is to develop a model for the determination of the optimum testing interval (OTI) of non-redundant standby plants. This study focuses on the emergency power generators in tall buildings in Hong Kong. The reliability model developed is applicable to any non-duplicated standby plant. In a tall building, the mobilisation of occupants is constrained by its height and the building's internal layout. Occupants' safety, amongst other safety considerations, depends highly on the reliability of the fire detection and protection system, which in turn depends on the reliability of the emergency power generation plants. A thorough literature survey shows that the practice used in determining OTI in nuclear plants is generally applicable. Historically, the OTI in these plants is determined by balancing the testing downtime and the reliability gained from frequent testing. However, testing downtime does not exist in plants like emergency power generators. Subsequently, more sophisticated models have taken repair downtime into consideration. In this study, the algorithms for the determination of OTI, and hence the reliability of standby plants, are reconsidered. A new concept is introduced into the subject, and a new model is developed which embraces more realistic factors found in practice. System aging and the finite life cycle of the standby plant are considered. More pragmatically, the optimum overhauling interval can also be determined from this new model. System unavailability grows with time but can be reset by test or overhaul. Contrary to fixed testing intervals, the OTI is determined whenever system point unavailability exceeds a certain level, which depends on the reliability requirement of the standby system. An optimum testing plan for lowering this level to the 'minimum useful unavailability' level (see section 9.1 for more elaboration) can be determined by the new model presented. Cost effectiveness is accounted for by a new parameter 'tau min', the minimum testing interval (MTI). The MTI optimises the total number of tests and the total number of overhauls when the costs for each are available. The model sets up criteria for test and overhaul and to 'announce' the end of system life. The usefulness of the model is validated by a detailed analysis of the operating parameters from 8,500 maintenance records collected for emergency power generation plants in high-rise buildings in Hong Kong. (Abstract shortened by UMI.)
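A toy version of that scheduling logic is sketched below; the rates, thresholds, and the linear aging law are assumptions for illustration and are not taken from the thesis.

import math

Q_MAX = 0.02        # acceptable point unavailability (assumed)
MTI = 30.0          # minimum economical testing interval, days (assumed)
LAMBDA_0 = 2e-4     # standby failure rate just after overhaul, per day (assumed)
AGING = 1e-6        # linear growth of the failure rate per day of age (assumed)

def schedule(horizon_days=3650):
    t, age, tests, overhauls = 0.0, 0.0, [], []
    while t < horizon_days:
        lam = LAMBDA_0 + AGING * age              # failure rate grows with age since overhaul
        interval = -math.log(1 - Q_MAX) / lam     # time until 1 - exp(-lam*dt) reaches Q_MAX
        if interval < MTI:                        # tests would be too frequent: overhaul instead
            overhauls.append(round(t, 1))
            age = 0.0
            continue
        t += interval
        age += interval
        tests.append(round(t, 1))                 # a test resets point unavailability to ~0
    return tests, overhauls

tests, overhauls = schedule()
print(len(tests), "tests and", len(overhauls), "overhauls over ten years")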
Malt, U
1986-01-01
The reliability of the DSM-III is superior to that of other classification systems available in psychiatry. However, reliability depends on proper knowledge of the system. Some commonly overlooked pitfalls reducing the reliability of axis 1 diagnoses are discussed. Secondly, some problems of validity of axes 1 and 2 are considered. This is done by discussing the differential diagnosis of organic mental disorders and other psychiatric disorders with concomitant physical dysfunction, and the diagnoses of post-traumatic stress disorders and adjustment disorders, among others. The emphasis on health care seeking behaviour as a diagnostic criterion in the DSM-III system may cause a social, racial, and sexual bias in DSM-III diagnoses. The present discussion of the DSM-III system from a clinical point of view indicates the need for validation studies based on clinical experience with the DSM-III. These studies should include more out-patients and patients with psychopathology who do not seek psychiatric treatment. Such studies must also apply alternative diagnostic standards such as the ICD-9 and not rely only on structured psychiatric interviews constructed for DSM-III diagnoses. The discussion of axis 4 points to the problem of wanting to combine reliable rating with clinically meaningful information. It is concluded that the most important issue to be settled regarding axis 4 in future revisions is the aim of including this axis. The discussion of axis 5 concludes that axis 5 is biased toward poor functioning and thus may be less useful when applied to patients seen outside hospitals. Despite these problems with the DSM-III, our experience indicates that the use of the DSM-III is fruitful for the patient, the clinician, and the researcher. Thus, the cost in time and effort needed to learn to use the DSM-III properly is small compared to the benefits achieved by using the system.
Tailoring a Human Reliability Analysis to Your Industry Needs
NASA Technical Reports Server (NTRS)
DeMott, D. L.
2016-01-01
Accidents caused by human error that result in catastrophic consequences include airline industry mishaps, medical malpractice, medication mistakes, aerospace failures, major oil spills, transportation mishaps, power production failures, and manufacturing facility incidents. Human Reliability Assessment (HRA) is used to analyze the inherent risk of human behavior or actions introducing errors into the operation of a system or process. These assessments can be used to identify where errors are most likely to arise and the potential risks involved if they do occur. Using the basic concepts of HRA, an evolving group of methodologies is used to meet various industry needs. Determining which methodology or combination of techniques will provide a quality human reliability assessment is a key element in developing effective strategies for understanding and dealing with risks caused by human errors. There are a number of concerns and difficulties in "tailoring" a Human Reliability Assessment (HRA) for different industries. Although a variety of HRA methodologies are available to analyze human error events, determining the most appropriate tools to provide the most useful results can depend on industry-specific cultures and requirements. Methodology selection may be based on a variety of factors that include: 1) how people act and react in different industries, 2) expectations based on industry standards, 3) factors that influence how the human errors could occur, such as tasks, tools, environment, workplace, support, training, and procedures, 4) the type and availability of data, 5) how the industry views risk and reliability, and 6) the types of emergencies, contingencies, and routine tasks. Other considerations for methodology selection should be based on what information is needed from the assessment. If the principal concern is determination of the primary risk factors contributing to the potential human error, a more detailed analysis method may be employed, versus a requirement to provide a numerical value as part of a probabilistic risk assessment. Industries involving humans operating large equipment or transport systems (e.g., railroads or airlines) have more need to address the man-machine interface than medical workers administering medications. Human error occurs in every industry; in most cases the consequences are relatively benign and occasionally beneficial. In cases where the results can have disastrous consequences, the use of human reliability techniques to identify and classify the risk of human errors allows a company more opportunities to mitigate or eliminate these types of risks and prevent costly tragedies.
General Aviation Aircraft Reliability Study
NASA Technical Reports Server (NTRS)
Pettit, Duane; Turnbull, Andrew; Roelant, Henk A. (Technical Monitor)
2001-01-01
This reliability study was performed in order to provide the aviation community with an estimate of Complex General Aviation (GA) Aircraft System reliability. To successfully improve the safety and reliability for the next generation of GA aircraft, a study of current GA aircraft attributes was prudent. This was accomplished by benchmarking the reliability of operational Complex GA Aircraft Systems. Specifically, Complex GA Aircraft System reliability was estimated using data obtained from the logbooks of a random sample of the Complex GA Aircraft population.
Advanced telemetry systems for payloads. Technology needs, objectives and issues
NASA Technical Reports Server (NTRS)
1990-01-01
The current trends in advanced payload telemetry are the new developments in advanced modulation/coding, the applications of intelligent techniques, data distribution processing, and advanced signal processing methodologies. Concerted efforts will be required to design ultra-reliable man-rated software to cope with these applications. The intelligence embedded and distributed throughout various segments of the telemetry system will need to be overridden by an operator in case of life-threatening situations, making it a real-time integration issue. Suitable MIL standards on physical interfaces and protocols will be adopted to suit the payload telemetry system. New technologies and techniques will be developed for fast retrieval of mass data. Currently, these technology issues are being addressed to provide more efficient, reliable, and reconfigurable systems. There is a need, however, to change the operation culture. The current role of NASA as a leader in developing all the new innovative hardware should be altered to save both time and money. We should use all the available hardware/software developed by the industry and use the existing standards rather than inventing our own.
Applying reliability analysis to design electric power systems for More-electric aircraft
NASA Astrophysics Data System (ADS)
Zhang, Baozhu
The More-Electric Aircraft (MEA) is a type of aircraft that replaces conventional hydraulic and pneumatic systems with electrically powered components. These changes have significantly challenged the aircraft electric power system design. This thesis investigates how reliability analysis can be applied to automatically generate system topologies for the MEA electric power system. We first use a traditional method of reliability block diagrams to analyze the reliability level on different system topologies. We next propose a new methodology in which system topologies, constrained by a set reliability level, are automatically generated. The path-set method is used for analysis. Finally, we interface these sets of system topologies with control synthesis tools to automatically create correct-by-construction control logic for the electric power system.
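The path-set calculation itself is straightforward; the sketch below applies inclusion-exclusion over the minimal path sets of a small, entirely hypothetical topology (the component names and reliabilities are not from the thesis).

from itertools import combinations

component_reliability = {"gen_L": 0.999, "gen_R": 0.999,
                         "contactor_1": 0.9995, "contactor_2": 0.9995,
                         "bus_tie": 0.998}
# Each minimal path set lists components that, if all healthy, can power the critical bus
path_sets = [{"gen_L", "contactor_1"},
             {"gen_R", "contactor_2"},
             {"gen_R", "bus_tie", "contactor_1"}]

def system_reliability(paths, rel):
    # P(at least one path set works), assuming independent component failures
    total = 0.0
    for r in range(1, len(paths) + 1):
        for subset in combinations(paths, r):
            union = set().union(*subset)
            term = 1.0
            for c in union:
                term *= rel[c]
            total += (-1) ** (r + 1) * term       # inclusion-exclusion sign
    return total

print(system_reliability(path_sets, component_reliability))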
Automatic control design procedures for restructurable aircraft control
NASA Technical Reports Server (NTRS)
Looze, D. P.; Krolewski, S.; Weiss, J.; Barrett, N.; Eterno, J.
1985-01-01
A simple, reliable automatic redesign procedure for restructurable control is discussed. This procedure is based on Linear Quadratic (LQ) design methodologies. It employs a robust control system design for the unfailed aircraft to minimize the effects of failed surfaces and to extend the time available for restructuring the Flight Control System. The procedure uses the LQ design parameters for the unfailed system as a basis for choosing the design parameters of the failed system. This philosophy allows the engineering trade-offs that were present in the nominal design to be inherited by the restructurable design. In particular, it allows bandwidth limitations and performance trade-offs to be incorporated in the redesigned system. The procedure also has several other desirable features. It effectively redistributes authority among the available control effectors to maximize the system performance subject to actuator limitations and constraints. It provides a graceful performance degradation as the amount of control authority lessens. When given the parameters of the unfailed aircraft, the automatic redesign procedure reproduces the nominal control system design.
NASA Astrophysics Data System (ADS)
O'Connell, M.; Macknick, J.; Voisin, N.; Fu, T.
2017-12-01
The western US electric grid is highly dependent upon water resources for reliable operation. Hydropower and water-cooled thermoelectric technologies represent 67% of generating capacity in the western region of the US. While water resources provide a significant amount of generation and reliability for the grid, these same resources can represent vulnerabilities during times of drought or low flow conditions. A lack of water affects water-dependent technologies and can result in more expensive generators needing to run in order to meet electric grid demand, resulting in higher electricity prices and a higher cost to operate the grid. A companion study assesses the impact of changes in water availability and air temperatures on power operations by directly derating hydro and thermo-electric generators. In this study we assess the sensitivities and tipping points of water availability compared with higher fuel prices in electricity sector operations. We evaluate the impacts of varying electricity prices by modifying fuel prices for coal and natural gas. We then analyze the difference in simulation results between changes in fuel prices in combination with water availability and air temperature variability. We simulate three fuel price scenarios for a 2010 baseline scenario along with 100 historical and future hydro-climate conditions. We use the PLEXOS electricity production cost model to optimize power system dispatch and cost decisions under each combination of fuel price and water constraint. Some of the metrics evaluated are total production cost, generation type mix, emissions, transmission congestion, and reserve procurement. These metrics give insight to how strained the system is, how much flexibility it still has, and to what extent water resource availability or fuel prices drive changes in the electricity sector operations. This work will provide insights into current electricity operations as well as future cases of increased penetration of variable renewable generation technologies such as wind and solar.
A review of health resource tracking in developing countries.
Powell-Jackson, Timothy; Mills, Anne
2007-11-01
Timely, reliable and complete information on financial resources in the health sector is critical for sound policy making and planning, particularly in developing countries where resources are both scarce and unpredictable. Health resource tracking has a long history and has seen renewed interest more recently as pressure has mounted to improve accountability for the attainment of the health Millennium Development Goals. We review the methods used to track health resources and recent experiences of their application, with a view to identifying the major challenges that must be overcome if data availability and reliability are to improve. At the country level, there have been important advances in the refinement of the National Health Accounts (NHA) methodology, which is now regarded as the international standard. Significant efforts have also been put into the development of methods to track disease-specific expenditures. However, NHA as a framework can do little to address the underlying problem of weak government public expenditure management and information systems that provide much of the raw data. The experience of institutionalizing NHA suggests progress has been uneven and there is a potential for stand-alone disease accounts to make the situation worse by undermining capacity and confusing technicians. Global level tracking of donor assistance to health relies to a large extent on the OECD's Creditor Reporting System. Despite improvements in its coverage and reliability, the demand for estimates of aid to control of specific diseases is resulting in multiple, uncoordinated data requests to donor agencies, placing additional workload on the providers of information. The emergence of budget support aid modalities poses a methodological challenge to health resource tracking, as such support is difficult to attribute to any particular sector or health programme. Attention should focus on improving underlying financial and information systems at the country level, which will facilitate more reliable and timely reporting of NHA estimates. Effective implementation of a framework to make donors more accountable to recipient countries and the international community will improve the availability of financial data on their activities.
Rocket engine system reliability analyses using probabilistic and fuzzy logic techniques
NASA Technical Reports Server (NTRS)
Hardy, Terry L.; Rapp, Douglas C.
1994-01-01
The reliability of rocket engine systems was analyzed by using probabilistic and fuzzy logic techniques. Fault trees were developed for integrated modular engine (IME) and discrete engine systems, and then were used with the two techniques to quantify reliability. The IRRAS (Integrated Reliability and Risk Analysis System) computer code, developed for the U.S. Nuclear Regulatory Commission, was used for the probabilistic analyses, and FUZZYFTA (Fuzzy Fault Tree Analysis), a code developed at NASA Lewis Research Center, was used for the fuzzy logic analyses. Although both techniques provided estimates of the reliability of the IME and discrete systems, probabilistic techniques emphasized uncertainty resulting from randomness in the system whereas fuzzy logic techniques emphasized uncertainty resulting from vagueness in the system. Because uncertainty can have both random and vague components, both techniques were found to be useful tools in the analysis of rocket engine system reliability.
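To make the contrast concrete, the sketch below evaluates the top event of a small, hypothetical fault tree twice: once with point probabilities and once with interval bounds, the latter standing in (very loosely) for the vagueness a fuzzy analysis addresses. The tree and the numbers are illustrative, not the IME or discrete engine models.

def p_or(*ps):
    # OR gate for independent basic events
    q = 1.0
    for p in ps:
        q *= 1.0 - p
    return 1.0 - q

def p_and(*ps):
    # AND gate for independent basic events
    q = 1.0
    for p in ps:
        q *= p
    return q

def top_event(p_valve, p_pump_a, p_pump_b, p_controller):
    # Hypothetical tree: feed fails if the valve fails, OR both pumps fail, OR the controller fails
    return p_or(p_valve, p_and(p_pump_a, p_pump_b), p_controller)

print(top_event(1e-4, 1e-3, 1e-3, 5e-5))                      # point estimate
print(top_event(5e-5, 5e-4, 5e-4, 2e-5),                      # optimistic bound
      top_event(2e-4, 2e-3, 2e-3, 1e-4))                      # pessimistic bound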
Durability reliability analysis for corroding concrete structures under uncertainty
NASA Astrophysics Data System (ADS)
Zhang, Hao
2018-02-01
This paper presents a durability reliability analysis of reinforced concrete structures subject to the action of marine chloride. The focus is to provide insight into the role of epistemic uncertainties on durability reliability. The corrosion model involves a number of variables whose probabilistic characteristics cannot be fully determined due to the limited availability of supporting data. All sources of uncertainty, both aleatory and epistemic, should be included in the reliability analysis. Two methods are available to formulate the epistemic uncertainty: the imprecise probability-based method and the purely probabilistic method in which the epistemic uncertainties are modeled as random variables. The paper illustrates how the epistemic uncertainties are modeled and propagated in the two methods, and shows how epistemic uncertainties govern the durability reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woodhouse, Michael; Jones-Albertus, Rebecca; Feldman, David
2016-05-01
This report examines the remaining challenges to achieving the competitive photovoltaic (PV) costs and large-scale deployment envisioned under the U.S. Department of Energy's SunShot Initiative. Solar-energy cost reductions can be realized through lower PV module and balance-of-system (BOS) costs as well as improved system efficiency and reliability. Numerous combinations of PV improvements could help achieve the levelized cost of electricity (LCOE) goals because of the tradeoffs among key metrics like module price, efficiency, and degradation rate as well as system price and lifetime. Using LCOE modeling based on bottom-up cost analysis, two specific pathways are mapped to exemplify the many possible approaches to module cost reductions of 29%-38% between 2015 and 2020. BOS hardware and soft cost reductions, ranging from 54%-77% of total cost reductions, are also modeled. The residential sector's high supply-chain costs, labor requirements, and customer-acquisition costs give it the greatest BOS cost-reduction opportunities, followed by the commercial sector, although opportunities are available to the utility-scale sector as well. Finally, a future scenario is considered in which very high PV penetration requires additional costs to facilitate grid integration and increased power-system flexibility--which might necessitate even lower solar LCOEs. The analysis of a pathway to 3-5 cents/kWh PV systems underscores the importance of combining robust improvements in PV module and BOS costs as well as PV system efficiency and reliability if such aggressive long-term targets are to be achieved.
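For orientation, a stripped-down LCOE calculation of the kind such analyses rest on is sketched below; the input values are placeholders, not the report's bottom-up estimates.

def lcoe(capex_per_kw, opex_per_kw_yr, kwh_per_kw_yr, degradation, lifetime, discount):
    # Levelized cost = discounted lifetime cost / discounted lifetime energy, in $/kWh
    cost, energy = capex_per_kw, 0.0
    for year in range(1, lifetime + 1):
        df = (1 + discount) ** -year
        cost += opex_per_kw_yr * df
        energy += kwh_per_kw_yr * (1 - degradation) ** (year - 1) * df
    return cost / energy

# Hypothetical utility-scale system
print(lcoe(capex_per_kw=1000, opex_per_kw_yr=15, kwh_per_kw_yr=1800,
           degradation=0.005, lifetime=30, discount=0.06))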
Bringing the CMS distributed computing system into scalable operations
NASA Astrophysics Data System (ADS)
Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.
2010-04-01
Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-03
..., EPRI/NRC- RES Fire Human Reliability Analysis Guidelines, Draft Report for Comment AGENCY: Nuclear... Human Reliability Analysis Guidelines, Draft Report for Comment'' (December 11, 2009; 74 FR 65810). This... Human Reliability Analysis Guidelines'' is available electronically under ADAMS Accession Number...
Reliability of Fault Tolerant Control Systems. Part 1
NASA Technical Reports Server (NTRS)
Wu, N. Eva
2001-01-01
This paper reports Part I of a two-part effort that is intended to delineate the relationship between reliability and fault-tolerant control in a quantitative manner. Reliability analysis of fault-tolerant control systems is performed using Markov models. Reliability properties peculiar to fault-tolerant control systems are emphasized. As a consequence, coverage of failures through redundancy management can be severely limited. It is shown that in the early life of a system composed of highly reliable subsystems, the reliability of the overall system is affine with respect to coverage, and inadequate coverage induces dominant single point failures. The utility of some existing software tools for assessing the reliability of fault-tolerant control systems is also discussed. Coverage modeling is attempted in Part II in a way that captures its dependence on the control performance and on the diagnostic resolution.
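A minimal closed-form example of the affine-in-coverage property, for a duplex system in which a failure is successfully handled with probability c and otherwise fails the system outright (an illustrative model, not the paper's):

import numpy as np

def duplex_reliability(t, lam, c):
    # R(t) for two active units with per-unit failure rate lam and coverage c;
    # note the expression is affine in c
    return np.exp(-2 * lam * t) + 2 * c * (np.exp(-lam * t) - np.exp(-2 * lam * t))

lam, t = 1e-4, 10.0                               # highly reliable units, early in life
for c in (0.99, 0.999, 0.9999):
    print(c, 1 - duplex_reliability(t, lam, c))   # unreliability tracks (1 - c): single point failures dominate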
NASA Astrophysics Data System (ADS)
Zhang, Ding; Zhang, Yingjie
2017-09-01
A framework for reliability and maintenance analysis of job shop manufacturing systems is proposed in this paper. An efficient preventive maintenance (PM) policy in terms of failure effects analysis (FEA) is proposed. Subsequently, reliability evaluation and component importance measures based on FEA are performed under the PM policy. A job shop manufacturing system is used to validate the reliability evaluation and the dynamic maintenance policy. The results obtained are compared with existing methods and the effectiveness is validated. Some vague understandings of issues such as network modelling, vulnerability identification, the evaluation criteria of repairable systems, and the PM policy during manufacturing system reliability analysis are clarified. This framework can help with reliability optimisation and the rational allocation of maintenance resources in job shop manufacturing systems.
Hu, Peter F; Yang, Shiming; Li, Hsiao-Chi; Stansbury, Lynn G; Yang, Fan; Hagegeorge, George; Miller, Catriona; Rock, Peter; Stein, Deborah M; Mackenzie, Colin F
2017-01-01
Research and practice based on automated electronic patient monitoring and data collection systems is significantly limited by system downtime. We asked whether a triple-redundant Monitor of Monitors System (MoMs), which collects and summarizes key information from system-wide data sources, could achieve high fault tolerance, early diagnosis of system failure, and improved data collection rates. In our Level I trauma center, patient vital signs (VS) monitors were networked to collect real-time patient physiologic data streams from 94 bed units in our various resuscitation, operating, and critical care units. To minimize the impact of collection server failure, three BedMaster® VS servers were used in parallel to collect data from all bed units. To locate and diagnose system failures, we summarized critical information from high-throughput data streams in real time in a dashboard viewer, and we compared the pre- and post-MoMs phases to evaluate data collection performance in terms of availability time, active collection rates, and gap duration, occurrence, and categories. Single-server collection rates in the 3-month period before MoMs deployment ranged from 27.8% to 40.5%, with a combined 79.1% collection rate. Reasons for gaps included collection server failure, software instability, individual bed setting inconsistency, and monitor servicing. In the 6-month post-MoMs deployment period, average collection rates were 99.9%. A triple-redundant patient data collection system with real-time diagnostic information summarization and representation improved the reliability of massive clinical data collection to nearly 100% in a Level I trauma center. Such a data collection framework may also increase the automation level of hospital-wide information aggregation for optimal allocation of health care resources.
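As a rough aside, the benefit of running collection servers in parallel can be bounded with a one-line availability model, assuming independent failures (the measured 79.1% and 99.9% figures above reflect real, partly dependent failure modes and the added diagnostics, not this formula).

def redundant_availability(single, n):
    # Availability of n independent collectors in parallel
    return 1 - (1 - single) ** n

for a in (0.30, 0.40, 0.79):
    print(a, redundant_availability(a, 3))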
A reliability analysis tool for SpaceWire network
NASA Astrophysics Data System (ADS)
Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou
2017-04-01
SpaceWire is a standard for on-board satellite networks and a basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power, and fault protection. High reliability is a vital issue for spacecraft; therefore, it is very important to analyze and improve the reliability performance of a SpaceWire network. This paper deals with the problem of reliability modeling and analysis of SpaceWire networks. According to the functional division of a distributed network, a reliability analysis method based on tasks is proposed: the reliability analysis of each task leads to a system reliability matrix, and the reliability of the network system can be deduced by integrating the reliability indexes in this matrix. With this method, we developed a reliability analysis tool for SpaceWire networks based on VC, in which the computation schemes for the reliability matrix and the multi-path-task reliability are also implemented. Using this tool, we analyzed several cases on typical architectures, and the analytic results indicate that a redundant architecture has better reliability performance than a basic one. In practice, a dual-redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. Finally, this reliability analysis tool will have a direct influence on both task division and topology selection in the design phase of a SpaceWire network system.
NASA Technical Reports Server (NTRS)
Sargusingh, Miriam J.; Nelson, Jason R.
2014-01-01
NASA has highlighted reliability as critical to future human space exploration, particularly in the area of environmental controls and life support systems. The Advanced Exploration Systems (AES) projects have been encouraged to pursue higher reliability components and systems as part of technology development plans. However, no consensus has been reached on what is meant by improving on reliability, or on how to assess reliability within the AES projects. This became apparent when trying to assess reliability as one of several figures of merit for a regenerable water architecture trade study. In the spring of 2013, the AES Water Recovery Project hosted a series of events at Johnson Space Center with the intended goal of establishing a common language and understanding of NASA's reliability goals, and equipping the projects with acceptable means of assessing the respective systems. This campaign included an educational series in which experts from across the agency and academia provided information on terminology, tools, and techniques associated with evaluating and designing for system reliability. The campaign culminated in a workshop that included members of the Environmental Control and Life Support System and AES communities. The goal of this workshop was to develop a consensus on what reliability means to AES and identify methods for assessing low- to mid-technology readiness level technologies for reliability. This paper details the results of that workshop.
Validity, Reliability, and Inertia of Four Different Temperature Capsule Systems.
Bongers, Coen C W G; Daanen, Hein A M; Bogerd, Cornelis P; Hopman, Maria T E; Eijsvogels, Thijs M H
2018-01-01
Telemetric temperature capsule systems are wireless, relatively noninvasive, and easily applicable in field conditions and have therefore great advantages for monitoring core body temperature. However, the accuracy and responsiveness of available capsule systems have not been compared previously. Therefore, the aim of this study was to examine the validity, reliability, and inertia characteristics of four ingestible temperature capsule systems (i.e., CorTemp, e-Celsius, myTemp, and VitalSense). Ten temperature capsules were examined for each system in a temperature-controlled water bath during three trials. The water bath temperature gradually increased from 33°C to 44°C in trials 1 and 2 to assess the validity and reliability, and from 36°C to 42°C in trial 3 to assess the inertia characteristics of the temperature capsules. A systematic difference between capsule and water bath temperature was found for CorTemp (0.077°C ± 0.040°C), e-Celsius (-0.081°C ± 0.055°C), myTemp (-0.003°C ± 0.006°C), and VitalSense (-0.017°C ± 0.023°C; P < 0.010), with the lowest bias for the myTemp system (P < 0.001). A systematic difference was found between trial 1 and trial 2 for CorTemp (0.017°C ± 0.083°C; P = 0.030) and e-Celsius (-0.007°C ± 0.033°C; P = 0.019), whereas temperature values of myTemp (0.001°C ± 0.008°C) and VitalSense (0.002°C ± 0.014°C) did not differ (P > 0.05). Comparable inertia characteristics were found for CorTemp (25 ± 4 s), e-Celsius (21 ± 13 s), and myTemp (19 ± 2 s), whereas the VitalSense system responded more slowly (39 ± 6 s) to changes in water bath temperature (P < 0.001). Although differences in temperature and inertia were observed between capsule systems, an excellent validity, test-retest reliability, and inertia was found for each system between 36°C and 44°C after removal of outliers.
Space Shuttle Propulsion System Reliability
NASA Technical Reports Server (NTRS)
Welzyn, Ken; VanHooser, Katherine; Moore, Dennis; Wood, David
2011-01-01
This session includes the following presentations: (1) External Tank (ET) System Reliability and Lessons, (2) Space Shuttle Main Engine (SSME) Reliability Validated by a Million Seconds of Testing, (3) Reusable Solid Rocket Motor (RSRM) Reliability via Process Control, and (4) Solid Rocket Booster (SRB) Reliability via Acceptance and Testing.
Assured crew return capability Crew Emergency Return Vehicle (CERV) avionics
NASA Technical Reports Server (NTRS)
Myers, Harvey Dean
1990-01-01
The Crew Emergency Return Vehicle (CERV) is being defined to provide Assured Crew Return Capability (ACRC) for Space Station Freedom. The CERV, in providing the standby lifeboat capability, would remain in a dormant mode over long periods of time, as would a lifeboat on a ship at sea. The vehicle must be simple, reliable, and constantly available to assure the crew's safety. The CERV must also provide this capability in a cost-effective and affordable manner. The CERV Project philosophy of a simple vehicle is to maximize its usability by a physically deconditioned crew. The vehicle reliability goes unquestioned since, when needed, it is the vehicle of last resort. Therefore, its systems and subsystems must be simple, proven, state-of-the-art technology with sufficient redundancy to make it available for use as required for the life of the program. The CERV Project Phase 1'/2 Request for Proposal (RFP) is currently scheduled for release on October 2, 1989. The Phase 1'/2 effort will affirm the existing project requirements or amend and modify them based on a thorough evaluation of the contractor(s) recommendations. The system definition phase, Phase 2, will serve to define CERV systems and subsystems. The current CERV Project schedule has Phase 2 scheduled to begin October 1990. Since a firm CERV avionics design is not in place at this time, the treatment of the CERV avionics complement for the reference configuration is not intended to express a preference with regard to a system or subsystem.
Henrickson Parker, Sarah; Flin, Rhona; McKinley, Aileen; Yule, Steven
2013-06-01
Surgeons must demonstrate leadership to optimize performance and maximize patient safety in the operating room, but no behavior rating tool is available to measure leadership. Ten focus groups with members of the operating room team discussed surgeons' intraoperative leadership. Surgeons' leadership behaviors were extracted and used to finalize the Surgeons' Leadership Inventory (SLI), which was checked by surgeons (n = 6) for accuracy and face validity. The SLI was used to code video recordings (n = 5) of operations to test reliability. Eight elements of surgeons' leadership were included in the SLI: (1) maintaining standards, (2) managing resources, (3) making decisions, (4) directing, (5) training, (6) supporting others, (7) communicating, and (8) coping with pressure. Interrater reliability to code videos of surgeons' behaviors while operating using this tool was acceptable (κ = .70). The SLI is empirically grounded in focus group data and both the leadership and surgical literature. The interrater reliability of the system was acceptable. The inventory could be used for rating surgeons' leadership in the operating room for research or as a basis for postoperative feedback on performance. Copyright © 2013 Elsevier Inc. All rights reserved.
Assessing Performance of Multipurpose Reservoir System Using Two-Point Linear Hedging Rule
NASA Astrophysics Data System (ADS)
Sasireka, K.; Neelakantan, T. R.
2017-07-01
Reservoir operation is one of the important fields of water resource management. Innovative techniques in water resource management are focused on optimizing the available water and on decreasing the environmental impact of water utilization on the natural environment. In the operation of a multireservoir system, efficient regulation of releases to satisfy demands for various purposes such as domestic supply, irrigation, and hydropower can increase the benefit from the reservoir as well as significantly reduce the damage due to floods. Hedging is one of the emerging techniques in reservoir operation; it reduces the severity of drought by accepting a number of smaller shortages. The key objective of this paper is to maximize the minimum power production and improve the reliability of water supply for municipal and irrigation purposes by using a hedging rule. In this paper, a Type II two-point linear hedging rule is applied to improve the operation of the Bargi reservoir in the Narmada basin in India. The results obtained from simulation of the hedging rule are compared with results from the Standard Operating Policy; the comparison shows that the application of the hedging rule significantly improved the reliability of water supply, the reliability of irrigation releases, and firm power production.
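As a concrete illustration of the hedging idea, the sketch below implements a generic two-point linear hedging release rule: full demand is released when available water exceeds an upper trigger, rationing is applied linearly between the two trigger volumes, and all available water is released below the lower trigger. The trigger values and the exact functional form are assumptions for illustration; the Type II rule calibrated for the Bargi reservoir may be specified differently.

```python
def two_point_hedging_release(available_water, demand, v1, v2):
    """Generic two-point linear hedging rule (illustrative form, not the paper's calibrated rule).
    available_water: storage plus forecast inflow for the period
    demand: target release for the period
    v1, v2: lower and upper hedging trigger volumes (v1 < v2)"""
    if available_water >= v2:
        return demand               # no hedging: meet full demand
    if available_water <= v1:
        return available_water      # severe shortage: release whatever is available
    # Linear rationing between (v1, release = v1) and (v2, release = demand)
    fraction = (available_water - v1) / (v2 - v1)
    return v1 + fraction * (demand - v1)

# Example: demand of 120 units with triggers at 60 and 200 units of available water
print(two_point_hedging_release(150, demand=120, v1=60, v2=200))  # partial, hedged release
```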
2009-02-17
Identification of Classified Information in Unclassified DoD Systems During the Audit of Internal Controls and Data Reliability in the Deployable Disbursing System (Report No. D-2009-054)
NASA Technical Reports Server (NTRS)
Migneault, G. E.
1979-01-01
Emulation techniques applied to the analysis of the reliability of highly reliable computer systems for future commercial aircraft are described. The lack of credible precision in reliability estimates obtained by analytical modeling techniques is first established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precise outputs are quite sensitive to errors of approximation in their input data. Next, the technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. Use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques. Finally, an illustrative example is presented to demonstrate from actual use the promise of the proposed application of emulation.
Design and Analysis of a Flexible, Reliable Deep Space Life Support System
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2012-01-01
This report describes a flexible, reliable, deep space life support system design approach that uses either storage or recycling or both together. The design goal is to provide the needed life support performance with the required ultra reliability for the minimum Equivalent System Mass (ESM). Recycling life support systems used with multiple redundancy can have sufficient reliability for deep space missions but they usually do not save mass compared to mixed storage and recycling systems. The best deep space life support system design uses water recycling with sufficient water storage to prevent loss of crew if recycling fails. Since the amount of water needed for crew survival is a small part of the total water requirement, the required amount of stored water is significantly less than the total to be consumed. Water recycling with water, oxygen, and carbon dioxide removal material storage can achieve the high reliability of full storage systems with only half the mass of full storage and with less mass than the highly redundant recycling systems needed to achieve acceptable reliability. Improved recycling systems with lower mass and higher reliability could perform better than systems using storage.
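To make the storage-versus-recycling trade concrete, the sketch below compares the mass of storing all mission water against the mass of a recycler plus a much smaller contingency store sized only for crew survival after a recycler failure. All masses, rates, and durations are illustrative placeholders, not figures from the report, and Equivalent System Mass penalties for power, volume, and cooling are omitted.

```python
def full_storage_mass_kg(crew, mission_days, water_kg_per_person_day=3.5, tankage_factor=1.15):
    """Mass of storing all water for the mission (per-person rate and tankage factor are assumptions)."""
    return crew * mission_days * water_kg_per_person_day * tankage_factor

def recycling_with_contingency_mass_kg(crew, recycler_mass_kg=1500.0,
                                       survival_water_kg_per_person_day=2.0,
                                       contingency_days=30, tankage_factor=1.15):
    """Mass of a water recycler plus stored survival water covering a recycler outage
    (all parameter values are hypothetical)."""
    contingency = crew * contingency_days * survival_water_kg_per_person_day * tankage_factor
    return recycler_mass_kg + contingency

# Example: a crew of 4 on a 900-day mission (illustrative numbers only)
print(full_storage_mass_kg(4, 900))              # ~14,490 kg of stored water
print(recycling_with_contingency_mass_kg(4))     # ~1,776 kg with recycling plus contingency
```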
Securing Ground Data System Applications for Space Operations
NASA Technical Reports Server (NTRS)
Pajevski, Michael J.; Tso, Kam S.; Johnson, Bryan
2014-01-01
The increasing prevalence and sophistication of cyber attacks has prompted the Multimission Ground Systems and Services (MGSS) Program Office at Jet Propulsion Laboratory (JPL) to initiate the Common Access Manager (CAM) effort to protect software applications used in Ground Data Systems (GDSs) at JPL and other NASA Centers. The CAM software provides centralized services and software components used by GDS subsystems to meet access control requirements and ensure data integrity, confidentiality, and availability. In this paper we describe the CAM software; examples of its integration with spacecraft commanding software applications and an information management service; and measurements of its performance and reliability.
Advanced Data Acquisition Systems
NASA Technical Reports Server (NTRS)
Perotti, J.
2003-01-01
Current and future requirements of the aerospace sensors and transducers field make it necessary to design and develop new data acquisition devices and instrumentation systems. New designs are sought to incorporate self-health, self-calibrating, and self-repair capabilities, allowing greater measurement reliability and extended calibration cycles. With the addition of power management schemes, state-of-the-art data acquisition systems allow data to be processed and presented to the users with increased efficiency and accuracy. The design architecture presented in this paper displays an innovative approach to data acquisition systems. The design incorporates: electronic health self-check, device/system self-calibration, electronics and function self-repair, failure detection and prediction, and power management (reduced power consumption). These requirements are driven by the aerospace industry's need to reduce operations and maintenance costs, to accelerate processing time, and to provide reliable hardware with minimum costs. The project's design architecture incorporates some commercially available components identified during the market research investigation, such as Field Programmable Gate Arrays (FPGAs), Programmable Analog Integrated Circuits (PAC ICs), Field Programmable Analog Arrays (FPAAs), and Digital Signal Processing (DSP) for electronic/system control, along with investigation of specific characteristics found in these technologies, such as Electronic Component Mean Time Between Failure (MTBF) and Radiation Hardened Component Availability. There are three main sections discussed in the design architecture presented in this document: (a) the Analog Signal Module Section, (b) the Digital Signal/Control Module Section, and (c) the Power Management Module Section. These sections are discussed in detail in the following pages. This approach to data acquisition systems has resulted in the assignment of patent rights to Kennedy Space Center under U.S. patent # 6,462,684. Furthermore, the NASA KSC commercialization office has issued licensing rights to Circuit Avenue Netrepreneurs, LLC, a minority-owned business founded in 1999 and located in Camden, NJ.
NASA Technical Reports Server (NTRS)
Brenner, Richard; Lala, Jaynarayan H.; Nagle, Gail A.; Schor, Andrei; Turkovich, John
1994-01-01
This program demonstrated the integration of a number of technologies that can increase the availability and reliability of launch vehicles while lowering costs. Availability is increased with an advanced guidance algorithm that adapts trajectories in real-time. Reliability is increased with fault-tolerant computers and communication protocols. Costs are reduced by automatically generating code and documentation. This program was realized through the cooperative efforts of academia, industry, and government. The NASA-LaRC coordinated the effort, while Draper performed the integration. Georgia Institute of Technology supplied a weak Hamiltonian finite element method for optimal control problems. Martin Marietta used MATLAB to apply this method to a launch vehicle (FENOC). Draper supplied the fault-tolerant computing and software automation technology. The fault-tolerant technology includes sequential and parallel fault-tolerant processors (FTP & FTPP) and authentication protocols (AP) for communication. Fault-tolerant technology was incrementally incorporated. Development culminated with a heterogeneous network of workstations and fault-tolerant computers using AP. Draper's software automation system, ASTER, was used to specify a static guidance system based on FENOC, navigation, flight control (GN&C), models, and the interface to a user interface for mission control. ASTER generated Ada code for GN&C and C code for models. An algebraic transform engine (ATE) was developed to automatically translate MATLAB scripts into ASTER.
Piromalis, Dimitrios; Arvanitis, Konstantinos
2016-08-04
Wireless Sensor and Actuators Networks (WSANs) constitute one of the most challenging technologies with tremendous socio-economic impact for the next decade. Functionally and energy optimized hardware systems and development tools maybe is the most critical facet of this technology for the achievement of such prospects. Especially, in the area of agriculture, where the hostile operating environment comes to add to the general technological and technical issues, reliable and robust WSAN systems are mandatory. This paper focuses on the hardware design architectures of the WSANs for real-world agricultural applications. It presents the available alternatives in hardware design and identifies their difficulties and problems for real-life implementations. The paper introduces SensoTube, a new WSAN hardware architecture, which is proposed as a solution to the various existing design constraints of WSANs. The establishment of the proposed architecture is based, firstly on an abstraction approach in the functional requirements context, and secondly, on the standardization of the subsystems connectivity, in order to allow for an open, expandable, flexible, reconfigurable, energy optimized, reliable and robust hardware system. The SensoTube implementation reference model together with its encapsulation design and installation are analyzed and presented in details. Furthermore, as a proof of concept, certain use cases have been studied in order to demonstrate the benefits of migrating existing designs based on the available open-source hardware platforms to SensoTube architecture.
In vitro model to evaluate reliability and accuracy of a dental shade-matching instrument.
Kim-Pusateri, Seungyee; Brewer, Jane D; Dunford, Robert G; Wee, Alvin G
2007-11-01
There are several electronic shade-matching instruments available for clinical use; unfortunately, there are limited acceptable in vitro models to evaluate their reliability and accuracy. The purpose of this in vitro study was to evaluate the reliability and accuracy of a dental clinical shade-matching instrument. Using the shade-matching instrument (ShadeScan), color measurements were made of 3 commercial shade guides (VITA Classical, VITA 3D-Master, and Chromascop). Shade tabs were selected and placed in the middle of a gingival matrix (Shofu Gummy), with tabs of the same nominal shade from additional shade guides placed on both sides. Measurements were made of the central region of the shade tab inside a black box. For the reliability assessment, each shade tab from each of the 3 shade guide types was measured 10 times. For the accuracy assessment, each shade tab from 10 guides of each of the 3 types evaluated was measured once. Reliability, accuracy, and 95% confidence intervals were calculated for each shade tab. Differences were determined by 1-way ANOVA followed by the Bonferroni multiple comparison procedure. Reliability of ShadeScan was as follows: VITA Classical = 95.0%, VITA 3D-Master = 91.2%, and Chromascop = 76.5%. Accuracy of ShadeScan was as follows: VITA Classical = 65.0%, VITA 3D-Master = 54.2%, Chromascop = 84.5%. This in vitro study showed a varying degree of reliability and accuracy for ShadeScan, depending on the type of shade guide system used.
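Reliability and accuracy as defined in this kind of study are proportions over repeated measurements, and their confidence intervals follow from standard proportion formulas. The sketch below, assuming hypothetical lists of shade readings, shows one plausible way to compute them; the study's exact operational definitions and interval method may differ.

```python
from collections import Counter
import math

def reliability(readings):
    """Proportion of repeated readings of one tab that agree with the most frequent shade match."""
    counts = Counter(readings)
    modal_count = counts.most_common(1)[0][1]
    return modal_count / len(readings)

def accuracy(readings, nominal_shade):
    """Proportion of readings that match the tab's nominal (labeled) shade."""
    return sum(r == nominal_shade for r in readings) / len(readings)

def proportion_ci(p, n, z=1.96):
    """Approximate 95% confidence interval for a proportion (normal approximation)."""
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Example with hypothetical readings of an "A2" tab measured 10 times
readings = ["A2", "A2", "A2", "A3", "A2", "A2", "A2", "A2", "A3", "A2"]
print(reliability(readings), accuracy(readings, "A2"), proportion_ci(0.8, 10))
```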
NASA Astrophysics Data System (ADS)
Reza, S. M. Mohsin
Design options have been evaluated for the Modular Helium Reactor (MHR) for higher temperature operation. An alternative configuration for the MHR coolant inlet flow path is developed to reduce the peak vessel temperature (PVT). The coolant inlet path is shifted from the annular path between the reactor core barrel and the vessel wall to a path through the permanent side reflector (PSR). The number and dimensions of coolant holes are varied to optimize the pressure drop, the inlet velocity, and the percentage of graphite removed from the PSR to create this inlet path. With the removal of ~10% of the graphite from the PSR, the PVT is reduced from 541°C to 421°C. A new design for the graphite block core has been evaluated and optimized to reduce the inlet coolant temperature with the aim of further reducing the PVT. The dimensions and number of fuel rods and coolant holes, and the triangular pitch, have been changed and optimized. Different packing fractions for the new core design have been used to conserve the number of fuel particles. Thermal properties for the fuel elements are calculated and incorporated into these analyses. The inlet temperature, mass flow, and bypass flow are optimized to limit the peak fuel temperature (PFT) within an acceptable range. Using both of these modifications together, the PVT is reduced to ~350°C while keeping the outlet temperature at 950°C and maintaining the PFT within acceptable limits. The vessel and fuel temperatures during low-pressure conduction cooldown and high-pressure conduction cooldown transients are found to be well below the design limits. Reliability and availability studies have also been accomplished for coupled nuclear hydrogen production processes based on the sulfur-iodine thermochemical process and the high-temperature electrolysis process. Fault tree models for both processes are developed. Using information on system configuration, component failure probability, component repair time, and system operating modes and conditions, the system reliability and availability are assessed. Required redundancies are added to improve system reliability and to optimize the plant design for economic performance. The failure rates and outage factors of both processes are found to be well below the maximum acceptable range.
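The reliability and availability assessment described for the hydrogen production processes combines component failure rates and repair times into system-level figures. The sketch below shows the standard steady-state building blocks (component availability from MTBF and MTTR, and series/parallel combination) that such an assessment typically rests on; the structure and numbers are illustrative assumptions, not the fault tree models developed in this work.

```python
from math import prod

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability of a repairable component."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series(avails):
    """System is up only if every component in the chain is up."""
    return prod(avails)

def parallel(avails):
    """System is up if at least one redundant train is up."""
    return 1.0 - prod(1.0 - a for a in avails)

# Illustrative example: two redundant process trains feeding one shared heat-delivery loop
train = availability(mtbf_hours=2000.0, mttr_hours=48.0)
loop = availability(mtbf_hours=8000.0, mttr_hours=24.0)
print(series([parallel([train, train]), loop]))
```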
18 CFR 39.3 - Electric Reliability Organization certification.
Code of Federal Regulations, 2010 CFR
2010-04-01
... operators of the Bulk-Power System, and other interested parties for improvement of the Electric Reliability... Reliability Standards that provide for an adequate level of reliability of the Bulk-Power System, and (2) Has...
The impact of rare earth cobalt permanent magnets on electromechanical device design
NASA Technical Reports Server (NTRS)
Fisher, R. L.; Studer, P. A.
1979-01-01
Specific motor designs which employ rare earth cobalt magnets are discussed, with special emphasis on their unique properties and magnetic field geometry. In addition to performance improvements and power savings, high-reliability devices are attainable. Both mechanism designers and systems engineers should be aware of the new performance levels which are currently becoming available as a result of rare earth cobalt magnets.
Navy Applications Experience with Small Wind Power Systems
1985-05-01
present state-of-the-art in small WECS technology, including environmental concerns, is reviewed. Also presented is how the technology is advancing to improve reliability and availability for effectively using...VAWT technology is still in its early stages of development. The horizontal-axis wind turbine (HAWT) technology has advanced to third and fourth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Manajit; Habte, Aron; Gueymard, Christian
As the world looks for low-carbon sources of energy, solar power stands out as the single most abundant energy resource on Earth. Harnessing this energy is the challenge for this century. Photovoltaics, solar heating and cooling, and concentrating solar power (CSP) are primary forms of energy applications using sunlight. These solar energy systems use different technologies, collect different fractions of the solar resource, and have different siting requirements and production capabilities. Reliable information about the solar resource is required for every solar energy application. This holds true for small installations on a rooftop as well as for large solar power plants; however, solar resource information is of particular interest for large installations, because they require substantial investment, sometimes exceeding 1 billion dollars in construction costs. Before such a project is undertaken, the best possible information about the quality and reliability of the fuel source must be made available. That is, project developers need reliable data about the solar resource available at specific locations, including historic trends with seasonal, daily, hourly, and (preferably) subhourly variability to predict the daily and annual performance of a proposed power plant. Without this data, an accurate financial analysis is not possible. Additionally, with the deployment of large amounts of distributed photovoltaics, there is an urgent need to integrate this source of generation to ensure the reliability and stability of the grid. Forecasting generation from the various sources will allow for larger penetrations of these generation sources because utilities and system operators can then ensure stable grid operations. Developed by the foremost experts in the field who have come together under the umbrella of the International Energy Agency's Solar Heating and Cooling Task 46, this handbook summarizes state-of-the-art information about all the above topics.
78 FR 44475 - Protection System Maintenance Reliability Standard
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-24
... Protection System Maintenance--Phase 2 (Reclosing Relays)). 12. NERC states that the proposed Reliability... of the relay inputs and outputs that are essential to proper functioning of the protection system...] Protection System Maintenance Reliability Standard AGENCY: Federal Energy Regulatory Commission, Energy...
NASA Astrophysics Data System (ADS)
Varlataya, S. K.; Evdokimov, V. E.; Urzov, A. Y.
2017-11-01
This article describes a process of calculating the reliability of a certain complex information security system (CISS), using the example of the technospheric security management model, as well as the ability to determine the frequency of its maintenance using the system reliability parameter, which allows one to assess man-made risks and to forecast natural and man-made emergencies. The relevance of this article is explained by the fact that the CISS reliability is closely related to information security (IS) risks. Since reliability (or resiliency) is a probabilistic characteristic of the system showing the possibility of its failure (and as a consequence, the emergence of threats to the protected information assets), it is seen as a component of the overall IS risk in the system. As is known, there is a certain acceptable level of IS risk assigned by experts for a particular information system; when reliability is a risk-forming factor, maintaining an acceptable risk level should be carried out through routine analysis of the condition of the CISS and its elements and their timely servicing. The article presents a reliability parameter calculation for a CISS with a mixed type of element connection, and a formula for the dynamics of such a system's reliability is written. The chart of CISS reliability change is an S-shaped curve which can be divided into 3 periods: an almost invariable high level of reliability, uniform reliability reduction, and an almost invariable low level of reliability. Setting the minimum acceptable level of reliability, the graph (or formula) can be used to determine the period of time during which the system would meet requirements. Ideally, this period should not be longer than the first period of the graph. Thus, the proposed method of calculating the CISS maintenance frequency helps to solve a voluminous and critical task of information assets risk management.
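The procedure described (a mixed series-parallel reliability model, an S-shaped reliability-versus-time curve, and a maintenance interval read off at the minimum acceptable reliability) can be sketched numerically. The structure, the Weibull time-to-failure assumption, and all parameter values below are illustrative placeholders rather than the article's actual model.

```python
import numpy as np

def element_reliability(t_hours, shape=3.0, scale=8760.0):
    """Weibull survival function; a shape above 1 yields the S-shaped decline described
    in the article (distribution and parameters are assumptions for illustration)."""
    return float(np.exp(-(t_hours / scale) ** shape))

def ciss_reliability(t_hours):
    """Example mixed connection: two redundant protection elements in parallel,
    in series with a third supporting element."""
    r = element_reliability(t_hours)
    redundant_pair = 1.0 - (1.0 - r) ** 2
    return redundant_pair * element_reliability(t_hours, shape=2.0, scale=12000.0)

def maintenance_interval(r_min, horizon_hours=20000, step_hours=10):
    """First time at which system reliability falls below the acceptable level r_min."""
    for t in range(0, horizon_hours, step_hours):
        if ciss_reliability(t) < r_min:
            return t
    return None

print(maintenance_interval(r_min=0.95))  # hours until maintenance is due under these assumptions
```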
Kenyon, Lisa K.; Elliott, James M; Cheng, M. Samuel
2016-01-01
Purpose/Background Despite the availability of various field-tests for many competitive sports, a reliable and valid test specifically developed for use in men's gymnastics has not yet been developed. The Men's Gymnastics Functional Measurement Tool (MGFMT) was designed to assess sport-specific physical abilities in male competitive gymnasts. The purpose of this study was to develop the MGFMT by establishing a scoring system for individual test items and to initiate the process of establishing test-retest reliability and construct validity. Methods A total of 83 competitive male gymnasts ages 7-18 underwent testing using the MGFMT. Thirty of these subjects underwent re-testing one week later in order to assess test-retest reliability. Construct validity was assessed using a simple regression analysis between total MGFMT scores and the gymnasts’ USA-Gymnastics competitive level to calculate the coefficient of determination (r2). Test-retest reliability was analyzed using Model 1 Intraclass correlation coefficients (ICC). Statistical significance was set at the p<0.05 level. Results The relationship between total MGFMT scores and subjects’ current USA-Gymnastics competitive level was found to be good (r2 = 0.63). Reliability testing of the MGFMT composite test score showed excellent test-retest reliability over a one-week period (ICC = 0.97). Test-retest reliability of the individual component tests ranged from good to excellent (ICC = 0.75-0.97). Conclusions The results of this study provide initial support for the construct validity and test-retest reliability of the MGFMT. Level of Evidence Level 3 PMID:27999723
Automated plant, production management system
NASA Astrophysics Data System (ADS)
Aksenova, V. I.; Belov, V. I.
1984-12-01
The development of a complex of tasks for the operational management of production (OUP) within the framework of an automated system for production management (ASUP) shows that it is impossible to perform effective computations without reliable initial information. The influence of many factors involving the production and economic activity of the entire enterprise upon the plan and course of production is considered. It is suggested that an adequate model should be available which covers all levels of the hierarchical system (workplace, section (brigade), shop, enterprise), that the model should be incorporated into the technological sequence of operations, and that provisions should be made for an adequate man-machine system.
Reliability testing of the Larsen and Sharp classifications for rheumatoid arthritis of the elbow.
Jew, Nicholas B; Hollins, Anthony M; Mauck, Benjamin M; Smith, Richard A; Azar, Frederick M; Miller, Robert H; Throckmorton, Thomas W
2017-01-01
Two popular systems for classifying rheumatoid arthritis affecting the elbow are the Larsen and Sharp schemes. To our knowledge, no study has investigated the reliability of these 2 systems. We compared the intraobserver and interobserver agreement of the 2 systems to determine whether one is more reliable than the other. The radiographs of 45 patients diagnosed with rheumatoid arthritis affecting the elbow were evaluated. Anteroposterior and lateral radiographs were deidentified and distributed to 6 evaluators (4 fellowship-trained upper extremity surgeons and 2 orthopedic trainees). Each evaluator graded all 45 radiographs according to the Larsen and Sharp scoring methods on 2 occasions, at least 2 weeks apart. Overall intraobserver reliability was 0.93 (95% confidence interval [CI], 0.90-0.95) for the Larsen system and 0.92 (95% CI, 0.86-0.96) for the Sharp classification, both indicating substantial agreement. Overall interobserver reliability was 0.70 (95% CI, 0.60-0.80) for the Larsen classification and 0.68 (95% CI, 0.54-0.81) for the Sharp system, both indicating good agreement. There were no significant differences in the intraobserver or interobserver reliability of the systems overall and no significant differences in reliability between attending surgeons and trainees for either classification system. The Larsen and Sharp systems both show substantial intraobserver reliability and good interobserver agreement for the radiographic classification of rheumatoid arthritis affecting the elbow. Differences in training level did not result in substantial variances in reliability for either system. We conclude that both systems can be reliably used to evaluate rheumatoid arthritis of the elbow by observers of varying training levels. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Reliability of Space-Shuttle Pressure Vessels with Random Batch Effects
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.; Kulkarni, Pandurang M.
2000-01-01
In this article we revisit the problem of estimating the joint reliability against failure by stress rupture of a group of fiber-wrapped pressure vessels used on Space-Shuttle missions. The available test data were obtained from an experiment conducted at the U.S. Department of Energy Lawrence Livermore Laboratory (LLL) in which scaled-down vessels were subjected to life testing at four accelerated levels of pressure. We estimate the reliability assuming that both the Shuttle and LLL vessels were chosen at random in a two-stage process from an infinite population with spools of fiber as the primary sampling unit. Two main objectives of this work are: (1) to obtain practical estimates of reliability taking into account random spool effects and (2) to obtain a realistic assessment of estimation accuracy under the random model. Here, reliability is calculated in terms of a 'system' of 22 fiber-wrapped pressure vessels, taking into account typical pressures and exposure times experienced by Shuttle vessels. Comparisons are made with previous studies. The main conclusion of this study is that, although point estimates of reliability are still in the 'comfort zone,' it is advisable to plan for replacement of the pressure vessels well before the expected lifetime of 100 missions per Shuttle Orbiter. Under a random-spool model, there is simply not enough information in the LLL data to provide reasonable assurance that such replacement would not be necessary.
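The idea of a random spool effect propagating into system reliability can be illustrated with a small Monte Carlo sketch: each vessel's life distribution is scaled by a random spool factor, and the 22-vessel system reliability is averaged over many spool draws. The Weibull lifetime model, lognormal spool effect, independent spool-per-vessel assignment, and every numerical value below are simplifying assumptions, not the article's fitted model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def system_reliability(n_sims=50_000, n_vessels=22, exposure_hours=20_000.0,
                       weibull_shape=1.5, base_scale_hours=5.0e6, spool_sigma=0.5):
    """Average probability that all vessels survive the exposure, where each vessel's
    Weibull scale is multiplied by a lognormal spool effect (all values are placeholders)."""
    total = 0.0
    for _ in range(n_sims):
        spool_effect = rng.lognormal(mean=0.0, sigma=spool_sigma, size=n_vessels)
        scales = base_scale_hours * spool_effect
        per_vessel_survival = np.exp(-(exposure_hours / scales) ** weibull_shape)
        total += np.prod(per_vessel_survival)   # system survives only if every vessel does
    return total / n_sims

print(system_reliability())
```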
Peterson, Eleanor B; Calhoun, Aaron W; Rider, Elizabeth A
2014-09-01
With increased recognition of the importance of sound communication skills and communication skills education, reliable assessment tools are essential. This study reports on the psychometric properties of an assessment tool based on the Kalamazoo Consensus Statement Essential Elements Communication Checklist. The Gap-Kalamazoo Communication Skills Assessment Form (GKCSAF), a modified version of an existing communication skills assessment tool, the Kalamazoo Essential Elements Communication Checklist-Adapted, was used to assess learners in a multidisciplinary, simulation-based communication skills educational program using multiple raters. 118 simulated conversations were available for analysis. Internal consistency and inter-rater reliability were determined by calculating a Cronbach's alpha score and intra-class correlation coefficients (ICC), respectively. The GKCSAF demonstrated high internal consistency with a Cronbach's alpha score of 0.844 (faculty raters) and 0.880 (peer observer raters), and high inter-rater reliability with an ICC of 0.830 (faculty raters) and 0.89 (peer observer raters). The Gap-Kalamazoo Communication Skills Assessment Form is a reliable method of assessing the communication skills of multidisciplinary learners using multi-rater methods within the learning environment. The Gap-Kalamazoo Communication Skills Assessment Form can be used by educational programs that wish to implement a reliable assessment and feedback system for a variety of learners. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
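Both statistics reported here, Cronbach's alpha for internal consistency and an intraclass correlation coefficient for inter-rater reliability, have closed-form estimators that are easy to compute from a score matrix. The sketch below assumes an observations-by-items matrix for alpha and a subjects-by-raters matrix for a one-way ICC(1,1); the study may have used a different ICC form and software.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (observations x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

def icc_oneway(ratings):
    """ICC(1,1) from a one-way random-effects ANOVA on a (subjects x raters) matrix."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    ms_between = k * ((ratings.mean(axis=1) - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((ratings - ratings.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```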
An Introduction to Markov Modeling: Concepts and Uses
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Lau, Sonie (Technical Monitor)
1998-01-01
Markov modeling is a modeling technique that is widely useful for dependability analysis of complex fault tolerant systems. It is very flexible in the type of systems and system behavior it can model. It is not, however, the most appropriate modeling technique for every modeling situation. The first task in obtaining a reliability or availability estimate for a system is selecting which modeling technique is most appropriate to the situation at hand. A person performing a dependability analysis must confront the question: is Markov modeling most appropriate to the system under consideration, or should another technique be used instead? The need to answer this gives rise to other more basic questions regarding Markov modeling: what are the capabilities and limitations of Markov modeling as a modeling technique? How does it relate to other modeling techniques? What kind of system behavior can it model? What kinds of software tools are available for performing dependability analyses with Markov modeling techniques? These questions and others will be addressed in this tutorial.
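As a small, concrete example of the kind of dependability analysis the tutorial discusses, the sketch below builds a three-state continuous-time Markov chain for a duplex system with a single repair crew and solves the balance equations for steady-state availability. The failure and repair rates and the model structure are illustrative assumptions, not material from the tutorial itself.

```python
import numpy as np

def duplex_steady_state_availability(lam=1e-4, mu=1e-2):
    """Steady-state availability of a two-unit redundant system with one repair crew,
    modeled as a CTMC with states (2 up, 1 up, 0 up); lam and mu are per-hour
    failure and repair rates chosen purely for illustration."""
    Q = np.array([
        [-2 * lam,      2 * lam,  0.0],   # both units up
        [      mu, -(lam + mu),   lam],   # one unit failed, under repair
        [     0.0,          mu,   -mu],   # both failed, one being repaired
    ])
    # Solve pi Q = 0 with the normalization sum(pi) = 1 by replacing one balance
    # equation with the normalization condition.
    A = np.vstack([Q.T[:-1], np.ones(3)])
    b = np.array([0.0, 0.0, 1.0])
    pi = np.linalg.solve(A, b)
    return pi[0] + pi[1]   # the system is up whenever at least one unit works

print(duplex_steady_state_availability())
```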