Sample records for called probabilistic failure

  1. Distributed collaborative probabilistic design of multi-failure structure with fluid-structure interaction using fuzzy neural network of regression

    NASA Astrophysics Data System (ADS)

    Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen

    2018-05-01

    To improve the computing efficiency and precision of probabilistic design for multi-failure structures, a distributed collaborative probabilistic design method based on a fuzzy neural network of regression (FR), called DCFRM, is proposed by integrating the distributed collaborative response surface method with a fuzzy neural network regression model. The mathematical model of DCFRM is established and the probabilistic design concept behind DCFRM is introduced. The probabilistic analysis of a turbine blisk involving multiple failure modes (deformation failure, stress failure and strain failure) was investigated with the proposed method while accounting for fluid-structure interaction. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and of the overall failure mode of the turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengines. A comparison of methods shows that DCFRM reshapes probabilistic analysis for multi-failure structures and improves computing efficiency while maintaining acceptable computational precision. Moreover, the proposed method offers useful insight for reliability-based design optimization of multi-failure structures and thereby also enriches the theory and methods of mechanical reliability design.

  2. INTEGRATION OF RELIABILITY WITH MECHANISTIC THERMALHYDRAULICS: REPORT ON APPROACH AND TEST PROBLEM RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. S. Schroeder; R. W. Youngblood

    The Risk-Informed Safety Margin Characterization (RISMC) pathway of the Light Water Reactor Sustainability Program is developing simulation-based methods and tools for analyzing safety margin from a modern perspective. [1] There are multiple definitions of 'margin.' One class of definitions defines margin in terms of the distance between a point estimate of a given performance parameter (such as peak clad temperature), and a point-value acceptance criterion defined for that parameter (such as 2200 F). The present perspective on margin is that it relates to the probability of failure, and not just the distance between a nominal operating point and a criterion. In this work, margin is characterized through a probabilistic analysis of the 'loads' imposed on systems, structures, and components, and their 'capacity' to resist those loads without failing. Given the probabilistic load and capacity spectra, one can assess the probability that load exceeds capacity, leading to component failure. Within the project, we refer to a plot of these probabilistic spectra as 'the logo.' Refer to Figure 1 for a notional illustration. The implications of referring to 'the logo' are (1) RISMC is focused on being able to analyze loads and spectra probabilistically, and (2) calling it 'the logo' tacitly acknowledges that it is a highly simplified picture: meaningful analysis of a given component failure mode may require development of probabilistic spectra for multiple physical parameters, and in many practical cases, 'load' and 'capacity' will not vary independently.
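
    A minimal sketch of the load-versus-capacity idea described above, assuming illustrative normal spectra for load and capacity (the distributions and parameters below are invented, not taken from the report): the margin-related quantity is simply the probability that a sampled load exceeds a sampled capacity.

    ```python
    # Hypothetical load and capacity spectra; only the idea of comparing them comes from the abstract.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    load = rng.normal(loc=1800.0, scale=150.0, size=n)      # assumed "load" spectrum (e.g., peak clad temperature, F)
    capacity = rng.normal(loc=2200.0, scale=100.0, size=n)  # assumed "capacity" spectrum (F)

    p_fail = np.mean(load > capacity)                       # probability that load exceeds capacity
    print(f"Estimated P(load > capacity) = {p_fail:.2e}")
    ```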

  3. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 1: Methodology and applications

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
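
    As a hedged illustration of the PFA idea summarized above (an analytical failure model used conjointly with parameter uncertainties to produce a failure probability), the sketch below propagates assumed scatter through a closed-form Paris-law crack-growth life model by Monte Carlo. The model, distributions, and service life are placeholders for illustration; this is not the PFA software or its data, and the subsequent updating with test or flight experience is not shown.

    ```python
    # Placeholder crack-growth failure mode: every parameter value here is assumed.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000

    C = rng.lognormal(mean=np.log(1e-11), sigma=0.3, size=n)  # Paris coefficient (m/cycle with MPa*sqrt(m))
    m = 3.0                                                   # Paris exponent, held fixed here
    dsigma = rng.normal(250.0, 20.0, size=n)                  # stress range, MPa
    a0 = rng.lognormal(np.log(0.5e-3), 0.2, size=n)           # initial flaw size, m
    ac, Y = 5e-3, 1.12                                        # critical crack size (m), geometry factor

    # Closed-form integration of da/dN = C*(Y*dsigma*sqrt(pi*a))**m from a0 to ac (valid for m != 2).
    term = (ac**(1.0 - m / 2.0) - a0**(1.0 - m / 2.0)) / (1.0 - m / 2.0)
    cycles_to_failure = term / (C * (Y * dsigma * np.sqrt(np.pi))**m)

    service_cycles = 2.0e4                                    # assumed service life, cycles
    print(f"P(failure before {service_cycles:.0f} cycles) ~ {np.mean(cycles_to_failure < service_cycles):.2e}")
    ```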

  4. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.

  5. Use of Probabilistic Engineering Methods in the Detailed Design and Development Phases of the NASA Ares Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Fayssal, Safie; Weldon, Danny

    2008-01-01

    The United States National Aeronautics and Space Administration (NASA) is in the midst of a space exploration program called Constellation to send crew and cargo to the International Space Station, to the moon, and beyond. As part of the Constellation program, a new launch vehicle, Ares I, is being developed by NASA Marshall Space Flight Center. Designing a launch vehicle with high reliability and increased safety requires a significant effort in understanding design variability and design uncertainty at the various levels of the design (system, element, subsystem, component, etc.) and throughout the various design phases (conceptual, preliminary design, etc.). In a previous paper [1] we discussed a probabilistic functional failure analysis approach intended mainly to support system requirements definition, system design, and element design during the early design phases. This paper provides an overview of the application of probabilistic engineering methods to support the detailed subsystem/component design and development as part of the "Design for Reliability and Safety" approach for the new Ares I Launch Vehicle. Specifically, the paper discusses probabilistic engineering design analysis cases that had major impact on the design and manufacturing of the Space Shuttle hardware. The cases represent important lessons learned from the Space Shuttle Program and clearly demonstrate the significance of probabilistic engineering analysis in better understanding design deficiencies and identifying potential design improvements for Ares I. The paper also discusses the probabilistic functional failure analysis approach applied during the early design phases of Ares I and the forward plans for probabilistic design analysis in the detailed design and development phases.

  6. Risk analysis of analytical validations by probabilistic modification of FMEA.

    PubMed

    Barends, D M; Oldenhof, M T; Vredenbregt, M J; Nauta, M J

    2012-05-01

    Risk analysis is a valuable addition to the validation of an analytical chemistry process, enabling the detection not only of technical risks but also of risks related to human failures. Failure Mode and Effect Analysis (FMEA) can be applied, using a categorical risk scoring of the occurrence, detection and severity of failure modes, and calculating the Risk Priority Number (RPN) to select failure modes for correction. We propose a probabilistic modification of FMEA, replacing the categorical scoring of occurrence and detection by their estimated relative frequencies and maintaining the categorical scoring of severity. In an example, the results of a traditional FMEA of a Near Infrared (NIR) analytical procedure used for the screening of suspected counterfeit tablets are reinterpreted using this probabilistic modification of FMEA. With this probabilistic modification, the frequency of occurrence of undetected failure modes can be estimated quantitatively for each individual failure mode, for a set of failure modes, and for the full analytical procedure.
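
    A toy sketch of the contrast described above, with invented failure modes and numbers (nothing here comes from the paper's NIR example): traditional FMEA ranks by the categorical RPN, while the probabilistic modification multiplies an estimated occurrence frequency by the probability of non-detection to obtain the frequency of an undetected failure, with severity left categorical.

    ```python
    # Invented failure modes, scores, and frequencies, for illustration only.
    failure_modes = [
        # (name, O score, D score, S score, est. P(occurrence), est. P(not detected))
        ("wrong reference spectrum", 4, 5, 8, 0.02, 0.30),
        ("instrument drift",         6, 3, 6, 0.05, 0.10),
        ("operator transcription",   3, 7, 7, 0.01, 0.50),
    ]

    for name, O, D, S, p_occ, p_miss in failure_modes:
        rpn = O * D * S                  # traditional categorical Risk Priority Number
        p_undetected = p_occ * p_miss    # probabilistic: frequency of an undetected occurrence
        print(f"{name:26s} RPN = {rpn:3d}   P(undetected failure) = {p_undetected:.4f}")

    # Frequency that at least one failure mode occurs undetected in one run of the
    # procedure, assuming independence between modes.
    p_none = 1.0
    for *_, p_occ, p_miss in failure_modes:
        p_none *= 1.0 - p_occ * p_miss
    print(f"P(at least one undetected failure per run) = {1.0 - p_none:.4f}")
    ```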

  7. Review of the probabilistic failure analysis methodology and other probabilistic approaches for application in aerospace structural design

    NASA Technical Reports Server (NTRS)

    Townsend, J.; Meyers, C.; Ortega, R.; Peck, J.; Rheinfurth, M.; Weinstock, B.

    1993-01-01

    Probabilistic structural analyses and design methods are steadily gaining acceptance within the aerospace industry. The safety factor approach to design has long been the industry standard, and it is believed by many to be overly conservative and thus, costly. A probabilistic approach to design may offer substantial cost savings. This report summarizes several probabilistic approaches: the probabilistic failure analysis (PFA) methodology developed by Jet Propulsion Laboratory, fast probability integration (FPI) methods, the NESSUS finite element code, and response surface methods. Example problems are provided to help identify the advantages and disadvantages of each method.

  8. Probabilistic structural analysis of aerospace components using NESSUS

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Nagpal, Vinod K.; Chamis, Christos C.

    1988-01-01

    Probabilistic structural analysis of a Space Shuttle main engine turbopump blade is conducted using the computer code NESSUS (numerical evaluation of stochastic structures under stress). The goal of the analysis is to derive probabilistic characteristics of blade response given probabilistic descriptions of uncertainties in blade geometry, material properties, and temperature and pressure distributions. Probability densities are derived for critical blade responses. Risk assessment and failure life analysis is conducted assuming different failure models.

  9. Probabilistic failure assessment with application to solid rocket motors

    NASA Technical Reports Server (NTRS)

    Jan, Darrell L.; Davidson, Barry D.; Moore, Nicholas R.

    1990-01-01

    A quantitative methodology is being developed for assessment of risk of failure of solid rocket motors. This probabilistic methodology employs best available engineering models and available information in a stochastic framework. The framework accounts for incomplete knowledge of governing parameters, intrinsic variability, and failure model specification error. Earlier case studies have been conducted on several failure modes of the Space Shuttle Main Engine. Work in progress on application of this probabilistic approach to large solid rocket boosters such as the Advanced Solid Rocket Motor for the Space Shuttle is described. Failure due to debonding has been selected as the first case study for large solid rocket motors (SRMs) since it accounts for a significant number of historical SRM failures. Impact of incomplete knowledge of governing parameters and failure model specification errors is expected to be important.

  10. Design for Reliability and Safety Approach for the NASA New Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal, M.; Weldon, Danny M.

    2007-01-01

    The United States National Aeronautics and Space Administration (NASA) is in the midst of a space exploration program intended for sending crew and cargo to the International Space Station (ISS), to the moon, and beyond. This program is called Constellation. As part of the Constellation program, NASA is developing new launch vehicles aimed at significantly increasing safety and reliability, reducing the cost of accessing space, and providing a growth path for manned space exploration. Achieving these goals requires a rigorous process that addresses reliability, safety, and cost upfront and throughout all the phases of the life cycle of the program. This paper discusses the "Design for Reliability and Safety" approach for the NASA new crew launch vehicle called ARES I. The ARES I is being developed by NASA Marshall Space Flight Center (MSFC) in support of the Constellation program. The ARES I consists of three major Elements: a solid First Stage (FS), an Upper Stage (US), and a liquid Upper Stage Engine (USE). Stacked on top of the ARES I is the Crew Exploration Vehicle (CEV). The CEV consists of a Launch Abort System (LAS), Crew Module (CM), Service Module (SM), and a Spacecraft Adapter (SA). The CEV development is being led by NASA Johnson Space Center (JSC). Designing for high reliability and safety requires a good integrated working environment and a sound technical design approach. The "Design for Reliability and Safety" approach addressed in this paper discusses both the environment and the technical process put in place to support the ARES I design. To address the integrated working environment, the ARES I project office has established a risk-based design group called the "Operability Design and Analysis" (OD&A) group. This group is intended to bring the engineering, design, and safety organizations together to optimize the system design for safety, reliability, and cost. On the technical side, the ARES I project has, through the OD&A environment, implemented a probabilistic approach to analyze and evaluate design uncertainties and understand their impact on safety, reliability, and cost. This paper focuses on the use of the various probabilistic approaches that have been pursued by the ARES I project. Specifically, the paper discusses an integrated functional probabilistic analysis approach that addresses upfront some key areas to support the ARES I Design Analysis Cycle (DAC) pre-Preliminary Design (PD) Phase. This functional approach is a probabilistic physics-based approach that combines failure probabilities with system dynamics and engineering failure impact models to identify key system risk drivers and potential system design requirements. The paper also discusses other probabilistic risk assessment approaches planned by the ARES I project to support the PD phase and beyond.

  11. The application of probabilistic fracture analysis to residual life evaluation of embrittled reactor vessels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickson, T.L.; Simonen, F.A.

    1992-05-01

    Probabilistic fracture mechanics analysis is a major element of comprehensive probabilistic methodology on which current NRC regulatory requirements for pressurized water reactor vessel integrity evaluation are based. Computer codes such as OCA-P and VISA-II perform probabilistic fracture analyses to estimate the increase in vessel failure probability that occurs as the vessel material accumulates radiation damage over the operating life of the vessel. The results of such analyses, when compared with limits of acceptable failure probabilities, provide an estimation of the residual life of a vessel. Such codes can be applied to evaluate the potential benefits of plant-specific mitigating actions designed to reduce the probability of failure of a reactor vessel. 10 refs.

  12. The application of probabilistic fracture analysis to residual life evaluation of embrittled reactor vessels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickson, T.L.; Simonen, F.A.

    1992-01-01

    Probabilistic fracture mechanics analysis is a major element of comprehensive probabilistic methodology on which current NRC regulatory requirements for pressurized water reactor vessel integrity evaluation are based. Computer codes such as OCA-P and VISA-II perform probabilistic fracture analyses to estimate the increase in vessel failure probability that occurs as the vessel material accumulates radiation damage over the operating life of the vessel. The results of such analyses, when compared with limits of acceptable failure probabilities, provide an estimation of the residual life of a vessel. Such codes can be applied to evaluate the potential benefits of plant-specific mitigating actions designed to reduce the probability of failure of a reactor vessel. 10 refs.

  13. Probabilistic Design Analysis (PDA) Approach to Determine the Probability of Cross-System Failures for a Space Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Shih, Ann T.; Lo, Yunnhon; Ward, Natalie C.

    2010-01-01

    Quantifying the probability of significant launch vehicle failure scenarios for a given design, while still in the design process, is critical to mission success and to the safety of the astronauts. Probabilistic risk assessment (PRA) is chosen from many system safety and reliability tools to verify the loss of mission (LOM) and loss of crew (LOC) requirements set by the NASA Program Office. To support the integrated vehicle PRA, probabilistic design analysis (PDA) models are developed by using vehicle design and operation data to better quantify failure probabilities and to better understand the characteristics of a failure and its outcome. This PDA approach uses a physics-based model to describe the system behavior and response for a given failure scenario. Each driving parameter in the model is treated as a random variable with a distribution function. Monte Carlo simulation is used to perform probabilistic calculations to statistically obtain the failure probability. Sensitivity analyses are performed to show how input parameters affect the predicted failure probability, providing insight for potential design improvements to mitigate the risk. The paper discusses the application of the PDA approach in determining the probability of failure for two scenarios from the NASA Ares I project.
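
    A minimal sketch of the PDA recipe summarized above; the response model, the input distributions, and the limit are invented placeholders rather than Ares I values. Each driving parameter is sampled from its distribution, Monte Carlo propagates the samples through a physics-style response model, and input/output correlations give a crude sensitivity ranking.

    ```python
    # Placeholder driving parameters, response model, and capability limit.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 500_000

    params = {
        "thrust_misalignment_deg": rng.normal(0.0, 0.25, n),
        "cg_offset_m":             rng.normal(0.0, 0.05, n),
        "wind_shear_mps":          rng.weibull(2.0, n) * 8.0,
    }

    def response(p):
        """Placeholder physics-style model returning a structural load index."""
        return (1.5 * np.abs(p["thrust_misalignment_deg"])
                + 4.0 * np.abs(p["cg_offset_m"])
                + 0.06 * p["wind_shear_mps"])

    load_index = response(params)
    limit = 1.2                                   # assumed capability limit on the index
    print(f"P(failure) ~ {np.mean(load_index > limit):.2e}")

    # Crude sensitivity ranking: correlation of each input with the response.
    for name, x in params.items():
        print(f"sensitivity of {name:>24s}: corr = {np.corrcoef(x, load_index)[0, 1]:+.2f}")
    ```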

  14. Specifying design conservatism: Worst case versus probabilistic analysis

    NASA Technical Reports Server (NTRS)

    Miles, Ralph F., Jr.

    1993-01-01

    Design conservatism is the difference between specified and required performance, and is introduced when uncertainty is present. The classical approach of worst-case analysis for specifying design conservatism is presented, along with the modern approach of probabilistic analysis. The appropriate degree of design conservatism is a tradeoff between the required resources and the probability and consequences of a failure. A probabilistic analysis properly models this tradeoff, while a worst-case analysis reveals nothing about the probability of failure, and can significantly overstate the consequences of failure. Two aerospace examples will be presented that illustrate problems that can arise with a worst-case analysis.
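
    A small worked comparison of the two philosophies (all numbers assumed): stacking three contributions at their 3-sigma extremes declares a violation of the requirement, while the probabilistic combination shows how unlikely an actual exceedance is.

    ```python
    # Assumed contributions to a performance parameter and an assumed acceptance criterion.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 2_000_000

    means = np.array([10.0, 5.0, 3.0])
    sigmas = np.array([0.5, 0.4, 0.3])
    requirement = 21.0                     # acceptance limit on the total

    # Worst-case analysis: every contribution at its 3-sigma extreme simultaneously.
    worst_case_total = np.sum(means + 3.0 * sigmas)
    print(f"worst-case total = {worst_case_total:.2f} vs requirement {requirement}")

    # Probabilistic analysis: distribution of the total and its exceedance probability.
    total = rng.normal(means, sigmas, size=(n, 3)).sum(axis=1)
    print(f"P(total > requirement) = {np.mean(total > requirement):.1e}")
    ```

    With these assumed numbers the worst-case total (21.6) violates the criterion even though the probability of actually exceeding it is on the order of 1e-5, which is the kind of overstatement the abstract warns about.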

  15. Probabilistic Risk Assessment: A Bibliography

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Probabilistic risk analysis is an integration of failure modes and effects analysis (FMEA), fault tree analysis, and other techniques to assess the potential for failure and to find ways to reduce risk. This bibliography references 160 documents in the NASA STI Database that contain the major concepts (probabilistic risk assessment, risk, and probability theory) in the basic index or major subject terms. An abstract is included with most citations, followed by the applicable subject terms.

  16. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  17. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  18. Assuring SS7 dependability: A robustness characterization of signaling network elements

    NASA Astrophysics Data System (ADS)

    Karmarkar, Vikram V.

    1994-04-01

    Current and evolving telecommunication services will rely on signaling network performance and reliability properties to build competitive call and connection control mechanisms under increasing demands on flexibility without compromising on quality. The dimensions of signaling dependability most often evaluated are the Rate of Call Loss and End-to-End Route Unavailability. A third dimension of dependability that captures the concern about large or catastrophic failures can be termed Network Robustness. This paper is concerned with the dependability aspects of the evolving Signaling System No. 7 (SS7) networks and attempts to strike a balance between the probabilistic and deterministic measures that must be evaluated to accomplish a risk-trend assessment to drive architecture decisions. Starting with high-level network dependability objectives and field experience with SS7 in the U.S., potential areas of growing stringency in network element (NE) dependability are identified to improve against current measures of SS7 network quality, as per-call signaling interactions increase. A sensitivity analysis is presented to highlight the impact due to imperfect coverage of duplex network component or element failures (i.e., correlated failures), to assist in the setting of requirements on NE robustness. A benefit analysis, covering several dimensions of dependability, is used to generate the domain of solutions available to the network architect in terms of network and network element fault tolerance that may be specified to meet the desired signaling quality goals.

  19. Probabilistic confidence for decisions based on uncertain reliability estimates

    NASA Astrophysics Data System (ADS)

    Reid, Stuart G.

    2013-05-01

    Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.

  20. A probabilistic analysis of the implications of instrument failures on ESA's Swarm mission for its individual satellite orbit deployments

    NASA Astrophysics Data System (ADS)

    Jackson, Andrew

    2015-07-01

    On launch, one of Swarm's absolute scalar magnetometers (ASMs) failed to function, leaving an asymmetrical arrangement of redundant spares on different spacecraft. A decision was required concerning the deployment of individual satellites into the low-orbit pair or the higher "lonely" orbit. I analyse the probabilities for successful operation of two of the science components of the Swarm mission in terms of a classical probabilistic failure analysis, with a view to concluding a favourable assignment for the satellite with the single working ASM. I concentrate on the following two science aspects: the east-west gradiometer aspect of the lower pair of satellites and the constellation aspect, which requires a working ASM in each of the two orbital planes. I use the so-called "expert solicitation" probabilities for instrument failure solicited from Mission Advisory Group (MAG) members. My conclusion from the analysis is that it is better to have redundancy of ASMs in the lonely satellite orbit. Although the opposite scenario, having redundancy (and thus four ASMs) in the lower orbit, increases the chance of a working gradiometer late in the mission, it does so at the expense of a likely constellation. Although the results are presented based on actual MAG members' probabilities, the results are rather generic, excepting the case when the probability of individual ASM failure is very small; in this case, any arrangement will ensure a successful mission since there is essentially no failure expected at all. Since the very design of the lower pair is to enable common mode rejection of external signals, it is likely that its work can be successfully achieved during the first 5 years of the mission.
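
    A back-of-the-envelope version of the trade described above, assuming every ASM fails independently over the mission with a placeholder probability q (the actual analysis used expert-solicited MAG values): compare placing the single-ASM satellite in the lonely orbit against placing it in the lower pair.

    ```python
    # q and the option layout are illustrative assumptions.
    def sat_ok(n_asm, q):
        """Probability a satellite retains at least one working ASM."""
        return 1.0 - q ** n_asm

    def option_scores(q, lonely_asms, lower_asms):
        """lower_asms is the pair of ASM counts on the two lower-orbit satellites."""
        gradiometer = sat_ok(lower_asms[0], q) * sat_ok(lower_asms[1], q)
        lower_plane = 1.0 - (q ** lower_asms[0]) * (q ** lower_asms[1])
        constellation = lower_plane * sat_ok(lonely_asms, q)
        return gradiometer, constellation

    q = 0.2  # assumed probability that an individual ASM fails during the mission
    for label, lonely, lower in [("single-ASM satellite in lonely orbit", 1, (2, 2)),
                                 ("single-ASM satellite in lower pair",   2, (1, 2))]:
        g, c = option_scores(q, lonely, lower)
        print(f"{label:38s} P(gradiometer) = {g:.3f}   P(constellation) = {c:.3f}")
    ```

    With q = 0.2 the option with ASM redundancy in the lonely orbit yields the higher constellation probability at the cost of a somewhat lower gradiometer probability, which is the direction of the trade reported in the abstract.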

  1. The Use of Probabilistic Methods to Evaluate the Systems Impact of Component Design Improvements on Large Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Packard, Michael H.

    2002-01-01

    Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used for taking test data related to particular failure modes and calculating failure rate distributions of electronic and electromechanical components. How can these individual failure time distributions of structural, electronic, and electromechanical component failure modes be effectively combined into a top-level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic model of a turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools (Quantitative Risk Assessment Software (QRAS), etc.). Hypothetical PSA results for a number of structural components, along with mitigation factors that would restrict the failure mode from propagating to a Loss of Mission (LOM) failure, were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to overall Mission Success (MS) are also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, and the inclusion of new sensor-based fault detection were evaluated in determining overall turbine engine reliability.
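
    A sketch of the combination idea posed above; the component list, distributions, and mitigation factors are invented, not the paper's. A failure time is drawn for each component from its own distribution, a mitigation probability decides whether that failure propagates to loss of mission (LOM), and the earliest unmitigated failure sets the system LOM time.

    ```python
    # Invented components, failure-time models, and mitigation (propagation) factors.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 200_000
    mission_hours = 3_000.0

    components = [
        # (name, sampler for time-to-failure in hours, P(failure propagates to LOM))
        ("turbine blade (Weibull fatigue)",     lambda: rng.weibull(2.5, n) * 12_000.0, 0.6),
        ("electronic controller (exponential)", lambda: rng.exponential(50_000.0, n),   0.2),
        ("fuel pump (Weibull wear-out)",        lambda: rng.weibull(3.0, n) * 20_000.0, 0.8),
    ]

    lom_time = np.full(n, np.inf)
    for name, sampler, p_prop in components:
        t = sampler()
        propagates = rng.random(n) < p_prop   # mitigation may contain the failure
        lom_time = np.minimum(lom_time, np.where(propagates, t, np.inf))

    print(f"P(LOM within {mission_hours:.0f} h) ~ {np.mean(lom_time < mission_hours):.2e}")
    ```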

  2. Probabilistic finite elements for fracture and fatigue analysis

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Belytschko, T.; Lawrence, M.; Besterfield, G. H.

    1989-01-01

    The fusion of the probabilistic finite element method (PFEM) and reliability analysis for probabilistic fracture mechanics (PFM) is presented. A comprehensive method for determining the probability of fatigue failure for curved crack growth was developed. The failure criterion, or performance function, is stated as follows: the fatigue life of a component must exceed its service life; otherwise failure will occur. An enriched element that has the near-crack-tip singular strain field embedded in the element is used to formulate the equilibrium equation and solve for the stress intensity factors at the crack tip. The performance and accuracy of the method are demonstrated on a classical mode I fatigue problem.
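
    Written as a limit state, the criterion described above takes the conventional form (standard notation, not reproduced from the paper):

    $$ g(\mathbf{X}) = N_f(\mathbf{X}) - N_s, \qquad P_f = \Pr\left[ g(\mathbf{X}) \le 0 \right], $$

    where X collects the uncertain crack-geometry, material, and loading variables, N_f is the computed fatigue life, and N_s is the required service life.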

  3. Assessing performance and validating finite element simulations using probabilistic knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolin, Ronald M.; Rodriguez, E. A.

    Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability each event causes failure along with the event's likelihood of occurrence contribute to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.

  4. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 3: Structure and listing of programs

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  5. Failed rib region prediction in a human body model during crash events with precrash braking.

    PubMed

    Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S

    2018-02-28

    The objective of this study is twofold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method for rib fracture predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but one case, where they were equal. The failed region patterns between methods are similar; however, differences arise due to the stress reduction from element elimination, which causes probabilistic failed regions to continue to rise after no further deterministic failed regions would be predicted. Both the probabilistic and deterministic methods indicate similar trends with regard to the effect of precrash braking; however, there are tradeoffs. The deterministic failed region method is more spatially sensitive to failure and is more sensitive to belt loads. The probabilistic failed region method allows for increased capability in postprocessing with respect to age. The probabilistic failed region method predicted more failed regions than the deterministic failed region method due to force distribution differences.
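
    A schematic contrast of the two criteria compared above. The 1.8% strain threshold is quoted from the abstract, but the element strains, the contiguity rule, and the fragility curve below are generic placeholders, not the GHBMC rubric or the literature's age-based functions.

    ```python
    # Assumed peak plastic strains for the elements of one rib region.
    import numpy as np

    element_strain = np.array([0.010, 0.014, 0.017, 0.019, 0.021, 0.016])

    # Deterministic criterion: an element fails above 1.8% plastic strain (from the abstract);
    # call the region failed if at least two contiguous elements fail (threshold assumed).
    failed = element_strain > 0.018
    runs = "".join("1" if f else "0" for f in failed).split("0")
    deterministic_failed_region = max((len(r) for r in runs), default=0) >= 2

    # Probabilistic criterion: each element gets a failure probability from a strain-based
    # fragility curve (placeholder logistic); the region fails if any element fails.
    def p_element_fails(strain):
        return 1.0 / (1.0 + np.exp(-(strain - 0.018) / 0.002))

    p_region = 1.0 - np.prod(1.0 - p_element_fails(element_strain))

    print(f"deterministic failed region: {deterministic_failed_region}")
    print(f"probabilistic P(failed region) = {p_region:.2f}")
    ```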

  6. Probabilistic sizing of laminates with uncertainties

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Liaw, D. G.; Chamis, C. C.

    1993-01-01

    A reliability based design methodology for laminate sizing and configuration for a special case of composite structures is described. The methodology combines probabilistic composite mechanics with probabilistic structural analysis. The uncertainties of constituent materials (fiber and matrix) to predict macroscopic behavior are simulated using probabilistic theory. Uncertainties in the degradation of composite material properties are included in this design methodology. A multi-factor interaction equation is used to evaluate load and environment dependent degradation of the composite material properties at the micromechanics level. The methodology is integrated into a computer code IPACS (Integrated Probabilistic Assessment of Composite Structures). Versatility of this design approach is demonstrated by performing a multi-level probabilistic analysis to size the laminates for design structural reliability of random type structures. The results show that laminate configurations can be selected to improve the structural reliability from three failures in 1000, to no failures in one million. Results also show that the laminates with the highest reliability are the least sensitive to the loading conditions.

  7. An overview of engineering concepts and current design algorithms for probabilistic structural analysis

    NASA Technical Reports Server (NTRS)

    Duffy, S. F.; Hu, J.; Hopkins, D. A.

    1995-01-01

    The article begins by examining the fundamentals of traditional deterministic design philosophy. The initial section outlines the concepts of failure criteria and limit state functions, two traditional notions that are embedded in deterministic design philosophy. This is followed by a discussion regarding safety factors (a possible limit state function) and the common utilization of statistical concepts in deterministic engineering design approaches. Next the fundamental aspects of a probabilistic failure analysis are explored and it is shown that the deterministic design concepts mentioned in the initial portion of the article are embedded in probabilistic design methods. For components fabricated from ceramic materials (and other similarly brittle materials) the probabilistic design approach yields the widely used Weibull analysis after suitable assumptions are incorporated. The authors point out that Weibull analysis provides the rare instance where closed form solutions are available for a probabilistic failure analysis. Since numerical methods are usually required to evaluate component reliabilities, a section on Monte Carlo methods is included to introduce the concept. The article concludes with a presentation of the technical aspects that support the numerical method known as fast probability integration (FPI). This includes a discussion of the Hasofer-Lind and Rackwitz-Fiessler approximations.
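
    A small worked instance (values assumed) of two concepts the article reviews: the Hasofer-Lind reliability index for a linear limit state g = R - S with independent normal resistance and load, checked against a crude Monte Carlo estimate of the same failure probability.

    ```python
    # Assumed resistance and load-effect statistics.
    import numpy as np
    from math import erf, sqrt

    mu_R, s_R = 60.0, 6.0     # resistance mean / standard deviation (e.g., MPa)
    mu_S, s_S = 40.0, 5.0     # load-effect mean / standard deviation

    beta = (mu_R - mu_S) / sqrt(s_R**2 + s_S**2)         # Hasofer-Lind reliability index
    pf_analytical = 0.5 * (1.0 - erf(beta / sqrt(2.0)))  # Phi(-beta)
    print(f"beta = {beta:.2f}, Pf (analytical) = {pf_analytical:.2e}")

    rng = np.random.default_rng(11)
    n = 2_000_000
    pf_mc = np.mean(rng.normal(mu_R, s_R, n) - rng.normal(mu_S, s_S, n) < 0.0)
    print(f"Pf (Monte Carlo, n = {n}) = {pf_mc:.2e}")
    ```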

  8. Identification of failure type in corroded pipelines: a Bayesian probabilistic approach.

    PubMed

    Breton, T; Sanchez-Gheno, J C; Alamilla, J L; Alvarez-Ramirez, J

    2010-07-15

    Spillover of hazardous materials from transport pipelines can lead to catastrophic events with serious and dangerous environmental impact, potential fire events, and human fatalities. The problem is more serious for large pipelines when the construction material is under environmental corrosion conditions, as in the petroleum and gas industries. In this way, predictive models can provide a suitable framework for risk evaluation, maintenance policies, and the design of substitution procedures oriented toward reducing these hazards. This work proposes a Bayesian probabilistic approach to identify and predict the type of failure (leakage or rupture) for steel pipelines under realistic corroding conditions. In the first step of the modeling process, the mechanical performance of the pipe is considered for establishing conditions under which either leakage or rupture failure can occur. In the second step, experimental burst tests are used to introduce a mean probabilistic boundary defining a region where the type of failure is uncertain. In the boundary vicinity, the failure discrimination is carried out with a probabilistic model where the events are considered as random variables. In turn, the model parameters are estimated with available experimental data and contrasted with a real catastrophic event, showing good discrimination capacity. The results are discussed in terms of policies oriented to inspection and maintenance of large-size pipelines in the oil and gas industry.
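
    A toy Bayes-rule illustration of the discrimination step described above; the prior failure-type mix and the likelihoods are invented, not the paper's burst-test-derived values.

    ```python
    # Invented prior and likelihoods for a defect near the leak/rupture boundary.
    priors = {"leak": 0.7, "rupture": 0.3}          # assumed prior failure-type mix
    likelihood = {"leak": 0.25, "rupture": 0.60}    # assumed P(observed evidence | type)

    evidence = sum(priors[t] * likelihood[t] for t in priors)
    posterior = {t: priors[t] * likelihood[t] / evidence for t in priors}
    print(posterior)   # updated probability that the failure would be a rupture vs. a leak
    ```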

  9. Probabilistic Rock Slope Engineering.

    DTIC Science & Technology

    1984-06-01

    [The abstract for this DTIC record is not recoverable; the extracted text consists of fragments of the scanned report documentation page. Recoverable details: author Stanley M. Miller, Geotechnical Engineer, Tucson, Arizona; the remaining fields (report numbers, controlling office, and personal-communication citations) are illegible.]

  10. Probabilistic framework for product design optimization and risk management

    NASA Astrophysics Data System (ADS)

    Keski-Rahkonen, J. K.

    2018-05-01

    Probabilistic methods have gradually gained ground within engineering practice, but it is still the industry standard to use deterministic safety-margin approaches for dimensioning components and qualitative methods for managing product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, on how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, the paper adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes because of the well-developed methods available for predicting these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
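
    A hedged sketch of the recommended process as summarized above: Monte Carlo on a load-resistance model to estimate the failure probability, feeding a simple life-cycle cost trade. The geometry, distributions, and costs are invented placeholders.

    ```python
    # All model details (geometry, distributions, costs) are assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 400_000

    def failure_probability(wall_thickness_mm):
        """Monte Carlo on a load-resistance model: applied stress vs. material strength."""
        load_kN = rng.lognormal(mean=np.log(40.0), sigma=0.25, size=n)   # assumed load spectrum
        strength_MPa = rng.normal(160.0, 15.0, size=n)                   # assumed strength scatter
        area_mm2 = 40.0 * wall_thickness_mm                              # assumed section geometry
        stress_MPa = load_kN * 1e3 / area_mm2
        return float(np.mean(stress_MPa > strength_MPa))

    material_cost_per_mm = 500.0      # assumed cost of adding 1 mm of wall
    failure_cost = 50_000.0           # assumed cost of one field failure

    for t in (8.0, 10.0, 12.0, 14.0):
        pf = failure_probability(t)
        expected_cost = material_cost_per_mm * t + failure_cost * pf
        print(f"t = {t:4.1f} mm   Pf = {pf:.2e}   expected cost = {expected_cost:9.2f}")
    ```

    Under these assumptions the expected cost is minimized at an intermediate wall thickness, which is the kind of reliability versus life-cycle-cost trade the framework is meant to support.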

  11. Probabilistic Prediction of Lifetimes of Ceramic Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Gyekenyesi, John P.; Jadaan, Osama M.; Palfi, Tamas; Powers, Lynn; Reh, Stefan; Baker, Eric H.

    2006-01-01

    ANSYS/CARES/PDS is a software system that combines the ANSYS Probabilistic Design System (PDS) software with a modified version of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) Version 6.0 software. [A prior version of CARES/Life was reported in Program for Evaluation of Reliability of Ceramic Parts (LEW-16018), NASA Tech Briefs, Vol. 20, No. 3 (March 1996), page 28.] CARES/Life models effects of stochastic strength, slow crack growth, and stress distribution on the overall reliability of a ceramic component. The essence of the enhancement in CARES/Life 6.0 is the capability to predict the probability of failure using results from transient finite-element analysis. ANSYS PDS models the effects of uncertainty in material properties, dimensions, and loading on the stress distribution and deformation. ANSYS/CARES/PDS accounts for the effects of probabilistic strength, probabilistic loads, probabilistic material properties, and probabilistic tolerances on the lifetime and reliability of the component. Even failure probability becomes a stochastic quantity that can be tracked as a response variable. ANSYS/CARES/PDS enables tracking of all stochastic quantities in the design space, thereby enabling more precise probabilistic prediction of lifetimes of ceramic components.

  12. Probabilistic Evaluation of Blade Impact Damage

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Abumeri, G. H.

    2003-01-01

    The response of a composite blade to high velocity impact is probabilistically evaluated. The evaluation is focused on quantifying probabilistically the effects of uncertainties (scatter) in the variables that describe the impact, the blade make-up (geometry and material), the blade response (displacements, strains, stresses, frequencies), the blade residual strength after impact, and the blade damage tolerance. The results of the probabilistic evaluation are given in terms of cumulative distribution functions and probabilistic sensitivities. Results show that the blade has relatively low damage tolerance at a 0.999 probability of structural failure and substantial damage tolerance at a 0.01 probability.

  13. The Stag Hunt Game: An Example of an Excel-Based Probabilistic Game

    ERIC Educational Resources Information Center

    Bridge, Dave

    2016-01-01

    With so many role-playing simulations already in the political science education literature, the recent repeated calls for new games is both timely and appropriate. This article answers and extends those calls by advocating the creation of probabilistic games using Microsoft Excel. I introduce the example of the Stag Hunt Game--a short, effective,…

  14. The application of probabilistic design theory to high temperature low cycle fatigue

    NASA Technical Reports Server (NTRS)

    Wirsching, P. H.

    1981-01-01

    Metal fatigue under stress and thermal cycling is a principal mode of failure in gas turbine engine hot section components such as turbine blades and disks and combustor liners. Designing for fatigue is subject to considerable uncertainty, e.g., scatter in cycles to failure, available fatigue test data and operating environment data, uncertainties in the models used to predict stresses, etc. Methods of analyzing fatigue test data for probabilistic design purposes are summarized. The general strain life as well as homo- and hetero-scedastic models are considered. Modern probabilistic design theory is reviewed and examples are presented which illustrate application to reliability analysis of gas turbine engine components.

  15. Experiences with Probabilistic Analysis Applied to Controlled Systems

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Giesy, Daniel P.

    2004-01-01

    This paper presents a semi-analytic method for computing frequency dependent means, variances, and failure probabilities for arbitrarily large-order closed-loop dynamical systems possessing a single uncertain parameter or with multiple highly correlated uncertain parameters. The approach will be shown to not suffer from the same computational challenges associated with computing failure probabilities using conventional FORM/SORM techniques. The approach is demonstrated by computing the probabilistic frequency domain performance of an optimal feed-forward disturbance rejection scheme.

  16. Probabilistic risk assessment of the Space Shuttle. Phase 3: A study of the potential of losing the vehicle during nominal operation. Volume 2: Integrated loss of vehicle model

    NASA Technical Reports Server (NTRS)

    Fragola, Joseph R.; Maggio, Gaspare; Frank, Michael V.; Gerez, Luis; Mcfadden, Richard H.; Collins, Erin P.; Ballesio, Jorge; Appignani, Peter L.; Karns, James J.

    1995-01-01

    The application of the probabilistic risk assessment methodology to a Space Shuttle environment, particularly to the potential of losing the Shuttle during nominal operation, is addressed. The different related concerns are identified and combined to determine overall program risks. A fault tree model is used to allocate system probabilities to the subsystem level. Loss of the vehicle due to failure to contain energetic gas and debris or to maintain proper propulsion and configuration is analyzed, along with loss due to Orbiter failure, external tank failure, and landing failure or error.

  17. A Probabilistic Assessment of Failure for Air Force Building Systems

    DTIC Science & Technology

    2015-03-26

    [The abstract for this DTIC record is not recoverable; the extracted text consists of fragments of a table of building-system failure estimates (rain water drainage, special piping, other plumbing, energy supply, and heat generating systems) whose numeric columns cannot be reconstructed. The recoverable detail is the thesis title, "A Probabilistic Assessment of Failure for Air Force Building Systems," by Stephanie L. (surname truncated in the source).]

  18. Probabilistic Analysis of Space Shuttle Body Flap Actuator Ball Bearings

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.; Jett, Timothy R.; Predmore, Roamer E.; Zaretsky, Erin V.

    2007-01-01

    A probabilistic analysis, using the 2-parameter Weibull-Johnson method, was performed on experimental life test data from space shuttle actuator bearings. Experiments were performed on a test rig under simulated conditions to determine the life and failure mechanism of the grease lubricated bearings that support the input shaft of the space shuttle body flap actuators. The failure mechanism was wear that can cause loss of bearing preload. These tests established life and reliability data for both shuttle flight and ground operation. Test data were used to estimate the failure rate and reliability as a function of the number of shuttle missions flown. The Weibull analysis of the test data for a 2-bearing shaft assembly in each body flap actuator established a reliability level of 99.6 percent for a life of 12 missions. A probabilistic system analysis for four shuttles, each of which has four actuators, predicts a single bearing failure in one actuator of one shuttle after 22 missions (a total of 88 missions for a 4-shuttle fleet). This prediction is comparable with actual shuttle flight history in which a single actuator bearing was found to have failed by wear at 20 missions.
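
    An illustration of the Weibull bookkeeping involved above. Only the 99.6 percent reliability at 12 missions is taken from the abstract; the Weibull slope is an assumption, so the fleet numbers below show the mechanics rather than reproduce the 22-mission prediction.

    ```python
    # Assumed Weibull slope; reliability anchor (99.6% at 12 missions) from the abstract.
    import numpy as np

    beta = 2.0                                   # assumed Weibull slope for a wear-type failure
    R12, L12 = 0.996, 12.0                       # per-assembly reliability anchor
    eta = L12 / (-np.log(R12)) ** (1.0 / beta)   # characteristic life, in missions

    def fleet_reliability(missions, n_actuators=16):
        """Probability of no bearing failure in a 4-shuttle x 4-actuator fleet."""
        r_one = np.exp(-(missions / eta) ** beta)
        return r_one ** n_actuators              # independent, identical assemblies assumed

    for L in (12, 16, 20, 22, 24, 28):
        print(f"{L:3d} missions per shuttle: P(no failure in fleet) = {fleet_reliability(L):.3f}")
    ```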

  19. Probabilistic Analysis of Space Shuttle Body Flap Actuator Ball Bearings

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.; Jett, Timothy R.; Predmore, Roamer E.; Zaretsky, Erwin V.

    2008-01-01

    A probabilistic analysis, using the 2-parameter Weibull-Johnson method, was performed on experimental life test data from space shuttle actuator bearings. Experiments were performed on a test rig under simulated conditions to determine the life and failure mechanism of the grease lubricated bearings that support the input shaft of the space shuttle body flap actuators. The failure mechanism was wear that can cause loss of bearing preload. These tests established life and reliability data for both shuttle flight and ground operation. Test data were used to estimate the failure rate and reliability as a function of the number of shuttle missions flown. The Weibull analysis of the test data for the four actuators on one shuttle, each with a 2-bearing shaft assembly, established a reliability level of 96.9 percent for a life of 12 missions. A probabilistic system analysis for four shuttles, each of which has four actuators, predicts a single bearing failure in one actuator of one shuttle after 22 missions (a total of 88 missions for a 4-shuttle fleet). This prediction is comparable with actual shuttle flight history in which a single actuator bearing was found to have failed by wear at 20 missions.

  20. Probabilistic and Possibilistic Analyses of the Strength of a Bonded Joint

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Krishnamurthy, T.; Smith, Steven A.

    2001-01-01

    The effects of uncertainties on the strength of a single lap shear joint are examined. Probabilistic and possibilistic methods are used to account for uncertainties. Linear and geometrically nonlinear finite element analyses are used in the studies. To evaluate the strength of the joint, fracture in the adhesive and material strength failure in the strap are considered. The study shows that linear analyses yield conservative predictions for failure loads. The possibilistic approach for treating uncertainties appears to be viable for preliminary design, but with several qualifications.

  1. Development of Probabilistic Flood Inundation Mapping For Flooding Induced by Dam Failure

    NASA Astrophysics Data System (ADS)

    Tsai, C.; Yeh, J. J. J.

    2017-12-01

    A primary function of flood inundation mapping is to forecast flood hazards and assess potential losses. However, uncertainties limit the reliability of inundation hazard assessments. Major sources of uncertainty should be taken into consideration by an optimal flood management strategy. This study focuses on the 20 km reach downstream of the Shihmen Reservoir in Taiwan. A dam-failure-induced flood provides the upstream boundary conditions for the flood routing. The two major sources of uncertainty considered in the hydraulic model and the flood inundation mapping are uncertainty in the dam break model and uncertainty in the roughness coefficient. The perturbance moment method is applied to a dam break model and the hydrosystem model to develop probabilistic flood inundation mapping. Various numbers of uncertain variables can be considered in these models and the variability of outputs can be quantified. The probabilistic flood inundation mapping for dam-break-induced floods can be developed with consideration of the variability of output using the commonly used HEC-RAS model. Different probabilistic flood inundation mappings are discussed and compared. Probabilistic flood inundation mapping is expected to provide new physical insight in support of evaluating areas at risk of flooding downstream of the reservoir.

  2. Probabilistic structural analysis methods for space transportation propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Moore, N.; Anis, C.; Newell, J.; Nagpal, V.; Singhal, S.

    1991-01-01

    Information on probabilistic structural analysis methods for space propulsion systems is given in viewgraph form. Information is given on deterministic certification methods, probability of failure, component response analysis, stress responses for 2nd stage turbine blades, Space Shuttle Main Engine (SSME) structural durability, and program plans.

  3. Recent developments of the NESSUS probabilistic structural analysis computer program

    NASA Technical Reports Server (NTRS)

    Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.

    1992-01-01

    The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability and risk analysis of structures considering cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.

  4. Probabilistic Analysis of a Composite Crew Module

    NASA Technical Reports Server (NTRS)

    Mason, Brian H.; Krishnamurthy, Thiagarajan

    2011-01-01

    An approach for conducting reliability-based analysis (RBA) of a Composite Crew Module (CCM) is presented. The goal is to identify and quantify the benefits of probabilistic design methods for the CCM and future space vehicles. The coarse finite element model from a previous NASA Engineering and Safety Center (NESC) project is used as the baseline deterministic analysis model to evaluate the performance of the CCM using a strength-based failure index. The first step in the probabilistic analysis process is the determination of the uncertainty distributions for key parameters in the model. Analytical data from water landing simulations are used to develop an uncertainty distribution, but such data were unavailable for other load cases. The uncertainty distributions for the other load scale factors and the strength allowables are generated based on assumed coefficients of variation. Probability of first-ply failure is estimated using three methods: the first order reliability method (FORM), Monte Carlo simulation, and conditional sampling. Results for the three methods were consistent. The reliability is shown to be driven by first ply failure in one region of the CCM at the high altitude abort load set. The final predicted probability of failure is on the order of 10^-11 due to the conservative nature of the factors of safety on the deterministic loads.
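
    As a hedged illustration of the estimation step, a crude Monte Carlo version of a first-ply-failure probability can be written in a few lines; the normal distributions, coefficients of variation, and the simple failure-index model (a deterministic index scaled by a random load factor and divided by a random strength allowable) are assumptions made here, not the NESC/CCM model. It also shows why, for probabilities near 10^-11, FORM or conditional sampling is needed rather than crude sampling.

```python
# Minimal Monte Carlo sketch of a first-ply-failure probability estimate
# (all distributions and the failure-index model are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

load_scale = rng.normal(1.0, 0.10, n)   # load scale factor, COV 10% (assumed)
strength = rng.normal(1.0, 0.08, n)     # normalized strength allowable, COV 8% (assumed)

deterministic_fi = 0.8                  # deterministic failure index (assumed)
failure_index = deterministic_fi * load_scale / strength

p_fail = np.mean(failure_index > 1.0)
print(f"Estimated probability of first-ply failure: {p_fail:.2e}")
# Note: crude Monte Carlo cannot resolve probabilities of order 1e-11; analytical
# approximations such as FORM or variance-reduction schemes are required there.
```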

  5. Probabilistic structural analysis methods of hot engine structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Hopkins, D. A.

    1989-01-01

    Development of probabilistic structural analysis methods for hot engine structures at Lewis Research Center is presented. Three elements of the research program are: (1) composite load spectra methodology; (2) probabilistic structural analysis methodology; and (3) probabilistic structural analysis application. Recent progress includes: (1) quantification of the effects of uncertainties for several variables on high pressure fuel turbopump (HPFT) turbine blade temperature, pressure, and torque of the space shuttle main engine (SSME); (2) the evaluation of the cumulative distribution function for various structural response variables based on assumed uncertainties in primitive structural variables; and (3) evaluation of the failure probability. Collectively, the results demonstrate that the structural durability of hot engine structural components can be effectively evaluated in a formal probabilistic/reliability framework.

  6. Reliability, Risk and Cost Trade-Offs for Composite Designs

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.

    1996-01-01

    Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally-occurring uncertainties including those in constituent material properties, fabrication variables, structure geometry and loading conditions. The probability density function of the first buckling load for a set of uncertain variables is computed. The probabilistic sensitivity factors of the uncertain variables to the first buckling load are calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of probabilistic sensitivity factors. With optimum design parameters such as the mean and coefficient of variation (representing range of scatter) of uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of the fiber modulus (design parameter) in the longitudinal direction.

  7. Probabilistic inspection strategies for minimizing service failures

    NASA Technical Reports Server (NTRS)

    Brot, Abraham

    1994-01-01

    The INSIM computer program, which simulates the 'limited fatigue life' environment in which aircraft structures generally operate, is described. The use of INSIM to develop inspection strategies that aim to minimize service failures is demonstrated. Damage-tolerance methodology, inspection thresholds, and customized inspections are simulated using the probability of failure as the driving parameter.

  8. Application of Probabilistic Analysis to Aircraft Impact Dynamics

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Padula, Sharon L.; Stockwell, Alan E.

    2003-01-01

    Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, laminated composites, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the uncertainty in the simulated responses. Several criteria are used to determine that a response surface method is the most appropriate probabilistic approach. The work is extended to compare optimization results with and without probabilistic constraints.

  9. Risk assessment of turbine rotor failure using probabilistic ultrasonic non-destructive evaluations

    NASA Astrophysics Data System (ADS)

    Guan, Xuefei; Zhang, Jingdan; Zhou, S. Kevin; Rasselkorde, El Mahjoub; Abbasi, Waheed A.

    2014-02-01

    The study presents a method and application of risk assessment methodology for turbine rotor fatigue failure using probabilistic ultrasonic nondestructive evaluations. A rigorous probabilistic modeling for ultrasonic flaw sizing is developed by incorporating the model-assisted probability of detection, and the probability density function (PDF) of the actual flaw size is derived. Two general scenarios, namely the ultrasonic inspection with an identified flaw indication and the ultrasonic inspection without flaw indication, are considered in the derivation. To perform estimations for fatigue reliability and remaining useful life, uncertainties from ultrasonic flaw sizing and fatigue model parameters are systematically included and quantified. The model parameter PDF is estimated using Bayesian parameter estimation and actual fatigue testing data. The overall method is demonstrated using a realistic application of steam turbine rotor, and the risk analysis under given safety criteria is provided to support maintenance planning.

  10. PROBABILISTIC RISK ANALYSIS OF RADIOACTIVE WASTE DISPOSALS - a case study

    NASA Astrophysics Data System (ADS)

    Trinchero, P.; Delos, A.; Tartakovsky, D. M.; Fernandez-Garcia, D.; Bolster, D.; Dentz, M.; Sanchez-Vila, X.; Molinero, J.

    2009-12-01

    The storage of contaminant material in surface or near-surface repositories, such as tailing piles for mine waste or disposal sites for low- and intermediate-level nuclear waste, poses a potential threat to the surrounding biosphere. These risks can be minimized by supporting decision-makers with quantitative tools capable of incorporating all sources of uncertainty within a rigorous probabilistic framework. A case study is presented in which we assess the risks associated with the surface storage of hazardous waste close to a populated area. The intrinsic complexity of the problem, involving many events with different spatial and time scales and many uncertain parameters, is overcome by using a formal PRA (probabilistic risk assessment) procedure that decomposes the system into a number of key events. Hence, the failure of the system is directly linked to the potential contamination of one of the three main receptors: the underlying karst aquifer, a surface stream that flows near the storage piles, and a protection area surrounding a number of wells used for water supply. The minimal cut sets leading to the failure of the system are obtained by defining a fault tree that incorporates different events, including the failure of the engineered system (e.g. the cover of the piles) and the failure of the geological barrier (e.g. the clay layer that separates the bottom of the pile from the karst formation). Finally, the probability of failure is quantitatively assessed by combining individual independent or conditional probabilities that are computed numerically or borrowed from reliability databases.
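
    The cut-set arithmetic behind this kind of PRA can be illustrated with a toy fault tree; the basic events, minimal cut sets, and probabilities below are invented for the sketch and do not correspond to the case study's fault tree.

```python
# Toy fault-tree sketch: top-event probability from minimal cut sets of
# independent basic events (events and numbers are made up for illustration).
from itertools import product

basic_events = {"cover_fails": 1e-2, "clay_barrier_fails": 5e-3, "spill_to_stream": 1e-4}
minimal_cut_sets = [{"cover_fails", "clay_barrier_fails"}, {"spill_to_stream"}]

def top_event_probability(events, cut_sets):
    """Exact top-event probability by enumerating basic-event states
    (practical only for small trees; production PRA codes use BDDs or bounds)."""
    names = list(events)
    p_top = 0.0
    for states in product([False, True], repeat=len(names)):
        occurred = {n for n, s in zip(names, states) if s}
        if any(cs <= occurred for cs in cut_sets):        # some cut set fully occurs
            p_state = 1.0
            for n, s in zip(names, states):
                p_state *= events[n] if s else 1.0 - events[n]
            p_top += p_state
    return p_top

print(f"P(system failure) = {top_event_probability(basic_events, minimal_cut_sets):.3e}")
```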

  11. Design of Critical Components

    NASA Technical Reports Server (NTRS)

    Hendricks, Robert C.; Zaretsky, Erwin V.

    2001-01-01

    Critical component design is based on minimizing product failures that result in loss of life. Potential catastrophic failures are reduced to secondary failures when components are removed for cause or after a set operating time in the system. Issues of liability and the cost of component removal then become of paramount importance. Deterministic design with factors of safety and probabilistic design each address, but lack, the essential characteristics needed for the design of critical components. In deterministic design and fabrication there are heuristic rules and safety factors developed over time for large sets of structural/material components. These factors did not come without cost: many designs failed, and many rules (codes) have standing committees to oversee their proper usage and enforcement. In probabilistic design, not only are failures a given, the failures are calculated; an element of risk is assumed based on empirical failure data for large classes of component operations. Failure of a class of components can be predicted, yet one cannot predict when a specific component will fail. The analogy is to the life insurance industry, where very careful statistics are kept on classes of individuals. For a specific class, life span can be predicted within statistical limits, yet the life span of a specific member of that class cannot be predicted.

  12. Reliability and risk assessment of structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1991-01-01

    Development of reliability and risk assessment of structural components and structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) the evaluation of the various uncertainties in terms of cumulative distribution functions for various structural response variables based on known or assumed uncertainties in primitive structural variables; (2) evaluation of the failure probability; (3) reliability and risk-cost assessment; and (4) an outline of an emerging approach for eventual certification of man-rated structures by computational methods. Collectively, the results demonstrate that the structural durability/reliability of man-rated structural components and structures can be effectively evaluated by using formal probabilistic methods.

  13. An overview of computational simulation methods for composite structures failure and life analysis

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1993-01-01

    Three parallel computational simulation methods are being developed at the LeRC Structural Mechanics Branch (SMB) for composite structures failure and life analysis: progressive fracture CODSTRAN; hierarchical methods for high-temperature composites; and probabilistic evaluation. Results to date demonstrate that these methods are effective in simulating composite structures failure/life/reliability.

  14. Probabilistic simulation of uncertainties in thermal structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Shiao, Michael

    1990-01-01

    Development of probabilistic structural analysis methods for hot structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) quantification of the effects of uncertainties for several variables on high pressure fuel turbopump (HPFT) blade temperature, pressure, and torque of the Space Shuttle Main Engine (SSME); (2) the evaluation of the cumulative distribution function for various structural response variables based on assumed uncertainties in primitive structural variables; (3) evaluation of the failure probability; (4) reliability and risk-cost assessment, and (5) an outline of an emerging approach for eventual hot structures certification. Collectively, the results demonstrate that the structural durability/reliability of hot structural components can be effectively evaluated in a formal probabilistic framework. In addition, the approach can be readily extended to computationally simulate certification of hot structures for aerospace environments.

  15. Probabilistic metrology or how some measurement outcomes render ultra-precise estimates

    NASA Astrophysics Data System (ADS)

    Calsamiglia, J.; Gendra, B.; Muñoz-Tapia, R.; Bagan, E.

    2016-10-01

    We show on theoretical grounds that, even in the presence of noise, probabilistic measurement strategies (which have a certain probability of failure or abstention) can provide, upon a heralded successful outcome, estimates with a precision that exceeds the deterministic bounds for the average precision. This establishes a new ultimate bound on the phase estimation precision of particular measurement outcomes (or sequence of outcomes). For probe systems subject to local dephasing, we quantify such precision limit as a function of the probability of failure that can be tolerated. Our results show that the possibility of abstaining can set back the detrimental effects of noise.

  16. The Study of the Relationship between Probabilistic Design and Axiomatic Design Methodology. Volume 2

    NASA Technical Reports Server (NTRS)

    Onwubiko, Chin-Yere; Onyebueke, Landon

    1996-01-01

    The structural design, or the design of machine elements, has been traditionally based on deterministic design methodology. The deterministic method considers all design parameters to be known with certainty. This methodology is, therefore, inadequate for the design of complex structures that are subjected to a variety of complex, severe loading conditions. A nonlinear behavior that is dependent on stress, stress rate, temperature, number of load cycles, and time is observed on all components subjected to complex conditions. These complex conditions introduce uncertainties; hence, the actual margin of safety remains unknown. In the deterministic methodology, the contingency of failure is discounted; hence, a high factor of safety is used. The deterministic method may be most useful in situations where the structures being designed are simple. The probabilistic method is concerned with the probability of non-failure performance of structures or machine elements. It is much more useful in situations where the design is characterized by complex geometry, the possibility of catastrophic failure, or sensitive loads and material properties. Also included: Comparative Study of the Use of AGMA Geometry Factors and Probabilistic Design Methodology in the Design of Compact Spur Gear Set.

  17. Life prediction of different commercial dental implants as influenced by uncertainties in their fatigue material properties and loading conditions.

    PubMed

    Pérez, M A

    2012-12-01

    Probabilistic analyses allow the effect of uncertainty in system parameters to be determined. In the literature, many researchers have investigated static loading effects on dental implants. However, the intrinsic variability and uncertainty of most of the main problem parameters are not accounted for. The objective of this research was to apply a probabilistic computational approach to predict the fatigue life of three different commercial dental implants considering the variability and uncertainty in their fatigue material properties and loading conditions. For one of the commercial dental implants, the influence of its diameter on the fatigue life performance was also studied. This stochastic technique was based on the combination of a probabilistic finite element method (PFEM) and a cumulative damage approach known as the B-model. After 6 million loading cycles, local failure probabilities of 0.3, 0.4 and 0.91 were predicted for the Lifecore, Avinent and GMI implants, respectively (diameter of 3.75 mm). The influence of the diameter for the GMI implant was studied, and the results predicted a local failure probability of 0.91 and 0.1 for the 3.75 mm and 5 mm diameters, respectively. In all cases the highest failure probability was located at the upper screw-threads. Therefore, the probabilistic methodology proposed herein may be a useful tool for performing a qualitative comparison between different commercial dental implants. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  18. Application of probabilistic analysis/design methods in space programs - The approaches, the status, and the needs

    NASA Technical Reports Server (NTRS)

    Ryan, Robert S.; Townsend, John S.

    1993-01-01

    The prospective improvement of probabilistic methods for space program analysis/design entails the further development of theories, codes, and tools which match specific areas of application, the drawing of lessons from previous uses of probability and statistics data bases, the enlargement of data bases (especially in the field of structural failures), and the education of engineers and managers on the advantages of these methods. An evaluation is presently made of the current limitations of probabilistic engineering methods. Recommendations are made for specific applications.

  19. A Case Study for Probabilistic Methods Validation (MSFC Center Director's Discretionary Fund, Project No. 94-26)

    NASA Technical Reports Server (NTRS)

    Price J. M.; Ortega, R.

    1998-01-01

    The probabilistic method is not a universally accepted approach for the design and analysis of aerospace structures. The validity of this approach must be demonstrated to encourage its acceptance as a viable design and analysis tool for estimating structural reliability. The objective of this study is to develop a well-characterized finite population of similar aerospace structures that can be used to (1) validate probabilistic codes, (2) demonstrate the basic principles behind probabilistic methods, (3) formulate general guidelines for characterization of material drivers (such as elastic modulus) when limited data are available, and (4) investigate how the drivers affect the results of sensitivity analysis at the component/failure mode level.

  20. Agent autonomy approach to probabilistic physics-of-failure modeling of complex dynamic systems with interacting failure mechanisms

    NASA Astrophysics Data System (ADS)

    Gromek, Katherine Emily

    A novel computational and inference framework of the physics-of-failure (PoF) reliability modeling for complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that the system-level reliability modeling would constitute inferences from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference approach for modeling multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and the autonomy properties of the intelligent agents, and the modeling of interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as the agents' ability to self-activate, deactivate or completely redefine their role in the analysis. This property of the agents and the ability to model interacting failure mechanisms of the system elements make the agent autonomy approach fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.

  1. Probabilistic Based Modeling and Simulation Assessment

    DTIC Science & Technology

    2010-06-01

    different crash and blast scenarios. With the integration of the high-fidelity neck and head model, a methodology to calculate the probability of injury ... variability, correlation, and multiple (often competing) failure metrics. Important scenarios include vehicular collisions, blast/fragment impact, and ... first area of focus is to develop a methodology to integrate probabilistic analysis into finite element analysis of vehicle collisions and blast.

  2. A Probabilistic Design Method Applied to Smart Composite Structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1995-01-01

    A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.

  3. An ontology-based nurse call management system (oNCS) with probabilistic priority assessment

    PubMed Central

    2011-01-01

    Background The current, place-oriented nurse call systems are very static. A patient can only make calls with a button which is fixed to a wall of a room. Moreover, the system does not take into account various factors specific to a situation. In the future, there will be an evolution to a mobile button for each patient so that they can walk around freely and still make calls. The system would become person-oriented and the available context information should be taken into account to assign the correct nurse to a call. The aim of this research is (1) the design of a software platform that supports the transition to mobile and wireless nurse call buttons in hospitals and residential care and (2) the design of a sophisticated nurse call algorithm. This algorithm dynamically adapts to the situation at hand by taking the profile information of staff members and patients into account. Additionally, the priority of a call probabilistically depends on the risk factors, assigned to a patient. Methods The ontology-based Nurse Call System (oNCS) was developed as an extension of a Context-Aware Service Platform. An ontology is used to manage the profile information. Rules implement the novel nurse call algorithm that takes all this information into account. Probabilistic reasoning algorithms are designed to determine the priority of a call based on the risk factors of the patient. Results The oNCS system is evaluated through a prototype implementation and simulations, based on a detailed dataset obtained from Ghent University Hospital. The arrival times of nurses at the location of a call, the workload distribution of calls amongst nurses and the assignment of priorities to calls are compared for the oNCS system and the current, place-oriented nurse call system. Additionally, the performance of the system is discussed. Conclusions The execution time of the nurse call algorithm is on average 50.333 ms. Moreover, the oNCS system significantly improves the assignment of nurses to calls. Calls generally have a nurse present faster and the workload-distribution amongst the nurses improves. PMID:21294860

  4. Analysis of flood hazard under consideration of dike breaches

    NASA Astrophysics Data System (ADS)

    Vorogushyn, S.; Apel, H.; Lindenschmidt, K.-E.; Merz, B.

    2009-04-01

    The study focuses on the development and application of a new modelling system which allows a comprehensive flood hazard assessment along diked river reaches under consideration of dike failures. The proposed Inundation Hazard Assessment Model (IHAM) represents a hybrid probabilistic-deterministic model. It comprises three models interactively coupled at runtime. These are: (1) 1D unsteady hydrodynamic model of river channel and floodplain flow between dikes, (2) probabilistic dike breach model which determines possible dike breach locations, breach widths and breach outflow discharges, and (3) 2D raster-based diffusion wave storage cell model of the hinterland areas behind the dikes. Due to the unsteady nature of the 1D and 2D coupled models, the dependence between hydraulic load at various locations along the reach is explicitly considered. The probabilistic dike breach model describes dike failures due to three failure mechanisms: overtopping, piping and slope instability caused by the seepage flow through the dike core (micro-instability). Dike failures for each mechanism are simulated based on fragility functions. The probability of breach is conditioned by the uncertainty in geometrical and geotechnical dike parameters. The 2D storage cell model driven by the breach outflow boundary conditions computes an extended spectrum of flood intensity indicators such as water depth, flow velocity, impulse, inundation duration and rate of water rise. IHAM is embedded in a Monte Carlo simulation in order to account for the natural variability of the flood generation processes reflected in the form of input hydrographs and for the randomness of dike failures given by breach locations, times and widths. The scenario calculations for the developed synthetic input hydrographs for the main river and tributary were carried out for floods with return periods of T = 100; 200; 500; 1000 a. Based on the modelling results, probabilistic dike hazard maps could be generated that indicate the failure probability of each discretised dike section for every scenario magnitude. Besides the binary inundation patterns that indicate the probability of raster cells being inundated, IHAM generates probabilistic flood hazard maps. These maps display spatial patterns of the considered flood intensity indicators and their associated return periods. The probabilistic nature of IHAM allows for the generation of percentile flood hazard maps that indicate the median and uncertainty bounds of the flood intensity indicators. The uncertainty results from the natural variability of the flow hydrographs and randomness of dike breach processes. The same uncertainty sources determine the uncertainty in the flow hydrographs along the study reach. The simulations showed that the dike breach stochasticity has an increasing impact on hydrograph uncertainty in downstream direction. Whereas in the upstream part of the reach the hydrograph uncertainty is mainly stipulated by the variability of the flood wave form, the dike failures strongly shape the uncertainty boundaries in the downstream part of the reach. Finally, scenarios of polder deployment for the extreme floods with T = 200; 500; 1000 a were simulated with IHAM. The results indicate a rather weak reduction of the mean and median flow hydrographs in the river channel. 
However, the capping of the flow peaks resulted in a considerable reduction of the overtopping failures downstream of the polder with a simultaneous slight increase of the piping and slope micro-instability frequencies, explained by a longer-lasting average impoundment. The developed IHAM simulation system represents a new scientific tool for studying fluvial inundation dynamics under extreme conditions incorporating effects of technical flood protection measures. With its major outputs in the form of novel probabilistic inundation and dike hazard maps, the IHAM system has a high practical value for decision support in flood management.
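
    The fragility-function idea used for the dike-breach sampling can be sketched very simply: for each Monte Carlo run, a peak water level is drawn, the conditional failure probability of a dike section is read from a fragility curve, and a breach is decided by a Bernoulli trial. The fragility parameters and flood statistics below are assumptions for illustration, not IHAM's calibrated values.

```python
# Simplified fragility-based dike-breach sampling (all parameters assumed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_runs = 100_000

fragility_median_m = 7.8   # water level with 50% conditional failure probability (assumed)
fragility_sigma = 0.25     # spread of the fragility curve (assumed)

# Peak water level at the dike section, Gumbel-distributed (assumed flood statistics).
peak_levels = rng.gumbel(loc=6.8, scale=0.5, size=n_runs)

# Conditional failure probability given the load, from a normal-CDF fragility curve.
p_fail_given_load = stats.norm.cdf((peak_levels - fragility_median_m) / fragility_sigma)
breached = rng.random(n_runs) < p_fail_given_load

print(f"Estimated breach probability per flood event: {breached.mean():.3f}")
```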

  5. Probabilistic Physics-Based Risk Tools Used to Analyze the International Space Station Electrical Power System Output

    NASA Technical Reports Server (NTRS)

    Patel, Bhogila M.; Hoge, Peter A.; Nagpal, Vinod K.; Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2004-01-01

    This paper describes the methods employed to apply probabilistic modeling techniques to the International Space Station (ISS) power system. These techniques were used to quantify the probabilistic variation in the power output, also called the response variable, due to variations (uncertainties) associated with knowledge of the influencing factors, called the random variables. These uncertainties can be due to unknown environmental conditions, variation in the performance of electrical power system components, or sensor tolerances. Uncertainties in these variables cause corresponding variations in the power output, but the magnitude of that effect varies with the ISS operating conditions, e.g. whether or not the solar panels are actively tracking the sun. Therefore, it is important to quantify the influence of these uncertainties on the power output for optimizing the power available for experiments.

  6. Interrelation Between Safety Factors and Reliability

    NASA Technical Reports Server (NTRS)

    Elishakoff, Isaac; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    An evaluation was performed to establish the relationship between safety factors and reliability. Results obtained show that the use of the safety factor is not contradictory to the employment of probabilistic methods. In many cases the safety factors can be directly expressed by the required reliability levels. However, there is a major difference that must be emphasized: whereas the safety factors are allocated in an ad hoc manner, the probabilistic approach offers a unified mathematical framework. The establishment of the interrelation between the concepts opens an avenue to specify safety factors based on reliability. In cases where there are several forms of failure, the allocation of safety factors should be based on having the same reliability associated with each failure mode. This immediately suggests that probabilistic methods can eliminate existing over-design or under-design. The report includes three parts: Part 1-Random Actual Stress and Deterministic Yield Stress; Part 2-Deterministic Actual Stress and Random Yield Stress; Part 3-Both Actual Stress and Yield Stress Are Random.
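
    For the Part 1 case (random actual stress, deterministic yield stress), one way to make the interrelation concrete is to assume a normally distributed stress: reliability is then R = P(S < Y) = Phi((Y - mu_S)/sigma_S), and the central safety factor Y/mu_S needed for a target reliability follows directly. The 10 percent coefficient of variation used below is an assumed example value, and this is only a sketch of the idea, not the report's derivation.

```python
# Safety factor implied by a target reliability, Part 1 case (random stress,
# deterministic yield), assuming the stress is normally distributed.
from scipy.stats import norm

def required_safety_factor(target_reliability: float, stress_cov: float) -> float:
    z = norm.ppf(target_reliability)     # standard-normal quantile
    return 1.0 + z * stress_cov          # SF = Y / mu_S = 1 + z * (sigma_S / mu_S)

for R in (0.999, 0.9999, 0.999999):
    print(f"stress COV = 10%, target R = {R}: SF = {required_safety_factor(R, 0.10):.2f}")
```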

  7. Probabilistic risk analysis of building contamination.

    PubMed

    Bolster, D T; Tartakovsky, D M

    2008-10-01

    We present a general framework for probabilistic risk assessment (PRA) of building contamination. PRA provides a powerful tool for the rigorous quantification of risk in contamination of building spaces. A typical PRA starts by identifying relevant components of a system (e.g. ventilation system components, potential sources of contaminants, remediation methods) and proceeds by using available information and statistical inference to estimate the probabilities of their failure. These probabilities are then combined by means of fault-tree analyses to yield probabilistic estimates of the risk of system failure (e.g. building contamination). A sensitivity study of PRAs can identify features and potential problems that need to be addressed with the most urgency. Often PRAs are amenable to approximations, which can significantly simplify the approach. All these features of PRA are presented in this paper via a simple illustrative example, which can be built upon in further studies. The tool presented here can be used to design and maintain adequate ventilation systems to minimize exposure of occupants to contaminants.

  8. Probabilistic risk assessment of the Space Shuttle. Phase 3: A study of the potential of losing the vehicle during nominal operation. Volume 5: Auxiliary shuttle risk analyses

    NASA Technical Reports Server (NTRS)

    Fragola, Joseph R.; Maggio, Gaspare; Frank, Michael V.; Gerez, Luis; Mcfadden, Richard H.; Collins, Erin P.; Ballesio, Jorge; Appignani, Peter L.; Karns, James J.

    1995-01-01

    Volume 5 is Appendix C, Auxiliary Shuttle Risk Analyses, and contains the following reports: Probabilistic Risk Assessment of Space Shuttle Phase 1 - Space Shuttle Catastrophic Failure Frequency Final Report; Risk Analysis Applied to the Space Shuttle Main Engine - Demonstration Project for the Main Combustion Chamber Risk Assessment; An Investigation of the Risk Implications of Space Shuttle Solid Rocket Booster Chamber Pressure Excursions; Safety of the Thermal Protection System of the Space Shuttle Orbiter - Quantitative Analysis and Organizational Factors; Space Shuttle Main Propulsion Pressurization System Probabilistic Risk Assessment, Final Report; and Space Shuttle Probabilistic Risk Assessment Proof-of-Concept Study - Auxiliary Power Unit and Hydraulic Power Unit Analysis Report.

  9. Probabilistic assessment of landslide tsunami hazard for the northern Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Pampell-Manis, A.; Horrillo, J.; Shigihara, Y.; Parambath, L.

    2016-01-01

    The devastating consequences of recent tsunamis affecting Indonesia and Japan have prompted a scientific response to better assess unexpected tsunami hazards. Although much uncertainty exists regarding the recurrence of large-scale tsunami events in the Gulf of Mexico (GoM), geological evidence indicates that a tsunami is possible and would most likely come from a submarine landslide triggered by an earthquake. This study customizes for the GoM a first-order probabilistic landslide tsunami hazard assessment. Monte Carlo Simulation (MCS) is employed to determine landslide configurations based on distributions obtained from observational submarine mass failure (SMF) data. Our MCS approach incorporates a Cholesky decomposition method for correlated landslide size parameters to capture correlations seen in the data as well as uncertainty inherent in these events. Slope stability analyses are performed using landslide and sediment properties and regional seismic loading to determine landslide configurations which fail and produce a tsunami. The probability of each tsunamigenic failure is calculated based on the joint probability of slope failure and probability of the triggering earthquake. We are thus able to estimate sizes and return periods for probabilistic maximum credible landslide scenarios. We find that the Cholesky decomposition approach generates landslide parameter distributions that retain the trends seen in observational data, improving the statistical validity and relevancy of the MCS technique in the context of landslide tsunami hazard assessment. Estimated return periods suggest that probabilistic maximum credible SMF events in the north and northwest GoM have a recurrence of 5000-8000 years, in agreement with age dates of observed deposits.
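
    The Cholesky step mentioned above can be sketched as follows: correlated standard normals are generated from a correlation matrix and mapped to lognormal landslide dimensions for each Monte Carlo realization. The correlation matrix and lognormal parameters below are placeholders, not the statistics fitted to the observed Gulf of Mexico SMF data.

```python
# Correlated lognormal landslide dimensions via Cholesky decomposition
# (correlation matrix and distribution parameters are placeholders).
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

corr = np.array([[1.0, 0.8, 0.6],        # correlation of ln(length), ln(width),
                 [0.8, 1.0, 0.7],        # ln(thickness) -- assumed values
                 [0.6, 0.7, 1.0]])
mu_ln = np.log([5000.0, 2000.0, 50.0])   # median length, width, thickness in m (assumed)
sigma_ln = np.array([0.6, 0.5, 0.4])     # log-space standard deviations (assumed)

L = np.linalg.cholesky(corr)             # corr = L @ L.T
z = rng.standard_normal((n, 3)) @ L.T    # rows are correlated standard normals
dims = np.exp(mu_ln + z * sigma_ln)      # correlated lognormal samples

print("Sample correlation of log-dimensions:")
print(np.corrcoef(np.log(dims), rowvar=False).round(2))
```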

  10. A systematic risk management approach employed on the CloudSat project

    NASA Technical Reports Server (NTRS)

    Basilio, R. R.; Plourde, K. S.; Lam, T.

    2000-01-01

    The CloudSat Project has developed a simplified approach for fault tree analysis and probabilistic risk assessment. A system-level fault tree has been constructed to identify credible fault scenarios and failure modes leading up to a potential failure to meet the nominal mission success criteria.

  11. Probabilistic liquefaction triggering based on the cone penetration test

    USGS Publications Warehouse

    Moss, R.E.S.; Seed, R.B.; Kayen, R.E.; Stewart, J.P.; Tokimatsu, K.

    2005-01-01

    Performance-based earthquake engineering requires a probabilistic treatment of potential failure modes in order to accurately quantify the overall stability of the system. This paper is a summary of the application portions of the probabilistic liquefaction triggering correlations recently proposed by Moss and co-workers. To enable probabilistic treatment of liquefaction triggering, the variables comprising the seismic load and the liquefaction resistance were treated as inherently uncertain. Supporting data from an extensive Cone Penetration Test (CPT)-based liquefaction case history database were used to develop a probabilistic correlation. The methods used to measure the uncertainty of the load and resistance variables, how the interactions of these variables were treated using Bayesian updating, and how reliability analysis was applied to produce curves of equal probability of liquefaction are presented. The normalization for effective overburden stress, the magnitude correlated duration weighting factor, and the non-linear shear mass participation factor used are also discussed.

  12. Probabilistic vs linear blending approaches to shared control for wheelchair driving.

    PubMed

    Ezeh, Chinemelu; Trautman, Pete; Devigne, Louise; Bureau, Valentin; Babel, Marie; Carlson, Tom

    2017-07-01

    Some people with severe mobility impairments are unable to operate powered wheelchairs reliably and effectively, using commercially available interfaces. This has sparked a body of research into "smart wheelchairs", which assist users to drive safely and create opportunities for them to use alternative interfaces. Various "shared control" techniques have been proposed to provide an appropriate level of assistance that is satisfactory and acceptable to the user. Most shared control techniques employ a traditional strategy called linear blending (LB), where the user's commands and wheelchair's autonomous commands are combined in some proportion. In this paper, however, we implement a more generalised form of shared control called probabilistic shared control (PSC). This probabilistic formulation improves the accuracy of modelling the interaction between the user and the wheelchair by taking into account uncertainty in the interaction. In this paper, we demonstrate the practical success of PSC over LB in terms of safety, particularly for novice users.
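
    The contrast between the two formulations can be illustrated in one dimension, assuming Gaussian uncertainty on each command; this is a toy sketch of the general idea, not the paper's PSC implementation.

```python
# Linear blending vs. a simple probabilistic (precision-weighted) fusion of a
# user command and an autonomous command, in one dimension (toy illustration).
def linear_blend(u_user: float, u_auto: float, alpha: float) -> float:
    """alpha is the fixed share given to the autonomous command."""
    return (1.0 - alpha) * u_user + alpha * u_auto

def probabilistic_fusion(u_user: float, var_user: float,
                         u_auto: float, var_auto: float) -> float:
    """Precision-weighted mean of two Gaussian beliefs about the intended command."""
    w_user, w_auto = 1.0 / var_user, 1.0 / var_auto
    return (w_user * u_user + w_auto * u_auto) / (w_user + w_auto)

# A jittery user input (large variance) is automatically down-weighted:
print(linear_blend(0.9, 0.2, alpha=0.5))          # 0.55, regardless of confidence
print(probabilistic_fusion(0.9, 0.5, 0.2, 0.05))  # ~0.26, leans on the confident source
```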

  13. Probabilistic Sizing and Verification of Space Ceramic Structures

    NASA Astrophysics Data System (ADS)

    Denaux, David; Ballhause, Dirk; Logut, Daniel; Lucarelli, Stefano; Coe, Graham; Laine, Benoit

    2012-07-01

    Sizing of ceramic parts is best optimised using a probabilistic approach which takes into account the preexisting flaw distribution in the ceramic part to compute a probability of failure of the part depending on the applied load, instead of a maximum allowable load as for a metallic part. This requires extensive knowledge of the material itself but also an accurate control of the manufacturing process. In the end, risk reduction approaches such as proof testing may be used to lower the final probability of failure of the part. Sizing and verification of ceramic space structures have been performed by Astrium for more than 15 years, both with Zerodur and SiC: Silex telescope structure, Seviri primary mirror, Herschel telescope, Formosat-2 instrument, and other ceramic structures flying today. Throughout this period of time, Astrium has investigated and developed experimental ceramic analysis tools based on the Weibull probabilistic approach. In the scope of the ESA/ESTEC study: “Mechanical Design and Verification Methodologies for Ceramic Structures”, which is to be concluded in the beginning of 2012, existing theories, technical state-of-the-art from international experts, and Astrium experience with probabilistic analysis tools have been synthesized into a comprehensive sizing and verification method for ceramics. Both classical deterministic and more optimised probabilistic methods are available, depending on the criticality of the item and on optimisation needs. The methodology, based on proven theory, has been successfully applied to demonstration cases and has shown its practical feasibility.
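
    The core of the Weibull sizing argument can be sketched for the simplest case of a uniform, uniaxial stress over a reference volume, together with the risk reduction from a proof test; the Weibull modulus, characteristic strength, and stress levels below are assumptions, and time-dependent effects such as slow crack growth are ignored.

```python
# Weibull weakest-link failure probability for a ceramic part under uniform
# uniaxial stress, plus the effect of a proof test (idealized, time-independent
# strength; all numerical values are assumptions).
import numpy as np

m = 10.0          # Weibull modulus (assumed)
sigma_0 = 300.0   # characteristic strength in MPa for this part/volume (assumed)

def p_fail(stress_mpa: float) -> float:
    """P_f = 1 - exp(-(sigma / sigma_0)^m) for a uniform stress field."""
    return 1.0 - np.exp(-(stress_mpa / sigma_0) ** m)

service, proof = 150.0, 220.0   # MPa, assumed service and proof stress levels
p_service = p_fail(service)
# Conditional failure probability at service after surviving the proof load:
p_after_proof = max(p_fail(service) - p_fail(proof), 0.0) / (1.0 - p_fail(proof))

print(f"P_f at service stress:           {p_service:.2e}")
print(f"P_f at service after proof test: {p_after_proof:.2e}")  # 0 in this idealized model
```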

  14. Toward a Probabilistic Phenological Model for Wheat Growing Degree Days (GDD)

    NASA Astrophysics Data System (ADS)

    Rahmani, E.; Hense, A.

    2017-12-01

    Are there deterministic relations between phenological and climate parameters? The answer is surely `No'. This answer motivated us to solve the problem through probabilistic theories. Thus, we developed a probabilistic phenological model which has the advantage of giving additional information in terms of uncertainty. To that aim, we turned to a statistical analysis named survival analysis. Survival analysis deals with death in biological organisms and failure in mechanical systems. In the survival analysis literature, death or failure is considered as an event. By event, in this research we mean the ripening date of wheat. We will assume only one event in this special case. By time, we mean the growing duration from sowing to ripening as the lifetime for wheat, which is a function of GDD. To be more precise, we will try to perform a probabilistic forecast of wheat ripening. The probability value will change between 0 and 1. Here, the survivor function gives the probability that the not-yet-ripened wheat survives longer than a specific time or will survive to the end of its lifetime as a ripened crop. The survival function at each station is determined by fitting a normal distribution to the GDD as a function of growth duration. Verification of the models obtained is done using the CRPS skill score (CRPSS). The positive values of CRPSS indicate the large superiority of the probabilistic phenological survival model over the deterministic models. These results demonstrate that considering uncertainties in modeling is beneficial, meaningful and necessary. We believe that probabilistic phenological models have the potential to help reduce the vulnerability of agricultural production systems to climate change, thereby increasing food security.
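
    The station-level model described above can be sketched in a few lines: a normal distribution is fitted to growing-degree-day (GDD) totals at observed ripening dates, and the survivor function gives the probability that the crop is still unripe at a given GDD sum. The GDD values below are invented for illustration.

```python
# Survivor function for wheat ripening from a normal fit to hypothetical GDD data.
import numpy as np
from scipy import stats

# Hypothetical GDD sums (degree-days) at observed ripening dates for one station.
gdd_at_ripening = np.array([1480.0, 1515.0, 1532.0, 1560.0, 1478.0, 1550.0, 1505.0])

mu, sigma = stats.norm.fit(gdd_at_ripening)

def survivor(gdd: float) -> float:
    """Probability that the wheat is still unripe when gdd degree-days have accumulated."""
    return 1.0 - stats.norm.cdf(gdd, loc=mu, scale=sigma)

for g in (1450.0, 1520.0, 1580.0):
    print(f"GDD = {g:6.0f}: P(ripe) = {1.0 - survivor(g):.2f}, S(g) = {survivor(g):.2f}")
```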

  15. Estimating earthquake-induced failure probability and downtime of critical facilities.

    PubMed

    Porter, Keith; Ramer, Kyle

    2012-01-01

    Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways.

  16. Reasoning in Reference Games: Individual- vs. Population-Level Probabilistic Modeling

    PubMed Central

    Franke, Michael; Degen, Judith

    2016-01-01

    Recent advances in probabilistic pragmatics have achieved considerable success in modeling speakers’ and listeners’ pragmatic reasoning as probabilistic inference. However, these models are usually applied to population-level data, and so implicitly suggest a homogeneous population without individual differences. Here we investigate potential individual differences in Theory-of-Mind related depth of pragmatic reasoning in so-called reference games that require drawing ad hoc Quantity implicatures of varying complexity. We show by Bayesian model comparison that a model that assumes a heterogeneous population is a better predictor of our data, especially for comprehension. We discuss the implications for the treatment of individual differences in probabilistic models of language use. PMID:27149675

  17. Probabilistic design of fibre concrete structures

    NASA Astrophysics Data System (ADS)

    Pukl, R.; Novák, D.; Sajdlová, T.; Lehký, D.; Červenka, J.; Červenka, V.

    2017-09-01

    Advanced computer simulation has recently become a well-established methodology for evaluating the resistance of concrete engineering structures. Nonlinear finite element analysis enables realistic prediction of structural damage, peak load, failure, post-peak response, development of cracks in concrete, yielding of reinforcement, concrete crushing or shear failure. The nonlinear material models can cover various types of concrete and reinforced concrete: ordinary concrete, plain or reinforced, without or with prestressing, fibre concrete, (ultra) high performance concrete, lightweight concrete, etc. Advanced material models taking into account fibre concrete properties such as the shape of the tensile softening branch, high toughness and ductility are described in the paper. Since the variability of the fibre concrete material properties is rather high, probabilistic analysis seems to be the most appropriate format for structural design and for evaluation of structural performance, reliability and safety. The presented combination of nonlinear analysis with advanced probabilistic methods allows evaluation of structural safety characterized by the failure probability or by the reliability index, respectively. The authors offer a methodology and computer tools for realistic safety assessment of concrete structures; the approach is based on randomization of the nonlinear finite element analysis of the structural model. Uncertainty of the material properties, or their randomness obtained from material tests, is accounted for in the random distributions. Furthermore, degradation of the reinforced concrete materials, such as carbonation of concrete, corrosion of reinforcement, etc., can be accounted for in order to analyze life-cycle structural performance and to enable prediction of structural reliability and safety over time. The results can serve as a rational basis for the design of fibre concrete engineering structures based on advanced nonlinear computer analysis. The presented methodology is illustrated by results from two probabilistic studies of different types of concrete structures related to practical applications and made from various materials (with the parameters obtained from real material tests).

  18. Corroded Anchor Structure Stability/Reliability (CAS_Stab-R) Software for Hydraulic Structures

    DTIC Science & Technology

    2017-12-01

    This report describes software that provides a probabilistic estimate of time-to-failure for a corroding anchor strand system. These anchor ... stability to the structure. A series of unique pull-test experiments conducted by Ebeling et al. (2016) at the U.S. Army Engineer Research and ... Reliability (CAS_Stab-R) produces probabilistic Remaining Anchor Life time estimates for anchor cables based upon the direct corrosion rate for the

  19. Probabilistic Structural Analysis Program

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.

    2010-01-01

    NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and life-prediction methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.
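
    A plain mean-value, first-order reliability estimate gives a feel for the kind of approximation that advanced-mean-value algorithms build on; the sketch below is not the NESSUS implementation, and the toy strength-minus-stress limit state and its numbers are assumptions.

```python
# Mean-value first-order (MVFOSM) reliability sketch for independent,
# approximately normal inputs; failure is defined by g(x) < 0.
import numpy as np
from scipy.stats import norm

def mvfosm(g, mu, sigma, h=1e-6):
    """Return (beta, approximate P_f) from a linearization of g at the mean."""
    mu = np.asarray(mu, dtype=float)
    grad = np.array([(g(mu + h * e) - g(mu - h * e)) / (2 * h)
                     for e in np.eye(len(mu))])
    beta = g(mu) / np.sqrt(np.sum((grad * np.asarray(sigma)) ** 2))
    return beta, norm.cdf(-beta)

# Toy limit state: strength R minus stress S (illustrative numbers).
g = lambda x: x[0] - x[1]
beta, pf = mvfosm(g, mu=[500.0, 350.0], sigma=[40.0, 30.0])
print(f"beta = {beta:.2f}, P_f ~ {pf:.2e}")
```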

  20. Wave scheduling - Decentralized scheduling of task forces in multicomputers

    NASA Technical Reports Server (NTRS)

    Van Tilborg, A. M.; Wittie, L. D.

    1984-01-01

    Decentralized operating systems that control large multicomputers need techniques to schedule competing parallel programs called task forces. Wave scheduling is a probabilistic technique that uses a hierarchical distributed virtual machine to schedule task forces by recursively subdividing and issuing wavefront-like commands to processing elements capable of executing individual tasks. Wave scheduling is highly resistant to processing element failures because it uses many distributed schedulers that dynamically assign scheduling responsibilities among themselves. The scheduling technique is trivially extensible as more processing elements join the host multicomputer. A simple model of scheduling cost is used by every scheduler node to distribute scheduling activity and minimize wasted processing capacity by using perceived workload to vary decentralized scheduling rules. At low to moderate levels of network activity, wave scheduling is only slightly less efficient than a central scheduler in its ability to direct processing elements to accomplish useful work.

  1. REVIEWS OF TOPICAL PROBLEMS: 21st century: what is life from the perspective of physics?

    NASA Astrophysics Data System (ADS)

    Ivanitskii, Genrikh R.

    2010-07-01

    The evolution of the biophysical paradigm over 65 years since the publication in 1944 of Erwin Schrödinger's What is Life? The Physical Aspects of the Living Cell is reviewed. Based on the advances in molecular genetics, it is argued that all the features characteristic of living systems can also be found in nonliving ones. Ten paradoxes in logic and physics are analyzed that allow defining life in terms of a spatial-temporal hierarchy of structures and combinatory probabilistic logic. From the perspective of physics, life can be defined as resulting from a game involving interactions of matter one part of which acquires the ability to remember the success (or failure) probabilities from the previous rounds of the game, thereby increasing its chances for further survival in the next round. This part of matter is currently called living matter.

  2. Distributed collaborative probabilistic design for turbine blade-tip radial running clearance using support vector machine of regression

    NASA Astrophysics Data System (ADS)

    Fei, Cheng-Wei; Bai, Guang-Chen

    2014-12-01

    To improve the computational precision and efficiency of probabilistic design for mechanical dynamic assemblies like the blade-tip radial running clearance (BTRRC) of a gas turbine, a distributed collaborative probabilistic design method based on support vector machine of regression (SR) (called DCSRM) is proposed by integrating the distributed collaborative response surface method and the support vector machine regression model. The mathematical model of DCSRM is established and the probabilistic design idea of DCSRM is introduced. The dynamic assembly probabilistic design of an aeroengine high-pressure turbine (HPT) BTRRC is accomplished to verify the proposed DCSRM. The analysis results yield the optimal static blade-tip clearance of the HPT for designing the BTRRC and for improving the performance and reliability of the aeroengine. The comparison of methods shows that the DCSRM has high computational accuracy and high computational efficiency in BTRRC probabilistic analysis. The present research offers an effective way for the reliability design of mechanical dynamic assemblies and enriches mechanical reliability theory and methods.
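
    The surrogate-plus-sampling idea behind methods of this kind can be sketched with an ordinary support vector regression: a cheap regression model is trained on a modest number of expensive analyses and then sampled heavily. The clearance response, threshold, and distributions below are made up, and the distributed collaborative coupling of the actual DCSRM is not reproduced.

```python
# SVR surrogate trained on a stand-in "clearance" response, then reused for
# cheap Monte Carlo probability estimation (all functions and numbers assumed).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

def clearance_response(x):
    """Stand-in for an expensive thermo-structural analysis: clearance in mm
    as a nonlinear function of a speed factor x[:, 0] and a thermal factor x[:, 1]."""
    return 2.0 - 0.8 * x[:, 0] ** 2 - 0.5 * x[:, 0] * x[:, 1] - 0.3 * x[:, 1]

# Small design-of-experiments set for training the surrogate.
X_train = rng.uniform(-1.5, 1.5, size=(80, 2))
surrogate = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(X_train, clearance_response(X_train))

# Cheap Monte Carlo on the surrogate: probability that clearance drops below 1.2 mm.
X_mc = rng.normal(0.0, 0.5, size=(100_000, 2))
p_small_clearance = np.mean(surrogate.predict(X_mc) < 1.2)
print(f"Estimated P(clearance < 1.2 mm) = {p_small_clearance:.3f}")
```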

  3. Phase transitions in coupled map lattices and in associated probabilistic cellular automata.

    PubMed

    Just, Wolfram

    2006-10-01

    Analytical tools are applied to investigate piecewise linear coupled map lattices in terms of probabilistic cellular automata. The so-called disorder condition of probabilistic cellular automata is closely related with attracting sets in coupled map lattices. The importance of this condition for the suppression of phase transitions is illustrated by spatially one-dimensional systems. Invariant densities and temporal correlations are calculated explicitly. Ising type phase transitions are found for one-dimensional coupled map lattices acting on repelling sets and for a spatially two-dimensional Miller-Huse-like system with stable long time dynamics. Critical exponents are calculated within a finite size scaling approach. The relevance of detailed balance of the resulting probabilistic cellular automaton for the critical behavior is pointed out.

  4. Reliability assessment of slender concrete columns at the stability failure

    NASA Astrophysics Data System (ADS)

    Valašík, Adrián; Benko, Vladimír; Strauss, Alfred; Täubling, Benjamin

    2018-01-01

    The European Standard for designing concrete columns using non-linear methods shows deficiencies in terms of global reliability when the columns fail by loss of stability. Buckling is a brittle failure which occurs without warning, and the probability of its occurrence depends on the column's slenderness. Experiments with slender concrete columns were carried out in cooperation with STRABAG Bratislava LTD in the Central Laboratory of the Faculty of Civil Engineering SUT in Bratislava. The following article compares the global reliability of slender concrete columns with a slenderness of 90 and higher. The columns were designed according to the methods offered by EN 1992-1-1 [1]. The experiments were used as the basis for deterministic non-linear modelling of the columns and the subsequent probabilistic evaluation of the variability of the structural response. The final results may be utilized as thresholds for the loading of produced structural elements, and they present probabilistic design as less conservative than both classic partial-safety-factor design and the alternative ECOV method.

  5. Use of limited data to construct Bayesian networks for probabilistic risk assessment.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groth, Katrina M.; Swiler, Laura Painton

    2013-03-01

    Probabilistic Risk Assessment (PRA) is a fundamental part of safety/quality assurance for nuclear power and nuclear weapons. Traditional PRA very effectively models complex hardware system risks using binary probabilistic models. However, traditional PRA models are not flexible enough to accommodate non-binary soft-causal factors, such as digital instrumentation and control, passive components, aging, common cause failure, and human errors. Bayesian Networks offer the opportunity to incorporate these risks into the PRA framework. This report describes the results of an early career LDRD project titled "Use of Limited Data to Construct Bayesian Networks for Probabilistic Risk Assessment". The goal of the work was to establish the capability to develop Bayesian Networks from sparse data, and to demonstrate this capability by producing a data-informed Bayesian Network for use in Human Reliability Analysis (HRA) as part of nuclear power plant Probabilistic Risk Assessment (PRA). This report summarizes the research goal and major products of the research.

  6. Life Predicted in a Probabilistic Design Space for Brittle Materials With Transient Loads

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Palfi, Tamas; Reh, Stefan

    2005-01-01

    Analytical techniques have progressively become more sophisticated, and now we can consider the effect of the entire space of random input variables on the lifetime reliability of brittle structures. This was demonstrated with NASA's CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code combined with the commercially available ANSYS/Probabilistic Design System (ANSYS/PDS), a probabilistic analysis tool that is an integral part of the ANSYS finite-element analysis program. ANSYS/PDS allows probabilistic loads, component geometry, and material properties to be considered in the finite-element analysis. CARES/Life predicts the time-dependent probability of failure of brittle material structures under generalized thermomechanical loading--such as that found in a turbine engine hot-section. Glenn researchers coupled ANSYS/PDS with CARES/Life to assess the effects of the stochastic variables of component geometry, loading, and material properties on the predicted life of the component for fully transient thermomechanical loading and cyclic loading.

  7. Quantifying effectiveness of failure prediction and response in HPC systems : methodology and example.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Chen, Frank Xiaoxiao; Pebay, Philippe Pierre

    2010-06-01

    Effective failure prediction and mitigation strategies in high-performance computing systems could provide huge gains in resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates these using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors. While many research studies have quantified the expected impact of growing system size, and the associated shortened mean time to failure (MTTF), on application performance in large-scale high-performance computing (HPC) platforms, there has been little if any work to quantify the possible gains from predicting system resource failures with significant but imperfect accuracy. This possibly stems from HPC system complexity and the fact that, to date, no one has established any good predictors of failure in these systems. Our work in the OVIS project aims to discover these predictors via a variety of data collection techniques and statistical analysis methods that yield probabilistic predictions. The question then is, 'How good or useful are these predictions?' We investigate methods for answering this question in a general setting, and illustrate them using a specific failure predictor discovered on a production system at Sandia.

  8. A new discriminative kernel from probabilistic models.

    PubMed

    Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert

    2002-10-01

    Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.
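
    In standard notation (the paper's exact normalization may differ), the Fisher kernel uses the score vector of the marginal log-likelihood, while the TOP kernel builds its feature vector from the posterior log-odds and its tangent (derivative) vectors:

        f_\theta(x) = \nabla_\theta \log p(x \mid \theta), \qquad
        K_{\mathrm{Fisher}}(x, x') = f_\theta(x)^\top I(\theta)^{-1} f_\theta(x'),

        v(x, \theta) = \log P(y = +1 \mid x, \theta) - \log P(y = -1 \mid x, \theta),

        f^{\mathrm{TOP}}_\theta(x) = \bigl(v(x,\theta),\ \partial_{\theta_1} v(x,\theta),\ \ldots,\ \partial_{\theta_p} v(x,\theta)\bigr)^\top, \qquad
        K_{\mathrm{TOP}}(x, x') = f^{\mathrm{TOP}}_\theta(x)^\top f^{\mathrm{TOP}}_\theta(x').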

  9. Terminal Model Of Newtonian Dynamics

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1994-01-01

    Paper presents study of theory of Newtonian dynamics of terminal attractors and repellers, focusing on issues of reversibility vs. irreversibility and deterministic evolution vs. probabilistic or chaotic evolution of dynamic systems. Theory developed called "terminal dynamics" emphasizes difference between it and classical Newtonian dynamics. Also holds promise for explaining irreversibility, unpredictability, probabilistic behavior, and chaos in turbulent flows, in thermodynamic phenomena, and in other dynamic phenomena and systems.

  10. Complete mechanical characterization of an external hexagonal implant connection: in vitro study, 3D FEM, and probabilistic fatigue.

    PubMed

    Prados-Privado, María; Gehrke, Sérgio A; Rojo, Rosa; Prados-Frutos, Juan Carlos

    2018-06-11

    The aim of this study was to fully characterize the mechanical behavior of an external hexagonal implant connection (ø3.5 mm, 10-mm length) with an in vitro study, a three-dimensional finite element analysis, and a probabilistic fatigue study. Ten implant-abutment assemblies were randomly divided into two groups, five were subjected to a fracture test to obtain the maximum fracture load, and the remaining were exposed to a fatigue test with 360,000 cycles of 150 ± 10 N. After mechanical cycling, all samples were attached to the torque-testing machine and the removal torque was measured in Newton centimeters. A finite element analysis (FEA) was then executed in ANSYS® to verify all results obtained in the mechanical tests. Finally, due to the randomness of the fatigue phenomenon, a probabilistic fatigue model was computed to obtain the probability of failure associated with each cycle load. FEA demonstrated that the fracture corresponded with a maximum stress of 2454 MPa obtained in the in vitro fracture test. Mean life was verified by the three methods. Results obtained by the FEA, the in vitro test, and the probabilistic approaches were in accordance. Under these conditions, no mechanical etiology failure is expected to occur up to 100,000 cycles.

  11. Probabilistic fatigue methodology for six nines reliability

    NASA Technical Reports Server (NTRS)

    Everett, R. A., Jr.; Bartlett, F. D., Jr.; Elber, Wolf

    1990-01-01

    Fleet readiness and flight safety strongly depend on the degree of reliability that can be designed into rotorcraft flight critical components. The current U.S. Army fatigue life specification for new rotorcraft is the so-called six nines reliability, or a probability of failure of one in a million. The progress of a round robin which was established by the American Helicopter Society (AHS) Subcommittee for Fatigue and Damage Tolerance is reviewed to investigate reliability-based fatigue methodology. The participants in this cooperative effort are in the U.S. Army Aviation Systems Command (AVSCOM) and the rotorcraft industry. One phase of the joint activity examined fatigue reliability under uniquely defined conditions for which only one answer was correct. The other phases were set up to learn how the different industry methods in defining fatigue strength affected the mean fatigue life and reliability calculations. Hence, constant amplitude and spectrum fatigue test data were provided so that each participant could perform their standard fatigue life analysis. As a result of this round robin, the probabilistic logic which includes both fatigue strength and spectrum loading variability in developing a consistent reliability analysis was established. In this first study, the reliability analysis was limited to the linear cumulative damage approach. However, it is expected that superior fatigue life prediction methods will ultimately be developed through this open AHS forum. To that end, these preliminary results were useful in identifying some topics for additional study.

  12. Analysis of scale effect in compressive ice failure and implications for design

    NASA Astrophysics Data System (ADS)

    Taylor, Rocky Scott

    The main focus of the study was the analysis of scale effect in local ice pressure resulting from probabilistic (spalling) fracture and the relationship between local and global loads due to the averaging of pressures across the width of a structure. A review of fundamental theory, relevant ice mechanics and a critical analysis of data and theory related to the scale dependent pressure behavior of ice were completed. To study high pressure zones (hpzs), data from small-scale indentation tests carried out at the NRC-IOT were analyzed, including small-scale ice block and ice sheet tests. Finite element analysis was used to model a sample ice block indentation event using a damaging, viscoelastic material model and element removal techniques (for spalling). Medium scale tactile sensor data from the Japan Ocean Industries Association (JOIA) program were analyzed to study details of hpz behavior. The averaging of non-simultaneous hpz loads during an ice-structure interaction was examined using local panel pressure data. Probabilistic averaging methodology for extrapolating full-scale pressures from local panel pressures was studied and an improved correlation model was formulated. Panel correlations for high speed events were observed to be lower than panel correlations for low speed events. Global pressure estimates based on probabilistic averaging were found to give substantially lower average errors in estimation of load compared with methods based on linear extrapolation (no averaging). Panel correlations were analyzed for Molikpaq and compared with JOIA results. From this analysis, it was shown that averaging does result in decreasing pressure for increasing structure width. The relationship between local pressure and ice thickness for a panel of unit width was studied in detail using full-scale data from the STRICE, Molikpaq, Cook Inlet and Japan Ocean Industries Association (JOIA) data sets. A distinct trend of decreasing pressure with increasing ice thickness was observed. The pressure-thickness behavior was found to be well modeled by the power law relationships P_avg = 0.278 h^(-0.408) MPa and P_std = 0.172 h^(-0.273) MPa for the mean and standard deviation of pressure, respectively. To study theoretical aspects of spalling fracture and the pressure-thickness scale effect, probabilistic failure models have been developed. A probabilistic model based on Weibull theory (tensile stresses only) was first developed. Estimates of failure pressure obtained with this model were orders of magnitude higher than the pressures observed from benchmark data due to the assumption of only tensile failure. A probabilistic fracture mechanics (PFM) model including both tensile and compressive (shear) cracks was developed. Criteria for unstable fracture in tensile and compressive (shear) zones were given. From these results a clear theoretical scale effect in peak (spalling) pressure was observed. This scale effect followed the relationship P_p,th = 0.15 h^(-0.50) MPa, which agreed well with the benchmark data. The PFM model was applied to study the effect of ice edge shape (taper angle) and hpz eccentricity. Results indicated that specimens with flat edges spall at lower pressures while those with more tapered edges spall less readily. The mean peak (failure) pressure was also observed to decrease with increased eccentricity. It was concluded that hpzs centered about the middle of the ice thickness are the zones most likely to create the peak pressures that are of interest in design.
Promising results were obtained using the PFM model, which provides strong support for continued research in the development and application of probabilistic fracture mechanics to the study of scale effects in compressive ice failure and to guide the development of methods for the estimation of design ice pressures.
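
    For concreteness, the reported power-law fits can be evaluated directly; the Python snippet below only tabulates the relationships quoted above (with ice thickness h in the units of the original fits) and adds no new data.

        def p_avg(h):   return 0.278 * h ** -0.408   # mean local pressure, MPa
        def p_std(h):   return 0.172 * h ** -0.273   # standard deviation of pressure, MPa
        def p_peak(h):  return 0.150 * h ** -0.50    # theoretical peak (spalling) pressure, MPa

        for h in (0.5, 1.0, 2.0):
            print(f"h = {h}: mean = {p_avg(h):.3f} MPa, "
                  f"std = {p_std(h):.3f} MPa, peak (PFM) = {p_peak(h):.3f} MPa")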

  13. Derivation of Failure Rates and Probability of Failures for the International Space Station Probabilistic Risk Assessment Study

    NASA Technical Reports Server (NTRS)

    Vitali, Roberto; Lutomski, Michael G.

    2004-01-01

    The National Aeronautics and Space Administration's (NASA) International Space Station (ISS) Program uses Probabilistic Risk Assessment (PRA) as part of its Continuous Risk Management Process. It is used as a decision and management support tool not only to quantify risk for specific conditions, but more importantly to compare different operational and management options, determine the lowest-risk option, and provide rationale for management decisions. This paper presents the derivation of the probability distributions used to quantify the failure rates and the probabilities of failure of the basic events employed in the PRA model of the ISS. The paper shows how a Bayesian approach was used with different sources of data, including actual ISS on-orbit failures, to enhance confidence in the results of the PRA. As time progresses and more meaningful data are gathered from on-orbit failures, an increasingly accurate failure rate probability distribution for the basic events of the ISS PRA model can be obtained. The ISS PRA has been developed by mapping the ISS critical systems, such as propulsion, thermal control, or power generation, into event sequence diagrams and fault trees. The lowest level of indenture of the fault trees was the orbital replacement unit (ORU). The ORU level was chosen consistent with the level of statistically meaningful data that could be obtained from the aerospace industry and from experts in the field. For example, data were gathered for the solenoid valves present in the propulsion system of the ISS. However, valves themselves are composed of parts, and the individual failure of these parts was not accounted for in the PRA model; in other words, the failure of a spring within a valve was considered a failure of the valve itself.
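
    A conjugate Gamma-Poisson update is one common way to combine an industry-based prior with observed on-orbit failure counts; the sketch below is only illustrative (the prior parameters and failure data are invented, not ISS values), and the ISS PRA's actual Bayesian treatment may differ.

        from scipy import stats

        # Gamma prior on the failure rate of one ORU, expressed as a pseudo-count of
        # alpha0 failures over beta0 operating hours (invented values).
        alpha0, beta0 = 0.5, 50_000.0
        failures, hours = 2, 26_000.0          # hypothetical on-orbit experience

        posterior = stats.gamma(a=alpha0 + failures, scale=1.0 / (beta0 + hours))
        print(f"posterior mean rate: {posterior.mean():.2e} failures per hour")
        print(f"90% credible interval: {posterior.ppf(0.05):.2e} to {posterior.ppf(0.95):.2e}")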

  14. Differential reliability : probabilistic engineering applied to wood members in bending-tension

    Treesearch

    Stanley K. Suddarth; Frank E. Woeste; William L. Galligan

    1978-01-01

    Reliability analysis is a mathematical technique for appraising the design and materials of engineered structures to provide a quantitative estimate of probability of failure. Two or more cases which are similar in all respects but one may be analyzed by this method; the contrast between the probabilities of failure for these cases allows strong analytical focus on the...

  15. Probabilistic Evaluation of Advanced Ceramic Matrix Composite Structures

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2003-01-01

    The objective of this report is to summarize the deterministic and probabilistic structural evaluation results of two structures made with advanced ceramic composites (CMC): internally pressurized tube and uniformly loaded flange. The deterministic structural evaluation includes stress, displacement, and buckling analyses. It is carried out using the finite element code MHOST, developed for the 3-D inelastic analysis of structures that are made with advanced materials. The probabilistic evaluation is performed using the integrated probabilistic assessment of composite structures computer code IPACS. The effects of uncertainties in primitive variables related to the material, fabrication process, and loadings on the material property and structural response behavior are quantified. The primitive variables considered are: thermo-mechanical properties of fiber and matrix, fiber and void volume ratios, use temperature, and pressure. The probabilistic structural analysis and probabilistic strength results are used by IPACS to perform reliability and risk evaluation of the two structures. The results will show that the sensitivity information obtained for the two composite structures from the computational simulation can be used to alter the design process to meet desired service requirements. In addition to detailed probabilistic analysis of the two structures, the following were performed specifically on the CMC tube: (1) predicted the failure load and the buckling load, (2) performed coupled non-deterministic multi-disciplinary structural analysis, and (3) demonstrated that probabilistic sensitivities can be used to select a reduced set of design variables for optimization.

  16. Commercialization of NESSUS: Status

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Millwater, Harry R.

    1991-01-01

    A plan was initiated in 1988 to commercialize the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) probabilistic structural analysis software. The goal of the on-going commercialization effort is to begin the transfer of Probabilistic Structural Analysis Method (PSAM) developed technology into industry and to develop additional funding resources in the general area of structural reliability. The commercialization effort is summarized. The SwRI NESSUS Software System is a general purpose probabilistic finite element computer program using state of the art methods for predicting stochastic structural response due to random loads, material properties, part geometry, and boundary conditions. NESSUS can be used to assess structural reliability, to compute probability of failure, to rank the input random variables by importance, and to provide a more cost effective design than traditional methods. The goal is to develop a general probabilistic structural analysis methodology to assist in the certification of critical components in the next generation Space Shuttle Main Engine.

  17. Fault tree analysis for integrated and probabilistic risk analysis of drinking water systems.

    PubMed

    Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof

    2009-04-01

    Drinking water systems are vulnerable and subject to a wide range of risks. To avoid sub-optimisation of risk-reduction options, risk analyses need to include the entire drinking water system, from source to tap. Such an integrated approach demands tools that are able to model interactions between different events. Fault tree analysis is a risk estimation tool with the ability to model interactions between events. Using fault tree analysis on an integrated level, a probabilistic risk analysis of a large drinking water system in Sweden was carried out. The primary aims of the study were: (1) to develop a method for integrated and probabilistic risk analysis of entire drinking water systems; and (2) to evaluate the applicability of Customer Minutes Lost (CML) as a measure of risk. The analysis included situations where no water is delivered to the consumer (quantity failure) and situations where water is delivered but does not comply with water quality standards (quality failure). Hard data as well as expert judgements were used to estimate probabilities of events and uncertainties in the estimates. The calculations were performed using Monte Carlo simulations. CML is shown to be a useful measure of risks associated with drinking water systems. The method presented provides information on risk levels, probabilities of failure, failure rates and downtimes of the system. This information is available for the entire system as well as its different sub-systems. Furthermore, the method enables comparison of the results with performance targets and acceptable levels of risk. The method thus facilitates integrated risk analysis and consequently helps decision-makers to minimise sub-optimisation of risk-reduction options.
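
    As a generic illustration of this kind of analysis (the tree structure, lognormal uncertainty parameters, and event names below are invented; they do not describe the system studied in the paper), a short Python sketch that propagates uncertain basic-event probabilities through a small fault tree by Monte Carlo simulation:

        import numpy as np

        # Toy fault tree: top event "no water delivered" = source failure OR
        # (treatment failure AND backup failure), with independent basic events.
        rng = np.random.default_rng(1)
        n = 100_000

        p_source    = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=n)
        p_treatment = rng.lognormal(mean=np.log(5e-3), sigma=0.7, size=n)
        p_backup    = rng.lognormal(mean=np.log(2e-2), sigma=0.7, size=n)

        # OR gate via complement rule, AND gate via product of probabilities
        p_top = 1.0 - (1.0 - p_source) * (1.0 - p_treatment * p_backup)

        print(f"mean P(top event)  : {p_top.mean():.2e}")
        print(f"5th-95th percentile: {np.percentile(p_top, 5):.2e} .. "
              f"{np.percentile(p_top, 95):.2e}")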

  18. Probabilistic analysis on the failure of reactivity control for the PWR

    NASA Astrophysics Data System (ADS)

    Sony Tjahyani, D. T.; Deswandri; Sunaryo, G. R.

    2018-02-01

    The fundamental safety functions of a power reactor are to control reactivity, to remove heat from the reactor, and to confine radioactive material. Safety analysis is used to ensure that each of these functions is fulfilled during the design, and it is done by deterministic and probabilistic methods. The analysis of reactivity control is important because its failure affects the other fundamental safety functions. The purpose of this research is to determine the failure probability of reactivity control and the contributions to that failure for a PWR design. The analysis is carried out by determining the intermediate events that cause the failure of reactivity control. The basic events are then determined by a deductive method using fault tree analysis. The AP1000 is used as the object of the research. The probability data for component failures and human errors used in the analysis are collected from IAEA, Westinghouse, NRC and other published documents. The results show that there are six intermediate events that can cause the failure of reactivity control: uncontrolled rod bank withdrawal at low power or full power, malfunction of boron dilution, misalignment of control rod withdrawal, malfunction of improper positioning of a fuel assembly, and ejection of a control rod. The failure probability of reactivity control is 1.49E-03 per year. The failure causes affected by human factors are boron dilution, misalignment of control rod withdrawal and improper positioning of a fuel assembly. Based on the assessment, it is concluded that the failure probability of reactivity control for the PWR is still within the IAEA criteria.

  19. Developing acceptance limits for measured bearing wear of the Space Shuttle Main Engine high pressure oxidizer turbopump

    NASA Technical Reports Server (NTRS)

    Genge, Gary G.

    1991-01-01

    The probabilistic design approach currently receiving attention for structural failure modes has been adapted for obtaining measured bearing wear limits in the Space Shuttle Main Engine high-pressure oxidizer turbopump. With the development of the shaft microtravel measurements to determine bearing health, an acceptance limit was needed that protects against all known failure modes yet is not overly conservative. This acceptance limit has been successfully determined using probabilistic descriptions of preflight hardware geometry, empirical bearing wear data, mission requirements, and measurement tool precision as an input for a Monte Carlo simulation. The result of the simulation is a frequency distribution of failures as a function of preflight acceptance limits. When the distribution is converted into a reliability curve, a conscious risk management decision is made concerning the acceptance limit.

  20. Probabilistic failure analysis of bone using a finite element model of mineral-collagen composites.

    PubMed

    Dong, X Neil; Guda, Teja; Millwater, Harry R; Wang, Xiaodu

    2009-02-09

    Microdamage accumulation is a major pathway for energy dissipation during the post-yield deformation of bone. In this study, a two-dimensional probabilistic finite element model of a mineral-collagen composite was developed to investigate the influence of the tissue and ultrastructural properties of bone on the evolution of microdamage from an initial defect in tension. The probabilistic failure analyses indicated that the microdamage progression would be along the plane of the initial defect when the debonding at mineral-collagen interfaces was either absent or limited in the vicinity of the defect. In this case, the formation of a linear microcrack would be facilitated. However, the microdamage progression would be scattered away from the initial defect plane if interfacial debonding takes place at a large scale. This would suggest the possible formation of diffuse damage. In addition to interfacial debonding, the sensitivity analyses indicated that the microdamage progression was also dependent on the other material and ultrastructural properties of bone. The intensity of stress concentration accompanied with microdamage progression was more sensitive to the elastic modulus of the mineral phase and the nonlinearity of the collagen phase, whereas the scattering of failure location was largely dependent on the mineral to collagen ratio and the nonlinearity of the collagen phase. The findings of this study may help understanding the post-yield behavior of bone at the ultrastructural level and shed light on the underlying mechanism of bone fractures.

  1. Probabilistic Failure Analysis of Bone Using a Finite Element Model of Mineral-Collagen Composites

    PubMed Central

    Dong, X. Neil; Guda, Teja; Millwater, Harry R.; Wang, Xiaodu

    2009-01-01

    Microdamage accumulation is a major pathway for energy dissipation during the post-yield deformation of bone. In this study, a two-dimensional probabilistic finite element model of a mineral-collagen composite was developed to investigate the influence of the tissue and ultrastructural properties of bone on the evolution of microdamage from an initial defect in tension. The probabilistic failure analyses indicated that the microdamage progression would be along the plane of the initial defect when the debonding at mineral-collagen interfaces was either absent or limited in the vicinity of the defect. In this case, the formation of a linear microcrack would be facilitated. However, the microdamage progression would be scattered away from the initial defect plane if interfacial debonding takes place at a large scale. This would suggest the possible formation of diffuse damage. In addition to interfacial debonding, the sensitivity analyses indicated that the microdamage progression was also dependent on the other material and ultrastructural properties of bone. The intensity of stress concentration accompanied with microdamage progression was more sensitive to the elastic modulus of the mineral phase and the nonlinearity of the collagen phase, whereas the scattering of failure location was largely dependent on the mineral to collagen ratio and the nonlinearity of the collagen phase. The findings of this study may help understanding the post-yield behavior of bone at the ultrastructural level and shed light on the underlying mechanism of bone fractures. PMID:19058806

  2. Probabilistic characterization of wind turbine blades via aeroelasticity and spinning finite element formulation

    NASA Astrophysics Data System (ADS)

    Velazquez, Antonio; Swartz, R. Andrew

    2012-04-01

    Wind energy is an increasingly important component of this nation's renewable energy portfolio; however, safe and economical wind turbine operation is critical to ensuring continued adoption. Safe operation of wind turbine structures requires not only information regarding their condition, but also their operational environment. Given the difficulty inherent in SHM processes for wind turbines (damage detection, location, and characterization), some uncertainty in condition assessment is expected. Furthermore, given the stochastic nature of the loading on turbine structures, a probabilistic framework is appropriate to characterize their risk of failure at a given time. Such information will be invaluable to turbine controllers, allowing them to operate the structures within acceptable risk profiles. This study explores the characterization of the turbine loading and response envelopes for critical failure modes of the turbine blade structures. A framework is presented to develop an analytical estimation of the loading environment (including loading effects) based on the dynamic behavior of the blades. This is influenced by behaviors including along-wind and across-wind aero-elastic effects, wind shear gradient, tower shadow effects, and centrifugal stiffening effects. The proposed solution includes methods that are based on modal decomposition of the blades and require frequent updates to the estimated modal properties to account for the time-varying nature of the turbine and its environment. The estimated demand statistics are compared to a code-based resistance curve to determine a probabilistic estimate of the risk of blade failure given the loading environment.

  3. Compendium of Mechanical Limit-States

    NASA Technical Reports Server (NTRS)

    Kowal, Michael

    1996-01-01

    A compendium was compiled and is described to provide a diverse set of limit-state relationships for use in demonstrating the application of probabilistic reliability methods to mechanical systems. The different limit-state relationships can be used to analyze the reliability of a candidate mechanical system. In determining the limit-states to be included in the compendium, a comprehensive listing of the possible failure modes that could affect mechanical systems reliability was generated. Previous literature defining mechanical modes of failure was studied, and cited failure modes were included. From this, classifications for failure modes were derived and are described in some detail.

  4. Respiratory Failure

    MedlinePlus

    ... of oxygen in the blood, it's called hypoxemic (HI-pok-SE-mik) respiratory failure. When respiratory failure ... carbon dioxide in the blood, it's called hypercapnic (HI-per-KAP-nik) respiratory failure. Causes Diseases and ...

  5. Probabilistic modelling of overflow, surcharge and flooding in urban drainage using the first-order reliability method and parameterization of local rain series.

    PubMed

    Thorndahl, S; Willems, P

    2008-01-01

    Failure of urban drainage systems may occur due to surcharge or flooding at specific manholes in the system, or due to overflows from combined sewer systems to receiving waters. To quantify the probability or return period of failure, standard approaches make use of the simulation of design storms or long historical rainfall series in a hydrodynamic model of the urban drainage system. In this paper, an alternative probabilistic method is investigated: the first-order reliability method (FORM). To apply this method, a long rainfall time series was divided into rainstorms (rain events), and each rainstorm was conceptualized as a synthetic rainfall hyetograph with a Gaussian shape parameterized by rainstorm depth, duration and peak intensity. Probability distributions were calibrated for these three parameters and used as the basis for the failure probability estimation, together with a hydrodynamic simulation model that determines the failure conditions for each set of parameters. The method takes into account the uncertainties involved in the rainstorm parameterization. Comparison is made between the failure probability results of the FORM method, the standard method using long-term simulations and alternative methods based on random sampling (Monte Carlo direct sampling and importance sampling). It is concluded that, without crucial influence on the modelling accuracy, FORM is very applicable as an alternative to traditional long-term simulations of urban drainage systems.
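
    A minimal Python sketch of the Gaussian-shaped synthetic hyetograph parameterization described above; the normalization (the spread is chosen so the bell integrates to the rainstorm depth) and the example numbers are assumptions, not the paper's calibrated values.

        import numpy as np

        def gaussian_hyetograph(depth_mm, duration_min, peak_mm_per_min, n=200):
            """Synthetic rainstorm: Gaussian intensity profile centred in the event."""
            t = np.linspace(0.0, duration_min, n)
            t_c = duration_min / 2.0
            # choose the spread so the integral of the bell equals the rainstorm depth
            sigma = depth_mm / (peak_mm_per_min * np.sqrt(2.0 * np.pi))
            intensity = peak_mm_per_min * np.exp(-0.5 * ((t - t_c) / sigma) ** 2)
            return t, intensity

        t, i = gaussian_hyetograph(depth_mm=18.0, duration_min=60.0, peak_mm_per_min=1.2)
        print(f"approx. depth recovered: {np.trapz(i, t):.1f} mm")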

  6. Long-term strength and damage accumulation in laminates

    NASA Astrophysics Data System (ADS)

    Dzenis, Yuris A.; Joshi, Shiv P.

    1993-04-01

    A modified version of the probabilistic model developed by the authors for damage evolution analysis of laminates subjected to random loading is utilized to predict the long-term strength of laminates. The model assumes that each ply in a laminate consists of a large number of mesovolumes. Probabilistic variation functions for mesovolume stiffnesses as well as strengths are used in the analysis. Stochastic strains are calculated using the lamination theory and random function theory. Deterioration of ply stiffnesses is calculated on the basis of the probabilities of mesovolume failures using the theory of excursions of a random process beyond limits. Long-term strength and damage accumulation in a Kevlar/epoxy laminate under tension and complex in-plane loading are investigated. Effects of the mean level and stochastic deviation of loading on damage evolution and time-to-failure of the laminate are discussed. Long-term cumulative damage at the time of final failure is greater at low loading levels than at high loading levels. The effect of the deviation in loading is more pronounced at lower mean loading levels.

  7. A Markov chain model for reliability growth and decay

    NASA Technical Reports Server (NTRS)

    Siegrist, K.

    1982-01-01

    A mathematical model is developed to describe a complex system undergoing a sequence of trials in which there is interaction between the internal states of the system and the outcomes of the trials. For example, the model might describe a system undergoing testing that is redesigned after each failure. The basic assumptions for the model are that the state of the system after a trial depends probabilistically only on the state before the trial and on the outcome of the trial and that the outcome of a trial depends probabilistically only on the state of the system before the trial. It is shown that under these basic assumptions, the successive states form a Markov chain and the successive states and outcomes jointly form a Markov chain. General results are obtained for the transition probabilities, steady-state distributions, etc. A special case studied in detail describes a system that has two possible states ('repaired' and 'unrepaired') undergoing trials that have three possible outcomes ('inherent failure', 'assignable-cause failure', and 'success'). For this model, the reliability function is computed explicitly and an optimal repair policy is obtained.
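
    A small Python sketch of the two-state special case ('unrepaired'/'repaired'); the outcome probabilities below are invented, and the rule that only an assignable-cause failure triggers the move to 'repaired' is a simplifying assumption rather than the paper's full model.

        import numpy as np

        # Outcome probabilities per state: (inherent failure, assignable-cause failure, success).
        P_out = {"unrepaired": np.array([0.05, 0.25, 0.70]),
                 "repaired":   np.array([0.05, 0.02, 0.93])}

        # State transition induced by a trial: an assignable-cause failure in the
        # "unrepaired" state leads to a redesign ("repaired"); otherwise the state is kept.
        P_state = np.array([[1.0 - P_out["unrepaired"][1], P_out["unrepaired"][1]],
                            [0.0, 1.0]])

        pi = np.array([1.0, 0.0])              # start in "unrepaired"
        success = np.array([P_out["unrepaired"][2], P_out["repaired"][2]])
        for k in range(1, 11):
            print(f"trial {k:2d}: P(success) = {pi @ success:.3f}")
            pi = pi @ P_state                  # reliability grows as mass moves to "repaired"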

  8. Reliability and Failure in NASA Missions: Blunders, Normal Accidents, High Reliability, Bad Luck

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2015-01-01

    NASA emphasizes crew safety and system reliability but several unfortunate failures have occurred. The Apollo 1 fire was mistakenly unanticipated. After that tragedy, the Apollo program gave much more attention to safety. The Challenger accident revealed that NASA had neglected safety and that management underestimated the high risk of shuttle. Probabilistic Risk Assessment was adopted to provide more accurate failure probabilities for shuttle and other missions. NASA's "faster, better, cheaper" initiative and government procurement reform led to deliberately dismantling traditional reliability engineering. The Columbia tragedy and Mars mission failures followed. Failures can be attributed to blunders, normal accidents, or bad luck. Achieving high reliability is difficult but possible.

  9. General Purpose Probabilistic Programming Platform with Effective Stochastic Inference

    DTIC Science & Technology

    2018-04-01

    The report covers the Venture, BayesDB, Picture, and MetaProb systems, followed by sections on methods, assumptions, and procedures, and on results and discussion. The methods section outlines the research approach. The results and discussion section gives representative quantitative and qualitative results ... modeling via CrossCat, a probabilistic method that emulates many of the judgment calls ordinarily made by a human data analyst. This AI assistance ...

  10. Functionally Graded Designer Viscoelastic Materials Tailored to Perform Prescribed Tasks with Probabilistic Failures and Lifetimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hilton, Harry H.

    Protocols are developed for formulating optimal viscoelastic designer functionally graded materials tailored to best respond to prescribed loading and boundary conditions. In essence, an inverse approach is adopted where material properties instead of structures per se are designed and then distributed throughout structural elements. The final measure of viscoelastic material efficacy is expressed in terms of failure probabilities vs. survival times.

  11. Non-unitary probabilistic quantum computing circuit and method

    NASA Technical Reports Server (NTRS)

    Williams, Colin P. (Inventor); Gingrich, Robert M. (Inventor)

    2009-01-01

    A quantum circuit performing quantum computation in a quantum computer. A chosen transformation of an initial n-qubit state is probabilistically obtained. The circuit comprises a unitary quantum operator obtained from a non-unitary quantum operator, operating on an n-qubit state and an ancilla state. When operation on the ancilla state provides a success condition, computation is stopped. When operation on the ancilla state provides a failure condition, computation is performed again on the ancilla state and the n-qubit state obtained in the previous computation, until a success condition is obtained.

  12. Virulo

    EPA Science Inventory

    Virulo is a probabilistic model for predicting virus attenuation. Monte Carlo methods are used to generate ensemble simulations of virus attenuation due to physical, biological, and chemical factors. The model generates a probability of failure to achieve a chosen degree o...

  13. CARES/Life Software for Designing More Reliable Ceramic Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Powers, Lynn M.; Baker, Eric H.

    1997-01-01

    Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion, and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software eases this task by providing a tool to optimize the design and manufacture of brittle material components using probabilistic reliability analysis techniques. Probabilistic component design involves predicting the probability of failure for a thermomechanically loaded component from specimen rupture data. Typically, these experiments are performed using many simple geometry flexural or tensile test specimens. A static, dynamic, or cyclic load is applied to each specimen until fracture. Statistical strength and SCG (fatigue) parameters are then determined from these data. Using these parameters and the results obtained from a finite element analysis, the time-dependent reliability for a complex component geometry and loading is then predicted. Appropriate design changes are made until an acceptable probability of failure has been reached.

  14. Fracture mechanics analysis of cracked structures using weight function and neural network method

    NASA Astrophysics Data System (ADS)

    Chen, J. G.; Zang, F. G.; Yang, Y.; Shi, K. K.; Fu, X. L.

    2018-06-01

    Stress intensity factors (SIFs) due to thermal-mechanical loads have been established using the weight function method. Two reference stress states were used to determine the coefficients in the weight function. Results were evaluated against data from the literature and show good agreement. The SIFs can therefore be determined quickly with the obtained weight function when cracks are subjected to arbitrary loads, and the presented method can be used for probabilistic fracture mechanics analysis. A probabilistic methodology combining Monte Carlo simulation with a neural network (MCNN) has been developed. The results indicate that an accurate probabilistic characterization of K_I can be obtained by using the developed method. The probability of failure increases with increasing load, and the relationship between them is nonlinear.
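
    A generic Monte Carlo sketch of the resulting failure probability P(K_I > K_Ic); the closed-form centre-crack SIF below is used as a stand-in for the paper's weight-function solution, and all distribution parameters are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 200_000
        sigma = rng.normal(120.0, 25.0, n)            # remote stress, MPa
        a = rng.lognormal(np.log(0.02), 0.3, n)       # half crack length, m
        K_Ic = rng.normal(60.0, 6.0, n)               # fracture toughness, MPa*sqrt(m)

        K_I = sigma * np.sqrt(np.pi * a)              # centre crack in an infinite plate
        print(f"P(K_I > K_Ic) ~ {np.mean(K_I > K_Ic):.1e}")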

  15. UQTools: The Uncertainty Quantification Toolbox - Introduction and Tutorial

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Crespo, Luis G.; Giesy, Daniel P.

    2012-01-01

    UQTools is the short name for the Uncertainty Quantification Toolbox, a software package designed to efficiently quantify the impact of parametric uncertainty on engineering systems. UQTools is a MATLAB-based software package and was designed to be discipline independent, employing very generic representations of the system models and uncertainty. Specifically, UQTools accepts linear and nonlinear system models and permits arbitrary functional dependencies between the system's measures of interest and the probabilistic or non-probabilistic parametric uncertainty. One of the most significant features incorporated into UQTools is the theoretical development centered on homothetic deformations and their application to set bounding and approximating failure probabilities. Beyond the set bounding technique, UQTools provides a wide range of probabilistic and uncertainty-based tools to solve key problems in science and engineering.

  16. Probabilistic evaluation of on-line checks in fault-tolerant multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Hoskote, Yatin V.; Abraham, Jacob A.

    1992-01-01

    The analysis of fault-tolerant multiprocessor systems that use concurrent error detection (CED) schemes is much more difficult than the analysis of conventional fault-tolerant architectures. Various analytical techniques have been proposed to evaluate CED schemes deterministically. However, these approaches are based on worst-case assumptions related to the failure of system components. Often, the evaluation results do not reflect the actual fault tolerance capabilities of the system. A probabilistic approach to evaluate the fault detecting and locating capabilities of on-line checks in a system is developed. The various probabilities associated with the checking schemes are identified and used in the framework of the matrix-based model. Based on these probabilistic matrices, estimates for the fault tolerance capabilities of various systems are derived analytically.

  17. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321

  18. Probabilistic numerics and uncertainty in computations.

    PubMed

    Hennig, Philipp; Osborne, Michael A; Girolami, Mark

    2015-07-08

    We deliver a call to arms for probabilistic numerical methods : algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.

  19. Coupled Multi-Disciplinary Optimization for Structural Reliability and Affordability

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2003-01-01

    A computational simulation method is presented for Non-Deterministic Multidisciplinary Optimization of engine composite materials and structures. A hypothetical engine duct made with ceramic matrix composites (CMC) is evaluated probabilistically in the presence of combined thermo-mechanical loading. The structure is tailored by quantifying the uncertainties in all relevant design variables such as fabrication, material, and loading parameters. The probabilistic sensitivities are used to select critical design variables for optimization. In this paper, two approaches for non-deterministic optimization are presented. The non-deterministic minimization of combined failure stress criterion is carried out by: (1) performing probabilistic evaluation first and then optimization and (2) performing optimization first and then probabilistic evaluation. The first approach shows that the optimization feasible region can be bounded by a set of prescribed probability limits and that the optimization follows the cumulative distribution function between those limits. The second approach shows that the optimization feasible region is bounded by 0.50 and 0.999 probabilities.

  20. Reliability and Probabilistic Risk Assessment - How They Play Together

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal; Stutts, Richard; Huang, Zhaofeng

    2015-01-01

    Since the Space Shuttle Challenger accident in 1986, NASA has extensively used probabilistic analysis methods to assess, understand, and communicate the risk of space launch vehicles. Probabilistic Risk Assessment (PRA), used in the nuclear industry, is one of the probabilistic analysis methods NASA utilizes to assess Loss of Mission (LOM) and Loss of Crew (LOC) risk for launch vehicles. PRA is a system scenario based risk assessment that uses a combination of fault trees, event trees, event sequence diagrams, and probability distributions to analyze the risk of a system, a process, or an activity. It is a process designed to answer three basic questions: 1) what can go wrong that would lead to loss or degraded performance (i.e., scenarios involving undesired consequences of interest), 2) how likely is it (probabilities), and 3) what is the severity of the degradation (consequences). Since the Challenger accident, PRA has been used in supporting decisions regarding safety upgrades for launch vehicles. Another area that was given a lot of emphasis at NASA after the Challenger accident is reliability engineering. Reliability engineering has been a critical design function at NASA since the early Apollo days. However, after the Challenger accident, quantitative reliability analysis and reliability predictions were given more scrutiny because of their importance in understanding failure mechanism and quantifying the probability of failure, which are key elements in resolving technical issues, performing design trades, and implementing design improvements. Although PRA and reliability are both probabilistic in nature and, in some cases, use the same tools, they are two different activities. Specifically, reliability engineering is a broad design discipline that deals with loss of function and helps understand failure mechanism and improve component and system design. PRA is a system scenario based risk assessment process intended to assess the risk scenarios that could lead to a major/top undesirable system event, and to identify those scenarios that are high-risk drivers. PRA output is critical to support risk informed decisions concerning system design. This paper describes the PRA process and the reliability engineering discipline in detail. It discusses their differences and similarities and how they work together as complementary analyses to support the design and risk assessment processes. Lessons learned, applications, and case studies in both areas are also discussed in the paper to demonstrate and explain these differences and similarities.

  1. Probabilistic Fatigue Life Updating for Railway Bridges Based on Local Inspection and Repair.

    PubMed

    Lee, Young-Joo; Kim, Robin E; Suh, Wonho; Park, Kiwon

    2017-04-24

    Railway bridges are exposed to repeated train loads, which may cause fatigue failure. As critical links in a transportation network, railway bridges are expected to survive for a target period of time, but sometimes they fail earlier than expected. To guarantee the target bridge life, bridge maintenance activities such as local inspection and repair should be undertaken properly. However, this is a challenging task because there are various sources of uncertainty associated with aging bridges, train loads, environmental conditions, and maintenance work. Therefore, to perform optimal risk-based maintenance of railway bridges, it is essential to estimate the probabilistic fatigue life of a railway bridge and update the life information based on the results of local inspections and repair. Recently, a system reliability approach was proposed to evaluate the fatigue failure risk of structural systems and update the prior risk information in various inspection scenarios. However, this approach can handle only a constant-amplitude load and has limitations in considering a cyclic load with varying amplitude levels, which is the major loading pattern generated by train traffic. In addition, it is not feasible to update the prior risk information after bridges are repaired. In this research, the system reliability approach is further developed so that it can handle a varying-amplitude load and update the system-level risk of fatigue failure for railway bridges after inspection and repair. The proposed method is applied to a numerical example of an in-service railway bridge, and the effects of inspection and repair on the probabilistic fatigue life are discussed.

  2. Probabilistic Fatigue Life Updating for Railway Bridges Based on Local Inspection and Repair

    PubMed Central

    Lee, Young-Joo; Kim, Robin E.; Suh, Wonho; Park, Kiwon

    2017-01-01

    Railway bridges are exposed to repeated train loads, which may cause fatigue failure. As critical links in a transportation network, railway bridges are expected to survive for a target period of time, but sometimes they fail earlier than expected. To guarantee the target bridge life, bridge maintenance activities such as local inspection and repair should be undertaken properly. However, this is a challenging task because there are various sources of uncertainty associated with aging bridges, train loads, environmental conditions, and maintenance work. Therefore, to perform optimal risk-based maintenance of railway bridges, it is essential to estimate the probabilistic fatigue life of a railway bridge and update the life information based on the results of local inspections and repair. Recently, a system reliability approach was proposed to evaluate the fatigue failure risk of structural systems and update the prior risk information in various inspection scenarios. However, this approach can handle only a constant-amplitude load and has limitations in considering a cyclic load with varying amplitude levels, which is the major loading pattern generated by train traffic. In addition, it is not feasible to update the prior risk information after bridges are repaired. In this research, the system reliability approach is further developed so that it can handle a varying-amplitude load and update the system-level risk of fatigue failure for railway bridges after inspection and repair. The proposed method is applied to a numerical example of an in-service railway bridge, and the effects of inspection and repair on the probabilistic fatigue life are discussed. PMID:28441768

  3. Design of robust reliable control for T-S fuzzy Markovian jumping delayed neutral type neural networks with probabilistic actuator faults and leakage delays: An event-triggered communication scheme.

    PubMed

    Syed Ali, M; Vadivel, R; Saravanakumar, R

    2018-06-01

    This study examines the problem of robust reliable control for Takagi-Sugeno (T-S) fuzzy Markovian jumping delayed neural networks with probabilistic actuator faults and leakage terms under an event-triggered communication scheme. First, the randomly occurring actuator faults and their failure rates are governed by two sets of unrelated random variables; based on the probabilistic failure of each actuator, a new distribution-based event-triggered fault model is proposed that accounts for the effect of transmission delay. Second, a Takagi-Sugeno (T-S) fuzzy model is adopted for the neural networks, and the randomness of actuator failures is modeled in a Markov jump framework. Third, to guarantee that the considered closed-loop system is exponentially mean-square stable with a prescribed reliable control performance, a Markov jump event-triggered scheme is designed, which is the main purpose of this study. Fourth, by constructing an appropriate Lyapunov-Krasovskii functional and employing the Newton-Leibniz formulation and integral inequalities, several delay-dependent criteria for the solvability of the addressed problem are derived. The obtained stability criteria are stated in terms of linear matrix inequalities (LMIs), which can be checked numerically using the LMI toolbox in MATLAB. Finally, numerical examples are given to illustrate the effectiveness and reduced conservatism of the proposed results over existing ones; one example is supported by a real-life application of a benchmark problem. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Immunity-Based Aircraft Fault Detection System

    NASA Technical Reports Server (NTRS)

    Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.

    2004-01-01

    In the study reported in this paper, we have developed and applied an Artificial Immune System (AIS) algorithm for aircraft fault detection, as an extension to a previous work on intelligent flight control (IFC). Though the prior studies had established the benefits of IFC, one area of weakness that needed to be strengthened was the control dead band induced by commanding a failed surface. Since the IFC approach uses fault accommodation with no detection, the dead band, although it reduces over time due to learning, is present and causes degradation in handling qualities. If the failure can be identified, this dead band can be further reduced to ensure rapid fault accommodation and better handling qualities. The paper describes the application of an immunity-based approach that can detect a broad spectrum of known and unforeseen failures. The approach incorporates the knowledge of the normal operational behavior of the aircraft from sensory data, and probabilistically generates a set of pattern detectors that can detect any abnormalities (including faults) in the behavior pattern indicating unsafe in-flight operation. We developed a tool called MILD (Multi-level Immune Learning Detection) based on a real-valued negative selection algorithm that can generate a small number of specialized detectors (as signatures of known failure conditions) and a larger set of generalized detectors for unknown (or possible) fault conditions. Once the fault is detected and identified, an adaptive control system would use this detection information to stabilize the aircraft by utilizing available resources (control surfaces). We experimented with data sets collected under normal and various simulated failure conditions using a piloted motion-base simulation facility. The reported results are from a collection of test cases that reflect the performance of the proposed immunity-based fault detection algorithm.

  5. Achieving Accreditation Council for Graduate Medical Education duty hours compliance within advanced surgical training: a simulation-based feasibility assessment.

    PubMed

    Obi, Andrea; Chung, Jennifer; Chen, Ryan; Lin, Wandi; Sun, Siyuan; Pozehl, William; Cohn, Amy M; Daskin, Mark S; Seagull, F Jacob; Reddy, Rishindra M

    2015-11-01

    Certain operative cases occur unpredictably and/or have long operative times, creating a conflict between Accreditation Council for Graduate Medical Education (ACGME) rules and adequate training experience. A ProModel-based simulation was developed based on historical data. Probabilistic distributions of operative time were calculated and combined with an ACGME-compliant call schedule. For the advanced surgical cases modeled (cardiothoracic transplants), 80-hour violations occurred at a rate of 6.07%, and the minimum number of days off was violated at a rate of 22.50%. There was a 36% chance of failure to fulfill either minimum case requirement (heart or lung) despite adequate volume. The variable nature of emergency cases inevitably leads to work hour violations under ACGME regulations. Unpredictable cases mandate higher operative volume to ensure achievement of adequate caseloads. Publicly available simulation technology provides a valuable avenue to identify adequacy of case volumes for trainees in both the elective and emergency setting. Copyright © 2015 Elsevier Inc. All rights reserved.
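
    A minimal Python sketch of the kind of Monte Carlo duty-hour calculation described above; the Poisson case arrivals, lognormal operative times and fixed base schedule are illustrative assumptions, not the study's ProModel inputs.

      import numpy as np

      rng = np.random.default_rng(1)

      def violation_rate(n_weeks=10_000, case_rate=1.5, base_hours=60.0):
          """Fraction of simulated weeks that exceed the 80-hour limit when
          emergency cases with lognormal operative times arrive on top of a
          fixed base schedule (all parameters are illustrative)."""
          violations = 0
          for _ in range(n_weeks):
              n_cases = rng.poisson(case_rate)                  # emergency cases per week
              op_hours = rng.lognormal(np.log(6.0), 0.5, n_cases).sum()
              if base_hours + op_hours > 80.0:
                  violations += 1
          return violations / n_weeks

      print(f"Estimated 80-hour violation rate: {violation_rate():.3%}")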

  6. Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls

    NASA Astrophysics Data System (ADS)

    Guha Ray, A.; Baidya, D. K.

    2012-09-01

    Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall focuses on the fact that high sensitivity of a particular variable to a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (Rf) for each random variable based on the combined effects of the failure probability (Pf) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables to these failure modes. Pf is calculated by Monte Carlo simulation, and the sensitivity analysis of each random variable is carried out by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that Rf for the friction angle of the backfill soil (φ1) increases and that for the cohesion of the foundation soil (c2) decreases with an increase of the variation of φ1, while Rf for the unit weights of both soils (γ1 and γ2) and for the friction angle of the foundation soil (φ2) remains almost constant under variation of the soil properties. The results compared well with those of some existing deterministic and probabilistic methods, and the design was found to be cost-effective. It is seen that if the variation of φ1 remains within 5%, a significant reduction in cross-sectional area can be achieved, but if the variation is more than 7-8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
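
    A minimal Python sketch of estimating Pf for one failure mode (base sliding) by Monte Carlo simulation, in the spirit of the approach above; the wall geometry, soil statistics and Rankine thrust model are illustrative assumptions, not the paper's data.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 500_000

      # Illustrative random variables: backfill friction angle phi1, foundation
      # friction angle phi2 (degrees) and soil unit weight gamma (kN/m^3).
      phi1 = rng.normal(32.0, 32.0 * 0.07, n)
      phi2 = rng.normal(28.0, 28.0 * 0.05, n)
      gamma = rng.normal(18.0, 18.0 * 0.03, n)

      H = 5.0                                             # wall height (m)
      ka = np.tan(np.radians(45.0 - phi1 / 2.0)) ** 2     # Rankine active coefficient
      thrust = 0.5 * ka * gamma * H ** 2                  # horizontal soil thrust (kN/m)
      weight = 24.0 * 0.5 * (0.5 + 2.0) * H               # trapezoidal concrete section (kN/m)
      resistance = weight * np.tan(np.radians(phi2))      # base sliding resistance (kN/m)

      pf_sliding = np.mean(resistance < thrust)           # failure when load exceeds capacity
      print(f"P(base sliding failure) ~ {pf_sliding:.4f}")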

  7. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on the probabilistic distribution models that characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structure is cost-effective, it becomes highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability analysis along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, reliability constraints are considered in addition to structural performance constraints. This part of the research starts with an introduction to reliability analysis, such as first-order and second-order reliability analysis, followed by simulation techniques that are performed to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation and sensitivity analysis, which is performed to remove the highly reliable constraints from the RBDO, thereby reducing the computational time and the number of function evaluations. Finally, the implementation of the reliability analysis concepts and RBDO in finite element 2D truss problems and a planar beam problem is presented and discussed.

  8. Probabilistic assessment methodology for continuous-type petroleum accumulations

    USGS Publications Warehouse

    Crovelli, R.A.

    2003-01-01

    The analytic resource assessment method, called ACCESS (Analytic Cell-based Continuous Energy Spreadsheet System), was developed to calculate estimates of petroleum resources for the geologic assessment model, called FORSPAN, in continuous-type petroleum accumulations. The ACCESS method is based upon mathematical equations derived from probability theory in the form of a computer spreadsheet system. © 2003 Elsevier B.V. All rights reserved.

  9. Probabilistic structural analysis of space propulsion system LOX post

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Rajagopal, K. R.; Ho, H. W.; Cunniff, J. M.

    1990-01-01

    The probabilistic structural analysis program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress; Cruse et al., 1988) is applied to characterize the dynamic loading and response of the Space Shuttle main engine (SSME) LOX post. The design and operation of the SSME are reviewed; the LOX post structure is described; and particular attention is given to the generation of composite load spectra, the finite-element model of the LOX post, and the steps in the NESSUS structural analysis. The results are presented in extensive tables and graphs, and it is shown that NESSUS correctly predicts the structural effects of changes in the temperature loading. The probabilistic approach also facilitates (1) damage assessments for a given failure model (based on gas temperature, heat-shield gap, and material properties) and (2) correlation of the gas temperature with operational parameters such as engine thrust.

  10. Decision making generalized by a cumulative probability weighting function

    NASA Astrophysics Data System (ADS)

    dos Santos, Lindomar Soares; Destefano, Natália; Martinez, Alexandre Souto

    2018-01-01

    Typical examples of intertemporal decision making involve situations in which individuals must choose between a smaller reward, but more immediate, and a larger one, delivered later. Analogously, probabilistic decision making involves choices between options whose consequences differ in relation to their probability of receiving. In Economics, the expected utility theory (EUT) and the discounted utility theory (DUT) are traditionally accepted normative models for describing, respectively, probabilistic and intertemporal decision making. A large number of experiments confirmed that the linearity assumed by the EUT does not explain some observed behaviors, as nonlinear preference, risk-seeking and loss aversion. That observation led to the development of new theoretical models, called non-expected utility theories (NEUT), which include a nonlinear transformation of the probability scale. An essential feature of the so-called preference function of these theories is that the probabilities are transformed by decision weights by means of a (cumulative) probability weighting function, w(p) . We obtain in this article a generalized function for the probabilistic discount process. This function has as particular cases mathematical forms already consecrated in the literature, including discount models that consider effects of psychophysical perception. We also propose a new generalized function for the functional form of w. The limiting cases of this function encompass some parametric forms already proposed in the literature. Far beyond a mere generalization, our function allows the interpretation of probabilistic decision making theories based on the assumption that individuals behave similarly in the face of probabilities and delays and is supported by phenomenological models.
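
    For context, a short Python sketch of two classical parametric forms of the probability weighting function w(p) that the generalized function proposed above is meant to recover as special cases; the parameter values are illustrative.

      import numpy as np

      def w_tk(p, gamma=0.61):
          """Tversky-Kahneman (1992) probability weighting function."""
          p = np.asarray(p, dtype=float)
          return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

      def w_prelec(p, alpha=0.65):
          """Prelec (1998) weighting function, another common special case."""
          p = np.asarray(p, dtype=float)
          return np.exp(-(-np.log(p)) ** alpha)

      probs = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
      print("p        :", probs)
      print("TK w(p)  :", np.round(w_tk(probs), 3))
      print("Prelec w :", np.round(w_prelec(probs), 3))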

  11. Probabilistic finite elements for fatigue and fracture analysis

    NASA Astrophysics Data System (ADS)

    Belytschko, Ted; Liu, Wing Kam

    Attention is focused on the development of the Probabilistic Finite Element Method (PFEM), which combines the finite element method with statistics and reliability methods, and its application to linear and nonlinear structural mechanics problems and fracture mechanics problems. A computational tool based on the Stochastic Boundary Element Method is also given for the reliability analysis of curvilinear fatigue crack growth. The existing PFEMs have been applied to solve two types of problems: (1) determination of the response uncertainty in terms of the means, variances, and correlation coefficients; and (2) determination of the probability of failure associated with prescribed limit states.

  12. Probabilistic finite elements for fatigue and fracture analysis

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Liu, Wing Kam

    1992-01-01

    Attention is focused on the development of the Probabilistic Finite Element Method (PFEM), which combines the finite element method with statistics and reliability methods, and its application to linear and nonlinear structural mechanics problems and fracture mechanics problems. A computational tool based on the Stochastic Boundary Element Method is also given for the reliability analysis of curvilinear fatigue crack growth. The existing PFEMs have been applied to solve two types of problems: (1) determination of the response uncertainty in terms of the means, variances, and correlation coefficients; and (2) determination of the probability of failure associated with prescribed limit states.

  13. Evaluation of a National Call Center and a Local Alerts System for Detection of New Cases of Ebola Virus Disease - Guinea, 2014-2015

    DTIC Science & Technology

    2016-03-11

    Control and Prevention Evaluation of a National Call Center and a Local Alerts System for Detection of New Cases of Ebola Virus Disease — Guinea, 2014...principally through the use of a telephone alert system. Community members and health facilities report deaths and suspected Ebola cases to local alert ...sensitivity of the national call center with the local alerts system, the CDC country team performed probabilistic record linkage of the combined

  14. The Importance of Human Reliability Analysis in Human Space Flight: Understanding the Risks

    NASA Technical Reports Server (NTRS)

    Hamlin, Teri L.

    2010-01-01

    HRA is a method used to describe, qualitatively and quantitatively, the occurrence of human failures in the operation of complex systems that affect availability and reliability. Modeling human actions with their corresponding failure in a PRA (Probabilistic Risk Assessment) provides a more complete picture of the risk and risk contributions. A high quality HRA can provide valuable information on potential areas for improvement, including training, procedural, equipment design and need for automation.

  15. A study of discrete control signal fault conditions in the shuttle DPS

    NASA Technical Reports Server (NTRS)

    Reddi, S. S.; Retter, C. T.

    1976-01-01

    An analysis of the effects of discrete failures on the data processing subsystem is presented. A functional description of each discrete, together with a list of software modules that use this discrete, is included. A qualitative description of the consequences that may ensue due to discrete failures is given, followed by a probabilistic reliability analysis of the data processing subsystem. Based on the investigation conducted, recommendations were made to improve the reliability of the subsystem.

  16. Estimation of probability of failure for damage-tolerant aerospace structures

    NASA Astrophysics Data System (ADS)

    Halbert, Keith

    The majority of aircraft structures are designed to be damage-tolerant such that safe operation can continue in the presence of minor damage. It is necessary to schedule inspections so that minor damage can be found and repaired. It is generally not possible to perform structural inspections prior to every flight. The scheduling is traditionally accomplished through a deterministic set of methods referred to as Damage Tolerance Analysis (DTA). DTA has proven to produce safe aircraft but does not provide estimates of the probability of failure of future flights or the probability of repair of future inspections. Without these estimates maintenance costs cannot be accurately predicted. Also, estimation of failure probabilities is now a regulatory requirement for some aircraft. The set of methods concerned with the probabilistic formulation of this problem are collectively referred to as Probabilistic Damage Tolerance Analysis (PDTA). The goal of PDTA is to control the failure probability while holding maintenance costs to a reasonable level. This work focuses specifically on PDTA for fatigue cracking of metallic aircraft structures. The growth of a crack (or cracks) must be modeled using all available data and engineering knowledge. The length of a crack can be assessed only indirectly through evidence such as non-destructive inspection results, failures or lack of failures, and the observed severity of usage of the structure. The current set of industry PDTA tools are lacking in several ways: they may in some cases yield poor estimates of failure probabilities, they cannot realistically represent the variety of possible failure and maintenance scenarios, and they do not allow for model updates which incorporate observed evidence. A PDTA modeling methodology must be flexible enough to estimate accurately the failure and repair probabilities under a variety of maintenance scenarios, and be capable of incorporating observed evidence as it becomes available. This dissertation describes and develops new PDTA methodologies that directly address the deficiencies of the currently used tools. The new methods are implemented as a free, publicly licensed and open source R software package that can be downloaded from the Comprehensive R Archive Network. The tools consist of two main components. First, an explicit (and expensive) Monte Carlo approach is presented which simulates the life of an aircraft structural component flight-by-flight. This straightforward MC routine can be used to provide defensible estimates of the failure probabilities for future flights and repair probabilities for future inspections under a variety of failure and maintenance scenarios. This routine is intended to provide baseline estimates against which to compare the results of other, more efficient approaches. Second, an original approach is described which models the fatigue process and future scheduled inspections as a hidden Markov model. This model is solved using a particle-based approximation and the sequential importance sampling algorithm, which provides an efficient solution to the PDTA problem. Sequential importance sampling is an extension of importance sampling to a Markov process, allowing for efficient Bayesian updating of model parameters. This model updating capability, the benefit of which is demonstrated, is lacking in other PDTA approaches. The results of this approach are shown to agree with the results of the explicit Monte Carlo routine for a number of PDTA problems. 
Extensions to the typical PDTA problem, which cannot be solved using currently available tools, are presented and solved in this work. These extensions include incorporating observed evidence (such as non-destructive inspection results), more realistic treatment of possible future repairs, and the modeling of failure involving more than one crack (the so-called continuing damage problem). The described hidden Markov model / sequential importance sampling approach to PDTA has the potential to improve aerospace structural safety and reduce maintenance costs by providing a more accurate assessment of the risk of failure and the likelihood of repairs throughout the life of an aircraft.
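
    A minimal Python sketch of the "explicit" flight-by-flight Monte Carlo baseline described above, with illustrative crack-growth, critical-size, inspection and probability-of-detection assumptions; it is not the dissertation's model or the R package's API.

      import numpy as np

      rng = np.random.default_rng(3)

      def simulate_fleet(n_aircraft=10_000, n_flights=10_000,
                         a0_mean=0.5, growth=4.0e-4, a_crit=25.0,
                         inspections=(3_000, 6_000, 9_000), pod_scale=5.0):
          """Flight-by-flight Monte Carlo: cracks (mm) grow exponentially, fail
          above a_crit, and may be found and repaired at scheduled inspections.
          All parameters are illustrative placeholders."""
          a = rng.lognormal(np.log(a0_mean), 0.5, n_aircraft)   # initial crack sizes
          failed = np.zeros(n_aircraft, dtype=bool)
          repairs = 0
          for flight in range(1, n_flights + 1):
              a = np.where(failed, a, a * np.exp(growth))        # per-flight growth
              failed |= a > a_crit
              if flight in inspections:
                  pod = 1.0 - np.exp(-a / pod_scale)             # detection grows with size
                  found = (~failed) & (rng.random(n_aircraft) < pod)
                  repairs += found.sum()
                  a = np.where(found, rng.lognormal(np.log(a0_mean), 0.5, n_aircraft), a)
          return failed.mean(), repairs / n_aircraft

      pf, repair_rate = simulate_fleet()
      print(f"P(failure by end of service) ~ {pf:.4f}, repairs per aircraft ~ {repair_rate:.2f}")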

  17. Pre-configured polyhedron based protection against multi-link failures in optical mesh networks.

    PubMed

    Huang, Shanguo; Guo, Bingli; Li, Xin; Zhang, Jie; Zhao, Yongli; Gu, Wanyi

    2014-02-10

    This paper focuses on protection against random multi-link failures in optical mesh networks, instead of the single, dual, or sequential failures considered in previous studies. Spare resource efficiency and failure robustness are major concerns in designing a link protection strategy, and a k-regular, k-edge-connected structure is proved to be one of the optimal solutions for a link protection network. Based on this, a novel pre-configured polyhedron based protection structure is proposed, which can provide protection against both simultaneous and sequential random link failures with improved spare resource efficiency. Its performance is evaluated in terms of spare resource consumption, recovery rate and average recovery path length, and is compared with ring based and subgraph protection under probabilistic link failure scenarios. Results show that the proposed link protection approach performs better than previous works.

  18. Psychics, aliens, or experience? Using the Anomalistic Belief Scale to examine the relationship between type of belief and probabilistic reasoning.

    PubMed

    Prike, Toby; Arnold, Michelle M; Williamson, Paul

    2017-08-01

    A growing body of research has shown people who hold anomalistic (e.g., paranormal) beliefs may differ from nonbelievers in their propensity to make probabilistic reasoning errors. The current study explored the relationship between these beliefs and performance through the development of a new measure of anomalistic belief, called the Anomalistic Belief Scale (ABS). One key feature of the ABS is that it includes a balance of both experiential and theoretical belief items. Another aim of the study was to use the ABS to investigate the relationship between belief and probabilistic reasoning errors on conjunction fallacy tasks. As expected, results showed there was a relationship between anomalistic belief and propensity to commit the conjunction fallacy. Importantly, regression analyses on the factors that make up the ABS showed that the relationship between anomalistic belief and probabilistic reasoning occurred only for beliefs about having experienced anomalistic phenomena, and not for theoretical anomalistic beliefs. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Probabilistic Modeling of High-Temperature Material Properties of a 5-Harness 0/90 Sylramic Fiber/ CVI-SiC/ MI-SiC Woven Composite

    NASA Technical Reports Server (NTRS)

    Nagpal, Vinod K.; Tong, Michael; Murthy, P. L. N.; Mital, Subodh

    1998-01-01

    An integrated probabilistic approach has been developed to assess composites for high temperature applications. This approach was used to determine the thermal and mechanical properties, and their probabilistic distributions, of a 5-harness 0/90 Sylramic fiber/CVI-SiC/MI-SiC woven Ceramic Matrix Composite (CMC) at high temperatures. The purpose of developing this approach was to generate quantitative probabilistic information on this CMC to help complete the evaluation of its potential application as an HSCT combustor liner. This approach quantified the influences of uncertainties inherent in the constituent properties, called primitive variables, on selected key response variables of the CMC at 2200 F. The quantitative information is presented in the form of Cumulative Density Functions (CDFs), Probability Density Functions (PDFs), and primitive variable sensitivities on response. Results indicate that the scatter in the response variables was reduced by 30-50% when the uncertainties in the most influential primitive variables were reduced by 50%.

  20. Stochastic damage evolution in textile laminates

    NASA Technical Reports Server (NTRS)

    Dzenis, Yuris A.; Bogdanovich, Alexander E.; Pastore, Christopher M.

    1993-01-01

    A probabilistic model utilizing random material characteristics to predict damage evolution in textile laminates is presented. The model is based on a division of each ply into two sublaminas consisting of cells. The probability of cell failure is calculated using stochastic function theory and a maximal strain failure criterion. Three modes of failure, i.e., fiber breakage, matrix failure in the transverse direction, and matrix or interface shear cracking, are taken into account. Computed failure probabilities are utilized in reducing cell stiffness based on the mesovolume concept. A numerical algorithm is developed to predict the damage evolution and deformation history of textile laminates. The effect of scatter of fiber orientation on cell properties is discussed. The influence of the weave on damage accumulation is illustrated with an example of a Kevlar/epoxy laminate.
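
    A minimal sketch of a per-cell failure probability under a maximal strain criterion, assuming (for illustration only) that the acting and ultimate strains are independent normal variables; the model and numbers are not taken from the cited work.

      from scipy.stats import norm

      def cell_failure_probability(strain_mean, strain_cov, ultimate_mean, ultimate_cov):
          """P(acting strain > ultimate strain) for one cell under a maximal strain
          criterion, with both strains treated as independent normal variables.
          The normality assumption and the numbers below are illustrative only."""
          margin_mean = ultimate_mean - strain_mean
          margin_var = (strain_cov * strain_mean) ** 2 + (ultimate_cov * ultimate_mean) ** 2
          return norm.cdf(-margin_mean / margin_var ** 0.5)

      # Example: fibre-direction strain 0.9 % vs ultimate strain 1.2 % (illustrative).
      pf = cell_failure_probability(0.009, 0.10, 0.012, 0.05)
      print(f"cell failure probability ~ {pf:.3e}")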

  1. Probabilistic and structural reliability analysis of laminated composite structures based on the IPACS code

    NASA Technical Reports Server (NTRS)

    Sobel, Larry; Buttitta, Claudio; Suarez, James

    1993-01-01

    Probabilistic predictions based on the Integrated Probabilistic Assessment of Composite Structures (IPACS) code are presented for the material and structural response of unnotched and notched, IM6/3501-6 Gr/Ep laminates. Comparisons of predicted and measured modulus and strength distributions are given for unnotched unidirectional, cross-ply, and quasi-isotropic laminates. The predicted modulus distributions were found to correlate well with the test results for all three unnotched laminates. Correlations of strength distributions for the unnotched laminates are judged good for the unidirectional laminate and fair for the cross-ply laminate, whereas the strength correlation for the quasi-isotropic laminate is deficient because IPACS did not yet have a progressive failure capability. The paper also presents probabilistic and structural reliability analysis predictions for the strain concentration factor (SCF) for an open-hole, quasi-isotropic laminate subjected to longitudinal tension. A special procedure was developed to adapt IPACS for the structural reliability analysis. The reliability results show the importance of identifying the most significant random variables upon which the SCF depends, and of having accurate scatter values for these variables.

  2. Augmenting Probabilistic Risk Assessment with Malevolent Initiators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis Smith; David Schwieder

    2011-11-01

    As commonly practiced, the use of probabilistic risk assessment (PRA) in nuclear power plants only considers accident initiators such as natural hazards, equipment failures, and human error. Malevolent initiators are ignored in PRA, but are considered the domain of physical security, which uses vulnerability assessment based on an officially specified threat (design basis threat). This paper explores the implications of augmenting and extending existing PRA models by considering new and modified scenarios resulting from malevolent initiators. Teaming the augmented PRA models with conventional vulnerability assessments can cost-effectively enhance security of a nuclear power plant. This methodology is useful for operating plants, as well as in the design of new plants. For the methodology, we have proposed an approach that builds on and extends the practice of PRA for nuclear power plants for security-related issues. Rather than only considering 'random' failures, we demonstrated a framework that is able to represent and model malevolent initiating events and associated plant impacts.

  3. Coherent-state discrimination via nonheralded probabilistic amplification

    NASA Astrophysics Data System (ADS)

    Rosati, Matteo; Mari, Andrea; Giovannetti, Vittorio

    2016-06-01

    A scheme for the detection of low-intensity optical coherent signals was studied which uses a probabilistic amplifier operated in the nonheralded version as the underlying nonlinear operation to improve the detection efficiency. This approach allows us to improve the statistics by keeping track of all possible outcomes of the amplification stage (including failures). When compared with an optimized Kennedy receiver, the resulting discrimination success probability we obtain presents a gain of up to ~1.85%, and it approaches the Helstrom bound appreciably faster than the Dolinar receiver when employed in an adaptive strategy. We also notice that the advantages obtained can ultimately be associated with the fact that, in the high-gain limit, the nonheralded version of the probabilistic amplifier induces a partial dephasing which preserves quantum coherence among low-energy eigenvectors while removing it elsewhere. A proposal to realize such a transformation based on an optical cavity implementation is presented.

  4. Probabilistic analysis of the influence of the bonding degree of the stem-cement interface in the performance of cemented hip prostheses.

    PubMed

    Pérez, M A; Grasa, J; García-Aznar, J M; Bea, J A; Doblaré, M

    2006-01-01

    The long-term behavior of the stem-cement interface is one of the most frequent topics of discussion in the design of cemented total hip replacements, especially with regards to the process of damage accumulation in the cement layer. This effect is analyzed here comparing two different situations of the interface: completely bonded and debonded with friction. This comparative analysis is performed using a probabilistic computational approach that considers the variability and uncertainty of determinant factors that directly compromise the damage accumulation in the cement mantle. This stochastic technique is based on the combination of probabilistic finite elements (PFEM) and a cumulative damage approach known as B-model. Three random variables were considered: muscle and joint contact forces at the hip (both for walking and stair climbing), cement damage and fatigue properties of the cement. The results predicted that the regions with higher failure probability in the bulk cement are completely different depending on the stem-cement interface characteristics. In a bonded interface, critical sites appeared at the distal and medial parts of the cement, while for debonded interfaces, the critical regions were found distally and proximally. In bonded interfaces, the failure probability was higher than in debonded ones. The same conclusion may be established for stair climbing in comparison with walking activity.

  5. Design features and results from fatigue reliability research machines.

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Kececioglu, D.; Mcconnell, J. B.

    1971-01-01

    The design, fabrication, development, operation, calibration and results from reversed bending combined with steady torque fatigue research machines are presented. Fifteen-centimeter long, notched, SAE 4340 steel specimens are subjected to various combinations of these stresses and cycled to failure. Failure occurs when the crack in the notch passes through the specimen automatically shutting down the test machine. These cycles-to-failure data are statistically analyzed to develop a probabilistic S-N diagram. These diagrams have many uses; a rotating component design example given in the literature shows that minimum size and weight for a specified number of cycles and reliability can be calculated using these diagrams.
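
    A minimal Python sketch of turning cycles-to-failure data into a probabilistic S-N summary by fitting a lognormal life distribution at each stress level; the synthetic data below stand in for the machine results and are purely illustrative.

      import numpy as np

      rng = np.random.default_rng(4)

      # Illustrative cycles-to-failure data at three stress amplitudes (MPa); a real
      # probabilistic S-N diagram would use the measured data from such machines.
      stress_levels = [550.0, 620.0, 700.0]
      data = {s: rng.lognormal(mean=20.0 - 0.012 * s, sigma=0.35, size=30)
              for s in stress_levels}

      for s in stress_levels:
          logs = np.log(data[s])
          mu, sigma = logs.mean(), logs.std(ddof=1)
          # Cycles survived with 99 % reliability (1st percentile of the fitted life).
          n_99 = np.exp(mu + sigma * (-2.3263))
          print(f"{s:5.0f} MPa: median N ~ {np.exp(mu):.3e}, "
                f"N at 99 % reliability ~ {n_99:.3e}")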

  6. Systems Reliability Framework for Surface Water Sustainability and Risk Management

    NASA Astrophysics Data System (ADS)

    Myers, J. R.; Yeghiazarian, L.

    2016-12-01

    With microbial contamination posing a serious threat to the availability of clean water across the world, it is necessary to develop a framework that evaluates the safety and sustainability of water systems with respect to non-point source fecal microbial contamination. The concept of water safety is closely related to the concept of failure in reliability theory. In water quality problems, the event of failure can be defined as the concentration of microbial contamination exceeding a certain standard for usability of water. It is pertinent in watershed management to know the likelihood of such an event of failure occurring at a particular point in space and time. Microbial fate and transport are driven by environmental processes taking place in complex, multi-component, interdependent environmental systems that are dynamic and spatially heterogeneous, which means these processes and therefore their influences upon microbial transport must be considered stochastic and variable through space and time. A physics-based stochastic model of microbial dynamics is presented that propagates uncertainty using a unique sampling method based on artificial neural networks to produce a correlation between watershed characteristics and spatial-temporal probabilistic patterns of microbial contamination. These results are used to address the question of water safety through several sustainability metrics: reliability, vulnerability, resilience, and a composite sustainability index. System reliability is described uniquely through the temporal evolution of risk along watershed points or pathways. Probabilistic resilience describes how long the system is above a certain probability of failure, and the vulnerability metric describes how the temporal evolution of risk changes throughout a hierarchy of failure levels. Additionally, our approach allows for the identification of contributions in microbial contamination and uncertainty from specific pathways and sources. We expect that this framework will significantly improve the efficiency and precision of sustainable watershed management strategies by providing a better understanding of how watershed characteristics and environmental parameters affect surface water quality and sustainability.
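
    A minimal Python sketch of computing simplified reliability, resilience and vulnerability summaries from a time series of failure probabilities; the definitions and the synthetic series are illustrative stand-ins, not the authors' formulations.

      import numpy as np

      def sustainability_metrics(p_fail, threshold=0.05):
          """Simplified metrics from daily failure probabilities (failure meaning
          contamination above the water-quality standard); illustrative only."""
          p_fail = np.asarray(p_fail, dtype=float)
          unsafe = p_fail > threshold
          reliability = 1.0 - unsafe.mean()           # fraction of time below threshold
          runs, current = [], 0                       # lengths of consecutive unsafe runs
          for flag in unsafe:
              if flag:
                  current += 1
              elif current:
                  runs.append(current)
                  current = 0
          if current:
              runs.append(current)
          resilience = 1.0 / np.mean(runs) if runs else 1.0
          vulnerability = p_fail[unsafe].mean() if unsafe.any() else 0.0
          return reliability, resilience, vulnerability

      rng = np.random.default_rng(5)
      series = np.clip(rng.beta(1.5, 20.0, size=365) + 0.08 * (rng.random(365) < 0.05), 0, 1)
      print(sustainability_metrics(series))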

  7. Reliability-Based Control Design for Uncertain Systems

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2005-01-01

    This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.

  8. Stochastic methods for analysis of power flow in electric networks

    NASA Astrophysics Data System (ADS)

    1982-09-01

    The modeling and effects of probabilistic behavior on steady-state power system operation were analyzed. A solution to the steady-state network flow equations that adheres both to Kirchhoff's laws and to probabilistic laws was obtained using either combinatorial or functional approximation techniques. The development of sound techniques for producing meaningful input data is examined. Electric demand modeling, equipment failure analysis, and algorithm development are investigated. Two major development areas are described: a decomposition of stochastic processes which gives stationarity, ergodicity, and even normality; and a powerful surrogate probability approach using proportions of time which allows the calculation of joint events from one-dimensional probability spaces.

  9. Validation of PV-RPM Code in the System Advisor Model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klise, Geoffrey Taylor; Lavrova, Olga; Freeman, Janine

    2017-04-01

    This paper describes efforts made by Sandia National Laboratories (SNL) and the National Renewable Energy Laboratory (NREL) to validate the SNL-developed PV Reliability Performance Model (PV-RPM) algorithm as implemented in the NREL System Advisor Model (SAM). The PV-RPM model is a library of functions that estimates component failure and repair in a photovoltaic system over a desired simulation period. The failure and repair distributions in this paper are probabilistic representations of component failure and repair based on data collected by SNL for a PV power plant operating in Arizona. The validation effort focuses on whether the failure and repair distributions used in the SAM implementation result in estimated failures that match the expected failures developed in the proof-of-concept implementation. Results indicate that the SAM implementation of PV-RPM provides the same results as the proof-of-concept implementation, indicating the algorithms were reproduced successfully.
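
    A minimal Python sketch of the kind of component failure-and-repair sampling such a reliability-performance model performs over a simulation period, assuming exponential time-to-failure and time-to-repair; the rates are illustrative, not the Arizona plant data.

      import numpy as np

      rng = np.random.default_rng(7)

      def simulate_component(years=25.0, mttf_years=8.0, mttr_days=14.0, n_runs=20_000):
          """Alternate exponential failure and repair times for one component and
          report the expected number of failures and mean availability."""
          failures = np.zeros(n_runs)
          downtime = np.zeros(n_runs)
          for i in range(n_runs):
              t = 0.0
              while True:
                  t += rng.exponential(mttf_years)          # time to next failure
                  if t >= years:
                      break
                  failures[i] += 1
                  repair = rng.exponential(mttr_days) / 365.0
                  downtime[i] += min(repair, years - t)
                  t += repair
          availability = 1.0 - downtime / years
          return failures.mean(), availability.mean()

      mean_failures, mean_avail = simulate_component()
      print(f"expected failures over 25 yr ~ {mean_failures:.2f}, availability ~ {mean_avail:.4f}")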

  10. Probabilistic safety analysis of earth retaining structures during earthquakes

    NASA Astrophysics Data System (ADS)

    Grivas, D. A.; Souflis, C.

    1982-07-01

    A procedure is presented for determining the probability of failure of Earth retaining structures under static or seismic conditions. Four possible modes of failure (overturning, base sliding, bearing capacity, and overall sliding) are examined and their combined effect is evaluated with the aid of combinatorial analysis. The probability of failure is shown to be a more adequate measure of safety than the customary factor of safety. As Earth retaining structures may fail in four distinct modes, a system analysis can provide a single estimate for the probability of failure. A Bayesian formulation of the safety of retaining walls is found to provide an improved measure of the predicted probability of failure under seismic loading. The presented Bayesian analysis can account for the damage incurred by a retaining wall during an earthquake to provide an improved estimate of its probability of failure during future seismic events.
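
    A minimal sketch of combining the four failure modes as a series system; the per-mode probabilities are illustrative, and the rho interpolation is only a rough way to move between the classical independent and fully correlated bounds.

      import numpy as np

      def system_failure_probability(p_modes, rho=0.0):
          """Series system: the wall fails if any mode occurs.  rho=0 assumes
          independent modes; rho=1 gives the fully correlated lower bound.
          The linear interpolation between the two bounds is a rough illustration."""
          p_modes = np.asarray(p_modes, dtype=float)
          p_independent = 1.0 - np.prod(1.0 - p_modes)   # upper unimodal bound
          p_correlated = p_modes.max()                   # lower unimodal bound
          return rho * p_correlated + (1.0 - rho) * p_independent

      # Overturning, base sliding, bearing capacity, overall sliding (illustrative).
      p = [0.002, 0.015, 0.008, 0.004]
      print(f"independent modes : {system_failure_probability(p, rho=0.0):.4f}")
      print(f"fully correlated  : {system_failure_probability(p, rho=1.0):.4f}")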

  11. Risk assessment for enterprise resource planning (ERP) system implementations: a fault tree analysis approach

    NASA Astrophysics Data System (ADS)

    Zeng, Yajun; Skibniewski, Miroslaw J.

    2013-08-01

    Enterprise resource planning (ERP) system implementations are often characterised by large capital outlay, long implementation duration, and high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is the key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches that have mostly focused on meeting project budget and schedule objectives, the proposed approach addresses the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system usage failure and quantify the impact of critical component failures or critical risk events in the implementation process.
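
    A minimal Python sketch of evaluating a top-event probability from independent basic events with AND/OR gates, the core calculation behind a fault tree analysis; the tree structure and probabilities are hypothetical, not taken from the cited paper.

      def and_gate(*probs):
          """Probability that all independent basic events occur."""
          p = 1.0
          for x in probs:
              p *= x
          return p

      def or_gate(*probs):
          """Probability that at least one independent basic event occurs."""
          q = 1.0
          for x in probs:
              q *= (1.0 - x)
          return 1.0 - q

      # Hypothetical mini fault tree for "ERP usage failure": data migration errors
      # OR (training gap AND change-management gap) OR critical module defect.
      p_top = or_gate(
          0.05,                      # data migration errors
          and_gate(0.20, 0.30),      # training gap AND change-management gap
          0.02,                      # critical module defect
      )
      print(f"P(ERP usage failure) ~ {p_top:.4f}")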

  12. Annual Research Review: Resilient Functioning in Maltreated Children--Past, Present, and Future Perspectives

    ERIC Educational Resources Information Center

    Cicchetti, Dante

    2013-01-01

    Background: Through a process of probabilistic epigenesis, child maltreatment progressively contributes to compromised adaptation on a variety of developmental domains central to successful adjustment. These developmental failures pose significant risk for the emergence of psychopathology across the life course. In addition to the psychological…

  13. Low Base-Substitution Mutation Rate in the Germline Genome of the Ciliate Tetrahymena thermophila

    DTIC Science & Technology

    2016-09-15

    generations of mutation accumulation (MA). We applied an existing mutation-calling pipeline and developed a new probabilistic mutation detection approach...noise introduced by mismapped reads. We used both our new method and an existing mutation-calling pipeline (Sung, Tucker, et al. 2012) to analyse the...and larger MA experiments will be required to confidently estimate the mutational spectrum of a species with such a low mutation rate. Materials and

  14. The SAM framework: modeling the effects of management factors on human behavior in risk analysis.

    PubMed

    Murphy, D M; Paté-Cornell, M E

    1996-08-01

    Complex engineered systems, such as nuclear reactors and chemical plants, have the potential for catastrophic failure with disastrous consequences. In recent years, human and management factors have been recognized as frequent root causes of major failures in such systems. However, classical probabilistic risk analysis (PRA) techniques do not account for the underlying causes of these errors because they focus on the physical system and do not explicitly address the link between components' performance and organizational factors. This paper describes a general approach for addressing the human and management causes of system failure, called the SAM (System-Action-Management) framework. Beginning with a quantitative risk model of the physical system, SAM expands the scope of analysis to incorporate first the decisions and actions of individuals that affect the physical system. SAM then links management factors (incentives, training, policies and procedures, selection criteria, etc.) to those decisions and actions. The focus of this paper is on four quantitative models of action that describe this last relationship. These models address the formation of intentions for action and their execution as a function of the organizational environment. Intention formation is described by three alternative models: a rational model, a bounded rationality model, and a rule-based model. The execution of intentions is then modeled separately. These four models are designed to assess the probabilities of individual actions from the perspective of management, thus reflecting the uncertainties inherent to human behavior. The SAM framework is illustrated for a hypothetical case of hazardous materials transportation. This framework can be used as a tool to increase the safety and reliability of complex technical systems by modifying the organization, rather than, or in addition to, re-designing the physical system.

  15. PubMed related articles: a probabilistic topic-based model for content similarity

    PubMed Central

    Lin, Jimmy; Wilbur, W John

    2007-01-01

    Background We present a probabilistic topic-based model for content similarity called pmra that underlies the related article search feature in PubMed. Whether or not a document is about a particular topic is computed from term frequencies, modeled as Poisson distributions. Unlike previous probabilistic retrieval models, we do not attempt to estimate relevance; rather, our focus is "relatedness", the probability that a user would want to examine a particular document given known interest in another. We also describe a novel technique for estimating parameters that does not require human relevance judgments; instead, the process is based on the existence of MeSH® in MEDLINE®. Results The pmra retrieval model was compared against bm25, a competitive probabilistic model that shares theoretical similarities. Experiments using the test collection from the TREC 2005 genomics track show a small but statistically significant improvement of pmra over bm25 in terms of precision. Conclusion Our experiments suggest that the pmra model provides an effective ranking algorithm for related article search. PMID:17971238

  16. Methods for Probabilistic Fault Diagnosis: An Electrical Power System Case Study

    NASA Technical Reports Server (NTRS)

    Ricks, Brian W.; Mengshoel, Ole J.

    2009-01-01

    Health management systems that more accurately and quickly diagnose faults that may occur in different technical systems on-board a vehicle will play a key role in the success of future NASA missions. We discuss in this paper the diagnosis of abrupt continuous (or parametric) faults within the context of probabilistic graphical models, more specifically Bayesian networks that are compiled to arithmetic circuits. This paper extends our previous research, within the same probabilistic setting, on diagnosis of abrupt discrete faults. Our approach and diagnostic algorithm ProDiagnose are domain-independent; however we use an electrical power system testbed called ADAPT as a case study. In one set of ADAPT experiments, performed as part of the 2009 Diagnostic Challenge, our system turned out to have the best performance among all competitors. In a second set of experiments, we show how we have recently further significantly improved the performance of the probabilistic model of ADAPT. While these experiments are obtained for an electrical power system testbed, we believe they can easily be transitioned to real-world systems, thus promising to increase the success of future NASA missions.

  17. A methodology for post-mainshock probabilistic assessment of building collapse risk

    USGS Publications Warehouse

    Luco, N.; Gerstenberger, M.C.; Uma, S.R.; Ryu, H.; Liel, A.B.; Raghunandan, M.

    2011-01-01

    This paper presents a methodology for post-earthquake probabilistic risk (of damage) assessment that we propose in order to develop a computational tool for automatic or semi-automatic assessment. The methodology utilizes the same so-called risk integral which can be used for pre-earthquake probabilistic assessment. The risk integral couples (i) ground motion hazard information for the location of a structure of interest with (ii) knowledge of the fragility of the structure with respect to potential ground motion intensities. In the proposed post-mainshock methodology, the ground motion hazard component of the risk integral is adapted to account for aftershocks which are deliberately excluded from typical pre-earthquake hazard assessments and which decrease in frequency with the time elapsed since the mainshock. Correspondingly, the structural fragility component is adapted to account for any damage caused by the mainshock, as well as any uncertainty in the extent of this damage. The result of the adapted risk integral is a fully-probabilistic quantification of post-mainshock seismic risk that can inform emergency response mobilization, inspection prioritization, and re-occupancy decisions.
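
    A minimal Python sketch of the risk integral described above, discretized over intensity-measure bins; the hazard curve and fragility parameters are illustrative placeholders, not values from the cited methodology.

      import numpy as np
      from scipy.stats import norm

      # Discretized risk integral: annual collapse rate = sum over intensity bins of
      # P(collapse | IM) * |d lambda(IM)|, with a power-law hazard curve and a
      # lognormal fragility (both illustrative).
      im = np.linspace(0.05, 3.0, 300)                 # intensity measure, e.g. Sa in g
      hazard = 1e-4 * (im / 0.1) ** -2.5               # mean annual rate of exceedance
      d_lambda = -np.diff(hazard)                      # rate of IM falling in each bin

      im_mid = 0.5 * (im[1:] + im[:-1])
      frag_mid = norm.cdf(np.log(im_mid / 0.9) / 0.4)  # P(collapse | IM), median 0.9 g

      annual_collapse_rate = np.sum(frag_mid * d_lambda)
      print(f"mean annual collapse rate ~ {annual_collapse_rate:.2e}")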

  18. Reliability-Based Stability Analysis of Rock Slopes Using Numerical Analysis and Response Surface Method

    NASA Astrophysics Data System (ADS)

    Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.

    2017-08-01

    While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computation cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques to perform probabilistic stability analysis by considering the associated uncertainties in the analysis parameters. However, it is not possible to directly use FORM in numerical slope stability evaluations as it requires definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on response surface method, where FORM is used to develop an explicit performance function from the results of numerical simulations. The implementation of the proposed methodology is performed by considering a large potential rock wedge in Sumela Monastery, Turkey. The accuracy of the developed performance function to truly represent the limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, while the accuracy is decreased with an error of 24%.
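
    A minimal Python sketch of FORM applied to an explicit performance function such as a fitted response surface, using the Hasofer-Lind/Rackwitz-Fiessler iteration in standard normal space; the quadratic surface below is a hypothetical stand-in, not the Sumela Monastery model.

      import numpy as np
      from scipy.stats import norm

      def form_hlrf(g, n_dim, tol=1e-6, max_iter=100, h=1e-6):
          """HL-RF iteration for a performance function g(u) of standard normal
          variables; returns the reliability index beta and P_f = Phi(-beta)."""
          u = np.zeros(n_dim)
          for _ in range(max_iter):
              gu = g(u)
              grad = np.array([(g(u + h * e) - gu) / h
                               for e in np.eye(n_dim)])       # forward-difference gradient
              u_new = ((grad @ u - gu) / (grad @ grad)) * grad
              if np.linalg.norm(u_new - u) < tol:
                  u = u_new
                  break
              u = u_new
          beta = np.linalg.norm(u)
          return beta, norm.cdf(-beta)

      # Illustrative quadratic response surface standing in for a fitted limit state:
      g = lambda u: 2.5 - 0.8 * u[0] - 0.5 * u[1] - 0.1 * u[0] * u[1]
      beta, pf = form_hlrf(g, n_dim=2)
      print(f"beta ~ {beta:.3f}, P_f ~ {pf:.3e}")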

  19. The probabilistic convolution tree: efficient exact Bayesian inference for faster LC-MS/MS protein inference.

    PubMed

    Serang, Oliver

    2014-01-01

    Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called "causal independence"). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustration example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we reduce the runtime to O(k log(k)²) and the space to O(k log(k)), where k is the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions.
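
    A minimal Python sketch of the forward pass of a convolution tree: pairwise convolution of the probability mass functions of independent count variables in a balanced tree; the evidence propagation back down the tree, which the cited method also performs, is omitted.

      import numpy as np

      def convolution_tree(pmfs):
          """Combine PMFs of independent count variables by pairwise convolution
          in a balanced tree (forward pass only; a plain sketch)."""
          layer = [np.asarray(p, dtype=float) for p in pmfs]
          while len(layer) > 1:
              nxt = []
              for i in range(0, len(layer) - 1, 2):
                  nxt.append(np.convolve(layer[i], layer[i + 1]))  # PMF of the sum
              if len(layer) % 2:
                  nxt.append(layer[-1])                            # carry the odd one up
              layer = nxt
          return layer[0]

      # Four independent indicator variables with different "present" probabilities:
      pmfs = [[0.7, 0.3], [0.5, 0.5], [0.9, 0.1], [0.4, 0.6]]
      total = convolution_tree(pmfs)
      print("P(total count = k):", np.round(total, 4))   # k = 0..4, sums to 1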

  20. The Probabilistic Convolution Tree: Efficient Exact Bayesian Inference for Faster LC-MS/MS Protein Inference

    PubMed Central

    Serang, Oliver

    2014-01-01

    Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called “causal independence”). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustration example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we reduce the runtime to O(k log(k)²) and the space to O(k log(k)), where k is the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions. PMID:24626234

  1. Probabilistic simple sticker systems

    NASA Astrophysics Data System (ADS)

    Selvarajoo, Mathuri; Heng, Fong Wan; Sarmin, Nor Haniza; Turaev, Sherzod

    2017-04-01

    A model for DNA computing using the recombination behavior of DNA molecules, known as a sticker system, was introduced by L. Kari, G. Paun, G. Rozenberg, A. Salomaa, and S. Yu in the paper entitled DNA computing, sticker systems and universality, published in Acta Informatica, vol. 35, pp. 401-420, in 1998. A sticker system uses the Watson-Crick complementary feature of DNA molecules: starting from incomplete double-stranded sequences, sticking operations are applied iteratively until a complete double-stranded sequence is obtained. It is known that sticker systems with finite sets of axioms and sticker rules generate only regular languages. Hence, different types of restrictions have been considered to increase the computational power of sticker systems. Recently, a variant of restricted sticker systems, called probabilistic sticker systems, has been introduced [4]. In this variant, probabilities are initially associated with the axioms, and the probability of a generated string is computed by multiplying the probabilities of all occurrences of the initial strings in the computation of the string. Strings for the language are selected according to some probabilistic requirements. In this paper, we study fundamental properties of probabilistic simple sticker systems. We prove that the probabilistic enhancement increases the computational power of simple sticker systems.

  2. Mechanical failure probability of glasses in Earth orbit

    NASA Technical Reports Server (NTRS)

    Kinser, Donald L.; Wiedlocher, David E.

    1992-01-01

    Results of five years of Earth-orbital exposure indicate that, for the glasses examined, radiation effects on mechanical properties are smaller than the probable error of measurement. During the 5-year exposure, seven micrometeorite or space debris impacts occurred on the samples examined. These impacts occurred at locations that were not subjected to effective mechanical testing; hence, only limited information on their influence on mechanical strength was obtained. Combining these results with the micrometeorite and space debris impact frequencies obtained by other experiments permits estimates of the failure probability of glasses exposed to mechanical loading under Earth-orbit conditions. This probabilistic failure prediction is described and illustrated with examples.

  3. Modeling Finite-Time Failure Probabilities in Risk Analysis Applications.

    PubMed

    Dimitrova, Dimitrina S; Kaishev, Vladimir K; Zhao, Shouqi

    2015-10-01

    In this article, we introduce a framework for analyzing the risk of systems failure based on estimating the failure probability. The latter is defined as the probability that a certain risk process, characterizing the operations of a system, reaches a possibly time-dependent critical risk level within a finite-time interval. Under general assumptions, we define two dually connected models for the risk process and derive explicit expressions for the failure probability and also the joint probability of the time of the occurrence of failure and the excess of the risk process over the risk level. We illustrate how these probabilistic models and results can be successfully applied in several important areas of risk analysis, among which are systems reliability, inventory management, flood control via dam management, infectious disease spread, and financial insolvency. Numerical illustrations are also presented. © 2015 Society for Risk Analysis.
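
    The failure probability defined above is the chance that a risk process reaches a possibly time-dependent critical level within a finite horizon. The authors derive explicit expressions; as a rough numerical cross-check of the same quantity, the sketch below estimates it by Monte Carlo for an assumed drift-plus-compound-Poisson risk process and a linear critical level, with all parameters chosen purely for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        def failure_probability(n_paths=5000, horizon=10.0, dt=0.02,
                                drift=1.0, jump_rate=0.8, jump_mean=1.5):
            """Monte Carlo estimate of P(risk process reaches a time-dependent
            critical level within a finite horizon), for an assumed
            drift-plus-compound-Poisson process (illustrative parameters)."""
            times = np.arange(dt, horizon + dt, dt)
            barrier = 8.0 + 0.5 * times                   # time-dependent critical level
            # simulate increments for all paths at once
            jumps = rng.random((n_paths, times.size)) < jump_rate * dt
            sizes = rng.exponential(jump_mean, size=(n_paths, times.size)) * jumps
            paths = np.cumsum(drift * dt + sizes, axis=1)  # risk process level over time
            failed = (paths >= barrier).any(axis=1)        # crossed within the horizon?
            return failed.mean()

        print(f"Estimated finite-time failure probability: {failure_probability():.3f}")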

  4. Reliability and Maintainability Data for Lead Lithium Cooling Systems

    DOE PAGES

    Cadwallader, Lee

    2016-11-16

    This article presents component failure rate data for use in assessment of lead lithium cooling systems. Best estimate data applicable to this liquid metal coolant is presented. Repair times for similar components are also referenced in this work. These data support probabilistic safety assessment and reliability, availability, maintainability and inspectability analyses.

  5. 77 FR 61446 - Proposed Revision Probabilistic Risk Assessment and Severe Accident Evaluation for New Reactors

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-09

    ... documents online in the NRC Library at http://www.nrc.gov/reading-rm/adams.html . To begin the search... of digital instrumentation and control system PRAs, including common cause failures in PRAs and uncertainty analysis associated with new reactor digital systems, and (4) incorporation of additional...

  6. 77 FR 66649 - Proposed Revision to Probabilistic Risk Assessment and Severe Accident Evaluation for New Reactors

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-06

    ... in the NRC Library at http://www.nrc.gov/reading-rm/adams.html . To begin the search, select ``ADAMS... of digital instrumentation and control system PRAs, including common cause failures in PRAs and uncertainty analysis associated with new reactor digital systems, and (4) incorporation of additional...

  7. Target Coverage in Wireless Sensor Networks with Probabilistic Sensors

    PubMed Central

    Shan, Anxing; Xu, Xianghua; Cheng, Zongmao

    2016-01-01

    Sensing coverage is a fundamental problem in wireless sensor networks (WSNs), which has attracted considerable attention. Conventional research on this topic focuses on the 0/1 coverage model, which is only a coarse approximation to the practical sensing model. In this paper, we study the target coverage problem, where the objective is to find the least number of sensor nodes in randomly-deployed WSNs based on the probabilistic sensing model. We analyze the joint detection probability of target with multiple sensors. Based on the theoretical analysis of the detection probability, we formulate the minimum ϵ-detection coverage problem. We prove that the minimum ϵ-detection coverage problem is NP-hard and present an approximation algorithm called the Probabilistic Sensor Coverage Algorithm (PSCA) with provable approximation ratios. To evaluate our design, we analyze the performance of PSCA theoretically and also perform extensive simulations to demonstrate the effectiveness of our proposed algorithm. PMID:27618902
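
    The record's analysis starts from the joint detection probability of a target seen by several probabilistic sensors, which under an independence assumption is one minus the product of the individual miss probabilities. The sketch below computes that quantity and uses a simple greedy selection to reach a required detection level; it is an illustrative heuristic, not the paper's PSCA approximation algorithm, and the sensor probabilities are invented.

        from math import prod

        def joint_detection(probs):
            """Joint detection probability of independent sensors: 1 - prod(1 - p_i)."""
            return 1.0 - prod(1.0 - p for p in probs)

        def greedy_cover(sensor_probs, epsilon):
            """Greedily add the most reliable remaining sensor until the joint
            detection probability reaches 1 - epsilon (illustrative heuristic only)."""
            chosen = []
            remaining = sorted(sensor_probs, reverse=True)
            while remaining and joint_detection(chosen) < 1.0 - epsilon:
                chosen.append(remaining.pop(0))
            return chosen

        sensors = [0.35, 0.6, 0.2, 0.5, 0.45]   # hypothetical per-sensor detection probabilities
        picked = greedy_cover(sensors, epsilon=0.1)
        print(picked, round(joint_detection(picked), 4))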

  8. Classification of Company Performance using Weighted Probabilistic Neural Network

    NASA Astrophysics Data System (ADS)

    Yasin, Hasbi; Waridi Basyiruddin Arifin, Adi; Warsito, Budi

    2018-05-01

    Classification of company performance can be judged by looking at its financial status, whether in a good or bad state. Classification of company performance can be achieved by several approaches, either parametric or non-parametric. The neural network is one of the non-parametric methods. One of the Artificial Neural Network (ANN) models is the Probabilistic Neural Network (PNN). A PNN consists of four layers: an input layer, a pattern layer, a summation layer, and an output layer. The distance function used is the Euclidean distance, and each class shares the same weight values. This study uses a PNN in which the weighting between the pattern layer and the summation layer has been modified to incorporate the Mahalanobis distance. This model is called the Weighted Probabilistic Neural Network (WPNN). The results show that modeling company performance with the WPNN achieves very high accuracy, reaching 100%.
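
    A minimal sketch of the idea described above: a Gaussian-kernel PNN classifier in which the pattern-to-summation weighting uses a per-class Mahalanobis distance instead of the Euclidean one. The data, smoothing parameter, and covariance regularization are illustrative assumptions rather than the authors' WPNN specification.

        import numpy as np

        def wpnn_classify(x, train_X, train_y, sigma=1.0):
            """Minimal PNN-style classifier: kernel densities per class, but with a
            Mahalanobis distance (per-class covariance) in place of the Euclidean one."""
            scores = {}
            for label in np.unique(train_y):
                X_c = train_X[train_y == label]
                cov = np.cov(X_c, rowvar=False) + 1e-6 * np.eye(X_c.shape[1])  # regularize
                cov_inv = np.linalg.inv(cov)
                d = X_c - x
                # squared Mahalanobis distances of x to every training pattern of this class
                m2 = np.einsum('ij,jk,ik->i', d, cov_inv, d)
                scores[label] = np.mean(np.exp(-m2 / (2.0 * sigma ** 2)))   # summation layer
            return max(scores, key=scores.get)                               # output layer

        # toy two-class data (illustrative)
        rng = np.random.default_rng(1)
        X0 = rng.normal([0, 0], [1.0, 0.3], size=(30, 2))
        X1 = rng.normal([2, 2], [0.3, 1.0], size=(30, 2))
        X = np.vstack([X0, X1]); y = np.array([0] * 30 + [1] * 30)
        print(wpnn_classify(np.array([1.8, 1.9]), X, y))   # expected: class 1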

  9. Don't Fear Optimality: Sampling for Probabilistic-Logic Sequence Models

    NASA Astrophysics Data System (ADS)

    Thon, Ingo

    One of the current challenges in artificial intelligence is modeling dynamic environments that change due to the actions or activities undertaken by people or agents. The task of inferring hidden states, e.g. the activities or intentions of people, based on observations is called filtering. Standard probabilistic models such as Dynamic Bayesian Networks are able to solve this task efficiently using approximative methods such as particle filters. However, these models do not support logical or relational representations. The key contribution of this paper is the upgrade of a particle filter algorithm for use with a probabilistic logical representation through the definition of a proposal distribution. The performance of the algorithm depends largely on how well this distribution fits the target distribution. We adopt the idea of logical compilation into Binary Decision Diagrams for sampling. This allows us to use the optimal proposal distribution which is normally prohibitively slow.

  10. Guided SAR image despeckling with probabilistic non local weights

    NASA Astrophysics Data System (ADS)

    Gokul, Jithin; Nair, Madhu S.; Rajan, Jeny

    2017-12-01

    SAR images are generally corrupted by granular disturbances called speckle, which makes visual analysis and detail extraction a difficult task. Non-local despeckling techniques with probabilistic similarity have been a recent trend in SAR despeckling. To achieve effective speckle suppression without compromising detail preservation, we propose an improvement to the existing Generalized Guided Filter with Bayesian Non-Local Means (GGF-BNLM) method. The proposed method (Guided SAR Image Despeckling with Probabilistic Non Local Weights) replaces the heuristic parametric constants of the GGF-BNLM method with values derived dynamically from the image statistics for weight computation. The proposed changes make the GGF-BNLM method adaptive and, as a result, yield a significant improvement in performance. Experimental analysis on SAR images shows excellent speckle reduction without compromising feature preservation when compared to the GGF-BNLM method. Results are also compared with other state-of-the-art and classic SAR despeckling techniques to demonstrate the effectiveness of the proposed method.

  11. 47 CFR 64.1507 - Prohibition on disconnection or interruption of service for failure to remit pay-per-call and...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... service for failure to remit pay-per-call and similar service charges. 64.1507 Section 64.1507 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) MISCELLANEOUS RULES RELATING TO COMMON CARRIERS Interstate Pay-Per-Call and Other Information Services § 64.1507...

  12. Unsteady Probabilistic Analysis of a Gas Turbine System

    NASA Technical Reports Server (NTRS)

    Brown, Marilyn

    2003-01-01

    In this work, we have considered an annular cascade configuration subjected to unsteady inflow conditions. The unsteady response calculation has been implemented into the time marching CFD code, MSUTURBO. The computed steady state results for the pressure distribution demonstrated good agreement with experimental data. We have computed results for the amplitudes of the unsteady pressure over the blade surfaces. With the increase in gas turbine engine structural complexity and performance over the past 50 years, structural engineers have created an array of safety nets to ensure against component failures in turbine engines. In order to reduce what is now considered to be excessive conservatism and yet maintain the same adequate margins of safety, there is a pressing need to explore methods of incorporating probabilistic design procedures into engine development. Probabilistic methods combine and prioritize the statistical distributions of each design variable, generate an interactive distribution and offer the designer a quantified relationship between robustness, endurance and performance. The designer can therefore iterate between weight reduction, life increase, engine size reduction, speed increase etc.

  13. Modeling Uncertainties in EEG Microstates: Analysis of Real and Imagined Motor Movements Using Probabilistic Clustering-Driven Training of Probabilistic Neural Networks.

    PubMed

    Dinov, Martin; Leech, Robert

    2017-01-01

    Part of the process of EEG microstate estimation involves clustering EEG channel data at the global field power (GFP) maxima, very commonly using a modified K-means approach. Clustering has also been done deterministically, despite there being uncertainties in multiple stages of the microstate analysis, including the GFP peak definition, the clustering itself and in the post-clustering assignment of microstates back onto the EEG timecourse of interest. We perform a fully probabilistic microstate clustering and labeling, to account for these sources of uncertainty using the closest probabilistic analog to KM called Fuzzy C-means (FCM). We train softmax multi-layer perceptrons (MLPs) using the KM and FCM-inferred cluster assignments as target labels, to then allow for probabilistic labeling of the full EEG data instead of the usual correlation-based deterministic microstate label assignment typically used. We assess the merits of the probabilistic analysis vs. the deterministic approaches in EEG data recorded while participants perform real or imagined motor movements from a publicly available data set of 109 subjects. Though FCM group template maps that are almost topographically identical to KM were found, there is considerable uncertainty in the subsequent assignment of microstate labels. In general, imagined motor movements are less predictable on a time point-by-time point basis, possibly reflecting the more exploratory nature of the brain state during imagined, compared to during real motor movements. We find that some relationships may be more evident using FCM than using KM and propose that future microstate analysis should preferably be performed probabilistically rather than deterministically, especially in situations such as with brain computer interfaces, where both training and applying models of microstates need to account for uncertainty. Probabilistic neural network-driven microstate assignment has a number of advantages that we have discussed, which are likely to be further developed and exploited in future studies. In conclusion, probabilistic clustering and a probabilistic neural network-driven approach to microstate analysis is likely to better model and reveal details and the variability hidden in current deterministic and binarized microstate assignment and analyses.

  14. Modeling Uncertainties in EEG Microstates: Analysis of Real and Imagined Motor Movements Using Probabilistic Clustering-Driven Training of Probabilistic Neural Networks

    PubMed Central

    Dinov, Martin; Leech, Robert

    2017-01-01

    Part of the process of EEG microstate estimation involves clustering EEG channel data at the global field power (GFP) maxima, very commonly using a modified K-means approach. Clustering has also been done deterministically, despite there being uncertainties in multiple stages of the microstate analysis, including the GFP peak definition, the clustering itself and in the post-clustering assignment of microstates back onto the EEG timecourse of interest. We perform a fully probabilistic microstate clustering and labeling, to account for these sources of uncertainty using the closest probabilistic analog to KM called Fuzzy C-means (FCM). We train softmax multi-layer perceptrons (MLPs) using the KM and FCM-inferred cluster assignments as target labels, to then allow for probabilistic labeling of the full EEG data instead of the usual correlation-based deterministic microstate label assignment typically used. We assess the merits of the probabilistic analysis vs. the deterministic approaches in EEG data recorded while participants perform real or imagined motor movements from a publicly available data set of 109 subjects. Though FCM group template maps that are almost topographically identical to KM were found, there is considerable uncertainty in the subsequent assignment of microstate labels. In general, imagined motor movements are less predictable on a time point-by-time point basis, possibly reflecting the more exploratory nature of the brain state during imagined, compared to during real motor movements. We find that some relationships may be more evident using FCM than using KM and propose that future microstate analysis should preferably be performed probabilistically rather than deterministically, especially in situations such as with brain computer interfaces, where both training and applying models of microstates need to account for uncertainty. Probabilistic neural network-driven microstate assignment has a number of advantages that we have discussed, which are likely to be further developed and exploited in future studies. In conclusion, probabilistic clustering and a probabilistic neural network-driven approach to microstate analysis is likely to better model and reveal details and the variability hidden in current deterministic and binarized microstate assignment and analyses. PMID:29163110
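
    Both versions of this record rely on Fuzzy C-means (FCM) as the probabilistic counterpart of K-means. Below is a generic textbook FCM with the standard membership and centroid updates on synthetic two-dimensional data; it is not the authors' microstate pipeline, and the fuzzifier m=2, iteration count, and data are assumptions.

        import numpy as np

        def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
            """Standard Fuzzy C-means: soft memberships u[i, j] of point i in cluster j."""
            rng = np.random.default_rng(seed)
            u = rng.random((X.shape[0], n_clusters))
            u /= u.sum(axis=1, keepdims=True)                    # memberships sum to 1 per point
            for _ in range(n_iter):
                um = u ** m
                centers = (um.T @ X) / um.sum(axis=0)[:, None]   # membership-weighted centroids
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                # membership update: inverse distances raised to 2/(m-1), then normalized
                u = 1.0 / (d ** (2.0 / (m - 1.0)))
                u /= u.sum(axis=1, keepdims=True)
            return centers, u

        rng = np.random.default_rng(2)
        X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
        centers, u = fuzzy_c_means(X, n_clusters=2)
        print(centers)            # two cluster prototypes
        print(u[:3].round(3))     # soft (probabilistic) memberships of the first points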

  15. Probabilistic double guarantee kidnapping detection in SLAM.

    PubMed

    Tian, Yang; Ma, Shugen

    2016-01-01

    For determining whether kidnapping has happened and which type of kidnapping it is while a robot performs autonomous tasks in an unknown environment, a double guarantee kidnapping detection (DGKD) method has previously been proposed. DGKD performs well in relatively small environments. However, our recent work found a limitation of DGKD in large-scale environments. In order to increase the adaptability of DGKD to large-scale environments, an improved method called probabilistic double guarantee kidnapping detection is proposed in this paper, combining the probabilities of feature positions and of the robot's posture. Simulation results demonstrate the validity and accuracy of the proposed method.

  16. Computational Everyday Life Human Behavior Model as Servicable Knowledge

    NASA Astrophysics Data System (ADS)

    Motomura, Yoichi; Nishida, Yoshifumi

    A project called 'Open life matrix' is not only a research activity but also real-world problem solving in the form of action research. This concept is realized through large-scale data collection, construction of probabilistic causal structure models, and provision of information services based on those models. One concrete outcome of this project is a childhood injury prevention activity carried out by a new team consisting of a hospital, government agencies, and researchers from many fields. The main result from the project is a general methodology for applying probabilistic causal structure models as serviceable knowledge for action research. This paper summarizes the project and discusses future directions that emphasize action research driven by artificial intelligence technology.

  17. Language acquisition and use: learning and applying probabilistic constraints.

    PubMed

    Seidenberg, M S

    1997-03-14

    What kinds of knowledge underlie the use of language and how is this knowledge acquired? Linguists equate knowing a language with knowing a grammar. Classic "poverty of the stimulus" arguments suggest that grammar identification is an intractable inductive problem and that acquisition is possible only because children possess innate knowledge of grammatical structure. An alternative view is emerging from studies of statistical and probabilistic aspects of language, connectionist models, and the learning capacities of infants. This approach emphasizes continuity between how language is acquired and how it is used. It retains the idea that innate capacities constrain language learning, but calls into question whether they include knowledge of grammatical structure.

  18. Probabilistic Priority Message Checking Modeling Based on Controller Area Networks

    NASA Astrophysics Data System (ADS)

    Lin, Cheng-Min

    Although the probabilistic model checking tool called PRISM has been applied to many communication systems, such as wireless local area networks, Bluetooth, and ZigBee, the technique has not been applied to the controller area network (CAN). In this paper, we use PRISM to model the priority-message mechanism of CAN, because this mechanism has allowed CAN to become the leading serial communication protocol for automotive and industrial control. Modeling CAN makes it easier to analyze its characteristics and thereby further improve the security and efficiency of automobiles. The Markov chain model helps us capture the behaviour of priority messages.
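
    PRISM is a dedicated probabilistic model checker with its own modeling language; purely to illustrate the kind of Markov chain analysis the record refers to, the sketch below propagates a small hand-built discrete-time Markov chain for high- and low-priority messages contending for the bus. The states and transition probabilities are invented for illustration and do not come from the paper's PRISM model.

        import numpy as np

        # hypothetical DTMC states for bus arbitration:
        # 0 = idle, 1 = high-priority message transmitting, 2 = low-priority transmitting
        states = ["idle", "high_tx", "low_tx"]
        P = np.array([
            [0.2, 0.6, 0.2],   # from idle: the high-priority message wins arbitration more often
            [0.9, 0.1, 0.0],   # high-priority frame finishes, bus returns to idle
            [0.8, 0.1, 0.1],   # low-priority frame may be displaced at the next arbitration
        ])

        dist = np.array([1.0, 0.0, 0.0])        # start in the idle state
        for step in range(50):                   # transient analysis by repeated multiplication
            dist = dist @ P

        print(dict(zip(states, dist.round(4))))  # approximate long-run state occupancy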

  19. Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review

    PubMed Central

    McClelland, James L.

    2013-01-01

    This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered. PMID:23970868

  20. Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review.

    PubMed

    McClelland, James L

    2013-01-01

    This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered.
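
    The claim above, that logistic or softmax units compute exact Bayesian posteriors when their biases are log priors and their weights are log likelihoods, can be checked numerically. The sketch below compares a softmax over (log prior + summed log likelihoods) with a direct Bayes computation for a made-up two-hypothesis, three-binary-feature generative model; all numbers are illustrative.

        import numpy as np

        def softmax(z):
            z = z - z.max()
            e = np.exp(z)
            return e / e.sum()

        # hypothetical generative model: two word hypotheses, three binary features
        priors = np.array([0.7, 0.3])
        likelihood = np.array([[0.9, 0.2, 0.6],    # P(feature_j = 1 | hypothesis 0)
                               [0.1, 0.8, 0.5]])   # P(feature_j = 1 | hypothesis 1)
        x = np.array([1, 1, 0])                    # observed features

        # direct Bayes: P(h | x) proportional to P(h) * prod_j P(x_j | h)
        joint = priors * np.prod(likelihood ** x * (1 - likelihood) ** (1 - x), axis=1)
        posterior_bayes = joint / joint.sum()

        # "neural" version: bias = log prior, net input adds the log-likelihood terms
        net = np.log(priors) + (x * np.log(likelihood) + (1 - x) * np.log(1 - likelihood)).sum(axis=1)
        posterior_softmax = softmax(net)

        print(posterior_bayes.round(4), posterior_softmax.round(4))   # identical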

  1. A Probabilistic Approach to Crosslingual Information Retrieval

    DTIC Science & Technology

    2001-06-01

    language expansion step can be performed before the translation process. Implemented as a call to the INQUERY function get_modified_query with one of the...database consists of American English while the dictionary is British English. Therefore, e.g. the Spanish word basura is translated to rubbish and

  2. Probabilistic evaluation of seismic isolation effect with respect to siting of a fusion reactor facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeda, Masatoshi; Komura, Toshiyuki; Hirotani, Tsutomu

    1995-12-01

    Annual failure probabilities of buildings and equipment were roughly evaluated for two fusion-reactor-like buildings, with and without seismic base isolation, in order to examine the effectiveness of the base isolation system with respect to siting issues. The probabilities were calculated considering nonlinearity and rupture of the isolators. While the probability of building failure was almost equal for both buildings on the same site, the functional failure probabilities for equipment showed that the base-isolated building had higher reliability than the non-isolated building. Even if the base-isolated building alone were located in an area of higher seismic hazard, it could compete favorably with the ordinary one in reliability of equipment.

  3. Negative Selection Algorithm for Aircraft Fault Detection

    NASA Technical Reports Server (NTRS)

    Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.

    2004-01-01

    We investigated a real-valued Negative Selection Algorithm (NSA) for fault detection in man-in-the-loop aircraft operation. The detection algorithm uses body-axes angular rate sensory data exhibiting the normal flight behavior patterns, to generate probabilistically a set of fault detectors that can detect any abnormalities (including faults and damages) in the behavior pattern of the aircraft flight. We performed experiments with datasets (collected under normal and various simulated failure conditions) using the NASA Ames man-in-the-loop high-fidelity C-17 flight simulator. The paper provides results of experiments with different datasets representing various failure conditions.
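
    As a minimal sketch of the real-valued negative selection idea described above: candidate detectors are sampled at random and retained only if they do not fall within a given radius of any normal ('self') sample, and new data covered by a surviving detector is flagged as anomalous. The radii, dimensionality, and data below are illustrative assumptions, not the configuration used in the C-17 simulator study.

        import numpy as np

        rng = np.random.default_rng(3)

        def generate_detectors(self_data, n_detectors=200, self_radius=0.1):
            """Keep random candidates that lie outside the self (normal) region."""
            detectors = []
            while len(detectors) < n_detectors:
                c = rng.random(self_data.shape[1])                       # candidate in [0,1]^d
                if np.linalg.norm(self_data - c, axis=1).min() > self_radius:
                    detectors.append(c)
            return np.array(detectors)

        def is_anomalous(x, detectors, detect_radius=0.1):
            """A sample is flagged if any detector covers it."""
            return np.linalg.norm(detectors - x, axis=1).min() < detect_radius

        # normal (self) behaviour clustered near the centre of the unit square
        self_data = 0.5 + 0.1 * rng.standard_normal((300, 2))
        detectors = generate_detectors(self_data)

        print(is_anomalous(np.array([0.5, 0.52]), detectors))  # normal-like sample -> likely False
        print(is_anomalous(np.array([0.05, 0.9]), detectors))  # abnormal sample    -> likely True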

  4. Limited-scope probabilistic safety analysis for the Los Alamos Meson Physics Facility (LAMPF)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharirli, M.; Rand, J.L.; Sasser, M.K.

    1992-01-01

    The reliability of instrumentation and safety systems is a major issue in the operation of accelerator facilities. A probabilistic safety analysis was performed on the key safety and instrumentation systems at the Los Alamos Meson Physics Facility (LAMPF). In Phase I of this unique study, the Personnel Safety System (PSS) and the Current Limiters (XLs) were analyzed through the use of fault tree analysis, failure modes and effects analysis, and criticality analysis. Phase II of the program was conducted to update and reevaluate the safety systems after the Phase I recommendations were implemented. This paper provides a brief review of the studies involved in Phases I and II of the program.

  5. Limited-scope probabilistic safety analysis for the Los Alamos Meson Physics Facility (LAMPF)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharirli, M.; Rand, J.L.; Sasser, M.K.

    1992-12-01

    The reliability of instrumentation and safety systems is a major issue in the operation of accelerator facilities. A probabilistic safety analysis was performed on the key safety and instrumentation systems at the Los Alamos Meson Physics Facility (LAMPF). In Phase I of this unique study, the Personnel Safety System (PSS) and the Current Limiters (XLs) were analyzed through the use of fault tree analysis, failure modes and effects analysis, and criticality analysis. Phase II of the program was conducted to update and reevaluate the safety systems after the Phase I recommendations were implemented. This paper provides a brief review of the studies involved in Phases I and II of the program.

  6. Application of Probability Methods to Assess Crash Modeling Uncertainty

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.

    2003-01-01

    Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.

  7. Application of Probability Methods to Assess Crash Modeling Uncertainty

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.

    2007-01-01

    Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.

  8. A Probabilistic Approach to Predict Thermal Fatigue Life for Ball Grid Array Solder Joints

    NASA Astrophysics Data System (ADS)

    Wei, Helin; Wang, Kuisheng

    2011-11-01

    Numerous studies of the reliability of solder joints have been performed. Most life prediction models are limited to a deterministic approach. However, manufacturing induces uncertainty in the geometry parameters of solder joints, and the environmental temperature varies widely due to end-user diversity, creating uncertainties in the reliability of solder joints. In this study, a methodology for accounting for variation in the lifetime prediction for lead-free solder joints of ball grid array packages (PBGA) is demonstrated. The key aspects of the solder joint parameters and the cyclic temperature range related to reliability are involved. Probabilistic solutions of the inelastic strain range and thermal fatigue life based on the Engelmaier model are developed to determine the probability of solder joint failure. The results indicate that the standard deviation increases significantly when more random variations are involved. Using the probabilistic method, the influence of each variable on the thermal fatigue life is quantified. This information can be used to optimize product design and process validation acceptance criteria. The probabilistic approach creates the opportunity to identify the root causes of failed samples from product fatigue tests and field returns. The method can be applied to better understand how variation affects parameters of interest in an electronic package design with area array interconnections.
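
    The study above propagates manufacturing and temperature-range variability through the Engelmaier solder-joint fatigue model to obtain a distribution of life rather than a single number. The sketch below performs the same style of propagation through a generic Coffin-Manson-type mean-life relation, Nf = 0.5 (Δγ / 2εf')^(1/c); the strain-range distribution and the constants are illustrative assumptions, and the actual Engelmaier model derives Δγ and c from package geometry and cycling conditions.

        import numpy as np

        rng = np.random.default_rng(4)

        def fatigue_life(delta_gamma, eps_f=0.325, c=-0.442):
            """Coffin-Manson-type mean-life relation used by Engelmaier-style models:
            Nf = 0.5 * (delta_gamma / (2 * eps_f)) ** (1 / c). Constants are illustrative."""
            return 0.5 * (delta_gamma / (2.0 * eps_f)) ** (1.0 / c)

        # propagate assumed variability in the cyclic shear strain range
        n = 100_000
        delta_gamma = rng.normal(loc=0.02, scale=0.003, size=n)   # illustrative strain range
        delta_gamma = np.clip(delta_gamma, 1e-4, None)            # keep values physically positive
        lives = fatigue_life(delta_gamma)

        print(f"median life   : {np.median(lives):,.0f} cycles")
        print(f"1st percentile: {np.percentile(lives, 1):,.0f} cycles")
        print(f"P(failure before 1000 cycles) = {(lives < 1000).mean():.4f}")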

  9. Bayesian Probabilistic Projections of Life Expectancy for All Countries

    PubMed Central

    Raftery, Adrian E.; Chunn, Jennifer L.; Gerland, Patrick; Ševčíková, Hana

    2014-01-01

    We propose a Bayesian hierarchical model for producing probabilistic forecasts of male period life expectancy at birth for all the countries of the world from the present to 2100. Such forecasts would be an input to the production of probabilistic population projections for all countries, which is currently being considered by the United Nations. To evaluate the method, we did an out-of-sample cross-validation experiment, fitting the model to the data from 1950–1995, and using the estimated model to forecast for the subsequent ten years. The ten-year predictions had a mean absolute error of about 1 year, about 40% less than the current UN methodology. The probabilistic forecasts were calibrated, in the sense that (for example) the 80% prediction intervals contained the truth about 80% of the time. We illustrate our method with results from Madagascar (a typical country with steadily improving life expectancy), Latvia (a country that has had a mortality crisis), and Japan (a leading country). We also show aggregated results for South Asia, a region with eight countries. Free publicly available R software packages called bayesLife and bayesDem are available to implement the method. PMID:23494599

  10. Design of high temperature ceramic components against fast fracture and time-dependent failure using cares/life

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jadaan, O.M.; Powers, L.M.; Nemeth, N.N.

    1995-08-01

    A probabilistic design methodology that predicts the fast-fracture and time-dependent failure behavior of thermomechanically loaded ceramic components is discussed using the CARES/LIFE integrated design computer program. Slow crack growth (SCG) is assumed to be the mechanism responsible for delayed failure behavior. Inert strength and dynamic fatigue data obtained from testing coupon specimens (O-ring and C-ring specimens) are initially used to calculate the fast-fracture and SCG material parameters as a function of temperature using the parameter estimation techniques available with the CARES/LIFE code. Finite element analysis (FEA) is used to compute the stress distributions for the tube as a function of applied pressure. Knowing the stress and temperature distributions and the fast-fracture and SCG material parameters, the lifetime of a given tube can be computed. A stress-failure probability-time to failure (SPT) diagram is subsequently constructed for these tubes. Such a diagram can be used by design engineers to estimate the time to failure at a given failure probability level for a component subjected to a given thermomechanical load.
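
    As a minimal, hedged illustration of only the fast-fracture half of the methodology above, the sketch below evaluates a two-parameter Weibull failure probability for a uniformly stressed component. The Weibull modulus and characteristic strength are invented, and a real CARES/LIFE analysis integrates the FEA stress field over the component volume or surface rather than using a single stress value.

        import numpy as np

        def weibull_failure_probability(stress, sigma_0, m):
            """Two-parameter Weibull fast-fracture probability for a uniformly
            stressed component: Pf = 1 - exp(-(sigma / sigma_0) ** m)."""
            return 1.0 - np.exp(-(stress / sigma_0) ** m)

        m = 10.0          # illustrative Weibull modulus
        sigma_0 = 400.0   # illustrative characteristic strength, MPa

        for stress in (200.0, 300.0, 350.0, 400.0):
            pf = weibull_failure_probability(stress, sigma_0, m)
            print(f"{stress:5.0f} MPa -> Pf = {pf:.4f}")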

  11. [Management of vaginal infection following failure of a probabilistic treatment: is the vaginal swab really useful?].

    PubMed

    Bretelle, F; Chiarelli, P; Palmer, I; Glatt, N

    2015-02-01

    The aim of this observational national multi-centre study was to describe the medical care of vaginal infections resisting a primary probabilistic treatment. Two hundred and seventy female patients were included during a 9-month period (from March 20, 2013 to December 7, 2013) by 155 gynaecologists located throughout France. All patients presented a vulvo-vaginitis episode that had started about three weeks earlier and was characterized by leucorrhea (93% of cases), itching (88% of cases) and/or vulvar and/or vaginal irritation (88% of cases). In most cases, this episode had previously been treated by a short course of an azole antifungal medication. This treatment was initiated by the patient herself without any doctor's prescription in six out of 10 cases and had no influence on the evolution of the original clinical symptoms. Second-line treatments included azole antifungal medications (56% of cases), local fixed combinations (antifungal agent and bactericidal antibiotic) (29%), metronidazole (9%), and oral antibiotics (7.4%). At the end of the treatment, 85% of patients recovered from vaginitis symptoms. The recovery rate was 82.6% for patients who underwent a bacteriological examination and 87.6% for patients who were treated without any bacteriological examination. The difference is not statistically significant. These results suggest that probabilistic medical care is as effective as (but probably more economical than) a therapeutic strategy guided by the results of further examinations in case of failure of a primary treatment. This conclusion should be confirmed by a medico-economic comparison after randomization. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  12. Relating design and environmental variables to reliability

    NASA Astrophysics Data System (ADS)

    Kolarik, William J.; Landers, Thomas L.

    The combination of space application and a nuclear power source demands high-reliability hardware. The possibilities of failure, either an inability to provide power or a catastrophic accident, must be minimized. Nuclear power experience on the ground has led to highly sophisticated probabilistic risk assessment procedures, most of which require quantitative information to adequately assess such risks. In the area of hardware risk analysis, reliability information plays a key role. One of the lessons learned from the Three Mile Island experience is that thorough analyses of critical components are essential. Nuclear-grade equipment shows some reliability advantages over commercial equipment; however, no statistically significant difference has been found. A recent study pertaining to spacecraft electronics reliability examined some 2500 malfunctions on more than 300 aircraft. The study classified the equipment failures into seven general categories. Design deficiencies and lack of environmental protection accounted for about half of all failures. Within each class, limited reliability modeling was performed using a Weibull failure model.

  13. Cost-effectiveness of noninvasive ventilation for chronic obstructive pulmonary disease-related respiratory failure in Indian hospitals without ICU facilities.

    PubMed

    Patel, Shraddha P; Pena, Margarita E; Babcock, Charlene Irvin

    2015-01-01

    The majority of Indian hospitals do not provide intensive care unit (ICU) care or ward-based noninvasive positive pressure ventilation (NIV). Because no mechanical ventilation or NIV is available in these hospitals, the majority of patients suffering from respiratory failure die. The objective was to perform a cost-effectiveness analysis of two strategies (ward-based NIV with concurrent standard treatment vs standard treatment alone) in chronic obstructive pulmonary disease (COPD) respiratory failure patients treated in Indian hospitals without ICU care. A decision-analytical model was created to compare the cost-effectiveness of the two strategies. Estimates from the literature were used for parameters in the model. Future costs were discounted at 3%. All costs are reported in USD (2012). One-way, two-way, and probabilistic sensitivity analyses were performed. The time horizon was lifetime and the perspective was societal. The NIV strategy resulted in 17.7% more survival and was slightly more costly (an increased cost of $101, USD 2012) but yielded more quality-adjusted life-years (an additional 1.67 QALYs). The cost-effectiveness (2012 USD)/QALY in the standard and NIV groups was $78/QALY ($535.02/6.82) and $75/QALY ($636.33/8.49), respectively. The incremental cost-effectiveness ratio (ICER) was only $61 USD/QALY. This is substantially lower than the gross domestic product (GDP) per capita for India (1489 USD), suggesting the NIV strategy is very cost-effective. Using a 5% discount rate resulted in only minimally different results. Probabilistic analysis suggests that the NIV strategy was preferred 100% of the time when willingness to pay was >$250 (2012 USD). Ward-based NIV treatment is cost-effective in India, and may increase survival of patients with COPD respiratory failure when an ICU is not available.
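
    The incremental cost-effectiveness ratio quoted above follows directly from the reported costs and QALYs: ICER = (cost_NIV - cost_standard) / (QALY_NIV - QALY_standard). The snippet below simply re-does that arithmetic with the figures given in the abstract.

        # figures taken from the abstract above (2012 USD)
        cost_standard, qaly_standard = 535.02, 6.82
        cost_niv, qaly_niv = 636.33, 8.49

        incremental_cost = cost_niv - cost_standard          # ~$101
        incremental_qaly = qaly_niv - qaly_standard          # ~1.67 QALYs
        icer = incremental_cost / incremental_qaly           # ~$61 per QALY gained

        print(f"Incremental cost : ${incremental_cost:.2f}")
        print(f"Incremental QALY : {incremental_qaly:.2f}")
        print(f"ICER             : ${icer:.2f} per QALY")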

  14. A unified bond theory, probabilistic meso-scale modeling, and experimental validation of deformed steel rebar in normal strength concrete

    NASA Astrophysics Data System (ADS)

    Wu, Chenglin

    Bond between deformed rebar and concrete is affected by rebar deformation pattern, concrete properties, concrete confinement, and rebar-concrete interfacial properties. Two distinct groups of bond models were traditionally developed based on the dominant effects of concrete splitting and near-interface shear-off failures. Their accuracy highly depended upon the test data sets selected in analysis and calibration. In this study, a unified bond model is proposed and developed based on an analogy to the indentation problem around the rib front of deformed rebar. This mechanics-based model can take into account the combined effect of concrete splitting and interface shear-off failures, resulting in average bond strengths for all practical scenarios. To understand the fracture process associated with bond failure, a probabilistic meso-scale model of concrete is proposed and its sensitivity to interface and confinement strengths are investigated. Both the mechanical and finite element models are validated with the available test data sets and are superior to existing models in prediction of average bond strength (< 6% error) and crack spacing (< 6% error). The validated bond model is applied to derive various interrelations among concrete crushing, concrete splitting, interfacial behavior, and the rib spacing-to-height ratio of deformed rebar. It can accurately predict the transition of failure modes from concrete splitting to rebar pullout and predict the effect of rebar surface characteristics as the rib spacing-to-height ratio increases. Based on the unified theory, a global bond model is proposed and developed by introducing bond-slip laws, and validated with testing of concrete beams with spliced reinforcement, achieving a load capacity prediction error of less than 26%. The optimal rebar parameters and concrete cover in structural designs can be derived from this study.

  15. A cost-effectiveness analysis of a proactive management strategy for the Sprint Fidelis recall: a probabilistic decision analysis model.

    PubMed

    Bashir, Jamil; Cowan, Simone; Raymakers, Adam; Yamashita, Michael; Danter, Matthew; Krahn, Andrew; Lynd, Larry D

    2013-12-01

    The management of the recall is complicated by the competing risks of lead failure and complications that can occur with lead revision. Many of these patients are currently undergoing an elective generator change--an ideal time to consider lead revision. To determine the cost-effectiveness of a proactive management strategy for the Sprint Fidelis recall. We obtained detailed clinical outcomes and costing data from a retrospective analysis of 341 patients who received the Sprint Fidelis lead in British Columbia, where patients younger than 60 years were offered lead extraction when undergoing generator replacement. These population-based data were used to construct and populate a probabilistic Markov model in which a proactive management strategy was compared to a conservative strategy to determine the incremental cost per lead failure avoided. In our population, elective lead revisions were half the cost of emergent revisions and had a lower complication rate. In the model, the incremental cost-effectiveness ratio of proactive lead revision versus a recommended monitoring strategy was $12,779 per lead failure avoided. The proactive strategy resulted in 21 fewer failures per 100 patients treated and reduced the chance of an additional complication from an unexpected surgery. Cost-effectiveness analysis suggests that prospective lead revision should be considered when patients with a Sprint Fidelis lead present for pulse generator change. Elective revision of the lead is justified even when 25% of the population is operated on per year, and in some scenarios, it is both less costly and provides a better outcome. © 2013 Heart Rhythm Society Published by Heart Rhythm Society All rights reserved.

  16. Probabilistic-Based Modeling and Simulation Assessment

    DTIC Science & Technology

    2010-06-01

    developed to determine the relative importance of structural components of the vehicle under different crash and blast scenarios. With the integration of the high fidelity neck and head model, a methodology to calculate the...parameter variability, correlation, and multiple (often competing) failure metrics. Important scenarios include vehicular collisions, blast/fragment

  17. Towards Real-time, On-board, Hardware-Supported Sensor and Software Health Management for Unmanned Aerial Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Rozier, Kristin Y.; Reinbacher, Thomas; Mengshoel, Ole J.; Mbaya, Timmy; Ippolito, Corey

    2013-01-01

    Unmanned aerial systems (UASs) can only be deployed if they can effectively complete their missions and respond to failures and uncertain environmental conditions while maintaining safety with respect to other aircraft as well as humans and property on the ground. In this paper, we design a real-time, on-board system health management (SHM) capability to continuously monitor sensors, software, and hardware components for detection and diagnosis of failures and violations of safety or performance rules during the flight of a UAS. Our approach to SHM is three-pronged, providing: (1) real-time monitoring of sensor and/or software signals; (2) signal analysis, preprocessing, and advanced on-the-fly temporal and Bayesian probabilistic fault diagnosis; (3) an unobtrusive, lightweight, read-only, low-power realization using Field Programmable Gate Arrays (FPGAs) that avoids overburdening limited computing resources or costly re-certification of flight software due to instrumentation. Our implementation provides a novel approach of combining modular building blocks, integrating responsive runtime monitoring of temporal logic system safety requirements with model-based diagnosis and Bayesian network-based probabilistic analysis. We demonstrate this approach using actual data from the NASA Swift UAS, an experimental all-electric aircraft.

  18. Effects of shipping on marine acoustic habitats in Canadian Arctic estimated via probabilistic modeling and mapping.

    PubMed

    Aulanier, Florian; Simard, Yvan; Roy, Nathalie; Gervaise, Cédric; Bandet, Marion

    2017-12-15

    Canadian Arctic and Subarctic regions experience a rapid decrease of sea ice accompanied with increasing shipping traffic. The resulting time-space changes in shipping noise are studied for four key regions of this pristine environment, for 2013 traffic conditions and a hypothetical tenfold traffic increase. A probabilistic modeling and mapping framework, called Ramdam, which integrates the intrinsic variability and uncertainties of shipping noise and its effects on marine habitats, is developed and applied. A substantial transformation of soundscapes is observed in areas where shipping noise changes from present occasional-transient contributor to a dominant noise source. Examination of impacts on low-frequency mammals within ecologically and biologically significant areas reveals that shipping noise has the potential to trigger behavioral responses and masking in the future, although no risk of temporary or permanent hearing threshold shifts is noted. Such probabilistic modeling and mapping is strategic in marine spatial planning of this emerging noise issues. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  19. A Positive Role for Failure in Virtue Education

    ERIC Educational Resources Information Center

    Athanassoulis, Nafsika

    2017-01-01

    This article outlines a role for constructive failures in virtue education. Some failures can be catastrophic and push the agent toward vice, but other types of failure can have positive consequences; we call these constructive failures: failures that help on the road to virtue. So, while failures are generally appealed to as examples…

  20. A novel risk assessment method for landfill slope failure: Case study application for Bhalswa Dumpsite, India.

    PubMed

    Jahanfar, Ali; Amirmojahedi, Mohsen; Gharabaghi, Bahram; Dubey, Brajesh; McBean, Edward; Kumar, Dinesh

    2017-03-01

    Rapid population growth of major urban centres in many developing countries has created massive landfills with extraordinary heights and steep side-slopes, which are frequently surrounded by illegal low-income residential settlements developed too close to landfills. These extraordinary landfills are facing high risks of catastrophic failure with potentially large numbers of fatalities. This study presents a novel method for risk assessment of landfill slope failure, using probabilistic analysis of potential failure scenarios and associated fatalities. The conceptual framework of the method includes selecting appropriate statistical distributions for the municipal solid waste (MSW) material shear strength and rheological properties for potential failure scenario analysis. The MSW material properties for a given scenario is then used to analyse the probability of slope failure and the resulting run-out length to calculate the potential risk of fatalities. In comparison with existing methods, which are solely based on the probability of slope failure, this method provides a more accurate estimate of the risk of fatalities associated with a given landfill slope failure. The application of the new risk assessment method is demonstrated with a case study for a landfill located within a heavily populated area of New Delhi, India.

  1. Simulation Assisted Risk Assessment: Blast Overpressure Modeling

    NASA Technical Reports Server (NTRS)

    Lawrence, Scott L.; Gee, Ken; Mathias, Donovan; Olsen, Michael

    2006-01-01

    A probabilistic risk assessment (PRA) approach has been developed and applied to the risk analysis of capsule abort during ascent. The PRA is used to assist in the identification of modeling and simulation applications that can significantly impact the understanding of crew risk during this potentially dangerous maneuver. The PRA approach is also being used to identify the appropriate level of fidelity for the modeling of those critical failure modes. The Apollo launch escape system (LES) was chosen as a test problem for application of this approach. Failure modes that have been modeled and/or simulated to date include explosive overpressure-based failure, explosive fragment-based failure, land landing failures (range limits exceeded either near launch or Mode III trajectories ending on the African continent), capsule-booster re-contact during separation, and failure due to plume-induced instability. These failure modes have been investigated using analysis tools in a variety of technical disciplines at various levels of fidelity. The current paper focuses on the development and application of a blast overpressure model for the prediction of structural failure due to overpressure, including the application of high-fidelity analysis to predict near-field and headwinds effects.

  2. Cost-Effectiveness Analysis of Intensity Modulated Radiation Therapy Versus 3-Dimensional Conformal Radiation Therapy for Anal Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodges, Joseph C., E-mail: joseph.hodges@utsouthwestern.edu; Beg, Muhammad S.; Das, Prajnan

    2014-07-15

    Purpose: To compare the cost-effectiveness of intensity modulated radiation therapy (IMRT) and 3-dimensional conformal radiation therapy (3D-CRT) for anal cancer and determine disease, patient, and treatment parameters that influence the result. Methods and Materials: A Markov decision model was designed with the various disease states for the base case of a 65-year-old patient with anal cancer treated with either IMRT or 3D-CRT and concurrent chemotherapy. Health states accounting for rates of local failure, colostomy failure, treatment breaks, patient prognosis, acute and late toxicities, and the utility of toxicities were informed by existing literature and analyzed with deterministic and probabilistic sensitivity analysis. Results: In the base case, mean costs and quality-adjusted life expectancy in years (QALY) for IMRT and 3D-CRT were $32,291 (4.81) and $28,444 (4.78), respectively, resulting in an incremental cost-effectiveness ratio of $128,233/QALY for IMRT compared with 3D-CRT. Probabilistic sensitivity analysis found that IMRT was cost-effective in 22%, 47%, and 65% of iterations at willingness-to-pay thresholds of $50,000, $100,000, and $150,000 per QALY, respectively. Conclusions: In our base model, IMRT was a cost-ineffective strategy despite the reduced acute treatment toxicities and their associated costs of management. The model outcome was sensitive to variations in local and colostomy failure rates, as well as patient-reported utilities relating to acute toxicities.

  3. Application of Probabilistic Methods to Assess Risk Due to Resonance in the Design of J-2X Rocket Engine Turbine Blades

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; DeHaye, Michael; DeLessio, Steven

    2011-01-01

    The LOX-Hydrogen J-2X Rocket Engine, which is proposed for use as an upper-stage engine for numerous earth-to-orbit and heavy lift launch vehicle architectures, is presently in the design phase and will move shortly to the initial development test phase. Analysis of the design has revealed numerous potential resonance issues with hardware in the turbomachinery turbine-side flow-path. The analysis of the fuel pump turbine blades requires particular care because resonant failure of the blades, which are rotating in excess of 30,000 revolutions per minute (RPM), could be catastrophic for the engine and the entire launch vehicle. This paper describes a series of probabilistic analyses performed to assess the risk of failure of the turbine blades due to resonant vibration during past and present test series. Some significant results are that the probability of failure during a single complete engine hot-fire test is low (1%) because of the small likelihood of resonance, but that the probability increases to around 30% for a more focused turbomachinery-only test, because all speeds will be ramped through and there is a greater likelihood of dwelling at more speeds. These risk calculations have been invaluable for use by program management in deciding if risk-reduction methods such as dampers are necessary immediately or if the test can be performed before the risk-reduction hardware is ready.

  4. Optimization Testbed Cometboards Extended into Stochastic Domain

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.; Patnaik, Surya N.

    2010-01-01

    COMparative Evaluation Testbed of Optimization and Analysis Routines for the Design of Structures (CometBoards) is multidisciplinary design optimization software. It was originally developed for deterministic calculation and has now been extended into the stochastic domain for structural design problems. For deterministic problems, CometBoards is introduced through its subproblem solution strategy as well as the approximation concept in optimization. In the stochastic domain, a design is formulated as a function of the risk or reliability. The optimum solution, including the weight of the structure, is also obtained as a function of reliability. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to 50 percent probability of success, or one failure in two samples. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponded to unity for reliability. Weight can be reduced to a small value for the most failure-prone design with a compromised reliability approaching zero. The stochastic design optimization (SDO) capability for an industrial problem was obtained by combining three codes: the MSC/Nastran code was the deterministic analysis tool, the fast probabilistic integrator (the FPI module of the NESSUS software) was the probabilistic calculator, and CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life airframe component made of metallic and composite materials.

  5. Application of Generative Topographic Mapping to Gear Failures Monitoring

    NASA Astrophysics Data System (ADS)

    Liao, Guanglan; Li, Weihua; Shi, Tielin; Rao, Raj B. K. N.

    2002-07-01

    The Generative Topographic Mapping (GTM) model is introduced as a probabilistic reformulation of the self-organizing map and has already been used in a variety of applications. This paper presents a study of the GTM in industrial gear failure monitoring. Vibration signals are analyzed using the GTM model, and the results show that gear feature data sets can be projected into a two-dimensional space and clustered in different areas according to their conditions, which makes it possible to clearly classify and identify gear conditions with a cracked or broken tooth relative to the normal condition. By tracing the image points in the two-dimensional space, the variation of gear work conditions can be observed visually; therefore, the occurrence and evolution of gear failures can be monitored in a timely manner.

  6. Dynamic Modeling of Ascent Abort Scenarios for Crewed Launches

    NASA Technical Reports Server (NTRS)

    Bigler, Mark; Boyer, Roger L.

    2015-01-01

    For the last 30 years, the United States' human space program has been focused on low Earth orbit exploration and operations with the Space Shuttle and International Space Station programs. After over 40 years, the U.S. is again working to return humans beyond Earth orbit. To do so, NASA is developing a new launch vehicle and spacecraft to provide this capability. The launch vehicle is referred to as the Space Launch System (SLS) and the spacecraft is called Orion. The new launch system is being developed with an abort system that will enable the crew to escape launch failures that would otherwise be catastrophic, as well as with probabilistic design requirements set on the probability of loss of crew (LOC) and loss of mission (LOM). In order to optimize the risk associated with designing this new launch system, as well as to verify the associated requirements, NASA has developed a comprehensive Probabilistic Risk Assessment (PRA) of the integrated ascent phase of the mission that includes the launch vehicle, spacecraft, and ground launch facilities. Given the dynamic nature of rocket launches and the potential for things to go wrong, developing a PRA to assess the risk can be a very challenging effort. Prior to launch and after the crew has boarded the spacecraft, the risk exposure time can be on the order of three hours. During this time, events may initiate from the spacecraft, the launch vehicle, or the ground systems, thus requiring an emergency egress from the spacecraft to a safe ground location or a pad abort via the spacecraft's launch abort system. Following launch, either the spacecraft or the launch vehicle can again initiate the need for the crew to abort the mission and return home. There are thousands of scenarios whose outcome depends on when the abort is initiated during ascent and how the abort is performed. This includes modeling the risk associated with explosions and benign system failures that require aborting a spacecraft under very dynamic conditions, particularly in the lower atmosphere, and returning the crew home safely. This paper will provide an overview of the PRA model that has been developed for this new launch system, including some of the challenges associated with this effort.

  7. Dynamic Modeling of Ascent Abort Scenarios for Crewed Launches

    NASA Technical Reports Server (NTRS)

    Bigler, Mark; Boyer, Roger L.

    2015-01-01

    For the last 30 years, the United States' human space program has been focused on low Earth orbit exploration and operations with the Space Shuttle and International Space Station programs. After nearly 50 years, the U.S. is again working to return humans beyond Earth orbit. To do so, NASA is developing a new launch vehicle and spacecraft to provide this capability. The launch vehicle is referred to as the Space Launch System (SLS) and the spacecraft is called Orion. The new launch system is being developed with an abort system that will enable the crew to escape launch failures that would otherwise be catastrophic, as well as with probabilistic design requirements set on the probability of loss of crew (LOC) and loss of mission (LOM). In order to optimize the risk associated with designing this new launch system, as well as to verify the associated requirements, NASA has developed a comprehensive Probabilistic Risk Assessment (PRA) of the integrated ascent phase of the mission that includes the launch vehicle, spacecraft, and ground launch facilities. Given the dynamic nature of rocket launches and the potential for things to go wrong, developing a PRA to assess the risk can be a very challenging effort. Prior to launch and after the crew has boarded the spacecraft, the risk exposure time can be on the order of three hours. During this time, events may initiate from the spacecraft, the launch vehicle, or the ground systems, thus requiring an emergency egress from the spacecraft to a safe ground location or a pad abort via the spacecraft's launch abort system. Following launch, either the spacecraft or the launch vehicle can again initiate the need for the crew to abort the mission and return home. There are thousands of scenarios whose outcome depends on when the abort is initiated during ascent and on how the abort is performed. This includes modeling the risk associated with explosions and benign system failures that require aborting a spacecraft under very dynamic conditions, particularly in the lower atmosphere, and returning the crew home safely. This paper will provide an overview of the PRA model that has been developed for this new launch system, including some of the challenges associated with this effort. Key Words: PRA, space launches, human space program, ascent abort, spacecraft, launch vehicles

  8. SHEDS-PM: A POPULATION EXPOSURE MODEL FOR PREDICTING DISTRIBUTIONS OF PM EXPOSURE AND DOSE FROM BOTH OUTDOOR AND INDOOR SOURCES

    EPA Science Inventory

    The US EPA National Exposure Research Laboratory (NERL) has developed a population exposure and dose model for particulate matter (PM), called the Stochastic Human Exposure and Dose Simulation (SHEDS) model. SHEDS-PM uses a probabilistic approach that incorporates both variabi...

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farmer, J.D.; Ott, E.; Yorke, J.A.

    Dimension is perhaps the most basic property of an attractor. In this paper we discuss a variety of different definitions of dimension, compute their values for a typical example, and review previous work on the dimension of chaotic attractors. The relevant definitions of dimension are of two general types, those that depend only on metric properties, and those that depend on probabilistic properties (that is, they depend on the frequency with which a typical trajectory visits different regions of the attractor). Both our example and the previous work that we review support the conclusion that all of the probabilistic dimensions take on the same value, which we call the dimension of the natural measure, and all of the metric dimensions take on a common value, which we call the fractal dimension. Furthermore, the dimension of the natural measure is typically equal to the Lyapunov dimension, which is defined in terms of Lyapunov numbers, and thus is usually far easier to calculate than any other definition. Because it is computable and more physically relevant, we feel that the dimension of the natural measure is more important than the fractal dimension.
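
    The Lyapunov dimension referred to above is usually computed with the Kaplan-Yorke formula; in standard notation (stated here generically, not quoted from the paper),

      \[
        D_L \;=\; k \;+\; \frac{\sum_{i=1}^{k}\lambda_i}{\lvert\lambda_{k+1}\rvert},
      \]

    where \lambda_1 \ge \lambda_2 \ge \dots are the Lyapunov exponents and k is the largest index for which \lambda_1 + \dots + \lambda_k \ge 0.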

  10. Probabilistic atlas and geometric variability estimation to drive tissue segmentation.

    PubMed

    Xu, Hao; Thirion, Bertrand; Allassonnière, Stéphanie

    2014-09-10

    Computerized anatomical atlases play an important role in medical image analysis. While an atlas usually refers to a standard or mean image, also called a template, that presumably represents a given population well, the template alone is not enough to characterize the observed population in detail. A template image should be learned jointly with the geometric variability of the shapes represented in the observations. These two quantities together form the atlas of the corresponding population. The geometric variability is modeled as deformations of the template image so that it fits the observations. In this paper, we provide a detailed analysis of a new generative statistical model based on dense deformable templates that represents several tissue types observed in medical images. Our atlas contains both an estimate of the probability maps of each tissue (called a class) and the deformation metric. We use a stochastic algorithm to estimate the probabilistic atlas from a dataset. This atlas is then used in an atlas-based segmentation method to segment new images. Experiments are shown on brain T1 MRI datasets. Copyright © 2014 John Wiley & Sons, Ltd.

  11. Reliability analysis of composite structures

    NASA Technical Reports Server (NTRS)

    Kan, Han-Pin

    1992-01-01

    A probabilistic static stress analysis methodology has been developed to estimate the reliability of a composite structure. Closed-form stress analysis methods are the primary analytical tools used in this methodology. These structural mechanics methods are used to identify independent variables whose variations significantly affect the performance of the structure. Once these variables are identified, the scatter in their values is evaluated and statistically characterized. The scatter in applied loads and structural parameters is then fitted to appropriate probabilistic distribution functions. Numerical integration techniques are applied to compute the structural reliability. The predicted reliability accounts for scatter due to variability in material strength, applied load, and fabrication and assembly processes. The influence of structural geometry and failure mode is also considered in the evaluation. Example problems are given to illustrate various levels of analytical complexity.
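
    As an illustration of the numerical-integration step, the failure probability for a single load-strength pair can be written as P_f = \int F_R(s)\, f_S(s)\, ds, where S is the applied stress and R the strength. A minimal sketch with assumed distributions (not the report's data) follows:

      # Illustrative load-strength reliability integration (assumed distributions):
      # Pf = integral of F_R(s) * f_S(s) ds, the probability that strength R
      # falls below the applied stress S.
      import numpy as np
      from scipy import stats, integrate

      S = stats.norm(loc=300.0, scale=40.0)     # applied stress [MPa], assumed
      R = stats.lognorm(s=0.08, scale=450.0)    # material strength [MPa], assumed

      integrand = lambda s: R.cdf(s) * S.pdf(s)
      pf, err = integrate.quad(integrand, S.ppf(1e-9), S.ppf(1 - 1e-9))
      print(f"probability of failure ~ {pf:.3e} (quadrature error ~ {err:.1e})")
      print(f"reliability ~ {1 - pf:.6f}")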

  12. Reduced activation in ventral striatum and ventral tegmental area during probabilistic decision-making in schizophrenia.

    PubMed

    Rausch, Franziska; Mier, Daniela; Eifler, Sarah; Esslinger, Christine; Schilling, Claudia; Schirmbeck, Frederike; Englisch, Susanne; Meyer-Lindenberg, Andreas; Kirsch, Peter; Zink, Mathias

    2014-07-01

    Patients with schizophrenia suffer from deficits in monitoring and controlling their own thoughts. Within these so-called metacognitive impairments, alterations in probabilistic reasoning might be one cognitive phenomenon predisposing to delusions. However, so far little is known about alterations in the associated brain functionality. A previously established task for functional magnetic resonance imaging (fMRI), which requires a probabilistic decision after a variable number of stimuli, was applied to 23 schizophrenia patients and 28 healthy controls matched for age, gender and educational level. We compared activation patterns during decision-making under conditions of certainty versus uncertainty and evaluated the process of final decision-making in the ventral striatum (VS) and ventral tegmental area (VTA). We replicated a previously described extended cortical activation pattern during probabilistic reasoning. During final decision-making, activations in several fronto- and parietocortical areas, as well as in VS and VTA, became apparent. In both of these regions, schizophrenia patients showed significantly reduced activation. These results further define the network underlying probabilistic decision-making. The observed hypo-activation in regions commonly associated with dopaminergic neurotransmission fits into current concepts of disrupted prediction error signaling in schizophrenia and suggests functional links to reward anticipation. Forthcoming studies with patients at risk for psychosis and drug-naive first-episode patients are necessary to elucidate the development of these findings over time and the interplay with associated clinical symptoms. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Direct Telephonic Communication in a Heart Failure Transitional Care Program: An observational study.

    PubMed

    Ota, Ken S; Beutler, David S; Sheikh, Hassam; Weiss, Jessica L; Parkinson, Dallin; Nguyen, Peter; Gerkin, Richard D; Loli, Akil I

    2013-10-01

    This study investigated the trend of phone calls in the Banner Good Samaritan Medical Center (BGSMC) Heart Failure Transitional Care Program (HFTCP). The primary goal of the HFTCP is to reduce 30-day readmissions for heart failure patients by using a multi-pronged approach. This study included 104 patients in the HFTCP discharged over a 51-week period who had around-the-clock telephone access to the Transitionalist. Cellular phone records were reviewed, and the length and timing of calls were evaluated. A total of 4398 telephone calls were recorded, of which 39% were inbound and 61% were outbound, averaging 86 calls per week. Eighty-five percent of the total calls were made during the "Weekday Daytime" period. There were 229 calls during the "Weekday Nights" period, with 1.5 inbound calls per week. "Total Weekend" calls were 10.2% of the total calls, equating to a weekly average of 8.8. Our experience is that direct physician-patient telephone contact is feasible with a panel of around 100 HF patients for one provider. If proper financial reimbursements are provided, physicians may be apt to participate in similar transitional care programs. Likewise, third-party payers will benefit from the reduction in unnecessary emergency room visits and hospitalizations.

  14. Probability in reasoning: a developmental test on conditionals.

    PubMed

    Barrouillet, Pierre; Gauffroy, Caroline

    2015-04-01

    Probabilistic theories have been claimed to constitute a new paradigm for the psychology of reasoning. A key assumption of these theories is captured by what they call the Equation, the hypothesis that the meaning of the conditional is probabilistic in nature and that the probability of If p then q is the conditional probability, in such a way that P(if p then q)=P(q|p). Using the probabilistic truth-table task, in which participants are required to evaluate the probability of If p then q sentences, the present study explored the pervasiveness of the Equation across ages (from early adolescence to adulthood), types of conditionals (basic, causal, and inducements), and contents. The results reveal that the Equation is a late developmental achievement, endorsed only by a narrow majority of educated adults for certain types of conditionals depending on the content they involve. Age-related changes in evaluating the probability of all the conditionals studied closely mirror the development of truth-value judgements observed in previous studies with traditional truth-table tasks. We argue that our modified mental model theory can account for this development, and hence for the findings from the probability task, which consequently do not support the probabilistic approach to human reasoning over alternative theories. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Model Verification and Validation Concepts for a Probabilistic Fracture Assessment Model to Predict Cracking of Knife Edge Seals in the Space Shuttle Main Engine High Pressure Oxidizer

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Riha, David S.

    2013-01-01

    Physics-based models are routinely used to predict the performance of engineered systems to make decisions such as when to retire system components, how to extend the life of an aging system, or whether a new design will be safe or available. Model verification and validation (V&V) is a process to establish credibility in model predictions. Ideally, carefully controlled validation experiments are designed and performed to validate models or submodels. In reality, time and cost constraints limit experiments and even model development. This paper describes elements of model V&V during the development and application of a probabilistic fracture assessment model to predict cracking in space shuttle main engine high-pressure oxidizer turbopump knife-edge seals. The objective of this effort was to assess the probability of initiating and growing a crack to a specified failure length in specific flight units for different usage and inspection scenarios. The probabilistic fracture assessment model developed in this investigation combined a series of submodels describing the usage, temperature history, flutter tendencies, tooth stresses and numbers of cycles, fatigue cracking, nondestructive inspection, and finally the probability of failure. The analysis accounted for unit-to-unit variations in temperature, flutter limit state, flutter stress magnitude, and fatigue life properties. The investigation focused on the calculation of relative risk between the usage scenarios rather than absolute risk. Verification predictions were first performed for three units with known usage and cracking histories to establish credibility in the model predictions. Then, numerous predictions were performed for an assortment of operating units that had flown recently or that were projected for future flights. Calculations were performed using two NASA-developed software tools: NESSUS® for the probabilistic analysis, and NASGRO® for the fracture mechanics analysis. The goal of these predictions was to provide additional information to guide decisions on the potential reuse of existing and installed units prior to the new design certification.

  16. Stan: A Probabilistic Programming Language for Bayesian Inference and Optimization

    ERIC Educational Resources Information Center

    Gelman, Andrew; Lee, Daniel; Guo, Jiqiang

    2015-01-01

    Stan is a free and open-source C++ program that performs Bayesian inference or optimization for arbitrary user-specified models. It can be called from the command line, R, Python, Matlab, or Julia, and it holds great promise for fitting large and complex statistical models in many areas of application. We discuss Stan from users' and developers'…
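
    As a minimal sketch of calling Stan from Python (this assumes the CmdStanPy interface and a working CmdStan installation; the model is the standard Bernoulli example written in recent Stan array syntax, not code from the article):

      # Minimal CmdStanPy sketch (assumes CmdStanPy + CmdStan are installed).
      from pathlib import Path
      from cmdstanpy import CmdStanModel

      stan_code = """
      data { int<lower=0> N; array[N] int<lower=0, upper=1> y; }
      parameters { real<lower=0, upper=1> theta; }
      model { theta ~ beta(1, 1); y ~ bernoulli(theta); }
      """
      Path("bernoulli.stan").write_text(stan_code)

      model = CmdStanModel(stan_file="bernoulli.stan")   # compiles the model
      fit = model.sample(data={"N": 10, "y": [0, 1, 0, 0, 0, 0, 0, 0, 0, 1]},
                         chains=4, iter_sampling=1000, seed=1)
      print(fit.summary())                               # posterior summary for theta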

  17. Reliability-based management of buried pipelines considering external corrosion defects

    NASA Astrophysics Data System (ADS)

    Miran, Seyedeh Azadeh

    Corrosion is one of the main deterioration mechanisms that degrade energy pipeline integrity, owing to the transport of corrosive fluid or gas and interaction with a corrosive environment. Corrosion defects are usually detected by periodic inspections using in-line inspection (ILI) methods. In order to ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed approach is applied to a cathodically protected buried steel pipeline located in Mexico. First, a time-dependent power-law formulation is adopted to probabilistically characterize the growth of the maximum depth and length of external corrosion defects. Dependency between defect depth and length is considered in the model development, and the generation of corrosion defects over time is characterized by a homogeneous Poisson process. The growth models' unknown parameters are evaluated from the ILI data through Bayesian updating with the Markov Chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and they can account for defects newly generated since the last inspection. Results of this part of the study show that both the depth and length growth models predict damage quantities reasonably well, and a strong correlation between defect depth and length is found. Next, time-dependent system failure probabilities are evaluated using the developed corrosion growth models, considering the prevailing uncertainties, where three failure modes, namely small leak, large leak and rupture, are considered. Performance of the pipeline is evaluated through the failure probability per km (called a sub-system), where each sub-system is treated as a series system of the detected and newly generated defects within that sub-system. A sensitivity analysis is also performed to determine the growth-model parameters to which the reliability of the studied pipeline is most sensitive. The reliability analysis results suggest that newly generated defects should be considered in calculating the failure probability, especially for predicting the long-term performance of the pipeline, and that the impact of statistical uncertainty in the model parameters is significant and should be considered in the reliability analysis. Finally, with the evaluated time-dependent failure probabilities, a life-cycle cost analysis is conducted to determine the optimal inspection interval of the studied pipeline. The expected total life-cycle cost consists of the construction cost and the expected costs of inspection, repair, and failure. A repair is conducted when the failure probability for any of the described failure modes exceeds a pre-defined probability threshold after an inspection. Moreover, this study also investigates the impact of repair threshold values and unit costs of inspection and failure on the expected total life-cycle cost and the optimal inspection interval through a parametric study. The analysis suggests that a smaller inspection interval leads to higher inspection costs but can lower the failure cost, and that the repair cost is less significant than the inspection and failure costs.
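
    The flavor of this approach can be sketched as follows (all parameter values, distributions, and the 80%-of-wall-thickness leak criterion are assumptions made for illustration, not the thesis data):

      # Illustrative sketch: power-law growth of external corrosion depth,
      # d(t) = a * (t - t0)^b, with uncertain a and b, plus Poisson generation
      # of new defects; estimates the probability that any defect exceeds
      # 80% of wall thickness (an assumed "small leak" criterion).
      import numpy as np

      rng = np.random.default_rng(2)
      wall = 9.5                 # wall thickness [mm], assumed
      t_now, t0 = 30.0, 5.0      # pipe age and defect initiation time [yr], assumed
      n_sims, n_existing = 100_000, 20
      rate_new = 0.5             # new defects per year per km, assumed

      fails = 0
      for _ in range(n_sims):
          # existing (detected) defects
          a = rng.lognormal(mean=np.log(0.35), sigma=0.3, size=n_existing)
          b = rng.normal(0.9, 0.1, size=n_existing)
          depth = a * (t_now - t0) ** b
          # defects newly generated since t0, arriving as a Poisson process
          n_new = rng.poisson(rate_new * (t_now - t0))
          if n_new > 0:
              birth = rng.uniform(t0, t_now, size=n_new)
              a_new = rng.lognormal(np.log(0.35), 0.3, size=n_new)
              b_new = rng.normal(0.9, 0.1, size=n_new)
              depth = np.concatenate([depth, a_new * (t_now - birth) ** b_new])
          fails += np.any(depth > 0.8 * wall)

      print(f"estimated P(small leak) per km at year {t_now:.0f}: {fails / n_sims:.4f}")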

  18. Life Prediction of Spent Fuel Storage Canister Material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballinger, Ronald

    The original purpose of this project was to develop a probabilistic model for SCC-induced failure of spent fuel storage canisters, exposed to a salt-air environment in the temperature range 30-70°C for periods up to and exceeding 100 years. The nature of this degradation process, which involves multiple degradation mechanisms, combined with variable and uncertain environmental conditions dictates a probabilistic approach to life prediction. A final report for the original portion of the project was submitted earlier. However, residual stress measurements for as-welded and repair welds could not be performed within the original time of the project. As a result of this, a no-cost extension was granted in order to complete these tests. In this report, we report on the results of residual stress measurements.

  19. Award-Winning CARES/Life Ceramics Durability Evaluation Software Is Making Advanced Technology Accessible

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software developed at the NASA Lewis Research Center eases this task by providing a tool that uses probabilistic reliability analysis techniques to optimize the design and manufacture of brittle material components. CARES/Life is an integrated package that predicts the probability of a monolithic ceramic component's failure as a function of its time in service. It couples commercial finite element programs, which resolve a component's temperature and stress distribution, with reliability evaluation and fracture mechanics routines for modeling strength-limiting defects. These routines are based on calculations of the probabilistic nature of the brittle material's strength.
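
    Reliability evaluations of monolithic ceramics of this kind are commonly based on Weibull weakest-link statistics for brittle strength. A representative two-parameter, volume-flaw form (a generic textbook statement, not necessarily the exact formulation implemented in CARES/Life) for the failure probability of a component with stress field \sigma(x) over volume V is

      \[
        P_f \;=\; 1 - \exp\!\left[-\int_V \left(\frac{\sigma(x)}{\sigma_0}\right)^{m} dV\right],
      \]

    where m is the Weibull modulus and \sigma_0 is a characteristic strength scale; a larger m corresponds to less scatter in strength.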

  20. International Space Station End-of-Life Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    Duncan, Gary W.

    2014-01-01

    The International Space Station (ISS) end-of-life (EOL) cycle is currently scheduled for 2020, although there are ongoing efforts to extend the ISS life cycle through 2028. The EOL for the ISS will require deorbiting the ISS. This will be the largest manmade object ever to be deorbited; therefore, safely deorbiting the station will be a very complex problem. This process is being planned by NASA and its international partners. Numerous factors will need to be considered to accomplish this, such as target corridors, orbits, altitude, drag, maneuvering capabilities, etc. The ISS EOL Probabilistic Risk Assessment (PRA) will play a part in this process by estimating the reliability of the hardware supplying the maneuvering capabilities. The PRA will model the probability of failure of the systems supplying and controlling the thrust needed to aid in the deorbit maneuvering.

  1. Direct Telephonic Communication in a Heart Failure Transitional Care Program: An observational study

    PubMed Central

    Ota, Ken S.; Beutler, David S.; Sheikh, Hassam; Weiss, Jessica L.; Parkinson, Dallin; Nguyen, Peter; Gerkin, Richard D.; Loli, Akil I.

    2013-01-01

    Background This study investigated the trend of phone calls in the Banner Good Samaritan Medical Center (BGSMC) Heart Failure Transitional Care Program (HFTCP). The primary goal of the HFTCP is to reduce 30-day readmissions for heart failure patients by using a multi-pronged approach. Methods This study included 104 patients in the HFTCP discharged over a 51-week period who had around-the-clock telephone access to the Transitionalist. Cellular phone records were reviewed, and the length and timing of calls were evaluated. Results A total of 4398 telephone calls were recorded, of which 39% were inbound and 61% were outbound, averaging 86 calls per week. Eighty-five percent of the total calls were made during the “Weekday Daytime” period. There were 229 calls during the “Weekday Nights” period, with 1.5 inbound calls per week. “Total Weekend” calls were 10.2% of the total calls, equating to a weekly average of 8.8. Conclusions Our experience is that direct physician-patient telephone contact is feasible with a panel of around 100 HF patients for one provider. If proper financial reimbursements are provided, physicians may be apt to participate in similar transitional care programs. Likewise, third-party payers will benefit from the reduction in unnecessary emergency room visits and hospitalizations. PMID:28352437

  2. A comprehensive Probabilistic Tsunami Hazard Assessment for the city of Naples (Italy)

    NASA Astrophysics Data System (ADS)

    Anita, G.; Tonini, R.; Selva, J.; Sandri, L.; Pierdominici, S.; Faenza, L.; Zaccarelli, L.

    2012-12-01

    A comprehensive Probabilistic Tsunami Hazard Assessment (PTHA) should consider different tsunamigenic sources (seismic events, slide failures, volcanic eruptions) to calculate the hazard at given target sites. This implies a multi-disciplinary analysis of all natural tsunamigenic sources, in a multi-hazard/risk framework that also considers the effects of interaction/cascade events. Our work reflects the ongoing effort to produce a comprehensive PTHA for the city of Naples (Italy) including all types of sources located in the Tyrrhenian Sea, as developed within the Italian project ByMuR (Bayesian Multi-Risk Assessment). The project combines a multi-hazard/risk approach to treat the interactions among different hazards with a Bayesian approach to handle the uncertainties. The natural potential tsunamigenic sources analyzed are: 1) submarine seismic sources located on active faults in the Tyrrhenian Sea and close to the southern Italian shoreline (we also consider the effects of the inshore seismic sources and the associated active faults, for which we provide rupture properties), 2) mass failures and collapses around the target area (spatially identified on the basis of their propensity to failure), and 3) volcanic sources, mainly pyroclastic flows and collapses from the volcanoes in the Neapolitan area (Vesuvius, Campi Flegrei and Ischia). All these natural sources are preliminarily analyzed and combined here in order to provide a complete picture of a PTHA for the city of Naples. In addition, the treatment of interaction/cascade effects is formally discussed for the case of significant temporary variations in the short-term PTHA due to an earthquake.

  3. Auxiliary feedwater system risk-based inspection guide for the Salem Nuclear Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pugh, R.; Gore, B.F.; Vo, T.V.

    In a study sponsored by the US Nuclear Regulatory Commission (NRC), Pacific Northwest Laboratory has developed and applied a methodology for deriving plant-specific risk-based inspection guidance for the auxiliary feedwater (AFW) system at pressurized water reactors that have not undergone probabilistic risk assessment (PRA). This methodology uses existing PRA results and plant operating experience information. Existing PRA-based inspection guidance information recently developed for the NRC for various plants was used to identify generic component failure modes. This information was then combined with plant-specific and industry-wide component information and failure data to identify failure modes and failure mechanisms for the AFW system at the selected plants. Salem was selected as the fifth plant for study. The product of this effort is a prioritized listing of AFW failures which have occurred at the plant and at other PWRs. This listing is intended for use by NRC inspectors in the preparation of inspection plans addressing AFW risk-important components at the Salem plant. 23 refs., 1 fig., 1 tab.

  4. Using Generic Data to Establish Dormancy Failure Rates

    NASA Technical Reports Server (NTRS)

    Reistle, Bruce

    2014-01-01

    Many hardware items are dormant prior to being operated. The dormant period might be especially long, for example during missions to the moon or Mars. In missions with long dormant periods the risk incurred during dormancy can exceed the active risk contribution. Probabilistic Risk Assessments (PRAs) need to account for the dormant risk contribution as well as the active contribution. A typical method for calculating a dormant failure rate is to multiply the active failure rate by a constant, the dormancy factor. For example, some practitioners use a heuristic and divide the active failure rate by 30 to obtain an estimate of the dormant failure rate. To obtain a more empirical estimate of the dormancy factor, this paper uses the recently updated database NPRD-2011 [1] to arrive at a set of distributions for the dormancy factor. The resulting dormancy factor distributions are significantly different depending on whether the item is electrical, mechanical, or electro-mechanical. Additionally, this paper will show that using a heuristic constant fails to capture the uncertainty of the possible dormancy factors.
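
    The difference between the fixed heuristic and a distributional treatment of the dormancy factor can be sketched as follows (a minimal illustration; the lognormal shape, its parameters, and the active failure rate are assumptions, not values from NPRD-2011 or the paper):

      # Illustrative comparison (assumed numbers): a fixed "divide the active
      # rate by 30" heuristic vs. propagating uncertainty in the dormancy factor.
      import numpy as np

      rng = np.random.default_rng(7)
      active_rate = 1.0e-5            # active failure rate [failures/hr], assumed
      heuristic = active_rate / 30.0  # point-estimate dormant rate

      # Assumed lognormal over the dormancy factor (median 30, broad spread)
      factor = rng.lognormal(mean=np.log(30.0), sigma=0.8, size=200_000)
      dormant = active_rate / factor

      lo, med, hi = np.percentile(dormant, [5, 50, 95])
      print(f"heuristic dormant rate : {heuristic:.2e} /hr")
      print(f"sampled dormant rate   : median {med:.2e}, 5th-95th pct [{lo:.2e}, {hi:.2e}] /hr")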

  5. Probabilistic Plan Management

    DTIC Science & Technology

    2009-11-17

    set of chains, the step adds scheduled methods that have an a priori likelihood of a failure outcome (Lines 3-5). It identifies the max eul value of the...activity meeting its objective, as well as its expected contribution to the schedule. By explicitly calculating these values, PADS is able to summarize the...variables. One of the main difficulties of this model is convolving the probability density functions and value functions while solving the model; this

  6. Probabilistic Analysis of a SiC/SiC Ceramic Matrix Composite Turbine Vane

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Nemeth, Noel N.; Brewer, David N.; Mital, Subodh

    2004-01-01

    To demonstrate the advanced composite materials technology under development within the Ultra-Efficient Engine Technology (UEET) Program, it was planned to fabricate, test, and analyze a turbine vane made entirely of silicon carbide-fiber-reinforced silicon carbide matrix composite (SiC/SiC CMC) material. The objective was to utilize a five-harness satin weave melt-infiltrated (MI) SiC/SiC composite material developed under this program to design and fabricate a stator vane that can endure 1000 hours of engine service conditions. The vane was designed such that the expected maximum stresses were kept within the proportional limit strength of the material. Any violation of this design requirement was considered a failure. This report presents the results of a probabilistic analysis and reliability assessment of the vane. The probability of failing to meet the design requirements was computed. In the analysis, material properties, strength, and pressure loading were considered random variables. The pressure loads were considered normally distributed with a nominal variation. A temperature profile on the vane was obtained by performing a computational fluid dynamics (CFD) analysis and was assumed to be deterministic. The results suggest that, for the current vane design, the chance of not meeting the design requirements is about 1.6 percent.
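
    For the simple stress-versus-proportional-limit check described above, if the governing stress S and the limiting strength R are both modeled as independent normal random variables (an assumption made here for illustration; the report's actual analysis uses its own distributions and probabilistic structural code), the probability of not meeting the requirement can be expressed through the standard reliability index:

      \[
        \beta = \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^2 + \sigma_S^2}},
        \qquad
        P_f = \Phi(-\beta),
      \]

    where \Phi is the standard normal cumulative distribution function.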

  7. Review of reactor pressure vessel evaluation report for Yankee Rowe Nuclear Power Station (YAEC No. 1735)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheverton, R.D.; Dickson, T.L.; Merkle, J.G.

    1992-03-01

    The Yankee Atomic Electric Company has performed an Integrated Pressurized Thermal Shock (IPTS)-type evaluation of the Yankee Rowe reactor pressure vessel in accordance with the PTS Rule (10 CFR 50.61) and US Regulatory Guide 1.154. The Oak Ridge National Laboratory (ORNL) reviewed the YAEC document and performed an independent probabilistic fracture-mechanics analysis. The review included a comparison of the Pacific Northwest Laboratory (PNL) and the ORNL probabilistic fracture-mechanics codes (VISA-II and OCA-P, respectively). The review identified minor errors and one significant difference in philosophy. Also, the two codes have a few dissimilar peripheral features. Aside from these differences, VISA-II and OCA-P are very similar and, with errors corrected and when adjusted for the difference in the treatment of the fracture toughness distribution through the wall, yield essentially the same value of the conditional probability of failure. The ORNL independent evaluation indicated RT_NDT values considerably greater than those corresponding to the PTS-Rule screening criteria and a frequency of failure substantially greater than that corresponding to the "primary acceptance criterion" in US Regulatory Guide 1.154. Time constraints, however, prevented as rigorous a treatment as the situation deserves. Thus, these results are very preliminary.

  8. Review of reactor pressure vessel evaluation report for Yankee Rowe Nuclear Power Station (YAEC No. 1735)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheverton, R.D.; Dickson, T.L.; Merkle, J.G.

    1992-03-01

    The Yankee Atomic Electric Company has performed an Integrated Pressurized Thermal Shock (IPTS)-type evaluation of the Yankee Rowe reactor pressure vessel in accordance with the PTS Rule (10 CFR 50.61) and US Regulatory Guide 1.154. The Oak Ridge National Laboratory (ORNL) reviewed the YAEC document and performed an independent probabilistic fracture-mechanics analysis. The review included a comparison of the Pacific Northwest Laboratory (PNL) and the ORNL probabilistic fracture-mechanics codes (VISA-II and OCA-P, respectively). The review identified minor errors and one significant difference in philosophy. Also, the two codes have a few dissimilar peripheral features. Aside from these differences, VISA-II and OCA-P are very similar and, with errors corrected and when adjusted for the difference in the treatment of the fracture toughness distribution through the wall, yield essentially the same value of the conditional probability of failure. The ORNL independent evaluation indicated RT_NDT values considerably greater than those corresponding to the PTS-Rule screening criteria and a frequency of failure substantially greater than that corresponding to the "primary acceptance criterion" in US Regulatory Guide 1.154. Time constraints, however, prevented as rigorous a treatment as the situation deserves. Thus, these results are very preliminary.

  9. Improved detection of congestive heart failure via probabilistic symbolic pattern recognition and heart rate variability metrics.

    PubMed

    Mahajan, Ruhi; Viangteeravat, Teeradache; Akbilgic, Oguz

    2017-12-01

    A timely diagnosis of congestive heart failure (CHF) is crucial to evade a life-threatening event. This paper presents a novel probabilistic symbol pattern recognition (PSPR) approach to detect CHF in subjects from their cardiac interbeat (R-R) intervals. PSPR discretizes each continuous R-R interval time series by mapping them onto an eight-symbol alphabet and then models the pattern transition behavior in the symbolic representation of the series. The PSPR-based analysis of the discretized series from 107 subjects (69 normal and 38 CHF subjects) yielded discernible features to distinguish normal subjects and subjects with CHF. In addition to PSPR features, we also extracted features using the time-domain heart rate variability measures such as average and standard deviation of R-R intervals. An ensemble of bagged decision trees was used to classify two groups resulting in a five-fold cross-validation accuracy, specificity, and sensitivity of 98.1%, 100%, and 94.7%, respectively. However, a 20% holdout validation yielded an accuracy, specificity, and sensitivity of 99.5%, 100%, and 98.57%, respectively. Results from this study suggest that features obtained with the combination of PSPR and long-term heart rate variability measures can be used in developing automated CHF diagnosis tools. Copyright © 2017 Elsevier B.V. All rights reserved.
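
    A minimal sketch of the symbolization step is shown below (quantile-based binning into eight symbols and a first-order transition matrix are illustrative choices; the paper's actual PSPR mapping and features may differ, and the R-R series here is synthetic):

      # Illustrative sketch: map R-R intervals onto an 8-symbol alphabet
      # (quantile bins, assumed) and estimate the symbol transition matrix.
      import numpy as np

      rng = np.random.default_rng(3)
      rr = 0.8 + 0.05 * rng.standard_normal(5000)           # synthetic R-R intervals [s]

      edges = np.quantile(rr, np.linspace(0, 1, 9)[1:-1])   # 7 internal cut points
      symbols = np.digitize(rr, edges)                       # symbol values 0..7

      trans = np.zeros((8, 8))
      for s_from, s_to in zip(symbols[:-1], symbols[1:]):
          trans[s_from, s_to] += 1
      trans /= trans.sum(axis=1, keepdims=True)              # row-normalize to probabilities

      print("transition matrix row sums:", trans.sum(axis=1).round(3))
      print("most likely successor of symbol 0:", trans[0].argmax())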

  10. Probabilistic pipe fracture evaluations for leak-rate-detection applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rahman, S.; Ghadiali, N.; Paul, D.

    1995-04-01

    Regulatory Guide 1.45, "Reactor Coolant Pressure Boundary Leakage Detection Systems," was published by the U.S. Nuclear Regulatory Commission (NRC) in May 1973, and provides guidance on leak detection methods and system requirements for Light Water Reactors. Additionally, leak detection limits are specified in plant Technical Specifications and are different for Boiling Water Reactors (BWRs) and Pressurized Water Reactors (PWRs). These leak detection limits are also used in leak-before-break evaluations performed in accordance with Draft Standard Review Plan, Section 3.6.3, "Leak Before Break Evaluation Procedures," where a margin of 10 on the leak detection limit is used in determining the crack size considered in subsequent fracture analyses. This study was requested by the NRC to: (1) evaluate the conditional failure probability for BWR and PWR piping for pipes that were leaking at the allowable leak detection limit, and (2) evaluate the margin of 10 to determine if it was unnecessarily large. A probabilistic approach was undertaken to conduct fracture evaluations of circumferentially cracked pipes for leak-rate-detection applications. Sixteen nuclear piping systems in BWR and PWR plants were analyzed to evaluate conditional failure probability and effects of crack-morphology variability on the current margins used in leak rate detection for leak-before-break.

  11. Probabilistic evaluation of uncertainties and risks in aerospace components

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Shiao, M. C.; Nagpal, V. K.; Chamis, C. C.

    1992-01-01

    This paper summarizes a methodology developed at NASA Lewis Research Center which computationally simulates the structural, material, and load uncertainties associated with Space Shuttle Main Engine (SSME) components. The methodology was applied to evaluate the scatter in static, buckling, dynamic, fatigue, and damage behavior of the SSME turbo pump blade. Also calculated are the probability densities of typical critical blade responses, such as effective stress, natural frequency, damage initiation, most probable damage path, etc. Risk assessments were performed for different failure modes, and the effect of material degradation on the fatigue and damage behaviors of a blade were calculated using a multi-factor interaction equation. Failure probabilities for different fatigue cycles were computed and the uncertainties associated with damage initiation and damage propagation due to different load cycle were quantified. Evaluations on the effects of mistuned blades on a rotor were made; uncertainties in the excitation frequency were found to significantly amplify the blade responses of a mistuned rotor. The effects of the number of blades on a rotor were studied. The autocorrelation function of displacements and the probability density function of the first passage time for deterministic and random barriers for structures subjected to random processes also were computed. A brief discussion was included on the future direction of probabilistic structural analysis.

  12. Sensor Based Engine Life Calculation: A Probabilistic Perspective

    NASA Technical Reports Server (NTRS)

    Guo, Ten-Huei; Chen, Philip

    2003-01-01

    It is generally known that an engine component will accumulate damage (life usage) during its lifetime of use in a harsh operating environment. The commonly used cycle count for engine component usage monitoring has an inherent range of uncertainty which can be overly costly or potentially less safe from an operational standpoint. With the advance of computer technology, engine operation modeling, and the understanding of damage accumulation physics, it is possible (and desirable) to use the available sensor information to make a more accurate assessment of engine component usage. This paper describes a probabilistic approach to quantify the effects of engine operating parameter uncertainties on the thermomechanical fatigue (TMF) life of a selected engine part. A closed-loop engine simulation with a TMF life model is used to calculate the life consumption of different mission cycles. A Monte Carlo simulation approach is used to generate the statistical life usage profile for different operating assumptions. The probabilities of failure of different operating conditions are compared to illustrate the importance of the engine component life calculation using sensor information. The results of this study clearly show that a sensor-based life cycle calculation can greatly reduce the risk of component failure as well as extend on-wing component life by avoiding unnecessary maintenance actions.

  13. External events analysis for the Savannah River Site K reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandyberry, M.D.; Wingo, H.E.

    1990-01-01

    The probabilistic external events analysis performed for the Savannah River Site K-reactor PRA considered many different events which are generally perceived to be "external" to the reactor and its systems, such as fires, floods, seismic events, and transportation accidents (as well as many others). Events which have been shown to be significant contributors to risk include seismic events, tornados, a crane failure scenario, fires and dam failures. The total contribution to the core melt frequency from external initiators has been found to be 2.2 x 10^-4 per year, of which seismic events are the major contributor (1.2 x 10^-4 per year). Fire-initiated events contribute 1.4 x 10^-7 per year, tornados 5.8 x 10^-7 per year, dam failures 1.5 x 10^-6 per year, and the crane failure scenario less than 10^-4 per year to the core melt frequency. 8 refs., 3 figs., 5 tabs.

  14. Predicting the Probability of Failure of Cementitious Sewer Pipes Using Stochastic Finite Element Method

    PubMed Central

    Alani, Amir M.; Faramarzi, Asaad

    2015-01-01

    In this paper, a stochastic finite element method (SFEM) is employed to investigate the probability of failure of cementitious buried sewer pipes subjected to the combined effect of corrosion and stresses. A non-linear time-dependent model is used to determine the extent of concrete corrosion. Using the SFEM, the effects of different random variables, including loads, pipe material, and corrosion, on the remaining safe life of cementitious sewer pipes are explored. A numerical example is presented to demonstrate the merit of the proposed SFEM in evaluating the effects of the contributing parameters upon the probability of failure of cementitious sewer pipes. The developed SFEM offers many advantages over traditional probabilistic techniques since it does not use any empirical equations in order to determine failure of pipes. The results of the SFEM can help the industries concerned (e.g., water companies) to better plan their resources by providing accurate predictions of the remaining safe life of cementitious sewer pipes. PMID:26068092

  15. The use of subjective expert opinions in cost optimum design of aerospace structures. [probabilistic failure models

    NASA Technical Reports Server (NTRS)

    Thomas, J. M.; Hanagud, S.

    1975-01-01

    The results of two questionnaires sent to engineering experts are statistically analyzed and compared with objective data from Saturn V design and testing. Engineers were asked how likely it was for structural failure to occur at load increments above and below analysts' stress limit predictions. They were requested to estimate the relative probabilities of different failure causes, and of failure at each load increment given a specific cause. Three mathematical models are constructed based on the experts' assessment of causes. The experts' overall assessment of prediction strength fits the Saturn V data better than the models do, but a model test option (T-3) based on the overall assessment gives more design change likelihood to overstrength structures than does an older standard test option. T-3 compares unfavorably with the standard option in a cost optimum structural design problem. The report reflects a need for subjective data when objective data are unavailable.

  16. Constructor theory of probability

    PubMed Central

    2016-01-01

    Unitary quantum theory, having no Born Rule, is non-probabilistic. Hence the notorious problem of reconciling it with the unpredictability and appearance of stochasticity in quantum measurements. Generalizing and improving upon the so-called ‘decision-theoretic approach’, I shall recast that problem in the recently proposed constructor theory of information—where quantum theory is represented as one of a class of superinformation theories, which are local, non-probabilistic theories conforming to certain constructor-theoretic conditions. I prove that the unpredictability of measurement outcomes (to which constructor theory gives an exact meaning) necessarily arises in superinformation theories. Then I explain how the appearance of stochasticity in (finitely many) repeated measurements can arise under superinformation theories. And I establish sufficient conditions for a superinformation theory to inform decisions (made under it) as if it were probabilistic, via a Deutsch–Wallace-type argument—thus defining a class of decision-supporting superinformation theories. This broadens the domain of applicability of that argument to cover constructor-theory compliant theories. In addition, in this version some of the argument's assumptions, previously construed as merely decision-theoretic, follow from physical properties expressed by constructor-theoretic principles. PMID:27616914

  17. Probabilistic Cellular Automata

    PubMed Central

    Agapie, Alexandru; Giuclea, Marius

    2014-01-01

    Cellular automata are binary lattices used for modeling complex dynamical systems. The automaton evolves iteratively from one configuration to another, using some local transition rule based on the number of ones in the neighborhood of each cell. With respect to the number of cells allowed to change per iteration, we speak of either synchronous or asynchronous automata. If randomness is involved to some degree in the transition rule, we speak of probabilistic automata, otherwise they are called deterministic. With either type of cellular automaton we are dealing with, the main theoretical challenge stays the same: starting from an arbitrary initial configuration, predict (with highest accuracy) the end configuration. If the automaton is deterministic, the outcome simplifies to one of two configurations, all zeros or all ones. If the automaton is probabilistic, the whole process is modeled by a finite homogeneous Markov chain, and the outcome is the corresponding stationary distribution. Based on our previous results for the asynchronous case—connecting the probability of a configuration in the stationary distribution to its number of zero-one borders—the article offers both numerical and theoretical insight into the long-term behavior of synchronous cellular automata. PMID:24999557
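
    A minimal synchronous probabilistic cellular automaton can be simulated as follows (the specific transition rule, ring topology, and parameters are illustrative assumptions, not the article's model):

      # Minimal synchronous probabilistic cellular automaton on a ring:
      # each cell becomes 1 with probability proportional to the number of
      # ones among itself and its two neighbours (illustrative rule).
      import numpy as np

      rng = np.random.default_rng(5)
      n_cells, n_steps = 64, 200
      state = rng.integers(0, 2, size=n_cells)

      for _ in range(n_steps):
          ones = state + np.roll(state, 1) + np.roll(state, -1)   # neighbourhood counts 0..3
          p_one = ones / 3.0                                       # transition probability
          state = (rng.random(n_cells) < p_one).astype(int)        # synchronous update

      print("final fraction of ones:", state.mean())

    With this particular rule the all-zeros and all-ones configurations are absorbing, so long runs typically end in one of the two, mirroring the deterministic limiting behavior described above.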

  18. Probabilistic cellular automata.

    PubMed

    Agapie, Alexandru; Andreica, Anca; Giuclea, Marius

    2014-09-01

    Cellular automata are binary lattices used for modeling complex dynamical systems. The automaton evolves iteratively from one configuration to another, using some local transition rule based on the number of ones in the neighborhood of each cell. With respect to the number of cells allowed to change per iteration, we speak of either synchronous or asynchronous automata. If randomness is involved to some degree in the transition rule, we speak of probabilistic automata, otherwise they are called deterministic. With either type of cellular automaton we are dealing with, the main theoretical challenge stays the same: starting from an arbitrary initial configuration, predict (with highest accuracy) the end configuration. If the automaton is deterministic, the outcome simplifies to one of two configurations, all zeros or all ones. If the automaton is probabilistic, the whole process is modeled by a finite homogeneous Markov chain, and the outcome is the corresponding stationary distribution. Based on our previous results for the asynchronous case-connecting the probability of a configuration in the stationary distribution to its number of zero-one borders-the article offers both numerical and theoretical insight into the long-term behavior of synchronous cellular automata.

  19. On the limits of probabilistic forecasting in nonlinear time series analysis II: Differential entropy.

    PubMed

    Amigó, José M; Hirata, Yoshito; Aihara, Kazuyuki

    2017-08-01

    In a previous paper, the authors studied the limits of probabilistic prediction in nonlinear time series analysis in a perfect model scenario, i.e., in the ideal case that the uncertainty of an otherwise deterministic model is due only to the finite precision of the observations. The model consisted of the symbolic dynamics of a measure-preserving transformation with respect to a finite partition of the state space, and the quality of the predictions was measured by the so-called ignorance score, which is a conditional entropy. In practice, though, partitions are dispensed with by considering numerical and experimental data to be continuous, which prompts us to trade off in this paper the Shannon entropy for the differential entropy. Despite technical differences, we show that the core of the previous results also holds in this extended scenario for sufficiently high precision. The corresponding imperfect model scenario is revisited too because it is relevant for applications. The theoretical part and its application to probabilistic forecasting are illustrated with numerical simulations and a new prediction algorithm.
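
    For reference, the differential entropy that replaces the Shannon entropy here is, in standard notation (a general definition rather than the paper's specific derivation),

      \[
        h(X) = -\int p(x)\,\log p(x)\,dx,
        \qquad
        h(X \mid Y) = -\iint p(x,y)\,\log p(x \mid y)\,dx\,dy,
      \]

    with the conditional form h(X | Y) playing the role of an ignorance-type score for probabilistic forecasts.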

  20. Integration of RAMS in LCC analysis for linear transport infrastructures. A case study for railways.

    NASA Astrophysics Data System (ADS)

    Calle-Cordón, Álvaro; Jiménez-Redondo, Noemi; Morales-Gámiz, F. J.; García-Villena, F. A.; Garmabaki, Amir H. S.; Odelius, Johan

    2017-09-01

    Life-cycle cost (LCC) analysis is an economic technique used to assess the total costs associated with the lifetime of a system in order to support decision making in long term strategic planning. For complex systems, such as railway and road infrastructures, the cost of maintenance plays an important role in the LCC analysis. Costs associated with maintenance interventions can be more reliably estimated by integrating the probabilistic nature of the failures associated to these interventions in the LCC models. Reliability, Maintainability, Availability and Safety (RAMS) parameters describe the maintenance needs of an asset in a quantitative way by using probabilistic information extracted from registered maintenance activities. Therefore, the integration of RAMS in the LCC analysis allows obtaining reliable predictions of system maintenance costs and the dependencies of these costs with specific cost drivers through sensitivity analyses. This paper presents an innovative approach for a combined RAMS & LCC methodology for railway and road transport infrastructures being developed under the on-going H2020 project INFRALERT. Such RAMS & LCC analysis provides relevant probabilistic information to be used for condition and risk-based planning of maintenance activities as well as for decision support in long term strategic investment planning.

  1. Hazard function analysis for flood planning under nonstationarity

    NASA Astrophysics Data System (ADS)

    Read, Laura K.; Vogel, Richard M.

    2016-05-01

    The field of hazard function analysis (HFA) involves a probabilistic assessment of the "time to failure" or "return period," T, of an event of interest. HFA is used in epidemiology, manufacturing, medicine, actuarial statistics, reliability engineering, economics, and elsewhere. For a stationary process, the probability distribution function (pdf) of the return period always follows an exponential distribution; the same is not true for nonstationary processes. When the process of interest, X, exhibits nonstationary behavior, HFA can provide a complementary approach to risk analysis with analytical tools particularly useful for hydrological applications. After a general introduction to HFA, we describe a new mathematical linkage between the magnitude of the flood event, X, and its return period, T, for nonstationary processes. We derive the probabilistic properties of T for a nonstationary one-parameter exponential model of X, and then use both Monte-Carlo simulation and HFA to generalize the behavior of T when X arises from a nonstationary two-parameter lognormal distribution. For this case, our findings suggest that a two-parameter Weibull distribution provides a reasonable approximation for the pdf of T. We document how HFA can provide an alternative approach to characterize the probabilistic properties of both nonstationary flood series and the resulting pdf of T.
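
    In standard survival-analysis notation (a general definition, not the authors' specific derivation), the hazard function of the return period T, with density f_T and distribution function F_T, is

      \[
        \lambda(t) \;=\; \frac{f_T(t)}{1 - F_T(t)},
      \]

    so that for a stationary process, where T is exponentially distributed, \lambda(t) is constant, whereas nonstationarity in X makes \lambda(t) vary with time.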

  2. Convergence Properties of a Class of Probabilistic Adaptive Schemes Called Sequential Reproductive Plans. Psychology and Education Series, Technical Report No. 210.

    ERIC Educational Resources Information Center

    Martin, Nancy

    Presented is a technical report concerning the use of a mathematical model describing certain aspects of the duplication and selection processes in natural genetic adaptation. This reproductive plan/model occurs in artificial genetics (the use of ideas from genetics to develop general problem solving techniques for computers). The reproductive…

  3. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is between one and two orders of magnitude faster than the HFS solver.

  4. Bias Characterization in Probabilistic Genotype Data and Improved Signal Detection with Multiple Imputation

    PubMed Central

    Palmer, Cameron; Pe’er, Itsik

    2016-01-01

    Missing data are an unavoidable component of modern statistical genetics. Different array or sequencing technologies cover different single nucleotide polymorphisms (SNPs), leading to a complicated mosaic pattern of missingness where both individual genotypes and entire SNPs are sporadically absent. Such missing data patterns cannot be ignored without introducing bias, yet cannot be inferred exclusively from nonmissing data. In genome-wide association studies, the accepted solution to missingness is to impute missing data using external reference haplotypes. The resulting probabilistic genotypes may be analyzed in the place of genotype calls. A general-purpose paradigm, called Multiple Imputation (MI), is known to model uncertainty in many contexts, yet it is not widely used in association studies. Here, we undertake a systematic evaluation of existing imputed data analysis methods and MI. We characterize biases related to uncertainty in association studies, and find that bias is introduced both at the imputation level, when imputation algorithms generate inconsistent genotype probabilities, and at the association level, when analysis methods inadequately model genotype uncertainty. We find that MI performs at least as well as existing methods or in some cases much better, and provides a straightforward paradigm for adapting existing genotype association methods to uncertain data. PMID:27310603

  5. Application of Probabilistic Methods for the Determination of an Economically Robust HSCT Configuration

    NASA Technical Reports Server (NTRS)

    Mavris, Dimitri N.; Bandte, Oliver; Schrage, Daniel P.

    1996-01-01

    This paper outlines an approach for the determination of economically viable robust design solutions using the High Speed Civil Transport (HSCT) as a case study. Furthermore, the paper states the advantages of a probability-based aircraft design over the traditional point design approach. It also proposes a new methodology called Robust Design Simulation (RDS) which treats customer satisfaction as the ultimate design objective. RDS is based on a probabilistic approach to aerospace systems design, which views the chosen objective as a distribution function introduced by so-called noise or uncertainty variables. Since the designer has no control over these variables, a variability distribution is defined for each one of them. The cumulative effect of all these distributions causes the overall variability of the objective function. For cases where the selected objective function depends heavily on these noise variables, it may be desirable to obtain a design solution that minimizes this dependence. The paper outlines a step-by-step approach on how to achieve such a solution for the HSCT case study and introduces an evaluation criterion which guarantees the highest customer satisfaction. This customer satisfaction is expressed by the probability of achieving objective function values less than a desired target value.
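
    The probabilistic criterion described above can be illustrated with a toy Monte Carlo calculation (not the RDS implementation): assign distributions to a few noise variables, propagate them through a surrogate economic objective, and estimate the probability that the objective stays below a target. The noise variables, their distributions, and the surrogate function are invented for illustration.

```python
# Hypothetical sketch of the RDS-style criterion: probability that an economic
# objective stays below a target, given distributions on noise variables.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Assumed noise variables (not from the paper): fuel price, utilization, load factor
fuel_price  = rng.normal(1.0, 0.15, n)          # relative to baseline
utilization = rng.uniform(0.85, 1.0, n)
load_factor = rng.triangular(0.55, 0.65, 0.75, n)

def cash_operating_cost(fp, ut, lf):
    """Toy surrogate for an economic objective (e.g., $ per seat-mile)."""
    return 0.06 * fp / (ut * lf)

target = 0.10
obj = cash_operating_cost(fuel_price, utilization, load_factor)
print(f"P(objective <= target) = {np.mean(obj <= target):.3f}")
```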

  6. Verification of recursive probabilistic integration (RPI) method for fatigue life management using non-destructive inspections

    NASA Astrophysics Data System (ADS)

    Chen, Tzikang J.; Shiao, Michael

    2016-04-01

    This paper verified a generic and efficient assessment concept for probabilistic fatigue life management. The concept is developed based on an integration of damage tolerance methodology, simulation methods, and a probabilistic algorithm, recursive probability integration (RPI), considering maintenance for damage tolerance and risk-based fatigue life management. RPI is an efficient semi-analytical probabilistic method for risk assessment subject to various uncertainties, such as the variability in material properties including crack growth rate, initial flaw size, repair quality, random-process modeling of flight loads for failure analysis, and inspection reliability represented by probability of detection (POD). In addition, unlike traditional Monte Carlo simulation (MCS), which requires a rerun of the MCS whenever the maintenance plan is changed, RPI can repeatedly reuse a small set of baseline random crack growth histories, excluding maintenance-related parameters, from a single MCS for various maintenance plans. In order to fully appreciate the RPI method, a verification procedure was performed. In this study, MC simulations on the order of several hundred billion samples were conducted for various flight conditions, material properties, inspection schedules, POD curves, and repair/replacement strategies. Since these MC simulations are time-consuming, they were conducted in parallel on DoD High Performance Computing (HPC) systems using a random number generator specialized for parallel computing. The study has shown that the RPI method is several orders of magnitude more efficient than traditional Monte Carlo simulation.
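
    For context, the sketch below shows the kind of brute-force Monte Carlo baseline that RPI is verified against (it is not the RPI algorithm itself): a random initial flaw and growth-rate scatter, closed-form Paris-law crack growth, and one scheduled inspection whose probability of detection may trigger a repair. All material constants, the POD curve, and the inspection schedule are assumptions.

```python
# Hypothetical brute-force Monte Carlo baseline (not the RPI algorithm): random
# initial flaw and growth rate, Paris-law crack growth, one inspection with POD.
import numpy as np

rng = np.random.default_rng(3)
n_sim = 200_000
n_cycles, inspect_at = 60_000, 30_000          # service life and inspection time (cycles)
a_crit, dS, m = 25.0, 100.0, 3.0               # critical size (mm), stress range, Paris exponent

def grow(a0, C, n):
    """Closed-form Paris growth for da/dN = C*(dS*sqrt(pi*a))**m with m != 2."""
    k = 1.0 - m / 2.0
    inner = a0**k + k * C * (dS * np.sqrt(np.pi))**m * n
    with np.errstate(divide="ignore", invalid="ignore"):
        a_n = np.abs(inner) ** (1.0 / k)
    return np.where(inner > 0.0, a_n, np.inf)  # non-positive "inner" means unstable growth

a0 = rng.lognormal(np.log(0.2), 0.3, n_sim)            # initial flaw size (mm)
C = 1e-11 * rng.lognormal(0.0, 0.2, n_sim)             # scatter in crack growth rate

a_mid = grow(a0, C, inspect_at)
failed_early = a_mid >= a_crit
pod = 1.0 - np.exp(-a_mid / 1.0)                       # assumed exponential POD curve (mm)
repaired = ~failed_early & (rng.random(n_sim) < pod)
a_mid = np.where(repaired, 0.2, a_mid)                 # repair restores a small flaw
a_end = grow(a_mid, C, n_cycles - inspect_at)
print(f"P(failure by {n_cycles} cycles) = {np.mean(failed_early | (a_end >= a_crit)):.3f}")
```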

  7. ZERO: probabilistic routing for deploy and forget Wireless Sensor Networks.

    PubMed

    Vilajosana, Xavier; Llosa, Jordi; Pacho, Jose Carlos; Vilajosana, Ignasi; Juan, Angel A; Vicario, Jose Lopez; Morell, Antoni

    2010-01-01

    As Wireless Sensor Networks are being adopted by industry and agriculture for large-scale and unattended deployments, the need for reliable and energy-conservative protocols becomes critical. Physical- and link-layer efforts for energy conservation are mostly not considered by routing protocols, which concentrate instead on maintaining reliability and throughput. Gradient-based routing protocols route data through the most reliable links, aiming to ensure 99% packet delivery. However, they suffer from the so-called "hot spot" problem: the most reliable routes exhaust their energy quickly, partitioning the network and reducing the monitored area. To cope with this "hot spot" problem we propose ZERO, a combined approach at the network and link layers that increases network lifespan while preserving reliability levels by means of probabilistic load-balancing techniques.

  8. Advanced probabilistic methods for quantifying the effects of various uncertainties in structural response

    NASA Technical Reports Server (NTRS)

    Nagpal, Vinod K.

    1988-01-01

    The effects of actual variations, also called uncertainties, in geometry and material properties on the structural response of a space shuttle main engine turbopump blade are evaluated. A normal distribution was assumed to represent the uncertainties statistically. Uncertainties were assumed to be totally random, partially correlated, and fully correlated. The magnitudes of these uncertainties were represented in terms of mean and variance. Blade responses, recorded in terms of displacements, natural frequencies, and maximum stress, were evaluated and plotted in the form of probabilistic distributions under combined uncertainties. These distributions provide an estimate of the range of magnitudes of the response and the probability of occurrence of a given response. Most importantly, they provide the information needed to estimate quantitatively the risk in a structural design.
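
    A minimal sketch of the propagation step described above (not the original turbopump-blade analysis): partially correlated normal uncertainties in geometry and material properties are sampled and pushed through a toy frequency response, giving a probabilistic distribution of the response. The means, coefficients of variation, correlation, and surrogate formula are assumptions.

```python
# Hypothetical sketch: propagate partially correlated normal uncertainties in
# geometry and material properties through a toy response function.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

# Assumed means, coefficients of variation, and a partial correlation of 0.5
mean = np.array([2.0e-3, 2.1e11, 8.2e3])        # thickness (m), E (Pa), density (kg/m^3)
cov_frac = np.array([0.03, 0.05, 0.02])
rho = np.array([[1.0, 0.5, 0.0],
                [0.5, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
sd = mean * cov_frac
cov = rho * np.outer(sd, sd)
t, E, dens = rng.multivariate_normal(mean, cov, size=n).T

# Toy surrogate: first bending frequency of a cantilevered rectangular section
L = 0.08                                         # assumed blade length (m)
freq = (1.875**2 / (2 * np.pi * L**2)) * np.sqrt(E * t**2 / (12 * dens))

print(f"frequency mean = {freq.mean():.0f} Hz, std = {freq.std():.0f} Hz")
print(f"P(freq < 95% of mean) = {np.mean(freq < 0.95 * freq.mean()):.3f}")
```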

  9. Probabilistic tsunami hazard assessment at Seaside, Oregon, for near-and far-field seismic sources

    USGS Publications Warehouse

    Gonzalez, F.I.; Geist, E.L.; Jaffe, B.; Kanoglu, U.; Mofjeld, H.; Synolakis, C.E.; Titov, V.V.; Arcas, D.; Bellomo, D.; Carlton, D.; Horning, T.; Johnson, J.; Newman, J.; Parsons, T.; Peters, R.; Peterson, C.; Priest, G.; Venturato, A.; Weber, J.; Wong, F.; Yalciner, A.

    2009-01-01

    The first probabilistic tsunami flooding maps have been developed. The methodology, called probabilistic tsunami hazard assessment (PTHA), integrates tsunami inundation modeling with methods of probabilistic seismic hazard assessment (PSHA). Application of the methodology to Seaside, Oregon, has yielded estimates of the spatial distribution of 100- and 500-year maximum tsunami amplitudes, i.e., amplitudes with 1% and 0.2% annual probability of exceedance. The 100-year tsunami is generated most frequently by far-field sources in the Alaska-Aleutian Subduction Zone and is characterized by maximum amplitudes that do not exceed 4 m, with an inland extent of less than 500 m. In contrast, the 500-year tsunami is dominated by local sources in the Cascadia Subduction Zone and is characterized by maximum amplitudes in excess of 10 m and an inland extent of more than 1 km. The primary sources of uncertainty in these results include those associated with interevent time estimates, modeling of background sea level, and accounting for temporal changes in bathymetry and topography. Nonetheless, PTHA represents an important contribution to tsunami hazard assessment techniques; viewed in the broader context of risk analysis, PTHA provides a method for quantifying estimates of the likelihood and severity of the tsunami hazard, which can then be combined with vulnerability and exposure to yield estimates of tsunami risk. Copyright 2009 by the American Geophysical Union.
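
    The aggregation at the heart of PTHA can be illustrated with a toy calculation (not the Seaside study itself): each source zone contributes its annual rate times the conditional probability that its tsunami amplitude exceeds a given level, and the 100- and 500-year amplitudes are read off the resulting annual exceedance curve. The source rates and amplitude distributions below are invented placeholders.

```python
# Hypothetical sketch of PTHA-style aggregation: Poissonian source rates combined
# with conditional wave-amplitude distributions at a single shoreline site.
import numpy as np
from scipy import stats

# Assumed source zones: (label, annual rate, median amplitude at site [m], lognormal sigma)
sources = [
    ("far-field (Alaska-Aleutian)", 1 / 100.0, 1.5, 0.6),
    ("local (Cascadia)",            1 / 300.0, 10.0, 0.4),
]

amps = np.linspace(0.5, 15.0, 200)
annual_rate = np.zeros_like(amps)
for _, rate, med, sig in sources:
    annual_rate += rate * stats.lognorm.sf(amps, s=sig, scale=med)

annual_prob = 1.0 - np.exp(-annual_rate)          # Poisson -> annual exceedance probability
for target in (0.01, 0.002):                      # 100-year and 500-year levels
    a = np.interp(target, annual_prob[::-1], amps[::-1])
    print(f"amplitude with {target:.1%} annual exceedance ~ {a:.1f} m")
```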

  10. Willingness-to-pay for a probabilistic flood forecast: a risk-based decision-making game

    NASA Astrophysics Data System (ADS)

    Arnal, Louise; Ramos, Maria-Helena; Coughlan de Perez, Erin; Cloke, Hannah Louise; Stephens, Elisabeth; Wetterhall, Fredrik; van Andel, Schalk Jan; Pappenberger, Florian

    2016-08-01

    Probabilistic hydro-meteorological forecasts have over the last decades been used more frequently to communicate forecast uncertainty. This uncertainty is twofold, as it constitutes both an added value and a challenge for the forecaster and the user of the forecasts. Many authors have demonstrated the added (economic) value of probabilistic over deterministic forecasts across the water sector (e.g. flood protection, hydroelectric power management and navigation). However, the richness of the information is also a source of challenges for operational uses, due partially to the difficulty in transforming the probability of occurrence of an event into a binary decision. This paper presents the results of a risk-based decision-making game on the topic of flood protection mitigation, called "How much are you prepared to pay for a forecast?". The game was played at several workshops in 2015, which were attended by operational forecasters and academics working in the field of hydro-meteorology. The aim of this game was to better understand the role of probabilistic forecasts in decision-making processes and their perceived value by decision-makers. Based on the participants' willingness-to-pay for a forecast, the results of the game show that the value (or the usefulness) of a forecast depends on several factors, including the way users perceive the quality of their forecasts and link it to the perception of their own performances as decision-makers.

  11. Boosting probabilistic graphical model inference by incorporating prior knowledge from multiple sources.

    PubMed

    Praveen, Paurush; Fröhlich, Holger

    2013-01-01

    Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available.
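
    The Noisy-OR combination mentioned above is simple enough to sketch directly; the snippet below is a generic Noisy-OR consensus prior for a single candidate edge (not the authors' implementation), where each knowledge source supports the edge with some confidence and a small leak term keeps the prior away from zero. The confidence values and leak are invented.

```python
# Hypothetical sketch of a Noisy-OR consensus prior for one candidate edge:
# each knowledge source s supports the edge with confidence w_s, and the prior
# probability of the edge is 1 - (1 - leak) * prod_s (1 - w_s).
from functools import reduce

def noisy_or_prior(source_confidences, leak=0.01):
    """Combine per-source support values in [0, 1] into an edge prior."""
    no_support = reduce(lambda acc, w: acc * (1.0 - w), source_confidences, 1.0)
    return 1.0 - (1.0 - leak) * no_support

# Example: pathway database, GO-term similarity, protein-domain evidence (invented values)
print(noisy_or_prior([0.8, 0.3, 0.0]))   # strong support from one source dominates
print(noisy_or_prior([0.2, 0.2, 0.2]))   # weak agreement accumulates more gently
```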

  12. A Probabilistic Tool that Aids Logistics Engineers in the Establishment of High Confidence Repair Need-Dates at the NASA Shuttle Logistics Depot

    NASA Technical Reports Server (NTRS)

    Bullington, J. V.; Winkler, J. C.; Linton, D. G.; Khajenoori, S.

    1995-01-01

    The NASA Shuttle Logistics Depot (NSLD) is tasked with the responsibility for repair and manufacture of Line Replaceable Unit (LRU) hardware and components to support the Space Shuttle Orbiter. Due to shrinking budgets, cost-effective repair of LRUs becomes a primary objective. To achieve this objective, it is imperative that resources be assigned to those LRUs which have the greatest expectation of being needed as a spare. Forecasting the times at which spares are needed requires consideration of many significant factors including failure rate, flight rate, spares availability, and desired level of support, among others. This paper summarizes the results of the research and development work that has been accomplished in producing an automated tool that assists in the assignment of effective repair start-times for LRUs at the NSLD. This system, called the Repair Start-time Assessment System (RSAS), uses probabilistic modeling technology to calculate a need date for a repair that considers the current repair pipeline status, as well as serviceable spares and projections of future demands. The output from the system is a date for beginning the repair that has significantly greater confidence (in the sense that a desired probability of support is ensured) than times produced using other techniques. Since an important output of RSAS is the longest repair turn-around time that will ensure a desired probability of support, RSAS has the potential for being applied to operations at any repair depot where spares are on hand and repair start-times are of interest. In addition, RSAS incorporates tenets of Just-in-Time (JIT) techniques in that the latest repair start-time (i.e., the latest time at which repair resources must be committed) may be calculated for every failed unit. This could reduce the spares inventory for certain items, without significantly increasing the risk of unsatisfied demand.
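
    A minimal sketch of the just-in-time idea behind RSAS (not the RSAS tool itself): given a distribution for the time until the next spare is demanded and a distribution of repair turn-around time, the latest acceptable repair start is the largest start time for which the probability of finishing before the demand still meets the desired support level. The demand and turn-around distributions and the support target below are assumptions.

```python
# Hypothetical sketch of the "latest start" idea: the latest repair start day s
# such that P(repair finishes before the next spare is needed) >= p_support.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# Assumed inputs (not NSLD data): exponential time to the next spare demand and a
# lognormal repair turn-around time, both in days.
next_demand_day = rng.exponential(scale=365.0, size=n)
turnaround_days = rng.lognormal(mean=np.log(20.0), sigma=0.3, size=n)

p_support = 0.90

def support_prob(start_day):
    return np.mean(start_day + turnaround_days <= next_demand_day)

feasible = [s for s in range(0, 120) if support_prob(s) >= p_support]
if feasible:
    print(f"latest repair start meeting {p_support:.0%} support: day {max(feasible)}")
else:
    print("no start day meets the support target")
```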

  13. Interactive Reliability Model for Whisker-toughened Ceramics

    NASA Technical Reports Server (NTRS)

    Palko, Joseph L.

    1993-01-01

    Wider use of ceramic matrix composites (CMC) will require the development of advanced structural analysis technologies. The use of an interactive model to predict the time-independent reliability of a component subjected to multiaxial loads is discussed. The deterministic, three-parameter Willam-Warnke failure criterion serves as the theoretical basis for the reliability model. The strength parameters defining the model are assumed to be random variables, thereby transforming the deterministic failure criterion into a probabilistic criterion. The ability of the model to account for multiaxial stress states with the same unified theory is an improvement over existing models. The new model was coupled with a public-domain finite element program through an integrated design program. This allows a design engineer to predict the probability of failure of a component. A simple structural problem is analyzed using the new model, and the results are compared to existing models.

  14. Cost-effectiveness of early compared to late inhaled nitric oxide therapy in near-term infants.

    PubMed

    Armstrong, Edward P; Dhanda, Rahul

    2010-12-01

    The purpose of this study was to determine the cost-effectiveness of early versus late inhaled nitric oxide (INO) therapy in neonates with hypoxic respiratory failure initially managed on conventional mechanical ventilation. A decision analytic model was created to compare the use of early INO compared to delayed INO for neonates receiving mechanical ventilation due to hypoxic respiratory failure. The perspective of the model was that of a hospital. Patients who did not respond to either early or delayed INO were assumed to have been treated with extracorporeal membrane oxygenation (ECMO). The effectiveness measure was defined as a neonate discharged alive without requiring ECMO therapy. A Monte Carlo simulation of 10,000 cases was conducted using first and second order probabilistic analysis. Direct medical costs that differed between early versus delayed INO treatment were estimated until time to hospital discharge. The proportion of successfully treated patients and costs were determined from the probabilistic sensitivity analysis. The mean (± SD) effectiveness rate for early INO was 0.75 (± 0.08) and 0.61 (± 0.09) for delayed INO. The mean hospital cost for early INO was $21,462 (± $2695) and $27,226 (± $3532) for delayed INO. In 87% of scenarios, early INO dominated delayed INO by being both more effective and less costly. The acceptability curve between products demonstrated that early INO had over a 90% probability of being the most cost-effective treatment across a wide range of willingness to pay values. This analysis indicated that early INO therapy was cost-effective in neonates with hypoxic respiratory failure requiring mechanical ventilation compared to delayed INO by reducing the probability of developing severe hypoxic respiratory failure. There was a 90% or higher probability that early INO was more cost-effective than delayed INO across a wide range of willingness to pay values in this analysis.

  15. Problem Solving as Probabilistic Inference with Subgoaling: Explaining Human Successes and Pitfalls in the Tower of Hanoi

    PubMed Central

    Donnarumma, Francesco; Maisto, Domenico; Pezzulo, Giovanni

    2016-01-01

    How do humans and other animals face novel problems for which predefined solutions are not available? Human problem solving links to flexible reasoning and inference rather than to slow trial-and-error learning. It has received considerable attention since the early days of cognitive science, giving rise to well known cognitive architectures such as SOAR and ACT-R, but its computational and brain mechanisms remain incompletely known. Furthermore, it is still unclear whether problem solving is a “specialized” domain or module of cognition, in the sense that it requires computations that are fundamentally different from those supporting perception and action systems. Here we advance a novel view of human problem solving as probabilistic inference with subgoaling. In this perspective, key insights from cognitive architectures are retained such as the importance of using subgoals to split problems into subproblems. However, here the underlying computations use probabilistic inference methods analogous to those that are increasingly popular in the study of perception and action systems. To test our model we focus on the widely used Tower of Hanoi (ToH) task, and show that our proposed method can reproduce characteristic idiosyncrasies of human problem solvers: their sensitivity to the “community structure” of the ToH and their difficulties in executing so-called “counterintuitive” movements. Our analysis reveals that subgoals have two key roles in probabilistic inference and problem solving. First, prior beliefs on (likely) useful subgoals carve the problem space and define an implicit metric for the problem at hand—a metric to which humans are sensitive. Second, subgoals are used as waypoints in the probabilistic problem solving inference and permit to find effective solutions that, when unavailable, lead to problem solving deficits. Our study thus suggests that a probabilistic inference scheme enhanced with subgoals provides a comprehensive framework to study problem solving and its deficits. PMID:27074140

  16. Problem Solving as Probabilistic Inference with Subgoaling: Explaining Human Successes and Pitfalls in the Tower of Hanoi.

    PubMed

    Donnarumma, Francesco; Maisto, Domenico; Pezzulo, Giovanni

    2016-04-01

    How do humans and other animals face novel problems for which predefined solutions are not available? Human problem solving links to flexible reasoning and inference rather than to slow trial-and-error learning. It has received considerable attention since the early days of cognitive science, giving rise to well known cognitive architectures such as SOAR and ACT-R, but its computational and brain mechanisms remain incompletely known. Furthermore, it is still unclear whether problem solving is a "specialized" domain or module of cognition, in the sense that it requires computations that are fundamentally different from those supporting perception and action systems. Here we advance a novel view of human problem solving as probabilistic inference with subgoaling. In this perspective, key insights from cognitive architectures are retained such as the importance of using subgoals to split problems into subproblems. However, here the underlying computations use probabilistic inference methods analogous to those that are increasingly popular in the study of perception and action systems. To test our model we focus on the widely used Tower of Hanoi (ToH) task, and show that our proposed method can reproduce characteristic idiosyncrasies of human problem solvers: their sensitivity to the "community structure" of the ToH and their difficulties in executing so-called "counterintuitive" movements. Our analysis reveals that subgoals have two key roles in probabilistic inference and problem solving. First, prior beliefs on (likely) useful subgoals carve the problem space and define an implicit metric for the problem at hand-a metric to which humans are sensitive. Second, subgoals are used as waypoints in the probabilistic problem solving inference and permit to find effective solutions that, when unavailable, lead to problem solving deficits. Our study thus suggests that a probabilistic inference scheme enhanced with subgoals provides a comprehensive framework to study problem solving and its deficits.

  17. Probabilistic finite elements for fatigue and fracture analysis

    NASA Astrophysics Data System (ADS)

    Belytschko, Ted; Liu, Wing Kam

    1993-04-01

    An overview of the probabilistic finite element method (PFEM) developed by the authors and their colleagues in recent years is presented. The primary focus is placed on the development of PFEM for both structural mechanics problems and fracture mechanics problems. The perturbation techniques are used as major tools for the analytical derivation. The following topics are covered: (1) representation and discretization of random fields; (2) development of PFEM for the general linear transient problem and nonlinear elasticity using Hu-Washizu variational principle; (3) computational aspects; (4) discussions of the application of PFEM to the reliability analysis of both brittle fracture and fatigue; and (5) a stochastic computational tool based on stochastic boundary element (SBEM). Results are obtained for the reliability index and corresponding probability of failure for: (1) fatigue crack growth; (2) defect geometry; (3) fatigue parameters; and (4) applied loads. These results show that initial defect is a critical parameter.

  18. International Space Station End-of-Life Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    Duncan, Gary

    2014-01-01

    Although there are ongoing efforts to extend the ISS life cycle through 2028, the International Space Station (ISS) end-of-life (EOL) cycle is currently scheduled for 2020. The EOL for the ISS will require de-orbiting the ISS. This will be the largest manmade object ever to be de-orbited; therefore, safely de-orbiting the station will be a very complex problem. This process is being planned by NASA and its international partners. Numerous factors will need to be considered to accomplish this, such as target corridors, orbits, altitude, drag, maneuvering capabilities, debris mapping, etc. The ISS EOL Probabilistic Risk Assessment (PRA) will play a part in this process by estimating the reliability of the hardware supplying the maneuvering capabilities. The PRA will model the probability of failure of the systems supplying and controlling the thrust needed to aid in the de-orbit maneuvering.

  19. Probabilistic finite elements for fatigue and fracture analysis

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Liu, Wing Kam

    1993-01-01

    An overview of the probabilistic finite element method (PFEM) developed by the authors and their colleagues in recent years is presented. The primary focus is placed on the development of PFEM for both structural mechanics problems and fracture mechanics problems. The perturbation techniques are used as major tools for the analytical derivation. The following topics are covered: (1) representation and discretization of random fields; (2) development of PFEM for the general linear transient problem and nonlinear elasticity using Hu-Washizu variational principle; (3) computational aspects; (4) discussions of the application of PFEM to the reliability analysis of both brittle fracture and fatigue; and (5) a stochastic computational tool based on stochastic boundary element (SBEM). Results are obtained for the reliability index and corresponding probability of failure for: (1) fatigue crack growth; (2) defect geometry; (3) fatigue parameters; and (4) applied loads. These results show that initial defect is a critical parameter.

  20. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    NASA Astrophysics Data System (ADS)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
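
    For orientation, the snippet below implements plain subset simulation for a single small failure probability (not the generalized GSS of the article, and without the posterior approximation or the Sobol'/Bucher design of experiments): intermediate thresholds are set at fixed conditional quantiles and each level is repopulated with Metropolis random-walk moves restricted to the current failure domain. The limit-state function is a toy example.

```python
# Hypothetical sketch of plain subset simulation estimating P[g(X) <= 0]
# for a toy limit state in standard normal space.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def g(x):
    """Toy limit-state function; failure when g(x) <= 0."""
    return 4.0 - (x[..., 0] + x[..., 1]) / np.sqrt(2.0)

def subset_simulation(g, dim=2, n_per_level=2000, p0=0.1, max_levels=10):
    x = rng.standard_normal((n_per_level, dim))
    gx = g(x)
    prob = 1.0
    for _ in range(max_levels):
        thresh = np.quantile(gx, p0)                  # intermediate threshold
        if thresh <= 0.0:                             # final level reached
            return prob * np.mean(gx <= 0.0)
        prob *= p0
        seeds = x[gx <= thresh]                       # conditional samples as MCMC seeds
        chains = [seeds.copy()]
        # Refill the level with Metropolis random-walk moves kept inside {g <= thresh}
        while sum(len(c) for c in chains) < n_per_level:
            cur = chains[-1]
            prop = cur + 0.8 * rng.standard_normal(cur.shape)
            ratio = np.exp(-0.5 * ((prop**2).sum(axis=1) - (cur**2).sum(axis=1)))
            accept = rng.random(len(cur)) < ratio
            nxt = np.where(accept[:, None], prop, cur)
            nxt = np.where((g(nxt) <= thresh)[:, None], nxt, cur)
            chains.append(nxt)
        x = np.vstack(chains)[:n_per_level]
        gx = g(x)
    return prob * np.mean(gx <= 0.0)

p_est = subset_simulation(g)
print(f"estimated P_f = {p_est:.2e}   (exact = {stats.norm.sf(4.0):.2e})")
```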

  1. Jaundice and breastfeeding

    MedlinePlus

    Hyperbilirubinemia - breast milk; Breast milk jaundice; Breastfeeding failure jaundice ... first few days of life. It is called "breastfeeding failure jaundice," "breast-non-feeding jaundice," or even " ...

  2. Ensemble reconstruction of spatio-temporal extreme low-flow events in France since 1871

    NASA Astrophysics Data System (ADS)

    Caillouet, Laurie; Vidal, Jean-Philippe; Sauquet, Eric; Devers, Alexandre; Graff, Benjamin

    2017-06-01

    The length of streamflow observations is generally limited to the last 50 years even in data-rich countries like France. It therefore offers too small a sample of extreme low-flow events to properly explore the long-term evolution of their characteristics and associated impacts. To overcome this limit, this work first presents a daily 140-year ensemble reconstructed streamflow dataset for a reference network of near-natural catchments in France. This dataset, called SCOPE Hydro (Spatially COherent Probabilistic Extended Hydrological dataset), is based on (1) a probabilistic precipitation, temperature, and reference evapotranspiration downscaling of the Twentieth Century Reanalysis over France, called SCOPE Climate, and (2) continuous hydrological modelling using SCOPE Climate as forcings over the whole period. This work then introduces tools for defining spatio-temporal extreme low-flow events. Extreme low-flow events are first locally defined through the sequent peak algorithm using a novel combination of a fixed threshold and a daily variable threshold. A dedicated spatial matching procedure is then established to identify spatio-temporal events across France. This procedure is furthermore adapted to the SCOPE Hydro 25-member ensemble to characterize in a probabilistic way unrecorded historical events at the national scale. Extreme low-flow events are described and compared in a spatially and temporally homogeneous way over 140 years on a large set of catchments. Results highlight well-known recent events like 1976 or 1989-1990, but also older and relatively forgotten ones like the 1878 and 1893 events. These results contribute to improving our knowledge of historical events and provide a selection of benchmark events for climate change adaptation purposes. Moreover, this study allows for further detailed analyses of the effect of climate variability and anthropogenic climate change on low-flow hydrology at the scale of France.

  3. Operational Performance Risk Assessment in Support of A Supervisory Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denning, Richard S.; Muhlheim, Michael David; Cetiner, Sacit M.

    A supervisory control system (SCS) is developed for multi-unit advanced small modular reactors to minimize human interventions in both normal and abnormal operations. In the SCS, control action decisions are made based on a probabilistic risk assessment (PRA) approach via event trees/fault trees. Although traditional PRA tools are implemented, their scope is extended to normal operations and their application is reversed: the success of non-safety-related systems is modeled instead of the failure of safety systems. This extended PRA approach is called operational performance risk assessment (OPRA). OPRA helps to identify success paths, i.e., combinations of control actions for transients, and to quantify these success paths so as to provide possible actions without activating the plant protection system. In this paper, a case study of OPRA in a supervisory control system is demonstrated within the context of the ALMR PRISM design, specifically the power conversion system. The scenario investigated involves a condition in which the feedwater control valve is observed to be drifting toward the closed position. Alternative plant configurations were identified via OPRA that would allow the plant to continue to operate at full or reduced power. Dynamic analyses were performed with a thermal-hydraulic model of the ALMR PRISM system using Modelica to evaluate the remaining safety margins. Successful recovery paths for the selected scenario are identified and quantified via the SCS.

  4. Fast probabilistic file fingerprinting for big data

    PubMed Central

    2013-01-01

    Background Biological data acquisition is raising new challenges, both in data analysis and handling. Not only is it proving hard to analyze the data at the rate it is generated today, but simply reading and transferring data files can be prohibitively slow due to their size. This primarily concerns logistics within and between data centers, but is also important for workstation users in the analysis phase. Common usage patterns, such as comparing and transferring files, are proving computationally expensive and are tying down shared resources. Results We present an efficient method for calculating file uniqueness for large scientific data files that takes less computational effort than existing techniques. This method, called Probabilistic Fast File Fingerprinting (PFFF), exploits the variation present in biological data and computes file fingerprints by sampling randomly from the file instead of reading it in full. Consequently, it has a flat performance characteristic, correlated with data variation rather than file size. We demonstrate that probabilistic fingerprinting can be as reliable as existing hashing techniques, with provably negligible risk of collisions. We measure the performance of the algorithm on a number of data storage and access technologies, identifying its strengths as well as limitations. Conclusions Probabilistic fingerprinting may significantly reduce the use of computational resources when comparing very large files. Utilisation of probabilistic fingerprinting techniques can increase the speed of common file-related workflows, both in the data center and for workbench analysis. The implementation of the algorithm is available as an open-source tool named pfff, as a command-line tool as well as a C library. The tool can be downloaded from http://biit.cs.ut.ee/pfff. PMID:23445565
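
    The sampling idea behind PFFF is easy to illustrate; the snippet below is a generic sketch (not the pfff tool) that hashes the file length plus a fixed number of pseudo-randomly chosen blocks, so the cost stays flat regardless of file size. The block count, block size, and seed are arbitrary choices here.

```python
# Hypothetical sketch of sampling-based fingerprinting (not the pfff tool itself):
# hash a fixed number of pseudo-randomly chosen blocks instead of the whole file.
import hashlib
import os
import random

def probabilistic_fingerprint(path, n_samples=64, block_size=4096, seed=2013):
    size = os.path.getsize(path)
    h = hashlib.sha1(str(size).encode())          # file length is part of the fingerprint
    rng = random.Random(seed)                     # fixed seed -> comparable fingerprints
    offsets = sorted(rng.randrange(max(size - block_size, 1)) for _ in range(n_samples))
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            h.update(f.read(block_size))
    return h.hexdigest()

# Two files are almost certainly different if their fingerprints differ; identical
# fingerprints indicate identity with high (but not absolute) probability.
# print(probabilistic_fingerprint("big_dataset.fastq"))   # hypothetical file name
```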

  5. A Probabilistic Model for Predicting Attenuation of Viruses During Percolation in Unsaturated Natural Barriers

    NASA Astrophysics Data System (ADS)

    Faulkner, B. R.; Lyon, W. G.

    2001-12-01

    We present a probabilistic model for predicting virus attenuation. The solution employs the assumption of complete mixing. Monte Carlo methods are used to generate ensemble simulations of virus attenuation due to physical, biological, and chemical factors. The model generates a probability of failure to achieve 4-log attenuation. We tabulated data from related studies to develop probability density functions for input parameters, and utilized a database of soil hydraulic parameters based on the 12 USDA soil categories. Regulators can use the model based on limited information such as boring logs, climate data, and soil survey reports for a particular site of interest. Plackett-Burman sensitivity analysis indicated the most important main effects on probability of failure to achieve 4-log attenuation in our model were mean logarithm of saturated hydraulic conductivity (+0.396), mean water content (+0.203), mean solid-water mass transfer coefficient (-0.147), and the mean solid-water equilibrium partitioning coefficient (-0.144). Using the model, we predicted the probability of failure of a proposed one-meter-thick hydrogeologic barrier with a water content of 0.3. With the currently available data and the associated uncertainty, we predicted soils classified as sand would fail (p=0.999), silt loams would also fail (p=0.292), but soils classified as clays would provide the required 4-log attenuation (p=0.001). The model is extendible in the sense that probability density functions of parameters can be modified as future studies refine the uncertainty, and the lightweight object-oriented design of the computer model (implemented in Java) will facilitate reuse with modified classes. This is an abstract of a proposed presentation and does not necessarily reflect EPA policy.
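
    A stripped-down version of the screening calculation described above (not the EPA model, and with placeholder distributions rather than the tabulated pdfs): Monte Carlo samples of hydraulic and removal-rate parameters are converted into a travel time across a 1 m barrier and a log10 removal, and the fraction of realizations below 4-log removal estimates the probability of failure.

```python
# Hypothetical sketch: Monte Carlo log10 virus attenuation across a 1 m unsaturated
# barrier, reporting P(failure to achieve 4-log attenuation).
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
thickness_m, theta = 1.0, 0.3                      # barrier thickness and water content

# Assumed parameter distributions (placeholders, not the paper's fitted pdfs)
log10_Ks = rng.normal(-5.0, 0.8, n)                # saturated hydraulic conductivity, log10 m/s
Ks = 10.0 ** log10_Ks
q = 0.05 * Ks                                      # percolation flux as a fraction of Ks
velocity = q / theta                               # pore-water velocity (m/s)
travel_days = thickness_m / velocity / 86400.0
removal_rate = rng.lognormal(np.log(0.1), 0.5, n)  # combined removal rate (log10 per day)

log_removal = removal_rate * travel_days
print(f"P(failure to achieve 4-log attenuation) = {np.mean(log_removal < 4.0):.3f}")
```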

  6. Addressee Errors in ATC Communications: The Call Sign Problem

    NASA Technical Reports Server (NTRS)

    Monan, W. P.

    1983-01-01

    Communication errors involving aircraft call signs were portrayed in reports of 462 hazardous incidents voluntarily submitted to the ASRS during an approximately four-year period. These errors resulted in confusion, disorder, and uncoordinated traffic conditions and produced the following types of operational anomalies: altitude deviations, wrong-way headings, aborted takeoffs, go-arounds, runway incursions, missed crossing altitude restrictions, descents toward high terrain, and traffic conflicts in flight and on the ground. Analysis of the report set resulted in identification of five categories of errors involving call signs: (1) faulty radio usage techniques, (2) call sign loss or smearing due to frequency congestion, (3) confusion resulting from similar sounding call signs, (4) airmen misses of call signs leading to failures to acknowledge or read back, and (5) controller failures regarding confirmation of acknowledgements or readbacks. These error categories are described in detail, and several associated hazard-mitigating measures that might be taken are considered.

  7. Effects of the August 2003 blackout on the New York City healthcare delivery system: a lesson for disaster preparedness.

    PubMed

    Prezant, David J; Clair, John; Belyaev, Stanislav; Alleyne, Dawn; Banauch, Gisela I; Davitt, Michelle; Vandervoorts, Kathy; Kelly, Kerry J; Currie, Brian; Kalkut, Gary

    2005-01-01

    On August 14, 2003, the United States and Canada suffered the largest power failure in history. We report the effects of this blackout on New York City's healthcare system by examining the following: 1) citywide 911 emergency medical service (EMS) calls and ambulance responses; and 2) emergency department (ED) visits and hospital admissions to one of New York City's largest hospitals. Citywide EMS calls and ambulance responses were categorized by 911 call type. Montefiore Medical Center (MMC) ED visits and hospital admissions were categorized by diagnosis and physician-reviewed for relationship to the blackout. Comparisons were made to the week pre- and postblackout. Citywide EMS calls numbered 5,299 on August 14, 2003, and 5,021 on August 15, 2003, a 58% increase (p < .001). During the blackout, there were increases in "respiratory" (189%; p < .001), "cardiac" (68%; p = .016), and "other" (40%; p < .001) EMS call categories, but when expressed as a percent of daily totals, "cardiac" was no longer significant. The MMC-ED reflected this surge with only "respiratory" visits significantly increased (expressed as percent of daily total visits; p < .001). Respiratory device failure (mechanical ventilators, positive pressure breathing assist devices, nebulizers, and oxygen compressors) was responsible for the greatest burden (65 MMC-ED visits, with 37 admissions) as compared with 0 pre- and postblackout. The blackout dramatically increased EMS and hospital activity, with unexpected increases resulting from respiratory device failures in community-based patients. Our findings suggest that current capacity to respond to public health emergencies could be easily overwhelmed by widespread/prolonged power failure(s). Disaster preparedness planning would be greatly enhanced if fully operational, backup power systems were mandated, not only for acute care facilities, but also for community-based patients dependent on electrically powered lifesaving devices.

  8. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden-state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  9. A performance study of unmanned aerial vehicle-based sensor networks under cyber attack

    NASA Astrophysics Data System (ADS)

    Puchaty, Ethan M.

    In UAV-based sensor networks, an emerging area of interest is the performance of these networks under cyber attack. This study seeks to evaluate the performance trade-offs, from a System-of-Systems (SoS) perspective, between various UAV communications architecture options in the context of two missions: tracking ballistic missiles and tracking insurgents. An agent-based discrete event simulation is used to model a sensor communication network consisting of UAVs, military communications satellites, ground relay stations, and a mission control center. Network susceptibility to cyber attack is modeled with probabilistic failures and induced data variability, with performance metrics focusing on information availability, latency, and trustworthiness. Results demonstrated that using UAVs as routers increased network availability with a minimal latency penalty and that communications satellite networks were best for long-distance operations. Redundancy in the number of links between communication nodes helped mitigate cyber-caused link failures and add robustness in cases of induced data variability by an adversary. However, when failures were not independent, redundancy and UAV routing were detrimental in some cases to network performance. Sensitivity studies indicated that long cyber-caused downtimes and increasing failure dependencies resulted in build-ups of failures and caused significant degradations in network performance.

  10. Fracture mechanics methodology: Evaluation of structural components integrity

    NASA Astrophysics Data System (ADS)

    Sih, G. C.; de Oliveira Faria, L.

    1984-09-01

    The application of fracture mechanics to structural-design problems is discussed in lectures presented in the AGARD Fracture Mechanics Methodology course held in Lisbon, Portugal, in June 1981. The emphasis is on aeronautical design, and chapters are included on fatigue-life prediction for metals and composites, the fracture mechanics of engineering structural components, failure mechanics and damage evaluation of structural components, flaw-acceptance methods, and reliability in probabilistic design. Graphs, diagrams, drawings, and photographs are provided.

  11. A probabilistic approach to aircraft design emphasizing stability and control uncertainties

    NASA Astrophysics Data System (ADS)

    Delaurentis, Daniel Andrew

    In order to address identified deficiencies in current approaches to aerospace systems design, a new method has been developed. This new method for design is based on the premise that design is a decision-making activity, and that deterministic analysis and synthesis can lead to poor or misguided decision making. This is due to a lack of disciplinary knowledge of sufficient fidelity about the product, to the presence of uncertainty at multiple levels of the aircraft design hierarchy, and to a failure to focus on overall affordability metrics as measures of goodness. Design solutions are desired which are robust to uncertainty and are based on the maximum knowledge possible. The new method represents advances in the two following general areas. 1. Design models and uncertainty. The research performed completes a transition from a deterministic design representation to a probabilistic one through a modeling of design uncertainty at multiple levels of the aircraft design hierarchy, including: (1) Consistent, traceable uncertainty classification and representation; (2) Concise mathematical statement of the Probabilistic Robust Design problem; (3) Variants of the Cumulative Distribution Functions (CDFs) as decision functions for Robust Design; (4) Probabilistic Sensitivities which identify the most influential sources of variability. 2. Multidisciplinary analysis and design. Embedded in the probabilistic methodology is a new approach for multidisciplinary design analysis and optimization (MDA/O), employing disciplinary analysis approximations formed through statistical experimentation and regression. These approximation models are a function of design variables common to the system level as well as other disciplines. For aircraft, it is proposed that synthesis/sizing is the proper avenue for integrating multiple disciplines. Research hypotheses are translated into a structured method, which is subsequently tested for validity. Specifically, the implementation involves the study of the relaxed static stability technology for a supersonic commercial transport aircraft. The probabilistic robust design method is exercised, resulting in a series of robust design solutions based on different interpretations of "robustness". Insightful results are obtained, and the ability of the method to expose trends in the design space is noted as a key advantage.

  12. A probabilistic approach for shallow rainfall-triggered landslide modeling at basin scale. A case study in the Luquillo Forest, Puerto Rico

    NASA Astrophysics Data System (ADS)

    Dialynas, Y. G.; Arnone, E.; Noto, L. V.; Bras, R. L.

    2013-12-01

    Slope stability depends on geotechnical and hydrological factors that exhibit wide natural spatial variability, yet sufficient measurements of the related parameters are rarely available over entire study areas. The uncertainty associated with the inability to fully characterize hydrologic behavior has an impact on any attempt to model landslide hazards. This work suggests a way to systematically account for this uncertainty in coupled distributed hydrological-stability models for shallow landslide hazard assessment. A probabilistic approach for the prediction of rainfall-triggered landslide occurrence at basin scale was implemented in an existing distributed eco-hydrological and landslide model, tRIBS-VEGGIE-Landslide (Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator - VEGetation Generator for Interactive Evolution). More precisely, we upgraded tRIBS-VEGGIE-Landslide to assess the likelihood of shallow landslides by accounting for uncertainty related to geotechnical and hydrological factors that directly affect slope stability. Natural variability of geotechnical soil characteristics was considered by randomizing soil cohesion and friction angle. Hydrological uncertainty related to the estimation of matric suction was taken into account by considering soil retention parameters as correlated random variables. The probability of failure is estimated through an assumed theoretical Factor of Safety (FS) distribution, conditioned on soil moisture content. At each cell, the temporally variant FS statistics are approximated by the First Order Second Moment (FOSM) method, as a function of the parameters' statistical properties. The model was applied to the Rio Mameyes Basin, located in the Luquillo Experimental Forest in Puerto Rico, where previous landslide analyses have been carried out. At each time step, model outputs include the probability of landslide occurrence across the basin, and the most probable depth of failure at each soil column. The proposed probabilistic approach for shallow landslide prediction is able to reveal and quantify landslide risk at slopes assessed as stable by simpler deterministic methods.
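
    The FOSM step mentioned above can be sketched for a generic infinite-slope factor of safety (this is not the tRIBS-VEGGIE-Landslide code): the FS mean is evaluated at the parameter means, its variance follows from finite-difference gradients and the parameter standard deviations, and the probability of failure is P(FS < 1) under an assumed normal FS. The geometry, wetness, and cohesion and friction-angle statistics are invented.

```python
# Hypothetical FOSM sketch: mean and variance of an infinite-slope factor of
# safety FS(c, phi), and P(FS < 1) assuming FS is approximately normal.
import numpy as np
from scipy import stats

# Assumed slope geometry and soil state (illustrative, not the Rio Mameyes values)
gamma, gamma_w = 19e3, 9.81e3                     # unit weights (N/m^3)
z, beta, wet = 1.5, np.radians(35.0), 0.6         # soil depth (m), slope angle, relative wetness

def factor_of_safety(c, phi_deg):
    phi = np.radians(phi_deg)
    cohesion_term = c / (gamma * z * np.sin(beta) * np.cos(beta))
    friction_term = (1.0 - wet * gamma_w / gamma) * np.tan(phi) / np.tan(beta)
    return cohesion_term + friction_term

# Random variables: cohesion (Pa) and friction angle (deg), treated as independent here
mu = np.array([8e3, 32.0])
sd = np.array([1.5e3, 3.0])

# FOSM: mean at the means, variance from central-difference gradients
fs_mean = factor_of_safety(*mu)
eps = 1e-3 * mu
grad = np.array([
    (factor_of_safety(mu[0] + eps[0], mu[1]) - factor_of_safety(mu[0] - eps[0], mu[1])) / (2 * eps[0]),
    (factor_of_safety(mu[0], mu[1] + eps[1]) - factor_of_safety(mu[0], mu[1] - eps[1])) / (2 * eps[1]),
])
fs_std = np.sqrt(np.sum((grad * sd) ** 2))
p_failure = stats.norm.cdf((1.0 - fs_mean) / fs_std)
print(f"FS mean = {fs_mean:.2f}, FS std = {fs_std:.2f}, P(failure) = {p_failure:.3f}")
```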

  13. Probabilistic Risk Assessment (PRA): A Practical and Cost Effective Approach

    NASA Technical Reports Server (NTRS)

    Lee, Lydia L.; Ingegneri, Antonino J.; Djam, Melody

    2006-01-01

    The Lunar Reconnaissance Orbiter (LRO) is the first mission of the Robotic Lunar Exploration Program (RLEP), a space exploration venture to the Moon, Mars and beyond. The LRO mission includes spacecraft developed by NASA Goddard Space Flight Center (GSFC) and seven instruments built by GSFC, Russia, and contractors across the nation. LRO is defined as a measurement mission, not a science mission. It emphasizes the overall objectives of obtaining data to facilitate returning mankind safely to the Moon in preparation for an eventual manned mission to Mars. As the first mission in response to the President's commitment to the journey of exploring the solar system and beyond (returning to the Moon in the next decade, then venturing further into the solar system, ultimately sending humans to Mars and beyond), LRO has high visibility to the public but limited resources and a tight schedule. This paper demonstrates how NASA's Lunar Reconnaissance Orbiter Mission project office incorporated reliability analyses in assessing risks and performing design tradeoffs to ensure mission success. Risk assessment is performed using NASA Procedural Requirements (NPR) 8705.5 - Probabilistic Risk Assessment (PRA) Procedures for NASA Programs and Projects to formulate the probabilistic risk assessment (PRA). As required, a limited scope PRA is being performed for the LRO project. The PRA is used to optimize the mission design within mandated budget, manpower, and schedule constraints. The technique that the LRO project office uses to perform the PRA relies on the application of a component failure database to quantify the potential mission success risks. To ensure mission success in an efficient manner, at low cost and on a tight schedule, traditional reliability analyses, such as reliability predictions, Failure Modes and Effects Analysis (FMEA), and Fault Tree Analysis (FTA), are used to perform the PRA for the large LRO system, with more than 14,000 piece parts and over 120 purchased or contractor-built components.

  14. Developing EHR-driven heart failure risk prediction models using CPXR(Log) with the probabilistic loss function.

    PubMed

    Taslimitehrani, Vahid; Dong, Guozhu; Pereira, Naveen L; Panahiazar, Maryam; Pathak, Jyotishman

    2016-04-01

    Computerized survival prediction in healthcare, identifying the risk of disease mortality, helps healthcare providers to effectively manage their patients by providing appropriate treatment options. In this study, we propose to apply a classification algorithm, Contrast Pattern Aided Logistic Regression (CPXR(Log)) with the probabilistic loss function, to develop and validate prognostic risk models to predict 1-, 2-, and 5-year survival in heart failure (HF) using data from electronic health records (EHRs) at Mayo Clinic. The CPXR(Log) constructs a pattern aided logistic regression model defined by several patterns and corresponding local logistic regression models. One of the models generated by CPXR(Log) achieved an AUC and accuracy of 0.94 and 0.91, respectively, and significantly outperformed prognostic models reported in prior studies. Data extracted from EHRs allowed incorporation of patient co-morbidities into our models, which helped improve the performance of the CPXR(Log) models (15.9% AUC improvement), although it did not improve the accuracy of the models built by other classifiers. We also propose a probabilistic loss function to determine the large error and small error instances. The new loss function used in the algorithm outperforms other functions used in previous studies by a 1% improvement in the AUC. This study revealed that using EHR data to build prediction models can be very challenging using existing classification methods due to the high dimensionality and complexity of EHR data. The risk models developed by CPXR(Log) also reveal that HF is a highly heterogeneous disease, i.e., different subgroups of HF patients require different types of considerations with their diagnosis and treatment. Our risk models provided two valuable insights for the application of predictive modeling techniques in biomedicine: logistic risk models often make systematic prediction errors, and it is prudent to use subgroup-based prediction models, such as those given by CPXR(Log), when investigating heterogeneous diseases. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Probabilistic description of probable maximum precipitation

    NASA Astrophysics Data System (ADS)

    Ben Alaya, Mohamed Ali; Zwiers, Francis W.; Zhang, Xuebin

    2017-04-01

    Probable Maximum Precipitation (PMP) is the key parameter used to estimate the Probable Maximum Flood (PMF). PMP and PMF are important for dam safety and civil engineering purposes. Even if the current knowledge of storm mechanisms remains insufficient to properly evaluate limiting values of extreme precipitation, PMP estimation methods are still based on deterministic considerations and give only single values. This study aims to provide a probabilistic description of the PMP based on the commonly used method, the so-called moisture maximization. To this end, a probabilistic bivariate extreme-value model is proposed to address the limitations of traditional PMP estimates via moisture maximization, namely: (i) the inability to evaluate uncertainty and to provide a range of PMP values, (ii) the interpretation of a maximum of a data series as a physical upper limit, and (iii) the assumption that a PMP event has maximum moisture availability. Results from simulation outputs of the Canadian Regional Climate Model CanRCM4 over North America reveal the high uncertainties inherent in PMP estimates and the non-validity of the assumption that PMP events have maximum moisture availability. This latter assumption leads to overestimation of the PMP by an average of about 15% over North America, which may have serious implications for engineering design.

  16. Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem

    NASA Astrophysics Data System (ADS)

    Man, J.; Li, W.; Zeng, L.; Wu, L.

    2015-12-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos expansion (PCE) to approximate the original system; in this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF is even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on unsaturated flow numerical cases. It is shown that RAPCKF is more efficient than EnKF with the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.

  17. A Max-Flow Based Algorithm for Connected Target Coverage with Probabilistic Sensors

    PubMed Central

    Shan, Anxing; Xu, Xianghua; Cheng, Zongmao; Wang, Wensheng

    2017-01-01

    Coverage is a fundamental issue in the research field of wireless sensor networks (WSNs). Connected target coverage addresses sensor placement to guarantee the needs of both coverage and connectivity. Existing works largely rely on the Boolean disk model, which is only a coarse approximation of the practical sensing model. In this paper, we focus on the connected target coverage issue based on the probabilistic sensing model, which can characterize the quality of coverage more accurately. In the probabilistic sensing model, sensors are only able to detect a target with a certain probability. We study the collaborative detection probability of a target under multiple sensors. Armed with the analysis of collaborative detection probability, we further formulate the minimum ϵ-connected target coverage problem, which aims to minimize the number of sensors needed to satisfy the requirements of both coverage and connectivity. We map it into a flow graph and present an approximation algorithm called the minimum vertices maximum flow algorithm (MVMFA) with provable time complexity and approximation ratios. To evaluate our design, we analyze the performance of MVMFA theoretically and also conduct extensive simulation studies to demonstrate the effectiveness of our proposed algorithm. PMID:28587084
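
    The collaborative detection step is compact enough to sketch (a generic illustration, not the authors' code): under a probabilistic sensing model, a target observed by sensors with individual detection probabilities p_i is detected with probability 1 - prod(1 - p_i), which is then compared against the ϵ coverage requirement. The distance-decay sensing model and thresholds below are assumptions.

```python
# Hypothetical sketch of collaborative detection under a probabilistic sensing model:
# a target covered by sensors with detection probabilities p_i is detected with
# probability 1 - prod(1 - p_i).
import numpy as np

def detection_probability(distances, p0=0.95, decay=0.5):
    """Assumed per-sensor model: detection decays exponentially with distance."""
    return p0 * np.exp(-decay * np.asarray(distances))

def collaborative_detection(p_individual):
    return 1.0 - np.prod(1.0 - np.asarray(p_individual))

# A target seen by three sensors at different ranges (invented numbers)
p_i = detection_probability([1.0, 2.5, 4.0])
print(f"per-sensor: {np.round(p_i, 3)}, collaborative: {collaborative_detection(p_i):.3f}")

# epsilon-coverage check used when selecting sensors for a target
epsilon = 0.9
print("target epsilon-covered:", collaborative_detection(p_i) >= epsilon)
```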

  18. A Max-Flow Based Algorithm for Connected Target Coverage with Probabilistic Sensors.

    PubMed

    Shan, Anxing; Xu, Xianghua; Cheng, Zongmao; Wang, Wensheng

    2017-05-25

    Coverage is a fundamental issue in the research field of wireless sensor networks (WSNs). Connected target coverage addresses sensor placement to guarantee the needs of both coverage and connectivity. Existing works largely rely on the Boolean disk model, which is only a coarse approximation of the practical sensing model. In this paper, we focus on the connected target coverage issue based on the probabilistic sensing model, which can characterize the quality of coverage more accurately. In the probabilistic sensing model, sensors are only able to detect a target with a certain probability. We study the collaborative detection probability of a target under multiple sensors. Armed with the analysis of collaborative detection probability, we further formulate the minimum ϵ-connected target coverage problem, which aims to minimize the number of sensors needed to satisfy the requirements of both coverage and connectivity. We map it into a flow graph and present an approximation algorithm called the minimum vertices maximum flow algorithm (MVMFA) with provable time complexity and approximation ratios. To evaluate our design, we analyze the performance of MVMFA theoretically and also conduct extensive simulation studies to demonstrate the effectiveness of our proposed algorithm.

  19. Probabilistic fatigue life prediction of metallic and composite materials

    NASA Astrophysics Data System (ADS)

    Xiang, Yibing

    Fatigue is one of the most common failure modes for engineering structures such as aircraft, rotorcraft, and aviation transports. Both metallic materials and composite materials are widely used and affected by fatigue damage. Huge uncertainties arise from material properties, measurement noise, imperfect models, future anticipated loads, and environmental conditions. These uncertainties are critical issues for accurate remaining useful life (RUL) prediction for engineering structures in service. Probabilistic fatigue prognosis considering various uncertainties is of great importance for structural safety. The objective of this study is to develop probabilistic fatigue life prediction models for metallic materials and composite materials. A fatigue model based on crack growth analysis and the equivalent initial flaw size concept is proposed for metallic materials. Following this, the developed model is extended to include structural geometry effects (notch effect), environmental effects (corroded specimens), and manufacturing effects (shot peening effects). Due to the inhomogeneity and anisotropy, the fatigue model suitable for metallic materials cannot be directly applied to composite materials. A composite fatigue life prediction model is proposed based on a mixed-mode delamination growth model and a stiffness degradation law. After the development of deterministic fatigue models for metallic and composite materials, a general probabilistic life prediction methodology is developed. The proposed methodology employs an efficient Inverse First-Order Reliability Method (IFORM) for the uncertainty propagation in fatigue life prediction. An equivalent stress transformation has been developed to enhance the computational efficiency under realistic random amplitude loading. A systematic reliability-based maintenance optimization framework is proposed for fatigue risk management and mitigation of engineering structures.

  20. NASA Applications and Lessons Learned in Reliability Engineering

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.; Fuller, Raymond P.

    2011-01-01

    Since the Shuttle Challenger accident in 1986, communities across NASA have been developing and extensively using quantitative reliability and risk assessment methods in their decision making process. This paper discusses several reliability engineering applications that NASA has used over the years to support the design, development, and operation of critical space flight hardware. Specifically, the paper discusses several reliability engineering applications used by NASA in areas such as risk management, inspection policies, component upgrades, reliability growth, integrated failure analysis, and physics-based probabilistic engineering analysis. In each of these areas, the paper provides a brief discussion of a case study to demonstrate the value added and the criticality of reliability engineering in supporting NASA project and program decisions to fly safely. Examples of these case studies discussed are reliability-based life limit extension of Space Shuttle Main Engine (SSME) hardware, reliability-based inspection policies for the Auxiliary Power Unit (APU) turbine disc, probabilistic structural engineering analysis for reliability prediction of the SSME alternate turbo-pump development, impact of ET foam reliability on the Space Shuttle System risk, and reliability-based Space Shuttle upgrades for safety. Special attention is given in this paper to the physics-based probabilistic engineering analysis applications and their critical role in evaluating the reliability of NASA development hardware, including their potential use in a research and technology development environment.

  1. Boosting Probabilistic Graphical Model Inference by Incorporating Prior Knowledge from Multiple Sources

    PubMed Central

    Praveen, Paurush; Fröhlich, Holger

    2013-01-01

    Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available. PMID:23826291
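
    A minimal sketch of the Noisy-OR style of evidence combination described above, assuming independent sources that each contribute a support value in [0, 1] for a candidate interaction (the leak term and support values are illustrative, not the parameterization used by the authors):

      def noisy_or(source_supports, leak=0.01):
          # Noisy-OR combination: probability that an interaction is present,
          # given independent support values from several knowledge sources
          # plus a small leak term for unmodeled evidence.
          p_absent = 1.0 - leak
          for s in source_supports:
              p_absent *= 1.0 - s
          return 1.0 - p_absent

      # Support for one candidate edge from three hypothetical sources
      # (e.g., a pathway database, GO terms, protein domain data).
      print(noisy_or([0.7, 0.2, 0.5]))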

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lall, Pradeep; Wei, Junchao; Sakalaukus, Peter

    A new method has been developed for assessment of the onset of degradation in solid state luminaires to classify failure mechanisms by using metrics beyond the lumen degradation that is currently used for identification of failure. Luminous flux output and correlated color temperature data on Philips LED lamps have been gathered under 85°C/85%RH until lamp failure. Failure modes of the test population of the lamps have been studied to understand the failure mechanisms in the 85°C/85%RH accelerated test. Results indicate that the dominant failure mechanism is the discoloration of the LED encapsulant inside the lamps, which is the likely cause for the luminous flux degradation and the color shift. The acquired data has been used in conjunction with Bayesian probabilistic models to identify luminaires with onset of degradation well before failure through identification of decision boundaries between lamps with accrued damage and lamps beyond the failure threshold in the feature space. In addition, luminaires with different failure modes have been classified separately from healthy pristine luminaires. The α-λ plots have been used to evaluate the robustness of the proposed methodology. Results show that the predicted degradation for the lamps tracks the true degradation observed during the 85°C/85%RH accelerated life test fairly closely within the ±20% confidence bounds. Correlation of model prediction with experimental results indicates that the presented methodology allows the early identification of the onset of failure well before the development of complete failure distributions and can be used for assessing the damage state of SSLs in fairly large deployments. It is expected that the new prediction technique will allow the development of failure distributions without testing until L70 life for the manifestation of failure.

  3. PROBABILISTIC PROGRAMMING FOR ADVANCED MACHINE LEARNING (PPAML) DISCRIMINATIVE LEARNING FOR GENERATIVE TASKS (DILIGENT)

    DTIC Science & Technology

    2017-11-29

    Structural connections of the frames (fragments) in the knowledge. We call the fundamental elements of the knowledge a limited number of elements...

  4. Methodology for the Incorporation of Passive Component Aging Modeling into the RAVEN/ RELAP-7 Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua

    2014-11-01

    Passive systems, structures and components (SSCs) will degrade over their operational life and this degradation may cause a reduction in the safety margins of a nuclear power plant. In traditional probabilistic risk assessment (PRA) using the event-tree/fault-tree methodology, passive SSC failure rates are generally based on generic plant failure data and the true state of a specific plant is not reflected realistically. To address aging effects of passive SSCs in the traditional PRA methodology, [1] does consider physics-based models that account for the operating conditions in the plant; however, [1] does not include the effects of surveillance/inspection. This paper presents an overall methodology for the incorporation of aging modeling of passive components into the RAVEN/RELAP-7 environment, which provides a framework for performing dynamic PRA. Dynamic PRA allows consideration of both epistemic and aleatory uncertainties (including those associated with maintenance activities) in a consistent phenomenological and probabilistic framework and is often needed when there is complex process/hardware/software/firmware/human interaction [2]. Dynamic PRA has gained attention recently due to difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models and also in the modeling of digital instrumentation and control systems. RAVEN (Reactor Analysis and Virtual control Environment) [3] is a software package under development at the Idaho National Laboratory (INL) as an online control logic driver and post-processing tool. It is coupled to the plant transient code RELAP-7 (Reactor Excursion and Leak Analysis Program), also currently under development at INL [3], as well as RELAP5 [4]. The overall methodology aims to: • Address multiple aging mechanisms involving a large number of components in a computationally feasible manner where sequencing of events is conditioned on the physical conditions predicted in a simulation environment such as RELAP-7. • Identify the risk-significant passive components, their failure modes and anticipated rates of degradation. • Incorporate surveillance and maintenance activities and their effects into the plant state and into component aging progress. • Assess aging effects in a dynamic simulation environment. 1. C. L. SMITH, V. N. SHAH, T. KAO, G. APOSTOLAKIS, “Incorporating Ageing Effects into Probabilistic Risk Assessment – A Feasibility Study Utilizing Reliability Physics Models,” NUREG/CR-5632, USNRC, (2001). 2. T. ALDEMIR, “A Survey of Dynamic Methodologies for Probabilistic Safety Assessment of Nuclear Power Plants,” Annals of Nuclear Energy, 52, 113-124, (2013). 3. C. RABITI, A. ALFONSI, J. COGLIATI, D. MANDELLI and R. KINOSHITA, “Reactor Analysis and Virtual Control Environment (RAVEN) FY12 Report,” INL/EXT-12-27351, (2012). 4. D. ANDERS et al., "RELAP-7 Level 2 Milestone Report: Demonstration of a Steady State Single Phase PWR Simulation with RELAP-7," INL/EXT-12-25924, (2012).

  5. Common-Cause Failure Treatment in Event Assessment: Basis for a Proposed New Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dana Kelly; Song-Hua Shen; Gary DeMoss

    2010-06-01

    Event assessment is an application of probabilistic risk assessment in which observed equipment failures and outages are mapped into the risk model to obtain a numerical estimate of the event’s risk significance. In this paper, we focus on retrospective assessments to estimate the risk significance of degraded conditions such as equipment failure accompanied by a deficiency in a process such as maintenance practices. In modeling such events, the basic events in the risk model that are associated with observed failures and other off-normal situations are typically configured to be failed, while those associated with observed successes and unchallenged components are assumed capable of failing, typically with their baseline probabilities. This is referred to as the failure memory approach to event assessment. The conditioning of common-cause failure probabilities for the common cause component group associated with the observed component failure is particularly important, as it is insufficient to simply leave these probabilities at their baseline values, and doing so may result in a significant underestimate of risk significance for the event. Past work in this area has focused on the mathematics of the adjustment. In this paper, we review the Basic Parameter Model for common-cause failure, which underlies most current risk modelling, discuss the limitations of this model with respect to event assessment, and introduce a proposed new framework for common-cause failure, which uses a Bayesian network to model underlying causes of failure, and which has the potential to overcome the limitations of the Basic Parameter Model with respect to event assessment.

  6. Mass and Reliability System (MaRS)

    NASA Technical Reports Server (NTRS)

    Barnes, Sarah

    2016-01-01

    The Safety and Mission Assurance (S&MA) Directorate is responsible for mitigating risk, providing system safety, and lowering risk for space programs from ground to space. The S&MA is divided into 4 divisions: the Space Exploration Division (NC), the International Space Station Division (NE), the Safety & Test Operations Division (NS), and the Quality and Flight Equipment Division (NT). The interns, myself and Arun Aruljothi, will be working with the Risk & Reliability Analysis Branch under the NC Division. The mission of this division is to identify, characterize, diminish, and communicate risk by implementing an efficient and effective assurance model. The team utilizes Reliability and Maintainability (R&M) and Probabilistic Risk Assessment (PRA) to ensure decisions concerning risks are informed, vehicles are safe and reliable, and program/project requirements are realistic and realized. This project pertains to the Orion mission, so it is geared toward long duration Human Space Flight Programs. For space missions, payload is a critical concept; balancing what hardware can be replaced by components versus by Orbital Replacement Units (ORU) or subassemblies is key. For this effort a database was created that combines mass and reliability data, called the Mass and Reliability System or MaRS. The U.S. International Space Station (ISS) components are used as reference parts in the MaRS database. Using ISS components as a platform is beneficial because of the historical context and the environment similarities to a space flight mission. MaRS uses a combination of systems: International Space Station PART for failure data, the Vehicle Master Database (VMDB) for ORUs & components, the Maintenance & Analysis Data Set (MADS) for operation hours and other pertinent data, and the Hardware History Retrieval System (HHRS) for unit weights. MaRS is populated using a Visual Basic application. Once populated, the Excel spreadsheet comprises information on ISS components including: operation hours, random/nonrandom failures, software/hardware failures, quantity, orbital replaceable units (ORU), date of placement, unit weight, frequency of part, etc. The motivation for creating such a database is the development of a mass/reliability parametric model to estimate the mass required for replacement parts. Once complete, engineers working on future space flight missions will have access to mean time to failure data on parts along with their mass, which will be used to make proper decisions for long duration space flight missions.

  7. Thoracentesis

    MedlinePlus

    ... the layers of the pleura is called a pleural effusion . The test is performed to determine the cause ... will help your provider determine the cause of pleural effusion. Possible causes include: Cancer Liver failure Heart failure ...

  8. Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.

    1988-01-01

    The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show that the PFEM is a very powerful tool in determining the second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties, and crack length, orientation, and location.
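
    A minimal first-order, second-moment sketch of how second-moment statistics translate into a probability of failure for a simple resistance-minus-load limit state (the normality assumption and the numbers below are illustrative, not taken from the paper):

      from math import erf, sqrt

      def failure_probability_fosm(mu_r, sd_r, mu_s, sd_s):
          # First-order second-moment estimate of P(failure) for the limit
          # state g = R - S, with independent normal resistance R and load
          # effect S described only by their means and standard deviations.
          beta = (mu_r - mu_s) / sqrt(sd_r ** 2 + sd_s ** 2)   # reliability index
          phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))     # standard normal CDF
          return 1.0 - phi(beta)

      # Illustrative numbers: fracture toughness vs. applied stress intensity.
      print(failure_probability_fosm(mu_r=60.0, sd_r=6.0, mu_s=40.0, sd_s=8.0))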

  9. Design of Composite Structures for Reliability and Damage Tolerance

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    1999-01-01

    A summary of research conducted during the first year is presented. The research objectives were sought by conducting two tasks: (1) investigation of probabilistic design techniques for reliability-based design of composite sandwich panels, and (2) examination of strain energy density failure criterion in conjunction with response surface methodology for global-local design of damage tolerant helicopter fuselage structures. This report primarily discusses the efforts surrounding the first task and provides a discussion of some preliminary work involving the second task.

  10. Recent and Future Enhancements in NDI for Aircraft Structures

    DTIC Science & Technology

    2015-09-10

    1]. Four of the B-47 losses were attributed to fatigue, which led to a probabilistic approach for establishing the aircraft service life...sufficient to preclude in-service structural failures attributable to fatigue. The safe-life approach was the basis for all new designs during the 1960s...and was also used to establish the safe-life of earlier designs that were subjected to a fatigue test. Losses of an F-111 in December 1969 and an F-5

  15. Self-ordering and complexity in epizonal mineral deposits

    USGS Publications Warehouse

    Henley, Richard W.; Berger, Byron R.

    2000-01-01

    Giant deposits are relatively rare and develop where efficient metal deposition is spatially focused by repetitive brittle failure in active fault arrays. Some brief case histories are provided for epithermal, replacement, and porphyry mineralization. These highlight how rock competency contrasts and feedback between processes, rather than any single component of a hydrothermal system, govern the size of individual deposits. In turn, the recognition of the probabilistic nature of mineralization provides a firmer foundation through which exploration investment and risk management decisions can be made.

  16. Probabilistically Perfect Cloning of Two Pure States: Geometric Approach.

    PubMed

    Yerokhin, V; Shehu, A; Feldman, E; Bagan, E; Bergou, J A

    2016-05-20

    We solve the long-standing problem of making n perfect clones from m copies of one of two known pure states with minimum failure probability in the general case where the known states have arbitrary a priori probabilities. The solution emerges from a geometric formulation of the problem. This formulation reveals that cloning converges to state discrimination followed by state preparation as the number of clones goes to infinity. The convergence exhibits a phenomenon analogous to a second-order symmetry-breaking phase transition.

  17. Browns Ferry Nuclear Plant: variation in test intervals for high-pressure coolant injection (HPCI) system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christie, R.F.; Stetkar, J.W.

    1985-01-01

    The change in availability of the high-pressure coolant injection system (HPCIS) due to a change in pump and valve test interval from monthly to quarterly was analyzed. This analysis started by using the HPCIS base line evaluation produced as part of the Browns Ferry Nuclear Plant (BFN) Probabilistic Risk Assessment (PRA). The base line evaluation showed that the dominant contributors to the unavailability of the HPCI system are hardware failures and the resultant downtime for unscheduled maintenance.

  18. Free-Swinging Failure Tolerance for Robotic Manipulators

    NASA Technical Reports Server (NTRS)

    English, James

    1997-01-01

    Under this GSRP fellowship, software-based failure-tolerance techniques were developed for robotic manipulators. The focus was on failures characterized by the loss of actuator torque at a joint, called free-swinging failures. The research results spanned many aspects of the free-swinging failure-tolerance problem, from preparing for an expected failure to discovery of postfailure capabilities to establishing efficient methods to realize those capabilities. Developed algorithms were verified using computer-based dynamic simulations, and these were further verified using hardware experiments at Johnson Space Center.

  19. The Study of the Relationship between Probabilistic Design and Axiomatic Design Methodology. Volume 3

    NASA Technical Reports Server (NTRS)

    Onwubiko, Chin-Yere; Onyebueke, Landon

    1996-01-01

    Structural failure is rarely a "sudden death" type of event; such sudden failures may occur only under abnormal loadings like bomb or gas explosions and very strong earthquakes. In most cases, structures fail due to damage accumulated under normal loadings such as wind loads and dead and live loads. The consequence of cumulative damage will affect the reliability of surviving components and finally cause collapse of the system. The cumulative damage effects on system reliability under time-invariant loadings are of practical interest in structural design and therefore will be investigated in this study. The scope of this study is, however, restricted to the consideration of damage accumulation as the increase in the number of failed components due to the violation of their strength limits.

  20. Space transportation architecture: Reliability sensitivities

    NASA Technical Reports Server (NTRS)

    Williams, A. M.

    1992-01-01

    A sensitivity analysis is given of the benefits and drawbacks associated with a proposed Earth-to-orbit vehicle architecture. The architecture represents a fleet of six vehicles (two existing, four proposed) that would be responsible for performing various missions as mandated by NASA and the U.S. Air Force. Each vehicle has a prescribed flight rate per year for a period of 31 years. By exposing this fleet of vehicles to a probabilistic environment in which the fleet experiences failures, downtimes, setbacks, etc., the analysis determines the resiliency and costs associated with the fleet for specific vehicle/subsystem reliabilities. The resources required were actual observed data on the failures and downtimes associated with existing vehicles, data based on engineering judgement for proposed vehicles, and the development of a sensitivity analysis program.
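
    A toy Monte Carlo sketch of the kind of probabilistic flight-manifest simulation described above, for a single vehicle (the flight rate, per-flight failure probability, and stand-down time are invented placeholders, not the study's data):

      import random

      def simulate_vehicle(years=31, flights_per_year=8, p_fail=0.02,
                           downtime_years=1.0, seed=1):
          # Each scheduled flight fails independently with probability p_fail;
          # a failure stands the vehicle down for a fixed recovery period.
          random.seed(seed)
          completed, failures, stand_down_until = 0, 0, 0.0
          t, dt = 0.0, 1.0 / flights_per_year
          while t < years:
              if t >= stand_down_until:
                  if random.random() < p_fail:
                      failures += 1
                      stand_down_until = t + downtime_years
                  else:
                      completed += 1
              t += dt
          return completed, failures

      print(simulate_vehicle())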

  1. Reliability and Confidence Interval Analysis of a CMC Turbine Stator Vane

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Gyekenyesi, John P.; Mital, Subodh K.

    2008-01-01

    High temperature ceramic matrix composites (CMC) are being explored as viable candidate materials for hot section gas turbine components. These advanced composites can potentially lead to reduced weight, enable higher operating temperatures requiring less cooling and thus leading to increased engine efficiencies. However, these materials are brittle and show degradation with time at high operating temperatures due to creep as well as cyclic mechanical and thermal loads. In addition, these materials are heterogeneous in their make-up and various factors affect their properties in a specific design environment. Most of these advanced composites involve two- and three-dimensional fiber architectures and require a complex multi-step high temperature processing. Since there are uncertainties associated with each of these in addition to the variability in the constituent material properties, the observed behavior of composite materials exhibits scatter. Traditional material failure analyses employing a deterministic approach, where failure is assumed to occur when some allowable stress level or equivalent stress is exceeded, are not adequate for brittle material component design. Such phenomenological failure theories are reasonably successful when applied to ductile materials such as metals. Analysis of failure in structural components is governed by the observed scatter in strength, stiffness and loading conditions. In such situations, statistical design approaches must be used. Accounting for these phenomena requires a change in philosophy on the design engineer s part that leads to a reduced focus on the use of safety factors in favor of reliability analyses. The reliability approach demands that the design engineer must tolerate a finite risk of unacceptable performance. This risk of unacceptable performance is identified as a component's probability of failure (or alternatively, component reliability). The primary concern of the engineer is minimizing this risk in an economical manner. The methods to accurately determine the service life of an engine component with associated variability have become increasingly difficult. This results, in part, from the complex missions which are now routinely considered during the design process. These missions include large variations of multi-axial stresses and temperatures experienced by critical engine parts. There is a need for a convenient design tool that can accommodate various loading conditions induced by engine operating environments, and material data with their associated uncertainties to estimate the minimum predicted life of a structural component. A probabilistic composite micromechanics technique in combination with woven composite micromechanics, structural analysis and Fast Probability Integration (FPI) techniques has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Furthermore, input variables causing scatter are identified and ranked based upon their sensitivity magnitude. Since the measured data for the ceramic matrix composite properties is very limited, obtaining a probabilistic distribution with their corresponding parameters is difficult. In case of limited data, confidence bounds are essential to quantify the uncertainty associated with the distribution. Usually 90 and 95% confidence intervals are computed for material properties. Failure properties are then computed with the confidence bounds. 
Best estimates and the confidence bounds on the best estimate of the cumulative probability function for R-S (strength - stress) are plotted. The methodologies and the results from these analyses will be discussed in the presentation.

  2. Design for Reliability and Safety Approach for the New NASA Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.; Weldon, Danny M.

    2007-01-01

    The United States National Aeronautics and Space Administration (NASA) is in the midst of a space exploration program intended for sending crew and cargo to the International Space Station (ISS), to the moon, and beyond. This program is called Constellation. As part of the Constellation program, NASA is developing new launch vehicles aimed at significantly increasing safety and reliability, reducing the cost of accessing space, and providing a growth path for manned space exploration. Achieving these goals requires a rigorous process that addresses reliability, safety, and cost upfront and throughout all the phases of the life cycle of the program. This paper discusses the "Design for Reliability and Safety" approach for the new NASA launch vehicles, the ARES I and ARES V. Specifically, the paper addresses the use of an integrated probabilistic functional analysis to support the design analysis cycle and a probabilistic risk assessment (PRA) to support the preliminary design and beyond.

  3. A partially reflecting random walk on spheres algorithm for electrical impedance tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maire, Sylvain, E-mail: maire@univ-tln.fr; Simon, Martin, E-mail: simon@math.uni-mainz.de

    2015-12-15

    In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
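
    For orientation, here is a sketch of the classical (purely absorbing) walk-on-spheres algorithm on which the paper builds, solving a Laplace problem on the unit disk with Dirichlet data; it does not implement the partially reflecting variant or the EIT-specific boundary handling described in the abstract:

      import math, random

      def walk_on_spheres(x, y, g, eps=1e-3, n_walks=20000, seed=0):
          # Estimate the harmonic function u (Laplace's equation in the unit
          # disk with boundary data g) at the point (x, y): repeatedly jump to
          # a uniformly random point on the largest circle centred at the
          # current point that fits inside the disk, then read off g once
          # within eps of the boundary. Purely illustrative interior problem.
          random.seed(seed)
          total = 0.0
          for _ in range(n_walks):
              px, py = x, y
              while True:
                  r = 1.0 - math.hypot(px, py)   # distance to the boundary
                  if r < eps:
                      break
                  theta = random.uniform(0.0, 2.0 * math.pi)
                  px += r * math.cos(theta)
                  py += r * math.sin(theta)
              norm = math.hypot(px, py)
              total += g(px / norm, py / norm)   # project onto the boundary
          return total / n_walks

      # Boundary data g(x, y) = x gives the harmonic function u(x, y) = x.
      print(walk_on_spheres(0.3, 0.2, g=lambda bx, by: bx))  # roughly 0.3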

  4. Impact of Distributed Energy Resources on the Reliability of Critical Telecommunications Facilities: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, D. G.; Arent, D. J.; Johnson, L.

    2006-06-01

    This paper documents a probabilistic risk assessment of existing and alternative power supply systems at a large telecommunications office. The analysis characterizes the increase in the reliability of power supply through the use of two alternative power configurations. Failures in the power systems supporting major telecommunications service nodes are a main contributor to significant telecommunications outages. A logical approach to improving the robustness of telecommunication facilities is to increase the depth and breadth of technologies available to restore power during power outages. Distributed energy resources such as fuel cells and gas turbines could provide additional on-site electric power sources to provide backup power if batteries and diesel generators fail. The analysis is based on a hierarchical Bayesian approach and focuses on the failure probability associated with each of three possible facility configurations, along with assessment of the uncertainty or confidence level in the probability of failure. A risk-based characterization of the final best configuration is presented.
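
    A much-simplified (non-hierarchical) Bayesian sketch of the kind of calculation involved: posterior uncertainty about each backup layer's per-demand failure probability is propagated to the probability that all layers fail together. The demand counts, prior, and three-layer structure are illustrative assumptions, not the report's data or model:

      import numpy as np

      rng = np.random.default_rng(7)

      # Posterior samples for per-demand failure probabilities of each backup
      # layer (Beta posteriors from made-up success/failure counts, uniform prior).
      p_battery  = rng.beta(2 + 1, 50 + 1, 10_000)   # 2 failures in 52 demands
      p_diesel   = rng.beta(3 + 1, 40 + 1, 10_000)   # 3 failures in 43 demands
      p_fuelcell = rng.beta(1 + 1, 30 + 1, 10_000)   # 1 failure in 31 demands

      # Backup power is lost only if every layer fails on demand.
      p_existing = p_battery * p_diesel                  # existing configuration
      p_with_der = p_battery * p_diesel * p_fuelcell     # with an added DER layer

      for name, p in [("existing", p_existing), ("with DER", p_with_der)]:
          print(name, "mean P(fail):", p.mean(), "95th pct:", np.quantile(p, 0.95))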

  5. Failure Bounding And Sensitivity Analysis Applied To Monte Carlo Entry, Descent, And Landing Simulations

    NASA Technical Reports Server (NTRS)

    Gaebler, John A.; Tolson, Robert H.

    2010-01-01

    In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding which aims to reduce the computational cost of assessing the failure probability. Next a variance-based sensitivity analysis was studied for the ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.
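
    A rough sketch of a variance-based (first-order Sobol) sensitivity estimate using simple binning, applied to a made-up three-input response; the model, input distributions, and estimator choice are illustrative and far cruder than what a real entry, descent, and landing analysis would use:

      import numpy as np

      rng = np.random.default_rng(3)

      def model(x):
          # Toy nonlinear response of three uncertain inputs.
          return x[:, 0] ** 2 + 0.5 * x[:, 1] + 0.1 * x[:, 0] * x[:, 2]

      n = 100_000
      x = rng.normal(size=(n, 3))
      y = model(x)

      def first_order_index(xi, y, bins=50):
          # Crude binning estimate of Var(E[Y | X_i]) / Var(Y).
          edges = np.quantile(xi, np.linspace(0.0, 1.0, bins + 1))
          idx = np.digitize(xi, edges[1:-1])
          cond_means = np.array([y[idx == b].mean() for b in range(bins)])
          weights = np.array([(idx == b).mean() for b in range(bins)])
          grand = np.sum(weights * cond_means)
          return float(np.sum(weights * (cond_means - grand) ** 2) / y.var())

      for i in range(3):
          print("S%d ~" % (i + 1), round(first_order_index(x[:, i], y), 3))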

  6. Extreme Threshold Failures Within a Heterogeneous Elastic Thin Sheet and the Spatial-Temporal Development of Induced Seismicity Within the Groningen Gas Field

    NASA Astrophysics Data System (ADS)

    Bourne, S. J.; Oates, S. J.

    2017-12-01

    Measurements of the strains and earthquakes induced by fluid extraction from a subsurface reservoir reveal a transient, exponential-like increase in seismicity relative to the volume of fluids extracted. If the frictional strength of these reactivating faults is heterogeneously and randomly distributed, then progressive failures of the weakest fault patches account in a general manner for this initial exponential-like trend. Allowing for the observable elastic and geometric heterogeneity of the reservoir, the spatiotemporal evolution of induced seismicity over 5 years is predictable without significant bias using a statistical physics model of poroelastic reservoir deformations inducing extreme threshold frictional failures of previously inactive faults. This model is used to forecast the temporal and spatial probability density of earthquakes within the Groningen natural gas reservoir, conditional on future gas production plans. Probabilistic seismic hazard and risk assessments based on these forecasts inform the current gas production policy and building strengthening plans.

  7. A probabilistic based failure model for components fabricated from anisotropic graphite

    NASA Astrophysics Data System (ADS)

    Xiao, Chengfeng

    The nuclear moderator for high temperature nuclear reactors are fabricated from graphite. During reactor operations graphite components are subjected to complex stress states arising from structural loads, thermal gradients, neutron irradiation damage, and seismic events. Graphite is a quasi-brittle material. Two aspects of nuclear grade graphite, i.e., material anisotropy and different behavior in tension and compression, are explicitly accounted for in this effort. Fracture mechanic methods are useful for metal alloys, but they are problematic for anisotropic materials with a microstructure that makes it difficult to identify a "critical" flaw. In fact cracking in a graphite core component does not necessarily result in the loss of integrity of a nuclear graphite core assembly. A phenomenological failure criterion that does not rely on flaw detection has been derived that accounts for the material behaviors mentioned. The probability of failure of components fabricated from graphite is governed by the scatter in strength. The design protocols being proposed by international code agencies recognize that design and analysis of reactor core components must be based upon probabilistic principles. The reliability models proposed herein for isotropic graphite and graphite that can be characterized as being transversely isotropic are another set of design tools for the next generation very high temperature reactors (VHTR) as well as molten salt reactors. The work begins with a review of phenomenologically based deterministic failure criteria. A number of this genre of failure models are compared with recent multiaxial nuclear grade failure data. Aspects in each are shown to be lacking. The basic behavior of different failure strengths in tension and compression is exhibited by failure models derived for concrete, but attempts to extend these concrete models to anisotropy were unsuccessful. The phenomenological models are directly dependent on stress invariants. A set of invariants, known as an integrity basis, was developed for a non-linear elastic constitutive model. This integrity basis allowed the non-linear constitutive model to exhibit different behavior in tension and compression and moreover, the integrity basis was amenable to being augmented and extended to anisotropic behavior. This integrity basis served as the starting point in developing both an isotropic reliability model and a reliability model for transversely isotropic materials. At the heart of the reliability models is a failure function very similar in nature to the yield functions found in classic plasticity theory. The failure function is derived and presented in the context of a multiaxial stress space. States of stress inside the failure envelope denote safe operating states. States of stress on or outside the failure envelope denote failure. The phenomenological strength parameters associated with the failure function are treated as random variables. There is a wealth of failure data in the literature that supports this notion. The mathematical integration of a joint probability density function that is dependent on the random strength variables over the safe operating domain defined by the failure function provides a way to compute the reliability of a state of stress in a graphite core component fabricated from graphite. The evaluation of the integral providing the reliability associated with an operational stress state can only be carried out using a numerical method. 
Monte Carlo simulation with importance sampling was selected to make these calculations. The derivation of the isotropic reliability model and the extension of the reliability model to anisotropy are provided in full detail. Model parameters are cast in terms of strength parameters that can be (and have been) characterized by multiaxial failure tests. Comparisons of model predictions with failure data are made, and a brief comparison is made to the reliability predictions called for in the ASME Boiler and Pressure Vessel Code. Future work is identified that would provide further verification and augmentation of the numerical methods used to evaluate model predictions.
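
    A bare-bones illustration of the Monte Carlo reliability idea described above, with a toy plane-stress failure function that distinguishes tensile and compressive strengths; it uses plain sampling rather than importance sampling, and the criterion, strength distributions, and stress values are invented placeholders rather than the thesis model:

      import numpy as np

      rng = np.random.default_rng(42)

      def failure_function(sigma1, sigma2, St, Sc):
          # Toy plane-stress failure function with different tensile (St) and
          # compressive (Sc) strengths; g <= 0 denotes failure. A stand-in for
          # the anisotropic graphite criterion, not that model.
          s1 = sigma1 / np.where(sigma1 >= 0.0, St, Sc)
          s2 = sigma2 / np.where(sigma2 >= 0.0, St, Sc)
          return 1.0 - (s1 ** 2 + s2 ** 2 - s1 * s2)

      def reliability(sigma1, sigma2, n=100_000):
          # Plain Monte Carlo over random strength parameters with
          # illustrative lognormal scatter.
          St = rng.lognormal(np.log(25.0), 0.10, n)   # MPa, tensile strength
          Sc = rng.lognormal(np.log(70.0), 0.08, n)   # MPa, compressive strength
          g = failure_function(sigma1, sigma2, St, Sc)
          return float(np.mean(g > 0.0))

      # Reliability of a stress state with one tensile and one compressive
      # principal stress (MPa).
      print(reliability(sigma1=17.5, sigma2=-28.0))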

  8. Heart failure - fluids and diuretics

    MedlinePlus

    ... are often called "water pills." There are many brands of diuretics. Some are taken 1 time a ... failure: a scientific statement from the American Heart Association. Circulation . 2009;120(12):1141-1163. PMID: 19720935 ...

  9. Robot Path Planning in Uncertain Environments: A Language Measure-theoretic Approach

    DTIC Science & Technology

    2014-01-01

    Paper DS-14-1028 to appear in the Special Issue on Stochastic Models, Control and Algorithms in Robotics, ASME Journal of Dynamic Systems...Measurement and Control Robot Path Planning in Uncertain Environments: A Language Measure-theoretic Approach⋆ Devesh K. Jha† Yue Li† Thomas A. Wettergren‡† Asok...algorithm, called ν⋆, that was formulated in the framework of probabilistic finite state automata (PFSA) and language measure from a control -theoretic

  10. Reading and Understanding: Tim O'Brien and the Narrative of Failure

    ERIC Educational Resources Information Center

    Buchanan, Jeffrey

    2008-01-01

    Tim O'Brien's "The Things They Carried" is an example of what the author calls a narrative of failure. A narrative of failure is a term for a narrative that both fails in the enactment of its own telling and that takes failure or failing as one of its subjects. This paper discusses how, as a form for telling a teaching story, narratives…

  11. A Bayesian-based two-stage inexact optimization method for supporting stream water quality management in the Three Gorges Reservoir region.

    PubMed

    Hu, X H; Li, Y P; Huang, G H; Zhuang, X W; Ding, X W

    2016-05-01

    In this study, a Bayesian-based two-stage inexact optimization (BTIO) method is developed for supporting water quality management through coupling Bayesian analysis with interval two-stage stochastic programming (ITSP). The BTIO method is capable of addressing uncertainties caused by insufficient inputs in the water quality model as well as uncertainties expressed as probabilistic distributions and interval numbers. The BTIO method is applied to a real case of water quality management for the Xiangxi River basin in the Three Gorges Reservoir region to seek optimal water quality management schemes under various uncertainties. Interval solutions for production patterns under a range of probabilistic water quality constraints have been generated. Results obtained demonstrate compromises between the system benefit and the system failure risk due to inherent uncertainties that exist in various system components. Moreover, information about pollutant emission is obtained, which would help managers to adjust production patterns of regional industry and local policies considering interactions of water quality requirements, economic benefit, and industry structure.

  12. Probabilistic Risk Assessment for Decision Making During Spacecraft Operations

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila

    2009-01-01

    Decisions made during the operational phase of a space mission often have significant and immediate consequences. Without the explicit consideration of the risks involved and their representation in a solid model, it is very likely that these risks are not considered systematically in trade studies. Wrong decisions during the operational phase of a space mission can lead to immediate system failure whereas correct decisions can help recover the system even from faulty conditions. A problem of special interest is the determination of the system fault protection strategies upon the occurrence of faults within the system. Decisions regarding the fault protection strategy also heavily rely on a correct understanding of the state of the system and an integrated risk model that represents the various possible scenarios and their respective likelihoods. Probabilistic Risk Assessment (PRA) modeling is applicable to the full lifecycle of a space mission project, from concept development to preliminary design, detailed design, development and operations. The benefits and utilities of the model, however, depend on the phase of the mission for which it is used. This is because of the difference in the key strategic decisions that support each mission phase. The focus of this paper is on describing the particular methods used for PRA modeling during the operational phase of a spacecraft by gleaning insight from recently conducted case studies on two operational Mars orbiters. During operations, the key decisions relate to the commands sent to the spacecraft for any kind of diagnostics, anomaly resolution, trajectory changes, or planning. Often, faults and failures occur in the parts of the spacecraft but are contained or mitigated before they can cause serious damage. The failure behavior of the system during operations provides valuable data for updating and adjusting the related PRA models that are built primarily based on historical failure data. The PRA models, in turn, provide insight into the effect of various faults or failures on the risk and failure drivers of the system and the likelihood of possible end case scenarios, thereby facilitating the decision making process during operations. This paper describes the process of adjusting PRA models based on observed spacecraft data, on one hand, and utilizing the models for insight into the future system behavior on the other hand. While PRA models are typically used as a decision aid during the design phase of a space mission, we advocate adjusting them based on the observed behavior of the spacecraft and utilizing them for decision support during the operations phase.

  13. A Review of Statistical Failure Time Models with Application of a Discrete Hazard Based Model to 1Cr1Mo-0.25V Steel for Turbine Rotors and Shafts

    PubMed Central

    2017-01-01

    Producing predictions of the probabilistic risks of operating materials for given lengths of time at stated operating conditions requires the assimilation of existing deterministic creep life prediction models (which only predict the average failure time) with statistical models that capture the random component of creep. To date, these approaches have rarely been combined to achieve this objective. The first half of this paper therefore provides a summary review of some statistical models to help bridge the gap between these two approaches. The second half of the paper illustrates one possible assimilation using 1Cr1Mo-0.25V steel. The Wilshire equation for creep life prediction is integrated into a discrete hazard-based statistical model—the former being chosen because of its novelty and proven capability in accurately predicting average failure times and the latter being chosen because of its flexibility in modelling the failure time distribution. Using this model it was found that, for example, if this material had been in operation for around 15 years at 823 K and 130 MPa, the chance of failure in the next year is around 35%. However, if this material had been in operation for around 25 years, the chance of failure in the next year rises dramatically to around 80%. PMID:29039773
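
    To make the "chance of failure in the next year" notion concrete, here is a small sketch of how a hazard function yields that conditional probability; it uses an off-the-shelf Weibull hazard with placeholder parameters rather than the Wilshire-based discrete hazard model fitted in the paper, so the printed numbers will not match the 35% and 80% quoted above:

      import math

      def hazard(t, shape=4.0, scale=20.0):
          # Illustrative Weibull hazard rate h(t) = (k/lam) * (t/lam)**(k-1);
          # placeholder parameters, not the fitted 1Cr1Mo-0.25V values.
          return (shape / scale) * (t / scale) ** (shape - 1.0)

      def prob_fail_next_year(t, shape=4.0, scale=20.0, steps=1000):
          # P(fail in (t, t+1] | survived to t) = 1 - exp(-integral of h over
          # (t, t+1]), integrated numerically so any hazard shape can be used.
          dt = 1.0 / steps
          H = sum(hazard(t + (i + 0.5) * dt, shape, scale) * dt for i in range(steps))
          return 1.0 - math.exp(-H)

      for years in (15.0, 25.0):
          print(years, "years in service ->", round(prob_fail_next_year(years), 3))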

  14. Probabilistic Design Storm Method for Improved Flood Estimation in Ungauged Catchments

    NASA Astrophysics Data System (ADS)

    Berk, Mario; Špačková, Olga; Straub, Daniel

    2017-12-01

    The design storm approach with event-based rainfall-runoff models is a standard method for design flood estimation in ungauged catchments. The approach is conceptually simple and computationally inexpensive, but the underlying assumptions can lead to flawed design flood estimations. In particular, the implied average recurrence interval (ARI) neutrality between rainfall and runoff neglects uncertainty in other important parameters, leading to an underestimation of design floods. The selection of a single representative critical rainfall duration in the analysis leads to an additional underestimation of design floods. One way to overcome these nonconservative approximations is the use of a continuous rainfall-runoff model, which is associated with significant computational cost and requires rainfall input data that are often not readily available. As an alternative, we propose a novel Probabilistic Design Storm method that combines event-based flood modeling with basic probabilistic models and concepts from reliability analysis, in particular the First-Order Reliability Method (FORM). The proposed methodology overcomes the limitations of the standard design storm approach, while utilizing the same input information and models without excessive computational effort. Additionally, the Probabilistic Design Storm method allows deriving so-called design charts, which summarize representative design storm events (combinations of rainfall intensity and other relevant parameters) for floods with different return periods. These can be used to study the relationship between rainfall and runoff return periods. We demonstrate, investigate, and validate the method by means of an example catchment located in the Bavarian Pre-Alps, in combination with a simple hydrological model commonly used in practice.

  15. Common Cause Failure Modeling in Space Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Ring, Rob; Novack, Steven D.; Britton, Paul

    2015-01-01

    Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are dependent failures that can be caused, for example, by system environments, manufacturing, transportation, storage, maintenance, and assembly. Since there are many factors that contribute to CCFs, they can be reduced, but are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent failures and dependent CCFs. Because common cause failure data is limited in the aerospace industry, the Probabilistic Risk Assessment (PRA) Team at Bastion Technology Inc. is estimating CCF risk using generic data collected by the Nuclear Regulatory Commission (NRC). Consequently, common cause risk estimates based on this database, when applied to other industry applications, are highly uncertain. Therefore, it is important to account for a range of values for independent and CCF risk and to communicate the uncertainty to decision makers. There is an existing methodology for reducing CCF risk during design, which includes a checklist of 40+ factors grouped into eight categories. Using this checklist, an approach is being investigated to produce a beta factor estimate that quantitatively relates these factors. In this example, the checklist will be tailored to space launch vehicles, a quantitative approach will be described, and an example of the method will be presented.
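
    A minimal sketch of the standard beta-factor model that such an estimate feeds into: a fraction beta of a component's failure rate is treated as common cause, which then tends to dominate the failure rate of a redundant pair. The rates, beta value, and repair time below are illustrative, and the 1-out-of-2 formula is a rough textbook approximation, not the authors' model:

      def beta_factor_split(total_rate, beta):
          # Split a component's total failure rate into an independent part
          # and a common-cause part under the simple beta-factor model.
          return (1.0 - beta) * total_rate, beta * total_rate

      def one_out_of_two_failure_rate(total_rate, beta, repair_time):
          # Rough approximation for a 1-out-of-2 redundant pair: both units
          # failing independently within one repair window, plus the
          # common-cause term that defeats the redundancy directly.
          ind, ccf = beta_factor_split(total_rate, beta)
          return 2.0 * ind * (ind * repair_time) + ccf

      # Illustrative numbers: 1e-4 failures/hour per unit, beta = 0.05, 24 h repair.
      print(one_out_of_two_failure_rate(1e-4, 0.05, 24.0))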

  16. Integrated software health management for aerospace guidance, navigation, and control systems: A probabilistic reasoning approach

    NASA Astrophysics Data System (ADS)

    Mbaya, Timmy

    Embedded Aerospace Systems have to perform safety and mission critical operations in a real-time environment where timing and functional correctness are extremely important. Guidance, Navigation, and Control (GN&C) systems substantially rely on complex software interfacing with hardware in real-time; any faults in software or hardware, or their interaction could result in fatal consequences. Integrated Software Health Management (ISWHM) provides an approach for detection and diagnosis of software failures while the software is in operation. The ISWHM approach is based on probabilistic modeling of software and hardware sensors using a Bayesian network. To meet memory and timing constraints of real-time embedded execution, the Bayesian network is compiled into an Arithmetic Circuit, which is used for on-line monitoring. This type of system monitoring, using an ISWHM, provides automated reasoning capabilities that compute diagnoses in a timely manner when failures occur. This reasoning capability enables time-critical mitigating decisions and relieves the human agent from the time-consuming and arduous task of foraging through a multitude of isolated---and often contradictory---diagnosis data. For the purpose of demonstrating the relevance of ISWHM, modeling and reasoning is performed on a simple simulated aerospace system running on a real-time operating system emulator, the OSEK/Trampoline platform. Models for a small satellite and an F-16 fighter jet GN&C (Guidance, Navigation, and Control) system have been implemented. Analysis of the ISWHM is then performed by injecting faults and analyzing the ISWHM's diagnoses.

  17. Intelligent Hardware-Enabled Sensor and Software Safety and Health Management for Autonomous UAS

    NASA Technical Reports Server (NTRS)

    Rozier, Kristin Y.; Schumann, Johann; Ippolito, Corey

    2015-01-01

    Unmanned Aerial Systems (UAS) can only be deployed if they can effectively complete their mission and respond to failures and uncertain environmental conditions while maintaining safety with respect to other aircraft as well as humans and property on the ground. We propose to design a real-time, onboard system health management (SHM) capability to continuously monitor essential system components such as sensors, software, and hardware systems for detection and diagnosis of failures and violations of safety or performance rules during the flight of a UAS. Our approach to SHM is three-pronged, providing: (1) real-time monitoring of sensor and software signals; (2) signal analysis, preprocessing, and advanced on-the-fly temporal and Bayesian probabilistic fault diagnosis; (3) an unobtrusive, lightweight, read-only, low-power hardware realization using Field Programmable Gate Arrays (FPGAs) in order to avoid overburdening limited computing resources or costly re-certification of flight software due to instrumentation. No currently available SHM capabilities (or combinations of currently existing SHM capabilities) come anywhere close to satisfying these three criteria, yet NASA will require such intelligent, hardware-enabled sensor and software safety and health management for introducing autonomous UAS into the National Airspace System (NAS). We propose a novel approach of creating modular building blocks for combining responsive runtime monitoring of temporal logic system safety requirements with model-based diagnosis and Bayesian network-based probabilistic analysis. Our proposed research program includes both developing this novel approach and demonstrating its capabilities using the NASA Swift UAS as a demonstration platform.

  18. PRA (Probabilistic Risk Assessments) Participation versus Validation

    NASA Technical Reports Server (NTRS)

    DeMott, Diana; Banke, Richard

    2013-01-01

    Probabilistic Risk Assessments (PRAs) are performed for projects or programs where the consequences of failure are highly undesirable. PRAs primarily address the level of risk those projects or programs pose during operations. PRAs are often developed after the design has been completed. Design and operational details used to develop models include approved and accepted design information regarding equipment, components, systems, and failure data. This methodology basically validates the risk parameters of the project or system design. For high-risk or high-dollar projects, using PRA methodologies during the design process provides new opportunities to influence the design early in the project life cycle to identify, eliminate, or mitigate potential risks. Identifying risk drivers before the design has been set allows the design engineers to understand the inherent risk of their current design and consider potential risk mitigation changes. This can become an iterative process where the PRA model can be used to determine if the mitigation technique is effective in reducing risk. This can result in more efficient and cost-effective design changes. PRA methodology can be used to assess the risk of design alternatives and can demonstrate how major design changes or program modifications impact the overall program or project risk. PRA has been used for the last two decades to validate risk predictions and acceptability. By providing risk information which can positively influence final system and equipment design, the PRA tool can also participate in design development, providing a safe and cost-effective product.

  19. Hazard function theory for nonstationary natural hazards

    NASA Astrophysics Data System (ADS)

    Read, L.; Vogel, R. M.

    2015-12-01

    Studies from the natural hazards literature indicate that many natural processes, including wind speeds, landslides, wildfires, precipitation, streamflow and earthquakes, show evidence of nonstationary behavior such as trends in magnitudes through time. Traditional probabilistic analysis of natural hazards based on partial duration series (PDS) generally assumes stationarity in the magnitudes and arrivals of events, i.e. that the probability of exceedance is constant through time. Given evidence of trends and the consequent expected growth in devastating impacts from natural hazards across the world, new methods are needed to characterize their probabilistic behavior. The field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (x) with its failure time series (t), enabling computation of corresponding average return periods and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose PDS magnitudes are assumed to follow the widely applied Poisson-GP model. We derive a 2-parameter Generalized Pareto hazard model and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard event series x, with corresponding failure time series t, should have application to a wide class of natural hazards.
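
    A small sketch of the central bookkeeping in hazard function analysis for a nonstationary event series: given a year-by-year exceedance probability, compute the reliability over a planning horizon and a (truncated) average waiting time to the first exceedance. The 2%-per-year trend and starting probability are illustrative, and this is not the Poisson-GP hazard model derived in the paper:

      import numpy as np

      def reliability_nonstationary(p_exceed):
          # Probability of no exceedance over the horizon when the annual
          # exceedance probability changes from year to year.
          return float(np.prod(1.0 - np.asarray(p_exceed)))

      def average_waiting_time(p_exceed):
          # Mean of the first-exceedance year over the supplied horizon,
          # normalized over that horizon (a truncated average return period).
          p = np.asarray(p_exceed)
          survive = np.cumprod(1.0 - p)
          # P(first exceedance in year t) = p_t * prod_{s<t}(1 - p_s)
          first = p * np.concatenate(([1.0], survive[:-1]))
          years = np.arange(1, len(p) + 1)
          return float(np.sum(years * first) / np.sum(first))

      # Illustrative: a "100-year" hazard whose exceedance probability grows 2%/year.
      p_t = 0.01 * 1.02 ** np.arange(100)
      print("50-year reliability:", reliability_nonstationary(p_t[:50]))
      print("average waiting time (yr, truncated):", average_waiting_time(p_t))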

  20. Quantum probabilistic logic programming

    NASA Astrophysics Data System (ADS)

    Balu, Radhakrishnan

    2015-05-01

    We describe a quantum mechanics based logic programming language that supports Horn clauses, random variables, and covariance matrices to express and solve problems in probabilistic logic. The Horn clauses of the language wrap random variables, including infinite-valued ones, to express probability distributions and statistical correlations, a powerful feature for capturing relationships between distributions that are not independent. The expressive power of the language is based on a mechanism to implement statistical ensembles and to solve the underlying SAT instances using quantum mechanical machinery. We exploit the fact that classical random variables have quantum decompositions to build the Horn clauses. We establish the semantics of the language in a rigorous fashion by considering an existing probabilistic logic language called PRISM with classical probability measures defined on the Herbrand base and extending it to the quantum context. In the classical case, H-interpretations form the sample space and probability measures defined on them lead to a consistent definition of probabilities for well-formed formulae. In the quantum counterpart, we define probability amplitudes on H-interpretations, facilitating model generation and verification via quantum mechanical superpositions and entanglements. We cast the well-formed formulae of the language as quantum mechanical observables, thus providing an elegant interpretation for their probabilities. We discuss several examples that combine statistical ensembles and predicates of first-order logic to reason about situations involving uncertainty.

  1. A Probabilistic Model for Estimating the Depth and Threshold Temperature of C-fiber Nociceptors

    PubMed Central

    Dezhdar, Tara; Moshourab, Rabih A.; Fründ, Ingo; Lewin, Gary R.; Schmuker, Michael

    2015-01-01

    The subjective experience of thermal pain follows the detection and encoding of noxious stimuli by primary afferent neurons called nociceptors. However, nociceptor morphology has been hard to access and the mechanisms of signal transduction remain unresolved. In order to understand how heat transducers in nociceptors are activated in vivo, it is important to estimate the temperatures that directly activate the skin-embedded nociceptor membrane. Hence, the nociceptor’s temperature threshold must be estimated, which in turn will depend on the depth at which transduction happens in the skin. Since the temperature at the receptor cannot be accessed experimentally, such an estimation can currently only be achieved through modeling. However, the current state-of-the-art model to estimate temperature at the receptor suffers from the fact that it cannot account for the natural stochastic variability of neuronal responses. We improve this model using a probabilistic approach which accounts for uncertainties and potential noise in the system. Using a data set of 24 C-fibers recorded in vitro, we show that, even without detailed knowledge of the bio-thermal properties of the system, the probabilistic model that we propose here is capable of providing estimates of threshold and depth in cases where the classical method fails. PMID:26638830

  2. Probabilistic seasonal Forecasts to deterministic Farm Level Decisions: Innovative Approach

    NASA Astrophysics Data System (ADS)

    Mwangi, M. W.

    2015-12-01

    Climate change and vulnerability are major challenges in ensuring household food security. Climate information services have the potential to cushion rural households from extreme climate risks. However, the probabilistic nature of climate information products is not easily understood by the majority of smallholder farmers. Despite this, climate information has proved to be a valuable climate risk adaptation strategy at the farm level. This calls for innovative ways to help farmers understand and apply climate information services to inform their farm level decisions. The study endeavored to co-design and test appropriate innovation systems for climate information services uptake and scale-up necessary for achieving climate risk development. In addition, it also determined the conditions necessary to support the effective performance of the proposed innovation system. Data and information sources included a systematic literature review, secondary sources, government statistics, focus group discussions, household surveys and semi-structured interviews. Data was analyzed using both quantitative and qualitative data analysis techniques. Quantitative data was analyzed using the Statistical Package for Social Sciences (SPSS) software. Qualitative data was analyzed using qualitative techniques, which involved establishing the categories and themes, relationships/patterns and conclusions in line with the study objectives. Sustainable livelihoods, reduced household poverty and climate change resilience were the impacts that resulted from the study.

  3. A tesselated probabilistic representation for spatial robot perception and navigation

    NASA Technical Reports Server (NTRS)

    Elfes, Alberto

    1989-01-01

    The ability to recover robust spatial descriptions from sensory information and to efficiently utilize these descriptions in appropriate planning and problem-solving activities are crucial requirements for the development of more powerful robotic systems. Traditional approaches to sensor interpretation, with their emphasis on geometric models, are of limited use for autonomous mobile robots operating in and exploring unknown and unstructured environments. Here, researchers present a new approach to robot perception that addresses such scenarios using a probabilistic tesselated representation of spatial information called the Occupancy Grid. The Occupancy Grid is a multi-dimensional random field that maintains stochastic estimates of the occupancy state of each cell in the grid. The cell estimates are obtained by interpreting incoming range readings using probabilistic models that capture the uncertainty in the spatial information provided by the sensor. A Bayesian estimation procedure allows the incremental updating of the map using readings taken from several sensors over multiple points of view. An overview of the Occupancy Grid framework is given, and its application to a number of problems in mobile robot mapping and navigation is illustrated. It is argued that a number of robotic problem-solving activities can be performed directly on the Occupancy Grid representation. Some parallels are drawn between operations on Occupancy Grids and related image processing operations.
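
    A minimal sketch, assuming a hypothetical inverse sensor model, of the incremental Bayesian cell update that the Occupancy Grid framework describes; updates are accumulated in log-odds form so that readings from several sensors and viewpoints can be fused.

        import numpy as np

        grid = np.zeros((20, 20))                 # log-odds of occupancy; 0.0 means P(occupied) = 0.5 prior

        def update_cell(grid, ij, p_occ_given_reading):
            # incremental Bayesian update of one cell in log-odds space
            l = np.log(p_occ_given_reading / (1.0 - p_occ_given_reading))
            grid[ij] += l

        # e.g., a range return suggests cell (5, 7) is occupied and the cells before it along the beam are free
        update_cell(grid, (5, 7), 0.8)
        for j in range(0, 7):
            update_cell(grid, (5, j), 0.3)

        prob = 1.0 / (1.0 + np.exp(-grid))        # convert back to occupancy probabilities
        print(prob[5, 5:8])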

  4. Identifiability of tree-child phylogenetic networks under a probabilistic recombination-mutation model of evolution.

    PubMed

    Francis, Andrew; Moulton, Vincent

    2018-06-07

    Phylogenetic networks are an extension of phylogenetic trees which are used to represent evolutionary histories in which reticulation events (such as recombination and hybridization) have occurred. A central question for such networks is that of identifiability, which essentially asks under what circumstances can we reliably identify the phylogenetic network that gave rise to the observed data? Recently, identifiability results have appeared for networks relative to a model of sequence evolution that generalizes the standard Markov models used for phylogenetic trees. However, these results are quite limited in terms of the complexity of the networks that are considered. In this paper, by introducing an alternative probabilistic model for evolution along a network that is based on some ground-breaking work by Thatte for pedigrees, we are able to obtain an identifiability result for a much larger class of phylogenetic networks (essentially the class of so-called tree-child networks). To prove our main theorem, we derive some new results for identifying tree-child networks combinatorially, and then adapt some techniques developed by Thatte for pedigrees to show that our combinatorial results imply identifiability in the probabilistic setting. We hope that the introduction of our new model for networks could lead to new approaches to reliably construct phylogenetic networks. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Feature aided Monte Carlo probabilistic data association filter for ballistic missile tracking

    NASA Astrophysics Data System (ADS)

    Ozdemir, Onur; Niu, Ruixin; Varshney, Pramod K.; Drozd, Andrew L.; Loe, Richard

    2011-05-01

    The problem of ballistic missile tracking in the presence of clutter is investigated. The probabilistic data association filter (PDAF) is utilized as the basic filtering algorithm. We propose to use sequential Monte Carlo methods, i.e., particle filters, aided with amplitude information (AI) in order to improve the tracking performance of a single target in clutter when severe nonlinearities exist in the system. We call this approach "Monte Carlo probabilistic data association filter with amplitude information (MCPDAF-AI)." Furthermore, we formulate a realistic problem in the sense that we use simulated radar cross section (RCS) data for a missile warhead and a cylinder chaff using Lucernhammer, a state-of-the-art electromagnetic signature prediction software package, to model target and clutter amplitude returns as additional amplitude features which help to improve data association and tracking performance. A performance comparison is carried out between the extended Kalman filter (EKF) and the particle filter under various scenarios using single and multiple sensors. The results show that, when only one sensor is used, the MCPDAF performs significantly better than the EKF in terms of tracking accuracy under severe nonlinear conditions for ballistic missile tracking applications. However, when the number of sensors is increased, even under severe nonlinear conditions, the EKF performs as well as the MCPDAF.

  6. Data assimilation for unsaturated flow models with restart adaptive probabilistic collocation based Kalman filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Li, Weixuan; Zeng, Lingzao

    2016-06-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee the accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality". When the system nonlinearity is strong and the number of parameters is large, PCKF could be even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and the active PCE basis functions are adaptively selected. The "restart" technique is used to eliminate the inconsistency between model parameters and states. The performance of RAPCKF is tested with numerical cases of unsaturated flow models. It is shown that RAPCKF is more efficient than EnKF with the same computational cost. Compared with the traditional PCKF, the RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia J.

    We report that many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Lastly, although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event.

  8. Probabilistic Simulation of Combined Thermo-Mechanical Cyclic Fatigue in Composites

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2011-01-01

    A methodology to compute probabilistically-combined thermo-mechanical fatigue life of polymer matrix laminated composites has been developed and is demonstrated. Matrix degradation effects caused by long-term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress-dependent multifactor-interaction relationship developed at NASA Glenn Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability-integration method is used to compute the probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) are computed, and their significance in the reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/-45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes, has been studied. The results show that, at low mechanical-cyclic loads and low thermal-cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness, whereas at high mechanical-cyclic loads and high thermal-cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of the matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.

  9. Probabilistic Simulation for Combined Cycle Fatigue in Composites

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2010-01-01

    A methodology to compute probabilistic fatigue life of polymer matrix laminated composites has been developed and demonstrated. Matrix degradation effects caused by long term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress dependent multifactor interaction relationship developed at NASA Glenn Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability integration method is used to compute the probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) are computed, and their significance in the reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/- 45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes, has been studied. The results show that, at low mechanical cyclic loads and low thermal cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness, whereas at high mechanical cyclic loads and high thermal cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of the matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.

  10. Probabilistic Simulation of Combined Thermo-Mechanical Cyclic Fatigue in Composites

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2010-01-01

    A methodology to compute probabilistically-combined thermo-mechanical fatigue life of polymer matrix laminated composites has been developed and is demonstrated. Matrix degradation effects caused by long-term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress-dependent multifactor-interaction relationship developed at NASA Glenn Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability-integration method is used to compute the probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) are computed, and their significance in the reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/-45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes, has been studied. The results show that, at low mechanical-cyclic loads and low thermal-cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness, whereas at high mechanical-cyclic loads and high thermal-cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of the matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.

  11. Free-Swinging Failure Tolerance for Robotic Manipulators. Degree awarded by Purdue Univ.

    NASA Technical Reports Server (NTRS)

    English, James

    1997-01-01

    Under this GSRP fellowship, software-based failure-tolerance techniques were developed for robotic manipulators. The focus was on failures characterized by the loss of actuator torque at a joint, called free-swinging failures. The research results spanned many aspects of the free-swinging failure-tolerance problem, from preparing for an expected failure to discovery of postfailure capabilities to establishing efficient methods to realize those capabilities. Developed algorithms were verified using computer-based dynamic simulations, and these were further verified using hardware experiments at Johnson Space Center.

  12. Availability and mean time between failures of redundant systems with random maintenance of subsystems

    NASA Technical Reports Server (NTRS)

    Schneeweiss, W.

    1977-01-01

    It is shown how the availability and MTBF (Mean Time Between Failures) of a redundant system with subsystems maintained at the points of so-called stationary renewal processes can be determined from the distributions of the intervals between maintenance actions and of the failure-free operating intervals of the subsystems. The results make it possible, for example, to determine the frequency and duration of hidden failure states in computers which are incidentally corrected during the repair of observed failures.
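
    A minimal Monte Carlo sketch of the situation described above, using hypothetical per-step failure and maintenance probabilities and a simple discrete-time approximation rather than the paper's renewal-theoretic solution: two redundant units fail silently and are restored only at random maintenance epochs, and the run estimates system availability and the fraction of time spent in a hidden single-unit failure state.

        import random

        fail_p, maint_p, steps = 0.001, 0.01, 1_000_000   # hypothetical per-step probabilities
        up = [True, True]
        sys_up_steps = hidden_steps = 0

        for _ in range(steps):
            for i in range(2):
                if up[i] and random.random() < fail_p:
                    up[i] = False                          # unnoticed (hidden) failure
                if not up[i] and random.random() < maint_p:
                    up[i] = True                           # random maintenance restores the unit
            if any(up):
                sys_up_steps += 1                          # 1-out-of-2 system is still up
            if up.count(True) == 1:
                hidden_steps += 1                          # one unit sits in a hidden failed state

        print("availability ~", sys_up_steps / steps)
        print("fraction of time with a hidden failure ~", hidden_steps / steps)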

  13. Fuzzy Bayesian Network-Bow-Tie Analysis of Gas Leakage during Biomass Gasification

    PubMed Central

    Yan, Fang; Xu, Kaili; Yao, Xiwen; Li, Yang

    2016-01-01

    Biomass gasification technology has been rapidly developed recently, but fire and poisoning accidents caused by gas leakage restrict the development and promotion of biomass gasification. Therefore, probabilistic safety assessment (PSA) is necessary for biomass gasification systems. Accordingly, a Bayesian network-bow-tie (BN-bow-tie) analysis was proposed by mapping bow-tie analysis into a Bayesian network (BN). Causes of gas leakage and the accidents triggered by gas leakage can be obtained by bow-tie analysis, and the BN was used to identify the critical nodes of accidents by introducing three corresponding importance measures. Meanwhile, the occurrence probabilities of failure are needed in PSA. In view of the insufficient failure data for biomass gasification, the occurrence probabilities of failure that cannot be obtained from standard reliability data sources were estimated by fuzzy methods based on expert judgment. An improved approach that considers expert weighting to aggregate fuzzy numbers, including triangular and trapezoidal numbers, was proposed, and the occurrence probabilities of failure were obtained. Finally, safety measures were indicated based on the identified critical nodes. The theoretical one-year occurrence probabilities of gas leakage and of the accidents caused by it were reduced to 1/10.3 of their original values by these safety measures. PMID:27463975
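
    A minimal sketch of the expert-aggregation step described above, with hypothetical expert weights and opinions: triangular fuzzy numbers are combined with expert weights, defuzzified by the centroid, and the resulting fuzzy possibility score is mapped to an occurrence probability with an Onisawa-style transform (one common choice in fuzzy fault tree work; the paper's exact aggregation may differ).

        opinions = [(0.10, 0.20, 0.30), (0.20, 0.30, 0.40), (0.15, 0.25, 0.35)]  # triangular fuzzy numbers (a, b, c)
        weights = [0.5, 0.3, 0.2]                                                # expert weights, sum to 1

        # weighted aggregation of the triangular numbers, component by component
        agg = tuple(sum(w * tri[k] for w, tri in zip(weights, opinions)) for k in range(3))

        fps = sum(agg) / 3.0                          # centroid defuzzification of a triangular number
        K = ((1.0 - fps) / fps) ** (1.0 / 3.0) * 2.301
        failure_prob = 10.0 ** (-K) if fps > 0 else 0.0
        print("aggregated fuzzy number:", agg, "-> occurrence probability ~", failure_prob)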

  14. Fatigue of restorative materials.

    PubMed

    Baran, G; Boberick, K; McCool, J

    2001-01-01

    Failure due to fatigue manifests itself in dental prostheses and restorations as wear, fractured margins, delaminated coatings, and bulk fracture. Mechanisms responsible for fatigue-induced failure depend on material ductility: Brittle materials are susceptible to catastrophic failure, while ductile materials utilize their plasticity to reduce stress concentrations at the crack tip. Because of the expense associated with the replacement of failed restorations, there is a strong desire on the part of basic scientists and clinicians to evaluate the resistance of materials to fatigue in laboratory tests. Test variables include fatigue-loading mode and test environment, such as soaking in water. The outcome variable is typically fracture strength, and these data typically fit the Weibull distribution. Analysis of fatigue data permits predictive inferences to be made concerning the survival of structures fabricated from restorative materials under specified loading conditions. Although many dental-restorative materials are routinely evaluated, only limited use has been made of fatigue data collected in vitro: Wear of materials and the survival of porcelain restorations have been modeled by both fracture mechanics and probabilistic approaches. A need still exists for a clinical failure database and for the development of valid test methods for the evaluation of composite materials.
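
    A minimal sketch of the Weibull treatment mentioned above, using synthetic strength values rather than measured data: a two-parameter Weibull distribution is fitted to fracture strengths and used to predict survival probability at a hypothetical service stress.

        from scipy.stats import weibull_min

        # synthetic fracture strengths in MPa (placeholders, not experimental data)
        strengths = weibull_min.rvs(8.0, loc=0, scale=120.0, size=30, random_state=0)

        shape, loc, scale = weibull_min.fit(strengths, floc=0)   # two-parameter fit, location fixed at 0
        service_stress = 80.0                                    # hypothetical applied stress, MPa
        survival = weibull_min.sf(service_stress, shape, loc, scale)

        print(f"Weibull modulus m ~ {shape:.1f}, characteristic strength ~ {scale:.1f} MPa")
        print(f"P(survive {service_stress} MPa) ~ {survival:.3f}")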

  15. Detecting failure of climate predictions

    USGS Publications Warehouse

    Runge, Michael C.; Stroeve, Julienne C.; Barrett, Andrew P.; McDonald-Madden, Eve

    2016-01-01

    The practical consequences of climate change challenge society to formulate responses that are more suited to achieving long-term objectives, even if those responses have to be made in the face of uncertainty [1, 2]. Such a decision-analytic focus uses the products of climate science as probabilistic predictions about the effects of management policies [3]. Here we present methods to detect when climate predictions are failing to capture the system dynamics. For a single model, we measure goodness of fit based on the empirical distribution function, and define failure when the distribution of observed values significantly diverges from the modelled distribution. For a set of models, the same statistic can be used to provide relative weights for the individual models, and we define failure when there is no linear weighting of the ensemble models that produces a satisfactory match to the observations. Early detection of failure of a set of predictions is important for improving model predictions and the decisions based on them. We show that these methods would have detected a range shift in northern pintail 20 years before it was actually discovered, and are increasingly giving more weight to those climate models that forecast a September ice-free Arctic by 2055.
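
    A minimal sketch in the spirit of the single-model case above, using synthetic data and a Kolmogorov-Smirnov test as a simple stand-in for the paper's empirical-distribution-function statistic: observations are compared with the model's predictive distribution, and a failure is flagged when the divergence is significant.

        import numpy as np
        from scipy.stats import kstest, norm

        rng = np.random.default_rng(1)
        observed = rng.normal(loc=0.8, scale=1.0, size=40)     # synthetic observations with a shift

        # suppose the model predicts a standard normal distribution for this quantity
        stat, p_value = kstest(observed, norm(loc=0.0, scale=1.0).cdf)
        print(f"EDF (KS) statistic {stat:.3f}, p-value {p_value:.4f}")
        print("prediction failure detected" if p_value < 0.05 else "no failure detected")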

  16. Enhanced Component Performance Study: Emergency Diesel Generators 1998–2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2015-11-01

    This report presents an enhanced performance evaluation of emergency diesel generators (EDGs) at U.S. commercial nuclear power plants. This report evaluates component performance over time using (1) Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES) data from 1998 through 2014 and (2) maintenance unavailability (UA) performance data from Mitigating Systems Performance Index (MSPI) Basis Document data from 2002 through 2014. The objective is to show estimates of current failure probabilities and rates related to EDGs, trend these data on an annual basis, determine if the current data are consistent with the probability distributions currently recommended for use in NRC probabilistic risk assessments, show how the reliability data differ for different EDG manufacturers and for EDGs with different ratings; and summarize the subcomponents, causes, detection methods, and recovery associated with each EDG failure mode. Engineering analyses were performed with respect to time period and failure mode without regard to the actual number of EDGs at each plant. The factors analyzed are: sub-component, failure cause, detection method, recovery, manufacturer, and EDG rating. Six trends with varying degrees of statistical significance were identified in the data.

  17. Quantifying Safety Margin Using the Risk-Informed Safety Margin Characterization (RISMC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, David; Bucknor, Matthew; Brunett, Acacia

    2015-04-26

    The Risk-Informed Safety Margin Characterization (RISMC), developed by Idaho National Laboratory as part of the Light-Water Reactor Sustainability Project, utilizes a probabilistic safety margin comparison between a load and capacity distribution, rather than a deterministic comparison between two values, as is usually done in best-estimate plus uncertainty analyses. The goal is to determine the failure probability, or in other words, the probability of the system load equaling or exceeding the system capacity. While this method has been used in pilot studies, there has been little work conducted investigating the statistical significance of the resulting failure probability. In particular, it is difficult to determine how many simulations are necessary to properly characterize the failure probability. This work uses classical (frequentist) statistics and confidence intervals to examine the impact in statistical accuracy when the number of simulations is varied. Two methods are proposed to establish confidence intervals related to the failure probability established using a RISMC analysis. The confidence interval provides information about the statistical accuracy of the method utilized to explore the uncertainty space, and offers a quantitative method to gauge the increase in statistical accuracy due to performing additional simulations.
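
    A minimal sketch of the statistical question raised above: if k of n RISMC-style simulations show load exceeding capacity, a classical (Clopper-Pearson) confidence interval indicates how tight the failure-probability estimate is and how it narrows with additional simulations. The counts are hypothetical, and the paper's two proposed interval methods may differ from this one.

        from scipy.stats import beta

        def clopper_pearson(k, n, alpha=0.05):
            # exact binomial confidence interval for a failure probability estimated as k/n
            lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
            hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
            return lo, hi

        for n in (100, 1_000, 10_000):
            k = round(0.01 * n)                     # suppose ~1% of runs show load exceeding capacity
            lo, hi = clopper_pearson(k, n)
            print(f"n={n:>6}: p_hat={k/n:.4f}, 95% CI=({lo:.4f}, {hi:.4f})")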

  18. Cabin Environment Physics Risk Model

    NASA Technical Reports Server (NTRS)

    Mattenberger, Christopher J.; Mathias, Donovan Leigh

    2014-01-01

    This paper presents a Cabin Environment Physics Risk (CEPR) model that predicts the time for an initial failure of Environmental Control and Life Support System (ECLSS) functionality to propagate into a hazardous environment and trigger a loss-of-crew (LOC) event. This physics-of-failure model allows a probabilistic risk assessment of a crewed spacecraft to account for the cabin environment, which can serve as a buffer to protect the crew during an abort from orbit and ultimately enable a safe return. The results of the CEPR model replace the assumption that failure of the crew critical ECLSS functionality causes LOC instantly, and provide a more accurate representation of the spacecraft's risk posture. The instant-LOC assumption is shown to be excessively conservative and, moreover, can impact the relative risk drivers identified for the spacecraft. This, in turn, could lead the design team to allocate mass for equipment to reduce overly conservative risk estimates in a suboptimal configuration, which inherently increases the overall risk to the crew. For example, available mass could be poorly used to add redundant ECLSS components that have a negligible benefit but appear to make the vehicle safer due to poor assumptions about the propagation time of ECLSS failures.

  19. Some Advances in Downscaling Probabilistic Climate Forecasts for Agricultural Decision Support

    NASA Astrophysics Data System (ADS)

    Han, E.; Ines, A.

    2015-12-01

    Seasonal climate forecasts, commonly provided in tercile-probabilities format (below-, near- and above-normal), need to be translated into more meaningful information for decision support of practitioners in agriculture. In this paper, we will present two novel approaches to temporally downscale probabilistic seasonal climate forecasts: one non-parametric and one parametric method. First, the non-parametric downscaling approach called FResampler1 uses the concept of 'conditional block sampling' of weather data to create daily weather realizations of a tercile-based seasonal climate forecast. FResampler1 randomly draws time series of daily weather parameters (e.g., rainfall, maximum and minimum temperature and solar radiation) from historical records for the season of interest, from years that belong to a certain rainfall tercile category (below-, near- or above-normal). In this way, FResampler1 preserves the covariance between rainfall and other weather parameters, as if conditionally sampling maximum and minimum temperature and solar radiation depending on whether that day is wet or dry. The second approach, called predictWTD, is a parametric method based on a conditional stochastic weather generator. The tercile-based seasonal climate forecast is converted into a theoretical forecast cumulative probability curve. The deviates for each percentile are then converted into rainfall amount, frequency or intensity to downscale the 'full' distribution of probabilistic seasonal climate forecasts. Those seasonal deviates are then disaggregated on a monthly basis and used to constrain the downscaling of forecast realizations at different percentile values of the theoretical forecast curve. In addition to the theoretical basis of the approaches, we will discuss their sensitivity to the length of the historical record and the sample size. Their potential applications for managing climate-related risks in agriculture will also be shown through case studies based on actual seasonal climate forecasts for rice cropping in the Philippines and maize cropping in India and Kenya.
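
    A minimal sketch of the 'conditional block sampling' idea behind FResampler1, with hypothetical forecast probabilities, year groupings and records (not the authors' implementation): a tercile category is drawn according to the forecast, and an entire historical season from a year in that category is resampled as a block, which preserves the covariance among the daily weather variables.

        import random

        forecast = {"below": 0.2, "near": 0.3, "above": 0.5}      # tercile-format seasonal forecast
        years_by_tercile = {                                      # historical years grouped by rainfall tercile
            "below": [1987, 1992, 2002],
            "near": [1990, 1999, 2006],
            "above": [1988, 1998, 2010],
        }

        def fresample(n_realizations, daily_records):
            # daily_records: dict year -> season-long list of daily weather records
            realizations = []
            for _ in range(n_realizations):
                cat = random.choices(list(forecast), weights=list(forecast.values()))[0]
                year = random.choice(years_by_tercile[cat])
                realizations.append(daily_records[year])          # whole-season block keeps covariances
            return realizations

        toy_records = {y: [f"{y}-day{d}" for d in range(1, 4)]    # stand-in for daily (rain, tmax, tmin, srad) rows
                       for ys in years_by_tercile.values() for y in ys}
        print(fresample(2, toy_records))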

  20. 37 CFR 41.47 - Oral hearing.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... on and call the Board's attention to a recent court or Board opinion which could have an effect on... and one copy to be added to the Record. (l) Failure to attend oral hearing. Failure of an appellant to...

  1. Holographic multiverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garriga, J.; Vilenkin, A., E-mail: jaume.garriga@ub.edu, E-mail: vilenkin@cosmos.phy.tufts.edu

    2009-01-15

    We explore the idea that the dynamics of the inflationary multiverse is encoded in its future boundary, where it is described by a lower dimensional theory which is conformally invariant in the UV. We propose that a measure for the multiverse, which is needed in order to extract quantitative probabilistic predictions, can be derived in terms of the boundary theory by imposing a UV cutoff. In the inflationary bulk, this is closely related (though not identical) to the so-called scale factor cutoff measure.

  2. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel A.

    2016-11-01

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N =1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.

  3. Fault tree analysis for system modeling in case of intentional EMI

    NASA Astrophysics Data System (ADS)

    Genender, E.; Mleczko, M.; Döring, O.; Garbe, H.; Potthast, S.

    2011-08-01

    The complexity of modern systems on the one hand and the rising threat of intentional electromagnetic interference (IEMI) on the other hand increase the necessity for systematic risk analysis. Most of the problems cannot be treated deterministically since slight changes in the configuration (source, position, polarization, ...) can dramatically change the outcome of an event. For that purpose, methods known from probabilistic risk analysis can be applied. One of the most common approaches is the fault tree analysis (FTA). The FTA is used to determine the system failure probability and also the main contributors to its failure. In this paper the fault tree analysis is introduced and a possible application of that method is shown using a small computer network as an example. The constraints of this method are explained and conclusions for further research are drawn.
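
    A minimal sketch of the fault tree evaluation described above, applied to a toy network with hypothetical, independent basic-event probabilities: the top event occurs if the switch fails or both redundant servers fail, and a crude importance ranking shows the main contributors.

        def or_gate(*p):   # at least one independent input event occurs
            q = 1.0
            for pi in p:
                q *= (1.0 - pi)
            return 1.0 - q

        def and_gate(*p):  # all independent input events occur
            q = 1.0
            for pi in p:
                q *= pi
            return q

        p_switch, p_server = 0.02, 0.05
        p_top = or_gate(p_switch, and_gate(p_server, p_server))
        print("top event probability:", p_top)

        # crude importance measure: drop in the top-event probability if a basic event never occurs
        print("switch contribution:", p_top - or_gate(0.0, and_gate(p_server, p_server)))
        print("server contribution:", p_top - or_gate(p_switch, and_gate(0.0, p_server)))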

  4. On splice site prediction using weight array models: a comparison of smoothing techniques

    NASA Astrophysics Data System (ADS)

    Taher, Leila; Meinicke, Peter; Morgenstern, Burkhard

    2007-11-01

    In most eukaryotic genes, protein-coding exons are separated by non-coding introns which are removed from the primary transcript by a process called "splicing". The positions where introns are cut and exons are spliced together are called "splice sites". Thus, computational prediction of splice sites is crucial for gene finding in eukaryotes. Weight array models are a powerful probabilistic approach to splice site detection. Parameters for these models are usually derived from m-tuple frequencies in trusted training data and subsequently smoothed to avoid zero probabilities. In this study we compare three different ways of parameter estimation for m-tuple frequencies, namely (a) non-smoothed probability estimation, (b) standard pseudo counts and (c) a Gaussian smoothing procedure that we recently developed.
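
    A minimal sketch of the parameter estimation discussed above, using toy training sequences: a first-order weight array model whose conditional probabilities are derived from m-tuple (here dinucleotide) counts and smoothed with standard pseudo counts, i.e. option (b) in the comparison.

        from collections import defaultdict

        train = ["CAGGTAAGT", "AAGGTGAGT", "CAGGTAAGG", "CTGGTAAGT"]   # toy donor-site windows
        L, alphabet, pseudo = len(train[0]), "ACGT", 1.0

        # counts[i][(prev, cur)] = how often base `cur` follows `prev` at position i
        counts = [defaultdict(float) for _ in range(L)]
        for seq in train:
            for i, cur in enumerate(seq):
                prev = seq[i - 1] if i > 0 else "^"                    # '^' marks the sequence start
                counts[i][(prev, cur)] += 1.0

        def prob(i, prev, cur):
            # conditional probability at position i with pseudo-count smoothing
            num = counts[i][(prev, cur)] + pseudo
            den = sum(counts[i][(prev, b)] for b in alphabet) + pseudo * len(alphabet)
            return num / den

        def score(seq):
            p = 1.0
            for i, cur in enumerate(seq):
                p *= prob(i, seq[i - 1] if i > 0 else "^", cur)
            return p

        print(score("CAGGTAAGT"), score("CCCCCCCCC"))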

  5. Optimized Vertex Method and Hybrid Reliability

    NASA Technical Reports Server (NTRS)

    Smith, Steven A.; Krishnamurthy, T.; Mason, B. H.

    2002-01-01

    A method of calculating the fuzzy response of a system is presented. This method, called the Optimized Vertex Method (OVM), is based upon the vertex method but requires considerably fewer function evaluations. The method is demonstrated by calculating the response membership function of strain-energy release rate for a bonded joint with a crack. The possibility of failure of the bonded joint was determined over a range of loads. After completing the possibilistic analysis, the possibilistic (fuzzy) membership functions were transformed to probability density functions and the probability of failure of the bonded joint was calculated. This approach is called a possibility-based hybrid reliability assessment. The possibility and probability of failure are presented and compared to a Monte Carlo Simulation (MCS) of the bonded joint.
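
    A minimal sketch of the classical vertex method that the OVM improves upon, with a toy response function and hypothetical triangular fuzzy inputs: at each alpha-cut the inputs become intervals, the response is evaluated at every vertex (combination of interval endpoints), and the minimum and maximum define the fuzzy response at that level.

        from itertools import product

        def tri_cut(a, b, c, alpha):
            # interval of a triangular fuzzy number (a, b, c) at membership level alpha
            return (a + alpha * (b - a), c - alpha * (c - b))

        def response(x, y):
            # toy stand-in for a response quantity such as strain-energy release rate
            return x ** 2 + 2.0 * x * y

        fuzzy_inputs = [(1.0, 2.0, 3.0), (0.5, 1.0, 2.0)]        # hypothetical triangular fuzzy inputs
        for alpha in (0.0, 0.5, 1.0):
            intervals = [tri_cut(*f, alpha) for f in fuzzy_inputs]
            values = [response(x, y) for x, y in product(*intervals)]
            print(f"alpha={alpha}: response interval [{min(values):.2f}, {max(values):.2f}]")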

  6. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2009-01-01

    A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. Solution to stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traced out an inverted-S-shaped graph. The center of the inverted-S graph corresponded to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas it can be reduced to a small extent for a raked wingtip. The SDO capability is obtained by combining three codes: (1) The MSC/Nastran code was the deterministic analysis tool, (2) The fast probabilistic integrator, or the FPI module of the NESSUS software, was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.

  7. Cost-effectiveness of left ventricular assist devices for patients with end-stage heart failure: analysis of the French hospital discharge database.

    PubMed

    Tadmouri, Abir; Blomkvist, Josefin; Landais, Cécile; Seymour, Jerome; Azmoun, Alexandre

    2018-02-01

    Although left ventricular assist devices (LVADs) are currently approved for coverage and reimbursement in France, no French cost-effectiveness (CE) data are available to support this decision. This study aimed at estimating the CE of LVAD compared with medical management in the French health system. Individual patient data from the 'French hospital discharge database' (Medicalization of information systems program) were analysed using the Kaplan-Meier method. Outcomes were time to death, time to heart transplantation (HTx), and time to death after HTx. A micro-costing method was used to calculate the monthly costs extracted from the Program for the Medicalization of Information Systems. A multistate Markov monthly cycle model was developed to assess CE. The analysis over a lifetime horizon was performed from the perspective of the French healthcare payer; discount rates were 4%. Probabilistic and deterministic sensitivity analyses were performed. Outcomes were quality-adjusted life years (QALYs) and the incremental CE ratio (ICER). Mean QALY for an LVAD patient was 1.5 at a lifetime cost of €190 739, delivering a probabilistic ICER of €125 580/QALY [95% confidence interval: 105 587 to 150 314]. The sensitivity analysis showed that the ICER was mainly sensitive to two factors: (i) the high acquisition cost of the device and (ii) the device performance in terms of patient survival. Our economic evaluation showed that the use of LVAD in patients with end-stage heart failure yields greater benefit in terms of survival than medical management, at an extra lifetime cost exceeding €100 000/QALY. Technological advances and device cost reductions should hence lead to an improvement in overall CE. © 2017 The Authors. ESC Heart Failure published by John Wiley & Sons Ltd on behalf of the European Society of Cardiology.
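
    A minimal structural sketch of a monthly-cycle multistate Markov model of the kind described above, with states (LVAD support, heart transplant, dead). All transition probabilities, utilities and costs are hypothetical placeholders, not the study's estimates; only the 4% annual discount rate is taken from the abstract.

        import numpy as np

        P = np.array([[0.96, 0.02, 0.02],    # from LVAD: stay, transplant, die (hypothetical)
                      [0.00, 0.985, 0.015],  # from HTx:  stay, die (hypothetical)
                      [0.00, 0.00, 1.00]])   # dead is absorbing
        utility = np.array([0.55, 0.75, 0.0])        # per-year QALY weights (hypothetical)
        cost = np.array([4000.0, 2500.0, 0.0])       # per-month costs in euros (hypothetical)

        state = np.array([1.0, 0.0, 0.0])            # cohort starts on LVAD support
        disc_m = (1.0 + 0.04) ** (-1.0 / 12.0)       # monthly discount factor for a 4% annual rate

        qalys = total_cost = 0.0
        for month in range(12 * 40):                 # lifetime horizon
            d = disc_m ** month
            qalys += d * (state @ utility) / 12.0
            total_cost += d * (state @ cost)
            state = state @ P

        print(f"discounted QALYs ~ {qalys:.2f}, discounted cost ~ EUR {total_cost:,.0f}")
        # an ICER would then be (cost_LVAD - cost_medical) / (QALYs_LVAD - QALYs_medical)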

  8. Trade Studies of Space Launch Architectures using Modular Probabilistic Risk Analysis

    NASA Technical Reports Server (NTRS)

    Mathias, Donovan L.; Go, Susie

    2006-01-01

    A top-down risk assessment in the early phases of space exploration architecture development can provide understanding and intuition of the potential risks associated with new designs and technologies. In this approach, risk analysts draw from their past experience and the heritage of similar existing systems as a source for reliability data. This top-down approach captures the complex interactions of the risk driving parts of the integrated system without requiring detailed knowledge of the parts themselves, which is often unavailable in the early design stages. Traditional probabilistic risk analysis (PRA) technologies, however, suffer several drawbacks that limit their timely application to complex technology development programs. The most restrictive of these is a dependence on static planning scenarios, expressed through fault and event trees. Fault trees incorporating comprehensive mission scenarios are routinely constructed for complex space systems, and several commercial software products are available for evaluating fault statistics. These static representations cannot capture the dynamic behavior of system failures without substantial modification of the initial tree. Consequently, the development of dynamic models using fault tree analysis has been an active area of research in recent years. This paper discusses the implementation and demonstration of dynamic, modular scenario modeling for integration of subsystem fault evaluation modules using the Space Architecture Failure Evaluation (SAFE) tool. SAFE is a C++ code that was originally developed to support NASA's Space Launch Initiative. It provides a flexible framework for system architecture definition and trade studies. SAFE supports extensible modeling of dynamic, time-dependent risk drivers of the system and functions at the level of fidelity for which design and failure data exists. The approach is scalable, allowing inclusion of additional information as detailed data becomes available. The tool performs a Monte Carlo analysis to provide statistical estimates. Example results of an architecture system reliability study are summarized for an exploration system concept using heritage data from liquid-fueled expendable Saturn V/Apollo launch vehicles.

  9. Canceled to Be Called Back: A Retrospective Cohort Study of Canceled Helicopter Emergency Medical Service Scene Calls That Are Later Transferred to a Trauma Center.

    PubMed

    Nolan, Brodie; Ackery, Alun; Nathens, Avery; Sawadsky, Bruce; Tien, Homer

    In our trauma system, helicopter emergency medical services (HEMS) can be requested to attend a scene call for an injured patient before arrival by land paramedics. Land paramedics can cancel this response if they deem it unnecessary. The purpose of this study is to describe the frequency of canceled HEMS scene calls whose patients were subsequently transferred to 2 trauma centers and to assess for any impact on morbidity and mortality. Probabilistic matching was used to identify canceled HEMS scene call patients who were later transported to 2 trauma centers over a 48-month period. Registry data were used to compare canceled scene call patients with direct-from-scene patients. There were 290 requests for HEMS scene calls, of which 35.2% were canceled. Of those canceled, 24.5% were later transported to our trauma centers. Canceled scene call patients were more likely to be older and to be discharged home from the trauma center without being admitted. There is a significant amount of undertriage of patients for whom an HEMS response was canceled and who were later transported to a trauma center. These patients face similar morbidity and mortality as patients who are brought directly from the scene to a trauma center. Copyright © 2018 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.

  10. PROBABILISTIC SAFETY ASSESSMENT OF OPERATIONAL ACCIDENTS AT THE WASTE ISOLATION PILOT PLANT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rucker, D.F.

    2000-09-01

    This report presents a probabilistic safety assessment of radioactive doses as consequences from accident scenarios to complement the deterministic assessment presented in the Waste Isolation Pilot Plant (WIPP) Safety Analysis Report (SAR). The International Council of Radiation Protection (ICRP) recommends both assessments be conducted to ensure that "an adequate level of safety has been achieved and that no major contributors to risk are overlooked" (ICRP 1993). To that end, the probabilistic assessment for the WIPP accident scenarios addresses the wide range of assumptions, e.g. the range of values representing the radioactive source of an accident, that could possibly have been overlooked by the SAR. Routine releases of radionuclides from the WIPP repository to the environment during the waste emplacement operations are expected to be essentially zero. In contrast, potential accidental releases from postulated accident scenarios during waste handling and emplacement could be substantial, which necessitates radiological air monitoring and confinement barriers (DOE 1999). The WIPP Safety Analysis Report (SAR) calculated doses from accidental releases to the on-site (at 100 m from the source) and off-site (at the Exclusive Use Boundary and Site Boundary) public by a deterministic approach. This approach, as demonstrated in the SAR, uses single-point values of key parameters to assess the 50-year, whole-body committed effective dose equivalent (CEDE). The basic assumptions used in the SAR to formulate the CEDE are retained for this report's probabilistic assessment. However, for the probabilistic assessment, single-point parameter values were replaced with probability density functions (PDF) and were sampled over an expected range. Monte Carlo simulations were run, in which 10,000 iterations were performed by randomly selecting one value for each parameter and calculating the dose. Statistical information was then derived from the 10,000 iteration batch, which included the 5%, 50%, and 95% dose likelihoods, and the sensitivity of each assumption to the calculated doses. As one would intuitively expect, the doses from the probabilistic assessment for most scenarios were found to be much less than those from the deterministic assessment. The lower dose of the probabilistic assessment can be attributed to a "smearing" of values from the high and low end of the PDF spectrum of the various input parameters. The analysis also found a potential weakness in the deterministic analysis used in the SAR: a detail on drum loading was not taken into consideration. Waste emplacement operations thus far have handled drums from each shipment as a single unit, i.e. drums from each shipment are kept together. Shipments typically come from a single waste stream, and therefore the curie loading of each drum can be considered nearly identical to that of its neighbor. Calculations show that if there are large numbers of drums used in the accident scenario assessment, e.g. 28 drums in the waste hoist failure scenario (CH5), then the probabilistic dose assessment calculations will diverge from the deterministically determined doses. As it is currently calculated, the deterministic dose assessment assumes one drum loaded to the maximum allowable (80 PE-Ci), and the remaining are 10% of the maximum. The effective average of drum curie content is therefore less in the deterministic assessment than in the probabilistic assessment for a large number of drums. EEG recommends that the WIPP SAR calculations be revisited and updated to include a probabilistic safety assessment.
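
    A minimal sketch of the point-estimate-versus-Monte-Carlo contrast described above, with entirely hypothetical distributions and values (not the WIPP parameters): the deterministic dose uses single-point inputs, while the probabilistic dose samples PDFs for the same inputs 10,000 times and reports the 5%, 50% and 95% values.

        import numpy as np

        rng = np.random.default_rng(42)
        N = 10_000

        # dose ~ source term * release fraction * dose conversion factor (all placeholders)
        det_dose = 80.0 * 1e-3 * 0.5                                  # single-point, bounding inputs

        source = rng.triangular(8.0, 20.0, 80.0, size=N)              # drum curie content PDF
        release = rng.lognormal(mean=np.log(1e-3), sigma=0.7, size=N) # release fraction PDF
        dcf = rng.uniform(0.3, 0.7, size=N)                           # dose conversion factor PDF
        mc_dose = source * release * dcf

        p5, p50, p95 = np.percentile(mc_dose, [5, 50, 95])
        print(f"deterministic: {det_dose:.3f}   probabilistic 5/50/95%: {p5:.3f} / {p50:.3f} / {p95:.3f}")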

  11. Probabilistic Assessment of a CMC Turbine Vane

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Brewer, Dave; Mital, Subodh K.

    2004-01-01

    In order to demonstrate the advanced CMC technology under development within the Ultra Efficient Engine Technology (UEET) program, it has been planned to fabricate, test and analyze an all-CMC turbine vane made of a SiC/SiC composite material. The objective was to utilize a 5-II satin weave SiC/CVI SiC and MI SiC matrix material that was developed in-house under the Enabling Propulsion Materials (EPM) program to design and fabricate a stator vane that can successfully endure 1000 hours of operation at engine service conditions. The design requirements for the vane are to be able to withstand a maximum of 2400 F within the substrate and a hot surface temperature of 2700 F with the aid of an in-house developed Environmental/Thermal Barrier Coating (EBC/TBC) system. The vane will be tested in a High Pressure Burner Rig at the NASA Glenn Research Center facility. This rig is capable of simulating the engine service environment. The present paper focuses on a probabilistic assessment of the vane. The material stress/strain relationship shows a bilinear behavior with a distinct knee corresponding to what is often termed the first matrix cracking strength. This is a critical life-limiting consideration for these materials. The vane is therefore designed such that the maximum stresses are within this limit so that the structure is never subjected to loads beyond the first matrix cracking strength. Any violation of this design requirement is considered a failure. Probabilistic analysis is performed in order to determine the probability of failure based on this assumption. In the analysis, material properties, strength, and pressures are considered random variables. The variations in properties and strength are based on actual experimental data generated in-house. The mean values for the pressures on the upper surface and the lower surface are known but their distributions are unknown. In the present analysis the pressures are considered normally distributed with a nominal variation. The temperature profile on the vane is obtained by performing a CFD analysis and is assumed to be deterministic.

  12. T-F and S/DOE Gladys McCall No. 1 well, Cameron Parish, Louisiana. Geopressured-geothermal well report, Volume II. Well workover and production testing, February 1982-October 1985. Final report. Appendices 1-7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1985-01-01

    These appendices contain the following reports: (1) investigation of coupling failure from the Gladys McCall No. 1 well; (2) failure analysis - oil well casing coupling; (3) technical remedial requirements for 5-inch production tubing string; (4) reservoir limit test data for sand zone No. 9; (5) reservoir fluid study - sand zone No. 9; (6) engineering interpretation of exploration drawdown tests; and (7) reservoir analysis. (ACR)

  13. Probabilistic Analysis and Density Parameter Estimation Within Nessus

    NASA Astrophysics Data System (ADS)

    Godines, Cody R.; Manteufel, Randall D.

    2002-12-01

    This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation, and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable new enhancement of the program.
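
    A minimal sketch of the comparison described above, with a toy response function rather than the SAE test cases: the mean of the response is estimated with crude Monte Carlo and with a basic Latin Hypercube Sample of the same size, and the scatter of the estimates over repeated trials illustrates the lower estimation error of LHS.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)

        def lhs(n, d):
            # basic Latin Hypercube sample of n points in d dimensions on (0, 1)
            strata = np.tile(np.arange(n), (d, 1))
            return (rng.permuted(strata, axis=1).T + rng.random((n, d))) / n

        def response(u):
            # toy response of two standard-normal inputs mapped from the unit hypercube
            x = norm.ppf(np.clip(u, 1e-12, 1.0 - 1e-12))
            return x[:, 0] ** 2 + 0.5 * x[:, 1]

        n, trials = 100, 200
        mc_means = [response(rng.random((n, 2))).mean() for _ in range(trials)]
        lhs_means = [response(lhs(n, 2)).mean() for _ in range(trials)]
        print("std of MC mean estimates :", np.std(mc_means))
        print("std of LHS mean estimates:", np.std(lhs_means))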

  14. Probabilistic Analysis and Density Parameter Estimation Within Nessus

    NASA Technical Reports Server (NTRS)

    Godines, Cody R.; Manteufel, Randall D.; Chamis, Christos C. (Technical Monitor)

    2002-01-01

    This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation, and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable new enhancement of the program.

  15. Improved water allocation utilizing probabilistic climate forecasts: Short-term water contracts in a risk management framework

    NASA Astrophysics Data System (ADS)

    Sankarasubramanian, A.; Lall, Upmanu; Souza Filho, Francisco Assis; Sharma, Ashish

    2009-11-01

    Probabilistic, seasonal to interannual streamflow forecasts are becoming increasingly available as the ability to model climate teleconnections improves. However, water managers and practitioners have been slow to adopt such products, citing concerns with forecast skill. Essentially, a management risk is perceived in "gambling" with operations using a probabilistic forecast, while a system failure upon following existing operating policies is "protected" by the official rules or guidebook. In the presence of a prescribed system of prior allocation of releases under different storage or water availability conditions, the manager has little incentive to change. Innovation in allocation and operation is hence key to improved risk management using such forecasts. A participatory water allocation process that can effectively use probabilistic forecasts as part of an adaptive management strategy is introduced here. Users can express their demand for water through statements that cover the quantity needed at a particular reliability, the temporal distribution of the "allocation," the associated willingness to pay, and compensation in the event of contract nonperformance. The water manager then uses the probabilistic forecast to assess feasible allocations that attempt to meet these criteria across all users. An iterative process between the users and the water manager could be used to formalize a set of short-term contracts that represent the resulting prioritized water allocation strategy over the operating period for which the forecast was issued. These contracts can be used to allocate water each year/season beyond long-term contracts that may have precedence. Thus, integrated supply and demand management can be achieved. In this paper, a single-period, multiuser optimization model that can support such an allocation process is presented. The application of this conceptual model is explored using data for the Jaguaribe Metropolitan Hydro System in Ceara, Brazil. The performance relative to the current allocation process is assessed in the context of whether such a model could support the proposed short-term contract based participatory process. A synthetic forecasting example is also used to explore the relative roles of forecast skill and reservoir storage in this framework.
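
    A minimal sketch of the flavor of such a single-period allocation model (not the authors' Jaguaribe formulation; the demands, prices, reliability target, and forecast ensemble are invented) allocates a reliability-constrained supply volume among users to maximize their stated willingness to pay.

```python
# Minimal sketch of a single-period, multiuser allocation under a probabilistic
# inflow forecast. All numbers are made up for illustration.
import numpy as np
from scipy.optimize import linprog

inflow_ensemble = np.random.default_rng(1).gamma(shape=4.0, scale=25.0, size=1000)
reliability = 0.90                       # supply level met in 90% of forecast members
available = np.quantile(inflow_ensemble, 1.0 - reliability)

demand = np.array([40.0, 60.0, 30.0])    # maximum request of each user
price = np.array([5.0, 3.0, 8.0])        # stated willingness to pay per unit

# Maximize total value of allocated water subject to the reliable supply volume.
res = linprog(c=-price,                  # linprog minimizes, so negate the value
              A_ub=np.ones((1, 3)), b_ub=[available],
              bounds=[(0, d) for d in demand])
print("reliable supply:", round(available, 1), "allocation:", np.round(res.x, 1))
```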

  16. A Summary of Taxonomies of Digital System Failure Modes Provided by the DigRel Task Group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chu T. L.; Yue M.; Postma, W.

    2012-06-25

    Recently, the CSNI directed WGRisk to set up a task group called DIGREL to initiate a new task on developing a taxonomy of failure modes of digital components for the purposes of PSA. It is an important step towards standardized digital I&C reliability assessment techniques for PSA. The objective of this paper is to provide a comparison of the failure mode taxonomies provided by the participants. The failure modes are classified in terms of their levels of detail. Software and hardware failure modes are discussed separately.

  17. Probabilistic topic modeling for the analysis and classification of genomic sequences

    PubMed Central

    2015-01-01

    Background Studies on genomic sequences for classification and taxonomic identification have a leading role in the biomedical field and in the analysis of biodiversity. These studies are focusing on the so-called barcode genes, representing a well defined region of the whole genome. Recently, alignment-free techniques have been gaining importance because they are able to overcome the drawbacks of sequence alignment techniques. In this paper a new alignment-free method for DNA sequence clustering and classification is proposed. The method is based on k-mer representation and text mining techniques. Methods The presented method is based on Probabilistic Topic Modeling, a statistical technique originally proposed for text documents. Probabilistic topic models are able to find in a document corpus the topics (recurrent themes) characterizing classes of documents. This technique, applied on DNA sequences representing the documents, exploits the frequency of fixed-length k-mers and builds a generative model for a training group of sequences. This generative model, obtained through the Latent Dirichlet Allocation (LDA) algorithm, is then used to classify a large set of genomic sequences. Results and conclusions We performed classification of over 7000 16S DNA barcode sequences taken from the Ribosomal Database Project (RDP) repository, training probabilistic topic models. The proposed method is compared to the RDP tool and the Support Vector Machine (SVM) classification algorithm in an extensive set of trials using both complete sequences and short sequence snippets (from 400 bp to 25 bp). Our method achieves results very similar to the RDP classifier and SVM for complete sequences. The most interesting results are obtained when short sequence snippets are considered. Under these conditions the proposed method outperforms RDP and SVM on ultra-short sequences and exhibits a smooth decrease in performance, at every taxonomic level, as the sequence length decreases. PMID:25916734
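
    The core idea, treating each sequence as a "document" of overlapping k-mers and fitting a topic model whose mixtures become classification features, can be sketched in a few lines (a toy illustration, not the authors' pipeline; the sequences, k, and number of topics are arbitrary choices).

```python
# Minimal sketch: k-mer "documents" plus LDA topic mixtures as features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

seqs = ["ACGTACGTGGCA", "ACGTTTGGCAAC", "GGCATGCATGCA", "TTTACGTACGAA"]

def kmers(seq, k=4):
    # turn a sequence into a whitespace-separated list of overlapping k-mers
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

docs = [kmers(s) for s in seqs]                       # sequences as "documents"
counts = CountVectorizer().fit_transform(docs)        # k-mer frequency matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
topic_mix = lda.transform(counts)                     # per-sequence topic mixture
print(topic_mix.round(2))                             # features for a classifier
```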

  18. Computation of probabilistic hazard maps and source parameter estimation for volcanic ash transport and dispersion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madankan, R.; Pouget, S.; Singla, P., E-mail: psingla@buffalo.edu

    Volcanic ash advisory centers are charged with forecasting the movement of volcanic ash plumes for aviation, health, and safety preparation. Deterministic mathematical equations model the advection and dispersion of these plumes. However, initial plume conditions – height, profile of particle location, volcanic vent parameters – are known only approximately at best, and other features of the governing system such as the windfield are stochastic. These uncertainties make forecasting plume motion difficult. As a result of these uncertainties, ash advisories based on a deterministic approach tend to be conservative, and many times over- or underestimate the extent of a plume. This paper presents an end-to-end framework for generating a probabilistic approach to ash plume forecasting. This framework uses an ensemble of solutions, guided by the Conjugate Unscented Transform (CUT) method for evaluating expectation integrals. This ensemble is used to construct a polynomial chaos expansion that can be sampled cheaply, to provide a probabilistic model forecast. The CUT method is then combined with a minimum variance condition, to provide a full posterior pdf of the uncertain source parameters, based on observed satellite imagery. The April 2010 eruption of the Eyjafjallajökull volcano in Iceland is employed as a test example. The puff advection/dispersion model is used to hindcast the motion of the ash plume through time, concentrating on the period 14–16 April 2010. Variability in the height and particle loading of that eruption is introduced through a volcano column model called bent. Output uncertainty due to the assumed uncertain input parameter probability distributions, and a probabilistic spatial-temporal estimate of ash presence, are computed.
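
    A minimal sketch of the surrogate step (not the CUT/puff workflow; the one-dimensional input and placeholder model are invented) fits a Hermite polynomial chaos style expansion to a small ensemble of model runs and then samples the cheap surrogate to obtain an output distribution.

```python
# Minimal sketch: polynomial-chaos-style surrogate built from a few model runs.
import numpy as np
from numpy.polynomial import hermite_e as He

def expensive_model(xi):                  # placeholder for an ash-transport run
    return 9.0 + 2.0 * xi + 0.5 * xi ** 2

nodes = np.linspace(-3, 3, 9)             # small ensemble of input values
runs = expensive_model(nodes)             # the only "costly" evaluations
coeffs = He.hermefit(nodes, runs, deg=3)  # least-squares Hermite coefficients

xi_samples = np.random.default_rng(0).standard_normal(100_000)
cheap = He.hermeval(xi_samples, coeffs)   # surrogate sampled very cheaply
print("mean:", cheap.mean().round(2), " 95th pct:", np.percentile(cheap, 95).round(2))
```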

  19. Toward Failure Modeling In Complex Dynamic Systems: Impact of Design and Manufacturing Variations

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; McAdams, Daniel A.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    When designing vehicle vibration monitoring systems for aerospace devices, it is common to use well-established models of vibration features to determine whether failures or defects exist. Most of the algorithms used for failure detection rely on these models to detect significant changes during a flight environment. In actual practice, however, most vehicle vibration monitoring systems are corrupted by high rates of false alarms and missed detections. Research conducted at the NASA Ames Research Center has determined that a major reason for the high rates of false alarms and missed detections is the numerous sources of statistical variations that are not taken into account in the modeling assumptions. In this paper, we address one such source of variations, namely, those caused during the design and manufacturing of rotating machinery components that make up aerospace systems. We present a novel way of modeling the vibration response by including design variations via probabilistic methods. The results demonstrate initial feasibility of the method, showing great promise in developing a general methodology for designing more accurate aerospace vehicle vibration monitoring systems.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronold, K.O.; Nielsen, N.J.R.; Tura, F.

    This paper demonstrates how a structural reliability method can be applied as a rational means to analyze free spans of submarine pipelines with respect to failure in ultimate loading, and to establish partial safety factors for design of such free spans against this failure mode. It is important to note that the described procedure shall be considered as an illustration of a structural reliability methodology, and that the results do not represent a set of final design recommendations. A scope of design cases, consisting of a number of available site-specific pipeline spans, is established and is assumed representative for the future occurrence of submarine pipeline spans. Probabilistic models for the wave and current loading and its transfer to stresses in the pipe wall of a pipeline span are established, together with a stochastic representation of the material resistance. The event of failure in ultimate loading is considered as based on a limit state which is reached when the maximum stress over the design life of the pipeline exceeds the yield strength of the pipe material. The yielding limit state is considered an ultimate limit state (ULS).

  1. Improving online risk assessment with equipment prognostics and health monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coble, Jamie B.; Liu, Xiaotong; Briere, Chris

    The current approach to evaluating the risk of nuclear power plant (NPP) operation relies on static probabilities of component failure, which are based on industry experience with the existing fleet of nominally similar light water reactors (LWRs). As the nuclear industry looks to advanced reactor designs that feature non-light water coolants (e.g., liquid metal, high temperature gas, molten salt), this operating history is not available. Many advanced reactor designs use advanced components, such as electromagnetic pumps, that have not been used in the US commercial nuclear fleet. Given the lack of rich operating experience, we cannot accurately estimate the evolving probability of failure for basic components to populate the fault trees and event trees that typically comprise probabilistic risk assessment (PRA) models. Online equipment prognostics and health management (PHM) technologies can bridge this gap to estimate the failure probabilities for components under operation. The enhanced risk monitor (ERM) incorporates equipment condition assessment into the existing PRA and risk monitor framework to provide accurate and timely estimates of operational risk.

  2. Fault Management Metrics

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.

  3. Impact of distributed energy resources on the reliability of a critical telecommunications facility.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, David; Zuffranieri, Jason V.; Atcitty, Christopher B.

    2006-03-01

    This report documents a probabilistic risk assessment of an existing power supply system at a large telecommunications office. The focus is on characterizing the increase in the reliability of power supply through the use of two alternative power configurations. Telecommunications has been identified by the Department of Homeland Security as a critical infrastructure to the United States. Failures in the power systems supporting major telecommunications service nodes are a main contributor to major telecommunications outages. A logical approach to improve the robustness of telecommunication facilities would be to increase the depth and breadth of technologies available to restore power in the face of power outages. Distributed energy resources such as fuel cells and gas turbines could provide one more onsite electric power source to provide backup power, if batteries and diesel generators fail. The analysis is based on a hierarchical Bayesian approach and focuses on the failure probability associated with each of three possible facility configurations, along with assessment of the uncertainty or confidence level in the probability of failure. A risk-based characterization of the final best configuration is presented.
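
    The flavor of such an uncertainty-aware failure-probability estimate can be sketched with a simple (non-hierarchical) Beta-Binomial update; the prior and the demand/failure counts below are invented, not values from the report.

```python
# Minimal sketch: Bayesian estimate of probability of failure on demand with a
# credible interval for one hypothetical backup-power configuration.
from scipy.stats import beta

a0, b0 = 0.5, 0.5            # Jeffreys prior
failures, demands = 2, 150   # made-up operating experience
post = beta(a0 + failures, b0 + demands - failures)

print("posterior mean p_fail:", round(post.mean(), 4))
print("90% credible interval:", [round(q, 4) for q in post.ppf([0.05, 0.95])])
```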

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mankamo, T.; Kim, I.S.; Yang, Ji Wu

    Failures in the auxiliary feedwater (AFW) system of pressurized water reactors (PWRs) are considered to involve substantial risk whether a decision is made to either continue power operation while repair is being done, or to shut down the plant to undertake repairs. Technical specification action requirements usually require immediate plant shutdown in the case of multiple failures in the system (in some cases, immediate repair of one train is required when all AFW trains fail). This paper presents a probabilistic risk assessment-based method to quantitatively evaluate and compare both the risks of continued power operation and of shutting the plant down, given known failures in the system. The method is applied to the AFW system for four different PWRs. Results show that the risk of continued power operation and plant shutdown both are substantial, but the latter is larger than the former over the usual repair time. This was proven for four plants with different designs: two operating Westinghouse plants, one operating Asea-Brown Boveri Combustion Engineering plant, and one of evolutionary design. The method can be used to analyze individual plant design and to improve AFW action requirements using risk-informed evaluations.

  5. Observations and Bayesian location methodology of transient acoustic signals (likely blue whales) in the Indian Ocean, using a hydrophone triplet.

    PubMed

    Le Bras, Ronan J; Kuzma, Heidi; Sucic, Victor; Bokelmann, Götz

    2016-05-01

    A notable sequence of calls was encountered, spanning several days in January 2003, in the central part of the Indian Ocean on a hydrophone triplet recording acoustic data at a 250 Hz sampling rate. This paper presents signal processing methods applied to the waveform data to detect and group the recorded signals and to extract amplitude and bearing estimates for them. An approximate location for the source of the sequence of calls is inferred from the features extracted from the waveform. As the source approaches the hydrophone triplet, the source level (SL) of the calls is estimated at 187 ± 6 dB re 1 μPa at 1 m in the 15-60 Hz frequency range. The calls are attributed to a subgroup of blue whales, Balaenoptera musculus, with a characteristic acoustic signature. A Bayesian location method using probabilistic models for bearing and amplitude is demonstrated on the call sequence. The method is applied to the case of detection at a single triad of hydrophones and results in a probability distribution map for the origin of the calls. It can be extended to detections at multiple triads and, because of the Bayesian formulation, additional modeling complexity can be built in as needed.
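
    A minimal grid-based sketch of the single-triad idea (not the authors' implementation; the hydrophone position, observation values, uncertainties, and spherical-spreading loss model are assumptions) combines a bearing likelihood and an amplitude likelihood into a posterior map.

```python
# Minimal sketch: Bayesian posterior over candidate source positions from one
# bearing estimate and one received-level estimate at a single hydrophone triad.
import numpy as np

array_xy = np.array([0.0, 0.0])                 # hydrophone triplet location (km)
obs_bearing = np.deg2rad(40.0)                  # measured bearing from north
obs_rl, src_sl = 120.0, 187.0                   # received and assumed source level (dB)
sig_bear, sig_amp = np.deg2rad(5.0), 6.0        # assumed 1-sigma uncertainties

x = np.linspace(-400, 400, 401)                 # candidate grid (km)
gx, gy = np.meshgrid(x, x)
dx, dy = gx - array_xy[0], gy - array_xy[1]
rng_km = np.hypot(dx, dy) + 1e-6

bearing = np.arctan2(dx, dy)                            # clockwise from north
dbear = np.angle(np.exp(1j * (bearing - obs_bearing)))  # wrapped difference
pred_rl = src_sl - 20.0 * np.log10(rng_km * 1000.0)     # spherical spreading loss

log_post = -0.5 * (dbear / sig_bear) ** 2 - 0.5 * ((pred_rl - obs_rl) / sig_amp) ** 2
post = np.exp(log_post - log_post.max())
post /= post.sum()
iy, ix = np.unravel_index(post.argmax(), post.shape)
print("MAP source location (km east, north):", round(x[ix], 1), round(x[iy], 1))
```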

  6. Quantification of Road Network Vulnerability and Traffic Impacts to Regional Landslide Hazards.

    NASA Astrophysics Data System (ADS)

    Postance, Benjamin; Hillier, John; Dixon, Neil; Dijkstra, Tom

    2015-04-01

    Slope instability represents a prevalent hazard to transport networks. In the UK, regional road networks are frequently disrupted by multiple slope failures triggered during intense precipitation events, primarily due to a degree of regional homogeneity of slope materials, geomorphology and weather conditions. It is of interest to examine how different locations and combinations of slope failure impact road networks, particularly in the context of projected climate change and a 40% increase in UK road demand by 2040. In this study an extensive number (>50 000) of multiple failure event scenarios are simulated within a dynamic microsimulation to assess traffic impacts during peak flow (7-10 AM). Possible failure locations are selected within the county of Gloucestershire (3150 km2) using historic failure sites and British Geological Survey GeoSure data. Initial investigations employ multiple linear regression analyses to consider the severity of traffic impacts, as measured by time, with respect to spatial and topographical network characteristics including connectivity, density and capacity in proximity to failure sites; the network distance between disruptions in multiple failure scenarios is used to consider the effects of spatial clustering. The UK Department of Transport road travel demand and UKCP09 weather projection data to 2080 provide a suitable basis for traffic simulations and probabilistic slope stability assessments. Future work will thus focus on the development of a catastrophe risk model to simulate traffic impacts under various narratives of future travel demand and slope instability under climatic change. The results of this investigation will contribute to the understanding of road network vulnerabilities and traffic impacts from climate driven slope hazards.

  7. Evolution of Fairness in the Not Quite Ultimatum Game

    NASA Astrophysics Data System (ADS)

    Ichinose, Genki; Sayama, Hiroki

    2014-05-01

    The Ultimatum Game (UG) is an economic game where two players (proposer and responder) decide how to split a certain amount of money. While traditional economic theories based on rational decision making predict that the proposer should make a minimal offer and the responder should accept it, human subjects tend to behave more fairly in UG. Previous studies suggested that extra information such as reputation, empathy, or spatial structure is needed for fairness to evolve in UG. Here we show that fairness can evolve without additional information if players make decisions probabilistically and may continue interactions when the offer is rejected, which we call the Not Quite Ultimatum Game (NQUG). Evolutionary simulations of NQUG showed that the probabilistic decision making contributes to the increase of proposers' offer amounts to avoid rejection, while the repetition of the game works to responders' advantage because they can wait until a good offer comes. These simple extensions greatly promote evolution of fairness in both proposers' offers and responders' acceptance thresholds.

  8. Characterization of essential proteins based on network topology in proteins interaction networks

    NASA Astrophysics Data System (ADS)

    Bakar, Sakhinah Abu; Taheri, Javid; Zomaya, Albert Y.

    2014-06-01

    The identification of essential proteins is theoretically and practically important as (1) it is essential to understand the minimal surviving requirements for cellular lives, and (2) it provides a foundation for drug development. As experimental studies to identify essential proteins are both time- and resource-consuming, here we present a computational approach to predicting them based on network topology properties from protein-protein interaction networks of Saccharomyces cerevisiae. The proposed method, namely EP3NN (Essential Proteins Prediction using Probabilistic Neural Network), employs a machine learning algorithm called a Probabilistic Neural Network as a classifier to identify essential proteins of the organism of interest; it uses the degree centrality, closeness centrality, local assortativity and local clustering coefficient of each protein in the network for such predictions. Results show that EP3NN managed to successfully predict essential proteins with an accuracy of 95% for our studied organism. Results also show that most of the essential proteins are close to other proteins, exhibit assortative behavior and form clusters/sub-graphs in the network.
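
    A probabilistic neural network is essentially a Parzen-window (Gaussian-kernel) classifier; a minimal sketch on made-up topology features (not the EP3NN code; the bandwidth, feature values, and labels are invented) looks like this.

```python
# Minimal sketch: Parzen-window / probabilistic-neural-network style classifier
# on network-topology features (degree centrality, closeness, clustering coeff.).
import numpy as np

X = np.array([[0.80, 0.70, 0.45],   # labelled training proteins (toy values)
              [0.75, 0.65, 0.50],
              [0.10, 0.20, 0.05],
              [0.15, 0.25, 0.10]])
y = np.array([1, 1, 0, 0])          # 1 = essential, 0 = non-essential
sigma = 0.15                        # kernel bandwidth (smoothing parameter)

def pnn_predict(x_new):
    # class score = mean Gaussian kernel between x_new and that class's exemplars
    classes = np.unique(y)
    scores = []
    for cls in classes:
        d2 = ((X[y == cls] - x_new) ** 2).sum(axis=1)
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean())
    probs = np.array(scores) / sum(scores)
    return classes[probs.argmax()], probs

print(pnn_predict(np.array([0.72, 0.60, 0.40])))   # expect class 1 (essential)
```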

  9. Analytical resource assessment method for continuous (unconventional) oil and gas accumulations - The "ACCESS" Method

    USGS Publications Warehouse

    Crovelli, Robert A.; revised by Charpentier, Ronald R.

    2012-01-01

    The U.S. Geological Survey (USGS) periodically assesses petroleum resources of areas within the United States and the world. The purpose of this report is to explain the development of an analytic probabilistic method and spreadsheet software system called the Analytic Cell-Based Continuous Energy Spreadsheet System (ACCESS). The ACCESS method is based upon mathematical equations derived from probability theory. The ACCESS spreadsheet can be used to calculate estimates of the undeveloped oil, gas, and NGL (natural gas liquids) resources in a continuous-type assessment unit. An assessment unit is a mappable volume of rock in a total petroleum system. In this report, the geologic assessment model is defined first, the analytic probabilistic method is described second, and the ACCESS spreadsheet is described third. In this revised version of Open-File Report 00-044, the text has been updated to reflect modifications that were made to the ACCESS program. Two versions of the program are added as appendixes.

  10. Probabilistic Design Methodology and its Application to the Design of an Umbilical Retract Mechanism

    NASA Technical Reports Server (NTRS)

    Onyebueke, Landon; Ameye, Olusesan

    2002-01-01

    A lot has been learned from past experience with structural and machine element failures. The understanding of failure modes and the application of an appropriate design analysis method can lead to improved structural and machine element safety as well as serviceability. To apply Probabilistic Design Methodology (PDM), all uncertainties are modeled as random variables with selected distribution types, means, and standard deviations. It is quite difficult to achieve a robust design without considering the randomness of the design parameters, as is the case in the Deterministic Design Approach. The US Navy has a fleet of submarine-launched ballistic missiles. An umbilical plug joins the missile to the submarine in order to provide electrical and cooling water connections. As the missile leaves the submarine, an umbilical retract mechanism retracts the umbilical plug clear of the advancing missile after disengagement during launch and restrains the plug in the retracted position. The design of the current retract mechanism was based on the deterministic approach, which puts emphasis on the factor of safety. A new umbilical retract mechanism that is simpler in design, lighter in weight, more reliable, easier to adjust, and more cost effective has become desirable since this will increase the performance and efficiency of the system. This paper reports on a recent project performed at Tennessee State University for the US Navy that involved the application of PDM to the design of an umbilical retract mechanism. This paper demonstrates how the use of PDM led to the minimization of weight and cost, and the maximization of reliability and performance.
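
    The basic PDM calculation can be sketched with the classical load-capacity interference formula; the load and capacity statistics below are hypothetical illustrations, not values from the Navy mechanism.

```python
# Minimal sketch: with normally distributed load L and capacity C, the failure
# probability is P(C - L < 0) = Phi(-(mu_C - mu_L) / sqrt(sd_C**2 + sd_L**2)).
from math import sqrt
from scipy.stats import norm

mu_L, sd_L = 12.0, 2.0     # retract-load demand (hypothetical, kN)
mu_C, sd_C = 20.0, 2.5     # mechanism capacity (hypothetical, kN)

beta = (mu_C - mu_L) / sqrt(sd_C**2 + sd_L**2)   # reliability (safety) index
p_fail = norm.cdf(-beta)
print(f"reliability index = {beta:.2f}, probability of failure = {p_fail:.2e}")
```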

  11. Hazard function theory for nonstationary natural hazards

    NASA Astrophysics Data System (ADS)

    Read, L. K.; Vogel, R. M.

    2015-11-01

    Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e. that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied Generalized Pareto (GP) model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard event series X, with corresponding failure time series T, should have application to a wide class of natural hazards with rich opportunities for future extensions.
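
    For reference, in the stationary case with the standard Generalized Pareto parameterization (scale σ, shape ξ), the hazard function the paper builds on takes the simple closed form below (a sketch of the stationary quantity only, not the authors' nonstationary derivation).

```latex
% Survival function, density, and hazard of the stationary Generalized Pareto
% model with scale \sigma > 0 and shape \xi (standard parameterization).
\[
  \bar F(x) = \Bigl(1 + \frac{\xi x}{\sigma}\Bigr)^{-1/\xi}, \qquad
  f(x) = \frac{1}{\sigma}\Bigl(1 + \frac{\xi x}{\sigma}\Bigr)^{-1/\xi - 1}, \qquad
  h(x) = \frac{f(x)}{\bar F(x)} = \frac{1}{\sigma + \xi x}, \quad x \ge 0,
\]
```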

  12. Hazard function theory for nonstationary natural hazards

    NASA Astrophysics Data System (ADS)

    Read, Laura K.; Vogel, Richard M.

    2016-04-01

    Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied generalized Pareto model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard random variable X with corresponding failure time series T should have application to a wide class of natural hazards with opportunities for future extensions.

  13. Monte Carlo Simulation of Markov, Semi-Markov, and Generalized Semi- Markov Processes in Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    English, Thomas

    2005-01-01

    A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. Firstly, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four element reliability model and were able to demonstrate such sojourn time dependencies. Secondly, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
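
    A minimal sketch of Monte Carlo simulation of a small semi-Markov model (not the JSC study; the states, transition probabilities, and sojourn distributions are invented) samples state-dependent sojourn times until an absorbing failed state is reached, giving a distribution of time to failure.

```python
# Minimal sketch: semi-Markov Monte Carlo estimate of time to failure. Sojourn
# times depend on the current state and need not be exponential.
import numpy as np

rng = np.random.default_rng(0)
states = ["nominal", "degraded", "failed"]
P = np.array([[0.0, 0.9, 0.1],      # transition probabilities from each state
              [0.3, 0.0, 0.7],
              [0.0, 0.0, 1.0]])     # "failed" is absorbing

def draw_sojourn(s):
    if s == 0:
        return rng.exponential(1000.0)   # nominal: exponential holding time (h)
    return 300.0 * rng.weibull(2.0)      # degraded: Weibull (non-exponential)

def time_to_failure():
    s, t = 0, 0.0
    while states[s] != "failed":
        t += draw_sojourn(s)
        s = rng.choice(3, p=P[s])
    return t

ttf = np.array([time_to_failure() for _ in range(20_000)])
print("mean TTF (h):", round(ttf.mean()), " 5th pct:", round(np.percentile(ttf, 5)))
```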

  14. Diagnosing true virtue? Remote scenarios, warranted virtue attributions, and virtuous medical practice.

    PubMed

    Oakley, Justin

    2016-02-01

    Immanuel Kant argues in the Foundations that remote scenarios are diagnostic of genuine virtue. When agents commonly thought to have a particular virtue fail to exhibit that virtue in an extreme situation, he argues, they do not truly have the virtue at all, and our propensities to fail in such ways indicate that true virtue might never have existed. Kant's suggestion that failure to show, say, courage in extraordinary circumstances necessarily silences one's claim to have genuine courage seems to rely on an implausibly demanding standard for warranted virtue attributions. In contrast to this approach, some philosophers-such as Robert Adams and John Doris-have argued for probabilistic accounts of warranted virtue attributions. But despite the initial plausibility of such accounts, I argue that a sole reliance on probabilistic approaches is inadequate, as they are insufficiently sensitive to considerations of credit and fault, which emerge when agents have developed various insurance strategies and protective capacities against their responding poorly to particular eventualities. I also argue that medical graduates should develop the sorts of virtuous dispositions necessary to protect patient welfare against various countervailing influences (even where such influences might be encountered only rarely), and that repeated failures to uphold the proper goals of medicine in emergency scenarios might indeed be diagnostic of whether an individual practitioner does have the relevant medical virtue. In closing, I consider the dispositions involved in friendship. I seek to develop a principled way of determining when remote scenarios can be illuminating of genuine friendship and genuine virtue.

  15. Common Cause Failure Modeling

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.

    2016-01-01

    Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are a set of dependent failures that can be caused by, for example, system environments, manufacturing, transportation, storage, maintenance, and assembly. Since there are many factors that contribute to CCFs, the effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failures, and data is limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCF due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCF and that launch vehicles are a highly redundant system by design, it is important to make design decisions to account for a range of values for independent and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate versus a common cause beta factor effect on a system's failure probability. This presentation will define a CCF and review estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the results of the different CCFs on the reliability of a one-out-of-two system.
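
    The one-out-of-two calculation behind such a response surface can be sketched with the standard beta-factor model (an illustration only; the failure probabilities and beta values are not program data): the total component failure probability p is split into an independent part (1-beta)p and a common cause part beta*p that fails both trains at once.

```python
# Minimal sketch: beta-factor common cause model for a 1-out-of-2 system.
import numpy as np

p = np.logspace(-4, -2, 3)            # total per-component failure probability
beta = np.array([0.0, 0.05, 0.10])    # fraction attributed to common cause

pp, bb = np.meshgrid(p, beta)
p_sys = ((1 - bb) * pp) ** 2 + bb * pp   # both fail independently OR common cause
for b_row, row in zip(beta, p_sys):
    print(f"beta={b_row:.2f}:", [f"{v:.2e}" for v in row])
```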

  16. Common Cause Failure Modeling

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.

    2015-01-01

    Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are a set of dependent failures that can be caused by, for example, system environments, manufacturing, transportation, storage, maintenance, and assembly. Since there are many factors that contribute to CCFs, the effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failures, and data is limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCF due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCF and that launch vehicles are a highly redundant system by design, it is important to make design decisions to account for a range of values for independent and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate versus a common cause beta factor effect on a system's failure probability. This presentation will define a CCF and review estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the results of the different CCFs on the reliability of a one-out-of-two system.

  17. Risk-Informed Safety Assurance and Probabilistic Assessment of Mission-Critical Software-Intensive Systems

    NASA Technical Reports Server (NTRS)

    Guarro, Sergio B.

    2010-01-01

    This report validates and documents the detailed features and practical application of the framework for software-intensive digital systems risk assessment and risk-informed safety assurance presented in the NASA PRA Procedures Guide for Managers and Practitioners. This framework, called herein the "Context-based Software Risk Model" (CSRM), enables the assessment of the contribution of software and software-intensive digital systems to overall system risk, in a manner which is entirely compatible and integrated with the format of a "standard" Probabilistic Risk Assessment (PRA), as currently documented and applied for NASA missions and applications. The CSRM also provides a risk-informed path and criteria for conducting organized and systematic digital system and software testing so that, within this risk-informed paradigm, the achievement of a quantitatively defined level of safety and mission success assurance may be targeted and demonstrated. The framework is based on the concept of context-dependent software risk scenarios and on the modeling of such scenarios via the use of traditional PRA techniques - i.e., event trees and fault trees - in combination with more advanced modeling devices such as the Dynamic Flowgraph Methodology (DFM) or other dynamic logic-modeling representations. The scenarios can be synthesized and quantified in a conditional logic and probabilistic formulation. The application of the CSRM method documented in this report refers to the MiniAERCam system designed and developed by the NASA Johnson Space Center.

  18. Probabilistic models of genetic variation in structured populations applied to global human studies.

    PubMed

    Hao, Wei; Song, Minsun; Storey, John D

    2016-03-01

    Modern population genetics studies typically involve genome-wide genotyping of individuals from a diverse network of ancestries. An important problem is how to formulate and estimate probabilistic models of observed genotypes that account for complex population structure. The most prominent work on this problem has focused on estimating a model of admixture proportions of ancestral populations for each individual. Here, we instead focus on modeling variation of the genotypes without requiring a higher-level admixture interpretation. We formulate two general probabilistic models, and we propose computationally efficient algorithms to estimate them. First, we show how principal component analysis can be utilized to estimate a general model that includes the well-known Pritchard-Stephens-Donnelly admixture model as a special case. Noting some drawbacks of this approach, we introduce a new 'logistic factor analysis' framework that seeks to directly model the logit transformation of probabilities underlying observed genotypes in terms of latent variables that capture population structure. We demonstrate these advances on data from the Human Genome Diversity Panel and 1000 Genomes Project, where we are able to identify SNPs that are highly differentiated with respect to structure while making minimal modeling assumptions. A Bioconductor R package called lfa is available at http://www.bioconductor.org/packages/release/bioc/html/lfa.html. Contact: jstorey@princeton.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  19. Adaptive Localization of Focus Point Regions via Random Patch Probabilistic Density from Whole-Slide, Ki-67-Stained Brain Tumor Tissue

    PubMed Central

    Alomari, Yazan M.; MdZin, Reena Rahayu

    2015-01-01

    Analysis of whole-slide tissue for digital pathology images has been clinically approved to provide a second opinion to pathologists. Localization of focus points from Ki-67-stained histopathology whole-slide tissue microscopic images is considered the first step in the process of proliferation rate estimation. Pathologists use eye pooling or eagle-view techniques to localize the highly stained, cell-concentrated regions of the whole slide under the microscope, which are called focus-point regions. This procedure leads to high interobserver variability, is time-consuming and tedious, and can cause inaccurate findings. The localization of focus-point regions can be addressed as a clustering problem. This paper aims to automate the localization of focus-point regions from whole-slide images using the random patch probabilistic density (RPPD) method. Unlike other clustering methods, the random patch probabilistic density method can adaptively localize focus-point regions without predetermining the number of clusters. The proposed method was compared with the k-means and fuzzy c-means clustering methods. The proposed method achieved good performance when the results were evaluated by three expert pathologists, with an average false-positive rate of 0.84% for the focus-point region localization error. Moreover, when RPPD was used to localize tissue from whole-slide images, 228 whole-slide images were tested and 97.3% localization accuracy was achieved. PMID:25793010

  20. Sarma-based key-group method for rock slope reliability analyses

    NASA Astrophysics Data System (ADS)

    Yarahmadi Bafghi, A. R.; Verdel, T.

    2005-08-01

    The methods used in conducting static stability analyses have remained pertinent to this day for reasons of both simplicity and speed of execution. The most well-known of these methods for purposes of stability analysis of fractured rock masses is the key-block method (KBM). This paper proposes an extension to the KBM, called the key-group method (KGM), which combines not only individual key-blocks but also groups of collapsable blocks into an iterative and progressive analysis of the stability of discontinuous rock slopes. To take intra-group forces into account, the Sarma method has been implemented within the KGM in order to generate a Sarma-based KGM, abbreviated SKGM. We will discuss herein the hypothesis behind this new method, details regarding its implementation, and validation through comparison with results obtained from the distinct element method. Furthermore, as an alternative to deterministic methods, reliability analyses or probabilistic analyses have been proposed to take account of the uncertainty in analytical parameters and models. The FOSM and ASM probabilistic methods could be implemented within the KGM and SKGM framework in order to take account of the uncertainty due to physical and mechanical data (density, cohesion and angle of friction). We will then show how such reliability analyses can be introduced into SKGM to give rise to the probabilistic SKGM (PSKGM) and how it can be used for rock slope reliability analyses.

  1. Advanced Reactor Passive System Reliability Demonstration Analysis for an External Event

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bucknor, Matthew D.; Grabaskas, David; Brunett, Acacia J.

    2016-01-01

    Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended due to deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Centering on an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive reactor cavity cooling system following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. While this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability for the reactor cavity cooling system (and the reactor system in general) to the postulated transient event.

  2. A Probabilistic Approach for Real-Time Volcano Surveillance

    NASA Astrophysics Data System (ADS)

    Cannavo, F.; Cannata, A.; Cassisi, C.; Di Grazia, G.; Maronno, P.; Montalto, P.; Prestifilippo, M.; Privitera, E.; Gambino, S.; Coltelli, M.

    2016-12-01

    Continuous evaluation of the state of potentially dangerous volcanoes plays a key role for civil protection purposes. Presently, real-time surveillance of most volcanoes worldwide is essentially delegated to one or more human experts in volcanology, who interpret data coming from different kinds of monitoring networks. Unfortunately, the coupling of highly non-linear and complex volcanic dynamic processes leads to measurable effects that can show a large variety of different behaviors. Moreover, due to intrinsic uncertainties and possible failures in some recorded data, the volcano state needs to be expressed in probabilistic terms, thus making fast volcano state assessment sometimes impracticable for the personnel on duty at the control rooms. With the aim of aiding the personnel on duty in volcano surveillance, we present a probabilistic graphical model to estimate automatically the ongoing volcano state from all the available kinds of measurements. The model consists of a Bayesian network able to represent a set of variables and their conditional dependencies via a directed acyclic graph. The model variables are both the measurements and the possible states of the volcano through time. The model output is an estimation of the probability distribution of the feasible volcano states. We tested the model on the Mt. Etna (Italy) case study by considering a long record of multivariate data from 2011 to 2015 and cross-validated it. Results indicate that the proposed model is effective and of great power for decision-making purposes.
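
    The recursive estimation idea can be sketched with a simple discrete-state filter (an illustration of the concept only, not the Etna model; the states, transition probabilities, and observation likelihoods are invented).

```python
# Minimal sketch: recursive Bayesian update of a discrete volcano-state belief
# from a sequence of observed indicator levels.
import numpy as np

states = ["quiet", "unrest", "eruptive"]
T = np.array([[0.95, 0.05, 0.00],     # daily state-transition probabilities
              [0.10, 0.80, 0.10],
              [0.00, 0.20, 0.80]])
# P(observation level = low/medium/high | state) for one indicator (e.g. tremor)
L = np.array([[0.80, 0.15, 0.05],
              [0.30, 0.50, 0.20],
              [0.05, 0.25, 0.70]])

belief = np.array([0.9, 0.09, 0.01])          # prior state probabilities
for obs in [0, 1, 1, 2, 2]:                   # observed tremor levels over 5 days
    belief = T.T @ belief                     # predict step
    belief *= L[:, obs]                       # update with the day's observation
    belief /= belief.sum()
    print(dict(zip(states, belief.round(3))))
```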

  3. Advanced Reactor Passive System Reliability Demonstration Analysis for an External Event

    DOE PAGES

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia J.; ...

    2017-01-24

    We report that many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Lastly, although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event.

  4. Novel composites for wing and fuselage applications

    NASA Technical Reports Server (NTRS)

    Sobel, L. H.; Buttitta, C.; Suarez, J. A.

    1995-01-01

    Probabilistic predictions based on the IPACS code are presented for the material and structural response of unnotched and notched, IM6/3501-6 Gr/Ep laminates. Comparisons of predicted and measured modulus and strength distributions are given for unnotched unidirectional, cross-ply and quasi-isotropic laminates. The predicted modulus distributions were found to correlate well with the test results for all three unnotched laminates. Correlations of strength distributions for the unnotched laminates are judged good for the unidirectional laminate and fair for the cross-ply laminate, whereas the strength correlation for the quasi-isotropic laminate is judged poor because IPACS did not have a progressive failure capability at the time this work was performed. The report also presents probabilistic and structural reliability analysis predictions for the strain concentration factor (SCF) for an open-hole, quasi-isotropic laminate subjected to longitudinal tension. A special procedure was developed to adapt IPACS for the structural reliability analysis. The reliability results show the importance of identifying the most significant random variables upon which the SCF depends, and of having accurate scatter values for these variables.

  5. Source processes for the probabilistic assessment of tsunami hazards

    USGS Publications Warehouse

    Geist, Eric L.; Lynett, Patrick J.

    2014-01-01

    The importance of tsunami hazard assessment has increased in recent years as a result of catastrophic consequences from events such as the 2004 Indian Ocean and 2011 Japan tsunamis. In particular, probabilistic tsunami hazard assessment (PTHA) methods have been emphasized to include all possible ways a tsunami could be generated. Owing to the scarcity of tsunami observations, a computational approach is used to define the hazard. This approach includes all relevant sources that may cause a tsunami to impact a site and all quantifiable uncertainty. Although only earthquakes were initially considered for PTHA, recent efforts have also attempted to include landslide tsunami sources. Including these sources in PTHA is considerably more difficult because of a general lack of information relating landslide area and volume to mean return period. The large variety of failure types and rheologies associated with submarine landslides translates to considerable uncertainty in determining the efficiency of tsunami generation. Resolution of these and several other outstanding problems is described; such work will further advance PTHA methodologies, leading to a more accurate understanding of tsunami hazard.
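
    The core PTHA bookkeeping, summing source rates weighted by conditional exceedance probabilities and converting to an annual probability under a Poisson arrival assumption, can be sketched as follows (all rates and probabilities below are invented for illustration).

```python
# Minimal sketch: aggregate per-source tsunami rates into annual exceedance
# probabilities for a set of wave heights at one site.
import numpy as np

heights = np.array([0.5, 1.0, 2.0, 5.0])          # wave heights of interest (m)
annual_rate = np.array([0.01, 0.002, 0.0005])     # per-source mean annual rates

# P(H > h | source) for each source (rows) and height (columns), e.g. from
# numerical propagation runs with uncertain source parameters.
p_exceed = np.array([[0.60, 0.30, 0.05, 0.001],
                     [0.90, 0.70, 0.30, 0.02],
                     [0.99, 0.95, 0.80, 0.20]])

lam = annual_rate @ p_exceed                       # annual exceedance rate per height
p_annual = 1.0 - np.exp(-lam)                      # Poisson annual exceedance prob.
for h, p in zip(heights, p_annual):
    print(f"P(exceed {h:.1f} m in a year) = {p:.2e}")
```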

  6. State Tracking and Fault Diagnosis for Dynamic Systems Using Labeled Uncertainty Graph.

    PubMed

    Zhou, Gan; Feng, Wenquan; Zhao, Qi; Zhao, Hongbo

    2015-11-05

    Cyber-physical systems such as autonomous spacecraft, power plants and automotive systems become more vulnerable to unanticipated failures as their complexity increases. Accurate tracking of system dynamics and fault diagnosis are essential. This paper presents an efficient state estimation method for dynamic systems modeled as concurrent probabilistic automata. First, the Labeled Uncertainty Graph (LUG) method in the planning domain is introduced to describe the state tracking and fault diagnosis processes. Because the system model is probabilistic, the Monte Carlo technique is employed to sample the probability distribution of belief states. In addition, to address the sample impoverishment problem, an innovative look-ahead technique is proposed to recursively generate most likely belief states without exhaustively checking all possible successor modes. The overall algorithms incorporate two major steps: a roll-forward process that estimates system state and identifies faults, and a roll-backward process that analyzes possible system trajectories once the faults have been detected. We demonstrate the effectiveness of this approach by applying it to a real world domain: the power supply control unit of a spacecraft.

  7. Probabilistic Analysis of Aircraft Gas Turbine Disk Life and Reliability

    NASA Technical Reports Server (NTRS)

    Melis, Matthew E.; Zaretsky, Erwin V.; August, Richard

    1999-01-01

    Two series of low cycle fatigue (LCF) test data for two groups of different aircraft gas turbine engine compressor disk geometries were reanalyzed and compared using Weibull statistics. Both groups of disks were manufactured from titanium (Ti-6Al-4V) alloy. A probabilistic computer code developed at the NASA Glenn Research Center, Probable Cause, was used to predict disk life and reliability. A material-life factor A was determined for titanium (Ti-6Al-4V) alloy based upon fatigue disk data and successfully applied to predict the life of the disks as a function of speed. A comparison was made with the currently used life prediction method based upon crack growth rate. Applying an endurance limit to the computer code did not significantly affect the predicted lives under engine operating conditions. Failure location predictions correlate with those experimentally observed in the LCF tests. A reasonable correlation was obtained between the predicted disk lives using the Probable Cause code and a modified crack growth method for life prediction. Both methods slightly overpredict life for one disk group and significantly underpredict it for the other.
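
    A minimal sketch of the Weibull treatment of fatigue-life data (not the Probable Cause code; the cycle counts below are invented) fits a two-parameter Weibull distribution and reports a low-percentile life.

```python
# Minimal sketch: two-parameter Weibull fit to low cycle fatigue lives and the
# resulting L10 life (cycles at which 10% of disks are expected to have failed).
import numpy as np
from scipy.stats import weibull_min

lives = np.array([18e3, 25e3, 31e3, 36e3, 42e3, 50e3, 61e3, 75e3])  # cycles
shape, loc, scale = weibull_min.fit(lives, floc=0)   # location fixed at zero
l10 = weibull_min.ppf(0.10, shape, loc=0, scale=scale)
print(f"Weibull slope = {shape:.2f}, characteristic life = {scale:.0f} cycles")
print(f"L10 life = {l10:.0f} cycles")
```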

  8. Life Modeling and Design Analysis for Ceramic Matrix Composite Materials

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The primary research efforts focused on characterizing and modeling static failure, environmental durability, and creep-rupture behavior of two classes of ceramic matrix composites (CMC), silicon carbide fibers in a silicon carbide matrix (SiC/SiC) and carbon fibers in a silicon carbide matrix (C/SiC). An engineering life prediction model (Probabilistic Residual Strength model) has been developed specifically for CMCs. The model uses residual strength as the damage metric for evaluating remaining life and is posed probabilistically in order to account for the stochastic nature of the material's response. In support of the modeling effort, extensive testing of C/SiC in partial pressures of oxygen has been performed. This includes creep testing, tensile testing, half-life and residual tensile strength testing. C/SiC is proposed for airframe and propulsion applications in advanced reusable launch vehicles. Figures 1 and 2 illustrate the model's predictive capabilities as well as the manner in which experimental tests are selected to ensure that sufficient data are available to aid in model validation.

  9. The criterion for time symmetry of probabilistic theories and the reversibility of quantum mechanics

    NASA Astrophysics Data System (ADS)

    Holster, A. T.

    2003-10-01

    Physicists routinely claim that the fundamental laws of physics are 'time symmetric' or 'time reversal invariant' or 'reversible'. In particular, it is claimed that the theory of quantum mechanics is time symmetric. But it is shown in this paper that the orthodox analysis suffers from a fatal conceptual error, because the logical criterion for judging the time symmetry of probabilistic theories has been incorrectly formulated. The correct criterion requires symmetry between future-directed laws and past-directed laws. This criterion is formulated and proved in detail. The orthodox claim that quantum mechanics is reversible is re-evaluated. The property demonstrated in the orthodox analysis is shown to be quite distinct from time reversal invariance. The view of Satosi Watanabe that quantum mechanics is time asymmetric is verified, as well as his view that this feature does not merely show a de facto or 'contingent' asymmetry, as commonly supposed, but implies a genuine failure of time reversal invariance of the laws of quantum mechanics. The laws of quantum mechanics would be incompatible with a time-reversed version of our universe.

  10. PCI fuel failure analysis: a report on a cooperative program undertaken by Pacific Northwest Laboratory and Chalk River Nuclear Laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohr, C.L.; Pankaskie, P.J.; Heasler, P.G.

    Reactor fuel failure data sets in the form of initial power (Pi), final power (Pf), transient increase in power (ΔP), and burnup (Bu) were obtained for pressurized heavy water reactors (PHWRs), boiling water reactors (BWRs), and pressurized water reactors (PWRs). These data sets were evaluated and used as the basis for developing two predictive fuel failure models: a graphical concept called the PCI-OGRAM, and a nonlinear regression based model called PROFIT. The PCI-OGRAM is an extension of the FUELOGRAM developed by AECL. It is based on a critical threshold concept for stress dependent stress corrosion cracking. The PROFIT model, developed at Pacific Northwest Laboratory, is the result of applying standard statistical regression methods to the available PCI fuel failure data and an analysis of the environmental and strain rate dependent stress-strain properties of the Zircaloy cladding.

  11. Top-down and bottom-up definitions of human failure events in human reliability analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald Laurids

    2014-10-01

    In the probabilistic risk assessments (PRAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question is crucial, however, as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PRAs tend to be top-down—defined as a subset of the PRA—whereas the HFEs used in petroleum quantitative risk assessments (QRAs) often tend to be bottom-up—derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.

  12. A probabilistic topic model for clinical risk stratification from electronic health records.

    PubMed

    Huang, Zhengxing; Dong, Wei; Duan, Huilong

    2015-12-01

    Risk stratification aims to provide physicians with an accurate assessment of a patient's clinical risk such that an individualized prevention or management strategy can be developed and delivered. Existing risk stratification techniques mainly focus on predicting the overall risk of an individual patient in a supervised manner, and, at the cohort level, often offer little insight beyond a flat score-based segmentation from the labeled clinical dataset. To this end, in this paper, we propose a new approach for risk stratification by exploring a large volume of electronic health records (EHRs) in an unsupervised fashion. Along this line, this paper proposes a novel probabilistic topic modeling framework called the probabilistic risk stratification model (PRSM), based on Latent Dirichlet Allocation (LDA). The proposed PRSM recognizes a patient's clinical state as a probabilistic combination of latent sub-profiles, and generates sub-profile-specific risk tiers of patients from their EHRs in a fully unsupervised fashion. The achieved stratification results can be easily recognized as high-, medium- and low-risk, respectively. In addition, we present an extension of PRSM, called weakly supervised PRSM (WS-PRSM), which incorporates minimal prior information into the model in order to improve risk stratification accuracy and to make our models highly portable to risk stratification tasks of various diseases. We verify the effectiveness of the proposed approach on a clinical dataset containing 3463 coronary heart disease (CHD) patient instances. Both PRSM and WS-PRSM were compared with two established supervised risk stratification algorithms, logistic regression and support vector machines, and demonstrated their effectiveness in risk stratification of CHD in terms of the area under the receiver operating characteristic curve (AUC). Compared with PRSM, WS-PRSM achieves a performance gain of over 2% on the experimental dataset, demonstrating that incorporating risk scoring knowledge as prior information can improve risk stratification performance. Experimental results reveal that our models achieve competitive performance in risk stratification in comparison with existing supervised approaches. In addition, the unsupervised nature of our models makes them highly portable to the risk stratification tasks of various diseases. Moreover, patient sub-profiles and sub-profile-specific risk tiers generated by our models are coherent and informative, and offer significant potential for further tasks such as patient cohort analysis. We hypothesize that the proposed framework can readily meet the demand for risk stratification from a large volume of EHRs in an open-ended fashion. Copyright © 2015 Elsevier Inc. All rights reserved.
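
    The core mechanism described above is LDA-style topic modeling over EHR code counts, with risk tiers attached to the discovered sub-profiles. The sketch below is only a generic stand-in for that idea (it is not the authors' PRSM/WS-PRSM); the data, the outcome labels, and the risk proxy are all hypothetical:

      # Discover latent sub-profiles from bag-of-codes EHR data with LDA, then
      # rank the sub-profiles by a crude empirical risk proxy.
      import numpy as np
      from sklearn.decomposition import LatentDirichletAllocation

      rng = np.random.default_rng(0)
      n_patients, n_codes = 200, 50
      X = rng.poisson(0.3, size=(n_patients, n_codes))      # hypothetical code counts

      lda = LatentDirichletAllocation(n_components=3, random_state=0)
      theta = lda.fit_transform(X)                          # patient mixture over sub-profiles

      outcome = rng.integers(0, 2, n_patients)              # hypothetical adverse-outcome labels
      dominant = theta.argmax(axis=1)                       # dominant sub-profile per patient
      risk = [outcome[dominant == k].mean() if np.any(dominant == k) else 0.0 for k in range(3)]
      print("sub-profiles ordered from high to low empirical risk:", np.argsort(risk)[::-1])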

  13. GRIDSS: sensitive and specific genomic rearrangement detection using positional de Bruijn graph assembly

    PubMed Central

    Do, Hongdo; Molania, Ramyar

    2017-01-01

    The identification of genomic rearrangements with high sensitivity and specificity using massively parallel sequencing remains a major challenge, particularly in precision medicine and cancer research. Here, we describe a new method for detecting rearrangements, GRIDSS (Genome Rearrangement IDentification Software Suite). GRIDSS is a multithreaded structural variant (SV) caller that performs efficient genome-wide break-end assembly prior to variant calling using a novel positional de Bruijn graph-based assembler. By combining assembly, split read, and read pair evidence using a probabilistic scoring, GRIDSS achieves high sensitivity and specificity on simulated, cell line, and patient tumor data, recently winning SV subchallenge #5 of the ICGC-TCGA DREAM8.5 Somatic Mutation Calling Challenge. On human cell line data, GRIDSS halves the false discovery rate compared to other recent methods while matching or exceeding their sensitivity. GRIDSS identifies nontemplate sequence insertions, microhomologies, and large imperfect homologies, estimates a quality score for each breakpoint, stratifies calls into high or low confidence, and supports multisample analysis. PMID:29097403

  14. A Methodology for Modeling Nuclear Power Plant Passive Component Aging in Probabilistic Risk Assessment under the Impact of Operating Conditions, Surveillance and Maintenance Activities

    NASA Astrophysics Data System (ADS)

    Guler Yigitoglu, Askin

    In the context of long operation of nuclear power plants (NPPs) (i.e., 60-80 years, and beyond), investigation of the aging of passive systems, structures and components (SSCs) is important to assess safety margins and to decide on reactor life extension, as indicated within the U.S. Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) Program. In the traditional probabilistic risk assessment (PRA) methodology, evaluating the potential significance of aging of passive SSCs on plant risk is challenging. Although passive SSC failure rates can be added as initiating event frequencies or basic event failure rates in the traditional event-tree/fault-tree methodology, these failure rates are generally based on generic plant failure data, which means that the true state of a specific plant and its aging effects are not reflected in a realistic manner. Dynamic PRA methodologies have gained attention recently due to their capability to account for the plant state and thus address the difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models (and also in the modeling of digital instrumentation and control systems). Physics-based models can capture the impact of complex aging processes (e.g., fatigue, stress corrosion cracking, flow-accelerated corrosion, etc.) on SSCs and can be utilized to estimate passive SSC failure rates using realistic NPP data from reactor simulation, as well as considering effects of surveillance and maintenance activities. The objectives of this dissertation are twofold: the development of a methodology for the incorporation of aging modeling of passive SSCs into a reactor simulation environment to provide a framework for evaluating their risk contribution in both dynamic and traditional PRA; and the demonstration of the methodology through its application to pressurizer surge line pipe welds and steam generator tubes in commercial nuclear power plants. In the proposed methodology, a multi-state physics-based model is selected to represent the aging process. The model is modified via a sojourn time approach to reflect the operational and maintenance history dependence of the transition rates. Thermal-hydraulic parameters of the model are calculated via the reactor simulation environment, and uncertainties associated with both the parameters and the models are assessed via a two-loop Monte Carlo approach (Latin hypercube sampling) to propagate input probability distributions through the physical model. The effort documented in this thesis towards this overall objective consists of: i) defining a process for selecting critical passive components and related aging mechanisms, ii) aging model selection, iii) calculating the probability that aging would cause the component to fail, iv) uncertainty/sensitivity analyses, v) procedure development for modifying an existing PRA to accommodate consideration of passive component failures, and vi) including the calculated failure probability in the modified PRA. The proposed methodology is applied to pressurizer surge line pipe weld aging and steam generator tube degradation in pressurized water reactors.
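
    As a toy illustration of the sampling step described above (not the dissertation's multi-state physics model or its coupling to a reactor simulator), Latin hypercube samples of uncertain inputs can be pushed through a simple degradation law to estimate a failure probability; all distributions and numbers below are assumptions:

      # Latin hypercube propagation of uncertain inputs through a toy degradation law.
      import numpy as np
      from scipy.stats import qmc, norm, lognorm

      n = 10_000
      u = qmc.LatinHypercube(d=3, seed=1).random(n)              # uniforms in [0, 1)^3

      growth_rate  = lognorm(s=0.6, scale=0.15).ppf(u[:, 0])     # mm/year (hypothetical)
      initial_size = lognorm(s=0.3, scale=0.5).ppf(u[:, 1])      # mm (hypothetical)
      critical     = norm(loc=15.0, scale=1.0).ppf(u[:, 2])      # mm (hypothetical)

      years = 60.0
      final_size = initial_size + growth_rate * years            # toy linear growth law
      print(f"estimated failure probability over {years:.0f} years:",
            np.mean(final_size > critical))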

  15. Improving FMEA risk assessment through reprioritization of failures

    NASA Astrophysics Data System (ADS)

    Ungureanu, A. L.; Stan, G.

    2016-08-01

    Most of the current methods used to assess failure and to identify defects in industrial equipment are based on the determination of the Risk Priority Number (RPN). Although the conventional RPN calculation is easy to understand and use, the methodology presents some limitations, such as the large number of duplicate values and the difficulty of assessing the RPN indices. In order to eliminate the afore-mentioned shortcomings, this paper puts forward an easy and efficient computing method, called Failure Developing Mode and Criticality Analysis (FDMCA), which takes into account failures and the evolution of defects in time, from first appearance to breakdown.
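
    For context, the conventional calculation the paper sets out to improve is simply RPN = severity x occurrence x detection on 1-10 scales. The tiny sketch below (with invented ratings) also shows the duplicate-score problem mentioned above, since distinct failure modes can land on the same RPN; it does not implement the proposed FDMCA:

      # Classic FMEA Risk Priority Number: RPN = S * O * D, each rated 1-10.
      failure_modes = {                      # hypothetical (S, O, D) ratings
          "bearing wear":       (7, 5, 4),
          "seal leakage":       (5, 7, 4),
          "shaft misalignment": (7, 4, 5),
      }
      rpn = {name: s * o * d for name, (s, o, d) in failure_modes.items()}
      for name in sorted(rpn, key=rpn.get, reverse=True):
          print(f"{name:20s} RPN = {rpn[name]}")   # all three tie at 140: a duplicate-RPN case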

  16. Spiking neuron network Helmholtz machine.

    PubMed

    Sountsov, Pavel; Miller, Paul

    2015-01-01

    An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, the complete description of how one of those algorithms (or a novel algorithm) can be implemented in the brain is currently incomplete. There have been many proposed solutions that address how neurons can perform optimal inference but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation as the algorithm it uses to learn its parameters, called the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule.

  17. Spiking neuron network Helmholtz machine

    PubMed Central

    Sountsov, Pavel; Miller, Paul

    2015-01-01

    An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, the complete description of how one of those algorithms (or a novel algorithm) can be implemented in the brain is currently incomplete. There have been many proposed solutions that address how neurons can perform optimal inference but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation as the algorithm it uses to learn its parameters, called the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule. PMID:25954191

  18. Cloudy outlook. Supercommittee failure leaves healthcare providers questioning future cuts, impact on hospitals.

    PubMed

    Zigmond, Jessica

    2011-11-28

    President Barack Obama responded to the failure of the so-called supercommittee with a message that he won't allow Congress to water down the automatic cuts triggered under the August deficit law, which would include a 2% reduction in Medicare payments.

  19. Moving Aerospace Structural Design Practice to a Load and Resistance Factor Approach

    NASA Technical Reports Server (NTRS)

    Larsen, Curtis E.; Raju, Ivatury S.

    2016-01-01

    Aerospace structures are traditionally designed using the factor of safety (FOS) approach. The limit load on the structure is determined and the structure is then designed for FOS times the limit load - the ultimate load. Probabilistic approaches utilize distributions for loads and strengths; failures are predicted to occur in the region where the two distributions intersect. The load and resistance factor design (LRFD) approach judiciously combines these two approaches through intensive calibration studies of loads and strengths, resulting in structures that are efficient and reliable. This paper discusses these three approaches.
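
    The probabilistic view summarized above reduces, for independent normal load and resistance, to a closed-form failure probability; the sketch below uses hypothetical load and strength statistics purely to illustrate that calculation:

      # P(load > capacity) for independent normal load L and resistance R.
      import numpy as np
      from scipy.stats import norm

      mu_L, sd_L = 100.0, 15.0        # hypothetical limit-load statistics (kN)
      mu_R, sd_R = 180.0, 20.0        # hypothetical strength statistics (kN)

      beta = (mu_R - mu_L) / np.hypot(sd_R, sd_L)    # reliability index of the margin R - L
      p_fail = norm.cdf(-beta)
      print(f"reliability index beta = {beta:.2f}, P(load > capacity) = {p_fail:.2e}")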

  20. Shuttle payload vibroacoustic test plan evaluation

    NASA Technical Reports Server (NTRS)

    Stahle, C. V.; Gongloff, H. R.; Young, J. P.; Keegan, W. B.

    1977-01-01

    Statistical decision theory is used to evaluate seven alternate vibro-acoustic test plans for Space Shuttle payloads; test plans include component, subassembly and payload testing and combinations of component and assembly testing. The optimum test levels and the expected cost are determined for each test plan. By including all of the direct cost associated with each test plan and the probabilistic costs due to ground test and flight failures, the test plans which minimize project cost are determined. The lowest cost approach eliminates component testing and maintains flight vibration reliability by performing subassembly tests at a relatively high acoustic level.

  1. Philosophy of ATHEANA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bley, D.C.; Cooper, S.E.; Forester, J.A.

    ATHEANA, a second-generation Human Reliability Analysis (HRA) method, integrates advances in psychology with engineering, human factors, and Probabilistic Risk Analysis (PRA) disciplines to provide an HRA quantification process and PRA modeling interface that can accommodate and represent human performance in real nuclear power plant events. The method uses the characteristics of serious accidents identified through retrospective analysis of serious operational events to set priorities in a search process for significant human failure events, unsafe acts, and error-forcing context (unfavorable plant conditions combined with negative performance-shaping factors). ATHEANA has been tested in a demonstration project at an operating pressurized water reactor.

  2. Aircraft Conflict Analysis and Real-Time Conflict Probing Using Probabilistic Trajectory Modeling

    NASA Technical Reports Server (NTRS)

    Yang, Lee C.; Kuchar, James K.

    2000-01-01

    Methods for maintaining separation between aircraft in the current airspace system have been built from a foundation of structured routes and evolved procedures. However, as the airspace becomes more congested and the chance of failures or operational errors becomes more problematic, automated conflict alerting systems have been proposed to help provide decision support and to serve as traffic monitoring aids. The problem of conflict detection and resolution has been tackled in a number of different ways, but in this thesis it is recast as a problem of prediction in the presence of uncertainties. Much of the focus is concentrated on the errors and uncertainties from the working trajectory model used to estimate future aircraft positions. The more accurate the prediction, the more likely an ideal (no false alarms, no missed detections) alerting system can be designed. Additional insights into the problem were brought forth by a review of current operational and developmental approaches found in the literature. An iterative, trial-and-error approach to threshold design was identified. When examined from a probabilistic perspective, the threshold parameters were found to be a surrogate for probabilistic performance measures. To overcome the limitations of the current iterative design method, a new direct approach is presented where the performance measures are directly computed and used to perform the alerting decisions. The methodology is shown to handle complex encounter situations (3-D, multi-aircraft, multi-intent, with uncertainties) with relative ease. Utilizing a Monte Carlo approach, a method was devised to perform the probabilistic computations in near real time. Not only does this greatly increase the method's potential as an analytical tool, but it also opens up the possibility for use as a real-time conflict alerting probe. A prototype alerting logic was developed and has been utilized in several NASA Ames Research Center experimental studies.
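
    A drastically simplified sketch of the Monte Carlo conflict probe idea follows: sample trajectory-prediction errors that grow with look-ahead time and estimate the probability that the closest approach falls below a separation threshold. The encounter geometry, error model, and threshold are all assumptions, not the thesis' actual models:

      # Monte Carlo estimate of conflict probability for one pairwise encounter.
      import numpy as np

      rng = np.random.default_rng(7)
      n = 20_000
      t = np.linspace(0.0, 300.0, 61)                             # look-ahead time (s)

      rel_nominal = np.stack([10.0 - 0.05 * t, np.full_like(t, 10.0)], axis=1)  # km
      sigma = 0.2 + 0.004 * t                                     # error growth (km)

      err = rng.normal(0.0, sigma[None, :, None], size=(n, t.size, 2))
      min_sep = np.linalg.norm(rel_nominal[None, :, :] + err, axis=2).min(axis=1)
      print("estimated conflict probability:", np.mean(min_sep < 5.0 * 1.852))  # 5 NM in km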

  3. Probabilistic exposure assessment model to estimate aseptic-UHT product failure rate.

    PubMed

    Pujol, Laure; Albert, Isabelle; Magras, Catherine; Johnson, Nicholas Brian; Membré, Jeanne-Marie

    2015-01-02

    Aseptic-Ultra-High-Temperature (UHT) products are manufactured to be free of microorganisms capable of growing in the food at the normal non-refrigerated conditions at which the food is likely to be held during manufacture, distribution and storage. Two important phases within the process are widely recognised as critical in controlling microbial contamination: the sterilisation steps and the following aseptic steps. Of the microbial hazards, the pathogenic spore formers Clostridium botulinum and Bacillus cereus are deemed the most pertinent to be controlled. In addition, due to a relatively high thermal resistance, Geobacillus stearothermophilus spores are considered a concern for spoilage of low acid aseptic-UHT products. A probabilistic exposure assessment model has been developed in order to assess the aseptic-UHT product failure rate associated with these three bacteria. It was a Modular Process Risk Model, based on nine modules. They described: i) the microbial contamination introduced by the raw materials, either from the product (i.e. milk, cocoa and dextrose powders and water) or the packaging (i.e. bottle and sealing component), ii) the sterilisation processes, of either the product or the packaging material, iii) the possible recontamination during subsequent processing of both product and packaging. The Sterility Failure Rate (SFR) was defined as the sum of bottles contaminated for each batch, divided by the total number of bottles produced per process line run (10^6 batches simulated per process line). The SFR associated with the three bacteria was estimated at the last step of the process (i.e. after Module 9) but also after each module, allowing for the identification of modules, and responsible contamination pathways, with higher or lower intermediate SFR. The model contained 42 controlled settings associated with factory environment, process line or product formulation, and more than 55 probabilistic inputs whose variability is conditional on an uncertain mean. It was developed in @Risk and run through Monte Carlo simulations. Overall, the highest SFR was associated with G. stearothermophilus (380,000 bottles contaminated in 10^11 bottles produced) and the lowest with C. botulinum (3 bottles contaminated in 10^11 bottles produced). Unsurprisingly, the SFR due to G. stearothermophilus was driven by its ability to survive the UHT treatment. More interestingly, it was identified that the SFR due to B. cereus (17,000 bottles contaminated in 10^11 bottles produced) was due to airborne recontamination of the aseptic tank (49%) and post-sterilisation packaging contamination (33%). A deeper analysis (sensitivity and scenario analyses) was done to investigate how the SFR due to B. cereus could be reduced by changing the process settings related to the potential air recontamination source. Copyright © 2014 Elsevier B.V. All rights reserved.
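
    A heavily simplified sketch of the sterility-failure-rate bookkeeping is given below; it collapses the nine modules into two contamination routes (survival of sterilisation and aseptic recontamination) with invented rates, and is not the published @Risk model:

      # Toy Monte Carlo of contaminated bottles per bottles produced.
      import numpy as np

      rng = np.random.default_rng(3)
      bottles = 10_000_000                              # bottles simulated (illustrative)

      spores_in = rng.poisson(0.5, bottles)             # hypothetical initial spores/bottle
      survivors = rng.binomial(spores_in, 1e-6)         # hypothetical per-spore UHT survival
      recontam = rng.random(bottles) < 2e-7             # hypothetical aseptic-line event rate

      contaminated = (survivors > 0) | recontam
      print(f"sterility failure rate ~ {contaminated.mean():.1e} "
            f"({contaminated.sum()} contaminated in {bottles:,} bottles)")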

  4. Towards health impact assessment of drinking-water privatization--the example of waterborne carcinogens in North Rhine-Westphalia (Germany).

    PubMed Central

    Fehr, Rainer; Mekel, Odile; Lacombe, Martin; Wolf, Ulrike

    2003-01-01

    Worldwide there is a tendency towards deregulation in many policy sectors - this, for example, includes liberalization and privatization of drinking-water management. However, concerns about the negative impacts this might have on human health call for prospective health impact assessment (HIA) on the management of drinking-water. On the basis of an established generic 10-step HIA procedure and on risk assessment methodology, this paper aims to produce quantitative estimates concerning health effects from increased exposure to carcinogens in drinking-water. Using data from North Rhine-Westphalia in Germany, probabilistic estimates of excess lifetime cancer risk, as well as estimates of additional cases of cancer from increased carcinogen exposure levels are presented. The results show how exposure to contaminants that are strictly within current limits could increase cancer risks and case-loads substantially. On the basis of the current analysis, we suggest that with uniform increases in pollutant levels, a single chemical (arsenic) is responsible for a large fraction of expected additional risk. The study also illustrates the uncertainty involved in predicting the health impacts of changes in water quality. Future analysis should include additional carcinogens, non-cancer risks including those due to microbial contamination, and the impacts of system failures and of illegal action, which may be increasingly likely to occur under changed management arrangements. If, in spite of concerns, water is privatized, it is particularly important to provide adequate surveillance of water quality. PMID:12894324
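
    The quantitative core of such an assessment is the standard excess-lifetime-cancer-risk arithmetic (chronic daily intake multiplied by a slope factor), propagated through uncertain exposure inputs. The sketch below is a generic version of that calculation, not the paper's HIA model; the concentration, intake, and body-weight distributions are assumptions, and the arsenic slope factor is the commonly cited oral value:

      # Probabilistic excess lifetime cancer risk for a single waterborne carcinogen.
      import numpy as np

      rng = np.random.default_rng(11)
      n = 100_000

      conc = rng.lognormal(mean=np.log(5e-3), sigma=0.5, size=n)    # mg/L arsenic (assumed)
      intake = rng.normal(2.0, 0.4, size=n).clip(min=0.5)           # L/day (assumed)
      weight = rng.normal(70.0, 12.0, size=n).clip(min=40.0)        # kg (assumed)
      slope_factor = 1.5                                            # (mg/kg-day)^-1, oral, arsenic

      risk = conc * intake / weight * slope_factor                  # excess lifetime risk
      print(f"median risk: {np.median(risk):.1e}, 95th percentile: {np.percentile(risk, 95):.1e}")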

  5. Reducing a Knowledge-Base Search Space When Data Are Missing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    This software addresses the problem of how to efficiently execute a knowledge base in the presence of missing data. Computationally, this is an exponentially expensive operation that, without heuristics, generates a search space of 1 + 2^n possible scenarios, where n is the number of rules in the knowledge base. Even for a knowledge base of the most modest size, say 16 rules, it would produce 65,537 possible scenarios. The purpose of this software is to reduce the complexity of this operation to a more manageable size. The problem that this system solves is to develop an automated approach that can reason in the presence of missing data. This is a meta-reasoning capability that repeatedly calls a diagnostic engine/model to provide prognoses and prognosis tracking. In the big picture, the scenario generator takes as its input the current state of a system, including probabilistic information from Data Forecasting. Using model-based reasoning techniques, it returns an ordered list of fault scenarios that could be generated from the current state, i.e., the plausible future failure modes of the system as it presently stands. The scenario generator models a Potential Fault Scenario (PFS) as a black box, the input of which is a set of states tagged with priorities and the output of which is one or more potential fault scenarios tagged by a confidence factor. The results from the system are used by a model-based diagnostician to predict the future health of the monitored system.
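
    The growth of that search space, and the reason a pruning heuristic is essential, can be seen in a few lines (illustrative only; the branching cap at the end is a generic mitigation, not necessarily the tool's actual heuristic):

      # Scenario count for exhaustive reasoning over n rules with missing data.
      def scenario_count(n_rules: int) -> int:
          return 1 + 2 ** n_rules

      for n in (8, 16, 24, 32):
          print(f"{n:2d} rules -> {scenario_count(n):,} scenarios")
      # 16 rules -> 65,537 scenarios, as noted above.

      # A generic mitigation: branch only on the k most relevant rules, bounding
      # the space at 1 + 2**k regardless of the knowledge-base size.
      k = 10
      print(f"capped at k={k}: {scenario_count(k):,} scenarios")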

  6. Fortran programs for reliability analysis

    Treesearch

    John J. Zahn

    1992-01-01

    This report contains a set of FORTRAN subroutines written to calculate the Hasofer-Lind reliability index. Nonlinear failure criteria and correlated basic variables are permitted. Users may incorporate these routines into their own calling program (an example program, RELANAL, is included) and must provide a failure criterion subroutine (two example subroutines,...
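
    For readers who want the flavor of the computation without the FORTRAN source, a Python sketch of the Hasofer-Lind index via the standard HL-RF iteration is shown below (illustrative, not the report's routines); the example limit state uses hypothetical normal load and resistance variables written in standard normal space:

      # Hasofer-Lind reliability index by the HL-RF iteration in standard normal space.
      import numpy as np

      def hl_rf(g, grad_g, u0, tol=1e-8, max_iter=100):
          u = np.asarray(u0, dtype=float)
          for _ in range(max_iter):
              gu, dg = g(u), grad_g(u)
              u_new = dg * (dg @ u - gu) / (dg @ dg)
              if np.linalg.norm(u_new - u) < tol:
                  return np.linalg.norm(u_new), u_new
              u = u_new
          return np.linalg.norm(u), u          # beta and design point

      # R ~ N(180, 20), S ~ N(100, 15); linear limit state g = R - S.
      g = lambda u: (180.0 + 20.0 * u[0]) - (100.0 + 15.0 * u[1])
      grad_g = lambda u: np.array([20.0, -15.0])
      beta, u_star = hl_rf(g, grad_g, u0=[0.0, 0.0])
      print(f"beta = {beta:.3f}")              # analytic: 80 / sqrt(20**2 + 15**2) = 3.2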

  7. Fall down 7 Times, Get up 8: Teaching Kids to Succeed

    ERIC Educational Resources Information Center

    Silver, Debbie

    2012-01-01

    As teachers and parents, our job is to teach students to tackle challenges rather than avoid them. Award-winning teacher and best-selling author Debbie Silver addresses the relationship between student motivation and risking failure, calling failure a temporary "glitch" that provides valuable learning opportunities. She explains motivational…

  8. A remote monitoring and telephone nurse coaching intervention to reduce readmissions among patients with heart failure: study protocol for the Better Effectiveness After Transition - Heart Failure (BEAT-HF) randomized controlled trial.

    PubMed

    Black, Jeanne T; Romano, Patrick S; Sadeghi, Banafsheh; Auerbach, Andrew D; Ganiats, Theodore G; Greenfield, Sheldon; Kaplan, Sherrie H; Ong, Michael K

    2014-04-13

    Heart failure is a prevalent health problem associated with costly hospital readmissions. Transitional care programs have been shown to reduce readmissions but are costly to implement. Evidence regarding the effectiveness of telemonitoring in managing the care of this chronic condition is mixed. The objective of this randomized controlled comparative effectiveness study is to evaluate the effectiveness of a care transition intervention that includes pre-discharge education about heart failure and post-discharge telephone nurse coaching combined with home telemonitoring of weight, blood pressure, heart rate, and symptoms in reducing all-cause 180-day hospital readmissions for older adults hospitalized with heart failure. A multi-center, randomized controlled trial is being conducted at six academic health systems in California. A total of 1,500 patients aged 50 years and older will be enrolled during a hospitalization for treatment of heart failure. Patients in the intervention group will receive intensive patient education using the 'teach-back' method and receive instruction in using the telemonitoring equipment. Following hospital discharge, they will receive a series of nine scheduled health coaching telephone calls over 6 months from nurses located in a centralized call center. The nurses also will call patients and patients' physicians in response to alerts generated by the telemonitoring system, based on predetermined parameters. The primary outcome is readmission for any cause within 180 days. Secondary outcomes include 30-day readmission, mortality, hospital days, emergency department (ED) visits, hospital cost, and health-related quality of life. BEAT-HF is one of the largest randomized controlled trials of telemonitoring in patients with heart failure, and the first explicitly to adapt the care transition approach and combine it with remote telemonitoring. The study population also includes patients with a wide range of demographic and socioeconomic characteristics. Once completed, the study will be a rich resource of information on how best to use remote technology in the care management of patients with chronic heart failure. ClinicalTrials.gov # NCT01360203.

  9. Application of reliability-centered maintenance to boiling water reactor emergency core cooling systems fault-tree analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Y.A.; Feltus, M.A.

    1995-07-01

    Reliability-centered maintenance (RCM) methods are applied to boiling water reactor plant-specific emergency core cooling system probabilistic risk assessment (PRA) fault trees. RCM is a system function-based technique for improving a preventive maintenance (PM) program, which is applied on a component basis. Many PM programs are based on time-directed maintenance tasks, while RCM methods focus on condition-directed maintenance tasks for components. Stroke time test data for motor-operated valves (MOVs) are used to address three aspects concerning RCM: (a) to determine if MOV stroke time testing is useful as a condition-directed PM task; (b) to determine and compare the plant-specific MOV failure data from a broad RCM philosophy time period with those from a PM period and with generic industry MOV failure data; and (c) to determine the effects and impact of the plant-specific MOV failure data on core damage frequency (CDF) and system unavailabilities for these emergency systems. The MOV stroke time test data from four emergency core cooling systems [i.e., high-pressure coolant injection (HPCI), reactor core isolation cooling (RCIC), low-pressure core spray (LPCS), and residual heat removal/low-pressure coolant injection (RHR/LPCI)] were gathered from Philadelphia Electric Company's Peach Bottom Atomic Power Station Units 2 and 3 between 1980 and 1992. The analyses showed that MOV stroke time testing was not a predictor of imminent failure and should be considered a go/no-go test. The failure data from the broad RCM philosophy showed an improvement compared with the PM-period failure rates in the emergency core cooling system MOVs. Also, the plant-specific MOV failure rates for both maintenance philosophies were shown to be lower than the generic industry estimates.

  10. 45 CFR 1386.19 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 1386.25 of this part the following definitions apply: Abuse means any act or failure to act which was... such acts as: Verbal, nonverbal, mental and emotional harassment; rape or sexual assault; striking; the..., telephone calls (including anonymous calls), from any source alleging abuse or neglect of an individual with...

  11. 45 CFR 1386.19 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 1386.25 of this part the following definitions apply: Abuse means any act or failure to act which was... such acts as: Verbal, nonverbal, mental and emotional harassment; rape or sexual assault; striking; the..., telephone calls (including anonymous calls), from any source alleging abuse or neglect of an individual with...

  12. 45 CFR 1386.19 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 1386.25 of this part the following definitions apply: Abuse means any act or failure to act which was... such acts as: Verbal, nonverbal, mental and emotional harassment; rape or sexual assault; striking; the..., telephone calls (including anonymous calls), from any source alleging abuse or neglect of an individual with...

  13. Analysis of Jordan’s Proposed Emergency Communication Interoperability Plan (JECIP) for Disaster Response

    DTIC Science & Technology

    2008-12-01

    Transmission quality measurements start once the call is established, which includes low voice volume, level of noise, echo, crosstalk, and garbling...to failure, and finally, there is restorability, which is a measure of how easily the system is restored after a failure. To reduce the frequency of failure...Silicon and Germanium. These systems are environmentally friendly, low-noise, consume no fuel, are maintenance-free, and have no

  14. UAV Swarm Behavior Modeling for Early Exposure of Failure Modes

    DTIC Science & Technology

    2016-09-01

    Systems Center Atlantic, for his patience with me through this two-year process. He worked with my schedule and was very understanding of the...emergence of new failure modes? The MP modeling environment provides a breakdown of all potential event traces. Given that the research questions call...for the revelation of potential failure modes, MP was selected as the modeling environment because it provides a substantial set of results and data

  15. PROBABILISTIC CROSS-IDENTIFICATION IN CROWDED FIELDS AS AN ASSIGNMENT PROBLEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Budavári, Tamás; Basu, Amitabh, E-mail: budavari@jhu.edu, E-mail: basu.amitabh@jhu.edu

    2016-10-01

    One of the outstanding challenges of cross-identification is multiplicity: detections in crowded regions of the sky are often linked to more than one candidate association of similar likelihood. We map the resulting maximum likelihood partitioning to the fundamental assignment problem of discrete mathematics and efficiently solve the two-way catalog-level matching in the realm of combinatorial optimization using the so-called Hungarian algorithm. We introduce the method, demonstrate its performance in a mock universe where the true associations are known, and discuss the applicability of the new procedure to large surveys.
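
    The assignment-problem reduction described above can be exercised with SciPy's implementation of the Hungarian algorithm; the cost matrix below stands in for negative log-likelihoods of association between detections in two catalogs and is entirely made up:

      # Two-way catalog matching as a linear assignment problem.
      import numpy as np
      from scipy.optimize import linear_sum_assignment

      cost = np.array([            # hypothetical costs (rows: catalog A, cols: catalog B)
          [0.2, 5.1, 6.3, 7.0],
          [4.8, 0.9, 1.1, 6.2],
          [5.5, 1.0, 0.7, 5.9],
          [7.2, 6.4, 5.8, 0.3],
      ])
      rows, cols = linear_sum_assignment(cost)
      for i, j in zip(rows, cols):
          print(f"A[{i}] <-> B[{j}]  (cost {cost[i, j]:.1f})")
      print("total cost:", cost[rows, cols].sum())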

  16. Probabilistic Cross-identification in Crowded Fields as an Assignment Problem

    NASA Astrophysics Data System (ADS)

    Budavári, Tamás; Basu, Amitabh

    2016-10-01

    One of the outstanding challenges of cross-identification is multiplicity: detections in crowded regions of the sky are often linked to more than one candidate association of similar likelihood. We map the resulting maximum likelihood partitioning to the fundamental assignment problem of discrete mathematics and efficiently solve the two-way catalog-level matching in the realm of combinatorial optimization using the so-called Hungarian algorithm. We introduce the method, demonstrate its performance in a mock universe where the true associations are known, and discuss the applicability of the new procedure to large surveys.

  17. The Pot Calling the Kettle Black: Distancing Response to Ethical Dissonance

    ERIC Educational Resources Information Center

    Barkan, Rachel; Ayal, Shahar; Gino, Francesca; Ariely, Dan

    2012-01-01

    Six studies demonstrate the "pot calling the kettle black" phenomenon whereby people are guilty of the very fault they identify in others. Recalling an undeniable ethical failure, people experience ethical dissonance between their moral values and their behavioral misconduct. Our findings indicate that to reduce ethical dissonance,…

  18. Hazard function theory for nonstationary natural hazards

    DOE PAGES

    Read, Laura K.; Vogel, Richard M.

    2016-04-11

    Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied generalized Pareto model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. As a result, our theoretical analysis linking the hazard random variable X with the corresponding failure time series T should have application to a wide class of natural hazards, with opportunities for future extensions.
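
    A minimal numerical sketch of the ideas above (not the paper's derivation) treats peaks-over-threshold exceedances as Poisson arrivals with generalized Pareto magnitudes, lets the GPD scale drift upward, and compares the resulting multi-year reliability with the stationary case; every parameter value is an assumption:

      # Nonstationary exceedance probability and reliability under a GPD/POT model.
      import numpy as np
      from scipy.stats import genpareto

      xi, sigma0, rate = 0.1, 10.0, 2.0            # GPD shape/scale, POT events per year
      design_level = 60.0                          # magnitude above the POT threshold

      years = np.arange(50)
      sigma_t = sigma0 * (1.0 + 0.01 * years)      # assumed 1%/year trend in the scale

      p_exceed = 1.0 - np.exp(-rate * genpareto.sf(design_level, xi, scale=sigma_t))
      reliability = np.cumprod(1.0 - p_exceed)
      print(f"year-1 exceedance probability: {p_exceed[0]:.4f}")
      print(f"50-year reliability, nonstationary: {reliability[-1]:.3f}")
      print(f"50-year reliability, stationary:    {(1.0 - p_exceed[0]) ** 50:.3f}")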

  19. A Physically-Based and Distributed Tool for Modeling the Hydrological and Mechanical Processes of Shallow Landslides

    NASA Astrophysics Data System (ADS)

    Arnone, E.; Noto, L. V.; Dialynas, Y. G.; Caracciolo, D.; Bras, R. L.

    2015-12-01

    This work presents the capabilities of the tRIBS-VEGGIE-Landslide model in two different versions: one developed within a probabilistic framework and one coupled with a root cohesion module. The probabilistic version treats geotechnical and soil retention curve parameters as random variables across the basin and estimates theoretical probability distributions of slope stability and of the associated "factor of safety" commonly used to describe the occurrence of shallow landslides. The derived distributions are used to obtain the spatio-temporal dynamics of the probability of failure, conditioned on soil moisture dynamics at each watershed location. The framework has been tested in the Luquillo Experimental Forest (Puerto Rico), where shallow landslides are common. In particular, the methodology was used to evaluate how the spatial and temporal patterns of precipitation, whose variability is significant over the basin, affect the distribution of the probability of failure. The other version of the model accounts for the additional cohesion exerted by vegetation roots. The approach is to use the Fiber Bundle Model (FBM) framework, which allows for the evaluation of root strength as a function of the stress-strain relationships of bundles of fibers. The model requires knowledge of the root architecture to evaluate the additional reinforcement from each root diameter class. The root architecture is represented with a branching topology model based on Leonardo's rule. The methodology has been tested on a simple case study to explore the role of both hydrological and mechanical root effects. Results demonstrate that the effects of root water uptake can at times be more significant than the mechanical reinforcement, and that the additional resistance provided by roots depends heavily on the vegetation root structure and length.
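
    The "factor of safety" referred to above is, in its simplest form, the infinite-slope expression with cohesion, root cohesion, and pore-pressure terms. The sketch below is that textbook expression with invented parameter values, offered only to show how wetter soil and weaker roots push the factor of safety toward failure; it is not the tRIBS-VEGGIE-Landslide implementation:

      # Infinite-slope factor of safety with a root cohesion term.
      import numpy as np

      def factor_of_safety(slope_deg, depth, sat_frac, c_soil, c_root,
                           phi_deg=33.0, gamma_soil=18.0, gamma_w=9.81):
          beta = np.radians(slope_deg)
          phi = np.radians(phi_deg)
          pore = gamma_w * depth * sat_frac * np.cos(beta) ** 2      # kPa
          normal = gamma_soil * depth * np.cos(beta) ** 2            # kPa
          shear = gamma_soil * depth * np.sin(beta) * np.cos(beta)   # kPa
          return (c_soil + c_root + (normal - pore) * np.tan(phi)) / shear

      print(factor_of_safety(35.0, 1.5, sat_frac=0.3, c_soil=2.0, c_root=5.0))  # ~1.3, stable
      print(factor_of_safety(35.0, 1.5, sat_frac=0.9, c_soil=2.0, c_root=0.0))  # <1, failure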

  20. Damage prognosis of adhesively-bonded joints in laminated composite structural components of unmanned aerial vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrar, Charles R; Gobbato, Maurizio; Conte, Joel

    2009-01-01

    The extensive use of lightweight advanced composite materials in unmanned aerial vehicles (UAVs) drastically increases the sensitivity to both fatigue- and impact-induced damage of their critical structural components (e.g., wings and tail stabilizers) during service life. The spar-to-skin adhesive joints are considered one of the most fatigue-sensitive subcomponents of a lightweight UAV composite wing, with damage progressively evolving from the wing root. This paper presents a comprehensive probabilistic methodology for predicting the remaining service life of adhesively-bonded joints in laminated composite structural components of UAVs. Non-destructive evaluation techniques and Bayesian inference are used to (i) assess the current state of damage of the system and (ii) update the probability distribution of the damage extent at various locations. A probabilistic model for future loads and a mechanics-based damage model are then used to stochastically propagate damage through the joint. Combined local (e.g., exceedance of a critical damage size) and global (e.g., flutter instability) failure criteria are finally used to compute the probability of component failure at future times. The applicability and the partial validation of the proposed methodology are then briefly discussed by analyzing the debonding propagation, along a pre-defined adhesive interface, in a simply supported laminated composite beam with solid rectangular cross section, subjected to a concentrated load applied at mid-span. A specially developed Euler-Bernoulli beam finite element with interlaminar slip along the damageable interface is used in combination with a cohesive zone model to study the fatigue-induced degradation in the adhesive material. The preliminary numerical results presented are promising for the future validation of the methodology.

  1. Modeling the economic impact of linezolid versus vancomycin in confirmed nosocomial pneumonia caused by methicillin-resistant Staphylococcus aureus.

    PubMed

    Patel, Dipen A; Shorr, Andrew F; Chastre, Jean; Niederman, Michael; Simor, Andrew; Stephens, Jennifer M; Charbonneau, Claudie; Gao, Xin; Nathwani, Dilip

    2014-07-22

    We compared the economic impacts of linezolid and vancomycin for the treatment of hospitalized patients with methicillin-resistant Staphylococcus aureus (MRSA)-confirmed nosocomial pneumonia. We used a 4-week decision tree model incorporating published data and expert opinion on clinical parameters, resource use and costs (in 2012 US dollars), such as efficacy, mortality, serious adverse events, treatment duration and length of hospital stay. The results presented are from a US payer perspective. The base case first-line treatment duration for patients with MRSA-confirmed nosocomial pneumonia was 10 days. Clinical treatment success (used for the cost-effectiveness ratio) and failure due to lack of efficacy, serious adverse events or mortality were possible clinical outcomes that could impact costs. Cost of treatment and incremental cost-effectiveness per successfully treated patient were calculated for linezolid versus vancomycin. Univariate (one-way) and probabilistic sensitivity analyses were conducted. The model allowed us to calculate the total base case inpatient costs as $46,168 (linezolid) and $46,992 (vancomycin). The incremental cost-effectiveness ratio favored linezolid (versus vancomycin), with lower costs ($824 less) and greater efficacy (+2.7% absolute difference in the proportion of patients successfully treated for MRSA nosocomial pneumonia). Approximately 80% of the total treatment costs were attributed to hospital stay (primarily in the intensive care unit). The results of our probabilistic sensitivity analysis indicated that linezolid is the cost-effective alternative under varying willingness to pay thresholds. These model results show that linezolid has a favorable incremental cost-effectiveness ratio compared to vancomycin for MRSA-confirmed nosocomial pneumonia, largely attributable to the higher clinical trial response rate of patients treated with linezolid. The higher drug acquisition cost of linezolid was offset by lower treatment failure-related costs and fewer days of hospitalization.
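
    The headline comparison above reduces to a simple incremental cost-effectiveness calculation. The sketch below re-uses the quoted cost totals and the 2.7% absolute effectiveness difference purely for illustration (the baseline success proportion is invented, and the published analysis is a full decision-tree model with sensitivity analyses):

      # Incremental cost-effectiveness comparison from point estimates.
      cost_linezolid, cost_vancomycin = 46_168.0, 46_992.0      # total inpatient cost, USD
      succ_linezolid, succ_vancomycin = 0.573, 0.546            # hypothetical; +2.7% absolute

      d_cost = cost_linezolid - cost_vancomycin                 # negative: linezolid cheaper
      d_eff = succ_linezolid - succ_vancomycin                  # positive: linezolid more effective
      print(f"incremental cost: ${d_cost:,.0f}, incremental effect: {d_eff:+.3f}")
      if d_cost < 0 and d_eff > 0:
          print("linezolid dominates (cheaper and more effective)")
      else:
          print(f"ICER = ${d_cost / d_eff:,.0f} per additional successfully treated patient")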

  2. Probabilistic Risk Assessment of Hydraulic Fracturing in Unconventional Reservoirs by Means of Fault Tree Analysis: An Initial Discussion

    NASA Astrophysics Data System (ADS)

    Rodak, C. M.; McHugh, R.; Wei, X.

    2016-12-01

    The development and combination of horizontal drilling and hydraulic fracturing has unlocked unconventional hydrocarbon reserves around the globe. These advances have triggered a number of concerns regarding aquifer contamination and over-exploitation, leading to scientific studies investigating potential risks posed by directional hydraulic fracturing activities. These studies, balanced against the potential economic benefits of energy production, are a crucial source of information for communities considering the development of unconventional reservoirs. However, probabilistic quantifications of the overall risk posed by hydraulic fracturing at the system level are rare. Here we present the concept of fault tree analysis to determine the overall probability of groundwater contamination or over-exploitation, broadly referred to as the probability of failure. The potential utility of fault tree analysis for the quantification and communication of risks is approached with a general application. However, the fault tree design is robust and can handle various combinations of region-specific data pertaining to relevant spatial scales, geological conditions, and industry practices where available. All available data are grouped into quantity- and quality-based impacts and subdivided based on the stage of the hydraulic fracturing process in which the data are relevant, as described by the USEPA. Each stage is broken down into the unique basic events required for failure; for example, to quantify the risk of an on-site spill we must consider the likelihood, magnitude, composition, and subsurface transport of the spill. The structure of the fault tree described above can be used to render a highly complex system of variables into a straightforward equation for risk calculation based on Boolean logic. This project shows the utility of fault tree analysis for the visual communication of the potential risks of hydraulic fracturing activities on groundwater resources.
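
    At its core, the Boolean reduction mentioned above combines independent basic-event probabilities through AND and OR gates. The event names and probabilities in the sketch below are hypothetical and are not taken from the study:

      # Fault tree gate arithmetic for independent basic events.
      def p_or(*probs):    # at least one input event occurs
          q = 1.0
          for p in probs:
              q *= (1.0 - p)
          return 1.0 - q

      def p_and(*probs):   # all input events occur
          q = 1.0
          for p in probs:
              q *= p
          return q

      p_spill = p_and(0.05, 0.4)          # spill occurs AND reaches groundwater
      p_well = p_and(0.01, 0.2)           # casing defect AND pathway to aquifer
      print(f"P(groundwater quality failure) ~ {p_or(p_spill, p_well):.4f}")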

  3. Probabilistic seismic hazard in the San Francisco Bay area based on a simplified viscoelastic cycle model of fault interactions

    USGS Publications Warehouse

    Pollitz, F.F.; Schwartz, D.P.

    2008-01-01

    We construct a viscoelastic cycle model of plate boundary deformation that includes the effect of time-dependent interseismic strain accumulation, coseismic strain release, and viscoelastic relaxation of the substrate beneath the seismogenic crust. For a given fault system, time-averaged stress changes at any point (not on a fault) are constrained to zero; that is, kinematic consistency is enforced for the fault system. The dates of last rupture, mean recurrence times, and the slip distributions of the (assumed) repeating ruptures are key inputs into the viscoelastic cycle model. This simple formulation allows construction of stress evolution at all points in the plate boundary zone for purposes of probabilistic seismic hazard analysis (PSHA). Stress evolution is combined with a Coulomb failure stress threshold at representative points on the fault segments to estimate the times of their respective future ruptures. In our PSHA we consider uncertainties in a four-dimensional parameter space: the rupture periodicities, slip distributions, time of last earthquake (for prehistoric ruptures), and Coulomb failure stress thresholds. We apply this methodology to the San Francisco Bay region using a recently determined fault chronology of area faults. Assuming single-segment rupture scenarios, we find that future rupture probabilities of area faults in the coming decades are highest for the southern Hayward, Rodgers Creek, and northern Calaveras faults. This conclusion is qualitatively similar to that of the Working Group on California Earthquake Probabilities, but the probabilities derived here are significantly higher. Given that fault rupture probabilities are highly model-dependent, no single model should be used to assess time-dependent rupture probabilities. We suggest that several models, including the present one, be used in a comprehensive PSHA methodology, as was done by the Working Group on California Earthquake Probabilities.

  4. Hazard function theory for nonstationary natural hazards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Read, Laura K.; Vogel, Richard M.

    Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied generalized Pareto model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. As a result, our theoretical analysis linking the hazard random variable X with the corresponding failure time series T should have application to a wide class of natural hazards, with opportunities for future extensions.

  5. Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners (Second Edition)

    NASA Technical Reports Server (NTRS)

    Stamatelatos, Michael; Dezfuli, Homayoon; Apostolakis, George; Everline, Chester; Guarro, Sergio; Mathias, Donovan; Mosleh, Ali; Paulos, Todd; Riha, David; Smith, Curtis

    2011-01-01

    Probabilistic Risk Assessment (PRA) is a comprehensive, structured, and logical analysis method aimed at identifying and assessing risks in complex technological systems for the purpose of cost-effectively improving their safety and performance. NASA's objective is to better understand and effectively manage risk, and thus more effectively ensure mission and programmatic success, and to achieve and maintain high safety standards at NASA. NASA intends to use risk assessment in its programs and projects to support optimal management decision making for the improvement of safety and program performance. In addition to using quantitative/probabilistic risk assessment to improve safety and enhance the safety decision process, NASA has incorporated quantitative risk assessment into its system safety assessment process, which until now has relied primarily on a qualitative representation of risk. Also, NASA has recently adopted the Risk-Informed Decision Making (RIDM) process [1-1] as a valuable addition to supplement existing deterministic and experience-based engineering methods and tools. Over the years, NASA has been a leader in most of the technologies it has employed in its programs. One would think that PRA should be no exception. In fact, it would be natural for NASA to be a leader in PRA because, as a technology pioneer, NASA uses risk assessment and management implicitly or explicitly on a daily basis. NASA has probabilistic safety requirements (thresholds and goals) for crew transportation system missions to the International Space Station (ISS) [1-2]. NASA intends to have probabilistic requirements for any new human spaceflight transportation system acquisition. Methods to perform risk and reliability assessment in the early 1960s originated in U.S. aerospace and missile programs. Fault tree analysis (FTA) is an example. It would have been a reasonable extrapolation to expect that NASA would also become the world leader in the application of PRA. That was, however, not to happen. Early in the Apollo program, estimates of the probability for a successful roundtrip human mission to the moon yielded disappointingly low (and suspect) values and NASA became discouraged from further performing quantitative risk analyses until some two decades later when the methods were more refined, rigorous, and repeatable. Instead, NASA decided to rely primarily on the Hazard Analysis (HA) and Failure Modes and Effects Analysis (FMEA) methods for system safety assessment.

  6. Velocity field measurements in tailings dam failure experiments using a combined PIV-PTV approach

    USDA-ARS?s Scientific Manuscript database

    Tailings dams are built to impound mining waste, also called tailings, which consists of a mixture of fine-sized sediments and water contaminated with some hazardous chemicals used for extracting the ore by leaching. Non-Newtonian flow of sediment-water mixture resulting from a failure of tailings d...

  7. Who is responsible for business failures?

    PubMed

    Cleverley, William O

    2002-10-01

    Two high-profile business failures should be a wake-up call for hospital leaders. Management and boards cannot rely on unaudited financial statements as the principal indicator of financial health. Boards must ensure diligent analysis of key financial measures. Board members need better education about healthcare finance. Hospitals are not immune to financial collapse.

  8. Probabilistic Hazard Estimation at a Densely Urbanised Area: the Neaples Volcanoes

    NASA Astrophysics Data System (ADS)

    de Natale, G.; Mastrolorenzo, G.; Panizza, A.; Pappalardo, L.; Claudia, T.

    2005-12-01

    The Naples volcanic area (Southern Italy), including Vesuvius, the Campi Flegrei caldera and Ischia island, is the highest-risk one in the world, with more than 2 million people living within about 10 km of an active volcanic vent. Such extreme risk calls for accurate methodologies aimed at quantifying it in a probabilistic way, considering all the available volcanological information as well as modelling results. In fact, simple hazard maps based on the observation of deposits from past eruptions have the major problem that the eruptive history generally samples a very limited number of possible outcomes, and are thus almost meaningless for estimating event probabilities in the area. This work describes a methodology making the best use (from a Bayesian point of view) of volcanological data and modelling results to compute probabilistic hazard maps for multi-vent explosive eruptions. The method, which follows an approach recently developed by the same authors for pyroclastic flow hazard, has been here improved and extended to compute fall-out hazard as well. The application of the method to the Neapolitan volcanic area, including the densely populated city of Naples, allows, for the first time, a global picture of the areal distribution of the main hazards from multi-vent explosive eruptions. From a joint consideration of the hazard contributions from all three volcanic areas, new insight into the volcanic hazard distribution emerges, which will have strong implications for urban and emergency planning in the area.

  9. Model-based machine learning.

    PubMed

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.
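
    As a toy illustration of the model-based idea (specify a probabilistic model, then let generic inference produce the answers), the sketch below performs exact Bayesian inference in a conjugate Beta-Bernoulli model in plain Python. The prior and the data are invented for the example; a probabilistic programming language such as Infer.NET would express a model like this declaratively and generate the inference code automatically.

      # Minimal model-based example: Bayesian inference for a Bernoulli rate.
      # The "model" is declared as prior + likelihood; inference here is an
      # exact conjugate update rather than a compiled inference engine.

      def beta_bernoulli_posterior(successes, failures, alpha=1.0, beta=1.0):
          """Return the posterior Beta parameters after observing Bernoulli data."""
          return alpha + successes, beta + failures

      def beta_mean(alpha, beta):
          return alpha / (alpha + beta)

      # Hypothetical data: 7 successes, 3 failures, uniform Beta(1, 1) prior.
      a_post, b_post = beta_bernoulli_posterior(successes=7, failures=3)
      print("posterior:", (a_post, b_post), "posterior mean:", beta_mean(a_post, b_post))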

  10. Model-based machine learning

    PubMed Central

    Bishop, Christopher M.

    2013-01-01

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612

  11. Probabilistic solutions of nonlinear oscillators excited by combined colored and white noise excitations

    NASA Astrophysics Data System (ADS)

    Siu-Siu, Guo; Qingxuan, Shi

    2017-03-01

    In this paper, single-degree-of-freedom (SDOF) systems subjected to combined Gaussian white noise and Gaussian/non-Gaussian colored noise excitations are investigated. By expressing the colored noise excitation as a second-order filtered white noise process and introducing it as an additional state variable, the equation of motion of the SDOF system under colored noise is transformed into that of a multi-degree-of-freedom (MDOF) system under white noise excitations, described by four coupled first-order differential equations. As a consequence, the corresponding Fokker-Planck-Kolmogorov (FPK) equation governing the joint probability density function (PDF) of the state variables becomes four-dimensional (4-D), and the solution procedure and computer program become much more involved. The exponential-polynomial closure (EPC) method, widely applied to SDOF systems under white noise excitations, is developed and improved for systems under colored noise excitations and for solving the complex 4-D FPK equation. Monte Carlo simulation (MCS) is performed to check the approximate EPC solutions. Two examples associated with Gaussian and non-Gaussian colored noise excitations are considered, and the corresponding band-limited power spectral densities (PSDs) of the colored noise excitations are given separately. Numerical studies show that the developed EPC method provides relatively accurate estimates of the stationary probabilistic solutions, especially in the tail regions of the PDFs. Moreover, the mean up-crossing rate (MCR), which is important for reliability and failure analysis, is taken into account. The present work is expected to provide insight into the investigation of structures under random loading.
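
    For readers who want a feel for the MCS side of such studies, the following is a minimal Euler-Maruyama Monte Carlo sketch for a Duffing-type SDOF oscillator under Gaussian white noise, used to estimate the stationary displacement PDF. The cubic restoring force and all parameter values are illustrative assumptions, not the systems analyzed in the paper.

      import numpy as np

      # Monte Carlo (Euler-Maruyama) simulation of a Duffing-type SDOF oscillator
      # driven by Gaussian white noise; the histogram of post-transient displacement
      # samples estimates the stationary PDF that closure methods approximate.
      rng = np.random.default_rng(0)
      zeta, eps, D = 0.1, 0.5, 0.2          # damping ratio, nonlinearity, noise intensity
      dt, n_steps, n_paths = 2e-3, 150_000, 200

      x = np.zeros(n_paths)                 # displacement of each sample path
      v = np.zeros(n_paths)                 # velocity of each sample path
      samples = []
      for k in range(n_steps):
          dW = rng.normal(0.0, np.sqrt(dt), n_paths)
          a = -2.0 * zeta * v - x - eps * x**3        # damping + nonlinear restoring force
          x = x + v * dt
          v = v + a * dt + np.sqrt(2.0 * D) * dW
          if k > n_steps // 2 and k % 100 == 0:       # discard transient, thin samples
              samples.append(x.copy())

      samples = np.concatenate(samples)
      hist, edges = np.histogram(samples, bins=60, density=True)
      print("estimated stationary PDF peak:", hist.max())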

  12. Task Decomposition in Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald Laurids; Joe, Jeffrey Clark

    2014-06-01

    In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down (defined as a subset of the PSA), whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up (derived from a task analysis conducted by human factors experts). The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.

  13. A Weibull distribution accrual failure detector for cloud computing.

    PubMed

    Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are a fundamental component for building high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing.
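
    The accrual idea can be sketched in a few lines: track recent heartbeat inter-arrival times, assume they follow a Weibull distribution, and report a suspicion level based on how unlikely the current silence is. The moment-matching estimator, the fixed shape parameter and the heartbeat times below are simplifying assumptions for illustration, not the detector's actual estimation scheme.

      import math
      from collections import deque

      # Minimal accrual-style failure detector sketch assuming Weibull-distributed
      # heartbeat inter-arrival times (the paper's model; estimation is simplified here).
      class WeibullAccrualDetector:
          def __init__(self, shape=2.0, window=100):
              self.shape = shape                    # Weibull shape k (tuning parameter)
              self.gaps = deque(maxlen=window)      # recent heartbeat inter-arrival times
              self.last_arrival = None

          def heartbeat(self, now):
              if self.last_arrival is not None:
                  self.gaps.append(now - self.last_arrival)
              self.last_arrival = now

          def suspicion(self, now):
              """Accrual suspicion level: -log10 P(next heartbeat is still pending)."""
              if not self.gaps:
                  return 0.0
              mean_gap = sum(self.gaps) / len(self.gaps)
              scale = mean_gap / math.gamma(1.0 + 1.0 / self.shape)    # moment-match the scale
              elapsed = now - self.last_arrival
              survival = math.exp(-((elapsed / scale) ** self.shape))  # Weibull survival function
              return -math.log10(max(survival, 1e-300))

      detector = WeibullAccrualDetector()
      for t in [0.0, 1.0, 2.1, 3.0, 4.2]:           # hypothetical heartbeat arrival times
          detector.heartbeat(t)
      print("suspicion at t=7.5:", round(detector.suspicion(7.5), 2))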

  14. Simulating fail-stop in asynchronous distributed systems

    NASA Technical Reports Server (NTRS)

    Sabel, Laura; Marzullo, Keith

    1994-01-01

    The fail-stop failure model appears frequently in the distributed systems literature. However, in an asynchronous distributed system, the fail-stop model cannot be implemented. In particular, it is impossible to reliably detect crash failures in an asynchronous system. In this paper, we show that it is possible to specify and implement a failure model that is indistinguishable from the fail-stop model from the point of view of any process within an asynchronous system. We give necessary conditions for a failure model to be indistinguishable from the fail-stop model, and derive lower bounds on the amount of process replication needed to implement such a failure model. We present a simple one-round protocol for implementing one such failure model, which we call simulated fail-stop.

  15. On some methods for assessing earthquake predictions

    NASA Astrophysics Data System (ADS)

    Molchan, G.; Romashkova, L.; Peresan, A.

    2017-09-01

    A regional approach to the problem of assessing earthquake predictions inevitably faces a deficit of data. We point out some basic limits of assessment methods reported in the literature, considering the practical case of the performance of the CN pattern recognition method in the prediction of large Italian earthquakes. Along with the classical hypothesis testing, a new game approach, the so-called parimutuel gambling (PG) method, is examined. The PG, originally proposed for the evaluation of probabilistic earthquake forecasts, has recently been adapted for the case of 'alarm-based' CN prediction. The PG approach is a non-standard method; therefore it deserves careful examination and theoretical analysis. We show that the PG alarm-based version leads to an almost complete loss of information about predicted earthquakes (even for a large sample). As a result, any conclusions based on the alarm-based PG approach are not to be trusted. We also show that the original probabilistic PG approach does not necessarily identify the genuine forecast correctly among competing seismicity rate models, even when applied to extensive data.

  16. Synchronization Analysis of Master-Slave Probabilistic Boolean Networks.

    PubMed

    Lu, Jianquan; Zhong, Jie; Li, Lulu; Ho, Daniel W C; Cao, Jinde

    2015-08-28

    In this paper, we analyze the synchronization problem of master-slave probabilistic Boolean networks (PBNs). The master Boolean network (BN) is a deterministic BN, while the slave BN is determined by a series of possible logical functions, each with a certain probability, at each discrete time point. We first define the synchronization of master-slave PBNs with probability one and then investigate it. By resorting to a new approach called the semi-tensor product (STP), the master-slave PBNs are expressed in equivalent algebraic forms. Based on the algebraic form, some necessary and sufficient criteria are derived to guarantee synchronization with probability one. Further, we study the synchronization of master-slave PBNs in probability. Synchronization in probability implies that for any initial states, the master BN can be synchronized by the slave BN with a certain probability, while synchronization with probability one implies that the master BN can be synchronized by the slave BN with probability one. Based on the equivalent algebraic form, some efficient conditions are derived to guarantee synchronization in probability. Finally, several numerical examples are presented to show the effectiveness of the main results.
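
    The semi-tensor product itself is easy to state: for A of size m x n and B of size p x q, with t = lcm(n, p), A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}). A small numpy sketch follows; the logical encoding shown is the usual one, but the example is ours, not taken from the paper.

      import numpy as np
      from math import lcm

      # Left semi-tensor product (STP) used in the algebraic form of Boolean networks:
      # for A (m x n) and B (p x q), with t = lcm(n, p),
      # stp(A, B) = (A kron I_{t/n}) @ (B kron I_{t/p}).
      def stp(A, B):
          n, p = A.shape[1], B.shape[0]
          t = lcm(n, p)
          return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

      # Logical values are encoded as columns of the 2x2 identity, and a logical
      # function becomes a structure matrix acting by STP.
      TRUE, FALSE = np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])
      M_neg = np.array([[0.0, 1.0],
                        [1.0, 0.0]])                # structure matrix of logical negation
      print(stp(M_neg, TRUE).ravel())               # -> [0. 1.], i.e. FALSE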

  17. Automated Probabilistic Reconstruction of White-Matter Pathways in Health and Disease Using an Atlas of the Underlying Anatomy

    PubMed Central

    Yendiki, Anastasia; Panneck, Patricia; Srinivasan, Priti; Stevens, Allison; Zöllei, Lilla; Augustinack, Jean; Wang, Ruopeng; Salat, David; Ehrlich, Stefan; Behrens, Tim; Jbabdi, Saad; Gollub, Randy; Fischl, Bruce

    2011-01-01

    We have developed a method for automated probabilistic reconstruction of a set of major white-matter pathways from diffusion-weighted MR images. Our method is called TRACULA (TRActs Constrained by UnderLying Anatomy) and utilizes prior information on the anatomy of the pathways from a set of training subjects. By incorporating this prior knowledge in the reconstruction procedure, our method obviates the need for manual interaction with the tract solutions at a later stage and thus facilitates the application of tractography to large studies. In this paper we illustrate the application of the method on data from a schizophrenia study and investigate whether the inclusion of both patients and healthy subjects in the training set affects our ability to reconstruct the pathways reliably. We show that, since our method does not constrain the exact spatial location or shape of the pathways but only their trajectory relative to the surrounding anatomical structures, a set of healthy training subjects can be used to reconstruct the pathways accurately in patients as well as in controls. PMID:22016733

  18. UTOPIAN: user-driven topic modeling based on interactive nonnegative matrix factorization.

    PubMed

    Choo, Jaegul; Lee, Changhyun; Reddy, Chandan K; Park, Haesun

    2013-12-01

    Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely used methods based on probabilistic modeling have drawbacks in terms of consistency across multiple runs and empirical convergence. Furthermore, due to the complexity of its formulation and algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpora such as the InfoVis/VAST paper data set and product review data sets.
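
    As a rough sketch of the kind of factorization UTOPIAN builds on, the following applies plain Lee-Seung multiplicative updates for NMF to a toy term-document matrix. The interactive, semi-supervised constraints that distinguish UTOPIAN are not reproduced here, and the matrix is random placeholder data.

      import numpy as np

      # Basic NMF topic-model sketch: factor a term-document matrix V into
      # term-topic weights W and topic-document weights H using Lee-Seung
      # multiplicative updates for the Frobenius objective.
      rng = np.random.default_rng(1)
      V = rng.random((30, 12))                 # term-document matrix (30 terms, 12 docs)
      k = 3                                    # number of topics
      W = rng.random((30, k)) + 1e-3
      H = rng.random((k, 12)) + 1e-3

      for _ in range(500):
          H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
          W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

      print("reconstruction error:", np.linalg.norm(V - W @ H))
      print("top terms for topic 0:", np.argsort(-W[:, 0])[:5])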

  19. Synchronization Analysis of Master-Slave Probabilistic Boolean Networks

    PubMed Central

    Lu, Jianquan; Zhong, Jie; Li, Lulu; Ho, Daniel W. C.; Cao, Jinde

    2015-01-01

    In this paper, we analyze the synchronization problem of master-slave probabilistic Boolean networks (PBNs). The master Boolean network (BN) is a deterministic BN, while the slave BN is determined by a series of possible logical functions, each with a certain probability, at each discrete time point. We first define the synchronization of master-slave PBNs with probability one and then investigate it. By resorting to a new approach called the semi-tensor product (STP), the master-slave PBNs are expressed in equivalent algebraic forms. Based on the algebraic form, some necessary and sufficient criteria are derived to guarantee synchronization with probability one. Further, we study the synchronization of master-slave PBNs in probability. Synchronization in probability implies that for any initial states, the master BN can be synchronized by the slave BN with a certain probability, while synchronization with probability one implies that the master BN can be synchronized by the slave BN with probability one. Based on the equivalent algebraic form, some efficient conditions are derived to guarantee synchronization in probability. Finally, several numerical examples are presented to show the effectiveness of the main results. PMID:26315380

  20. Probabilistic models of eukaryotic evolution: time for integration

    PubMed Central

    Lartillot, Nicolas

    2015-01-01

    In spite of substantial work and recent progress, a global and fully resolved picture of the macroevolutionary history of eukaryotes is still under construction. This concerns not only the phylogenetic relations among major groups, but also the general characteristics of the underlying macroevolutionary processes, including the patterns of gene family evolution associated with endosymbioses, as well as their impact on the sequence evolutionary process. All these questions raise formidable methodological challenges, calling for a more powerful statistical paradigm. In this direction, model-based probabilistic approaches have played an increasingly important role. In particular, improved models of sequence evolution accounting for heterogeneities across sites and across lineages have led to significant, although insufficient, improvement in phylogenetic accuracy. More recently, one main trend has been to move away from simple parametric models and stepwise approaches, towards integrative models explicitly considering the intricate interplay between multiple levels of macroevolutionary processes. Such integrative models are in their infancy, and their application to the phylogeny of eukaryotes still requires substantial improvement of the underlying models, as well as additional computational developments. PMID:26323768

  1. A Probabilistic Approach to Mitigate Composition Attacks on Privacy in Non-Coordinated Environments

    PubMed Central

    Sarowar Sattar, A.H.M.; Li, Jiuyong; Liu, Jixue; Heatherly, Raymond; Malin, Bradley

    2014-01-01

    Organizations share data about individuals to drive business and comply with law and regulation. However, an adversary may expose confidential information by tracking an individual across disparate data publications using quasi-identifying attributes (e.g., age, geocode and sex) associated with the records. Various studies have shown that well-established privacy protection models (e.g., k-anonymity and its extensions) fail to protect an individual’s privacy against this “composition attack”. This type of attack can be thwarted when organizations coordinate prior to data publication, but such a practice is not always feasible. In this paper, we introduce a probabilistic model called (d, α)-linkable, which mitigates composition attack without coordination. The model ensures that d confidential values are associated with a quasi-identifying group with a likelihood of α. We realize this model through an efficient extension to k-anonymization and use extensive experiments to show our strategy significantly reduces the likelihood of a successful composition attack and can preserve more utility than alternative privacy models, such as differential privacy. PMID:25598581

  2. NESSUS (Numerical Evaluation of Stochastic Structures Under Stress)/EXPERT: Bridging the gap between artificial intelligence and FORTRAN

    NASA Technical Reports Server (NTRS)

    Fink, Pamela K.; Palmer, Karol K.

    1988-01-01

    The development of a probabilistic structural analysis methodology (PSAM) is described. In the near term, the methodology will be applied to designing critical components of the next generation space shuttle main engine. In the long term, PSAM will be applied very broadly, providing designers with a new technology for more effective design of structures whose character and performance are significantly affected by random variables. The software under development to implement the ideas developed in PSAM resembles, in many ways, conventional deterministic structural analysis code. However, several additional capabilities regarding the probabilistic analysis make the input data requirements and the resulting output even more complex. As a result, an intelligent front- and back-end to the code is being developed to assist the design engineer in providing the input data in a correct and appropriate manner. The type of knowledge that this entails is, in general, heuristically based, allowing the fairly well-understood technology of production rules to apply with little difficulty. However, the PSAM code, called NESSUS, is written in FORTRAN-77 and runs on a DEC VAX. Thus, the associated expert system, called NESSUS/EXPERT, must run on a DEC VAX as well, and integrate effectively and efficiently with the existing FORTRAN code. This paper discusses the process undergone to select a suitable tool and to identify an appropriate division between the functions that should be performed in FORTRAN and those that should be performed by production rules, and describes how integration of the conventional and AI technologies was achieved.

  3. Estimating the total incidence of kidney failure in Australia including individuals who are not treated by dialysis or transplantation.

    PubMed

    Sparke, Claire; Moon, Lynelle; Green, Frances; Mathew, Tim; Cass, Alan; Chadban, Steve; Chapman, Jeremy; Hoy, Wendy; McDonald, Stephen

    2013-03-01

    To date, incidence data for kidney failure in Australia have been available for only those who start renal replacement therapy (RRT). Information about the total incidence of kidney failure, including non-RRT-treated cases, is important to help understand the burden of kidney failure in the community and the characteristics of patients who die without receiving treatment. Data linkage study of national observational data sets. All incident treated cases recorded in the Australia and New Zealand Dialysis and Transplant Registry (ANZDATA) probabilistically linked to incident untreated kidney failure cases derived from national death registration data for 2003-2007. Age, sex, and year. Kidney failure, a combination of incident RRT or death attributed to kidney failure (without RRT). Total incidence of kidney failure (treated and untreated) and treatment rates. There were 21,370 incident cases of kidney failure in 2003-2007. The incidence rate was 20.9/100,000 population (95% CI, 18.3-24.0) and was significantly higher among older people and males (26.1/100,000 population; 95% CI, 22.5-30.0) compared with females (17.0/100,000 population; 95% CI, 14.9-19.2). There were similar numbers of treated (10,949) and untreated (10,421) cases, but treatment rates were highly influenced by age. More than 90% of cases in all age groups between 5 and 60 years were treated, but this percentage decreased sharply for older people; only 4% of cases in persons 85 years or older were treated (ORs for no treatment of 115 [95% CI, 118-204] for men ≥80 years and 400 [95% CI, 301-531] for women ≥80 years compared with women who were <50 years). Cross-sectional design, reliance on accurate coding of kidney failure in death registration data. Almost all Australians who develop kidney failure at younger than 60 years receive RRT, but treatment rates decrease substantially above that age. Copyright © 2013 National Kidney Foundation, Inc. All rights reserved.

  4. Risk assessment for furan contamination through the food chain in Belgian children.

    PubMed

    Scholl, Georges; Huybrechts, Inge; Humblet, Marie-France; Scippo, Marie-Louise; De Pauw, Edwin; Eppe, Gauthier; Saegerman, Claude

    2012-08-01

    Young, old, pregnant and immuno-compromised persons are of great concern for risk assessors, as they represent the sub-populations most at risk. The present paper focuses on risk assessment linked to furan exposure in children. Only the Belgian population was considered because the individual contamination and consumption data required for accurate risk assessment were available for Belgian children only. Two risk assessment approaches, the so-called deterministic and probabilistic approaches, were applied and their results were compared for the estimation of daily intake. A significant difference in the average Estimated Daily Intake (EDI) was found between the deterministic (419 ng kg⁻¹ body weight (bw) day⁻¹) and the probabilistic (583 ng kg⁻¹ bw day⁻¹) approaches, which results from the mathematical treatment of the null consumption and contamination data. The risk was characterised in two ways: (1) the classical approach, by comparison of the EDI to a reference dose (RfD(chronic-oral)), and (2) the more recent Margin of Exposure (MoE) approach. Both reached similar conclusions: the risk level is not of major concern, but neither is it negligible. In the first approach, only 2.7 or 6.6% of the studied population (in the deterministic and probabilistic approaches, respectively) presented an EDI above the RfD(chronic-oral). In the second approach, the percentage of children displaying a MoE above 10,000 and below 100 is 3-0% and 20-0.01% in the deterministic and probabilistic modes, respectively. In addition, children were compared to adults and significant differences between the contamination patterns were highlighted. While major contamination was linked to coffee consumption in adults (55%), no item predominantly contributed to the contamination in children; the most important were soups (19%), dairy products (17%), pasta and rice (11%), and fruit and potatoes (9% each).
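
    The difference between the two approaches can be illustrated with a toy intake calculation, EDI = (sum over food groups of consumption x concentration) / body weight, evaluated once at mean values and once by Monte Carlo sampling. All distributions and numbers below are invented for the sketch and are not the Belgian survey data.

      import numpy as np

      # Deterministic vs. probabilistic (Monte Carlo) estimate of a daily intake.
      rng = np.random.default_rng(42)
      n_children = 10_000
      bw = rng.normal(25.0, 5.0, n_children).clip(10.0, 60.0)       # body weight, kg

      # Two hypothetical food groups: (mean consumption g/day, mean concentration ng/g)
      foods = [(150.0, 40.0), (80.0, 25.0)]

      # Deterministic: plug in the means only.
      edi_det = sum(cons * conc for cons, conc in foods) / bw.mean()

      # Probabilistic: sample consumption and concentration per child, divide by body weight.
      intake = np.zeros(n_children)
      for mean_cons, mean_conc in foods:
          cons = rng.lognormal(np.log(mean_cons), 0.5, n_children)
          conc = rng.lognormal(np.log(mean_conc), 0.4, n_children)
          intake += cons * conc
      edi_prob = intake / bw

      print(f"deterministic EDI: {edi_det:.0f} ng/kg bw/day")
      print(f"probabilistic EDI: mean {edi_prob.mean():.0f}, "
            f"P95 {np.percentile(edi_prob, 95):.0f} ng/kg bw/day")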

  5. The European ASAMPSA_E project : towards guidance to model the impact of high amplitude natural hazards in the probabilistic safety assessment of nuclear power plants. Information on the project progress and needs from the geosciences.

    NASA Astrophysics Data System (ADS)

    Raimond, Emmanuel; Decker, Kurt; Guigueno, Yves; Klug, Joakim; Loeffler, Horst

    2015-04-01

    The Fukushima nuclear accident in Japan resulted from the combination of two correlated extreme external events (earthquake and tsunami). The consequences, in particular flooding, went beyond what was considered in the initial engineering design of nuclear power plants (NPPs). Such situations can in theory be identified using probabilistic safety assessment (PSA) methodology. PSA results may then lead industry (system suppliers and utilities) or safety authorities to take appropriate decisions to reinforce the defence-in-depth of the NPP against low-probability events with high-amplitude consequences. In reality, the development of such PSA remains a challenging task. Definitions of the design basis of NPPs, for example, require data on events with occurrence probabilities not higher than 10^-4 per year. Today, even lower probabilities, down to 10^-8, are expected and typically used for probabilistic safety analyses (PSA) of NPPs and the examination of so-called design extension conditions. Modelling the combinations of natural or man-made hazards that can affect an NPP, and assigning them some meaningful probability of occurrence, remains difficult. The European project ASAMPSA_E (www.asampsa.eu) gathers more than 30 organizations (industry, research, safety control) from Europe, the US and Japan and aims at identifying meaningful practices to extend the scope and the quality of the existing probabilistic safety analyses developed for nuclear power plants. It offers a framework to discuss, at a technical level, how "extended PSA" can be developed efficiently and used to verify whether the robustness of NPPs in their environment is sufficient. The paper will present the objectives of this project and some first lessons, and will introduce the type of guidance being developed. It will explain the need for expertise from the geosciences to support nuclear safety assessment in the different areas (seismotectonic, hydrological, meteorological and biological hazards, …).

  6. Probabilistic Learning in Junior High School: Investigation of Student Probabilistic Thinking Levels

    NASA Astrophysics Data System (ADS)

    Kurniasih, R.; Sujadi, I.

    2017-09-01

    This paper investigates students' levels of probabilistic thinking, that is, thinking about uncertainty in the context of probability material. The subjects were 8th-grade junior high school students. The main instrument was the researcher, supported by a probabilistic thinking skills test and interview guidelines. Data were analyzed using the triangulation method. The results showed that before instruction the students' probabilistic thinking was at the subjective and transitional levels, and that it changed after learning. Some 8th-grade students reached the numerical level, the highest level of probabilistic thinking. Students' levels of probabilistic thinking can be used as a reference for designing learning materials and strategies.

  7. Constellation Probabilistic Risk Assessment (PRA): Design Consideration for the Crew Exploration Vehicle

    NASA Technical Reports Server (NTRS)

    Prassinos, Peter G.; Stamatelatos, Michael G.; Young, Jonathan; Smith, Curtis

    2010-01-01

    Managed by NASA's Office of Safety and Mission Assurance, a pilot probabilistic risk analysis (PRA) of the NASA Crew Exploration Vehicle (CEV) was performed in early 2006. The PRA methods used follow the general guidance provided in the NASA PRA Procedures Guide for NASA Managers and Practitioners. Phased-mission-based event trees and fault trees are used to model a lunar sortie mission of the CEV, involving the following phases: launch of a cargo vessel and a crew vessel; rendezvous of these two vessels in low Earth orbit; transit to the moon; lunar surface activities; ascent from the lunar surface; and return to Earth. The analysis is based upon assumptions, preliminary system diagrams, and failure data that may involve large uncertainties or may lack formal validation. Furthermore, some of the data used were based upon expert judgment or extrapolated from similar components/systems. This paper includes a discussion of the system-level models and provides an overview of the analysis results used to identify insights into CEV risk drivers, and trade and sensitivity studies. Lastly, the PRA model was used to determine changes in risk as the system configurations or key parameters are modified.
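
    The arithmetic behind such event-tree/fault-tree models reduces, at the basic-event level, to combining probabilities through AND and OR gates under an independence assumption. A toy sketch follows; the events and numbers are invented, not CEV data.

      # Toy fault-tree arithmetic: basic-event probabilities combined through
      # AND/OR gates, assuming independent basic events.
      def and_gate(*p):
          prob = 1.0
          for x in p:
              prob *= x
          return prob

      def or_gate(*p):
          prob = 1.0
          for x in p:
              prob *= (1.0 - x)
          return 1.0 - prob

      p_engine = 1e-3            # hypothetical basic-event probabilities
      p_backup = 5e-3
      p_guidance = 2e-4

      # Loss of ascent propulsion requires both engine strings to fail;
      # loss of mission is that OR a guidance failure.
      p_propulsion = and_gate(p_engine, p_backup)
      p_loss_of_mission = or_gate(p_propulsion, p_guidance)
      print(f"P(loss of mission) = {p_loss_of_mission:.2e}")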

  8. A probabilistic mechanical model for prediction of aggregates’ size distribution effect on concrete compressive strength

    NASA Astrophysics Data System (ADS)

    Miled, Karim; Limam, Oualid; Sab, Karam

    2012-06-01

    To predict aggregates' size distribution effect on the concrete compressive strength, a probabilistic mechanical model is proposed. Within this model, a Voronoi tessellation of a set of non-overlapping and rigid spherical aggregates is used to describe the concrete microstructure. Moreover, aggregates' diameters are defined as statistical variables and their size distribution function is identified to the experimental sieve curve. Then, an inter-aggregate failure criterion is proposed to describe the compressive-shear crushing of the hardened cement paste when concrete is subjected to uniaxial compression. Using a homogenization approach based on statistical homogenization and on geometrical simplifications, an analytical formula predicting the concrete compressive strength is obtained. This formula highlights the effects of cement paste strength and aggregates' size distribution and volume fraction on the concrete compressive strength. According to the proposed model, increasing the concrete strength for the same cement paste and the same aggregates' volume fraction is obtained by decreasing both aggregates' maximum size and the percentage of coarse aggregates. Finally, the validity of the model has been discussed through a comparison with experimental results (15 concrete compressive strengths ranging between 46 and 106 MPa) taken from literature and showing a good agreement with the model predictions.

  9. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability is called MC-HARP, which efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also included.
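
    The kind of calculation MC-HARP automates can be sketched directly: sample Weibull (non-constant hazard) component lifetimes, add a simple common-mode failure, and count the trials in which the system survives the mission time. The 2-out-of-3 architecture and all parameters below are illustrative assumptions, not the HARP models.

      import numpy as np

      # Monte Carlo estimate of system unreliability at a mission time for a
      # 2-out-of-3 system with Weibull component lifetimes and a common-mode failure.
      rng = np.random.default_rng(7)
      n_trials, mission_time = 1_000_000, 1000.0    # mission time in hours
      shape, scale = 1.5, 8000.0                    # Weibull component lifetime parameters
      lambda_cm = 1e-6                              # common-mode failure rate (per hour)

      comp_life = scale * rng.weibull(shape, size=(n_trials, 3))
      cm_life = rng.exponential(1.0 / lambda_cm, size=n_trials)

      survivors = (comp_life > mission_time).sum(axis=1)
      system_ok = (survivors >= 2) & (cm_life > mission_time)
      print("estimated unreliability:", 1.0 - system_ok.mean())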

  10. Probabilistic Design of a Mars Sample Return Earth Entry Vehicle Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Dec, John A.; Mitcheltree, Robert A.

    2002-01-01

    The driving requirement for design of a Mars Sample Return mission is to assure containment of the returned samples. Designing to, and demonstrating compliance with, such a requirement requires physics based tools that establish the relationship between engineer's sizing margins and probabilities of failure. The traditional method of determining margins on ablative thermal protection systems, while conservative, provides little insight into the actual probability of an over-temperature during flight. The objective of this paper is to describe a new methodology for establishing margins on sizing the thermal protection system (TPS). Results of this Monte Carlo approach are compared with traditional methods.
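
    Stripped to its essentials, the probabilistic margin question here is the probability that an uncertain peak temperature exceeds an uncertain allowable temperature. A minimal Monte Carlo sketch, with placeholder normal distributions rather than the actual TPS sizing model:

      import numpy as np

      # Sample an uncertain predicted bondline temperature ("load") and an uncertain
      # allowable temperature ("capacity") and estimate the over-temperature probability.
      rng = np.random.default_rng(3)
      n = 2_000_000
      peak_temp = rng.normal(480.0, 40.0, n)        # predicted bondline temperature, K
      allowable = rng.normal(640.0, 25.0, n)        # allowable bondline temperature, K

      p_overtemp = np.mean(peak_temp > allowable)
      print(f"P(over-temperature) ~ {p_overtemp:.1e}")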

  11. Goal Based Testing: A Risk Informed Process

    NASA Technical Reports Server (NTRS)

    Everline, Chester; Smith, Clayton; Distefano, Sal; Goldin, Natalie

    2014-01-01

    A process for life demonstration testing is developed, which can reduce the number of resources required by conventional sampling theory while still maintaining the same degree of rigor and confidence level. This process incorporates state-of-the-art probabilistic thinking and is consistent with existing NASA guidance documentation. This view of life testing changes the paradigm of testing a system for many hours to show confidence that a system will last for the required number of years to one that focuses efforts and resources on exploring how the system can fail at end-of-life and building confidence that the failure mechanisms are understood and well mitigated.

  12. A Weibull distribution accrual failure detector for cloud computing

    PubMed Central

    Wu, Zhibo; Wu, Jin; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are a fundamental component for building high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing. PMID:28278229

  13. A methodology for estimating risks associated with landslides of contaminated soil into rivers.

    PubMed

    Göransson, Gunnel; Norrman, Jenny; Larson, Magnus; Alén, Claes; Rosén, Lars

    2014-02-15

    Urban areas adjacent to surface water are exposed to soil movements such as erosion and slope failures (landslides). A landslide is a potential mechanism for mobilisation and spreading of pollutants. This mechanism is in general not included in environmental risk assessments for contaminated sites, and the consequences associated with contamination in the soil are typically not considered in landslide risk assessments. This study suggests a methodology to estimate the environmental risks associated with landslides in contaminated sites adjacent to rivers. The methodology is probabilistic and allows for datasets with large uncertainties and the use of expert judgements, providing quantitative estimates of probabilities for defined failures. The approach is illustrated by a case study along the river Göta Älv, Sweden, where failures are defined and probabilities for those failures are estimated. Failures are defined from a pollution perspective and in terms of exceeding environmental quality standards (EQSs) and acceptable contaminant loads. Models are then suggested to estimate probabilities of these failures. A landslide analysis is carried out to assess landslide probabilities based on data from a recent landslide risk classification study along the river Göta Älv. The suggested methodology is meant to be a supplement to either landslide risk assessment (LRA) or environmental risk assessment (ERA), providing quantitative estimates of the risks associated with landslide in contaminated sites. The proposed methodology can also act as a basis for communication and discussion, thereby contributing to intersectoral management solutions. From the case study it was found that the defined failures are governed primarily by the probability of a landslide occurring. The overall probabilities for failure are low; however, if a landslide occurs the probabilities of exceeding EQS are high and the probability of having at least a 10% increase in the contamination load within one year is also high. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Too good to be true: when overwhelming evidence fails to convince.

    PubMed

    Gunn, Lachlan J; Chapeau-Blondeau, François; McDonnell, Mark D; Davis, Bruce R; Allison, Andrew; Abbott, Derek

    2016-03-01

    Is it possible for a large sequence of measurements or observations, which support a hypothesis, to counterintuitively decrease our confidence? Can unanimous support be too good to be true? The assumption of independence is often made in good faith; however, rarely is consideration given to whether a systemic failure has occurred. Taking this into account can cause certainty in a hypothesis to decrease as the evidence for it becomes apparently stronger. We perform a probabilistic Bayesian analysis of this effect with examples based on (i) archaeological evidence, (ii) weighing of legal evidence and (iii) cryptographic primality testing. In this paper, we investigate the effects of small error rates in a set of measurements or observations. We find that even with very low systemic failure rates, high confidence is surprisingly difficult to achieve; in particular, we find that certain analyses of cryptographically important numerical tests are highly optimistic, underestimating their false-negative rate by as much as a factor of 2^80.
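
    The flavour of the analysis can be reproduced with a two-hypothesis Bayesian model in which, with small probability q, a systemic failure makes every observation confirmatory regardless of the truth. The numbers below are illustrative only; with them, the posterior first rises and then falls as the number of unanimous confirmations grows.

      # Posterior belief in hypothesis H after n unanimous confirmations, when a
      # systemic failure (probability q) would produce confirmations regardless of H.
      def posterior_given_unanimous(n, prior=0.5, accuracy=0.95,
                                    false_positive=0.05, q=0.01):
          like_h = q + (1.0 - q) * accuracy ** n            # P(n confirmations | H true)
          like_not_h = q + (1.0 - q) * false_positive ** n  # P(n confirmations | H false)
          return prior * like_h / (prior * like_h + (1.0 - prior) * like_not_h)

      for n in (1, 3, 10, 30, 100):
          print(n, round(posterior_given_unanimous(n), 4))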

  15. Vibroacoustic test plan evaluation: Parameter variation study

    NASA Technical Reports Server (NTRS)

    Stahle, C. V.; Gongloef, H. R.

    1976-01-01

    Statistical decision models are shown to provide a viable method of evaluating the cost effectiveness of alternate vibroacoustic test plans and the associated test levels. The methodology developed provides a major step toward the development of a realistic tool to quantitatively tailor test programs to specific payloads. Testing is considered at the no test, component, subassembly, or system level of assembly. Component redundancy and partial loss of flight data are considered. Most and probabilistic costs are considered, and incipient failures resulting from ground tests are treated. Optimums defining both component and assembly test levels are indicated for the modified test plans considered. Modeling simplifications must be considered in interpreting the results relative to a particular payload. New parameters introduced were a no test option, flight by flight failure probabilities, and a cost to design components for higher vibration requirements. Parameters varied were the shuttle payload bay internal acoustic environment, the STS launch cost, the component retest/repair cost, and the amount of redundancy in the housekeeping section of the payload reliability model.

  16. The analysis of probability task completion; Taxonomy of probabilistic thinking-based across gender in elementary school students

    NASA Astrophysics Data System (ADS)

    Sari, Dwi Ivayana; Budayasa, I. Ketut; Juniati, Dwi

    2017-08-01

    The formulation of mathematical learning goals is now oriented not only toward cognitive products but also toward cognitive processes, one of which is probabilistic thinking. Probabilistic thinking is needed by students to make decisions, and elementary school students are required to develop it as a foundation for learning probability at higher levels. A framework of students' probabilistic thinking had previously been developed using the SOLO taxonomy, consisting of prestructural, unistructural, multistructural and relational probabilistic thinking. This study aimed to analyze probability task completion based on this taxonomy of probabilistic thinking. The subjects were two fifth-grade students, a boy and a girl, selected on the basis of a mathematical ability test for high mathematical ability. The subjects were given probability tasks on sample space, the probability of an event, and probability comparison. The data analysis consisted of categorization, reduction, interpretation and conclusion, and the credibility of the data was established by time triangulation. The results showed that the boy's probabilistic thinking in completing the probability tasks was at the multistructural level, while the girl's was at the unistructural level, indicating that the boy's level of probabilistic thinking was higher than the girl's. These results could help curriculum developers formulate probability learning goals for elementary school students; indeed, teachers could teach probability with regard to gender differences.

  17. Effective technologies for noninvasive remote monitoring in heart failure.

    PubMed

    Conway, Aaron; Inglis, Sally C; Clark, Robyn A

    2014-06-01

    Trials of new technologies to remotely monitor for signs and symptoms of worsening heart failure are continually emerging. The extent to which technological differences impact the effectiveness of noninvasive remote monitoring for heart failure management is unknown. This study examined the effect of specific technology used for noninvasive remote monitoring of people with heart failure on all-cause mortality and heart failure-related hospitalizations. A subanalysis of a large systematic review and meta-analysis was conducted. Studies were stratified according to the specific type of technology used, and separate meta-analyses were performed. Four different types of noninvasive remote monitoring technologies were identified, including structured telephone calls, videophone, interactive voice response devices, and telemonitoring. Only structured telephone calls and telemonitoring were effective in reducing the risk of all-cause mortality (relative risk [RR]=0.87; 95% confidence interval [CI], 0.75-1.01; p=0.06; and RR=0.62; 95% CI, 0.50-0.77; p<0.0001, respectively) and heart failure-related hospitalizations (RR=0.77; 95% CI, 0.68-0.87; p<0.001; and RR=0.75; 95% CI, 0.63-0.91; p=0.003, respectively). More research data are required for videophone and interactive voice response technologies. This subanalysis identified that only two of the four specific technologies used for noninvasive remote monitoring in heart failure improved outcomes. When results of studies that involved these disparate technologies were combined in previous meta-analyses, significant improvements in outcomes were identified. As such, this study has highlighted implications for future meta-analyses of randomized controlled trials focused on evaluating the effectiveness of remote monitoring in heart failure.

  18. A Probabilistic Performance Assessment Study of Potential Low-Level Radioactive Waste Disposal Sites in Taiwan

    NASA Astrophysics Data System (ADS)

    Knowlton, R. G.; Arnold, B. W.; Mattie, P. D.; Kuo, M.; Tien, N.

    2006-12-01

    For several years now, Taiwan has been engaged in a process to select a low-level radioactive waste (LLW) disposal site. Taiwan is generating LLW from operational and decommissioning wastes associated with nuclear power reactors, as well as research, industrial, and medical radioactive wastes. The preliminary selection process has narrowed the search to four potential candidate sites. These sites are to be evaluated in a performance assessment analysis to determine the likelihood of meeting the regulatory criteria for disposal. Sandia National Laboratories and Taiwan's Institute of Nuclear Energy Research have been working together to develop the necessary performance assessment methodology and associated computer models to perform these analyses. The methodology utilizes both deterministic (e.g., single run) and probabilistic (e.g., multiple statistical realizations) analyses to achieve the goals. The probabilistic approach provides a means of quantitatively evaluating uncertainty in the model predictions and a more robust basis for performing sensitivity analyses to better understand what is driving the dose predictions from the models. Two types of disposal configurations are under consideration: a shallow land burial concept and a cavern disposal concept. The shallow land burial option includes a protective cover to limit infiltration potential to the waste. Both conceptual designs call for the disposal of 55 gallon waste drums within concrete lined trenches or tunnels, and backfilled with grout. Waste emplaced in the drums may be solidified. Both types of sites are underlain or placed within saturated fractured bedrock material. These factors have influenced the conceptual model development of each site, as well as the selection of the models to employ for the performance assessment analyses. Several existing codes were integrated in order to facilitate a comprehensive performance assessment methodology to evaluate the potential disposal sites. First, a need existed to simulate the failure processes of the waste containers, with subsequent leaching of the waste form to the underlying host rock. The Breach, Leach, and Transport Multiple Species (BLT-MS) code was selected to meet these needs. BLT-MS also has a 2-D finite-element advective-dispersive transport module, with radionuclide in-growth and decay. BLT-MS does not solve the groundwater flow equation, but instead requires the input of Darcy flow velocity terms. These terms were abstracted from a groundwater flow model using the FEHM code. For the shallow land burial site, the HELP code was also used to evaluate the performance of the protective cover. The GoldSim code was used for two purposes: quantifying uncertainties in the predictions, and providing a platform to evaluate an alternative conceptual model involving matrix-diffusion transport. Results of the preliminary performance assessment analyses using examples to illustrate the computational framework will be presented. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under Contract DE AC04 94AL85000.

  19. Reliability of objects in aerospace technologies and beyond: Holistic risk management approach

    NASA Astrophysics Data System (ADS)

    Shai, Yair; Ingman, D.; Suhir, E.

    A “high-level”, deductive-reasoning-based (“holistic”) approach is aimed at the direct analysis of the behavior of a system as a whole, rather than at an attempt to understand the system's behavior by first conducting a “low-level”, inductive-reasoning-based analysis of the behavior and contributions of the system's elements. The holistic view of treatment is widely accepted in medical practice: the “holistic health” concept upholds that all aspects of people's needs (psychological, physical or social) should be seen as a whole, and that disease is caused by the combined effect of physical, emotional, spiritual, social and environmental imbalances. Holistic reasoning is applied in our analysis to model the behavior of engineering products (“species”) subjected to various economic, marketing, and reliability “health” factors. Vehicular products (cars, aircraft, boats, etc.), for example, might still be robust enough, but could be out of date or functionally obsolete, or their further use might be viewed as unjustifiably expensive. High-level performance functions (HLPFs) are the essential feature of the approach. HLPFs are, in effect, “signatures” of the “species” of interest; they describe, in a holistic and certainly probabilistic way, numerous complex multi-dependable relations among the representatives of the “species” under consideration. Numerous interrelated “stresses”, both actual (“physical”) and nonphysical, which affect the probabilistic predictions, are inherently taken into account by the HLPFs. There is no need, and it might even be counterproductive, to conduct tedious, time- and labor-consuming experimentation and to invest significant time and resources to accumulate “representative statistics” to predict the governing probabilistic characteristics of the system behavior, such as the life expectancy of a particular type of product. “Species” of military aircraft, commercial aircraft and private cars have been chosen in our analysis as illustrations of the fruitfulness of the holistic approach. The obtained data show that both commercial “species” exhibit similar “survival dynamics” compared with those of the military aircraft species: lifetime distributions were found to be Weibull distributions for all “species”; however, for commercial vehicles the shape parameters were a little higher than 2 and the scale parameters were 19.8 years (aircraft) and 21.7 years (cars), whereas for military aircraft the shape parameters were much higher and the mean time to failure much longer. The difference between the lifetime characteristics of the “species” can be attributed to differences in the social, operational, economic and safety-and-reliability requirements and constraints. The obtained information can be used to make tentative predictions of the most likely trends in the given field of vehicular technology. The following major conclusions can be drawn from our analysis: 1) The suggested concept based on the use of HLPFs reflects the current state and the general perceptions in the given field of engineering, including aerospace technologies, and allows all the inherent and induced factors to be taken into account: any type of failure, usage profiles, economic factors, environmental conditions, etc. The concept requires only very general input data for the entire population; there is no need for the less available information about individual articles. 2) Failure modes are not restricted to physical failures and include economic, cultural or social effects. All possible causes, which might lead to making a decision to terminate the use of a particular type

  20. Delivering heart failure disease management in 3 tertiary care centers: key clinical components and venues of care.

    PubMed

    Shah, Monica R; Whellan, David J; Peterson, Eric D; Nohria, Anju; Hasselblad, Vic; Xue, Zhenyi; Bowers, Margaret T; O'Connor, Christopher M; Califf, Robert M; Stevenson, Lynne W

    2008-04-01

    Little data exist to assist to help those organizing and managing heart failure (HF) disease management (DM) programs. We aimed to describe the intensity of outpatient HF care (clinic visits and telephone calls) and medical and nonpharmacological interventions in the outpatient setting. This was a prospective substudy of 130 patients enrolled in STARBRITE in HFDM programs at 3 centers. Follow-up occurred 10, 30, 60, 90, and 120 days after discharge. The number of clinic visits and calls made by HF cardiologists, nurse practitioners, and nurses were prospectively tracked. The results were reported as medians and interquartile ranges. There were a total of 581 calls with 4 (2, 6) per patient and 467 clinic visits with 3 (2, 5) per patient. Time spent per patient was 8.9 (6, 10.6) minutes per call and 23.8 (20, 28.3) minutes per clinic visit. Nurses and nurse practitioners spent 113 hours delivering care on the phone, and physicians and nurse practitioners spent 187.6 hours in clinic. Issues addressed during calls included HF education (341 times [52.6%]) and fluid overload (87 times [41.8%]). Medical interventions included adjustments to loop diuretics (calls 101 times, clinic 156 times); beta-blockers (calls 18 times, clinic 126 times); vasodilators (calls 8 times, clinic 55 times). More than a third of clinician time was spent on calls, during which >50% of patient contacts and HF education and >39% of diuretic adjustments occurred. Administrators and public and private insurers need to recognize the amount of medical care delivered over the telephone and should consider reimbursement for these activities.

  1. 77 FR 55174 - Medical Waivers for Merchant Mariner Credential Applicants With Anti-Tachycardia Devices or...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-07

    ... failure in the past two years? (8) Does the mariner's record contain any evidence of significant... FURTHER INFORMATION CONTACT: If you have questions about this notice, call or email Lieutenant Ashley Holm... [email protected] . If you have questions on viewing material in the docket, call Renee V. Wright, Program...

  2. The Crisis among Contemporary American Adolescents: A Call for the Integration of Research, Policies, and Programs.

    ERIC Educational Resources Information Center

    Lerner, Richard M.; And Others

    1994-01-01

    Points out the growing crisis among American adolescents, with approximately half of adolescents at moderate or greater risk for engaging in unsafe sexual behaviors, teenage pregnancy, and teenage child-bearing; drug and alcohol use and abuse; school underachievement, failure, and dropout; and delinquency and crime. Calls for increased research on…

  3. Probabilistic Open Set Recognition

    NASA Astrophysics Data System (ADS)

    Jain, Lalit Prithviraj

    Real-world tasks in computer vision, pattern recognition and machine learning often touch upon the open set recognition problem: multi-class recognition with incomplete knowledge of the world and many unknown inputs. An obvious way to approach such problems is to develop a recognition system that thresholds probabilities to reject unknown classes. Traditional rejection techniques are not about the unknown; they are about the uncertain boundary and rejection around that boundary. Thus traditional techniques only represent the "known unknowns". However, a proper open set recognition algorithm is needed to reduce the risk from the "unknown unknowns". This dissertation examines this concept and finds existing probabilistic multi-class recognition approaches are ineffective for true open set recognition. We hypothesize the cause is due to weak ad hoc assumptions combined with closed-world assumptions made by existing calibration techniques. Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under this assumption of incomplete class knowledge. For this, we formulate the problem as one of modeling positive training data by invoking statistical extreme value theory (EVT) near the decision boundary of positive data with respect to negative data. We provide a new algorithm called the PI-SVM for estimating the unnormalized posterior probability of class inclusion. This dissertation also introduces a new open set recognition model called Compact Abating Probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms. Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical EVT for score calibration with one-class and binary support vector machines. Building from the success of statistical EVT-based recognition methods such as PI-SVM and W-SVM on the open set problem, we present a new general supervised learning algorithm for multi-class classification and multi-class open set recognition called the Extreme Value Local Basis (EVLB). The design of this algorithm is motivated by the observation that extrema from known negative class distributions are the closest negative points to any positive sample during training, and thus should be used to define the parameters of a probabilistic decision model. In the EVLB, the kernel distribution for each positive training sample is estimated via an EVT distribution fit over the distances to the separating hyperplane between the positive training sample and the closest negative samples, with a subset of the overall positive training data retained to form a probabilistic decision boundary. Using this subset as a frame of reference, the probability of a sample at test time decreases as it moves away from the positive class. Possessing this property, the EVLB is well suited to open set recognition problems where samples from unknown or novel classes are encountered at test time. Our experimental evaluation shows that the EVLB provides a substantial improvement in scalability compared to standard radial basis function kernel machines, as well as PI-SVM and W-SVM, with improved accuracy in many cases.
We evaluate our algorithm on open set variations of the standard visual learning benchmarks, as well as with an open subset of classes from Caltech 256 and ImageNet. Our experiments show that PI-SVM, W-SVM and EVLB provide significant advances over the previous state-of-the-art solutions for the same tasks.
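
    A minimal sketch of the EVT calibration idea behind PI-SVM and W-SVM, not the dissertation's implementation: a Weibull distribution is fitted to the tail of positive-class decision scores nearest the boundary, and inputs whose estimated probability of inclusion is low are rejected as unknown. The scores, tail size, and 0.5 threshold below are illustrative assumptions (Python, using SciPy).

      import numpy as np
      from scipy.stats import weibull_min

      rng = np.random.default_rng(0)

      # Decision scores of positive training samples (illustrative data).
      pos_scores = rng.normal(loc=2.0, scale=0.5, size=500)

      # Fit a Weibull distribution to the lower tail of the positive scores,
      # i.e. the extrema closest to the decision boundary.
      tail = np.sort(pos_scores)[:50]
      shape, loc, scale = weibull_min.fit(tail)

      def p_inclusion(score):
          # Estimated (unnormalized) probability that a score belongs to the known class.
          return weibull_min.cdf(score, shape, loc=loc, scale=scale)

      # Reject inputs whose probability of inclusion falls below a chosen threshold.
      for s in (2.2, 1.0, -0.5):
          label = "known" if p_inclusion(s) >= 0.5 else "rejected as unknown"
          print(f"score={s:5.2f}  P(inclusion)={p_inclusion(s):.3f}  -> {label}")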

  4. Don’t get (sun) burned: exposing exterior wood to the weather prior to painting contributes to finish failure

    Treesearch

    R. Sam Williams

    2005-01-01

    Contrary to what might be called popular myth, research shows that allowing exterior wood surfaces to weather before applying paint does not help the cause of long-term coating performance. Instead, weathering prior to painting has been shown to contribute significantly to premature failure of the finish due to loss of adhesion.

  5. Two sides of one coin: massive hepatic necrosis and progenitor cell-mediated regeneration in acute liver failure

    PubMed Central

    Weng, Hong-Lei; Cai, Xiaobo; Yuan, Xiaodong; Liebe, Roman; Dooley, Steven; Li, Hai; Wang, Tai-Ling

    2015-01-01

    Massive hepatic necrosis is a key event underlying acute liver failure, a serious clinical syndrome with high mortality. Massive hepatic necrosis in acute liver failure has unique pathophysiological characteristics including extremely rapid parenchymal cell death and removal. On the other hand, massive necrosis rapidly induces the activation of liver progenitor cells, the so-called “second pathway of liver regeneration.” The final clinical outcome of acute liver failure depends on whether liver progenitor cell-mediated regeneration can efficiently restore parenchymal mass and function within a short time. This review summarizes the current knowledge regarding massive hepatic necrosis and liver progenitor cell-mediated regeneration in patients with acute liver failure, the two sides of one coin. PMID:26136687

  6. Two sides of one coin: massive hepatic necrosis and progenitor cell-mediated regeneration in acute liver failure.

    PubMed

    Weng, Hong-Lei; Cai, Xiaobo; Yuan, Xiaodong; Liebe, Roman; Dooley, Steven; Li, Hai; Wang, Tai-Ling

    2015-01-01

    Massive hepatic necrosis is a key event underlying acute liver failure, a serious clinical syndrome with high mortality. Massive hepatic necrosis in acute liver failure has unique pathophysiological characteristics including extremely rapid parenchymal cell death and removal. On the other hand, massive necrosis rapidly induces the activation of liver progenitor cells, the so-called "second pathway of liver regeneration." The final clinical outcome of acute liver failure depends on whether liver progenitor cell-mediated regeneration can efficiently restore parenchymal mass and function within a short time. This review summarizes the current knowledge regarding massive hepatic necrosis and liver progenitor cell-mediated regeneration in patients with acute liver failure, the two sides of one coin.

  7. On the Accuracy of Probabilistic Buckling Load Prediction

    NASA Technical Reports Server (NTRS)

    Arbocz, Johann; Starnes, James H.; Nemeth, Michael P.

    2001-01-01

    The buckling strength of thin-walled stiffened or unstiffened, metallic or composite shells is of major concern in aeronautical and space applications. The difficulty of predicting the behavior of axially compressed thin-walled cylindrical shells continues to worry design engineers as we enter the third millennium. Thanks to extensive research programs in the late sixties and early seventies and the contributions of many eminent scientists, it is known that buckling strength calculations are affected by the uncertainties in the definition of the parameters of the problem, such as the definition of loads, material properties, geometric variables, edge support conditions, and the accuracy of the engineering models and analysis tools used in the design phase. The NASA design criteria monographs from the late sixties account for these design uncertainties by the use of a lump sum safety factor. This so-called 'empirical knockdown factor gamma' usually results in overly conservative designs. Recently, new reliability-based probabilistic design procedures for buckling-critical imperfect shells have been proposed. They essentially consist of a stochastic approach that introduces an improved 'scientific knockdown factor lambda(sub a)' that is not as conservative as the traditional empirical one. In order to incorporate probabilistic methods into a High Fidelity Analysis Approach, one must be able to assess the accuracy of the various steps that must be executed to complete a reliability calculation. In the present paper the effect of the size of the experimental input sample on the predicted value of the scientific knockdown factor lambda(sub a) calculated by the First-Order, Second-Moment Method is investigated.
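
    A minimal numerical sketch of a first-order, second-moment style knockdown calculation: from a sample of normalized experimental buckling loads, estimate the mean and scatter and derive a load level exceeded with a target reliability. The sample values, normality assumption, and target reliability are illustrative, not data from the paper.

      import numpy as np
      from scipy.stats import norm

      # Normalized experimental buckling loads (measured / classical prediction);
      # illustrative sample only.
      rho = np.array([0.62, 0.71, 0.68, 0.75, 0.66, 0.70, 0.64, 0.73, 0.69, 0.67])

      mu, sigma = rho.mean(), rho.std(ddof=1)

      # First-order, second-moment style knockdown: the normalized load that a
      # shell exceeds with the target reliability, assuming normal scatter.
      target_reliability = 0.99
      k = norm.ppf(target_reliability)      # one-sided reliability index
      lambda_a = mu - k * sigma             # "scientific" knockdown factor

      print(f"n = {len(rho)}, mean = {mu:.3f}, std = {sigma:.3f}")
      print(f"knockdown factor lambda_a at R = {target_reliability:.0%}: {lambda_a:.3f}")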

  8. Bayesian Estimation of Small Effects in Exercise and Sports Science.

    PubMed

    Mengersen, Kerrie L; Drovandi, Christopher C; Robert, Christian P; Pyne, David B; Gore, Christopher J

    2016-01-01

    The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and in a case study example provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL) and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest, were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to be able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a Placebo. The conclusions are consistent with those obtained using a 'magnitude-based inference' approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
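
    A minimal sketch of the kind of probabilistic statement the paper produces, using an approximate normal posterior in place of the paper's MCMC analysis; the effect size, standard error, and smallest worthwhile change are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(1)

      # Illustrative summary data (not the triathlete study): observed difference
      # in hemoglobin-mass change between LHTL and IHE, with its standard error.
      diff_mean, diff_se = 2.4, 1.1      # percent
      swc = 1.0                          # smallest worthwhile change, percent

      # With a vague prior, the posterior for the true difference is approximately
      # Normal(diff_mean, diff_se); sample it and report probabilities of a
      # substantial or trivial effect, mirroring a magnitude-based statement.
      posterior = rng.normal(diff_mean, diff_se, size=200_000)
      print("P(true difference > SWC)  =", np.mean(posterior > swc).round(3))
      print("P(trivial, |diff| <= SWC) =", np.mean(np.abs(posterior) <= swc).round(3))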

  9. Evaluation of a National Call Center and a Local Alerts System for Detection of New Cases of Ebola Virus Disease - Guinea, 2014-2015.

    PubMed

    Lee, Christopher T; Bulterys, Marc; Martel, Lise D; Dahl, Benjamin A

    2016-03-11

    The epidemic of Ebola virus disease (Ebola) in West Africa began in Guinea in late 2013 (1), and on August 8, 2014, the World Health Organization (WHO) declared the epidemic a Public Health Emergency of International Concern (2). Guinea was declared Ebola-free on December 29, 2015, and is under a 90-day period of enhanced surveillance, following 3,351 confirmed and 453 probable cases of Ebola and 2,536 deaths (3). Passive surveillance for Ebola in Guinea has been conducted principally through the use of a telephone alert system. Community members and health facilities report deaths and suspected Ebola cases to local alert numbers operated by prefecture health departments or to a national toll-free call center. The national call center additionally functions as a source of public health information by responding to questions from the public about Ebola. To evaluate the sensitivity of the two systems and compare the sensitivity of the national call center with the local alerts system, the CDC country team performed probabilistic record linkage of the combined prefecture alerts database, as well as the national call center database, with the national viral hemorrhagic fever (VHF) database; the VHF database contains records of all known confirmed Ebola cases. Among 17,309 alert calls analyzed from the national call center, 71 were linked to the 1,838 confirmed Ebola cases in the VHF database, yielding a sensitivity of 3.9%. The sensitivity of the national call center was highest in the capital city of Conakry (11.4%) and lower in other prefectures. In comparison, the local alerts system had a sensitivity of 51.1%. Local public health infrastructure plays an important role in surveillance in an epidemic setting.
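
    The sensitivity figures reported above follow directly from the record linkage counts; a small worked check (the local alerts figure is simply restated from the abstract):

      # Sensitivity = confirmed Ebola cases captured by the system / all confirmed cases.
      linked_national = 71
      confirmed_cases = 1838

      sensitivity_national = linked_national / confirmed_cases
      print(f"national call center sensitivity: {sensitivity_national:.1%}")  # ~3.9%
      print(f"local alerts sensitivity (reported): {0.511:.1%}")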

  10. Efficient Probabilistic Diagnostics for Electrical Power Systems

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole J.; Chavira, Mark; Cascio, Keith; Poll, Scott; Darwiche, Adnan; Uckun, Serdar

    2008-01-01

    We consider in this work the probabilistic approach to model-based diagnosis when applied to electrical power systems (EPSs). Our probabilistic approach is formally well-founded, as it is based on Bayesian networks and arithmetic circuits. We investigate the diagnostic task known as fault isolation, and pay special attention to meeting two of the main challenges, model development and real-time reasoning, that are often associated with real-world application of model-based diagnosis technologies. To address the challenge of model development, we develop a systematic approach to representing electrical power systems as Bayesian networks, supported by an easy-to-use specification language. To address the real-time reasoning challenge, we compile Bayesian networks into arithmetic circuits. Arithmetic circuit evaluation supports real-time diagnosis by being predictable and fast. In essence, we introduce a high-level EPS specification language from which Bayesian networks that can diagnose multiple simultaneous failures are auto-generated, and we illustrate the feasibility of using arithmetic circuits, compiled from Bayesian networks, for real-time diagnosis on real-world EPSs of interest to NASA. The experimental system is a real-world EPS, namely the Advanced Diagnostic and Prognostic Testbed (ADAPT) located at the NASA Ames Research Center. In experiments with the ADAPT Bayesian network, which currently contains 503 discrete nodes and 579 edges, we find high diagnostic accuracy in scenarios where one to three faults, both in components and sensors, were inserted. The time taken to compute the most probable explanation using arithmetic circuits has a small mean of 0.2625 milliseconds and standard deviation of 0.2028 milliseconds. In experiments with data from ADAPT we also show that arithmetic circuit evaluation substantially outperforms joint tree propagation and variable elimination, two alternative algorithms for diagnosis using Bayesian network inference.
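
    A toy sketch of probabilistic fault isolation on a two-component system by brute-force enumeration; an arithmetic circuit compiled from the Bayesian network computes the same posterior far more efficiently. The component names, failure priors, and sensor model are assumptions, not part of the ADAPT model.

      from itertools import product

      # Two components (0 = healthy, 1 = faulty) feed one sensor that tends to
      # read "low" when either component is faulty.
      prior_fault = {"battery": 0.01, "relay": 0.02}     # assumed failure priors

      def p_sensor_low(battery_faulty, relay_faulty):
          return 0.95 if (battery_faulty or relay_faulty) else 0.05

      # Posterior over component states given the observation "sensor reads low".
      joint = {}
      for b, r in product([0, 1], repeat=2):
          p = ((prior_fault["battery"] if b else 1 - prior_fault["battery"])
               * (prior_fault["relay"] if r else 1 - prior_fault["relay"])
               * p_sensor_low(b, r))
          joint[(b, r)] = p
      evidence = sum(joint.values())

      for (b, r), p in sorted(joint.items(), key=lambda kv: -kv[1]):
          print(f"battery_faulty={b} relay_faulty={r}: posterior {p / evidence:.3f}")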

  11. Evaluation of power system security and development of transmission pricing method

    NASA Astrophysics Data System (ADS)

    Kim, Hyungchul

    The electric power utility industry is presently undergoing a change towards the deregulated environment. This has resulted in unbundling of generation, transmission and distribution services. The introduction of competition into unbundled electricity services may lead system operation closer to its security boundaries, resulting in smaller operating safety margins. The competitive environment is expected to lead to lower price rates for customers and higher efficiency for power suppliers in the long run. Under this deregulated environment, security assessment and pricing of transmission services have become important issues in power systems. This dissertation provides new methods for power system security assessment and transmission pricing. In power system security assessment, the following issues are discussed: (1) the description of probabilistic methods for power system security assessment; (2) the computation time of simulation methods; and (3) on-line security assessment for operation. A probabilistic method using Monte-Carlo simulation is proposed for power system security assessment. This method takes into account dynamic and static effects corresponding to contingencies. Two different Kohonen networks, Self-Organizing Maps and Learning Vector Quantization, are employed to speed up the probabilistic method. The combination of Kohonen networks and Monte-Carlo simulation can reduce computation time in comparison with straight Monte-Carlo simulation. A technique for security assessment employing a Bayes classifier is also proposed. This method can be useful for system operators to make security decisions during on-line power system operation. This dissertation also suggests an approach for allocating transmission transaction costs based on reliability benefits in transmission services. The proposed method shows the transmission transaction cost of reliability benefits when transmission line capacities are considered. The ratio between allocation by transmission line capacity-use and allocation by reliability benefits is computed using the probability of system failure.
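
    A rough sketch of the Monte-Carlo part of such a security assessment, without the Kohonen-network speed-up: sample random line outages and load levels, apply a crude post-contingency check, and count violations. The network size, outage rates, flow redistribution rule, and limits are all invented for illustration.

      import numpy as np

      rng = np.random.default_rng(7)

      n_lines = 10
      outage_prob = 0.02                          # per-sample line outage probability
      base_flow = rng.uniform(0.4, 0.8, n_lines)  # pre-contingency loading (p.u.)
      limit = 1.0

      n_samples, violations = 20_000, 0
      for _ in range(n_samples):
          out = rng.random(n_lines) < outage_prob
          surviving = ~out
          if surviving.sum() == 0:
              violations += 1
              continue
          # Crude redistribution: flow of lost lines shared equally by survivors.
          shifted = base_flow[surviving] + base_flow[out].sum() / surviving.sum()
          load_factor = rng.normal(1.0, 0.1)      # random system load level
          if np.any(shifted * load_factor > limit):
              violations += 1

      print(f"estimated probability of an insecure state: {violations / n_samples:.4f}")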

  12. Terminology inaccuracies in the interpretation of imaging results in detection of cervical lymph node metastases in papillary thyroid cancer

    PubMed Central

    Mulla, Mubashir; Schulte, Klaus-Martin

    2012-01-01

    Cervical lymph nodes (CLNs) are the most common site of metastases in papillary thyroid cancer (PTC). Ultrasound scan (US) is the most commonly used imaging modality in the evaluation of CLNs in PTC. Computerised tomography (CT) and 18fluorodeoxyglucose positron emission tomography (18FDG PET–CT) are used less commonly. It is widely believed that the above imaging techniques should guide the surgical approach to the patient with PTC. Methods: We performed a systematic review of imaging studies from the literature assessing their usefulness for the detection of metastatic CLNs in PTC. We evaluated the authors' interpretation of their numeric findings specifically with regard to ‘sensitivity’ and ‘negative predictive value’ (NPV) by comparing their use against standard definitions of these terms in probabilistic statistics. Results: A total of 16 studies used probabilistic terms to describe the value of US for the detection of LN metastases. Only 6 (37.5%) calculated sensitivity and NPV correctly. For CT, out of the eight studies, only 1 (12.5%) used correct terms to describe analytical results. One study looked at magnetic resonance imaging, while three assessed 18FDG PET–CT, none of which provided correct calculations for sensitivity and NPV. Conclusion: Imaging provides high specificity for the detection of cervical metastases of PTC. However, sensitivity and NPV are low. The majority of studies reporting on a high sensitivity have not used key terms according to standard definitions of probabilistic statistics. Against common opinion, there is no current evidence that failure to find LN metastases on ultrasound or cross-sectional imaging can be used to guide surgical decision making. PMID:23781308
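
    The standard probabilistic definitions the review checks against can be stated with a small worked example; the 2x2 counts below are invented, not taken from the reviewed studies.

      # Hypothetical imaging-versus-histology counts for cervical lymph node status.
      tp, fn = 40, 60    # metastatic necks detected / missed by imaging
      tn, fp = 180, 20   # node-negative necks correctly / incorrectly called positive

      sensitivity = tp / (tp + fn)   # P(test positive | disease present)
      specificity = tn / (tn + fp)   # P(test negative | disease absent)
      ppv = tp / (tp + fp)           # P(disease present | test positive)
      npv = tn / (tn + fn)           # P(disease absent | test negative)

      print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, "
            f"PPV {ppv:.2f}, NPV {npv:.2f}")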

  13. A Bayesian Framework for Analysis of Pseudo-Spatial Models of Comparable Engineered Systems with Application to Spacecraft Anomaly Prediction Based on Precedent Data

    NASA Astrophysics Data System (ADS)

    Ndu, Obibobi Kamtochukwu

    To ensure that estimates of risk and reliability inform design and resource allocation decisions in the development of complex engineering systems, early engagement in the design life cycle is necessary. An unfortunate constraint on the accuracy of such estimates at this stage of concept development is the limited amount of high fidelity design and failure information available on the actual system under development. Applying the human ability to learn from experience and augment our state of knowledge to evolve better solutions mitigates this limitation. However, the challenge lies in formalizing a methodology that takes this highly abstract, but fundamentally human, cognitive ability and extends it to the field of risk analysis while maintaining the tenets of generalization, Bayesian inference, and probabilistic risk analysis. We introduce an integrated framework for inferring the reliability, or other probabilistic measures of interest, of a new system or a conceptual variant of an existing system. Abstractly, our framework is based on learning from the performance of precedent designs and then applying the acquired knowledge, appropriately adjusted based on degree of relevance, to the inference process. This dissertation presents a method for inferring properties of the conceptual variant using a pseudo-spatial model that describes the spatial configuration of the family of systems to which the concept belongs. Through non-metric multidimensional scaling, we formulate the pseudo-spatial model based on rank-ordered subjective expert perception of design similarity between systems that elucidate the psychological space of the family. By a novel extension of Kriging methods for analysis of geospatial data to our "pseudo-space of comparable engineered systems", we develop a Bayesian inference model that allows prediction of the probabilistic measure of interest.

  14. Bayesian wavelet PCA methodology for turbomachinery damage diagnosis under uncertainty

    NASA Astrophysics Data System (ADS)

    Xu, Shengli; Jiang, Xiaomo; Huang, Jinzhi; Yang, Shuhua; Wang, Xiaofang

    2016-12-01

    Centrifugal compressors often suffer various defects, such as impeller cracking, resulting in forced outages of the entire plant. Damage diagnostics and condition monitoring of such turbomachinery systems have become an increasingly important and powerful tool to prevent potential failure in components and reduce unplanned forced outages and further maintenance costs, while improving the reliability, availability and maintainability of a turbomachinery system. This paper presents a probabilistic signal processing methodology for damage diagnostics using multiple time history data collected from different locations of a turbomachine, considering data uncertainty and multivariate correlation. The proposed methodology is based on the integration of three advanced state-of-the-art data mining techniques: discrete wavelet packet transform, Bayesian hypothesis testing, and probabilistic principal component analysis. The multiresolution wavelet analysis approach is employed to decompose a time series signal into different levels of wavelet coefficients. These coefficients represent multiple time-frequency resolutions of a signal. Bayesian hypothesis testing is then applied to each level of wavelet coefficients to remove possible imperfections. The ratio-of-posterior-odds Bayesian approach provides a direct means to assess whether there is imperfection in the decomposed coefficients, thus avoiding over-denoising. Power spectral density estimated by the Welch method is utilized to evaluate the effectiveness of the Bayesian wavelet cleansing method. Furthermore, the probabilistic principal component analysis approach is developed to reduce the dimensionality of multiple time series and to address multivariate correlation and data uncertainty for damage diagnostics. The proposed methodology and generalized framework are demonstrated with a set of sensor data collected from a real-world centrifugal compressor with impeller cracks, through both time series and contour analyses of vibration signals and principal components.
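
    A simplified sketch of the processing chain on synthetic data, with universal soft-thresholding standing in for the paper's Bayesian hypothesis test and ordinary PCA standing in for probabilistic PCA; it assumes the PyWavelets and scikit-learn packages, and the signals are invented.

      import numpy as np
      import pywt                                # PyWavelets
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(3)

      # Synthetic vibration signals from several sensor locations.
      t = np.linspace(0, 1, 1024)
      signals = np.stack([np.sin(2 * np.pi * 50 * t) + 0.3 * rng.standard_normal(t.size)
                          for _ in range(6)])

      def wavelet_denoise(x, wavelet="db4", level=4):
          # Multiresolution decomposition with simple soft-threshold cleansing.
          coeffs = pywt.wavedec(x, wavelet, level=level)
          thr = np.median(np.abs(coeffs[-1])) / 0.6745 * np.sqrt(2 * np.log(x.size))
          cleaned = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
          return pywt.waverec(cleaned, wavelet)[: x.size]

      cleaned = np.stack([wavelet_denoise(s) for s in signals])

      # Dimensionality reduction across the correlated sensor channels.
      pca = PCA(n_components=2)
      scores = pca.fit_transform(cleaned.T)      # principal component scores over time
      print("explained variance ratio:", pca.explained_variance_ratio_.round(3))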

  15. Climatological Observations for Maritime Prediction and Analysis Support Service (COMPASS)

    NASA Astrophysics Data System (ADS)

    OConnor, A.; Kirtman, B. P.; Harrison, S.; Gorman, J.

    2016-02-01

    Current US Navy forecasting systems cannot easily incorporate extended-range forecasts that can improve mission readiness and effectiveness; ensure safety; and reduce cost, labor, and resource requirements. If Navy operational planners had systems that incorporated these forecasts, they could plan missions using more reliable and longer-term weather and climate predictions. Further, using multi-model forecast ensembles instead of single forecasts would produce higher predictive performance. Extended-range multi-model forecast ensembles, such as those available in the North American Multi-Model Ensemble (NMME), are ideal for system integration because of their high skill predictions; however, even higher skill predictions can be produced if forecast model ensembles are combined correctly. While many methods for weighting models exist, the best method in a given environment requires expert knowledge of the models and combination methods. We present an innovative approach that uses machine learning to combine extended-range predictions from multi-model forecast ensembles and generate a probabilistic forecast for any region of the globe up to 12 months in advance. Our machine-learning approach uses 30 years of hindcast predictions to learn patterns of forecast model successes and failures. Each model is assigned a weight for each environmental condition, 100 km2 region, and day given any expected environmental information. These weights are then applied to the respective predictions for the region and time of interest to effectively stitch together a single, coherent probabilistic forecast. Our experimental results demonstrate the benefits of our approach to produce extended-range probabilistic forecasts for regions and time periods of interest that are superior, in terms of skill, to individual NMME forecast models and commonly weighted models. The probabilistic forecast leverages the strengths of three NMME forecast models to predict environmental conditions for an area spanning from San Diego, CA to Honolulu, HI, seven months in advance. Key findings include: weighted combinations of models are strictly better than individual models; machine-learned combinations are especially better; and forecasts produced using our approach have the highest rank probability skill score most often.
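
    A toy sketch of the combination step: weight each forecast model by its hindcast skill and merge the members into one forecast. The paper learns condition-, region-, and lead-dependent weights with machine learning; the inverse-error weighting and synthetic hindcasts below are illustrative stand-ins.

      import numpy as np

      rng = np.random.default_rng(5)

      # Synthetic stand-ins for 30 years of hindcasts from three forecast models.
      n_years, n_models = 30, 3
      truth = rng.normal(size=n_years)
      hindcasts = truth[:, None] + rng.normal(scale=[0.4, 0.7, 1.0],
                                              size=(n_years, n_models))

      # Simple weighting rule: inverse mean-squared hindcast error, normalized.
      mse = ((hindcasts - truth[:, None]) ** 2).mean(axis=0)
      weights = (1 / mse) / (1 / mse).sum()
      print("model weights:", weights.round(3))

      # Combine a new set of member forecasts, treating the weights as mixture
      # probabilities to form a single probabilistic forecast.
      new_forecasts = np.array([0.8, 1.1, 0.2])
      print("combined forecast mean:", round(float(weights @ new_forecasts), 3))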

  16. A unified method for evaluating real-time computer controllers: A case study. [aircraft control

    NASA Technical Reports Server (NTRS)

    Shin, K. G.; Krishna, C. M.; Lee, Y. H.

    1982-01-01

    A real time control system consists of a synergistic pair, that is, a controlled process and a controller computer. Performance measures for real time controller computers are defined on the basis of the nature of this synergistic pair. A case study of a typical critical controlled process is presented in the context of new performance measures that express the performance of both controlled processes and real time controllers (taken as a unit) on the basis of a single variable: controller response time. Controller response time is a function of current system state, system failure rate, electrical and/or magnetic interference, etc., and is therefore a random variable. Control overhead is expressed as a monotonically nondecreasing function of the response time and the system suffers catastrophic failure, or dynamic failure, if the response time for a control task exceeds the corresponding system hard deadline, if any. A rigorous probabilistic approach is used to estimate the performance measures. The controlled process chosen for study is an aircraft in the final stages of descent, just prior to landing. First, the performance measures for the controller are presented. Secondly, control algorithms for solving the landing problem are discussed and finally the impact of the performance measures on the problem is analyzed.
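
    A small sketch of the probabilistic core of these measures: treat controller response time as a random variable, estimate the probability that it exceeds the hard deadline (dynamic failure), and evaluate a nondecreasing overhead function of response time. The lognormal model, deadline, and overhead function are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(11)

      hard_deadline_ms = 50.0
      response_ms = rng.lognormal(mean=np.log(20.0), sigma=0.5, size=1_000_000)

      # Probability of dynamic failure: response time exceeds the hard deadline.
      p_dynamic_failure = np.mean(response_ms > hard_deadline_ms)
      print(f"P(response time > deadline) ~= {p_dynamic_failure:.4f}")

      # Control overhead as a monotonically nondecreasing function of response time.
      overhead = np.clip((response_ms / hard_deadline_ms) ** 2, 0.0, 1.0)
      print(f"mean normalized control overhead ~= {overhead.mean():.3f}")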

  17. Analysis of Loss-of-Offsite-Power Events 1997-2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Nancy Ellen; Schroeder, John Alton

    2016-07-01

    Loss of offsite power (LOOP) can have a major negative impact on a power plant’s ability to achieve and maintain safe shutdown conditions. LOOP event frequencies and times required for subsequent restoration of offsite power are important inputs to plant probabilistic risk assessments. This report presents a statistical and engineering analysis of LOOP frequencies and durations at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience during calendar years 1997 through 2015. LOOP events during critical operation that do not result in a reactor trip are not included. Frequencies and durations were determined for four event categories: plant-centered, switchyard-centered, grid-related, and weather-related. Emergency diesel generator reliability is also considered (failure to start, failure to load and run, and failure to run more than 1 hour). There is an adverse trend in LOOP durations. The previously reported adverse trend in LOOP frequency was not statistically significant for 2006-2015. Grid-related LOOPs happen predominantly in the summer. Switchyard-centered LOOPs happen predominantly in winter and spring. Plant-centered and weather-related LOOPs do not show statistically significant seasonality. The engineering analysis of LOOP data shows that human errors have been much less frequent since 1997 than in the 1986-1996 time period.
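
    A minimal sketch of how such frequencies feed a probabilistic risk assessment: a Bayesian update of a LOOP frequency with a Jeffreys prior, for which the posterior is a gamma distribution. The event count and reactor-year exposure below are assumed, not the report's data.

      from scipy.stats import gamma

      # n events observed over T reactor critical years (illustrative values).
      n_events, reactor_years = 12, 400.0

      # Jeffreys prior for a Poisson rate gives a Gamma(n + 0.5, rate = T) posterior.
      posterior = gamma(a=n_events + 0.5, scale=1.0 / reactor_years)

      print(f"mean LOOP frequency : {posterior.mean():.4f} per reactor-year")
      print(f"90% interval        : {posterior.ppf(0.05):.4f} - {posterior.ppf(0.95):.4f}")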

  18. Stress imparted by the great 2004 Sumatra earthquake shut down transforms and activated rifts up to 400 km away in the Andaman Sea

    USGS Publications Warehouse

    Sevilgen, Volkan; Stein, Ross S.; Pollitz, Fred F.

    2012-01-01

    The origin and prevalence of triggered seismicity and remote aftershocks are under debate. As a result, they have been excluded from probabilistic seismic hazard assessment and aftershock hazard notices. The 2004 M = 9.2 Sumatra earthquake altered seismicity in the Andaman backarc rift-transform system. Here we show that over a 300-km-long largely transform section of the backarc, M≥4.5 earthquakes stopped for five years, and over a 750-km-long backarc section, the rate of transform events dropped by two-thirds, while the rate of rift events increased eightfold. We compute the propagating dynamic stress wavefield and find the peak dynamic Coulomb stress is similar on the rifts and transforms. Long-period dynamic stress amplitudes, which are thought to promote dynamic failure, are higher on the transforms than on the rifts, opposite to the observations. In contrast to the dynamic stress, we calculate that the mainshock brought the transform segments approximately 0.2 bar (0.02 MPa) farther from static Coulomb failure and the rift segments approximately 0.2 bar closer to static failure, consistent with the seismic observations. This accord means that changes in seismicity rate are sufficiently predictable to be included in post-mainshock hazard evaluations.

  19. Stress imparted by the great 2004 Sumatra earthquake shut down transforms and activated rifts up to 400 km away in the Andaman Sea

    USGS Publications Warehouse

    Sevilgen, Volkan; Stein, Ross S.; Pollitz, Fred F.

    2012-01-01

    The origin and prevalence of triggered seismicity and remote aftershocks are under debate. As a result, they have been excluded from probabilistic seismic hazard assessment and aftershock hazard notices. The 2004 M = 9.2 Sumatra earthquake altered seismicity in the Andaman backarc rift-transform system. Here we show that over a 300-km-long largely transform section of the backarc, M ≥ 4.5 earthquakes stopped for five years, and over a 750-km-long backarc section, the rate of transform events dropped by two-thirds, while the rate of rift events increased eightfold. We compute the propagating dynamic stress wavefield and find the peak dynamic Coulomb stress is similar on the rifts and transforms. Long-period dynamic stress amplitudes, which are thought to promote dynamic failure, are higher on the transforms than on the rifts, opposite to the observations. In contrast to the dynamic stress, we calculate that the mainshock brought the transform segments approximately 0.2 bar (0.02 MPa) farther from static Coulomb failure and the rift segments approximately 0.2 bar closer to static failure, consistent with the seismic observations. This accord means that changes in seismicity rate are sufficiently predictable to be included in post-mainshock hazard evaluations.

  20. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundant relations (ARRs).

  1. Hypothesis Testing as an Act of Rationality

    NASA Astrophysics Data System (ADS)

    Nearing, Grey

    2017-04-01

    Statistical hypothesis testing is ad hoc in two ways. First, setting probabilistic rejection criteria is, as Neyman (1957) put it, an act of will rather than an act of rationality. Second, physical theories like conservation laws do not inherently admit probabilistic predictions, and so we must use what are called epistemic bridge principles to connect model predictions with the actual methods of hypothesis testing. In practice, these bridge principles are likelihood functions, error functions, or performance metrics. I propose that the reason we are faced with these problems is that we have historically failed to account for a fundamental component of basic logic, namely the portion of logic that explains how epistemic states evolve in the presence of empirical data. This component of Cox's (1946) calculitic logic is called information theory (Knuth, 2005), and adding information theory to our hypothetico-deductive account of science yields straightforward solutions to both of the above problems. This also yields a straightforward method for dealing with Popper's (1963) problem of verisimilitude by facilitating a quantitative approach to measuring process isomorphism. In practice, this involves data assimilation. Finally, information theory allows us to reliably bound measures of epistemic uncertainty, thereby avoiding the problem of Bayesian incoherency under misspecified priors (Grünwald, 2006). I therefore propose solutions to four of the fundamental problems inherent in hypothetico-deductive and/or Bayesian hypothesis testing. - Neyman (1957) Inductive Behavior as a Basic Concept of Philosophy of Science. - Cox (1946) Probability, Frequency and Reasonable Expectation. - Knuth (2005) Lattice Duality: The Origin of Probability and Entropy. - Grünwald (2006) Bayesian Inconsistency under Misspecification. - Popper (1963) Conjectures and Refutations: The Growth of Scientific Knowledge.

  2. PARTS: Probabilistic Alignment for RNA joinT Secondary structure prediction

    PubMed Central

    Harmanci, Arif Ozgun; Sharma, Gaurav; Mathews, David H.

    2008-01-01

    A novel method is presented for joint prediction of alignment and common secondary structures of two RNA sequences. The joint consideration of common secondary structures and alignment is accomplished by structural alignment over a search space defined by the newly introduced motif called matched helical regions. The matched helical region formulation generalizes previously employed constraints for structural alignment and thereby better accommodates the structural variability within RNA families. A probabilistic model based on pseudo free energies obtained from precomputed base pairing and alignment probabilities is utilized for scoring structural alignments. Maximum a posteriori (MAP) common secondary structures, sequence alignment and joint posterior probabilities of base pairing are obtained from the model via a dynamic programming algorithm called PARTS. The advantage of the more general structural alignment of PARTS is seen in secondary structure predictions for the RNase P family. For this family, the PARTS MAP predictions of secondary structures and alignment perform significantly better than prior methods that utilize a more restrictive structural alignment model. For the tRNA and 5S rRNA families, the richer structural alignment model of PARTS does not offer a benefit and the method therefore performs comparably with existing alternatives. For all RNA families studied, the posterior probability estimates obtained from PARTS offer an improvement over posterior probability estimates from a single sequence prediction. When considering the base pairings predicted over a threshold value of confidence, the combination of sensitivity and positive predictive value is superior for PARTS than for the single sequence prediction. PARTS source code is available for download under the GNU public license at http://rna.urmc.rochester.edu. PMID:18304945

  3. Urns and Chameleons: two metaphors for two different types of measurements

    NASA Astrophysics Data System (ADS)

    Accardi, Luigi

    2013-09-01

    Awareness of the physical possibility of models of space alternative to the Euclidean one began to emerge towards the end of the 19th century. At the end of the 20th century, a similar awareness emerged concerning the physical possibility of models of the laws of chance alternative to the classical probabilistic (Kolmogorov) model. In geometry, the mathematical construction of several non-Euclidean models of space preceded their application in physics, which came with the theory of relativity, by about a century. In physics the opposite situation took place: while the first examples of non-Kolmogorov probabilistic models emerged in quantum physics approximately one century ago, at the beginning of the 1900s, the recognition that this new mathematical formalism reflected a new mathematical model of the laws of chance had to wait until the early 1980s. In this long interval the classical and the new probabilistic models were both used in the description and interpretation of quantum phenomena, and they negatively interfered with each other because of the absence, for many decades, of a mathematical theory that clearly delimited their respective domains of application. The result of this interference was the emergence of the so-called "paradoxes of quantum theory". For several decades there have been many different attempts to solve these paradoxes, giving rise to what K. Popper baptized "the great quantum muddle": a debate which has been at the core of the philosophy of science for more than 50 years. However, these attempts have led to contradictions between the two fundamental theories of contemporary physics: quantum theory and the theory of relativity. Quantum probability identifies the reason for the emergence of non-Kolmogorov models, and therefore of the so-called paradoxes of quantum theory, in the difference between passive measurements that "read pre-existent properties" (the urn metaphor) and measurements that read "a response to an interaction" (the chameleon metaphor). The non-trivial point is that one can prove that, while the urn scheme cannot lead to empirical data outside classical probability, response-based measurements can give rise to non-classical statistics. The talk will include entirely classical examples of non-classical statistics and potential applications to economic, sociological or biomedical phenomena.

  4. PROTECTIVE VALUE OF TYPHOID VACCINATION AS SHOWN BY THE EXPERIENCE OF THE AMERICAN TROOPS IN THE WAR

    PubMed Central

    Soper, George A.

    1920-01-01

    It has recently been publicly argued that the Army reports demonstrate the failure of inoculation against typhoid fever. Major Soper distinctly contradicts this claim, states plainly how efficient it really was and explains some so-called failures. These were due to infection before vaccination, errors of haste in insufficient dosage or wrong counting, or worn immunity. PMID:18010282

  5. Proceedings, Seminar on Probabilistic Methods in Geotechnical Engineering

    NASA Astrophysics Data System (ADS)

    Hynes-Griffin, M. E.; Buege, L. L.

    1983-09-01

    Contents: Applications of Probabilistic Methods in Geotechnical Engineering; Probabilistic Seismic and Geotechnical Evaluation at a Dam Site; Probabilistic Slope Stability Methodology; Probability of Liquefaction in a 3-D Soil Deposit; Probabilistic Design of Flood Levees; Probabilistic and Statistical Methods for Determining Rock Mass Deformability Beneath Foundations: An Overview; Simple Statistical Methodology for Evaluating Rock Mechanics Exploration Data; New Developments in Statistical Techniques for Analyzing Rock Slope Stability.

  6. Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method

    NASA Technical Reports Server (NTRS)

    Kowal, Michael T.

    1997-01-01

    The development of new products depends on product designs that incorporate high levels of reliability and meet predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between the variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability-of-failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.

  7. CARES/Life Ceramics Durability Evaluation Software Used for Mars Microprobe Aeroshell

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    1998-01-01

    The CARES/Life computer program, which was developed at the NASA Lewis Research Center, predicts the probability of a monolithic ceramic component's failure as a function of time in service. The program has many features and options for materials evaluation and component design. It couples commercial finite element programs, which resolve a component's temperature and stress distribution, to reliability evaluation and fracture mechanics routines for modeling strength-limiting defects. These routines are based on calculations of the probabilistic nature of the brittle material's strength. The capability, flexibility, and uniqueness of CARES/Life have attracted many users representing a broad range of interests and have resulted in numerous awards for technological achievements and technology transfer.
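
    The probabilistic strength model underlying this kind of evaluation is the Weibull distribution for brittle failure; a minimal sketch for a uniformly stressed unit volume follows, with the Weibull modulus and characteristic strength chosen arbitrarily rather than taken from CARES/Life material files.

      import numpy as np

      m = 10.0          # Weibull modulus (assumed)
      sigma_0 = 300.0   # characteristic strength, MPa (assumed)

      def p_failure(stress_mpa):
          # Two-parameter Weibull probability of failure at a uniform stress.
          return 1.0 - np.exp(-(np.asarray(stress_mpa, dtype=float) / sigma_0) ** m)

      for s in (150, 250, 300, 350):
          print(f"stress {s:3d} MPa -> P_f = {p_failure(s):.4f}")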

  8. Probabilistic simulation of the human factor in structural reliability

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Chamis, Christos C.

    1991-01-01

    Structural failures have occasionally been attributed to human factors in engineering design, analysis, maintenance, and fabrication processes. Every facet of the engineering process is heavily governed by human factors and the degree of uncertainty associated with them. Factors such as societal, physical, professional, and psychological ones, among many others, introduce uncertainties that significantly influence the reliability of human performance. Quantifying human factors and the associated uncertainties in structural reliability requires: (1) identification of the fundamental factors that influence human performance, and (2) models to describe the interaction of these factors. An approach is being developed to quantify the uncertainties associated with human performance. This approach consists of a multifactor model in conjunction with direct Monte-Carlo simulation.

  9. Stan : A Probabilistic Programming Language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.

    Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.
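
    A minimal sketch of calling Stan from Python on a toy model, written against the pystan 2.x style interface mentioned in the abstract (later pystan versions use a different API, so treat the call signatures as assumptions):

      import pystan

      # Toy model: estimate the mean and scale of a normal sample.
      model_code = """
      data {
        int<lower=0> N;
        vector[N] y;
      }
      parameters {
        real mu;
        real<lower=0> sigma;
      }
      model {
        y ~ normal(mu, sigma);
      }
      """

      data = {"N": 5, "y": [1.2, 0.7, 1.9, 1.1, 1.4]}

      sm = pystan.StanModel(model_code=model_code)        # compile the model
      fit = sm.sampling(data=data, iter=2000, chains=4)   # NUTS sampling
      print(fit)                                          # posterior summaries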

  10. Probabilistic Finite Elements (PFEM) structural dynamics and fracture mechanics

    NASA Technical Reports Server (NTRS)

    Liu, Wing-Kam; Belytschko, Ted; Mani, A.; Besterfield, G.

    1989-01-01

    The purpose of this work is to develop computationally efficient methodologies for assessing the effects of randomness in loads, material properties, and other aspects of a problem by a finite element analysis. The resulting group of methods is called probabilistic finite elements (PFEM). The overall objective of this work is to develop methodologies whereby the lifetime of a component can be predicted, accounting for the variability in the material and geometry of the component, the loads, and other aspects of the environment; and the range of response expected in a particular scenario can be presented to the analyst in addition to the response itself. Emphasis has been placed on methods which are not statistical in character; that is, they do not involve Monte Carlo simulations. The reason for this choice of direction is that Monte Carlo simulations of complex nonlinear response require a tremendous amount of computation. The focus of efforts so far has been on nonlinear structural dynamics. However, in the continuation of this project, emphasis will be shifted to probabilistic fracture mechanics so that the effect of randomness in crack geometry and material properties can be studied interactively with the effect of random load and environment.
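
    A minimal non-Monte-Carlo sketch in the PFEM spirit: propagate the mean and variance of a random material property through a one-element finite element relation by first-order perturbation. The bar geometry, load, and coefficient of variation are illustrative.

      import numpy as np

      # One-element bar: tip displacement u = F * L / (E * A), with random E.
      F, L, A = 10e3, 1.0, 1e-4          # N, m, m^2 (assumed)
      E_mean, E_cov = 200e9, 0.10        # Pa, coefficient of variation (assumed)
      E_std = E_cov * E_mean

      u_mean = F * L / (E_mean * A)

      # First-order perturbation: var(u) = (du/dE)^2 * var(E), no sampling needed.
      du_dE = -F * L / (A * E_mean ** 2)
      u_std = abs(du_dE) * E_std

      print(f"mean displacement  : {u_mean * 1e3:.3f} mm")
      print(f"std of displacement: {u_std * 1e3:.3f} mm (c.o.v. {u_std / u_mean:.2%})")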

  11. On the Latent Variable Interpretation in Sum-Product Networks.

    PubMed

    Peharz, Robert; Gens, Robert; Pernkopf, Franz; Domingos, Pedro

    2017-10-01

    One of the central themes in Sum-Product Networks (SPNs) is the interpretation of sum nodes as marginalized latent variables (LVs). This interpretation yields additional syntactic and semantic structure, allows the application of the EM algorithm, and enables efficient MPE inference. In the literature, the LV interpretation was justified by explicitly introducing the indicator variables corresponding to the LVs' states. However, as pointed out in this paper, this approach is in conflict with the completeness condition in SPNs and does not fully specify the probabilistic model. We propose a remedy for this problem by modifying the original approach for introducing the LVs, which we call SPN augmentation. We discuss conditional independencies in augmented SPNs, formally establish the probabilistic interpretation of the sum-weights, and give an interpretation of augmented SPNs as Bayesian networks. Based on these results, we find a sound derivation of the EM algorithm for SPNs. Furthermore, the Viterbi-style algorithm for MPE proposed in the literature was never proven to be correct. We show that this is indeed a correct algorithm when applied to selective SPNs, and in particular when applied to augmented SPNs. Our theoretical results are confirmed in experiments on synthetic data and 103 real-world datasets.
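
    A tiny worked example of the sum-node-as-latent-variable reading, using a two-branch SPN over two binary variables; the leaf distributions and sum weights are arbitrary choices for illustration.

      # root = 0.6 * (P1(X1) * P2(X2)) + 0.4 * (Q1(X1) * Q2(X2))
      # The sum node can be read as marginalizing a latent variable Z with
      # P(Z=1) = 0.6 and P(Z=2) = 0.4 that selects one product branch.
      leaf = {
          ("P1", 1): 0.9, ("P1", 0): 0.1,
          ("P2", 1): 0.2, ("P2", 0): 0.8,
          ("Q1", 1): 0.3, ("Q1", 0): 0.7,
          ("Q2", 1): 0.7, ("Q2", 0): 0.3,
      }
      weights = (0.6, 0.4)

      def spn(x1, x2):
          branch1 = leaf[("P1", x1)] * leaf[("P2", x2)]
          branch2 = leaf[("Q1", x1)] * leaf[("Q2", x2)]
          return weights[0] * branch1 + weights[1] * branch2

      # Completeness and decomposability make this a valid distribution: it sums to 1.
      total = sum(spn(a, b) for a in (0, 1) for b in (0, 1))
      print("sum over all states:", round(total, 10))
      print("P(X1=1, X2=0) =", round(spn(1, 0), 4))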

  12. Stan : A Probabilistic Programming Language

    DOE PAGES

    Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.; ...

    2017-01-01

    Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.

  13. Simulability of observables in general probabilistic theories

    NASA Astrophysics Data System (ADS)

    Filippov, Sergey N.; Heinosaari, Teiko; Leppäjärvi, Leevi

    2018-06-01

    The existence of incompatibility is one of the most fundamental features of quantum theory and can be found at the core of many of the theory's distinguishing features, such as Bell inequality violations and the no-broadcasting theorem. A scheme for obtaining new observables from existing ones via classical operations, the so-called simulation of observables, has led to an extension of the notion of compatibility for measurements. We consider the simulation of observables within the operational framework of general probabilistic theories and introduce the concept of simulation irreducibility. While a simulation irreducible observable can only be simulated by itself, we show that any observable can be simulated by simulation irreducible observables, which in the quantum case correspond to extreme rank-1 positive-operator-valued measures. We also consider cases where the set of simulators is restricted in one of two ways: in terms of either the number of simulating observables or their number of outcomes. The former is seen to be closely connected to compatibility and k compatibility, whereas the latter leads to a partial characterization for dichotomic observables. In addition to the quantum case, we further demonstrate these concepts in state spaces described by regular polygons.

  14. Prospective memory failures in aviation: effects of cue salience, workload, and individual differences.

    PubMed

    Van Benthem, Kathleen D; Herdman, Chris M; Tolton, Rani G; LeFevre, Jo-Anne

    2015-04-01

    Prospective memory allows people to complete intended tasks in the future. Prospective memory failures, such as pilots forgetting to inform pattern traffic of their locations, can have fatal consequences. The present research examined the impact of system factors (memory cue salience and workload) and individual differences (pilot age, cognitive health, and expertise) on prospective memory for communication tasks in the cockpit. Pilots (N = 101) flew a Cessna 172 simulator at a non-towered aerodrome while maintaining communication with traffic and attending to flight parameters. Memory cue salience (the prominence of cues that signal an intended action) and workload were manipulated. Prospective memory was measured as radio call completion rates. Pilots' prospective memory was adversely affected by low-salience cues and high workload. An interaction of cue salience, pilots' age, and cognitive health reflected the effects of system and individual difference factors on prospective memory failures. For example, younger pilots with low levels of cognitive health completed 78% of the radio calls associated with low-salience memory cues, whereas older pilots with low cognitive health scores completed just 61% of similar radio calls. Our findings suggest that technologies designed to signal intended future tasks should target those tasks with inherently low-salience memory cues. In addition, increasing the salience of memory cues is most likely to benefit pilots with lower levels of cognitive health in high-workload conditions.

  15. Malabsorption

    MedlinePlus

    ... that of other children of similar age and gender. This is called failure to thrive. The child ... deficiency; Secretin stimulation test; Small bowel biopsy; Stool culture or culture of small intestine aspirate; Stool fat ...

  16. Quantitative method of medication system interface evaluation.

    PubMed

    Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F

    2007-01-01

    The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of the estimated failure rates provided quantitative data for the fault analysis. The authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures, so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called the Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
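
    A small sketch of how per-step failure estimates can be rolled up through a simplified fault tree of this kind; the step names and rates are hypothetical, not the study's data.

      # OR gate: the task fails if any required step fails (steps assumed independent).
      step_failure = {
          "find_patient_record": 0.02,
          "select_medication":   0.01,
          "enter_dose":          0.03,
          "confirm_order":       0.01,
      }
      p_task_ok = 1.0
      for p in step_failure.values():
          p_task_ok *= (1.0 - p)
      p_task_failure = 1.0 - p_task_ok

      # AND gate: overall system failure only if the task fails AND the user's
      # workaround also fails (reflecting the reported rarity of system failure).
      p_workaround_fails = 0.05
      p_system_failure = p_task_failure * p_workaround_fails

      print(f"P(task failure)   = {p_task_failure:.4f}")
      print(f"P(system failure) = {p_system_failure:.5f}")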

  17. Seismic hazard and risk assessment for large Romanian dams situated in the Moldavian Platform

    NASA Astrophysics Data System (ADS)

    Moldovan, Iren-Adelina; Popescu, Emilia; Otilia Placinta, Anica; Petruta Constantin, Angela; Toma Danila, Dragos; Borleanu, Felix; Emilian Toader, Victorin; Moldoveanu, Traian

    2016-04-01

    Beyond periodic technical inspections and the monitoring and surveillance of dam-related structures and infrastructure, there are additional seismic-specific requirements for dam safety. The most important is seismic risk assessment, which can be accomplished by rating dams into seismic risk classes using the approach of Bureau and Ballentine (2002) and Bureau (2003), taking into account the maximum expected peak ground motions at the dam site (values obtained using probabilistic hazard assessment approaches; Moldovan et al., 2008), the structures' vulnerability, and the downstream risk characteristics (human, economic, historic and cultural heritage, etc.) in the areas that might be flooded in the case of a dam failure. Probabilistic seismic hazard (PSH), vulnerability, and risk studies will be carried out for dams situated in the Moldavian Platform, starting from the Izvorul Muntelui Dam, down the Bistrita River, and continuing along the Siret River and its tributaries. The most vulnerable dams will be studied in detail, and flooding maps will be drawn to find the most exposed downstream localities, both for risk assessment studies and for warnings. GIS maps that clearly indicate the potentially flooded areas are sufficient for these studies, giving information on the number of inhabitants and goods that may be affected; the topography included in geospatial servers is sufficient to produce them, and no further studies are necessary for downstream risk assessment. The results will consist of local and regional seismic information, dam-specific characteristics and locations, seismic hazard maps and risk classes for all dam sites (more than 30 dams), inundation maps (for the most vulnerable dams in the region), and possibly affected localities. The final goal of the studies presented in this paper is to provide the local emergency services with warnings of a potential dam failure and ensuing flood following a large earthquake, allowing further public training for evacuation. The work is supported by the PNII/PCCA 2013 Project DARING 69/2014, financed by UEFISCDI, Romania. Bureau GJ (2003) "Dams and appurtenant facilities" Earthquake Engineering Handbook, CRS Press, WF Chen, and C Scawthorn (eds.), Boca Raton, pp. 26.1-26.47. Bureau GJ and Ballentine GD (2002) "A comprehensive seismic vulnerability and loss assessment of the State of Carolina using HAZUS. Part IV: Dam inventory and vulnerability assessment methodology", 7th National Conference on Earthquake Engineering, July 21-25, Boston, Earthquake Engineering Research Institute, Oakland, CA. Moldovan IA, Popescu E, Constantin A (2008) "Probabilistic seismic hazard assessment in Romania: application for crustal seismic active zones", Romanian Journal of Physics, Vol. 53, Nos. 3-4.

  18. Probabilistic Assessment of High-Throughput Wireless Sensor Networks

    PubMed Central

    Kim, Robin E.; Mechitov, Kirill; Sim, Sung-Han; Spencer, Billie F.; Song, Junho

    2016-01-01

    Structural health monitoring (SHM) using wireless smart sensors (WSS) has the potential to provide rich information on the state of a structure. However, because of their distributed nature, maintaining highly robust and reliable networks can be challenging. Assessing WSS network communication quality before and after finalizing a deployment is critical to achieve a successful WSS network for SHM purposes. Early studies on WSS network reliability mostly used temporal signal indicators, composed of a smaller number of packets, to assess the network reliability. However, because WSS networks for SHM purposes often require high data throughput, i.e., a larger number of packets are delivered within the communication, such an approach is not sufficient. Instead, in this study, a model that can assess, probabilistically, the long-term performance of the network is proposed. The proposed model is based on readily-available measured data sets that represent communication quality during high-throughput data transfer. Then, an empirical limit-state function is determined, which is further used to estimate the probability of network communication failure. Monte Carlo simulation is adopted in this paper and applied to a small and a full-bridge wireless network. By performing the proposed analysis in complex sensor networks, an optimized sensor topology can be achieved. PMID:27258270
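
    A rough sketch of the Monte Carlo step: sample link conditions, evaluate an empirical-style limit-state function on the resulting packet-delivery ratio, and estimate the probability of communication failure. The signal-strength model, limit-state function, and 95% threshold are assumptions, not the paper's fitted curves.

      import numpy as np

      rng = np.random.default_rng(42)

      n_samples = 100_000
      rssi = rng.normal(-80, 6, n_samples)       # received signal strength, dBm

      def delivery_ratio(rssi_dbm):
          # Logistic link-quality model: near 1 above about -85 dBm, dropping below.
          return 1.0 / (1.0 + np.exp(-(rssi_dbm + 88.0) / 2.0))

      pdr = delivery_ratio(rssi) * rng.beta(50, 1.5, n_samples)  # extra link variability
      limit_state = pdr - 0.95                   # failure when delivery ratio < 95%

      p_failure = np.mean(limit_state < 0)
      print(f"estimated probability of communication failure: {p_failure:.3f}")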

  19. HyRAM V1.0 User Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groth, Katrina M.; Zumwalt, Hannah Ruth; Clark, Andrew Jordan

    2016-03-01

    Hydrogen Risk Assessment Models (HyRAM) is a prototype software toolkit that integrates data and methods relevant to assessing the safety of hydrogen fueling and storage infrastructure. The HyRAM toolkit integrates deterministic and probabilistic models for quantifying accident scenarios, predicting physical effects, and characterizing the impact of hydrogen hazards, including thermal effects from jet fires and thermal pressure effects from deflagration. HyRAM version 1.0 incorporates generic probabilities for equipment failures for nine types of components, and probabilistic models for the impact of heat flux on humans and structures, with computationally and experimentally validated models of various aspects of gaseous hydrogen release and flame physics. This document provides an example of how to use HyRAM to conduct analysis of a fueling facility. This document will guide users through the software and how to enter and edit certain inputs that are specific to the user-defined facility. Description of the methodology and models contained in HyRAM is provided in [1]. This User’s Guide is intended to capture the main features of HyRAM version 1.0 (any HyRAM version numbered as 1.0.X.XXX). This user guide was created with HyRAM 1.0.1.798. Due to ongoing software development activities, newer versions of HyRAM may have differences from this guide.

  20. Quantitative risk analysis of oil storage facilities in seismic areas.

    PubMed

    Fabbrocino, Giovanni; Iervolino, Iunio; Orlando, Francesca; Salzano, Ernesto

    2005-08-31

    Quantitative risk analysis (QRA) of industrial facilities has to take into account multiple hazards threatening critical equipment. Nevertheless, engineering procedures able to evaluate quantitatively the effect of seismic action are not well established. Indeed, relevant industrial accidents may be triggered by loss of containment following ground shaking or other relevant natural hazards, either directly or through cascade effects ('domino effects'). The issue of integrating structural seismic risk into quantitative probabilistic seismic risk analysis (QpsRA) is addressed in this paper by a representative study case regarding an oil storage plant with a number of atmospheric steel tanks containing flammable substances. Empirical seismic fragility curves and probit functions, properly defined both for building-like and non building-like industrial components, have been crossed with outcomes of probabilistic seismic hazard analysis (PSHA) for a test site located in south Italy. Once the seismic failure probabilities have been quantified, consequence analysis has been performed for those events which may be triggered by the loss of containment following seismic action. Results are combined by means of a specific developed code in terms of local risk contour plots, i.e. the contour line for the probability of fatal injures at any point (x, y) in the analysed area. Finally, a comparison with QRA obtained by considering only process-related top events is reported for reference.
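
    A compact sketch of the hazard-fragility combination at the heart of such an analysis: integrate a component fragility curve against the annual seismic hazard to get an annual probability of loss of containment. Both curves below are generic illustrative shapes, not the site- or tank-specific curves used in the paper.

      import numpy as np
      from scipy.stats import lognorm

      pga = np.linspace(0.01, 2.0, 400)                    # peak ground acceleration, g

      # Hazard: annual rate of exceeding each PGA level (simple power-law shape).
      annual_rate_exceed = 1e-3 * (pga / 0.1) ** -2.2

      # Fragility: P(loss of containment | PGA), lognormal with assumed median 0.7 g.
      fragility = lognorm.cdf(pga, s=0.5, scale=0.7)

      # Integrate the fragility against the hazard density |d(rate)/d(PGA)|.
      hazard_density = -np.gradient(annual_rate_exceed, pga)
      dpga = pga[1] - pga[0]
      annual_p_failure = float(np.sum(fragility * hazard_density) * dpga)

      print(f"annual probability of seismic tank failure ~= {annual_p_failure:.2e}")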
