Sample records for computational failure analysis

  1. Composite Failures: A Comparison of Experimental Test Results and Computational Analysis Using XFEM

    DTIC Science & Technology

    2016-09-30

    NUWC-NPT Technical Report 12,218, 30 September 2016. Composite Failures: A Comparison of Experimental Test Results and Computational Analysis Using XFEM. ...availability of measurement techniques, experimental testing of composite materials has largely outpaced the computational modeling ability, forcing

  2. An overview of computational simulation methods for composite structures failure and life analysis

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1993-01-01

    Three parallel computational simulation methods are being developed at the LeRC Structural Mechanics Branch (SMB) for composite structures failure and life analysis: progressive fracture CODSTRAN; hierarchical methods for high-temperature composites; and probabilistic evaluation. Results to date demonstrate that these methods are effective in simulating composite structures failure/life/reliability.

  3. The Range Safety Debris Catalog Analysis in Preparation for the Pad Abort One Flight Test

    NASA Technical Reports Server (NTRS)

    Kutty, Prasad; Pratt, William

    2010-01-01

    With each flight test a Range Safety Data Package is assembled to understand the potential consequences of various failure scenarios. The debris catalog analysis considers an overpressure failure of the Abort Motor and the resulting debris field created, in three steps: (1) characterize the debris fragments generated by the failure (weight, shape, and area); (2) compute fragment ballistic coefficients; (3) compute fragment ejection velocities.
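
    The ballistic-coefficient step above is a one-line computation; the sketch below is a minimal illustration (not taken from the report, and the fragment values are purely hypothetical) of the conventional definition beta = W / (Cd * A).

    ```python
    def ballistic_coefficient(weight_lb, drag_coefficient, reference_area_ft2):
        """Fragment ballistic coefficient beta = W / (Cd * A), here in lb/ft^2."""
        return weight_lb / (drag_coefficient * reference_area_ft2)

    # Hypothetical debris fragment: a 12 lb panel section tumbling with Cd ~ 1.2.
    print(ballistic_coefficient(weight_lb=12.0, drag_coefficient=1.2, reference_area_ft2=0.8))
    ```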

  4. Application of Interface Technology in Progressive Failure Analysis of Composite Panels

    NASA Technical Reports Server (NTRS)

    Sleight, D. W.; Lotts, C. G.

    2002-01-01

    A progressive failure analysis capability using interface technology is presented. The capability has been implemented in the COMET-AR finite element analysis code developed at the NASA Langley Research Center and is demonstrated on composite panels. The composite panels are analyzed for damage initiation and propagation from initial loading to final failure using a progressive failure analysis capability that includes both geometric and material nonlinearities. Progressive failure analyses are performed on conventional models and interface technology models of the composite panels. Analytical results and the computational effort of the analyses are compared for the conventional models and interface technology models. The analytical results predicted with the interface technology models are in good correlation with the analytical results using the conventional models, while significantly reducing the computational effort.

  5. Computational Methods for Failure Analysis and Life Prediction

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Harris, Charles E. (Compiler); Housner, Jerrold M. (Compiler); Hopkins, Dale A. (Compiler)

    1993-01-01

    This conference publication contains the presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Failure Analysis and Life Prediction held at NASA Langley Research Center, 14-15 Oct. 1992. The presentations focused on damage, failure, and life prediction of polymer-matrix composite structures. They covered some of the research activities at NASA Langley, NASA Lewis, Southwest Research Institute, industry, and universities. Both airframes and propulsion systems were considered.

  6. Graphical Displays Assist In Analysis Of Failures

    NASA Technical Reports Server (NTRS)

    Pack, Ginger; Wadsworth, David; Razavipour, Reza

    1995-01-01

    Failure Environment Analysis Tool (FEAT) computer program enables people to see and better understand effects of failures in system. Uses digraph models to determine what will happen to system if set of failure events occurs and to identify possible causes of selected set of failures. Digraphs or engineering schematics used. Also used in operations to help identify causes of failures after they occur. Written in C language.
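
    The digraph idea behind FEAT can be sketched in a few lines: the effects of a postulated failure set are the nodes reachable from it, and candidate causes of an observed failure set are the nodes that reach it in the reversed graph. The Python sketch below assumes a simple adjacency-list model with hypothetical event names; it illustrates the two questions FEAT answers, not the FEAT implementation itself.

    ```python
    from collections import defaultdict, deque

    def reachable(graph, sources):
        """Return every node reachable from the given source nodes (breadth-first search)."""
        seen, queue = set(sources), deque(sources)
        while queue:
            node = queue.popleft()
            for nxt in graph.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    # Hypothetical digraph: edge A -> B means "failure A propagates to effect B".
    digraph = {
        "pump_fail": ["loss_of_coolant"],
        "valve_stuck": ["loss_of_coolant"],
        "loss_of_coolant": ["overtemp"],
        "overtemp": ["engine_shutdown"],
    }

    # Question 1: what happens to the system if this set of failure events occurs?
    effects = reachable(digraph, {"pump_fail"})

    # Question 2: what could have caused an observed failure?  Search the reversed graph.
    reverse = defaultdict(list)
    for src, dsts in digraph.items():
        for dst in dsts:
            reverse[dst].append(src)
    causes = reachable(reverse, {"engine_shutdown"})

    print(sorted(effects), sorted(causes))
    ```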

  7. Distributed collaborative probabilistic design of multi-failure structure with fluid-structure interaction using fuzzy neural network of regression

    NASA Astrophysics Data System (ADS)

    Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen

    2018-05-01

    To improve the computing efficiency and precision of probabilistic design for multi-failure structures, a distributed collaborative probabilistic design method based on a fuzzy neural network of regression (FR), called DCFRM, is proposed by integrating the distributed collaborative response surface method with a fuzzy neural network regression model. The mathematical model of DCFRM is established and the probabilistic design approach based on it is introduced. The probabilistic analysis of a turbine blisk involving multiple failure modes (deformation failure, stress failure, and strain failure) was investigated with the proposed method, considering fluid-structure interaction. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and of the overall failure mode of the turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengines. A comparison of methods shows that DCFRM improves the computing efficiency of probabilistic analysis for multi-failure structures while keeping acceptable computational precision. Moreover, the proposed method offers useful insight for reliability-based design optimization of multi-failure structures and thereby enriches the theory and methods of mechanical reliability design.

  8. Coupled Mechanical-Electrochemical-Thermal Analysis of Failure Propagation in Lithium-ion Batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chao; Santhanagopalan, Shriram; Pesaran, Ahmad

    2016-07-28

    This is a presentation given at the 12th World Congress for Computational Mechanics on coupled mechanical-electrochemical-thermal analysis of failure propagation in lithium-ion batteries for electric vehicles.

  9. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
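
    As a rough illustration of the adaptive importance sampling idea (not the paper's implementation), the sketch below recenters a Gaussian sampling density on observed failure points over a few stages and forms the usual importance-weighted estimate of the failure probability. The limit state g, the initial center, and the sample sizes are all hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    def g(x):
        # Hypothetical limit state: failure when g(x) <= 0.
        return 6.0 - x[..., 0] - x[..., 1]

    rng = np.random.default_rng(0)
    dim, n_per_stage, n_stages = 2, 2000, 5
    f = stats.multivariate_normal(mean=np.zeros(dim))   # true joint PDF of the random variables
    center = np.array([3.0, 3.0])                       # initial guess of the failure region

    for _ in range(n_stages):
        h = stats.multivariate_normal(mean=center)      # current (adaptive) sampling density
        x = h.rvs(size=n_per_stage, random_state=rng)
        fail = g(x) <= 0.0
        if fail.any():
            center = x[fail].mean(axis=0)               # shift the sampler toward the failure domain
        weights = np.where(fail, f.pdf(x) / h.pdf(x), 0.0)
        pf_estimate = weights.mean()                    # importance-weighted failure probability

    print(f"estimated failure probability ~ {pf_estimate:.2e}")
    ```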

  10. Estimation of the failure risk of a maxillary premolar with different crack depths with endodontic treatment by computer-aided design/computer-aided manufacturing ceramic restorations.

    PubMed

    Lin, Chun-Li; Chang, Yen-Hsiang; Hsieh, Shih-Kai; Chang, Wen-Jen

    2013-03-01

    This study evaluated the risk of failure for an endodontically treated premolar with different crack depths, shearing toward the pulp chamber, restored using 3 different computer-aided design/computer-aided manufacturing ceramic restoration configurations. Three 3-dimensional finite element models designed with computer-aided design/computer-aided manufacturing ceramic onlay, endocrown, and conventional crown restorations were constructed to perform simulations. The Weibull function was incorporated with finite element analysis to calculate the long-term failure probability relative to different load conditions. The results indicated that the stress values on the enamel, dentin, and luting cement for endocrown restorations were the lowest relative to the other 2 restoration methods. Weibull analysis revealed that the overall failure probabilities in a shallow-cracked premolar were 27%, 2%, and 1% for the onlay, endocrown, and conventional crown restorations, respectively, under normal occlusal conditions. The corresponding values were 70%, 10%, and 2% for the deeply cracked premolar. This numeric investigation suggests that the endocrown provides sufficient fracture resistance only in a shallow-cracked premolar with endodontic treatment. The conventional crown treatment can immobilize the premolar across different crack depths with lower failure risk.
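
    The Weibull/finite-element coupling described here typically amounts to a weakest-link integration of element stresses. The sketch below shows that calculation under a simple two-parameter volume-flaw assumption; the element stresses, volumes, and Weibull parameters are illustrative, not the study's data.

    ```python
    import numpy as np

    def weibull_failure_probability(stresses, volumes, sigma_0, m):
        """Weakest-link failure probability from element stresses (MPa) and volumes (mm^3).

        P_f = 1 - exp(-sum_i V_i * (sigma_i / sigma_0)^m), tensile stresses only.
        """
        s = np.clip(np.asarray(stresses, float), 0.0, None)
        risk = np.sum(np.asarray(volumes, float) * (s / sigma_0) ** m)
        return 1.0 - np.exp(-risk)

    # Hypothetical element data from a finite element run (values are illustrative only).
    stresses = [120.0, 95.0, 60.0, 30.0]   # maximum principal stress per element
    volumes = [0.2, 0.5, 1.0, 2.0]         # element volumes
    print(weibull_failure_probability(stresses, volumes, sigma_0=400.0, m=10.0))
    ```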

  11. Failure Bounding And Sensitivity Analysis Applied To Monte Carlo Entry, Descent, And Landing Simulations

    NASA Technical Reports Server (NTRS)

    Gaebler, John A.; Tolson, Robert H.

    2010-01-01

    In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding which aims to reduce the computational cost of assessing the failure probability. Next a variance-based sensitivity analysis was studied for the ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.
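
    The variance-based sensitivity analysis mentioned above can be illustrated with a standard pick-freeze (Sobol/Saltelli) estimator of first-order indices. The toy response function below stands in for the entry, descent, and landing simulation and is purely hypothetical.

    ```python
    import numpy as np

    def model(x):
        # Hypothetical landing-dispersion response: a nonlinear mix of three uncertain inputs.
        return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

    rng = np.random.default_rng(1)
    n, dim = 100_000, 3
    A = rng.standard_normal((n, dim))
    B = rng.standard_normal((n, dim))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))

    for i in range(dim):
        C = B.copy()
        C[:, i] = A[:, i]                     # "pick-freeze": column i taken from matrix A
        yC = model(C)
        s1 = np.mean(yA * (yC - yB)) / var_y  # first-order Sobol index estimator
        print(f"input {i}: first-order sensitivity ~ {s1:.2f}")
    ```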

  12. Failure probability analysis of optical grid

    NASA Astrophysics Data System (ADS)

    Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2008-11-01

    Optical grid, an integrated computing environment based on optical networks, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. As optical-network-based distributed computing systems are extensively applied to data processing, the application failure probability has become an important indicator of application quality and an important aspect that operators consider. This paper presents a task-based method for analyzing the application failure probability in an optical grid. The failure probability of the entire application can then be quantified, and the performance of different backup strategies in reducing the application failure probability can be compared, so that different client requirements on failure probability can be satisfied. In an optical grid, when an application modeled as a DAG (directed acyclic graph) is executed under different backup strategies, the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm can guarantee the required failure probability and improve network resource utilization, realizing a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in the optical grid.
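
    A minimal sketch of the task-based bookkeeping described above (assuming independent resource failures, which the paper may treat more generally): each task fails only if its primary resource and any assigned backup both fail, and the application fails if any task fails. All probabilities below are hypothetical.

    ```python
    def task_failure_probability(primary_p, backup_p=None):
        """Probability a task fails: its primary resource fails and (if present) its backup also fails."""
        return primary_p * backup_p if backup_p is not None else primary_p

    def application_failure_probability(tasks):
        """A DAG application fails if any of its tasks fails (independent resources assumed)."""
        p_success = 1.0
        for primary_p, backup_p in tasks:
            p_success *= 1.0 - task_failure_probability(primary_p, backup_p)
        return 1.0 - p_success

    # Hypothetical per-task resource failure probabilities, without and with backups.
    no_backup = [(0.02, None), (0.01, None), (0.03, None)]
    with_backup = [(0.02, 0.02), (0.01, None), (0.03, 0.03)]
    print(application_failure_probability(no_backup), application_failure_probability(with_backup))
    ```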

  13. Efficient 3-D finite element failure analysis of compression loaded angle-ply plates with holes

    NASA Technical Reports Server (NTRS)

    Burns, S. W.; Herakovich, C. T.; Williams, J. G.

    1987-01-01

    Finite element stress analysis and the tensor polynomial failure criterion predict that failure always initiates at the interface between layers on the hole edge for notched angle-ply laminates loaded in compression. The angular location of initial failure is a function of the fiber orientation in the laminate. The dominant stress components initiating failure are shear. It is shown that approximate symmetry can be used to reduce the computer resources required for the case of uniaxial loading.

  14. Experiences with Probabilistic Analysis Applied to Controlled Systems

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Giesy, Daniel P.

    2004-01-01

    This paper presents a semi-analytic method for computing frequency dependent means, variances, and failure probabilities for arbitrarily large-order closed-loop dynamical systems possessing a single uncertain parameter or with multiple highly correlated uncertain parameters. The approach will be shown to not suffer from the same computational challenges associated with computing failure probabilities using conventional FORM/SORM techniques. The approach is demonstrated by computing the probabilistic frequency domain performance of an optimal feed-forward disturbance rejection scheme.

  15. Orbiter subsystem hardware/software interaction analysis. Volume 8: Forward reaction control system

    NASA Technical Reports Server (NTRS)

    Becker, D. D.

    1980-01-01

    The results of the orbiter hardware/software interaction analysis for the AFT reaction control system are presented. The interaction between hardware failure modes and software are examined in order to identify associated issues and risks. All orbiter subsystems and interfacing program elements which interact with the orbiter computer flight software are analyzed. The failure modes identified in the subsystem/element failure mode and effects analysis are discussed.

  16. Mechanism of failure of the Cabrol procedure: A computational fluid dynamic analysis.

    PubMed

    Poullis, M; Pullan, M

    2015-12-01

    Sudden failure of the Cabrol graft is common and frequently fatal. We utilised computational fluid dynamic (CFD) analysis to evaluate the mechanism of failure and potentially improve the design of the Cabrol procedure. CFD analysis of the classic Cabrol procedure and a number of its variants was performed. Results from this analysis were utilised to generate further improved geometric options for the Cabrol procedure, which were also subjected to CFD analysis. All current Cabrol procedures and their variations are predicted by CFD analysis to be prone to graft thrombosis, secondary to stasis around the right coronary artery button. The right coronary artery flow characteristics were found to be the dominant reason for Cabrol graft failure. A simple modification of the Cabrol geometry is predicted to virtually eliminate any areas of blood stasis, and hence graft failure. Modification of the Cabrol graft geometry based on CFD analysis may help reduce the incidence of graft thrombosis. A C-shaped Cabrol graft with the right coronary button anastomosed to its side along its course from the aorta to the left coronary button is predicted to have the least thrombotic tendency. Clinical correlation is needed.

  17. 3D visualization of membrane failures in fuel cells

    NASA Astrophysics Data System (ADS)

    Singh, Yadvinder; Orfino, Francesco P.; Dutta, Monica; Kjeang, Erik

    2017-03-01

    Durability issues in fuel cells, due to chemical and mechanical degradation, are potential impediments in their commercialization. Hydrogen leak development across degraded fuel cell membranes is deemed a lifetime-limiting failure mode and potential safety issue that requires thorough characterization for devising effective mitigation strategies. The scope and depth of failure analysis has, however, been limited by the 2D nature of conventional imaging. In the present work, X-ray computed tomography is introduced as a novel, non-destructive technique for 3D failure analysis. Its capability to acquire true 3D images of membrane damage is demonstrated for the very first time. This approach has enabled unique and in-depth analysis resulting in novel findings regarding the membrane degradation mechanism; these are: significant, exclusive membrane fracture development independent of catalyst layers, localized thinning at crack sites, and demonstration of the critical impact of cracks on fuel cell durability. Evidence of crack initiation within the membrane is demonstrated, and a possible new failure mode different from typical mechanical crack development is identified. X-ray computed tomography is hereby established as a breakthrough approach for comprehensive 3D characterization and reliable failure analysis of fuel cell membranes, and could readily be extended to electrolyzers and flow batteries having similar structure.

  18. Analysis of stationary availability factor of two-level backbone computer networks with arbitrary topology

    NASA Astrophysics Data System (ADS)

    Rahman, P. A.

    2018-05-01

    This scientific paper deals with two-level backbone computer networks with arbitrary topology. A specialized method, offered by the author, for calculating the stationary availability factor of two-level backbone computer networks is discussed; it is based on Markov reliability models for a set of independent repairable elements with given failure and repair rates, together with methods of discrete mathematics. A specialized algorithm, offered by the author, for analyzing network connectivity while taking into account different kinds of network equipment failures is also described. Finally, the paper presents an example calculation of the stationary availability factor for a backbone computer network with a given topology.
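
    A compact sketch of the kind of calculation described: each repairable element gets a stationary availability from a two-state Markov model, and a network-level factor follows from series/parallel composition along the connectivity structure. The topology and rates below are hypothetical, and a real arbitrary-topology network needs the connectivity analysis the paper describes rather than a simple series/parallel reduction.

    ```python
    def element_availability(failure_rate, repair_rate):
        """Stationary availability of a repairable element from a two-state Markov model."""
        return repair_rate / (failure_rate + repair_rate)

    def series(avails):
        """All elements must be up (e.g., a switch and a link along one trunk)."""
        a = 1.0
        for x in avails:
            a *= x
        return a

    def parallel(avails):
        """At least one of the redundant elements must be up."""
        u = 1.0
        for x in avails:
            u *= 1.0 - x
        return 1.0 - u

    # Hypothetical backbone: two redundant trunks in parallel, each a series of a switch and a link.
    switch = element_availability(failure_rate=1e-4, repair_rate=1e-1)
    link = element_availability(failure_rate=5e-4, repair_rate=5e-2)
    trunk = series([switch, link])
    print(parallel([trunk, trunk]))
    ```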

  19. Design of high temperature ceramic components against fast fracture and time-dependent failure using cares/life

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jadaan, O.M.; Powers, L.M.; Nemeth, N.N.

    1995-08-01

    A probabilistic design methodology which predicts the fast fracture and time-dependent failure behavior of thermomechanically loaded ceramic components is discussed using the CARES/LIFE integrated design computer program. Slow crack growth (SCG) is assumed to be the mechanism responsible for delayed failure behavior. Inert strength and dynamic fatigue data obtained from testing coupon specimens (O-ring and C-ring specimens) are initially used to calculate the fast fracture and SCG material parameters as a function of temperature using the parameter estimation techniques available with the CARES/LIFE code. Finite element analysis (FEA) is used to compute the stress distributions for the tube as a function of applied pressure. Knowing the stress and temperature distributions and the fast fracture and SCG material parameters, the lifetime for a given tube can be computed. A stress-failure probability-time to failure (SPT) diagram is subsequently constructed for these tubes. Such a diagram can be used by design engineers to estimate the time to failure at a given failure probability level for a component subjected to a given thermomechanical load.
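
    A simplified sketch of the slow-crack-growth bookkeeping behind an SPT diagram, under common power-law SCG and two-parameter Weibull assumptions (this is an illustration, not the CARES/LIFE algorithm, and all parameter values are invented): the lifetime at a given applied stress follows from the SCG relation, and the failure probability at a service time follows from the inert-strength distribution.

    ```python
    import numpy as np

    def time_to_failure(applied_stress, inert_strength, B, N):
        """Slow-crack-growth lifetime relation t_f = B * sigma^-N * S_i^(N-2)."""
        return B * applied_stress ** (-N) * inert_strength ** (N - 2)

    def failure_probability(applied_stress, t, B, N, sigma_0, m):
        """P_f(t): probability the Weibull-distributed inert strength is too low to survive time t."""
        s_required = (t * applied_stress ** N / B) ** (1.0 / (N - 2))
        return 1.0 - np.exp(-((s_required / sigma_0) ** m))

    # Illustrative parameters only (MPa, hours): not CARES/LIFE material data.
    print(time_to_failure(applied_stress=100.0, inert_strength=400.0, B=200.0, N=20))
    print(failure_probability(applied_stress=100.0, t=1e4, B=200.0, N=20, sigma_0=450.0, m=12))
    ```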

  20. Investigation of progressive failure robustness and alternate load paths for damage tolerant structures

    NASA Astrophysics Data System (ADS)

    Marhadi, Kun Saptohartyadi

    Structural optimization for damage tolerance under various unforeseen damage scenarios is computationally challenging. It couples non-linear progressive failure analysis with sampling-based stochastic analysis of random damage. The goal of this research was to understand the relationship between the alternate load paths available in a structure and its damage tolerance, and to use this information to develop computationally efficient methods for designing damage tolerant structures. Progressive failure of a redundant truss structure subjected to small random variability was investigated to identify features that correlate with robustness and predictability of the structure's progressive failure. The identified features were used to develop numerical surrogate measures that permit computationally efficient deterministic optimization to achieve robustness and predictability of progressive failure. Analysis of damage tolerance on designs with robust progressive failure indicated that robustness and predictability of progressive failure do not guarantee damage tolerance. Damage tolerance requires a structure to redistribute its load to alternate load paths. In order to investigate the load distribution characteristics that lead to damage tolerance in structures, designs with varying degrees of damage tolerance were generated using brute-force stochastic optimization. A method based on principal component analysis was used to describe load distributions (alternate load paths) in the structures. Results indicate that a structure that can develop alternate paths is not necessarily damage tolerant. The alternate load paths must have a required minimum load capability. Robustness analysis of damage tolerant optimum designs indicates that designs are tailored to the specified damage. A design optimized under one damage specification can be sensitive to other damage scenarios not considered. The effectiveness of existing load path definitions and characterizations was investigated for continuum structures. A load path definition using a relative compliance change measure (the U* field) was demonstrated to be the most useful measure of load path. This measure provides quantitative information on load path trajectories and qualitative information on the effectiveness of the load path. The use of the U* description of load paths in optimizing structures for effective load paths was investigated.

  1. Progressive Damage and Failure Analysis of Composite Laminates

    NASA Astrophysics Data System (ADS)

    Joseph, Ashith P. K.

    Composite materials are widely used in various industries for making structural parts due to their higher strength-to-weight ratio, better fatigue life, corrosion resistance, and material property tailorability. To fully exploit the capability of composites, it is required to know the load-carrying capacity of the parts made of them. Unlike metals, composites are orthotropic in nature and fail in a complex manner under various loading conditions, which makes them hard to analyze. The lack of reliable and efficient failure analysis tools for composites has led industries to rely more on coupon- and component-level testing to estimate the design space. Due to the complex failure mechanisms, composite materials require a very large number of coupon-level tests to fully characterize the behavior. This makes the entire testing process very time consuming and costly. The alternative is to use virtual testing tools which can predict the complex failure mechanisms accurately, reducing the cost to the associated computational expenses and making significant savings. Some of the most desired features in a virtual testing tool are: (1) Accurate representation of failure mechanisms: the failure progression predicted by the virtual tool must match that observed in experiments, and a tool has to be assessed based on the mechanisms it can capture. (2) Computational efficiency: the greatest advantages of a virtual tool are the savings in time and money, and hence computational efficiency is one of the most needed features. (3) Applicability to a wide range of problems: structural parts are subjected to a variety of loading conditions, including static, dynamic, and fatigue conditions, and a good virtual testing tool should be able to make good predictions for all of them. The aim of this PhD thesis is to develop a computational tool which can model the progressive failure of composite laminates under different quasi-static loading conditions. The analysis tool is validated by comparing the simulations against experiments for a selected number of quasi-static loading cases.

  2. Effect of Premolar Axial Wall Height on Computer-Aided Design/Computer-Assisted Manufacture Crown Retention.

    PubMed

    Martin, Curt; Harris, Ashley; DuVall, Nicholas; Wajdowicz, Michael; Roberts, Howard Wayne

    2018-03-28

    To evaluate the effect of premolar axial wall height on the retention of adhesive, full-coverage, computer-aided design/computer-assisted manufacture (CAD/CAM) restorations. A total of 48 premolar teeth randomized into four groups (n = 12 per group) received all-ceramic CAD/CAM restorations with axial wall heights (AWH) of 3, 2, 1, and 0 mm and 16-degree total occlusal convergence (TOC). Specimens were restored with lithium disilicate material and cemented with self-adhesive resin cement. Specimens were loaded to failure after 24 hours. The 3- and 2-mm AWH specimens demonstrated significantly greater failure load. Failure analysis suggests a 2-mm minimum AWH for premolars with a TOC of 16 degrees. Adhesive technology may compensate for compromised AWH.

  3. Analysis of whisker-toughened CMC structural components using an interactive reliability model

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Palko, Joseph L.

    1992-01-01

    Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic William-Warnke failure criterion serves as theoretical basis for the reliability model presented here. The model has been implemented into a test-bed software program. This computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.

  4. Reliability/safety analysis of a fly-by-wire system

    NASA Technical Reports Server (NTRS)

    Brock, L. D.; Goddman, H. A.

    1980-01-01

    An analysis technique has been developed to estimate the reliability of a very complex, safety-critical system by constructing a diagram of the reliability equations for the total system. This diagram has many of the characteristics of a fault-tree or success-path diagram, but is much easier to construct for complex redundant systems. The diagram provides insight into system failure characteristics and identifies the most likely failure modes. A computer program aids in the construction of the diagram and the computation of reliability. Analysis of the NASA F-8 Digital Fly-by-Wire Flight Control System is used to illustrate the technique.

  5. Real-time automated failure analysis for on-orbit operations

    NASA Technical Reports Server (NTRS)

    Kirby, Sarah; Lauritsen, Janet; Pack, Ginger; Ha, Anhhoang; Jowers, Steven; Mcnenny, Robert; Truong, The; Dell, James

    1993-01-01

    A system which is to provide real-time failure analysis support to controllers at the NASA Johnson Space Center Control Center Complex (CCC) for both Space Station and Space Shuttle on-orbit operations is described. The system employs monitored systems' models of failure behavior and model evaluation algorithms which are domain-independent. These failure models are viewed as a stepping stone to more robust algorithms operating over models of intended function. The described system is designed to meet two sets of requirements. It must provide a useful failure analysis capability enhancement to the mission controller. It must satisfy CCC operational environment constraints such as cost, computer resource requirements, verification, and validation. The underlying technology and how it may be used to support operations is also discussed.

  6. Signal analysis techniques for incipient failure detection in turbomachinery

    NASA Technical Reports Server (NTRS)

    Coffin, T.

    1985-01-01

    Signal analysis techniques for the detection and classification of incipient mechanical failures in turbomachinery were developed, implemented and evaluated. Signal analysis techniques available to describe dynamic measurement characteristics are reviewed. Time domain and spectral methods are described, and statistical classification in terms of moments is discussed. Several of these waveform analysis techniques were implemented on a computer and applied to dynamic signals. A laboratory evaluation of the methods with respect to signal detection capability is described. Plans for further technique evaluation and data base development to characterize turbopump incipient failure modes from Space Shuttle main engine (SSME) hot firing measurements are outlined.

  7. Comprehension and retrieval of failure cases in airborne observatories

    NASA Technical Reports Server (NTRS)

    Alvarado, Sergio J.; Mock, Kenrick J.

    1995-01-01

    This paper describes research dealing with the computational problem of analyzing and repairing failures of electronic and mechanical systems of telescopes in NASA's airborne observatories, such as KAO (Kuiper Airborne Observatory) and SOFIA (Stratospheric Observatory for Infrared Astronomy). The research has resulted in the development of an experimental system that acquires knowledge of failure analysis from input text, and answers questions regarding failure detection and correction. The system's design builds upon previous work on text comprehension and question answering, including: knowledge representation for conceptual analysis of failure descriptions, strategies for mapping natural language into conceptual representations, case-based reasoning strategies for memory organization and indexing, and strategies for memory search and retrieval. These techniques have been combined into a model that accounts for: (a) how to build a knowledge base of system failures and repair procedures from descriptions that appear in telescope-operators' logbooks and FMEA (failure modes and effects analysis) manuals; and (b) how to use that knowledge base to search and retrieve answers to questions about causes and effects of failures, as well as diagnosis and repair procedures. This model has been implemented in FANSYS (Failure ANalysis SYStem), a prototype text comprehension and question answering program for failure analysis.

  8. Comprehension and retrieval of failure cases in airborne observatories

    NASA Astrophysics Data System (ADS)

    Alvarado, Sergio J.; Mock, Kenrick J.

    1995-05-01

    This paper describes research dealing with the computational problem of analyzing and repairing failures of electronic and mechanical systems of telescopes in NASA's airborne observatories, such as KAO (Kuiper Airborne Observatory) and SOFIA (Stratospheric Observatory for Infrared Astronomy). The research has resulted in the development of an experimental system that acquires knowledge of failure analysis from input text, and answers questions regarding failure detection and correction. The system's design builds upon previous work on text comprehension and question answering, including: knowledge representation for conceptual analysis of failure descriptions, strategies for mapping natural language into conceptual representations, case-based reasoning strategies for memory organization and indexing, and strategies for memory search and retrieval. These techniques have been combined into a model that accounts for: (a) how to build a knowledge base of system failures and repair procedures from descriptions that appear in telescope-operators' logbooks and FMEA (failure modes and effects analysis) manuals; and (b) how to use that knowledge base to search and retrieve answers to questions about causes and effects of failures, as well as diagnosis and repair procedures. This model has been implemented in FANSYS (Failure ANalysis SYStem), a prototype text comprehension and question answering program for failure analysis.

  9. Probabilistic structural analysis of aerospace components using NESSUS

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Nagpal, Vinod K.; Chamis, Christos C.

    1988-01-01

    Probabilistic structural analysis of a Space Shuttle main engine turbopump blade is conducted using the computer code NESSUS (numerical evaluation of stochastic structures under stress). The goal of the analysis is to derive probabilistic characteristics of blade response given probabilistic descriptions of uncertainties in blade geometry, material properties, and temperature and pressure distributions. Probability densities are derived for critical blade responses. Risk assessment and failure life analysis is conducted assuming different failure models.

  10. Program Helps In Analysis Of Failures

    NASA Technical Reports Server (NTRS)

    Stevenson, R. W.; Austin, M. E.; Miller, J. G.

    1993-01-01

    Failure Environment Analysis Tool (FEAT) computer program developed to enable people to see and better understand effects of failures in system. User selects failures from either engineering schematic diagrams or digraph-model graphics, and effects or potential causes of failures highlighted in color on same schematic-diagram or digraph representation. Uses digraph models to answer two questions: What will happen to system if set of failure events occurs? and What are possible causes of set of selected failures? Helps design reviewers understand exactly what redundancies built into system and where there is need to protect weak parts of system or remove them by redesign. Program also useful in operations, where it helps identify causes of failure after they occur. FEAT reduces costs of evaluation of designs, training, and learning how failures propagate through system. Written using Macintosh Programmers Workshop C v3.1. Can be linked with CLIPS 5.0 (MSC-21927, available from COSMIC).

  11. The GRASP 3: Graphical Reliability Analysis Simulation Program. Version 3: A users' manual and modelling guide

    NASA Technical Reports Server (NTRS)

    Phillips, D. T.; Manseur, B.; Foster, J. W.

    1982-01-01

    Alternate definitions of system failure create complex analysis for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
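
    The simulation approach can be illustrated with a small Monte Carlo sketch: sample alternating up/down durations for each component from its failure and repair distributions, and score a system-level failure definition over the mission, here a hypothetical 1-out-of-2 redundant pair evaluated on a time grid. This is an illustration of the idea, not the GRASP 3 code.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def component_state(t_grid, mtbf, mttr):
        """Simulate one component's up(1)/down(0) history on a time grid (alternating renewal process)."""
        state = np.ones_like(t_grid)
        t, up = 0.0, True
        while t < t_grid[-1]:
            dur = rng.exponential(mtbf if up else mttr)
            if not up:
                state[(t_grid >= t) & (t_grid < t + dur)] = 0
            t, up = t + dur, not up
        return state

    t_grid = np.linspace(0.0, 10_000.0, 20_001)
    n_runs, system_failed = 200, 0
    for _ in range(n_runs):
        # Hypothetical 1-out-of-2 redundant system: it fails only when both components are down at once.
        both_down = (component_state(t_grid, 500.0, 20.0) == 0) & (component_state(t_grid, 500.0, 20.0) == 0)
        system_failed += both_down.any()
    print("estimated mission failure probability:", system_failed / n_runs)
    ```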

  12. Estimation of the risk of failure for an endodontically treated maxillary premolar with MODP preparation and CAD/CAM ceramic restorations.

    PubMed

    Lin, Chun-Li; Chang, Yen-Hsiang; Pa, Che-An

    2009-10-01

    This study evaluated the risk of failure for an endodontically treated premolar with mesio-occlusodistal palatal (MODP) preparation and 3 different computer-aided design/computer-aided manufacturing (CAD/CAM) ceramic restoration configurations. Three 3-dimensional finite element (FE) models designed with CAD/CAM ceramic onlay, endocrown, and conventional crown restorations were constructed to perform simulations. The Weibull function was incorporated with FE analysis to calculate the long-term failure probability relative to different load conditions. The results indicated that the stress values on the enamel, dentin, and luting cement for the endocrown restoration were the lowest relative to the other 2 restorations. Weibull analysis revealed that the individual failure probabilities in the endocrown enamel, dentin, and luting cement were markedly lower than those for the onlay and conventional crown restorations. The overall failure probabilities were 27.5%, 1%, and 1% for the onlay, endocrown, and conventional crown restorations, respectively, under normal occlusal conditions. This numeric investigation suggests that endocrown and conventional crown restorations for endodontically treated premolars with MODP preparation present similar longevity.

  13. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  14. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  15. A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components

    NASA Technical Reports Server (NTRS)

    Abernethy, K.

    1986-01-01

    The incorporation of a number of additional capabilities into an existing Weibull analysis computer program and the results of a Monte Carlo computer simulation study to evaluate the usefulness of the Weibull methods, using samples with a very small number of failures and extensive censoring, are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs used in the SSME Weibull analysis are described. The documented methods were supplemented by adding computer calculations of approximate confidence intervals (using iterative methods) for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, along with the simulation program and the techniques used in it. Simulation results are tabulated for various combinations of Weibull shape parameters and the numbers of failures in the samples.

  16. Posttest analysis of the 1:6-scale reinforced concrete containment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pfeiffer, P.A.; Kennedy, J.M.; Marchertas, A.H.

    A prediction of the response of the Sandia National Laboratories 1:6-scale reinforced concrete containment model test was made by Argonne National Laboratory. ANL along with nine other organizations performed a detailed nonlinear response analysis of the 1:6-scale model containment subjected to overpressurization in the fall of 1986. The two-dimensional code TEMP-STRESS and the three-dimensional NEPTUNE code were utilized (1) to predict the global response of the structure, (2) to identify global failure sites and the corresponding failure pressures and (3) to identify some local failure sites and pressure levels. A series of axisymmetric models was studied with the two-dimensional computer program TEMP-STRESS. The comparison of these pretest computations with test data from the containment model has provided a test for the capability of the respective finite element codes to predict global failure modes, and hence serves as a validation of these codes. Only the two-dimensional analyses will be discussed in this paper. 3 refs., 10 figs.

  17. Orbiter subsystem hardware/software interaction analysis. Volume 8: AFT reaction control system, part 2

    NASA Technical Reports Server (NTRS)

    Becker, D. D.

    1980-01-01

    The orbiter subsystems and interfacing program elements which interact with the orbiter computer flight software are analyzed. The failure modes identified in the subsystem/element failure mode and effects analysis are examined. Potential interaction with the software is examined through an evaluation of the software requirements. The analysis is restricted to flight software requirements and excludes utility/checkout software. The results of the hardware/software interaction analysis for the forward reaction control system are presented.

  18. TEXCAD: Textile Composite Analysis for Design. Version 1.0: User's manual

    NASA Technical Reports Server (NTRS)

    Naik, Rajiv A.

    1994-01-01

    The Textile Composite Analysis for Design (TEXCAD) code provides the materials/design engineer with a user-friendly desktop computer (IBM PC compatible or Apple Macintosh) tool for the analysis of a wide variety of fabric reinforced woven and braided composites. It can be used to calculate overall thermal and mechanical properties along with engineering estimates of damage progression and strength. TEXCAD also calculates laminate properties for stacked, oriented fabric constructions. It discretely models the yarn centerline paths within the textile repeating unit cell (RUC) by assuming sinusoidal undulations at yarn cross-over points and uses a yarn discretization scheme (which subdivides each yarn into smaller, piecewise straight yarn slices) together with a 3-D stress averaging procedure to compute overall stiffness properties. In the calculations for strength, it uses a curved beam-on-elastic foundation model for yarn undulating regions together with an incremental approach in which stiffness properties for the failed yarn slices are reduced based on the predicted yarn slice failure mode. Nonlinear shear effects and nonlinear geometric effects can be simulated. Input to TEXCAD consists of: (1) materials parameters like impregnated yarn and resin properties such as moduli, Poisson's ratios, coefficients of thermal expansion, nonlinear parameters, axial failure strains and in-plane failure stresses; and (2) fabric parameters like yarn sizes, braid angle, yarn packing density, filament diameter and overall fiber volume fraction. Output consists of overall thermoelastic constants, yarn slice strains/stresses, yarn slice failure history, in-plane stress-strain response and ultimate failure strength. Strength can be computed under the combined action of thermal and mechanical loading (tension, compression and shear).

  19. SRM Internal Flow Tests and Computational Fluid Dynamic Analysis. Volume 3; Titan, ASRM, and Subscale Motor Analyses

    NASA Technical Reports Server (NTRS)

    1995-01-01

    A computational fluid dynamics (CFD) analysis has been performed on the aft slot region of the Titan 4 Solid Rocket Motor Upgrade (SRMU). This analysis was performed in conjunction with MSFC structural modeling of the propellant grain to determine if the flow field induced stresses would adversely alter the propellant geometry to the extent of causing motor failure. The results of the coupled CFD/stress analysis have shown that there is a continual increase of flow field resistance at the aft slot due to the aft segment propellant grain being progressively moved radially toward the centerline of the motor port. This 'bootstrapping' effect between grain radial movement and internal flow resistance is conducive to causing a rapid motor failure.

  20. Fracture and Failure at and Near Interfaces Under Pressure

    DTIC Science & Technology

    1998-06-18

    realistic data for comparison with improved analytical results, and to 2) initiate a new computational approach for stress analysis of cracks in solid propellants at and near interfaces, which analysis can draw on the ever expanding...tactical and strategic missile systems. The most important and most difficult component of the system analysis has been the predictability or

  1. Performance and sensitivity analysis of the generalized likelihood ratio method for failure detection. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bueno, R. A.

    1977-01-01

    Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft applications are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found to be satisfactory, but problems in correctly identifying the mode of a failure may arise. These issues are closely examined, as is the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.
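
    A minimal sketch of the GLR idea for one failure type, a step bias in otherwise white filter residuals: for each candidate onset time the statistic is the squared summed residual normalized by its variance, and a failure is declared when the maximum exceeds a threshold. The residual model, bias size, and threshold below are hypothetical.

    ```python
    import numpy as np

    def glr_bias_test(residuals, sigma, threshold):
        """GLR statistic for a step bias in white filter residuals with standard deviation sigma.

        For each candidate onset time theta, l(theta) = (sum of residuals since theta)^2
        / (n_theta * sigma^2); a failure is declared when max_theta l(theta) > threshold.
        """
        r = np.asarray(residuals, float)
        best = 0.0
        for theta in range(len(r)):
            seg = r[theta:]
            best = max(best, seg.sum() ** 2 / (len(seg) * sigma ** 2))
        return best, best > threshold

    rng = np.random.default_rng(3)
    clean = rng.normal(0.0, 1.0, size=200)
    faulty = clean.copy()
    faulty[150:] += 1.5          # hypothetical sensor bias appearing at sample 150
    print(glr_bias_test(clean, 1.0, threshold=20.0))
    print(glr_bias_test(faulty, 1.0, threshold=20.0))
    ```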

  2. A guide to onboard checkout. Volume 4: Propulsion

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The propulsion system for a space station is considered with respect to onboard checkout requirements. Failure analysis, reliability, and maintenance features are presented. Computer analysis techniques are also discussed.

  3. Environmental isolation task

    NASA Technical Reports Server (NTRS)

    Coulbert, C. D.

    1982-01-01

    The failure-analysis process was organized into a more specific set of long-term degradation steps so that material property change can be differentiated from module damage and module failure. Increasing module performance and life are discussed. A polymeric aging computer model is discussed. Early detection of polymer surface reactions due to aging is reported.

  4. PNNL Data-Intensive Computing for a Smarter Energy Grid

    ScienceCinema

    Carol Imhoff; Zhenyu (Henry) Huang; Daniel Chavarria

    2017-12-09

    The Middleware for Data-Intensive Computing (MeDICi) Integration Framework, an integrated platform to solve data analysis and processing needs, supports PNNL research on the U.S. electric power grid. MeDICi is enabling development of visualizations of grid operations and vulnerabilities, with the goal of near-real-time analysis to aid operators in preventing and mitigating grid failures.

  5. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 3: Structure and listing of programs

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  6. Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets while a sum of squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the uncertainty model assumed (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
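
    The Bernstein expansion step can be illustrated in one dimension: converting a polynomial from the power basis to the Bernstein basis on [0, 1] gives coefficients whose minimum and maximum enclose the polynomial on that box, so a positive lower bound certifies the box as safe. The requirement polynomial below is hypothetical, and the multivariate, subdivision-based machinery of the paper is not shown.

    ```python
    from math import comb

    def bernstein_coefficients(a):
        """Bernstein coefficients on [0, 1] of p(x) = sum_k a[k] * x**k (power basis)."""
        n = len(a) - 1
        return [sum(comb(i, k) / comb(n, k) * a[k] for k in range(i + 1)) for i in range(n + 1)]

    def enclosure(a):
        """Guaranteed (not necessarily tight) bounds on p over [0, 1]: min/max of the Bernstein coefficients."""
        b = bernstein_coefficients(a)
        return min(b), max(b)

    # Hypothetical requirement function g(x) = 0.2 - x + 2*x**2 on a unit box;
    # if the lower bound exceeded 0, the box would be certified to lie in the safe domain (g > 0).
    lower, upper = enclosure([0.2, -1.0, 2.0])
    print(lower, upper)
    ```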

  7. Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.

  8. Advances and trends in computational structural mechanics

    NASA Technical Reports Server (NTRS)

    Noor, A. K.

    1986-01-01

    Recent developments in computational structural mechanics are reviewed with reference to computational needs for future structures technology, advances in computational models for material behavior, discrete element technology, assessment and control of numerical simulations of structural response, hybrid analysis, and techniques for large-scale optimization. Research areas in computational structural mechanics which have high potential for meeting future technological needs are identified. These include prediction and analysis of the failure of structural components made of new materials, development of computational strategies and solution methodologies for large-scale structural calculations, and assessment of reliability and adaptive improvement of response predictions.

  9. Remote maintenance monitoring system

    NASA Technical Reports Server (NTRS)

    Simpkins, Lorenz G. (Inventor); Owens, Richard C. (Inventor); Rochette, Donn A. (Inventor)

    1992-01-01

    A remote maintenance monitoring system retrofits to a given hardware device with a sensor implant which gathers and captures failure data from the hardware device, without interfering with its operation. Failure data is continuously obtained from predetermined critical points within the hardware device, and is analyzed with a diagnostic expert system, which isolates failure origin to a particular component within the hardware device. For example, monitoring of a computer-based device may include monitoring of parity error data therefrom, as well as monitoring power supply fluctuations therein, so that parity error and power supply anomaly data may be used to trace the failure origin to a particular plane or power supply within the computer-based device. A plurality of sensor implants may be retrofit to corresponding plural devices comprising a distributed large-scale system. Transparent interface of the sensors to the devices precludes operative interference with the distributed network. Retrofit capability of the sensors permits monitoring of even older devices having no built-in testing technology. Continuous real time monitoring of a distributed network of such devices, coupled with diagnostic expert system analysis thereof, permits capture and analysis of even intermittent failures, thereby facilitating maintenance of the monitored large-scale system.

  10. Improving FMEA risk assessment through reprioritization of failures

    NASA Astrophysics Data System (ADS)

    Ungureanu, A. L.; Stan, G.

    2016-08-01

    Most of the current methods used to assess failure and to identify industrial equipment defects are based on the determination of a Risk Priority Number (RPN). Although conventional RPN calculation is easy to understand and use, the methodology presents some limitations, such as the large number of duplicates and the difficulty of assessing the RPN indices. In order to eliminate the aforementioned shortcomings, this paper puts forward an easy and efficient computing method, called Failure Developing Mode and Criticality Analysis (FDMCA), which takes into account the failures and the defect evolution in time, from failure appearance to breakdown.
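    As background, the conventional RPN referred to above is the product of severity, occurrence, and detection scores. A minimal sketch with hypothetical failure modes and scores is given below; the FDMCA reprioritization itself is not reproduced.

```python
# Conventional FMEA risk priority number: RPN = severity x occurrence x detection.
# The failure modes and 1-10 scores below are hypothetical.
failure_modes = [
    {"name": "bearing wear",       "severity": 7, "occurrence": 5, "detection": 4},
    {"name": "seal leakage",       "severity": 6, "occurrence": 6, "detection": 3},
    {"name": "shaft misalignment", "severity": 8, "occurrence": 3, "detection": 5},
]

for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

# Reprioritize: highest RPN first; ties broken by severity, as one possible rule
# for resolving the duplicate-RPN problem mentioned in the abstract.
for fm in sorted(failure_modes, key=lambda f: (f["rpn"], f["severity"]), reverse=True):
    print(f'{fm["name"]:<20} RPN = {fm["rpn"]}')
```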

  11. Failure Analysis in Platelet Molded Composite Systems

    NASA Astrophysics Data System (ADS)

    Kravchenko, Sergii G.

    Long-fiber discontinuous composite systems in the form of chopped prepreg tapes provide an advanced, structural grade, molding compound allowing for fabrication of complex three-dimensional components. Understanding of the process-structure-property relationship is essential for application of prepreg platelet molded components, especially because of their possibly irregular, disordered, heterogeneous morphology. Herein, a structure-property relationship was analyzed in composite systems of many platelets. Regular and irregular morphologies were considered. Platelet-based systems with more ordered morphology possess superior mechanical performance. While regular morphologies allow for a careful inspection of failure mechanisms derived from the morphological characteristics, irregular morphologies are representative of the composite architectures resulting from uncontrolled deposition and molding with chopped prepregs. Progressive failure analysis (PFA) was used to study the damage and deformation up to ultimate failure in a platelet-based composite system. Computational damage mechanics approaches were utilized to conduct the PFA. The developed computational models provided understanding of how the composite structure details, meaning the platelet geometry and system morphology (geometrical arrangement and orientation distribution of platelets), define the effective mechanical properties of a platelet-molded composite system, its stiffness, strength, and variability in properties.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiangjiang; Li, Weixuan; Lin, Guang

    In decision-making for groundwater management and contamination remediation, it is important to accurately evaluate the probability of the occurrence of a failure event. For small failure probability analysis, a large number of model evaluations are needed in the Monte Carlo (MC) simulation, which is impractical for CPU-demanding models. One approach to alleviate the computational cost caused by the model evaluations is to construct a computationally inexpensive surrogate model instead. However, using a surrogate approximation can cause an extra error in the failure probability analysis. Moreover, constructing accurate surrogates is challenging for high-dimensional models, i.e., models containing many uncertain input parameters. To address these issues, we propose an efficient two-stage MC approach for small failure probability analysis in high-dimensional groundwater contaminant transport modeling. In the first stage, a low-dimensional representation of the original high-dimensional model is sought with Karhunen–Loève expansion and sliced inverse regression jointly, which allows for the easy construction of a surrogate with polynomial chaos expansion. Then a surrogate-based MC simulation is implemented. In the second stage, the small number of samples that are close to the failure boundary are re-evaluated with the original model, which corrects the bias introduced by the surrogate approximation. The proposed approach is tested with a numerical case study and is shown to be 100 times faster than the traditional MC approach in achieving the same level of estimation accuracy.
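    A minimal sketch of the two-stage idea under simplified assumptions: a cheap quadratic least-squares surrogate stands in for the Karhunen–Loève/sliced-inverse-regression/polynomial-chaos construction, a trivial analytic function stands in for the CPU-demanding transport model, and the failure threshold and surrogate-error band are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def original_model(x):
    # Placeholder for the CPU-demanding model; returns a scalar response.
    return x[0]**2 + 0.5 * x[1] + 0.1 * np.sin(5 * x[2])

threshold = 4.0                                   # response > threshold is a "failure"
X = rng.standard_normal((200_000, 3))             # MC sample of uncertain inputs

# Stage 0: fit a cheap quadratic surrogate on a small design (stand-in for PCE).
X_train = rng.standard_normal((200, 3))
y_train = np.array([original_model(x) for x in X_train])
feats = lambda A: np.hstack([np.ones((len(A), 1)), A, A**2])
coef, *_ = np.linalg.lstsq(feats(X_train), y_train, rcond=None)

# Stage 1: surrogate-based MC screening of all samples.
y_hat = feats(X) @ coef

# Stage 2: re-evaluate only samples near the failure boundary with the original model.
band = 0.15                                       # assumed surrogate-error band
near = np.abs(y_hat - threshold) < band
y_final = y_hat.copy()
y_final[near] = np.array([original_model(x) for x in X[near]])

p_fail = np.mean(y_final > threshold)
print(f"re-evaluated {near.sum()} of {len(X)} samples, P(failure) ~ {p_fail:.2e}")
```

    Only the samples inside the boundary band pay the cost of the expensive model, which is where the claimed speed-up over plain MC comes from.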

  13. A Computer Code for Dynamic Stress Analysis of Media-Structure Problems with Nonlinearities (SAMSON). Volume III. User’s Manual.

    DTIC Science & Technology

    Descriptors: NONLINEAR SYSTEMS, LINEAR SYSTEMS, SUBROUTINES, SOIL MECHANICS, INTERFACES, DYNAMICS, LOADS(FORCES), FORCE(MECHANICS), DAMPING, ACCELERATION, ELASTIC PROPERTIES, PLASTIC PROPERTIES, CRACKS, REINFORCING MATERIALS, COMPOSITE MATERIALS, FAILURE(MECHANICS), MECHANICAL PROPERTIES, INSTRUCTION MANUALS, DIGITAL COMPUTERS, (*STRESSES, *COMPUTER PROGRAMS), (*STRUCTURES, STRESSES), (*DATA PROCESSING, STRUCTURAL PROPERTIES), SOILS, STRAIN(MECHANICS), MATHEMATICAL MODELS

  14. Physics-based Entry, Descent and Landing Risk Model

    NASA Technical Reports Server (NTRS)

    Gee, Ken; Huynh, Loc C.; Manning, Ted

    2014-01-01

    A physics-based risk model was developed to assess the risk associated with thermal protection system (TPS) failures during the entry, descent and landing phase of a manned spacecraft mission. In the model, entry trajectories were computed using a three-degree-of-freedom trajectory tool, the aerothermodynamic heating environment was computed using an engineering-level computational tool, and the thermal response of the TPS material was modeled using a one-dimensional thermal response tool. The model was capable of modeling the effect of micrometeoroid and orbital debris (MMOD) impact damage on the TPS thermal response. A Monte Carlo analysis was used to determine the effects of uncertainties in the vehicle state at Entry Interface, aerothermodynamic heating and material properties on the performance of the TPS design. The failure criterion was set as a temperature limit at the bondline between the TPS and the underlying structure. Both direct computation and response surface approaches were used to compute the risk. The model was applied to a generic manned space capsule design. The effects of material property uncertainty and MMOD damage on the risk of failure were analyzed. A comparison of the direct computation and response surface approaches was undertaken.
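    A minimal sketch of the direct-computation versus response-surface comparison under stand-in assumptions: a one-line steady-state conduction estimate replaces the one-dimensional thermal response tool, and the input distributions, bondline temperature limit, and quadratic response surface are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def bondline_temperature(heat_load, tps_conductivity, thickness):
    # Placeholder for the 1-D thermal response tool: a crude steady-state estimate.
    return 300.0 + heat_load * thickness / tps_conductivity

T_limit = 650.0                                   # bondline temperature limit, K (assumed)
n = 50_000
q = rng.normal(150.0, 20.0, n)                    # heating parameter (illustrative units)
k = rng.normal(0.06, 0.006, n)                    # TPS conductivity (illustrative)
th = rng.normal(0.10, 0.005, n)                   # TPS thickness, m (illustrative)

# Direct computation: run the "thermal model" on every Monte Carlo sample.
T = bondline_temperature(q, k, th)
p_direct = np.mean(T > T_limit)

# Response-surface approach: fit a quadratic surface on a small sample, then reuse it.
idx = rng.choice(n, 300, replace=False)
A = np.column_stack([np.ones(300), q[idx], k[idx], th[idx],
                     q[idx]**2, k[idx]**2, th[idx]**2])
c, *_ = np.linalg.lstsq(A, T[idx], rcond=None)
A_all = np.column_stack([np.ones(n), q, k, th, q**2, k**2, th**2])
p_rs = np.mean(A_all @ c > T_limit)

print(f"direct MC: P(failure) = {p_direct:.3e}, response surface: {p_rs:.3e}")
```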

  15. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    DOEpatents

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
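    A minimal sketch of cut-set-based network reliability under simplified assumptions: for a small hypothetical network, minimal link-failure cut sets are enumerated by brute force (the patented search algorithm is not reproduced), and their probabilities are summed into a rare-event upper bound on all-terminal unreliability.

```python
import itertools

# Small illustrative network: nodes and undirected links (the example is hypothetical).
nodes = {"A", "B", "C", "D"}
links = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "C"), ("B", "D")]
p_link_fail = 0.01                      # assumed identical link failure probability

def connected(active_links):
    # All-terminal connectivity check by depth-first search from node "A".
    adj = {n: set() for n in nodes}
    for u, v in active_links:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {"A"}, ["A"]
    while stack:
        for m in adj[stack.pop()]:
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen == nodes

# Enumerate minimal cut sets: link subsets whose failure disconnects the network,
# with no proper subset doing so.  Brute force is fine for a toy network.
cuts = []
for r in range(1, len(links) + 1):
    for combo in itertools.combinations(links, r):
        remaining = [l for l in links if l not in combo]
        if not connected(remaining) and not any(set(c) <= set(combo) for c in cuts):
            cuts.append(combo)

# Rare-event upper bound on all-terminal unreliability: sum of cut-set probabilities.
unreliability_ub = sum(p_link_fail ** len(c) for c in cuts)
print(f"{len(cuts)} minimal cut sets, unreliability <= {unreliability_ub:.3e}")
```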

  16. Recent advances in computational structural reliability analysis methods

    NASA Astrophysics Data System (ADS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-10-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  17. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  18. Progressive fracture of fiber composites

    NASA Technical Reports Server (NTRS)

    Irvin, T. B.; Ginty, C. A.

    1983-01-01

    Refined models and procedures are described for determining progressive composite fracture in graphite/epoxy angleplied laminates. Lewis Research Center capabilities are utilized including the Real Time Ultrasonic C Scan (RUSCAN) experimental facility and the Composite Durability Structural Analysis (CODSTRAN) computer code. The CODSTRAN computer code is used to predict the fracture progression based on composite mechanics, finite element stress analysis, and fracture criteria modules. The RUSCAN facility, CODSTRAN computer code, and scanning electron microscope are used to determine durability and identify failure mechanisms in graphite/epoxy composites.

  19. Tree failures and accidents in recreation areas: a guide to data management for hazard control

    Treesearch

    Lee A. Paine; James W. Clarke

    1978-01-01

    A data management system has been developed for storage and retrieval of tree failure and hazard data, with provision for computer analyses and presentation of results in useful tables. This system emphasizes important relationships between tree characteristics, environmental factors, and the resulting hazard. The analysis programs permit easy selection of subsets of...

  20. An Evolutionary Algorithm for Feature Subset Selection in Hard Disk Drive Failure Prediction

    ERIC Educational Resources Information Center

    Bhasin, Harpreet

    2011-01-01

    Hard disk drives are used in everyday life to store critical data. Although they are reliable, failure of a hard disk drive can be catastrophic, especially in applications like medicine, banking, air traffic control systems, missile guidance systems, computer numerical controlled machines, and more. The use of Self-Monitoring, Analysis and…

  1. A Weibull distribution accrual failure detector for cloud computing.

    PubMed

    Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are a fundamental component used to build high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing.
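    A minimal sketch of the accrual idea under assumptions not taken from the paper: a Weibull model is fitted to recent heartbeat inter-arrival times with a crude moment-matching iteration, and the suspicion level is taken as phi = -log10(1 - F(t)), where F is the fitted Weibull CDF and t is the time since the last heartbeat; the heartbeat history is hypothetical.

```python
import math

def fit_weibull(samples, iters=200):
    # Crude moment-matching fit of Weibull shape k and scale lam (not the paper's estimator).
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    k = 1.0
    for _ in range(iters):                       # damped fixed-point iteration on the CV relation
        g1 = math.gamma(1 + 1 / k)
        g2 = math.gamma(1 + 2 / k)
        cv2 = g2 / g1 ** 2 - 1                   # squared coefficient of variation at current k
        k *= (cv2 / (var / mean ** 2)) ** 0.1    # push cv2 toward the sample value
    lam = mean / math.gamma(1 + 1 / k)
    return k, lam

def suspicion(t_since_last, k, lam):
    # Accrual suspicion level: phi = -log10(1 - F(t)), F the Weibull CDF.
    F = 1 - math.exp(-((t_since_last / lam) ** k))
    return -math.log10(max(1 - F, 1e-300))

# Hypothetical heartbeat inter-arrival times (seconds) observed from a cloud node.
history = [0.98, 1.05, 1.02, 1.30, 0.95, 1.10, 1.01, 1.25]
k, lam = fit_weibull(history)
for t in (1.0, 2.0, 4.0):
    print(f"t = {t:.1f}s  suspicion = {suspicion(t, k, lam):.2f}")
```

    The application compares the suspicion value against its own threshold, so a single detector can serve programs with very different tolerance for false positives.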

  2. Computer-aided operations engineering with integrated models of systems and operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Ryan, Dan; Fleming, Land

    1994-01-01

    CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.

  3. Availability Analysis of Dual Mode Systems

    DOT National Transportation Integrated Search

    1974-04-01

    The analytical procedures presented define a method of evaluating the effects of failures in a complex dual-mode system based on a worst case steady-state analysis. The computed result is an availability figure of merit and not an absolute prediction...

  4. Failure detection and isolation investigation for strapdown skew redundant tetrad laser gyro inertial sensor arrays

    NASA Technical Reports Server (NTRS)

    Eberlein, A. J.; Lahm, T. G.

    1976-01-01

    The degree to which flight-critical failures in a strapdown laser gyro tetrad sensor assembly can be isolated in short-haul aircraft after a failure occurrence has been detected by the skewed sensor failure-detection voting logic is investigated along with the degree to which a failure in the tetrad computer can be detected and isolated at the computer level, assuming a dual-redundant computer configuration. The tetrad system was mechanized with two two-axis inertial navigation channels (INCs), each containing two gyro/accelerometer axes, computer, control circuitry, and input/output circuitry. Gyro/accelerometer data is crossfed between the two INCs to enable each computer to independently perform the navigation task. Computer calculations are synchronized between the computers so that calculated quantities are identical and may be compared. Fail-safe performance (identification of the first failure) is accomplished with a probability approaching 100 percent of the time, while fail-operational performance (identification and isolation of the first failure) is achieved 93 to 96 percent of the time.

  5. Modeling Cognitive Strategies during Complex Task Performing Process

    ERIC Educational Resources Information Center

    Mazman, Sacide Guzin; Altun, Arif

    2012-01-01

    The purpose of this study is to examine individuals' computer based complex task performing processes and strategies in order to determine the reasons of failure by cognitive task analysis method and cued retrospective think aloud with eye movement data. Study group was five senior students from Computer Education and Instructional Technologies…

  6. Failure detection and fault management techniques for flush airdata sensing systems

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.

    1992-01-01

    Methods based on chi-squared analysis are presented for detecting system and individual-port failures in the high-angle-of-attack flush airdata sensing system on the NASA F-18 High Alpha Research Vehicle. The HI-FADS hardware is introduced, and the aerodynamic model describes measured pressure in terms of dynamic pressure, angle of attack, angle of sideslip, and static pressure. Chi-squared analysis is described in the presentation of the concept for failure detection and fault management which includes nominal, iteration, and fault-management modes. A matrix of pressure orifices arranged in concentric circles on the nose of the aircraft indicate the parameters which are applied to the regression algorithms. The sensing techniques are applied to the F-18 flight data, and two examples are given of the computed angle-of-attack time histories. The failure-detection and fault-management techniques permit the matrix to be multiply redundant, and the chi-squared analysis is shown to be useful in the detection of failures.
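    A minimal sketch of the chi-squared residual test under stand-in assumptions: a random linear measurement model replaces the HI-FADS aerodynamic pressure model, the airdata states are estimated by least squares, and the normalized residual sum of squares is compared against a chi-squared threshold to flag a hypothetical failed port.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_ports, sigma = 9, 0.02
H = rng.uniform(0.5, 1.5, size=(n_ports, 3))     # stand-in linear measurement model
x_true = np.array([0.8, 0.1, 0.05])              # notional states (e.g. qbar, alpha, beta)

def chi2_stat(z):
    # Least-squares state estimate and normalized residual sum of squares.
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    r = z - H @ x_hat
    return float(r @ r) / sigma**2

threshold = stats.chi2.ppf(0.999, df=n_ports - 3)    # 99.9% acceptance region

z_ok = H @ x_true + rng.normal(0, sigma, n_ports)    # healthy measurements
z_bad = z_ok.copy()
z_bad[4] += 0.3                                      # hypothetical stuck/failed port

for name, z in (("nominal", z_ok), ("port 4 failed", z_bad)):
    s = chi2_stat(z)
    print(f"{name:>14}: chi2 = {s:7.1f}  fault = {s > threshold}")
```

    Once the system-level test trips, individual-port isolation can proceed by dropping one port at a time and re-testing, which is conceptually close to the iteration mode described in the abstract.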

  7. Reliability analysis of the F-8 digital fly-by-wire system

    NASA Technical Reports Server (NTRS)

    Brock, L. D.; Goodman, H. A.

    1981-01-01

    The F-8 Digital Fly-by-Wire (DFBW) flight test program, intended to provide the technology for advanced control systems giving aircraft enhanced performance and operational capability, is addressed. A detailed analysis of the experimental system was performed to estimate the probabilities of two significant safety-critical events: (1) loss of the primary flight control function, causing reversion to the analog bypass system; and (2) loss of the aircraft due to failure of the electronic flight control system. The analysis covers appraisal of risks due to random equipment failure, generic faults in the design of the system or its software, and induced failure due to external events. A unique diagrammatic technique was developed which details the combinatorial reliability equations for the entire system, promotes understanding of system failure characteristics, and identifies the most likely failure modes. The technique provides a systematic method of applying basic probability equations and is augmented by a computer program written in a modular fashion that duplicates the structure of these equations.

  8. Failure detection in high-performance clusters and computers using chaotic map computations

    DOEpatents

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
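    A minimal sketch of the underlying idea, not the patented method: two nodes iterate the same chaotic (logistic) map from the same seed, and a silent fault injected on one node is amplified by the map's sensitivity until the trajectories visibly disagree; the map parameters, fault size, and detection tolerance are illustrative.

```python
def logistic_trajectory(x0, n, r=3.99, fault_at=None, fault_eps=1e-12):
    # Iterate the logistic map; optionally inject a tiny perturbation to mimic
    # a silent arithmetic fault on one node.
    x, traj = x0, []
    for i in range(n):
        x = r * x * (1.0 - x)
        if i == fault_at:
            x += fault_eps
        traj.append(x)
    return traj

n_steps, seed = 200, 0.123456789
healthy = logistic_trajectory(seed, n_steps)
faulty = logistic_trajectory(seed, n_steps, fault_at=50)

# Chaotic sensitivity amplifies even a 1e-12 error until the trajectories disagree.
for i, (a, b) in enumerate(zip(healthy, faulty)):
    if abs(a - b) > 1e-3:
        print(f"trajectories diverge at step {i}: fault detected")
        break
else:
    print("trajectories agree: no fault detected")
```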

  9. Analysis of the STS-126 Flow Control Valve Structural-Acoustic Coupling Failure

    NASA Technical Reports Server (NTRS)

    Jones, Trevor M.; Larko, Jeffrey M.; McNelis, Mark E.

    2010-01-01

    During the Space Transportation System mission STS-126, one of the main engine's flow control valves incurred an unexpected failure. A section of the valve broke off during liftoff. It is theorized that an acoustic mode of the flowing fuel coupled with a structural mode of the valve, causing a high-cycle fatigue failure. This report documents the analysis efforts conducted in an attempt to verify this theory. Hand calculations, computational fluid dynamics, and finite element methods are all implemented, and analyses are performed using steady-state methods in addition to transient analysis methods. The conclusion of the analyses is that there is a critical acoustic mode that aligns with a structural mode of the valve.

  10. Modeling Geometry and Progressive Failure of Material Interfaces in Plain Weave Composites

    NASA Technical Reports Server (NTRS)

    Hsu, Su-Yuen; Cheng, Ron-Bin

    2010-01-01

    A procedure combining a geometrically nonlinear, explicit-dynamics contact analysis, computer aided design techniques, and elasticity-based mesh adjustment is proposed to efficiently generate realistic finite element models for meso-mechanical analysis of progressive failure in textile composites. In the procedure, the geometry of fiber tows is obtained by imposing a fictitious expansion on the tows. Meshes resulting from the procedure are conformal with the computed tow-tow and tow-matrix interfaces but are incongruent at the interfaces. The mesh interfaces are treated as cohesive contact surfaces not only to resolve the incongruence but also to simulate progressive failure. The method is employed to simulate debonding at the material interfaces in a ceramic-matrix plain weave composite with matrix porosity and in a polymeric matrix plain weave composite without matrix porosity, both subject to uniaxial cyclic loading. The numerical results indicate progression of the interfacial damage during every loading and reverse loading event in a constant strain amplitude cyclic process. However, the composites show different patterns of damage advancement.

  11. Digital avionics design and reliability analyzer

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The description and specifications for a digital avionics design and reliability analyzer are given. Its basic function is to provide for the simulation and emulation of the various fault-tolerant digital avionic computer designs that are developed. It has been established that hardware emulation at the gate-level will be utilized. The primary benefit of emulation to reliability analysis is the fact that it provides the capability to model a system at a very detailed level. Emulation allows the direct insertion of faults into the system, rather than waiting for actual hardware failures to occur. This allows for controlled and accelerated testing of system reaction to hardware failures. There is a trade study which leads to the decision to specify a two-machine system, including an emulation computer connected to a general-purpose computer. There is also an evaluation of potential computers to serve as the emulation computer.

  12. A Weibull distribution accrual failure detector for cloud computing

    PubMed Central

    Wu, Zhibo; Wu, Jin; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are a fundamental component used to build high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing. PMID:28278229

  13. A Framework for Debugging Geoscience Projects in a High Performance Computing Environment

    NASA Astrophysics Data System (ADS)

    Baxter, C.; Matott, L.

    2012-12-01

    High performance computing (HPC) infrastructure has become ubiquitous in today's world with the emergence of commercial cloud computing and academic supercomputing centers. Teams of geoscientists, hydrologists and engineers can take advantage of this infrastructure to undertake large research projects - for example, linking one or more site-specific environmental models with soft computing algorithms, such as heuristic global search procedures, to perform parameter estimation and predictive uncertainty analysis, and/or design least-cost remediation systems. However, the size, complexity and distributed nature of these projects can make identifying failures in the associated numerical experiments using conventional ad-hoc approaches both time-consuming and ineffective. To address these problems a multi-tiered debugging framework has been developed. The framework allows for quickly isolating and remedying a number of potential experimental failures, including: failures in the HPC scheduler; bugs in the soft computing code; bugs in the modeling code; and permissions and access control errors. The utility of the framework is demonstrated via application to a series of over 200,000 numerical experiments involving a suite of 5 heuristic global search algorithms and 15 mathematical test functions serving as cheap analogues for the simulation-based optimization of pump-and-treat subsurface remediation systems.

  14. The analysis of the pilot's cognitive and decision processes

    NASA Technical Reports Server (NTRS)

    Curry, R. E.

    1975-01-01

    Articles are presented on pilot performance in zero-visibility precision approach, failure detection by pilots during automatic landing, experiments in pilot decision-making during simulated low visibility approaches, a multinomial maximum likelihood program, and a random search algorithm for laboratory computers. Other topics discussed include detection of system failures in multi-axis tasks and changes in pilot workload during an instrument landing.

  15. Intelligent redundant actuation system requirements and preliminary system design

    NASA Technical Reports Server (NTRS)

    Defeo, P.; Geiger, L. J.; Harris, J.

    1985-01-01

    Several redundant actuation system configurations were designed and demonstrated to satisfy the stringent operational requirements of advanced flight control systems. However, this has been accomplished largely through brute force hardware redundancy, resulting in significantly increased computational requirements on the flight control computers which perform the failure analysis and reconfiguration management. Modern technology now provides powerful, low-cost microprocessors which are effective in performing failure isolation and configuration management at the local actuator level. One such concept, called an Intelligent Redundant Actuation System (IRAS), significantly reduces the flight control computer requirements and performs the local tasks more comprehensively than previously feasible. The requirements and preliminary design of an experimental laboratory system capable of demonstrating the concept and sufficiently flexible to explore a variety of configurations are discussed.

  16. Reliability Growth in Space Life Support Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2014-01-01

    A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.
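    A minimal sketch of estimating a reliability growth rate from failure data using the Crow-AMSAA (power-law) model, which is one common choice and not necessarily the model used in the paper; the failure times are hypothetical.

```python
import math

# Cumulative operating times (e.g. hours) at which failures occurred (hypothetical).
failure_times = [40.0, 95.0, 180.0, 310.0, 520.0, 800.0, 1200.0, 1750.0]
T = 2000.0                                   # total observation time
n = len(failure_times)

# Crow-AMSAA / power-law NHPP: E[N(t)] = lam * t**beta.
# MLE for time-terminated data: beta = n / sum(ln(T / t_i)).
beta = n / sum(math.log(T / t) for t in failure_times)
lam = n / T**beta

rate_now = lam * beta * T ** (beta - 1)      # current (instantaneous) failure rate
print(f"beta = {beta:.2f}  (beta < 1 indicates reliability growth)")
print(f"instantaneous failure rate at T = {rate_now:.4f} failures per hour")
```

    A fitted beta below one corresponds to the decreasing failure rate described for the shuttle, while beta near one corresponds to the roughly constant ISS failure rate.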

  17. SURE reliability analysis: Program and mathematics

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; White, Allan L.

    1988-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The computational methods on which the program is based provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
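    SURE's algebraic upper and lower bounds for semi-Markov death-state probabilities are not reproduced here; as a brute-force point of reference only, a small continuous-time Markov reliability model with an illustrative triplex structure and assumed rates can be solved numerically for the probability of reaching the death (system failure) state.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative triplex reliability model (rates are assumptions, not from the report).
lam, delta = 1e-4, 3.6e3          # fault arrival and reconfiguration rates, per hour
# States: 0 = all good, 1 = one active fault (recoverable), 2 = reconfigured duplex,
#         3 = system failure (death state).
Q = np.array([
    [-3 * lam,  3 * lam,           0.0,      0.0],
    [ 0.0,     -(delta + 2 * lam), delta,    2 * lam],
    [ 0.0,      0.0,              -2 * lam,  2 * lam],
    [ 0.0,      0.0,               0.0,      0.0],
])

t_mission = 10.0                  # mission time, hours
p = np.array([1.0, 0.0, 0.0, 0.0]) @ expm(Q * t_mission)   # transient state probabilities
print(f"P(system failure within {t_mission} h) = {p[3]:.3e}")
```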

  18. Analytical Prediction of Damage Growth in Notched Composite Panels Loaded in Axial Compression

    NASA Technical Reports Server (NTRS)

    Ambur, Damodar R.; McGowan, David M.; Davila, Carlos G.

    1999-01-01

    A progressive failure analysis method based on shell elements is developed for the computation of damage initiation and growth in stiffened thick-skin stitched graphite-epoxy panels loaded in axial compression. The analysis method involves a step-by-step simulation of material degradation based on ply-level failure mechanisms. High computational efficiency is derived from the use of superposed layers of shell elements to model each ply orientation in the laminate. Multiple integration points through the thickness are used to obtain the correct bending effects through the thickness without the need for ply-by-ply evaluations of the state of the material. The analysis results are compared with experimental results for three stiffened panels with notches oriented at 0, 15 and 30 degrees to the panel width dimension. A parametric study is performed to investigate the damage growth retardation characteristics of the Kevlar stitch lines in the panel.

  19. A FORTRAN program for multivariate survival analysis on the personal computer.

    PubMed

    Mulder, P G

    1988-01-01

    In this paper a FORTRAN program is presented for multivariate survival or life table regression analysis in a competing-risks situation. The relevant failure rate (for example, a particular disease or mortality rate) is modelled as a log-linear function of a vector of (possibly time-dependent) explanatory variables. The explanatory variables may also include the variable time itself, which is useful for parameterizing piecewise exponential time-to-failure distributions in a Gompertz-like or Weibull-like way as a more efficient alternative to Cox's proportional hazards model. Maximum likelihood estimates of the coefficients of the log-linear relationship are obtained with the iterative Newton-Raphson method. The program runs on a personal computer under DOS; running time is quite acceptable, even for large samples.
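    A minimal sketch of the same idea in a simpler setting (no competing risks, no time-dependent covariates): an exponential regression with a log-linear hazard, right censoring, and Newton-Raphson maximization of the log-likelihood; the simulated data and true coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: hazard rate lambda_i = exp(x_i . beta), right-censored at t = 2.
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])     # intercept + one covariate
beta_true = np.array([-0.5, 0.8])
t_event = rng.exponential(1.0 / np.exp(X @ beta_true))
censor = 2.0
t = np.minimum(t_event, censor)
d = (t_event <= censor).astype(float)                      # 1 = failure observed

# Newton-Raphson for the exponential (log-linear hazard) regression log-likelihood:
#   logL = sum(d_i * x_i.beta - t_i * exp(x_i.beta))
beta = np.zeros(2)
for _ in range(25):
    eta = np.exp(X @ beta)
    grad = X.T @ (d - t * eta)
    hess = -(X * (t * eta)[:, None]).T @ X
    step = np.linalg.solve(hess, grad)
    beta = beta - step
    if np.max(np.abs(step)) < 1e-10:
        break

print("estimated beta:", np.round(beta, 3), " true beta:", beta_true)
```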

  20. The Semantic Distance Task: Quantifying Semantic Distance with Semantic Network Path Length

    ERIC Educational Resources Information Center

    Kenett, Yoed N.; Levi, Effi; Anaki, David; Faust, Miriam

    2017-01-01

    Semantic distance is a determining factor in cognitive processes, such as semantic priming, operating upon semantic memory. The main computational approach to compute semantic distance is through latent semantic analysis (LSA). However, objections have been raised against this approach, mainly in its failure at predicting semantic priming. We…

  1. Fatigue Behavior of Computer-Aided Design/Computer-Assisted Manufacture Ceramic Abutments as a Function of Design and Ceramics Processing.

    PubMed

    Kelly, J Robert; Rungruanganunt, Patchnee

    2016-01-01

    Zirconia is being widely used, at times apparently by simply copying a metal design into ceramic. Structurally, ceramics are sensitive to both design and processing (fabrication) details. The aim of this work was to examine four computer-aided design/computer-assisted manufacture (CAD/CAM) abutments using a modified International Standards Organization (ISO) implant fatigue protocol to determine performance as a function of design and processing. Two full zirconia and two hybrid (Ti-based) abutments (n = 12 each) were tested wet at 15 Hz at a variety of loads to failure. Failure probability distributions were examined at each load, and when found to be the same, data from all loads were combined for lifetime analysis from accelerated to clinical conditions. Two distinctly different failure modes were found for both full zirconia and Ti-based abutments. One of these for zirconia has been reported clinically in the literature, and one for the Ti-based abutments has been reported anecdotally. The ISO protocol modification in this study forced failures in the abutments; no implant bodies failed. Extrapolated cycles for 10% failure at 70 N were: full zirconia, Atlantis 2 × 10^7 and Straumann 3 × 10^7; and Ti-based, Glidewell 1 × 10^6 and Nobel 1 × 10^21. Under accelerated conditions (200 N), performance differed significantly: Straumann clearly outperformed Astra (t test, P = .013), and the Glidewell Ti-base abutment also outperformed Atlantis zirconia at 200 N (Nobel ran-out; t test, P = .035). The modified ISO protocol in this study produced failures that were seen clinically. The manufacture matters; differences in design and fabrication that influence performance cannot be discerned clinically.

  2. The Utility of Failure Modes and Effects Analysis of Consultations in a Tertiary, Academic, Medical Center.

    PubMed

    Niv, Yaron; Itskoviz, David; Cohen, Michal; Hendel, Hagit; Bar-Giora, Yonit; Berkov, Evgeny; Weisbord, Irit; Leviron, Yifat; Isasschar, Assaf; Ganor, Arian

    Failure modes and effects analysis (FMEA) is a tool used to identify potential risks in health care processes. We used the FMEA tool for improving the process of consultation in an academic medical center. A team of 10 staff members (5 physicians, 2 quality experts, 2 organizational consultants, and 1 nurse) was established. The consultation process steps, from ordering to delivering, were computed. Failure modes were assessed for likelihood of occurrence, detection, and severity. A risk priority number (RPN) was calculated. An interventional plan was designed according to the highest RPNs. Thereafter, we compared the percentage of completed computer-based documented consultations before and after the intervention. The team identified 3 main categories of failure modes that reached the highest RPNs: initiation of consultation by a junior staff physician without senior approval, failure to document the consultation in the computerized patient registry, and asking for consultation on the telephone. An interventional plan was designed, including meetings to update knowledge of the consultation request process, stressing the importance of approval by a senior physician, training sessions for closing requests in the patient file, and reporting of telephone requests. The number of electronically documented consultation results and recommendations significantly increased (75%) after intervention. FMEA is an important and efficient tool for improving the consultation process in an academic medical center.

  3. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 1: Methodology and applications

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.

  4. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflights systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.

  5. Computer Software Management and Information Center

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Computer programs for passive anti-roll tank, earth resources laboratory applications, the NIMBUS-7 coastal zone color scanner derived products, transportable applications executive, plastic and failure analysis of composites, velocity gradient method for calculating velocities in an axisymmetric annular duct, an integrated procurement management system, data I/O PRON for the Motorola exorcisor, aerodynamic shock-layer shape, kinematic modeling, hardware library for a graphics computer, and a file archival system are documented.

  6. PRO-Elicere: A Study for Create a New Process of Dependability Analysis of Space Computer Systems

    NASA Astrophysics Data System (ADS)

    da Silva, Glauco; Netto Lahoz, Carlos Henrique

    2013-09-01

    This paper presents a new approach to computer system dependability analysis, called PRO-ELICERE, which introduces data mining concepts and intelligent decision-support mechanisms to analyze the potential hazards and failures of a critical computer system. Some techniques and tools that support traditional dependability analysis are also presented, and the concept of knowledge discovery and intelligent databases for critical computer systems is briefly discussed. After that, the PRO-ELICERE process is introduced, an intelligent approach to automating ELICERE, a process created to extract non-functional requirements for critical computer systems. PRO-ELICERE can be used in the V&V activities of projects of the Institute of Aeronautics and Space, such as the Brazilian Satellite Launcher (VLS-1).

  7. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    NASA Astrophysics Data System (ADS)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  8. Failure analysis of fuel cell electrodes using three-dimensional multi-length scale X-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Pokhrel, A.; El Hannach, M.; Orfino, F. P.; Dutta, M.; Kjeang, E.

    2016-10-01

    X-ray computed tomography (XCT), a non-destructive technique, is proposed for three-dimensional, multi-length scale characterization of complex failure modes in fuel cell electrodes. Comparative tomography data sets are acquired for a conditioned beginning of life (BOL) and a degraded end of life (EOL) membrane electrode assembly subjected to cathode degradation by voltage cycling. Micro length scale analysis shows a five-fold increase in crack size and 57% thickness reduction in the EOL cathode catalyst layer, indicating widespread action of carbon corrosion. Complementary nano length scale analysis shows a significant reduction in porosity, increased pore size, and dramatically reduced effective diffusivity within the remaining porous structure of the catalyst layer at EOL. Collapsing of the structure is evident from the combination of thinning and reduced porosity, as uniquely determined by the multi-length scale approach. Additionally, a novel image processing based technique developed for nano scale segregation of pore, ionomer, and Pt/C dominated voxels shows an increase in ionomer volume fraction, Pt/C agglomerates, and severe carbon corrosion at the catalyst layer/membrane interface at EOL. In summary, XCT based multi-length scale analysis enables detailed information needed for comprehensive understanding of the complex failure modes observed in fuel cell electrodes.

  9. Security Analysis of Smart Grid Cyber Physical Infrastructures Using Modeling and Game Theoretic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Sheldon, Frederick T.

    Cyber physical computing infrastructures typically consist of a number of interconnected sites. Their operation critically depends on both cyber components and physical components. Both types of components are subject to attacks of different kinds and frequencies, which must be accounted for in the initial provisioning and subsequent operation of the infrastructure via information security analysis. Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the electric sector failure scenarios and impact analyses from the NESCOR Working Group Study. From the Section 5 electric sector representative failure scenarios, we extracted the four generic failure scenarios and grouped them into three specific threat categories (confidentiality, integrity, and availability) to the system. These specific failure scenarios serve as a demonstration of our simulation. The analysis using our ABGT simulation demonstrates how to model the electric sector functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the cyber physical infrastructure network with respect to CIA.

  10. MeDICi Software Superglue for Data Analysis Pipelines

    ScienceCinema

    Ian Gorton

    2017-12-09

    The Middleware for Data-Intensive Computing (MeDICi) Integration Framework is an integrated middleware platform developed to solve the data analysis and processing needs of scientists across many domains. MeDICi is scalable, easily modified, and robust across multiple languages, protocols, and hardware platforms, and is in use today by PNNL scientists for bioinformatics, power grid failure analysis, and text analysis.

  11. Analytical investigation of solid rocket nozzle failure

    NASA Technical Reports Server (NTRS)

    Mccoy, K. E.; Hester, J.

    1985-01-01

    On April 5, 1983, an Inertial Upper Stage (IUS) spacecraft experienced loss of control during the burn of the second of two solid rocket motors. The anomaly investigation showed the cause to be a malfunction of the solid rocket motor. This paper presents a description of the IUS system, a failure analysis summary, an account of the thermal testing and computer modeling done at Marshall Space Flight Center, a comparison of analysis results with thermal data obtained from motor static tests, and a description of some of the design enhancements incorporated to prevent recurrence of the anomaly.

  12. The SURE Reliability Analysis Program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  13. Reliability computation using fault tree analysis

    NASA Technical Reports Server (NTRS)

    Chelson, P. O.

    1971-01-01

    A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
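    A minimal sketch of the conditioning idea for a toy fault tree in which the same basic failure appears in two fault paths; the tree, gate structure, and basic-event probabilities are hypothetical, and the naive independent-path calculation is shown for contrast.

```python
# Toy fault tree:  TOP = (A AND B) OR (A AND C)
# Basic event A appears in both fault paths, so the paths are not independent.
# Conditioning (factoring) on A handles the repeated event exactly.
p = {"A": 0.01, "B": 0.05, "C": 0.03}

def top_given_A(a, pB, pC):
    # With A fixed (a = 1 failed, a = 0 good), the two paths become independent.
    path1 = a * pB
    path2 = a * pC
    return 1 - (1 - path1) * (1 - path2)

p_top = (top_given_A(1, p["B"], p["C"]) * p["A"]
         + top_given_A(0, p["B"], p["C"]) * (1 - p["A"]))

# Naive OR of path probabilities treats the paths as independent and overestimates.
p_naive = 1 - (1 - p["A"] * p["B"]) * (1 - p["A"] * p["C"])
print(f"exact (conditioned) = {p_top:.6e}, naive independent-path = {p_naive:.6e}")
```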

  14. [Legionnaire's pneumonia with rhabdomyolysis and acute renal failure. A case report].

    PubMed

    Sposato, Bruno; Mariotta, Salvatore; Ricci, Alberto; Lucantoni, Gabriele; Schmid, Giovanni

    2003-09-01

    Legionella pneumophila is the agent responsible for Legionnaire's disease. It appears as a severe pneumonia and often requires admission to an Intensive Care Unit. In the literature, renal failure is reported to occur in 15 percent of Legionnaire's disease cases, and this event induces mortality in over 50% of these cases. The authors describe a case of Legionnaire's pneumonia with respiratory failure, rhabdomyolysis and acute renal failure. The patient was a 61-year-old female admitted to our hospital because of fever (38-38.5 degrees C), severe respiratory failure (pH = 7.49, PaCO2 = 23.1 mmHg, PaO2 = 56.7 mmHg) and oliguria (< 200 ml/24 h); chest x-rays and computed tomography (CT) showed a pneumonia at the right lower lobe. Among other findings, blood analysis showed the following values: BUN = 47 mg/dl, creatinine = 2.1 mg/dl, Na+ = 133 mmol/L, Cl- = 97 mmol/L, Ca+ = 7.2 mg/dl, K+ = 5.8 mmol/L, AST = 213 U/L, ALT = 45 U/L, LDH = 1817 U/L, CPK = 16738 U/L, CPK-MB = 229 U/L, myoglobin > 4300 ng/ml, leucocyte count = 17,500/mmc (N = 92%, L = 3%, M = 5%), positive anti-Legionella IgG and IgM (IgG > 1:64, IgM > 1:96), and evidence of Legionella soluble antigen in the urine analysis. Therapy with clarithromycin (500 mg b.i.d. i.v.) and rifampicin (600 mg/die i.v.) was begun; computed tomography showed an improvement of the pulmonary lesion after six days but, in the following days, the patient's condition and blood values worsened. The patient continued on antibiotics and underwent haemotherapy (Hb: 8 g/dl) and haemodialysis because of acute renal failure, but her condition worsened further and she died on the 18th day after admission. This case points out that rhabdomyolysis with acute renal failure is suggestive of Legionnaire's disease and is associated with a high rate of mortality.

  15. Volume accumulator design analysis computer codes

    NASA Technical Reports Server (NTRS)

    Whitaker, W. D.; Shimazaki, T. T.

    1973-01-01

    The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAU) for the 5-kwe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAU's under conditions of possible modes of failure which still permit continued system operation.

  16. Probabilistic design of fibre concrete structures

    NASA Astrophysics Data System (ADS)

    Pukl, R.; Novák, D.; Sajdlová, T.; Lehký, D.; Červenka, J.; Červenka, V.

    2017-09-01

    Advanced computer simulation is now a well-established methodology for evaluating the resistance of concrete engineering structures. Nonlinear finite element analysis enables realistic prediction of structural damage, peak load, failure, post-peak response, development of cracks in concrete, yielding of reinforcement, concrete crushing or shear failure. The nonlinear material models can cover various types of concrete and reinforced concrete: ordinary concrete, plain or reinforced, without or with prestressing, fibre concrete, (ultra) high performance concrete, lightweight concrete, etc. Advanced material models taking into account fibre concrete properties such as the shape of the tensile softening branch, high toughness and ductility are described in the paper. Since the variability of the fibre concrete material properties is rather high, probabilistic analysis seems to be the most appropriate format for structural design and evaluation of structural performance, reliability and safety. The presented combination of nonlinear analysis with advanced probabilistic methods allows evaluation of structural safety characterized by failure probability or by reliability index, respectively. The authors offer a methodology and computer tools for realistic safety assessment of concrete structures; the utilized approach is based on randomization of the nonlinear finite element analysis of the structural model. Uncertainty of the material properties or their randomness obtained from material tests is accounted for in the random distribution. Furthermore, degradation of the reinforced concrete materials, such as carbonation of concrete, corrosion of reinforcement, etc., can be accounted for in order to analyze life-cycle structural performance and to enable prediction of structural reliability and safety in time development. The results can serve as a rational basis for design of fibre concrete engineering structures based on advanced nonlinear computer analysis. The presented methodology is illustrated with results from two probabilistic studies with different types of concrete structures related to practical applications and made from various materials (with the parameters obtained from real material tests).
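    A minimal sketch of the randomization idea with a placeholder limit-state function standing in for the nonlinear finite element model: material and load parameters are sampled from assumed distributions, the failure probability is estimated by Monte Carlo, and the corresponding reliability index is recovered; all distributions and the resistance model are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def limit_state(f_t, load):
    # Placeholder for the nonlinear FE analysis: resistance minus load effect.
    # f_t ~ fibre-concrete tensile strength, load ~ applied action (both random).
    return 1.8 * f_t - load

n = 200_000
f_t = rng.lognormal(mean=np.log(2.5), sigma=0.15, size=n)    # strength, illustrative
load = rng.normal(loc=3.0, scale=0.5, size=n)                 # load effect, illustrative

p_f = np.mean(limit_state(f_t, load) < 0.0)                   # failure probability
beta = -norm.ppf(p_f)                                         # reliability index
print(f"P_f = {p_f:.2e},  reliability index beta = {beta:.2f}")
```

    In practice each sample would trigger a full nonlinear analysis, so small-sample schemes such as Latin hypercube sampling are typically used instead of plain Monte Carlo.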

  17. The application of probabilistic fracture analysis to residual life evaluation of embrittled reactor vessels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickson, T.L.; Simonen, F.A.

    1992-05-01

    Probabilistic fracture mechanics analysis is a major element of comprehensive probabilistic methodology on which current NRC regulatory requirements for pressurized water reactor vessel integrity evaluation are based. Computer codes such as OCA-P and VISA-II perform probabilistic fracture analyses to estimate the increase in vessel failure probability that occurs as the vessel material accumulates radiation damage over the operating life of the vessel. The results of such analyses, when compared with limits of acceptable failure probabilities, provide an estimation of the residual life of a vessel. Such codes can be applied to evaluate the potential benefits of plant-specific mitigating actions designed to reduce the probability of failure of a reactor vessel. 10 refs.

  18. The application of probabilistic fracture analysis to residual life evaluation of embrittled reactor vessels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickson, T.L.; Simonen, F.A.

    1992-01-01

    Probabilistic fracture mechanics analysis is a major element of comprehensive probabilistic methodology on which current NRC regulatory requirements for pressurized water reactor vessel integrity evaluation are based. Computer codes such as OCA-P and VISA-II perform probabilistic fracture analyses to estimate the increase in vessel failure probability that occurs as the vessel material accumulates radiation damage over the operating life of the vessel. The results of such analyses, when compared with limits of acceptable failure probabilities, provide an estimation of the residual life of a vessel. Such codes can be applied to evaluate the potential benefits of plant-specific mitigating actions designed to reduce the probability of failure of a reactor vessel. 10 refs.

  19. On the thermoelastic analysis of solar cell arrays and related material properties

    NASA Technical Reports Server (NTRS)

    Salama, M. A.; Bouquet, F. L.

    1976-01-01

    Accurate prediction of failure of solar cell arrays requires accuracy in the computation of thermally induced stresses. This was accomplished by using the finite element technique. Improved procedures for stress calculation were introduced together with failure criteria capable of describing a wide range of ductile and brittle material behavior. The stress distribution and associated failure mechanisms in the N-interconnect junction of two solar cell designs were then studied. In such stress and failure analysis, it is essential to know the thermomechanical properties of the materials involved. Measurements were made of properties of materials suitable for the design of lightweight arrays: microsheet-0211 glass material for the solar cell filter, and Kapton-H, Kapton F, Teflon, Tedlar, and Mica Ply PG-402 for lightweight substrates. The temperature-dependence of the thermal coefficient of expansion for these materials was determined together with other properties such as the elastic moduli, Poisson's ratio, and the stress-strain behavior up to failure.

  20. Fault trees for decision making in systems analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, Howard E.

    1975-10-09

    The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure.
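
    The cut-set and importance calculations mentioned above can be sketched in a few lines; the fault tree, basic-event names and probabilities below are hypothetical, and the Birnbaum measure is used here as one common importance ranking, without claiming it matches the measures implemented in the IMPORTANCE code.

    ```python
    from itertools import combinations

    # Hypothetical minimal cut sets of a small fault tree: the top event occurs
    # if every basic event in at least one cut set fails.
    cut_sets = [{"pump_A", "pump_B"}, {"valve"}, {"sensor", "controller"}]
    p = {"pump_A": 1e-2, "pump_B": 2e-2, "valve": 5e-4, "sensor": 3e-3, "controller": 1e-3}

    def top_probability(probs):
        """Exact top-event probability by inclusion-exclusion over the cut sets,
        assuming independent basic events."""
        total = 0.0
        for k in range(1, len(cut_sets) + 1):
            for combo in combinations(cut_sets, k):
                events = set().union(*combo)
                prod = 1.0
                for e in events:
                    prod *= probs[e]
                total += (-1) ** (k + 1) * prod
        return total

    def birnbaum_importance(event):
        """Birnbaum importance: P(top | event failed) - P(top | event working)."""
        return top_probability({**p, event: 1.0}) - top_probability({**p, event: 0.0})

    print("P(top) =", top_probability(p))
    for e in sorted(p, key=birnbaum_importance, reverse=True):
        print(e, birnbaum_importance(e))
    ```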

  1. Reducing unscheduled plant maintenance delays -- Field test of a new method to predict electric motor failure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Homce, G.T.; Thalimer, J.R.

    1996-05-01

    Most electric motor predictive maintenance methods have drawbacks that limit their effectiveness in the mining environment. The US Bureau of Mines (USBM) is developing an alternative approach to detect winding insulation breakdown in advance of complete motor failure. In order to evaluate the analysis algorithms necessary for this approach, the USBM has designed and installed a system to monitor 120 electric motors in a coal preparation plant. The computer-based experimental system continuously gathers, stores, and analyzes electrical parameters for each motor. The results are then correlated to data from conventional motor-maintenance methods and in-service failures to determine if the analysis algorithms can detect signs of insulation deterioration and impending failure. This paper explains the on-line testing approach used in this research, and describes monitoring system design and implementation. At this writing, data analysis is underway, but conclusive results are not yet available.

  2. Ply-level failure analysis of a graphite/epoxy laminate under bearing-bypass loading

    NASA Technical Reports Server (NTRS)

    Naik, R. A.; Crews, J. H., Jr.

    1988-01-01

    A combined experimental and analytical study was conducted to investigate and predict the failure modes of a graphite/epoxy laminate subjected to combined bearing and bypass loading. Tests were conducted in a test machine that allowed the bearing-bypass load ratio to be controlled while a single-fastener coupon was loaded to failure in either tension or compression. Onset and ultimate failure modes and strengths were determined for each test case. The damage-onset modes were studied in detail by sectioning and micrographing the damaged specimens. A two-dimensional, finite-element analysis was conducted to determine lamina strains around the bolt hole. Damage onset consisted of matrix cracks, delamination, and fiber failures. Stiffness loss appeared to be caused by fiber failures rather than by matrix cracking and delamination. An unusual offset-compression mode was observed for compressive bearing-bypass loading in which the specimen failed across its width along a line offset from the hole. The computed lamina strains in the fiber direction were used in a combined analytical and experimental approach to predict bearing-bypass diagrams for damage onset from a few simple tests.

  3. Ply-level failure analysis of a graphite/epoxy laminate under bearing-bypass loading

    NASA Technical Reports Server (NTRS)

    Naik, R. A.; Crews, J. H., Jr.

    1990-01-01

    A combined experimental and analytical study was conducted to investigate and predict the failure modes of a graphite/epoxy laminate subjected to combined bearing and bypass loading. Tests were conducted in a test machine that allowed the bearing-bypass load ratio to be controlled while a single-fastener coupon was loaded to failure in either tension or compression. Onset and ultimate failure modes and strengths were determined for each test case. The damage-onset modes were studied in detail by sectioning and micrographing the damaged specimens. A two-dimensional, finite-element analysis was conducted to determine lamina strains around the bolt hole. Damage onset consisted of matrix cracks, delamination, and fiber failures. Stiffness loss appeared to be caused by fiber failures rather than by matrix cracking and delamination. An unusual offset-compression mode was observed for compressive bearing-bypass loading in which the specimen failed across its width along a line offset from the hole. The computed lamina strains in the fiber direction were used in a combined analytical and experimental approach to predict bearing-bypass diagrams for damage onset from a few simple tests.

  4. Evaluation of the Retrieval of Metallurgical Document References using the Universal Decimal Classification in a Computer-Based System.

    ERIC Educational Resources Information Center

    Freeman, Robert R.

    A set of twenty-five questions was processed against a computer-stored file of 9159 document references in the field of ferrous metallurgy, representing the 1965 coverage of the Iron and Steel Institute (London) information service. A basis for evaluation of system performance characteristics and analysis of system failures was provided by using…

  5. Development of a realistic stress analysis for fatigue analysis of notched composite laminates

    NASA Technical Reports Server (NTRS)

    Humphreys, E. A.; Rosen, B. W.

    1979-01-01

    A finite element stress analysis which consists of a membrane and interlaminar shear spring analysis was developed. This approach was utilized in order to model physically realistic failure mechanisms while maintaining a high degree of computational economy. The accuracy of the stress analysis predictions is verified through comparisons with other solutions to the composite laminate edge effect problem. The stress analysis model was incorporated into an existing fatigue analysis methodology and the entire procedure computerized. A fatigue analysis is performed upon a square laminated composite plate with a circular central hole. A complete description and users guide for the computer code FLAC (Fatigue of Laminated Composites) is included as an appendix.

  6. A study of Mariner 10 flight experiences and some flight piece part failure rate computations

    NASA Technical Reports Server (NTRS)

    Paul, F. A.

    1976-01-01

    The problems and failures encountered in Mariner flight are discussed and the data available through a quantitative accounting of all electronic piece parts on the spacecraft are summarized. It also shows computed failure rates for electronic piece parts. It is intended that these computed data be used in the continued updating of the failure rate base used for trade-off studies and predictions for future JPL space missions.
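
    A hedged sketch of the kind of piece-part failure-rate bookkeeping described above: a point estimate from failures per accumulated part-hours, plus the conventional chi-square upper confidence bound. The counts are invented and the chi-square bound is a standard choice, not necessarily the method used in the JPL report.

    ```python
    from scipy.stats import chi2

    # Hypothetical counts: failures observed for one part type during the mission,
    # and accumulated part-hours (number of units x operating hours).
    failures = 3
    part_hours = 2.4e6

    # Point estimate of the failure rate (failures per hour).
    lam_hat = failures / part_hours

    # Upper 90% confidence bound, chi-square method for a time-truncated test.
    conf = 0.90
    lam_upper = chi2.ppf(conf, 2 * failures + 2) / (2 * part_hours)

    print(f"lambda ≈ {lam_hat:.2e} /h, {conf:.0%} upper bound ≈ {lam_upper:.2e} /h")
    ```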

  7. Recent developments of the NESSUS probabilistic structural analysis computer program

    NASA Technical Reports Server (NTRS)

    Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.

    1992-01-01

    The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability and risk analysis of structures considering cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.

  8. The practical impact of elastohydrodynamic lubrication

    NASA Technical Reports Server (NTRS)

    Anderson, W. J.

    1978-01-01

    The use of elastohydrodynamics in the analysis of rolling element bearings is discussed. Relationships for minimum film thickness and tractive force were incorporated into computer codes and used for bearing performance prediction. The lambda parameter (ratio of film thickness to composite surface roughness) was shown to be important in predicting bearing life and failure mode. Results indicate that at values of lambda below 3 failure modes other than the classic subsurface initiated fatigue can occur.
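
    The lambda parameter mentioned above is straightforward to compute; the sketch below uses illustrative film-thickness and roughness values, with the composite roughness taken as the RMS combination of the two surfaces.

    ```python
    import math

    def lambda_ratio(h_min_um, sigma_a_um, sigma_b_um):
        """Film parameter: minimum EHL film thickness divided by the composite
        (RMS-combined) surface roughness of the two contacting surfaces."""
        sigma_composite = math.sqrt(sigma_a_um**2 + sigma_b_um**2)
        return h_min_um / sigma_composite

    # Illustrative numbers (micrometres): a thin film over fairly rough surfaces.
    lam = lambda_ratio(h_min_um=0.25, sigma_a_um=0.10, sigma_b_um=0.08)
    print(f"lambda ≈ {lam:.2f} ->",
          "surface-distress failure modes likely" if lam < 3 else "full-film regime")
    ```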

  9. Graph-theoretic analysis of discrete-phase-space states for condition change detection and quantification of information

    DOEpatents

    Hively, Lee M.

    2014-09-16

    Data collected from devices and from the human condition may be used to forewarn of critical events such as machine/structural failure or, from brain/heart wave data, biomedical events such as stroke. By monitoring the data, and determining what values are indicative of a failure forewarning, one can provide adequate notice of the impending failure in order to take preventive measures. This disclosure teaches a computer-based method to convert dynamical numeric data representing physical objects (unstructured data) into discrete-phase-space states, and hence into a graph (structured data) for extraction of condition change.
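
    A minimal, unofficial illustration of the general idea (time series, then discrete phase-space states, then a graph of state transitions); it does not reproduce the patented method, and every tuning parameter below is arbitrary.

    ```python
    import numpy as np
    from collections import Counter

    def phase_space_graph(x, n_bins=4, dim=3, delay=5):
        """Convert a scalar time series into discrete phase-space states
        (symbolised delay vectors) and count transitions between successive
        states, i.e. the edges of a directed graph (structured data)."""
        # Symbolise amplitudes into (roughly) equiprobable bins.
        bin_edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
        symbols = np.digitize(x, bin_edges)
        # Build delay vectors -> one discrete state per time step.
        n = len(symbols) - (dim - 1) * delay
        states = [tuple(symbols[i + k * delay] for k in range(dim)) for i in range(n)]
        # Transition counts between successive states form the graph edges.
        edge_counts = Counter(zip(states[:-1], states[1:]))
        return states, edge_counts

    t = np.linspace(0, 50, 2000)
    signal = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
    states, graph_edges = phase_space_graph(signal)
    print(len(set(states)), "distinct states,", len(graph_edges), "distinct transitions")
    ```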

  10. Dynamic Fracture Simulations of Explosively Loaded Cylinders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arthur, Carly W.; Goto, D. M.

    2015-11-30

    This report documents the modeling results of high explosive experiments investigating dynamic fracture of steel (AerMet® 100 alloy) cylinders. The experiments were conducted at Lawrence Livermore National Laboratory (LLNL) during 2007 to 2008 [10]. A principal objective of this study was to gain an understanding of dynamic material failure through the analysis of hydrodynamic computer code simulations. Two-dimensional and three-dimensional computational cylinder models were analyzed using the ALE3D multi-physics computer code.

  11. Failure mechanisms of additively manufactured porous biomaterials: Effects of porosity and type of unit cell.

    PubMed

    Kadkhodapour, J; Montazerian, H; Darabi, A Ch; Anaraki, A P; Ahmadi, S M; Zadpoor, A A; Schmauder, S

    2015-10-01

    Since the advent of additive manufacturing techniques, regular porous biomaterials have emerged as promising candidates for tissue engineering scaffolds owing to their controllable pore architecture and feasibility in producing scaffolds from a variety of biomaterials. The architecture of scaffolds could be designed to achieve similar mechanical properties as in the host bone tissue, thereby avoiding issues such as stress shielding in bone replacement procedure. In this paper, the deformation and failure mechanisms of porous titanium (Ti6Al4V) biomaterials manufactured by selective laser melting from two different types of repeating unit cells, namely cubic and diamond lattice structures, with four different porosities are studied. The mechanical behavior of the above-mentioned porous biomaterials was studied using finite element models. The computational results were compared with the experimental findings from a previous study of ours. The Johnson-Cook plasticity and damage model was implemented in the finite element models to simulate the failure of the additively manufactured scaffolds under compression. The computationally predicted stress-strain curves were compared with the experimental ones. The computational models incorporating the Johnson-Cook damage model could predict the plateau stress and maximum stress at the first peak with less than 18% error. Moreover, the computationally predicted deformation modes were in good agreement with the results of scaling law analysis. A layer-by-layer failure mechanism was found for the stretch-dominated structures, i.e. structures made from the cubic unit cell, while the failure of the bending-dominated structures, i.e. structures made from the diamond unit cells, was accompanied by the shearing bands of 45°. Copyright © 2015 Elsevier Ltd. All rights reserved.
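
    For reference, the sketch below evaluates the generic Johnson-Cook flow-stress relation on which such plasticity-damage simulations are built; the material constants are placeholders, not the calibrated Ti6Al4V values used in the study, and the damage evolution law is omitted.

    ```python
    import numpy as np

    def johnson_cook_stress(eps_p, eps_dot, T, A=1000.0, B=800.0, n=0.5,
                            C=0.03, eps_dot0=1.0, m=1.0, T_room=293.0, T_melt=1900.0):
        """Johnson-Cook flow stress [MPa]:
           sigma = (A + B*eps_p**n) * (1 + C*ln(eps_dot/eps_dot0)) * (1 - T*^m),
        with homologous temperature T* = (T - T_room)/(T_melt - T_room).
        All constants here are placeholders, not calibrated Ti6Al4V values."""
        T_star = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)
        rate_term = 1.0 + C * np.log(np.maximum(eps_dot / eps_dot0, 1e-12))
        return (A + B * eps_p**n) * rate_term * (1.0 - T_star**m)

    # Flow stress at 5% plastic strain, quasi-static rate, room temperature.
    print(johnson_cook_stress(eps_p=0.05, eps_dot=1e-3, T=293.0))
    ```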

  12. Structural Reliability Analysis and Optimization: Use of Approximations

    NASA Technical Reports Server (NTRS)

    Grandhi, Ramana V.; Wang, Liping

    1999-01-01

    This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard textbooks. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis-intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different approximations including the higher-order reliability methods (HORM) for representing the failure surface. This report is divided into several parts to emphasize different segments of the structural reliability analysis and design. Broadly, it consists of mathematical foundations, methods and applications. Chapter 1 discusses the fundamental definitions of the probability theory, which are mostly available in standard textbooks. Probability density function descriptions relevant to this work are addressed. In Chapter 2, the concept and utility of function approximation are discussed for a general application in engineering analysis. Various forms of function representations and the latest developments in nonlinear adaptive approximations are presented with comparison studies. Research work accomplished in reliability analysis is presented in Chapter 3. First, the definition of safety index and most probable point of failure are introduced. Efficient ways of computing the safety index with fewer iterations are emphasized. In Chapter 4, the probability of failure prediction is presented using first-order, second-order and higher-order methods. System reliability methods are discussed in Chapter 5. Chapter 6 presents optimization techniques for the modification and redistribution of structural sizes for improving the structural reliability. The report also contains several appendices on probability parameters.
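
    A compact FORM sketch in the spirit of the discussion above: the Hasofer-Lind/Rackwitz-Fiessler iteration locates the MPP in standard normal space and returns the safety index and the first-order failure probability. The limit state below is an assumed linear example so the result can be checked analytically; it is not taken from the report.

    ```python
    import numpy as np
    from scipy.stats import norm

    def form_beta(g, u0, tol=1e-8, max_iter=100, h=1e-6):
        """First Order Reliability Method via the Hasofer-Lind/Rackwitz-Fiessler
        iteration in standard normal space: find the most probable point (MPP)
        and return beta = ||u*|| and p_f ≈ Phi(-beta)."""
        u = np.asarray(u0, dtype=float)
        for _ in range(max_iter):
            gu = g(u)
            grad = np.array([(g(u + h * e) - gu) / h for e in np.eye(len(u))])
            u_new = (grad @ u - gu) / (grad @ grad) * grad   # HL-RF update
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        beta = np.linalg.norm(u)
        return beta, norm.cdf(-beta)

    # Assumed linear limit state (resistance minus load margin) in standard normal space;
    # the exact answer is beta = 3 / sqrt(0.8**2 + 1.2**2) ≈ 2.08.
    g = lambda u: 3.0 + 0.8 * u[0] - 1.2 * u[1]
    beta, pf = form_beta(g, u0=[0.0, 0.0])
    print(beta, pf)
    ```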

  13. Modeling of Electrical Cable Failure in a Dynamic Assessment of Fire Risk

    NASA Astrophysics Data System (ADS)

    Bucknor, Matthew D.

    Fires at a nuclear power plant are a safety concern because of their potential to defeat the redundant safety features that provide a high level of assurance of the ability to safely shutdown the plant. One of the added complexities of providing protection against fires is the need to determine the likelihood of electrical cable failure, which can lead to the loss of the ability to control, or spurious actuation of, equipment that is required for safe shutdown. A number of plants are now transitioning from their deterministic fire protection programs to a risk-informed, performance-based fire protection program according to the requirements of National Fire Protection Association (NFPA) 805. Within a risk-informed framework, credit can be taken for the analysis of fire progression within a fire zone that was not permissible within the deterministic framework of a 10 CFR 50.48 Appendix R safe shutdown analysis. To perform the analyses required for the transition, plants need to be able to demonstrate with some level of assurance that cables related to safe shutdown equipment will not be compromised during postulated fire scenarios. This research contains the development of new cable failure models that have the potential to more accurately predict electrical cable failure in common cable bundle configurations. Methods to determine the thermal properties of the new models from empirical data are presented along with comparisons between the new models and existing techniques used in the nuclear industry today. A Dynamic Event Tree (DET) methodology is also presented which allows for the proper treatment of uncertainties associated with fire brigade intervention and its effects on cable failure analysis. Finally a shielding analysis is performed to determine the effects on the temperature response of a cable bundle that is shielded from a fire source by an intervening object such as another cable tray. The results from the analyses demonstrate that models of similar complexity to existing cable failure techniques and tuned to empirical data can better approximate the temperature response of cables located in tightly packed cable bundles. The new models also provide a way to determine the conditions inside a cable bundle which allows for separate treatment of cables on the interior of the bundle from cables on the exterior of the bundle. The results from the DET analysis show that the overall assessed probability of cable failure can be significantly reduced by more realistically accounting for the influence that the fire brigade has on a fire progression scenario. The shielding analysis results demonstrate a significant reduction in the temperature response of a shielded versus a non-shielded cable bundle; however, the computational cost of using a fire progression model that can capture these effects may be prohibitive for performing DET analyses with currently available computational fluid dynamics models and computational resources.
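
    The dissertation's cable failure models are not reproduced here; as a stand-in, the sketch below shows the simplest lumped (single-time-constant) thermal response of a cable driven to a failure-threshold temperature. The time constant, exposure temperature and threshold are assumed values for illustration only.

    ```python
    import numpy as np

    def cable_temperature(t_exposure, tau=250.0, T0=30.0, T_fail=205.0):
        """Lumped (first-order) response of a cable to a constant exposure
        temperature: dT/dt = (T_exposure - T)/tau.  Returns the time history and
        the first time the failure threshold T_fail [deg C] is reached (None if never)."""
        t = np.arange(0.0, 1800.0, 1.0)                          # seconds
        T = t_exposure + (T0 - t_exposure) * np.exp(-t / tau)    # analytic solution
        exceeded = np.any(T >= T_fail)
        return t, T, (t[np.argmax(T >= T_fail)] if exceeded else None)

    _, _, t_fail = cable_temperature(t_exposure=350.0)
    print("predicted time to cable failure ≈", t_fail, "s")
    ```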

  14. Reliability analysis of a robotic system using hybridized technique

    NASA Astrophysics Data System (ADS)

    Kumar, Naveen; Komal; Lather, J. S.

    2017-09-01

    In this manuscript, the reliability of a robotic system has been analyzed using the available data (containing vagueness, uncertainty, etc.). Quantification of the involved uncertainties is done through data fuzzification using triangular fuzzy numbers with known spreads, as suggested by system experts. With fuzzified data, if the existing fuzzy lambda-tau (FLT) technique is employed, the computed reliability parameters have a wide range of predictions. Therefore, the decision-maker cannot suggest any specific and influential managerial strategy to prevent unexpected failures and consequently to improve complex system performance. To overcome this problem, the present study utilizes a hybridized technique. With this technique, fuzzy set theory is utilized to quantify uncertainties, a fault tree is utilized for the system modeling, the lambda-tau method is utilized to formulate mathematical expressions for failure/repair rates of the system, and a genetic algorithm is utilized to solve the established nonlinear programming problem. Different reliability parameters of the robotic system are computed and the results are compared with those of the existing technique. The components of the robotic system follow exponential distributions, i.e., have constant failure and repair rates. Sensitivity analysis is also performed, and the impact on system mean time between failures (MTBF) is addressed by varying the other reliability parameters. Based on the analysis, some influential suggestions are given to improve the system performance.
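
    Setting the fuzzy machinery aside, the crisp core of such lambda-tau bookkeeping for a series system can be sketched as follows; the subsystem names and rates are illustrative, not taken from the paper, which propagates them as triangular fuzzy numbers.

    ```python
    # Crisp reliability bookkeeping for a series system of robotic subsystems with
    # exponential failure (lam) and repair (mu) rates in 1/h (illustrative values).
    subsystems = {"manipulator": (2e-4, 0.10), "controller": (5e-5, 0.20),
                  "gripper": (1e-4, 0.25), "sensor": (3e-4, 0.50)}

    lam_sys = sum(lam for lam, _ in subsystems.values())   # series-system failure rate
    mtbf = 1.0 / lam_sys                                    # mean time between failures

    availability = 1.0
    for lam, mu in subsystems.values():
        availability *= mu / (lam + mu)                     # steady-state availability

    print(f"system MTBF ≈ {mtbf:.0f} h, steady-state availability ≈ {availability:.4f}")
    ```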

  15. An energy-efficient failure detector for vehicular cloud computing.

    PubMed

    Liu, Jiaxi; Wu, Zhibo; Dong, Jian; Wu, Jin; Wen, Dongxin

    2018-01-01

    Failure detectors are one of the fundamental components for maintaining the high availability of vehicular cloud computing. In vehicular cloud computing, many RSUs are deployed along the road to improve connectivity. Many of them are equipped with solar batteries due to the unavailability or excess expense of wired electrical power, so it is important to reduce the battery consumption of RSUs. However, the existing failure detection algorithms are not designed to reduce the battery consumption of RSUs. To solve this problem, a new energy-efficient failure detector, 2E-FD, has been proposed specifically for vehicular cloud computing. 2E-FD not only provides acceptable failure detection service but also reduces the battery consumption of RSUs. Comparative experiments show that our failure detector has better performance in terms of speed, accuracy and battery consumption.
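
    The 2E-FD algorithm itself is not specified in the abstract; the sketch below is a generic timeout-based heartbeat failure detector, shown only to make the kind of component being evaluated concrete. The energy-saving behaviour of 2E-FD is not modelled.

    ```python
    import time

    class HeartbeatFailureDetector:
        """Minimal timeout-based failure detector: a node (e.g. an RSU or vehicle)
        is suspected if no heartbeat has arrived within `timeout` seconds.
        Illustrative only; not the 2E-FD algorithm."""

        def __init__(self, timeout=2.0):
            self.timeout = timeout
            self.last_seen = {}

        def heartbeat(self, node_id):
            # Record the arrival time of a heartbeat message from node_id.
            self.last_seen[node_id] = time.monotonic()

        def suspected(self, node_id):
            # Suspect the node if it was never seen or its heartbeat is stale.
            last = self.last_seen.get(node_id)
            return last is None or (time.monotonic() - last) > self.timeout

    fd = HeartbeatFailureDetector(timeout=2.0)
    fd.heartbeat("rsu-17")
    print(fd.suspected("rsu-17"), fd.suspected("rsu-23"))   # False, True
    ```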

  16. An energy-efficient failure detector for vehicular cloud computing

    PubMed Central

    Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Wen, Dongxin

    2018-01-01

    Failure detectors are one of the fundamental components for maintaining the high availability of vehicular cloud computing. In vehicular cloud computing, many RSUs are deployed along the road to improve connectivity. Many of them are equipped with solar batteries due to the unavailability or excess expense of wired electrical power, so it is important to reduce the battery consumption of RSUs. However, the existing failure detection algorithms are not designed to reduce the battery consumption of RSUs. To solve this problem, a new energy-efficient failure detector, 2E-FD, has been proposed specifically for vehicular cloud computing. 2E-FD not only provides acceptable failure detection service but also reduces the battery consumption of RSUs. Comparative experiments show that our failure detector has better performance in terms of speed, accuracy and battery consumption. PMID:29352282

  17. Independent Orbiter Assessment (IOA): Analysis of the DPS subsystem

    NASA Technical Reports Server (NTRS)

    Lowery, H. J.; Haufler, W. A.; Pietz, K. C.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis/Critical Items List (FMEA/CIL) are presented. The IOA approach features a top-down analysis of the hardware to independently determine failure modes, criticality, and potential critical items. The independent analysis results corresponding to the Orbiter Data Processing System (DPS) hardware are documented. The DPS hardware is required for performing critical functions of data acquisition, data manipulation, data display, and data transfer throughout the Orbiter. Specifically, the DPS hardware consists of the following components: Multiplexer/Demultiplexer (MDM); General Purpose Computer (GPC); Multifunction CRT Display System (MCDS); Data Buses and Data Bus Couplers (DBC); Data Bus Isolation Amplifiers (DBIA); Mass Memory Unit (MMU); and Engine Interface Unit (EIU). The IOA analysis process utilized available DPS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Due to the extensive redundancy built into the DPS, the number of critical items is small. Those identified resulted from premature operation and erroneous output of the GPCs.

  18. Deterministic and reliability based optimization of integrated thermal protection system composite panel using adaptive sampling techniques

    NASA Astrophysics Data System (ADS)

    Ravishankar, Bharani

    Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight saving potential, but complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS, subjected to thermal and mechanical loads, through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expenses involved in the structural analysis, a finite element based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, it was found that homogenization was applicable only for panels that are much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence a single unit cell was used for the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This further demands computationally expensive finite element analyses, which were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately to find the optimum design. Instead of building global surrogate models using a large number of designs, the computational resources were directed towards target regions near constraint boundaries for accurate representation of constraints using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that using adaptive sampling, the number of designs required to find the optimum was reduced drastically, while improving the accuracy. System reliability of the ITPS was estimated using a Monte Carlo Simulation (MCS) based method. A separable Monte Carlo method was employed that allowed separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties, loading conditions of the panel and error in finite element modeling. These uncertainties further increased the computational cost of MCS techniques, which was likewise reduced by employing surrogate models. In order to estimate the error in the probability of failure estimate, a bootstrapping method was applied. This research work thus demonstrates optimization of the ITPS composite panel with multiple failure modes and a large number of uncertainties using adaptive sampling techniques.

  19. Fundamental analysis of the failure of polymer-based fiber reinforced composites

    NASA Technical Reports Server (NTRS)

    Kanninen, M. F.; Rybicki, E. F.; Griffith, W. I.; Broek, D.

    1975-01-01

    A mathematical model predicting the strength of unidirectional fiber reinforced composites containing known flaws and with linear elastic-brittle material behavior was developed. The approach was to imbed a local heterogeneous region surrounding the crack tip into an anisotropic elastic continuum. This (1) permits an explicit analysis of the micromechanical processes involved in the fracture, and (2) remains simple enough to be useful in practical computations. Computations for arbitrary flaw size and orientation under arbitrary applied loads were performed. The mechanical properties were those of graphite epoxy. With the rupture properties arbitrarily varied to test the capabilities of the model to reflect real fracture modes, it was shown that fiber breakage, matrix crazing, crack bridging, matrix-fiber debonding, and axial splitting can all occur during a period of (gradually) increasing load prior to catastrophic failure. The calculations also reveal the sequential nature of the stable crack growth process preceding fracture.

  20. Efficient computation paths for the systematic analysis of sensitivities

    NASA Astrophysics Data System (ADS)

    Greppi, Paolo; Arato, Elisabetta

    2013-01-01

    A systematic sensitivity analysis requires computing the model on all points of a multi-dimensional grid covering the domain of interest, defined by the ranges of variability of the inputs. The key issues in performing such analyses efficiently on algebraic models are handling solution failures within and close to the feasible region, and minimizing the total iteration count. Scanning the domain in the obvious order is sub-optimal in terms of total iterations and is likely to cause many solution failures. The problem of choosing a better order can be translated geometrically into finding Hamiltonian paths on certain grid graphs. This work proposes two paths, one based on a mixed-radix Gray code and the other, a quasi-spiral path, produced by a novel heuristic algorithm. Some simple, easy-to-visualize examples are presented, followed by performance results for the quasi-spiral algorithm and the practical application of the different paths in a process simulation tool.
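
    The paper's quasi-spiral heuristic is not reproduced here; the sketch below shows the simpler reflected (snake) ordering, a mixed-radix Gray-code-style path in which consecutive grid points differ in exactly one coordinate, so each model solve can warm-start from a neighbouring converged solution.

    ```python
    def snake_path(shape):
        """Enumerate all points of a multi-dimensional parameter grid so that
        consecutive points differ in exactly one index by +/-1 (a reflected,
        boustrophedon ordering)."""
        points = [()]
        for size in shape:                      # extend one dimension at a time
            new_points = []
            for i, prefix in enumerate(points):
                idx = range(size) if i % 2 == 0 else range(size - 1, -1, -1)
                new_points.extend(prefix + (j,) for j in idx)
            points = new_points
        return points

    path = snake_path((3, 4))
    print(path)
    # Check the Gray-code property: each step changes a single coordinate by one.
    assert all(sum(abs(a - b) for a, b in zip(p, q)) == 1
               for p, q in zip(path, path[1:]))
    ```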

  1. Fusing Symbolic and Numerical Diagnostic Computations

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    X-2000 Anomaly Detection Language denotes a developmental computing language, and the software that establishes and utilizes the language, for fusing two diagnostic computer programs, one implementing a numerical analysis method, the other implementing a symbolic analysis method, into a unified event-based decision analysis software system for real-time detection of events (e.g., failures) in a spacecraft, aircraft, or other complex engineering system. The numerical analysis method is performed by beacon-based exception analysis for multi-missions (BEAMs), which has been discussed in several previous NASA Tech Briefs articles. The symbolic analysis method is, more specifically, an artificial-intelligence method of the knowledge-based, inference engine type, and its implementation is exemplified by the Spacecraft Health Inference Engine (SHINE) software. The goal in developing the capability to fuse numerical and symbolic diagnostic components is to increase the depth of analysis beyond that previously attainable, thereby increasing the degree of confidence in the computed results. In practical terms, the sought improvement is to enable detection of all or most events, with no or few false alarms.

  2. Variation of Time Domain Failure Probabilities of Jack-up with Wave Return Periods

    NASA Astrophysics Data System (ADS)

    Idris, Ahmad; Harahap, Indra S. H.; Ali, Montassir Osman Ahmed

    2018-04-01

    This study evaluated failure probabilities of jack-up units in the framework of time-dependent reliability analysis, using uncertainty from different sea states representing different return periods of the design wave. Surface elevation for each sea state was represented by the Karhunen-Loeve expansion method using the eigenfunctions of prolate spheroidal wave functions in order to obtain the wave load. The stochastic wave load was propagated on a simplified jack-up model developed in commercial software to obtain the structural response due to the wave loading. The stochastic response was then analyzed to determine the failure probability for excessive deck displacement, in the framework of time-dependent reliability analysis, using Matlab codes developed on a personal computer. Results from the study indicated that the failure probability increases with increase in the severity of the sea state representing a longer return period. Although the results obtained agree with those of a study of a similar jack-up model using a time-independent method at higher values of maximum allowable deck displacement, they differ at lower values of the criterion, where that study reported that failure probability decreases with increase in the severity of the sea state.

  3. Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant

    NASA Astrophysics Data System (ADS)

    Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.

    2015-12-01

    This paper deals with the Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. This system was modeled using a Markov birth-death process with the assumption that the failure and repair rates of each subsystem follow exponential distributions. The first-order Chapman-Kolmogorov differential equations are developed with the use of a mnemonic rule, and these equations are solved with the Runge-Kutta fourth-order method. The long-run availability, reliability and mean time between failures are computed for various choices of failure and repair rates of subsystems of the system. The findings of the paper are discussed with the plant personnel to adopt and practice suitable maintenance policies/strategies to enhance the performance of the urea synthesis system of the fertilizer plant.
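
    A minimal sketch of the solution procedure described: the Chapman-Kolmogorov equations for a single two-state (working/failed) subsystem, integrated with classical fourth-order Runge-Kutta. The failure and repair rates are illustrative, not values from the paper.

    ```python
    import numpy as np

    # Two-state Markov (birth-death) model: state 0 = working, state 1 = failed,
    # with failure rate lam and repair rate mu [1/h] (illustrative values).
    lam, mu = 0.01, 0.5
    Q = np.array([[-lam,  lam],
                  [  mu,  -mu]])          # generator matrix

    def rhs(P):
        """Chapman-Kolmogorov equations dP/dt = P @ Q."""
        return P @ Q

    def rk4(P0, dt, n_steps):
        """Classical fourth-order Runge-Kutta integration of the state probabilities."""
        P = np.array(P0, dtype=float)
        for _ in range(n_steps):
            k1 = rhs(P)
            k2 = rhs(P + 0.5 * dt * k1)
            k3 = rhs(P + 0.5 * dt * k2)
            k4 = rhs(P + dt * k3)
            P = P + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        return P

    P_t = rk4(P0=[1.0, 0.0], dt=0.1, n_steps=1000)   # probabilities after 100 h
    print("availability at t=100 h:", P_t[0], " long-run value:", mu / (lam + mu))
    ```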

  4. Quantifying effectiveness of failure prediction and response in HPC systems : methodology and example.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Chen, Frank Xiaoxiao; Pebay, Philippe Pierre

    2010-06-01

    Effective failure prediction and mitigation strategies in high-performance computing systems could provide huge gains in resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates these using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors. While many research studies have quantified the expected impact of growing system size, and the associated shortened mean time to failure (MTTF), on application performance in large-scale high-performance computing (HPC) platforms, there has been little if any work to quantify the possible gains from predicting system resource failures with significant but imperfect accuracy. This possibly stems from HPC system complexity and the fact that, to date, no one has established any good predictors of failure in these systems. Our work in the OVIS project aims to discover these predictors via a variety of data collection techniques and statistical analysis methods that yield probabilistic predictions. The question then is, 'How good or useful are these predictions?' We investigate methods for answering this question in a general setting, and illustrate them using a specific failure predictor discovered on a production system at Sandia.

  5. Analysis and control of supersonic vortex breakdown flows

    NASA Technical Reports Server (NTRS)

    Kandil, Osama A.

    1990-01-01

    Analysis and computation of steady, compressible, quasi-axisymmetric flow of an isolated, slender vortex are considered. The compressible, Navier-Stokes equations are reduced to a simpler set by using the slenderness and quasi-axisymmetry assumptions. The resulting set along with a compatibility equation are transformed from the diverging physical domain to a rectangular computational domain. Solving for a compatible set of initial profiles and specifying a compatible set of boundary conditions, the equations are solved using a type-differencing scheme. Vortex breakdown locations are detected by the failure of the scheme to converge. Computational examples include isolated vortex flows at different Mach numbers, external axial-pressure gradients and swirl ratios.

  6. Progressive Failure And Life Prediction of Ceramic and Textile Composites

    NASA Technical Reports Server (NTRS)

    Xue, David Y.; Shi, Yucheng; Katikala, Madhu; Johnston, William M., Jr.; Card, Michael F.

    1998-01-01

    An engineering approach to predict the fatigue life and progressive failure of multilayered composite and textile laminates is presented. Analytical models which account for matrix cracking, statistical fiber failures and nonlinear stress-strain behavior have been developed for both composites and textiles. The analysis method is based on a combined micromechanics, fracture mechanics and failure statistics analysis. Experimentally derived empirical coefficients are used to account for the interface of fiber and matrix, fiber strength, and fiber-matrix stiffness reductions. Similar approaches were applied to textiles using Repeating Unit Cells. In composite fatigue analysis, Walker's equation is applied for matrix fatigue cracking and Heywood's formulation is used for fiber strength fatigue degradation. The analysis has been compared with experiment with good agreement. Comparisons were made with Graphite-Epoxy, C/SiC and Nicalon/CAS composite materials. For textile materials, comparisons were made with triaxial braided and plain weave materials under biaxial or uniaxial tension. Fatigue predictions were compared with test data obtained from plain weave C/SiC materials tested at AS&M. Computer codes were developed to perform the analysis. Composite Progressive Failure Analysis for Laminates is contained in the code CPFail. Micromechanics Analysis for Textile Composites is contained in the code MicroTex. Both codes were adapted to run as subroutines for the finite element code ABAQUS and CPFail-ABAQUS and MicroTex-ABAQUS. Graphic user interface (GUI) was developed to connect CPFail and MicroTex with ABAQUS.

  7. Direct modeling parameter signature analysis and failure mode prediction of physical systems using hybrid computer optimization

    NASA Technical Reports Server (NTRS)

    Drake, R. L.; Duvoisin, P. F.; Asthana, A.; Mather, T. W.

    1971-01-01

    High speed automated identification and design of dynamic systems, both linear and nonlinear, are discussed. Special emphasis is placed on developing hardware and techniques which are applicable to practical problems. The basic modeling experiment and new results are described. Using the improvements developed successful identification of several systems, including a physical example as well as simulated systems, was obtained. The advantages of parameter signature analysis over signal signature analysis in go-no go testing of operational systems were demonstrated. The feasibility of using these ideas in failure mode prediction in operating systems was also investigated. An improved digital controlled nonlinear function generator was developed, de-bugged, and completely documented.

  8. Structural reliability analysis of laminated CMC components

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Palko, Joseph L.; Gyekenyesi, John P.

    1991-01-01

    For laminated ceramic matrix composite (CMC) materials to realize their full potential in aerospace applications, design methods and protocols are a necessity. The time independent failure response of these materials is focussed on and a reliability analysis is presented associated with the initiation of matrix cracking. A public domain computer algorithm is highlighted that was coupled with the laminate analysis of a finite element code and which serves as a design aid to analyze structural components made from laminated CMC materials. Issues relevant to the effect of the size of the component are discussed, and a parameter estimation procedure is presented. The estimation procedure allows three parameters to be calculated from a failure population that has an underlying Weibull distribution.

  9. Computer system design description for SY-101 hydrogen mitigation test project data acquisition and control system (DACS-1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ermi, A.M.

    1997-05-01

    Description of the Proposed Activity/REPORTABLE OCCURRENCE or PIAB: This ECN changes the computer systems design description support document describing the computer system used to control, monitor and archive the processes and outputs associated with the Hydrogen Mitigation Test Pump installed in SY-101. There is no new activity or procedure associated with the updating of this reference document. The updating of this computer system design description maintains an agreed upon documentation program initiated within the test program and carried into operations at time of turnover to maintain configuration control as outlined by design authority practicing guidelines. There are no new credible failure modes associated with the updating of information in a support description document. The failure analysis of each change was reviewed at the time of implementation of the Systems Change Request for all the processes changed. This document simply provides a history of implementation and current system status.

  10. Legal issues of computer imaging in plastic surgery: a primer.

    PubMed

    Chávez, A E; Dagum, P; Koch, R J; Newman, J P

    1997-11-01

    Although plastic surgeons are increasingly incorporating computer imaging techniques into their practices, many fear the possibility of legally binding themselves to achieve surgical results identical to those reflected in computer images. Computer imaging allows surgeons to manipulate digital photographs of patients to project possible surgical outcomes. Some of the many benefits imaging techniques pose include improving doctor-patient communication, facilitating the education and training of residents, and reducing administrative and storage costs. Despite the many advantages computer imaging systems offer, however, surgeons understandably worry that imaging systems expose them to immense legal liability. The possible exploitation of computer imaging by novice surgeons as a marketing tool, coupled with the lack of consensus regarding the treatment of computer images, adds to the concern of surgeons. A careful analysis of the law, however, reveals that surgeons who use computer imaging carefully and conservatively, and adopt a few simple precautions, substantially reduce their vulnerability to legal claims. In particular, surgeons face possible claims of implied contract, failure to instruct, and malpractice from their use or failure to use computer imaging. Nevertheless, legal and practical obstacles frustrate each of those causes of actions. Moreover, surgeons who incorporate a few simple safeguards into their practice may further reduce their legal susceptibility.

  11. A global analysis approach for investigating structural resilience in urban drainage systems.

    PubMed

    Mugume, Seith N; Gomez, Diego E; Fu, Guangtao; Farmani, Raziyeh; Butler, David

    2015-09-15

    Building resilience in urban drainage systems requires consideration of a wide range of threats that contribute to urban flooding. Existing hydraulic reliability-based approaches have focused on quantifying functional failure caused by extreme rainfall or increase in dry weather flows that lead to hydraulic overloading of the system. Such approaches, however, do not fully explore the full system failure scenario space due to exclusion of crucial threats such as equipment malfunction, pipe collapse and blockage that can also lead to urban flooding. In this research, a new analytical approach based on global resilience analysis is investigated and applied to systematically evaluate the performance of an urban drainage system when subjected to a wide range of structural failure scenarios resulting from random cumulative link failure. Link failure envelopes, which represent the resulting loss of system functionality (impacts), are determined by computing the upper and lower limits of the simulation results for total flood volume (failure magnitude) and average flood duration (failure duration) at each link failure level. A new resilience index that combines the failure magnitude and duration into a single metric is applied to quantify system residual functionality at each considered link failure level. With this approach, resilience has been tested and characterised for an existing urban drainage system in Kampala city, Uganda. In addition, the effectiveness of potential adaptation strategies in enhancing its resilience to cumulative link failure has been tested. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. Performance Evaluation of Counter-Based Dynamic Load Balancing Schemes for Massive Contingency Analysis with Different Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Chavarría-Miranda, Daniel

    Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimation. Contingency analysis is also extensively used in power market operation for feasibility test of market solutions. High performance computing holds the promise of faster analysis of more contingency cases for the purpose of safe and reliable operation of today’s power grids with less operating margin and more intermittent renewable energy sources. This paper evaluates the performance of counter-based dynamic load balancing schemes for massive contingency analysis under different computing environments. Insights from the performance evaluation can be used as guidance for users to select suitable schemes in the application of massive contingency analysis. Case studies, as well as MATLAB simulations, of massive contingency cases using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing with counter-based dynamic load balancing schemes.

  13. The Model Experiments and Finite Element Analysis on Deformation and Failure by Excavation of Grounds in Foregoing-roof Method

    NASA Astrophysics Data System (ADS)

    Sotokoba, Yasumasa; Okajima, Kenji; Iida, Toshiaki; Tanaka, Tadatsugu

    We propose the trenchless box culvert construction method to construct box culverts in small covering soil layers while keeping roads or tracks open. When we use this construction method, it is necessary to clarify the deformation and shear failure caused by excavation of the ground. In order to investigate the soil behavior, model experiments and elasto-plastic finite element analysis were performed. In the model experiments, it was shown that the shear failure developed from the end of the roof to the toe of the boundary surface. In the finite element analysis, a shear band effect was introduced. Comparing the observed shear bands in the model experiments with the computed maximum shear strain contours, it was found that the observed direction of the shear band could be simulated reasonably by the finite element analysis. We may say that the finite element method used in this study is a useful tool for this construction method.

  14. The SURE reliability analysis program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  15. DATMAN: A reliability data analysis program using Bayesian updating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, M.; Feltus, M.A.

    1996-12-31

    Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
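
    Whether DATMAN uses exactly this prior family is an assumption, but the conjugate gamma-Poisson update is the textbook form of such Bayesian failure-rate updating; the counts and prior below are illustrative.

    ```python
    from scipy.stats import gamma

    # Gamma prior on the failure rate lambda, Poisson-distributed failure counts
    # observed over accumulated operating time T (illustrative numbers).
    alpha0, beta0 = 2.0, 1.0e5        # prior: roughly 2 failures per 1e5 h of experience
    new_failures, new_hours = 1, 4.0e4

    # Conjugate posterior: Gamma(alpha0 + failures, beta0 + hours).
    alpha_post = alpha0 + new_failures
    beta_post = beta0 + new_hours

    mean_rate = alpha_post / beta_post
    upper95 = gamma.ppf(0.95, a=alpha_post, scale=1.0 / beta_post)
    print(f"posterior mean lambda ≈ {mean_rate:.2e} /h, 95th percentile ≈ {upper95:.2e} /h")
    ```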

  16. Cyclic fatigue analysis of rocket thrust chambers. Volume 1: OFHC copper chamber low cycle fatigue

    NASA Technical Reports Server (NTRS)

    Miller, R. W.

    1974-01-01

    A three-dimensional finite element elasto-plastic strain analysis was performed for the throat section of a regeneratively cooled rocket combustion chamber. The analysis employed the RETSCP finite element computer program. The analysis included thermal and pressure loads, and the effects of temperature dependent material properties, to determine the strain range corresponding to the chamber operating cycle. The analysis was performed for chamber configuration and operating conditions corresponding to a hydrogen-oxygen combustion chamber which was fatigue tested to failure. The computed strain range at typical chamber operating conditions was used in conjunction with oxygen-free, high-conductivity (OFHC) copper isothermal fatigue test data to predict chamber low-cycle fatigue life.
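
    The report maps the computed strain range onto OFHC copper isothermal fatigue data; as a generic stand-in for that step, the sketch below inverts a Basquin/Coffin-Manson strain-life relation with placeholder constants (not calibrated OFHC copper properties).

    ```python
    from scipy.optimize import brentq

    def strain_amplitude(N, E=117e3, sigma_f=350.0, b=-0.1, eps_f=0.4, c=-0.6):
        """Generic strain-life (Basquin + Coffin-Manson) relation:
           delta_eps/2 = (sigma_f'/E)*(2N)**b + eps_f'*(2N)**c
        Constants are placeholders, not calibrated OFHC copper properties."""
        return (sigma_f / E) * (2 * N) ** b + eps_f * (2 * N) ** c

    def cycles_to_failure(delta_eps):
        """Invert the strain-life curve for a computed total strain range delta_eps."""
        return brentq(lambda N: strain_amplitude(N) - delta_eps / 2.0, 1.0, 1e9)

    print(f"predicted life ≈ {cycles_to_failure(delta_eps=0.02):.0f} cycles")
    ```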

  17. Simulating Fatigue Crack Growth in Spiral Bevel Pinion

    NASA Technical Reports Server (NTRS)

    Ural, Ani; Wawrzynek, Paul A.; Ingraffea, Anthony R.

    2003-01-01

    This project investigates computational modeling of fatigue crack growth in spiral bevel gears. Current work is a continuation of the previous efforts made to use the Boundary Element Method (BEM) to simulate tooth-bending fatigue failure in spiral bevel gears. This report summarizes new results predicting crack trajectory and fatigue life for a spiral bevel pinion using the Finite Element Method (FEM). Predicting crack trajectories is important in determining the failure mode of a gear. Cracks propagating through the rim may result in catastrophic failure, whereas the gear may remain intact if one tooth fails and this may allow for early detection of failure. Being able to predict crack trajectories is insightful for the designer. However, predicting growth of three-dimensional arbitrary cracks is complicated due to the difficulty of creating three-dimensional models, the computing power required, and absence of closed-form solutions of the problem. Another focus of this project was performing three-dimensional contact analysis of a spiral bevel gear set incorporating cracks. These analyses were significant in determining the influence of change of tooth flexibility due to crack growth on the magnitude and location of contact loads. This is an important concern since change in contact loads might lead to differences in SIFs and therefore result in alteration of the crack trajectory. Contact analyses performed in this report showed the expected trend of decreasing tooth loads carried by the cracked tooth with increasing crack length. The decrease in tooth loads led to differences between SIFs extracted from finite element contact analysis and finite element analysis with Hertz contact loads. This effect became more pronounced as the crack grew.

  18. Computing Reliabilities Of Ceramic Components Subject To Fracture

    NASA Technical Reports Server (NTRS)

    Nemeth, N. N.; Gyekenyesi, J. P.; Manderscheid, J. M.

    1992-01-01

    CARES calculates fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. Program uses results from commercial structural-analysis program (MSC/NASTRAN or ANSYS) to evaluate reliability of component in presence of inherent surface- and/or volume-type flaws. Computes measure of reliability by use of finite-element mathematical model applicable to multiple materials, in the sense that the model is made a function of the statistical characterizations of many ceramic materials. Reliability analysis uses element stress, temperature, area, and volume outputs, obtained from two-dimensional shell and three-dimensional solid isoparametric or axisymmetric finite elements. Written in FORTRAN 77.
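
    A minimal weakest-link sketch of the kind of volume-flaw calculation CARES performs from finite-element output might look like the following (two-parameter Weibull form, uniaxial element stresses, hypothetical values; the actual code also handles surface flaws and polyaxial stress states):

      from math import exp

      def survival_probability(elements, m, sigma_0):
          # elements: (stress, volume) pairs from FE output; m: Weibull modulus;
          # sigma_0: Weibull scale parameter in consistent units.
          risk = sum(vol * (max(stress, 0.0) / sigma_0) ** m for stress, vol in elements)
          return exp(-risk)

      elements = [(250.0, 1.2), (310.0, 0.8), (180.0, 2.5)]   # MPa, mm^3 (hypothetical)
      print(survival_probability(elements, m=10, sigma_0=400.0))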

  19. Design and Rationale of the Cognitive Intervention to Improve Memory in Heart Failure Patients Study.

    PubMed

    Pressler, Susan J; Giordani, Bruno; Titler, Marita; Gradus-Pizlo, Irmina; Smith, Dean; Dorsey, Susan G; Gao, Sujuan; Jung, Miyeon

    Memory loss is an independent predictor of mortality among heart failure patients. Twenty-three percent to 50% of heart failure patients have comorbid memory loss, but few interventions are available to treat the memory loss. The aims of this 3-arm randomized controlled trial were to (1) evaluate efficacy of computerized cognitive training intervention using BrainHQ to improve primary outcomes of memory and serum brain-derived neurotrophic factor levels and secondary outcomes of working memory, instrumental activities of daily living, and health-related quality of life among heart failure patients; (2) evaluate incremental cost-effectiveness of BrainHQ; and (3) examine depressive symptoms and genomic moderators of BrainHQ effect. A sample of 264 heart failure patients within 4 equal-sized blocks (normal/low baseline cognitive function and gender) will be randomly assigned to (1) BrainHQ, (2) active control computer-based crossword puzzles, and (3) usual care control groups. BrainHQ is an 8-week, 40-hour program individualized to each patient's performance. Data collection will be completed at baseline and at 10 weeks and 4 and 8 months. Descriptive statistics, mixed model analyses, and cost-utility analysis using intent-to-treat approach will be computed. This research will provide new knowledge about the efficacy of BrainHQ to improve memory and increase serum brain-derived neurotrophic factor levels in heart failure. If efficacious, the intervention will provide a new therapeutic approach that is easy to disseminate to treat a serious comorbid condition of heart failure.

  20. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
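
    The contrast between closed-form constant-rate models and end-to-end simulation can be illustrated with a crude Monte Carlo sketch (hypothetical numbers; ignores repair and is not the framework described above):

      import random

      def pair_loss_time(draw):
          # Data loss in a mirrored pair is crudely taken as the second failure
          # time (repair is deliberately ignored in this sketch).
          return max(draw(), draw())

      def system_mttdl(draw, n_pairs=100, trials=2000):
          # Mean time to first data loss across all mirrored pairs.
          return sum(min(pair_loss_time(draw) for _ in range(n_pairs))
                     for _ in range(trials)) / trials

      exp_draw = lambda: random.expovariate(1.0 / 1.0e5)     # constant rate, MTTF = 1e5 h
      weib_draw = lambda: random.weibullvariate(1.0e5, 0.7)  # decreasing-hazard alternative
      print(system_mttdl(exp_draw), system_mttdl(weib_draw))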

  1. An Analysis of Failure Handling in Chameleon, A Framework for Supporting Cost-Effective Fault Tolerant Services

    NASA Technical Reports Server (NTRS)

    Haakensen, Erik Edward

    1998-01-01

    The desire for low-cost reliable computing is increasing. Most current fault tolerant computing solutions are not very flexible, i.e., they cannot adapt to reliability requirements of newly emerging applications in business, commerce, and manufacturing. It is important that users have a flexible, reliable platform to support both critical and noncritical applications. Chameleon, under development at the Center for Reliable and High-Performance Computing at the University of Illinois, is a software framework for supporting cost-effective, adaptable, networked fault-tolerant services. This thesis details a simulation of fault injection, detection, and recovery in Chameleon. The simulation was written in C++ using the DEPEND simulation library. The results obtained from the simulation included the amount of overhead incurred by the fault detection and recovery mechanisms supported by Chameleon. In addition, information about fault scenarios from which Chameleon cannot recover was gained. The results of the simulation showed that both critical and noncritical applications can be executed in the Chameleon environment with a fairly small amount of overhead. No single point of failure from which Chameleon could not recover was found. Chameleon was also found to be capable of recovering from several multiple failure scenarios.

  2. Knowledge representation and user interface concepts to support mixed-initiative diagnosis

    NASA Technical Reports Server (NTRS)

    Sobelman, Beverly H.; Holtzblatt, Lester J.

    1989-01-01

    The Remote Maintenance Monitoring System (RMMS) provides automated support for the maintenance and repair of ModComp computer systems used in the Launch Processing System (LPS) at Kennedy Space Center. RMMS supports manual and automated diagnosis of intermittent hardware failures, providing an efficient means for accessing and analyzing the data generated by catastrophic failure recovery procedures. This paper describes the design and functionality of the user interface for interactive analysis of memory dump data, relating it to the underlying declarative representation of memory dumps.

  3. Development of an engineering analysis of progressive damage in composites during low velocity impact

    NASA Technical Reports Server (NTRS)

    Humphreys, E. A.

    1981-01-01

    A computerized, analytical methodology was developed to study damage accumulation during low velocity lateral impact of layered composite plates. The impact event was modeled as perfectly plastic with complete momentum transfer to the plate structure. A transient dynamic finite element approach was selected to predict the displacement time response of the plate structure. Composite ply and interlaminar stresses were computed at selected time intervals and subsequently evaluated to predict layer and interlaminar damage. The effects of damage on elemental stiffness were then incorporated back into the analysis for subsequent time steps. Damage predicted included fiber failure, matrix ply failure and interlaminar delamination.

  4. Interfacing LabVIEW With Instrumentation for Electronic Failure Analysis and Beyond

    NASA Technical Reports Server (NTRS)

    Buchanan, Randy K.; Bryan, Coleman; Ludwig, Larry

    1996-01-01

    The Laboratory Virtual Instrumentation Engineering Workstation (LabVIEW) software is designed such that equipment and processes related to control systems can be operationally linked and controlled by the use of a computer. Various processes within the failure analysis laboratories of NASA's Kennedy Space Center (KSC) demonstrate the need for modernization and, in some cases, automation, using LabVIEW. An examination of procedures and practices within the Failure Analysis Laboratory resulted in the conclusion that some device was necessary to elevate the potential users of LabVIEW to an operational level in minimum time. This paper outlines the process involved in creating a tutorial application to enable personnel to apply LabVIEW to their specific projects. Suggestions for furthering the extent to which LabVIEW is used are provided in the areas of data acquisition and process control.

  5. Failure of Non-Circular Composite Cylinders

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.

    2004-01-01

    In this study, a progressive failure analysis is used to investigate leakage in internally pressurized non-circular composite cylinders. This type of approach accounts for the localized loss of stiffness when material failure occurs at some location in a structure by degrading the local material elastic properties by a certain factor. The manner in which this degradation of material properties takes place depends on the failure modes, which are determined by the application of a failure criterion. The finite-element code STAGS, which has the capability to perform progressive failure analysis using different degradation schemes and failure criteria, is utilized to analyze laboratory scale, graphite-epoxy, elliptical cylinders with quasi-isotropic, circumferentially-stiff, and axially-stiff material orthotropies. The results are divided into two parts. The first part shows that leakage, which is assumed to develop if there is material failure in every layer at some axial and circumferential location within the cylinder, does not occur without failure of fibers. Moreover, before fibers begin to fail, only matrix tensile failures, or matrix cracking, take place, and at least one layer in all three cylinders studied remains uncracked, preventing the formation of a leakage path. That determination is corroborated by the use of different degradation schemes and various failure criteria. Among the degradation schemes investigated are the degradation of different engineering properties, the use of various degradation factors, the recursive or non-recursive degradation of the engineering properties, and the degradation of material properties using different computational approaches. The failure criteria used in the analysis include the noninteractive maximum stress criterion and the interactive Hashin and Tsai-Wu criteria. The second part of the results shows that leakage occurs due to a combination of matrix tensile and compressive, fiber tensile and compressive, and inplane shear failure modes in all three cylinders. Leakage develops after a relatively low amount of fiber damage, at about the same pressure for all three material orthotropies, and at approximately the same location.
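
    The property-degradation step of such a progressive failure analysis can be sketched as a ply-discount loop; the fragment below is illustrative only (hypothetical moduli, strengths, and degradation factor, and a maximum-stress check rather than the full set of criteria used with STAGS):

      DEGRADE = 0.1   # hypothetical degradation factor for failed-mode moduli

      def failed_modes(stress, strength):
          # Maximum-stress check for one ply (tension/shear only, for brevity).
          modes = set()
          if stress["s11"] > strength["Xt"]: modes.add("fiber")
          if stress["s22"] > strength["Yt"]: modes.add("matrix")
          if abs(stress["s12"]) > strength["S"]: modes.add("shear")
          return modes

      def degrade(props, modes):
          if "fiber" in modes:  props["E1"]  *= DEGRADE
          if "matrix" in modes: props["E2"]  *= DEGRADE
          if "shear" in modes:  props["G12"] *= DEGRADE
          return props

      props    = {"E1": 140e3, "E2": 10e3, "G12": 5e3}       # MPa, hypothetical
      strength = {"Xt": 2000.0, "Yt": 50.0, "S": 80.0}       # MPa, hypothetical
      stress   = {"s11": 900.0, "s22": 65.0, "s12": 30.0}    # from a notional FE step
      print(degrade(props, failed_modes(stress, strength)))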

  6. Software For Computing Reliability Of Other Software

    NASA Technical Reports Server (NTRS)

    Nikora, Allen; Antczak, Thomas M.; Lyu, Michael

    1995-01-01

    Computer Aided Software Reliability Estimation (CASRE) computer program developed for use in measuring reliability of other software. Easier for non-specialists in reliability to use than many other currently available programs developed for same purpose. CASRE incorporates mathematical modeling capabilities of public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in Windows software environment. Provides menu-driven command interface; enabling and disabling of menu options guides user through (1) selection of set of failure data, (2) execution of mathematical model, and (3) analysis of results from model. Written in C language.
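
    One reliability-growth model commonly included in such tools is the Goel-Okumoto NHPP; the sketch below evaluates it for hypothetical fitted parameters (illustrative only, not CASRE or SMERFS output):

      from math import exp

      def expected_failures(t, a, b):
          # Mean cumulative failures by test time t: m(t) = a * (1 - e^(-b*t)).
          return a * (1.0 - exp(-b * t))

      def failure_intensity(t, a, b):
          # Instantaneous failure intensity: lambda(t) = a * b * e^(-b*t).
          return a * b * exp(-b * t)

      a, b, t = 120.0, 0.015, 200.0   # hypothetical parameters and test hours
      found = expected_failures(t, a, b)
      print(found, a - found, failure_intensity(t, a, b))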

  7. Panel Stiffener Debonding Analysis using a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.

    2008-01-01

    A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.
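
    A compact sketch of turning the computed mixed-mode energy release rates into a failure index is shown below; the Benzeggagh-Kenane interpolation used here is one common mixed-mode law and is not necessarily the criterion used in the report, and the toughness values are hypothetical:

      def bk_toughness(G_I, G_II, G_Ic, G_IIc, eta):
          # Mixed-mode fracture toughness from the B-K interpolation.
          return G_Ic + (G_IIc - G_Ic) * (G_II / (G_I + G_II)) ** eta

      def failure_index(G_I, G_II, G_Ic=0.24, G_IIc=0.74, eta=2.0):
          # Index >= 1 indicates predicted delamination growth.
          return (G_I + G_II) / bk_toughness(G_I, G_II, G_Ic, G_IIc, eta)

      # Hypothetical values at one point along the delamination front (kJ/m^2).
      print(failure_index(G_I=0.10, G_II=0.30))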

  8. Panel-Stiffener Debonding and Analysis Using a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.

    2007-01-01

    A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.

  9. Nonlinear analysis for the response and failure of compression-loaded angle-ply laminates with a hole

    NASA Technical Reports Server (NTRS)

    Mathison, Steven R.; Herakovich, Carl T.; Pindera, Marek-Jerzy; Shuart, Mark J.

    1987-01-01

    The objective was to determine the effect of nonlinear material behavior on the response and failure of unnotched and notched angle-ply laminates under uniaxial compressive loading. The endochronic theory was chosen as the constitutive theory to model the AS4/3502 graphite-epoxy material system. Three-dimensional finite element analysis incorporating the endochronic theory was used to determine the stresses and strains in the laminates. An incremental/iterative initial strain algorithm was used in the finite element program. To increase computational efficiency, a 180 deg rotational symmetry relationship was utilized and the finite element program was vectorized to run on a supercomputer. Laminate response was compared to experimentation revealing excellent agreement for both the unnotched and notched angle-ply laminates. Predicted stresses in the region of the hole were examined and are presented, comparing linear elastic analysis to the inelastic endochronic theory analysis. A failure analysis of the unnotched and notched laminates was performed using the quadratic tensor polynomial. Predicted fracture loads compared well with experimentation for the unnotched laminates, but were very conservative in comparison with experiments for the notched laminates.
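
    The quadratic tensor polynomial referred to here is commonly written in Tsai-Wu form; a plane-stress sketch with hypothetical strengths and the usual approximation for the interaction term is given below (illustrative only):

      from math import sqrt

      def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
          # Compressive strengths entered as positive magnitudes; index >= 1
          # indicates predicted ply failure.
          F1, F2   = 1/Xt - 1/Xc, 1/Yt - 1/Yc
          F11, F22 = 1/(Xt*Xc),   1/(Yt*Yc)
          F66      = 1/S**2
          F12      = -0.5 * sqrt(F11 * F22)   # common approximation
          return F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2 + F66*t12**2 + 2*F12*s1*s2

      # Hypothetical graphite-epoxy ply stresses and strengths (MPa).
      print(tsai_wu_index(-600.0, 20.0, 40.0, 1500.0, 1200.0, 40.0, 200.0, 70.0))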

  10. Computational mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raboin, P J

    1998-01-01

    The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

  11. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qiqi, E-mail: qiqi@mit.edu; Hu, Rui, E-mail: hurui@mit.edu; Blonigan, Patrick, E-mail: blonigan@mit.edu

    2014-06-15

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.
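
    Schematically (notation condensed here, and weighting and sign conventions vary between presentations), the ill-conditioned tangent initial value problem is replaced by a least squares problem over the whole trajectory:

      \[
        \min_{v,\,\eta}\ \tfrac{1}{2}\int_0^T \left( \lVert v(t)\rVert^2 + \alpha^2\,\eta(t)^2 \right) dt
        \quad \text{s.t.} \quad
        \frac{dv}{dt} = \frac{\partial f}{\partial u}\,v + \frac{\partial f}{\partial s} + \eta\, f(u),
      \]

    where $f$ is the governing dynamics, $v$ the shadowing perturbation, and $\eta$ a time-dilation variable; the resulting $(v, \eta)$ yield a derivative of the long-time average that stays bounded for chaotic trajectories.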

  12. Bounding the Failure Probability Range of Polynomial Systems Subject to P-box Uncertainties

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper proposes a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on the uncertainty. Uncertainty is prescribed by probability boxes, also known as p-boxes, whose distribution functions have free or fixed functional forms. An approach based on the Bernstein expansion of polynomials and optimization is proposed. In particular, we search for the elements of a multi-dimensional p-box that minimize (i.e., the best-case) and maximize (i.e., the worst-case) the probability of inner and outer bounding sets of the failure domain. This technique yields intervals that bound the range of failure probabilities. The offset between this bounding interval and the actual failure probability range can be made arbitrarily tight with additional computational effort.

  13. Simulating Progressive Damage of Notched Composite Laminates with Various Lamination Schemes

    NASA Astrophysics Data System (ADS)

    Mandal, B.; Chakrabarti, A.

    2017-05-01

    A three dimensional finite element based progressive damage model has been developed for the failure analysis of notched composite laminates. The material constitutive relations and the progressive damage algorithms are implemented into finite element code ABAQUS using user-defined subroutine UMAT. The existing failure criteria for the composite laminates are modified by including the failure criteria for fiber/matrix shear damage and delamination effects. The proposed numerical model is quite efficient and simple compared to other progressive damage models available in the literature. The efficiency of the present constitutive model and the computational scheme is verified by comparing the simulated results with the results available in the literature. A parametric study has been carried out to investigate the effect of change in lamination scheme on the failure behaviour of notched composite laminates.

  14. Fault tree analysis for system modeling in case of intentional EMI

    NASA Astrophysics Data System (ADS)

    Genender, E.; Mleczko, M.; Döring, O.; Garbe, H.; Potthast, S.

    2011-08-01

    The complexity of modern systems on the one hand and the rising threat of intentional electromagnetic interference (IEMI) on the other hand increase the necessity for systematic risk analysis. Most of the problems cannot be treated deterministically since slight changes in the configuration (source, position, polarization, ...) can dramatically change the outcome of an event. For that purpose, methods known from probabilistic risk analysis can be applied. One of the most common approaches is the fault tree analysis (FTA). The FTA is used to determine the system failure probability and also the main contributors to its failure. In this paper the fault tree analysis is introduced and a possible application of that method is shown using a small computer network as an example. The constraints of this method are explained and conclusions for further research are drawn.
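
    At its simplest, the top-event probability of a fault tree is assembled from basic-event probabilities through AND and OR gates; the sketch below assumes independent events and hypothetical probabilities (real FTA tools work with minimal cut sets and common-cause factors):

      from functools import reduce

      def and_gate(probs):
          # All inputs must fail.
          return reduce(lambda acc, p: acc * p, probs, 1.0)

      def or_gate(probs):
          # At least one input fails.
          return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

      # Hypothetical IEMI example: a workstation fails if its power supply is
      # disturbed OR both redundant network links are disturbed.
      print(or_gate([0.02, and_gate([0.1, 0.1])]))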

  15. A three-dimensional finite-element thermal/mechanical analytical technique for high-performance traveling wave tubes

    NASA Technical Reports Server (NTRS)

    Bartos, Karen F.; Fite, E. Brian; Shalkhauser, Kurt A.; Sharp, G. Richard

    1991-01-01

    Current research in high-efficiency, high-performance traveling wave tubes (TWT's) has led to the development of novel thermal/mechanical computer models for use with helical slow-wave structures. A three-dimensional, finite element computer model and analytical technique were used to study the structural integrity and thermal operation of a high-efficiency, diamond-rod, K-band TWT designed for use in advanced space communications systems. This analysis focused on the slow-wave circuit in the radiofrequency section of the TWT, where an inherent localized heating problem existed and where failures were observed during the earlier cold compression, or 'coining', fabrication process, a technique that shows great potential for future TWT development efforts. For this analysis, a three-dimensional, finite element model was used along with MARC, a commercially available finite element code, to simulate the fabrication of a diamond-rod TWT. This analysis was conducted by using component and material specifications consistent with actual TWT fabrication and was verified against empirical data. The analysis is nonlinear owing to material plasticity introduced by the forming process and also to geometric nonlinearities presented by the component assembly configuration. The computer model was developed by using the high-efficiency, K-band TWT design but is general enough to permit similar analyses to be performed on a wide variety of TWT designs and styles. The results of the TWT operating condition and structural failure mode analysis, as well as a comparison of analytical results to test data, are presented.

  16. A three-dimensional finite-element thermal/mechanical analytical technique for high-performance traveling wave tubes

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.; Bartos, Karen F.; Fite, E. B.; Sharp, G. R.

    1992-01-01

    Current research in high-efficiency, high-performance traveling wave tubes (TWT's) has led to the development of novel thermal/mechanical computer models for use with helical slow-wave structures. A three-dimensional, finite element computer model and analytical technique were used to study the structural integrity and thermal operation of a high-efficiency, diamond-rod, K-band TWT designed for use in advanced space communications systems. This analysis focused on the slow-wave circuit in the radiofrequency section of the TWT, where an inherent localized heating problem existed and where failures were observed during the earlier cold compression, or 'coining', fabrication process, a technique that shows great potential for future TWT development efforts. For this analysis, a three-dimensional, finite element model was used along with MARC, a commercially available finite element code, to simulate the fabrication of a diamond-rod TWT. This analysis was conducted by using component and material specifications consistent with actual TWT fabrication and was verified against empirical data. The analysis is nonlinear owing to material plasticity introduced by the forming process and also to geometric nonlinearities presented by the component assembly configuration. The computer model was developed by using the high-efficiency, K-band TWT design but is general enough to permit similar analyses to be performed on a wide variety of TWT designs and styles. The results of the TWT operating condition and structural failure mode analysis, as well as a comparison of analytical results to test data, are presented.

  17. Influence of Finite Element Size in Residual Strength Prediction of Composite Structures

    NASA Technical Reports Server (NTRS)

    Satyanarayana, Arunkumar; Bogert, Philip B.; Karayev, Kazbek Z.; Nordman, Paul S.; Razi, Hamid

    2012-01-01

    The sensitivity of failure load to the element size used in a progressive failure analysis (PFA) of carbon composite center notched laminates is evaluated. The sensitivity study employs a PFA methodology previously developed by the authors consisting of Hashin-Rotem intra-laminar fiber and matrix failure criteria and a complete stress degradation scheme for damage simulation. The approach is implemented with a user defined subroutine in the ABAQUS/Explicit finite element package. The effect of element size near the notch tips on residual strength predictions was assessed for a brittle failure mode with a parametric study that included three laminates of varying material system, thickness and stacking sequence. The study resulted in the selection of an element size of 0.09 in. x 0.09 in., which was later used for predicting crack paths and failure loads in sandwich panels and monolithic laminated panels. Comparison of predicted crack paths and failure loads for these panels agreed well with experimental observations. Additionally, the element size vs. normalized failure load relationship, determined in the parametric study, was used to evaluate strength-scaling factors for three different element sizes. The failure loads predicted with all three element sizes converged to the value corresponding to the 0.09 in. x 0.09 in. element size. Though preliminary in nature, the strength-scaling concept has the potential to greatly reduce the computational time required for PFA and can enable the analysis of large scale structural components where failure is dominated by fiber failure in tension.

  18. Ku-band signal design study. [space shuttle orbiter data processing network

    NASA Technical Reports Server (NTRS)

    Rubin, I.

    1978-01-01

    Analytical tools, methods and techniques for assessing the design and performance of the space shuttle orbiter data processing system (DPS) are provided. The computer data processing network is evaluated in the key areas of queueing behavior, synchronization, and network reliability. The structure of the data processing network is described as well as the system operation principles and the network configuration. The characteristics of the computer systems are indicated. System reliability measures are defined and studied. System and network invulnerability measures are computed. Communication path and network failure analysis techniques are included.

  19. Software reliability models for fault-tolerant avionics computers and related topics

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1987-01-01

    Software reliability research is briefly described. General research topics are reliability growth models, quality of software reliability prediction, the complete monotonicity property of reliability growth, conceptual modelling of software failure behavior, assurance of ultrahigh reliability, and analysis techniques for fault-tolerant systems.

  20. NRL Fact Book

    DTIC Science & Technology

    1985-04-01

    characteristics of targets Tank 9.1 m (30 ft) in diameter by 6.7 m (22 ft) deep, automated with computer control and analysis for detailed studies of acoustic...structures; and conducts experiments in the deep ocean, in acoustically shallow water, and in the Arctic. The Division carries out theoretical and...Laser Materials-Application Center Failure Analysis and Fractography Staff Research Activity Areas Environmental Effects Microstructural characterization

  1. Material and morphology parameter sensitivity analysis in particulate composite materials

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyu; Oskay, Caglar

    2017-12-01

    This manuscript presents a novel parameter sensitivity analysis framework for damage and failure modeling of particulate composite materials subjected to dynamic loading. The proposed framework employs global sensitivity analysis to study the variance in the failure response as a function of model parameters. In view of the computational complexity of performing thousands of detailed microstructural simulations to characterize sensitivities, Gaussian process (GP) surrogate modeling is incorporated into the framework. In order to capture the discontinuity in response surfaces, the GP models are integrated with a support vector machine classification algorithm that identifies the discontinuities within response surfaces. The proposed framework is employed to quantify variability and sensitivities in the failure response of polymer bonded particulate energetic materials under dynamic loads to material properties and morphological parameters that define the material microstructure. Particular emphasis is placed on the identification of sensitivity to interfaces between the polymer binder and the energetic particles. The proposed framework has been demonstrated to identify the most consequential material and morphological parameters under vibrational and impact loads.

  2. Rock failure analysis by combined thermal weakening and water jet impact

    NASA Technical Reports Server (NTRS)

    Nayfeh, A. H.

    1976-01-01

    The influence of preheating on the initiation of fracture in rocks subjected to the impingement of a continuous water jet is studied. Preheating the rock is assumed to degrade its mechanical properties and strength in accordance with existing experimental data. The water jet is assumed to place a quasi-static loading on the surface of the rock. The loading is approximated by elementary functions which permit analytic computation of the induced stresses in a rock half-space. The resulting stresses are subsequently coupled with the Griffith criteria for tensile failure to estimate the change, due to heating, in the critical stagnation pressure and velocity of the water jet required to cause failure in the rock.

  3. A fuzzy set approach for reliability calculation of valve controlling electric actuators

    NASA Astrophysics Data System (ADS)

    Karmachev, D. P.; Yefremov, A. A.; Luneva, E. E.

    2017-02-01

    Oil and gas equipment, and electric actuators in particular, frequently perform in various operational modes and under dynamic environmental conditions. These factors affect equipment reliability measures in a vague, uncertain way. To eliminate the ambiguity, reliability model parameters could be defined as fuzzy numbers. We suggest a technique that allows constructing fundamental fuzzy-valued performance reliability measures based on an analysis of electric actuator failure data in accordance with the amount of work completed before the failure, instead of failure time. Also, this paper provides a computational example of fuzzy-valued reliability and hazard rate functions, assuming the Kumaraswamy complementary Weibull geometric distribution as a lifetime (reliability) model for electric actuators.

  4. Development of a parallel FE simulator for modeling the whole trans-scale failure process of rock from meso- to engineering-scale

    NASA Astrophysics Data System (ADS)

    Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao

    2017-01-01

    Multi-scale high-resolution modeling of rock failure process is a powerful means in modern rock mechanics studies to reveal the complex failure mechanism and to evaluate engineering risks. However, multi-scale continuous modeling of rock, from deformation, damage to failure, has raised high requirements on the design, implementation scheme and computation capacity of the numerical software system. This study is aimed at developing the parallel finite element procedure, a parallel rock failure process analysis (RFPA) simulator that is capable of modeling the whole trans-scale failure process of rock. Based on the statistical meso-damage mechanical method, the RFPA simulator is able to construct heterogeneous rock models with multiple mechanical properties, deal with and represent the trans-scale propagation of cracks, in which the stress and strain fields are solved for the damage evolution analysis of representative volume element by the parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Windows - Linux interactive platform. A numerical model is built to test the parallel performance of FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, and field-scale net fracture spacing and engineering-scale rock slope examples, respectively. The simulation results indicate that relatively high speedup and computation efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In laboratory-scale simulation, the well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, can be reproduced. In field-scale simulation, the formation process of net fracture spacing from initiation, propagation to saturation can be revealed completely. In engineering-scale simulation, the whole progressive failure process of the rock slope can be well modeled. It is shown that the parallel FE simulator developed in this study is an efficient tool for modeling the whole trans-scale failure process of rock from meso- to engineering-scale.

  5. An Example of Concurrent Engineering

    NASA Technical Reports Server (NTRS)

    Rowe, Sidney; Whitten, David; Cloyd, Richard; Coppens, Chris; Rodriguez, Pedro

    1998-01-01

    The Collaborative Engineering Design and Analysis Room (CEDAR) facility allows on-the-spot design review capability for any project during all phases of development. The required disciplines assemble in this facility to work on any problems (analysis, manufacturing, inspection, etc.) associated with a particular design. A small highly focused team of specialists can meet in this room to better expedite the process of developing a solution to an engineering task within the framework of the constraints that are unique to each discipline. This facility provides the engineering tools and translators to develop a concept within the confines of the room or with remote team members that could access the team's data from other locations. The CEDAR area is envisioned as excellent for failure investigation meetings to be conducted where the computer capabilities can be utilized in conjunction with the Smart Board display to develop failure trees, brainstorm failure modes, and evaluate possible solutions.

  6. An analysis of fiber-matrix interface failure stresses for a range of ply stress states

    NASA Technical Reports Server (NTRS)

    Crews, J. H.; Naik, R. A.; Lubowinski, S. J.

    1993-01-01

    A graphite/bismaleimide laminate was prepared without the usual fiber treatment and was tested over a wide range of stress states to measure its ply cracking strength. These tests were conducted using off-axis flexure specimens and produced fiber-matrix interface failure data over a correspondingly wide range of interface stress states. The absence of fiber treatment weakened the fiber-matrix interfaces and allowed these tests to be conducted at load levels that did not yield the matrix. An elastic micromechanics computer code was used to calculate the fiber-matrix interface stresses at failure. Two different fiber-array models (square and diamond) were used in these calculations to analyze the effects of fiber arrangement as well as stress state on the critical interface stresses at failure. This study showed that both fiber-array models were needed to analyze interface stresses over the range of stress states. A linear equation provided a close fit to these critical stress combinations and, thereby, provided a fiber-matrix interface failure criterion. These results suggest that prediction procedures for laminate ply cracking can be based on micromechanics stress analyses and appropriate fiber-matrix interface failure criteria. However, typical structural laminates may require elastoplastic stress analysis procedures that account for matrix yielding, especially for shear-dominated ply stress states.

  7. Narrowing the scope of failure prediction using targeted fault load injection

    NASA Astrophysics Data System (ADS)

    Jordan, Paul L.; Peterson, Gilbert L.; Lin, Alan C.; Mendenhall, Michael J.; Sellers, Andrew J.

    2018-05-01

    As society becomes more dependent upon computer systems to perform increasingly critical tasks, ensuring that those systems do not fail becomes increasingly important. Many organizations depend heavily on desktop computers for day-to-day operations. Unfortunately, the software that runs on these computers is written by humans and, as such, is still subject to human error and consequent failure. A natural solution is to use statistical machine learning to predict failure. However, since failure is still a relatively rare event, obtaining labelled training data to train these models is not a trivial task. This work presents new simulated fault-inducing loads that extend the focus of traditional fault injection techniques to predict failure in the Microsoft enterprise authentication service and Apache web server. These new fault loads were successful in creating failure conditions that were identifiable using statistical learning methods, with fewer irrelevant faults being created.

  8. Software analysis handbook: Software complexity analysis and software reliability estimation and prediction

    NASA Technical Reports Server (NTRS)

    Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron

    1994-01-01

    This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.

  9. Strength determination of brittle materials as curved monolithic structures.

    PubMed

    Hooi, P; Addison, O; Fleming, G J P

    2014-04-01

    The dental literature is replete with "crunch the crown" monotonic load-to-failure studies of all-ceramic materials despite fracture behavior being dominated by the indenter contact surface. Load-to-failure data provide no information on stress patterns, and comparisons among studies are impossible owing to variable testing protocols. We investigated the influence of nonplanar geometries on the maximum principal stress of curved discs tested in biaxial flexure in the absence of analytical solutions. Radii of curvature analogous to elements of complex dental geometries and a finite element analysis method were integrated with experimental testing as a surrogate solution to calculate the maximum principal stress at failure. We employed soda-lime glass discs, a planar control (group P, n = 20), with curvature applied to the remaining discs by slump forming to different radii of curvature (30, 20, 15, and 10 mm; groups R30-R10). The mean deflection (group P) and radii of curvature obtained on slumping (groups R30-R10) were determined by profilometry before and after annealing and surface treatment protocols. Finite element analysis used the biaxial flexure load-to-failure data to determine the maximum principal stress at failure. Mean maximum principal stresses and load to failure were analyzed with one-way analyses of variance and post hoc Tukey tests (α = 0.05). The measured radii of curvature differed significantly among groups, and the radii of curvature were not influenced by annealing. Significant increases in the mean load to failure were observed as the radius of curvature was reduced. The maximum principal stress did not demonstrate sensitivity to radius of curvature. The findings highlight the sensitivity of failure load to specimen shape. The data also support the synergistic use of bespoke computational analysis with conventional mechanical testing and highlight a solution to complications with complex specimen geometries.

  10. Multidisciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  11. Multi-Disciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song

    1997-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  12. Surface flaw reliability analysis of ceramic components with the SCARE finite element postprocessor program

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, John P.; Nemeth, Noel N.

    1987-01-01

    The SCARE (Structural Ceramics Analysis and Reliability Evaluation) computer program on statistical fast fracture reliability analysis with quadratic elements for volume distributed imperfections is enhanced to include the use of linear finite elements and the capability of designing against concurrent surface flaw induced ceramic component failure. The SCARE code is presently coupled as a postprocessor to the MSC/NASTRAN general purpose, finite element analysis program. The improved version now includes the Weibull and Batdorf statistical failure theories for both surface and volume flaw based reliability analysis. The program uses the two-parameter Weibull fracture strength cumulative failure probability distribution model with the principle of independent action for poly-axial stress states, and Batdorf's shear-sensitive as well as shear-insensitive statistical theories. The shear-sensitive surface crack configurations include the Griffith crack and Griffith notch geometries, using the total critical coplanar strain energy release rate criterion to predict mixed-mode fracture. Weibull material parameters based on both surface and volume flaw induced fracture can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and grouped fracture data. The statistical fast fracture theories for surface flaw induced failure, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.

  13. A computer program for cyclic plasticity and structural fatigue analysis

    NASA Technical Reports Server (NTRS)

    Kalev, I.

    1980-01-01

    A computerized tool for the analysis of time independent cyclic plasticity structural response, life to crack initiation prediction, and crack growth rate prediction for metallic materials is described. Three analytical items are combined: the finite element method with its associated numerical techniques for idealization of the structural component, cyclic plasticity models for idealization of the material behavior, and damage accumulation criteria for the fatigue failure.

  14. Design of hat-stiffened composite panels loaded in axial compression

    NASA Astrophysics Data System (ADS)

    Paul, T. K.; Sinha, P. K.

    An integrated step-by-step analysis procedure for the design of axially compressed stiffened composite panels is outlined. The analysis makes use of the effective width concept. A computer code, BUSTCOP, is developed incorporating various aspects of buckling such as skin buckling, stiffener crippling and column buckling. Other salient features of the computer code include capabilities for generation of data based on micromechanics theories and hygrothermal analysis, and for prediction of strength failure. Parametric studies carried out on a hat-stiffened structural element indicate that, for all practical purposes, composite panels exhibit higher structural efficiency. Some hybrid laminates with outer layers made of aluminum alloy also show great promise for flight vehicle structural applications.

  15. Experimental investigation on the fracture behaviour of black shale by acoustic emission monitoring and CT image analysis during uniaxial compression

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Li, C. H.; Hu, Y. Z.

    2018-04-01

    Plenty of mechanical experiments have been done to investigate the deformation and failure characteristics of shale; however, the anisotropic failure mechanism has not been well studied. Here, laboratory Uniaxial Compressive Strength tests on cylindrical shale samples obtained by drilling at different inclinations to bedding plane were performed. The failure behaviours of the shale samples were studied by real-time acoustic emission (AE) monitoring and post-test X-ray computer tomography (CT) analysis. The experimental results suggest that the pronounced bedding planes of shale have a great influence on the mechanical properties and AE patterns. The AE counts and AE cumulative energy release curves clearly demonstrate different morphology, and the 'U'-shaped curve relationship between the AE counts, AE cumulative energy release and bedding inclination was first documented. The post-test CT image analysis shows the crack patterns via 2-D image reconstructions, an index of stimulated fracture density is defined to represent the anisotropic failure mode of shale. What is more, the most striking finding is that the AE monitoring results are in good agreement with the CT analysis. The structural difference in the shale sample is the controlling factor resulting in the anisotropy of AE patterns. The pronounced bedding structure in the shale formation results in an anisotropy of elasticity, strength and AE information from which the changes in strength dominate the entire failure pattern of the shale samples.

  16. Sensitivity analysis of bridge health index to element failure and element conditions.

    DOT National Transportation Integrated Search

    2009-11-01

    Bridge Health Index (BHI) is a bridge performance measure based on the condition of the bridge elements. It : is computed as the ratio of remaining value of the bridge structure to the initial value of the structure. Since it : is expressed as a perc...
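
    The ratio described above can be written as a one-line calculation; the sketch below uses hypothetical element values and condition factors and is not the actual BHI formulation used by any agency:

      def bridge_health_index(elements):
          # elements: (initial_element_value, condition_factor in [0, 1]) pairs;
          # BHI = 100 * remaining value / initial value.
          remaining = sum(value * factor for value, factor in elements)
          initial   = sum(value for value, _ in elements)
          return 100.0 * remaining / initial

      elements = [(500000, 0.9), (200000, 0.6), (50000, 0.3)]   # deck, girders, joints
      print(bridge_health_index(elements))   # ~78 for these illustrative numbers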

  17. Stress and Strain State Analysis of Defective Pipeline Portion

    NASA Astrophysics Data System (ADS)

    Burkov, P. V.; Burkova, S. P.; Knaub, S. A.

    2015-09-01

    The paper presents computer simulation results of the pipeline having defects in a welded joint. Autodesk Inventor software is used for simulation of the stress and strain state of the pipeline. Places of the possible failure and stress concentrators are predicted on the defective portion of the pipeline.

  18. Analysis of silicon stress/strain relationships

    NASA Technical Reports Server (NTRS)

    Dillon, O.

    1985-01-01

    In the study of stress-strain relationships in silicon ribbon, numerous solutions were calculated for stresses, strain rates, and dislocation densities through the use of the Sumino model. It was concluded that many cases of failure of computer solutions to converge are analytical manifestations of shear bands (Luder's band) observed in experiments.

  19. Performance analysis of a fault inferring nonlinear detection system algorithm with integrated avionics flight data

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.; Morrell, F. R.

    1985-01-01

    This paper presents the performance analysis results of a fault inferring nonlinear detection system (FINDS) using integrated avionics sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment. First, an overview of the FINDS algorithm structure is given. Then, aircraft state estimate time histories and statistics for the flight data sensors are discussed. This is followed by an explanation of modifications made to the detection and decision functions in FINDS to improve false alarm and failure detection performance. Next, the failure detection and false alarm performance of the FINDS algorithm are analyzed by injecting bias failures into fourteen sensor outputs over six repetitive runs of the five minutes of flight data. Results indicate that the detection speed, failure level estimation, and false alarm performance show a marked improvement over the previously reported simulation runs. In agreement with earlier results, detection speed is faster for filter measurement sensors such as MLS than for filter input sensors such as flight control accelerometers. Finally, the progress in modifications of the FINDS algorithm design to accommodate flight computer constraints is discussed.

  20. Modelling river bank retreat by combining fluvial erosion, seepage and mass failure

    NASA Astrophysics Data System (ADS)

    Dapporto, S.; Rinaldi, M.

    2003-04-01

    Streambank erosion processes contribute significantly to the sediment yielded from a river system and represent an important issue in the contexts of soil degradation and river management. Bank retreat is controlled by a complex interaction of hydrologic, geotechnical, and hydraulic processes. The capability of modelling these different components allows for a full reconstruction and comprehension of the causes and rates of bank erosion. River bank retreat during a single flow event has been modelled by combining simulation of fluvial erosion, seepage, and mass failures. The study site, along the Sieve River (Central Italy), has been subject to extensive research, including monitoring of pore water pressures for a period of 4 years. The simulation reconstructs fairly faithfully the observed changes, and is used to: a) test the potentiality and discuss advantages and limitations of this type of methodology for modelling bank retreat; b) quantify the contribution and mutual role of the different processes determining bank retreat. The hydrograph of the event is divided in a series of time steps. Modelling of the riverbank retreat includes for each step the following components: a) fluvial erosion and consequent changes in bank geometry; b) finite element seepage analysis; c) stability analysis by limit equilibrium method. Direct fluvial shear erosion is computed using empirically derived relationships expressing lateral erosion rate as a function of the excess of shear stress to the critical entrainment value for the different materials along the bank profile. Lateral erosion rate has been calibrated on the basis of the total bank retreat measured by digital terrestrial photogrammetry. Finite element seepage analysis is then conducted to reconstruct the saturated and unsaturated flow within the bank and the pore water pressure distribution for each time step. The safety factor for mass failures is then computed, using the pore water pressure distribution obtained by the seepage analysis, and the geometry of the upper bank is modified in case of failure.
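
    The fluvial-erosion and stability components mentioned above are often expressed with an excess-shear-stress erosion law and a limit-equilibrium safety factor; the sketch below is illustrative only (hypothetical coefficients, a planar slip surface, and none of the calibrated rates or finite-element seepage pressures used in the study):

      from math import radians, sin, cos, tan

      def lateral_erosion_rate(tau, tau_c, k_d):
          # Erosion rate = k_d * (tau - tau_c) once shear exceeds the critical value.
          return k_d * max(tau - tau_c, 0.0)

      def planar_safety_factor(cohesion, phi_deg, weight, beta_deg, pore_force=0.0):
          # Resisting over driving forces (per unit length) on a planar slip surface.
          beta, phi = radians(beta_deg), radians(phi_deg)
          normal = weight * cos(beta) - pore_force
          return (cohesion + normal * tan(phi)) / (weight * sin(beta))

      print(lateral_erosion_rate(tau=12.0, tau_c=5.0, k_d=1e-6))   # m/s
      print(planar_safety_factor(5.0, 30.0, weight=80.0, beta_deg=60.0, pore_force=10.0))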

  1. High throughput computing: a solution for scientific analysis

    USGS Publications Warehouse

    O'Donnell, M.

    2011-01-01

    handle job failures due to hardware, software, or network interruptions (obviating the need to manually resubmit the job after each stoppage); be affordable; and most importantly, allow us to complete very large, complex analyses that otherwise would not even be possible. In short, we envisioned a job-management system that would take advantage of unused FORT CPUs within a local area network (LAN) to effectively distribute and run highly complex analytical processes. What we found was a solution that uses High Throughput Computing (HTC) and High Performance Computing (HPC) systems to do exactly that (Figure 1).

  2. A computational method for comparing the behavior and possible failure of prosthetic implants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nielsen, C.; Hollerbach, K.; Perfect, S.

    1995-05-01

    Prosthetic joint implants currently in use exhibit high failure rates. Realistic computer modeling of prosthetic implants provides an opportunity for orthopedic biomechanics researchers and physicians to understand possible in vivo failure modes, without having to resort to lengthy and costly clinical trials. The research presented here is part of a larger effort to develop realistic models of implanted joint prostheses. The example used here is the thumb carpo-metacarpal (cmc) joint. The work, however, can be applied to any other human joints for which prosthetic implants have been designed. Preliminary results of prosthetic joint loading, without surrounding human tissue (i.e., simulating conditions under which the prosthetic joint has not yet been implanted into the human joint), are presented, based on a three-dimensional, nonlinear finite element analysis of three different joint implant designs.

  3. User's guide to the Reliability Estimation System Testbed (REST)

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

    1992-01-01

    The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.
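
    To make the modularization idea concrete, the sketch below combines per-component reliability modules into a system value using generic series/parallel structure functions; it is not the RML language itself, and the failure rates are invented for illustration.

```python
# Combine component reliability modules into a system reliability (sketch).
from math import exp, prod

def reliability(failure_rate, t):
    """Exponential component reliability R(t) = exp(-lambda * t)."""
    return exp(-failure_rate * t)

def series(rs):      # system fails if any module fails
    return prod(rs)

def parallel(rs):    # system fails only if all redundant modules fail
    return 1.0 - prod(1.0 - r for r in rs)

t = 10.0                                        # mission time, hours
cpu = reliability(1e-4, t)
bus = reliability(5e-5, t)
sensors = parallel([reliability(2e-4, t)] * 3)  # triple-redundant sensor module
print(f"system reliability: {series([cpu, bus, sensors]):.6f}")
```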

  4. Multi-Dimensional Simulation of LWR Fuel Behavior in the BISON Fuel Performance Code

    NASA Astrophysics Data System (ADS)

    Williamson, R. L.; Capps, N. A.; Liu, W.; Rashid, Y. R.; Wirth, B. D.

    2016-11-01

    Nuclear fuel operates in an extreme environment that induces complex multiphysics phenomena occurring over distances ranging from inter-atomic spacing to meters, and time scales ranging from microseconds to years. To simulate this behavior requires a wide variety of material models that are often complex and nonlinear. The recently developed BISON code represents a powerful fuel performance simulation tool based on its material and physical behavior capabilities, finite-element versatility of spatial representation, and use of parallel computing. The code can operate in full three-dimensional (3D) mode, as well as in reduced two-dimensional (2D) modes, e.g., axisymmetric radial-axial (R-Z) or plane radial-circumferential (R-θ), to suit the application and to allow treatment of global and local effects. A BISON case study was used to illustrate analysis of Pellet Clad Mechanical Interaction failures from manufacturing defects using combined 2D and 3D analyses. The analysis involved commercial fuel rods and demonstrated successful computation of metrics of interest to fuel failures, including cladding peak hoop stress and strain energy density. In comparison with a failure threshold derived from power ramp tests, results corroborate industry analyses of the root cause of the pellet-clad interaction failures and illustrate the importance of modeling 3D local effects around fuel pellet defects, which can produce complex effects including cold spots in the cladding, stress concentrations, and hot spots in the fuel that can lead to enhanced cladding degradation such as hydriding, oxidation, CRUD formation, and stress corrosion cracking.

  5. Multi-Dimensional Simulation of LWR Fuel Behavior in the BISON Fuel Performance Code

    DOE PAGES

    Williamson, R. L.; Capps, N. A.; Liu, W.; ...

    2016-09-27

    Nuclear fuel operates in an extreme environment that induces complex multiphysics phenomena occurring over distances ranging from inter-atomic spacing to meters, and time scales ranging from microseconds to years. To simulate this behavior requires a wide variety of material models that are often complex and nonlinear. The recently developed BISON code represents a powerful fuel performance simulation tool based on its material and physical behavior capabilities, finite-element versatility of spatial representation, and use of parallel computing. The code can operate in full three-dimensional (3D) mode, as well as in reduced two-dimensional (2D) modes, e.g., axisymmetric radial-axial (R-Z) or plane radial-circumferential (R-θ), to suit the application and to allow treatment of global and local effects. A BISON case study was used in this paper to illustrate analysis of Pellet Clad Mechanical Interaction failures from manufacturing defects using combined 2D and 3D analyses. The analysis involved commercial fuel rods and demonstrated successful computation of metrics of interest to fuel failures, including cladding peak hoop stress and strain energy density. Finally, in comparison with a failure threshold derived from power ramp tests, results corroborate industry analyses of the root cause of the pellet-clad interaction failures and illustrate the importance of modeling 3D local effects around fuel pellet defects, which can produce complex effects including cold spots in the cladding, stress concentrations, and hot spots in the fuel that can lead to enhanced cladding degradation such as hydriding, oxidation, CRUD formation, and stress corrosion cracking.

  6. TU-FG-201-12: Designing a Risk-Based Quality Assurance Program for a Newly Implemented Y-90 Microspheres Procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vile, D; Zhang, L; Cuttino, L

    2016-06-15

    Purpose: To create a quality assurance program based upon a risk-based assessment of a newly implemented SirSpheres Y-90 procedure. Methods: A process map was created for a newly implemented SirSpheres procedure at a community hospital. The process map documented each step of this collaborative procedure, as well as the roles and responsibilities of each member. From the process map, different potential failure modes were determined as well as any current controls in place. From this list, a full failure mode and effects analysis (FMEA) was performed by grading each failure mode’s likelihood of occurrence, likelihood of detection, and potential severity. These numbers were then multiplied to compute the risk priority number (RPN) for each potential failure mode. Failure modes were then ranked based on their RPN. Additional controls were then added, with failure modes corresponding to the highest RPNs taking priority. Results: A process map was created that succinctly outlined each step in the SirSpheres procedure in its current implementation. From this, 72 potential failure modes were identified and ranked according to their associated RPN. Quality assurance controls and safety barriers were then added, with failure modes associated with the highest risk being addressed first. Conclusion: A quality assurance program was created from a risk-based assessment of the SirSpheres process. Process mapping and FMEA were effective in identifying potential high-risk failure modes for this new procedure, which were prioritized for new quality assurance controls. TG 100 recommends the fault tree analysis methodology to design a comprehensive and effective QC/QM program, yet we found that simply introducing additional safety barriers to address high-RPN failure modes makes the whole process simpler and safer.
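
    The ranking step described above reduces to a small calculation; a sketch is given below with invented failure modes and 1-10 grades, where the RPN is the product of the occurrence, severity, and detection scores.

```python
# Compute and rank risk priority numbers (RPN = occurrence x severity x detection).
failure_modes = [
    # (description, occurrence, severity, detection), each graded 1-10 (illustrative)
    ("Dose calculation transcription error", 4, 7, 5),
    ("Wrong activity drawn into delivery syringe", 3, 9, 4),
    ("Catheter position not verified before infusion", 2, 8, 3),
]

ranked = sorted(((o * s * d, desc) for desc, o, s, d in failure_modes), reverse=True)
for rpn, desc in ranked:
    print(f"RPN {rpn:4d}  {desc}")
```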

  7. Nest success, cause-specific nest failure, and hatchability of aquatic birds at selenium-contaminated Kesterson Reservoir and a reference site

    USGS Publications Warehouse

    Ohlendorf, Harry M.; Hothem, Roger L.; Welsh, Daniel

    1989-01-01

    During 1983-1985, we studied the reproductive success of several species of aquatic birds (coots, ducks, shorebirds, and grebes) nesting at two sites in Merced County, California: a selenium-contaminated site (Kesterson Reservoir) and a nearby reference site (Volta Wildlife Area). We used a computer program (MICROMORT) developed for the analysis of radiotelemetry data (Heisey and Fuller 1985) to estimate nest success and cause-specific failure rates, and then compared these parameters and hatchability between sites and among years. Nest success and causes of failure varied by species, site, and year. The most important causes of nest failure were usually predation, desertion, and water-level changes. However, embryotoxicosis (mortality, deformity, and lack of embryonic development) was the most important cause of nest failure in Eared Grebes (Podiceps nigricollis) at Kesterson Reservoir. Embryotoxicosis also reduced the hatchability of eggs of all other species at Kesterson in one or more years; embryonic mortality occurred rarely at Volta, and abnormalities were not observed.

  8. Probabilistic Analysis of a SiC/SiC Ceramic Matrix Composite Turbine Vane

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Nemeth, Noel N.; Brewer, David N.; Mital, Subodh

    2004-01-01

    To demonstrate the advanced composite materials technology under development within the Ultra-Efficient Engine Technology (UEET) Program, it was planned to fabricate, test, and analyze a turbine vane made entirely of silicon carbide-fiber-reinforced silicon carbide matrix composite (SiC/SiC CMC) material. The objective was to utilize a five-harness satin weave melt-infiltrated (MI) SiC/SiC composite material developed under this program to design and fabricate a stator vane that can endure 1000 hours of engine service conditions. The vane was designed such that the expected maximum stresses were kept within the proportional limit strength of the material. Any violation of this design requirement was considered a failure. This report presents results of a probabilistic analysis and reliability assessment of the vane. Probability of failure to meet the design requirements was computed. In the analysis, material properties, strength, and pressure loading were considered as random variables. The pressure loads were considered normally distributed with a nominal variation. A temperature profile on the vane was obtained by performing a computational fluid dynamics (CFD) analysis and was assumed to be deterministic. The results suggest that for the current vane design, the chance of not meeting design requirements is about 1.6 percent.
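
    A hedged sketch of the kind of calculation summarized above: a Monte Carlo estimate of the probability that stress exceeds the proportional limit strength when strength and pressure loading are treated as random variables. The distributions and numbers are placeholders, not the UEET vane data.

```python
# Monte Carlo probability of violating a strength-based design requirement (sketch).
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
strength = rng.normal(220.0, 15.0, n)   # proportional limit strength, MPa (illustrative)
load = rng.normal(1.00, 0.03, n)        # normally distributed pressure load factor
stress = 180.0 * load                   # nominal peak stress scaled by the load factor

p_fail = np.mean(stress > strength)
print(f"estimated probability of not meeting the design requirement: {p_fail:.3%}")
```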

  9. The application of CAD, CAE & CAM in development of butterfly valve’s disc

    NASA Astrophysics Data System (ADS)

    Asiff Razif Shah Ranjit, Muhammad; Hanie Abdullah, Nazlin

    2017-06-01

    The improved design of a butterfly valve disc is based on the concept of sandwich theory. Butterfly valves are widely used in industries such as oil and gas plants. The primary failure modes for valves are indented discs, keyway and shaft failure, and cavitation damage. With emphasis on the application of CAD, a new model of the butterfly valve’s disc structure was designed. The structure was then analysed using finite element analysis. Butterfly valve performance factors can be obtained by using Computational Fluid Dynamics (CFD) software to simulate the physics of fluid flow in a piping system around a butterfly valve. A comparative finite element analysis was performed to justify the performance of the structure. The second application of CAE is the computational fluid flow analysis: the upstream and downstream pressures were analysed to calculate the cavitation index and determine the performance at each opening position of the valve. The CAM process used a 3D printer to produce a prototype, fabricated at a reduced scale from the model designed initially through the application of CAD, and the structure was analysed in prototype form. This study utilized the applications of CAD, CAE and CAM to improve the butterfly valve’s disc components.
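
    As a rough illustration of the cavitation-index step, the sketch below uses one common definition, sigma = (P_upstream − P_vapor) / (P_upstream − P_downstream); the paper does not state which convention it adopts, and the pressures and openings are hypothetical.

```python
# Cavitation index from upstream/downstream pressures (illustrative convention).
def cavitation_index(p_up, p_down, p_vapor):
    """All pressures absolute [Pa]; a larger sigma indicates less cavitation risk."""
    return (p_up - p_vapor) / (p_up - p_down)

# Evaluate at a few hypothetical valve opening positions.
for opening_deg, p_up, p_down in [(30, 400e3, 150e3), (60, 350e3, 250e3), (90, 320e3, 300e3)]:
    sigma = cavitation_index(p_up, p_down, p_vapor=2.3e3)  # water at about 20 C
    print(f"opening {opening_deg:3d} deg: sigma = {sigma:.2f}")
```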

  10. Failure analysis on false call probe pins of microprocessor test equipment

    NASA Astrophysics Data System (ADS)

    Tang, L. W.; Ong, N. R.; Mohamad, I. S. B.; Alcain, J. B.; Retnasamy, V.

    2017-09-01

    A study has been conducted to investigate failure analysis of probe pins in microprocessor test modules. The `health condition' of a probe pin is determined by its resistance value. A test module powered at 5 V from an Arduino UNO and using the "four-wire Ohm measurement" method is implemented in this study to measure the resistance of the probe pins of a microprocessor. Probe pins from a scrapped computer motherboard are used as the test samples in this study. The functionality of the test module was validated with a pre-measurement experiment via the VEE Pro software. Lastly, the experimental work has demonstrated that the implemented test module has the capability to identify a probe pin's `health condition' based on the measured resistance value.
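
    A minimal sketch of the four-wire (Kelvin) idea behind the test module: force a known current through the pin, sense the voltage across it on separate leads, and classify the pin from the computed resistance. The threshold and readings are invented; this is not the Arduino firmware used in the study.

```python
# Four-wire resistance computation and a simple 'health' classification (sketch).
def four_wire_resistance(v_sense, i_force):
    """R = V_sense / I_force; lead resistance does not enter the sensed voltage."""
    return v_sense / i_force

def pin_health(resistance_ohm, limit_ohm=2.0):
    return "good" if resistance_ohm <= limit_ohm else "suspect"

readings = [(0.045, 0.050), (0.210, 0.050)]   # (V_sense [V], I_force [A]) pairs
for v, i in readings:
    r = four_wire_resistance(v, i)
    print(f"{r:.2f} ohm -> {pin_health(r)}")
```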

  11. Demonstration of the use of ADAPT to derive predictive maintenance algorithms for the KSC central heat plant

    NASA Technical Reports Server (NTRS)

    Hunter, H. E.

    1972-01-01

    The Avco Data Analysis and Prediction Techniques (ADAPT) were employed to determine laws capable of detecting failures in a heat plant up to three days in advance of the occurrence of the failure. The projected performance of algorithms yielded a detection probability of 90% with false alarm rates of the order of 1 per year for a sample rate of 1 per day with each detection, followed by 3 hourly samplings. This performance was verified on 173 independent test cases. The program also demonstrated diagnostic algorithms and the ability to predict the time of failure to approximately plus or minus 8 hours up to three days in advance of the failure. The ADAPT programs produce simple algorithms which have a unique possibility of a relatively low cost updating procedure. The algorithms were implemented on general purpose computers at Kennedy Space Flight Center and tested against current data.

  12. Design study of the geometry of the blanking tool to predict the burr formation of Zircaloy-4 sheet

    NASA Astrophysics Data System (ADS)

    Ha, Jisun; Lee, Hyungyil; Kim, Dongchul; Kim, Naksoo

    2013-12-01

    In this work, we investigated factors that influence burr formation in Zircaloy-4 sheet used for spacer grids of nuclear fuel rods, with particular attention to the geometric factors of the punch. We varied clearance and velocity to study the failure parameters, and we varied the shearing angle and corner radius of an L-shaped punch to study its geometric factors. First, we carried out blanking tests with the failure parameters of the GTN model using the L-shaped punch, and by analyzing the sheared edges we investigated how the failure parameters and geometric factors affect burr formation. The influence of the geometric factors on burr formation proved to be as strong as that of the failure parameters. The sheared edges and burr formation were then investigated as functions of the failure parameters and geometric factors using an FE analysis model; analysis of the sheared edges confirmed that the geometric factors affect burr formation more than the failure parameters do. To check the reliability of the FE model, the blanking force and sheared edges obtained from experiments were compared with computations that account for heat transfer.

  13. The nonlinear bending response of thin-walled laminated composite cylinders

    NASA Technical Reports Server (NTRS)

    Fuchs, Hannes P.; Hyer, Michael W.

    1992-01-01

    The geometrically nonlinear Donnell shell theory is applied to the problem of stable bending of thin-walled circular cylinders. Responses are computed for cylinders with a radius-to-thickness ratio of 50 and length-to-radius ratios of 1 and 5. Four laminated composite cylinders and an aluminum cylinder are considered. Critical moment estimates are presented for short cylinders for which compression-type buckling behavior is important, and for very long cylinders for which the cross-section flattening, i.e., Brazier effect, is important. A finite element analysis is used to estimate the critical end rotation in addition to establishing the range of validity of the prebuckling analysis. The radial displacement response shows that the character of the boundary layer is significantly influenced by the geometric nonlinearities. Application of a first ply failure analysis using the maximum stress criterion suggests that in nearly all instances material failure occurs before buckling. Failure of the composite cylinders can be attributed to fiber breakage. Striking similarities are seen between the prebuckling displacements of the bending problem and axial compression problem for short cylinders.
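
    A hedged sketch of a maximum stress first-ply-failure check of the kind mentioned above: each ply stress component is compared against the corresponding strength, and the ply is flagged when any ratio reaches one. The strengths and stresses are illustrative, not the laminate data from the study.

```python
# Maximum stress first-ply-failure criterion for a single ply (sketch).
def max_stress_failure(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Return (failed, controlling mode); stresses and strengths in MPa."""
    ratios = {
        "fiber tension":       s1 / Xt if s1 > 0 else 0.0,
        "fiber compression":  -s1 / Xc if s1 < 0 else 0.0,
        "matrix tension":      s2 / Yt if s2 > 0 else 0.0,
        "matrix compression": -s2 / Yc if s2 < 0 else 0.0,
        "in-plane shear":   abs(t12) / S,
    }
    mode = max(ratios, key=ratios.get)
    return ratios[mode] >= 1.0, mode

# Example ply state dominated by fiber stress (values are illustrative).
print(max_stress_failure(s1=1600.0, s2=20.0, t12=40.0,
                         Xt=1500.0, Xc=1200.0, Yt=40.0, Yc=150.0, S=70.0))
```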

  14. User-Perceived Reliability of M-for-N (M: N) Shared Protection Systems

    NASA Astrophysics Data System (ADS)

    Ozaki, Hirokazu; Kara, Atsushi; Cheng, Zixue

    In this paper we investigate the reliability of general shared protection systems, i.e. M-for-N (M:N), that can typically be applied to various telecommunication network devices. We focus on the reliability perceived by an end user of one of the N units. We assume that any failed unit is instantly replaced by one of the M units (if available). We describe the effectiveness of such a protection system in a quantitative manner. The mathematical analysis gives a closed-form solution for the availability, a recursive algorithm for computing the MTTFF (Mean Time to First Failure), and the MTTF (Mean Time to Failure) perceived by an arbitrary end user. We also show that, under a certain condition, the probability distribution of the TTFF (Time to First Failure) can be approximated by a simple exponential distribution. The analysis provides useful information for the analysis and design of not only telecommunication network devices but also other general shared protection systems that are subject to service level agreements (SLAs) involving user-perceived reliability measures.

  15. An approximation formula for a class of fault-tolerant computers

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1986-01-01

    An approximation formula is derived for the probability of failure for fault-tolerant process-control computers. These computers use redundancy and reconfiguration to achieve high reliability. Finite-state Markov models capture the dynamic behavior of component failure and system recovery, and the approximation formula permits an estimation of system reliability by an easy examination of the model.
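
    To illustrate the kind of Markov model referred to above, the sketch below evaluates a small continuous-time chain for a triplex system that either reconfigures after a component failure or fails if a second fault arrives first; the states, rates, and mission time are assumptions made for illustration, not the approximation formula derived in the report.

```python
# Transient failure probability of a small reconfigurable system via a CTMC (sketch).
import numpy as np
from scipy.linalg import expm

lam = 1e-4   # per-component failure rate, per hour (illustrative)
mu = 3600.0  # reconfiguration (recovery) rate, per hour (illustrative)

# States: 0 = three good units, 1 = one failed (reconfiguring), 2 = system failed.
Q = np.array([
    [-3 * lam,         3 * lam,     0.0],   # first component failure
    [      mu, -(mu + 2 * lam), 2 * lam],   # recover, or a second failure arrives first
    [     0.0,             0.0,     0.0],   # absorbing system-failure state
])

t = 10.0                  # mission time, hours
P = expm(Q * t)           # state-transition probabilities over the mission
print(f"P(system failure by {t} h) = {P[0, 2]:.3e}")
```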

  16. Cone-beam computed tomography analysis of curved root canals after mechanical preparation with three nickel-titanium rotary instruments

    PubMed Central

    Elsherief, Samia M.; Zayet, Mohamed K.; Hamouda, Ibrahim M.

    2013-01-01

    Cone beam computed tomography is a 3-dimensional high resolution imaging method. The purpose of this study was to compare the effects of 3 different NiTi rotary instruments used to prepare curved root canals on the final shape of the curved canals and total amount of root canal transportation by using cone-beam computed tomography. A total of 81 mesial root canals from 42 extracted human mandibular molars, with a curvature ranging from 15 to 45 degrees, were selected. Canals were randomly divided into 3 groups of 27 each. After preparation with Protaper, Revo-S and Hero Shaper, the amount of transportation and centering ability that occurred were assessed by using cone beam computed tomography. Utilizing pre- and post-instrumentation radiographs, straightening of the canal curvatures was determined with a computer image analysis program. Canals were metrically assessed for changes (surface area, changes in curvature and transportation) during canal preparation by using software SimPlant; instrument failures were also recorded. Mean total widths and outer and inner width measurements were determined on each central canal path and differences were statistically analyzed. The results showed that all instruments maintained the original canal curvature well with no significant differences between the different files (P = 0.226). During preparation there was failure of only one file (the protaper group). In conclusion, under the conditions of this study, all instruments maintained the original canal curvature well and were safe to use. Areas of uninstrumented root canal wall were left in all regions using the various systems. PMID:23885273

  17. A Case Study on Engineering Failure Analysis of Link Chain

    PubMed Central

    Lee, Seong-Beom; Lee, Hong-Chul

    2010-01-01

    Objectives The objective of this study was to investigate the effect of chain installation condition on the stress distribution that could eventually cause disastrous failure through sudden deformation and geometric rupture. Methods The fractographic examination of the failed chain indicates that overstress was the root cause of failure. 3D modeling and finite element analysis of the chain, used in a crane hook, were performed with a three-dimensional interactive application program, CATIA, and commercial finite element analysis and computational fluid dynamics software, ANSYS. Results The results showed that the state of stress changed depending on the initial position of the chain installed in the hook. In particular, the magnitude of the stress was strongly affected by the bending forces, which are 2.5 times greater (under the simulation condition investigated) than those from the plain tensile load. It was also noted that the change of load state is strongly related to the failure of parts. The chain can hold an ultimate load of about 8 tons with only the tensile load acting on it. Conclusion The conclusions of this research clearly showed that losses from similar incidents can be reduced when an operator properly handles the installation of the chain.

  18. Abstract of Capstone

    ERIC Educational Resources Information Center

    Pack, Della F.

    2013-01-01

    At the end of the Fall 2011 semester at Big Sandy Community and Technical College (BSCTC) a comparison of grade patterns in multiple CIS 100-Introduction to Computers courses was analyzed. This analysis found online courses returned a higher failure rate than those taught in a classroom setting. Why was there a difference? Is the platform of…

  19. Use of mechanistic simulations as a quantitative risk-ranking tool within the quality by design framework.

    PubMed

    Stocker, Elena; Toschkoff, Gregor; Sacher, Stephan; Khinast, Johannes G

    2014-11-20

    The purpose of this study is to evaluate the use of computer simulations for generating quantitative knowledge as a basis for risk ranking and mechanistic process understanding, as required by ICH Q9 on quality risk management systems. In this specific publication, the main focus is the demonstration of a risk assessment workflow, including a computer simulation for the generation of mechanistic understanding of active tablet coating in a pan coater. Process parameter screening studies are statistically planned under consideration of impacts on a potentially critical quality attribute, i.e., coating mass uniformity. Based on computer simulation data the process failure mode and effects analysis of the risk factors is performed. This results in a quantitative criticality assessment of process parameters and the risk priority evaluation of failure modes. The factor for a quantitative reassessment of the criticality and risk priority is the coefficient of variation, which represents the coating mass uniformity. The major conclusion drawn from this work is a successful demonstration of the integration of computer simulation in the risk management workflow leading to an objective and quantitative risk assessment. Copyright © 2014. Published by Elsevier B.V.
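
    Since the reassessment metric named above is simply the coefficient of variation of the coating mass, a minimal sketch is given below on synthetic per-tablet data; the mean and spread are invented, not simulation output from the paper.

```python
# Coefficient of variation of per-tablet coating mass (synthetic data).
import numpy as np

rng = np.random.default_rng(42)
coating_mass = rng.normal(loc=5.0, scale=0.35, size=2000)  # mg per tablet, synthetic

cv = coating_mass.std(ddof=1) / coating_mass.mean()
print(f"coating mass CV = {cv:.3f} ({cv:.1%})")
```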

  20. Image guided radiation therapy may result in improved local control in locally advanced lung cancer patients.

    PubMed

    Kilburn, Jeremy M; Soike, Michael H; Lucas, John T; Ayala-Peacock, Diandra; Blackstock, William; Isom, Scott; Kearns, William T; Hinson, William H; Miller, Antonius A; Petty, William J; Munley, Michael T; Urbanic, James J

    2016-01-01

    Image guided radiation therapy (IGRT) is designed to ensure accurate and precise targeting, but whether improved clinical outcomes result is unknown. A retrospective comparison of locally advanced lung cancer patients treated with and without IGRT from 2001 to 2012 was conducted. Median local failure-free survival (LFFS), regional, locoregional failure-free survival (LRFFS), distant failure-free survival, progression-free survival, and overall survival (OS) were estimated. Univariate and multivariate models assessed the association between patient- and treatment-related covariates and local failure. A total of 169 patients were treated with definitive radiation therapy and concurrent chemotherapy with a median follow-up of 48 months in the IGRT cohort and 96 months in the non-IGRT cohort. IGRT was used in 36% (62 patients) of patients. OS was similar between cohorts (2-year OS, 47% vs 49%, P = .63). The IGRT cohort had improved 2-year LFFS (80% vs 64%, P = .013) and LRFFS (75% and 62%, P = .04). Univariate analysis revealed IGRT and treatment year improved LFFS, whereas group stage, dose, and positron emission tomography/computed tomography planning had no impact. IGRT remained significant in the multivariate model with an adjusted hazard ratio of 0.40 (P = .01). Distant failure-free survival (58% vs 59%, P = .67) did not differ significantly. IGRT with daily cone beam computed tomography confers an improvement in the therapeutic ratio relative to patients treated without this technology. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.

  1. Measurement and Analysis of Failures in Computer Systems

    NASA Technical Reports Server (NTRS)

    Thakur, Anshuman

    1997-01-01

    This thesis presents a study of software failures spanning several different releases of Tandem's NonStop-UX operating system running on Tandem Integrity S2(TMR) systems. NonStop-UX is based on UNIX System V and is fully compliant with industry standards, such as the X/Open Portability Guide, the IEEE POSIX standards, and the System V Interface Definition (SVID) extensions. In addition to providing a general UNIX interface to the hardware, the operating system has built-in recovery mechanisms and audit routines that check the consistency of the kernel data structures. The analysis is based on data on software failures and repairs collected from Tandem's product report (TPR) logs for a period exceeding three years. A TPR log is created when a customer or an internal developer observes a failure in a Tandem Integrity system. This study concentrates primarily on those TPRs that report a UNIX panic that subsequently crashes the system. Approximately 200 of the TPRs fall into this category. Approximately 50% of the failures reported are from field systems, and the rest are from the testing and development sites. It has been observed by Tandem developers that fewer cases are encountered from the field than from the test centers. Thus, the data selection mechanism has introduced a slight skew.

  2. Inter-computer communication architecture for a mixed redundancy distributed system

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.; Adams, Stuart J.

    1987-01-01

    The triply redundant intercomputer network for the Advanced Information Processing System (AIPS), an architecture developed to serve as the core avionics system for a broad range of aerospace vehicles, is discussed. The AIPS intercomputer network provides a high-speed, Byzantine-fault-resilient communication service between processing sites, even in the presence of arbitrary failures of simplex and duplex processing sites on the IC network. The IC network contention poll has evolved from the Laning Poll. An analysis of the failure modes and effects and a simulation of the AIPS contention poll, demonstrate the robustness of the system.

  3. Man-rated flight software for the F-8 DFBW program

    NASA Technical Reports Server (NTRS)

    Bairnsfather, R. R.

    1976-01-01

    The design, implementation, and verification of the flight control software used in the F-8 DFBW program are discussed. Since the DFBW utilizes an Apollo computer and hardware, the procedures, controls, and basic management techniques employed are based on those developed for the Apollo software system. Program assembly control, simulator configuration control, erasable-memory load generation, change procedures and anomaly reporting are discussed. The primary verification tools are described, as well as the program test plans and their implementation on the various simulators. Failure effects analysis and the creation of special failure generating software for testing purposes are described.

  4. Testing and Analysis of Composite Skin/Stringer Debonding Under Multi-Axial Loading

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Cvitkovich, Michael K.; O'Brien, T. Kevin; Minguet, Pierre J.

    2000-01-01

    A consistent step-wise approach is presented to investigate the damage mechanism in composite bonded skin/stringer constructions under uniaxial and biaxial (in-plane/out-of-plane) loading conditions. The approach uses experiments to detect the failure mechanism, computational stress analysis to determine the location of first matrix cracking and computational fracture mechanics to investigate the potential for delamination growth. In a first step, tests were performed on specimens, which consisted of a tapered composite flange, representing a stringer or frame, bonded onto a composite skin. Tests were performed under monotonic loading conditions in tension, three-point bending, and combined tension/bending to evaluate the debonding mechanisms between the skin and the bonded stringer. For combined tension/bending testing, a unique servohydraulic load frame was used that was capable of applying both in-plane tension and out-of-plane bending loads simultaneously. Specimen edges were examined on the microscope to document the damage occurrence and to identify typical damage patterns. For all three load cases, observed failure initiated in the flange, near the flange tip, causing the flange to almost fully debond from the skin. In a second step, a two-dimensional plane-strain finite element model was developed to analyze the different test cases using a geometrically nonlinear solution. For all three loading conditions, computed principal stresses exceeded the transverse strength of the material in those areas of the flange where the matrix cracks had developed during the tests. In a third step, delaminations of various lengths were simulated in two locations where delaminations were observed during the tests. The analyses showed that at the loads corresponding to matrix ply crack initiation computed strain energy release rates exceeded the values obtained from a mixed mode failure criterion in one location. Hence, unstable delamination propagation is likely to occur as observed in the experiments.

  5. Testing and Analysis of Composite Skin/Stringer Debonding under Multi-Axial Loading

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Cvitkovich, Michael; OBrien, Kevin; Minguet, Pierre J.

    2000-01-01

    A consistent step-wise approach is presented to investigate the damage mechanism in composite bonded skin/stringer constructions under uniaxial and biaxial (in-plane/out-of-plane) loading conditions. The approach uses experiments to detect the failure mechanism, computational stress analysis to determine the location of first matrix cracking and computational fracture mechanics to investigate the potential for delamination growth. In a first step, tests were performed on specimens, which consisted of a tapered composite flange, representing a stringer or frame, bonded onto a composite skin. Tests were performed under monotonic loading conditions in tension, three-point bending, and combined tension/bending to evaluate the debonding mechanisms between the skin and the bonded stringer. For combined tension/bending testing, a unique servohydraulic load frame was used that was capable of applying both in-plane tension and out-of-plane bending loads simultaneously. Specimen edges were examined on the microscope to document the damage occurrence and to identify typical damage patterns. For all three load cases, observed failure initiated in the flange, near the flange tip, causing the flange to almost fully debond from the skin. In a second step, a two-dimensional plane-strain finite element model was developed to analyze the different test cases using a geometrically nonlinear solution. For all three loading conditions, computed principal stresses exceeded the transverse strength of the material in those areas of the flange where the matrix cracks had developed during the tests. In a third step, delaminations of various lengths were simulated in two locations where delaminations were observed during the tests. The analyses showed that at the loads corresponding to matrix ply crack initiation computed strain energy release rates exceeded the values obtained from a mixed mode failure criterion in one location. Hence, unstable delamination propagation is likely to occur as observed in the experiments.

  6. SPECT and PET in ischemic heart failure.

    PubMed

    Angelidis, George; Giamouzis, Gregory; Karagiannis, Georgios; Butler, Javed; Tsougos, Ioannis; Valotassiou, Varvara; Giannakoulas, George; Dimakopoulos, Nikolaos; Xanthopoulos, Andrew; Skoularigis, John; Triposkiadis, Filippos; Georgoulias, Panagiotis

    2017-03-01

    Heart failure is a common clinical syndrome associated with significant morbidity and mortality worldwide. Ischemic heart disease is the leading cause of heart failure, at least in the industrialized countries. Proper diagnosis of the syndrome and management of patients with heart failure require anatomical and functional information obtained through various imaging modalities. Nuclear cardiology techniques play a main role in the evaluation of heart failure. Myocardial single photon emission computed tomography (SPECT) with thallium-201 or technetium-99m labelled tracers offers valuable data regarding ventricular function, myocardial perfusion, viability, and intraventricular synchronism. Moreover, positron emission tomography (PET) permits accurate evaluation of myocardial perfusion, metabolism, and viability, providing high-quality images and the ability of quantitative analysis. As these imaging techniques assess different parameters of cardiac structure and function, variations of sensitivity and specificity have been reported among them. In addition, the role of SPECT and PET guided therapy remains controversial. In this comprehensive review, we address these controversies and report the advances in patient's investigation with SPECT and PET in ischemic heart failure. Furthermore, we present the innovations in technology that are expected to strengthen the role of nuclear cardiology modalities in the investigation of heart failure.

  7. Eigentumors for prediction of treatment failure in patients with early-stage breast cancer using dynamic contrast-enhanced MRI: a feasibility study

    NASA Astrophysics Data System (ADS)

    Chan, H. M.; van der Velden, B. H. M.; E Loo, C.; Gilhuijs, K. G. A.

    2017-08-01

    We present a radiomics model to discriminate between patients at low risk and those at high risk of treatment failure at long-term follow-up based on eigentumors: principal components computed from volumes encompassing tumors in washin and washout images of pre-treatment dynamic contrast-enhanced (DCE-) MR images. Eigentumors were computed from the images of 563 patients from the MARGINS study. Subsequently, a least absolute shrinkage selection operator (LASSO) selected candidates from the components that contained 90% of the variance of the data. The model for prediction of survival after treatment (median follow-up time 86 months) was based on logistic regression. Receiver operating characteristic (ROC) analysis was applied and area-under-the-curve (AUC) values were computed as measures of training and cross-validated performances. The discriminating potential of the model was confirmed using Kaplan-Meier survival curves and log-rank tests. From the 322 principal components that explained 90% of the variance of the data, the LASSO selected 28 components. The ROC curves of the model yielded AUC values of 0.88, 0.77 and 0.73, for the training, leave-one-out cross-validated and bootstrapped performances, respectively. The bootstrapped Kaplan-Meier survival curves confirmed significant separation for all tumors (P  <  0.0001). Survival analysis on immunohistochemical subgroups shows significant separation for the estrogen-receptor subtype tumors (P  <  0.0001) and the triple-negative subtype tumors (P  =  0.0039), but not for tumors of the HER2 subtype (P  =  0.41). The results of this retrospective study show the potential of early-stage pre-treatment eigentumors for use in prediction of treatment failure of breast cancer.
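
    A schematic, hedged reproduction of the pipeline described above using scikit-learn on synthetic data: principal components retaining 90% of the variance stand in for the eigentumors, an L1-penalised logistic regression plays the role of the LASSO selection, and AUC is the performance measure. This is not the MARGINS data or the authors' code.

```python
# PCA "eigentumor" features + L1 logistic regression + AUC (synthetic sketch).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(563, 2048))     # flattened washin/washout volumes (synthetic)
y = rng.integers(0, 2, size=563)     # treatment-failure labels (synthetic)

model = make_pipeline(
    PCA(n_components=0.90),          # keep components explaining 90% of the variance
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=1000),
)
model.fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"training AUC on synthetic data: {auc:.2f}")
```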

  8. PAFAC- PLASTIC AND FAILURE ANALYSIS OF COMPOSITES

    NASA Technical Reports Server (NTRS)

    Bigelow, C. A.

    1994-01-01

    The increasing number of applications of fiber-reinforced composites in industry demands a detailed understanding of their material properties and behavior. A three-dimensional finite-element computer program called PAFAC (Plastic and Failure Analysis of Composites) has been developed for the elastic-plastic analysis of fiber-reinforced composite materials and structures. The evaluation of stresses and deformations at edges, cut-outs, and joints is essential in understanding the strength and failure for metal-matrix composites since the onset of plastic yielding starts very early in the loading process as compared to the composite's ultimate strength. Such comprehensive analysis can only be achieved by a finite-element program like PAFAC. PAFAC is particularly suited for the analysis of laminated metal-matrix composites. It can model the elastic-plastic behavior of the matrix phase while the fibers remain elastic. Since the PAFAC program uses a three-dimensional element, the program can also model the individual layers of the laminate to account for thickness effects. In PAFAC, the composite is modeled as a continuum reinforced by cylindrical fibers of vanishingly small diameter which occupy a finite volume fraction of the composite. In this way, the essential axial constraint of the phases is retained. Furthermore, the local stress and strain fields are uniform. The PAFAC finite-element solution is obtained using the displacement method. Solution of the nonlinear equilibrium equations is obtained with a Newton-Raphson iteration technique. The elastic-plastic behavior of composites consisting of aligned, continuous elastic filaments and an elastic-plastic matrix is described in terms of the constituent properties, their volume fractions, and mutual constraints between phases indicated by the geometry of the microstructure. The program uses an iterative procedure to determine the overall response of the laminate, then from the overall response determines the stress state in each phase of the composite material. Failure of the fibers or matrix within an element can also be modeled by PAFAC. PAFAC is written in FORTRAN IV for batch execution and has been implemented on a CDC CYBER 170 series computer with a segmented memory requirement of approximately 66K (octal) of 60 bit words. PAFAC was developed in 1982.
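
    The Newton-Raphson strategy mentioned above can be shown in miniature on a single nonlinear equilibrium equation R(u) = F_ext − F_int(u) = 0; the hardening "spring" below is purely illustrative and unrelated to PAFAC's constituent models.

```python
# Newton-Raphson iteration on a one-degree-of-freedom nonlinear equilibrium equation.
def f_int(u):                 # internal force of a hardening spring (illustrative)
    return 1000.0 * u + 50.0 * u**3

def k_tangent(u):             # tangent stiffness dF_int/du
    return 1000.0 + 150.0 * u**2

def newton_raphson(f_ext, u0=0.0, tol=1e-10, max_iter=50):
    u = u0
    for _ in range(max_iter):
        residual = f_ext - f_int(u)       # out-of-balance force
        if abs(residual) < tol:
            return u
        u += residual / k_tangent(u)      # Newton update with the tangent stiffness
    raise RuntimeError("Newton-Raphson did not converge")

print(f"equilibrium displacement: {newton_raphson(f_ext=750.0):.6f}")
```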

  9. Use of histamine H2 receptor antagonists and outcomes in patients with heart failure: a nationwide population-based cohort study.

    PubMed

    Adelborg, Kasper; Sundbøll, Jens; Schmidt, Morten; Bøtker, Hans Erik; Weiss, Noel S; Pedersen, Lars; Sørensen, Henrik Toft

    2018-01-01

    Histamine H2 receptor activation promotes cardiac fibrosis and apoptosis in mice. However, the potential effectiveness of histamine H2 receptor antagonists (H2RAs) in humans with heart failure is largely unknown. We examined the association between H2RA initiation and all-cause mortality among patients with heart failure. Using Danish medical registries, we conducted a nationwide population-based active-comparator cohort study of new users of H2RAs and proton pump inhibitors (PPIs) after first-time hospitalization for heart failure during the period 1995-2014. Hazard ratios (HRs) for all-cause mortality and hospitalization due to worsening of heart failure, adjusting for age, sex, and time between heart failure diagnosis and initiation of PPI or H2RA therapy, index year, comorbidity, cardiac surgery, comedications, and socioeconomic status were computed based on Cox regression analysis. Our analysis included 42,902 PPI initiators (median age 78 years, 46% female) and 3,296 H2RA initiators (median age 76 years, 48% female). Mortality risk was lower among H2RA initiators than PPI initiators after 1 year (26% vs 31%) and 5 years (60% vs 66%). In multivariable analyses, the 1-year HR was 0.80 (95% CI, 0.74-0.86) and the 5-year HR was 0.85 (95% CI, 0.80-0.89). These findings were consistent after propensity score matching and for ischemic and nonischemic heart failure, as well as for sex and age groups. The rate of hospitalization due to worsening of heart failure was lower among H2RA initiators than PPI initiators. In patients with heart failure, H2RA initiation was associated with 15%-20% lower mortality than PPI initiation.

  10. Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.

    1988-01-01

    The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show the PFEM is a very powerful tool in determining the second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties and crack length, orientation, and location.
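
    For orientation, a second-moment calculation of the kind such statistics feed is sketched below: with an independent normal resistance R and load effect S, the reliability index is beta = (mu_R − mu_S) / sqrt(sigma_R^2 + sigma_S^2) and the failure probability is Phi(−beta). The numbers are placeholders, not results from the paper.

```python
# Second-moment reliability index and failure probability (sketch).
from math import sqrt
from scipy.stats import norm

mu_R, sigma_R = 60.0, 6.0   # resistance statistics, e.g. fracture toughness (illustrative)
mu_S, sigma_S = 40.0, 8.0   # load-effect statistics, e.g. crack-driving force (illustrative)

beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)
p_f = norm.cdf(-beta)
print(f"reliability index beta = {beta:.2f}, probability of failure = {p_f:.3e}")
```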

  11. Preliminary design of a solar central receiver for a site-specific repowering application (Saguaro Power Plant). Volume IV. Appendixes. Final report, October 1982-September 1983

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weber, E.R.

    1983-09-01

    The appendixes for the Saguaro Power Plant include the following: receiver configuration selection report; operating modes and transitions; failure modes analysis; control system analysis; computer codes and simulation models; procurement package scope descriptions; responsibility matrix; solar system flow diagram component purpose list; thermal storage component and system test plans; solar steam generator tube-to-tubesheet weld analysis; pipeline listing; management control schedule; and system list and definitions.

  12. Sequential experimental design based generalised ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, the surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields an accurate and computationally efficient estimate of the failure probability.

  13. Sequential experimental design based generalised ANOVA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

    Over the last decade, the surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields an accurate and computationally efficient estimate of the failure probability.

  14. Survival analysis of heart failure patients: A case study.

    PubMed

    Ahmad, Tanvir; Munir, Assia; Bhatti, Sajjad Haider; Aftab, Muhammad; Raza, Muhammad Ali

    2017-01-01

    This study focused on survival analysis of heart failure patients who were admitted to the Institute of Cardiology and Allied Hospital, Faisalabad, Pakistan, during April-December 2015. All the patients were aged 40 years or above, had left ventricular systolic dysfunction, and belonged to NYHA class III or IV. Cox regression was used to model mortality, considering age, ejection fraction, serum creatinine, serum sodium, anemia, platelets, creatinine phosphokinase, blood pressure, gender, diabetes, and smoking status as potential contributors to mortality. A Kaplan-Meier plot was used to study the general pattern of survival, which showed a high intensity of mortality in the initial days and then a gradual increase up to the end of the study. Martingale residuals were used to assess the functional form of the variables. Results were validated by computing the calibration slope and the discrimination ability of the model via bootstrapping. For graphical prediction of survival probability, a nomogram was constructed. Age, renal dysfunction, blood pressure, ejection fraction, and anemia were found to be significant risk factors for mortality among heart failure patients.
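
    A hedged sketch of the same workflow using the lifelines package on a synthetic data frame; the column names and distributions are placeholders, not the Faisalabad study variables.

```python
# Kaplan-Meier curve and Cox proportional-hazards fit on synthetic survival data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

rng = np.random.default_rng(1)
n = 299
df = pd.DataFrame({
    "time": rng.exponential(120, n).round() + 1.0,   # follow-up time, days (synthetic)
    "event": rng.integers(0, 2, n),                  # 1 = death observed
    "age": rng.normal(60, 10, n),
    "ejection_fraction": rng.normal(38, 10, n),
    "serum_creatinine": rng.normal(1.3, 0.4, n),
})

km = KaplanMeierFitter().fit(df["time"], df["event"])    # general survival pattern
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()                                       # hazard ratio per covariate
```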

  15. Nodal failure index approach to groundwater remediation design

    USGS Publications Warehouse

    Lee, J.; Reeves, H.W.; Dowding, C.H.

    2008-01-01

    Computer simulations often are used to design and to optimize groundwater remediation systems. We present a new computationally efficient approach that calculates the reliability of remedial design at every location in a model domain with a single simulation. The estimated reliability and other model information are used to select a best remedial option for given site conditions, conceptual model, and available data. To evaluate design performance, we introduce the nodal failure index (NFI) to determine the number of nodal locations at which the probability of success is below the design requirement. The strength of the NFI approach is that selected areas of interest can be specified for analysis and the best remedial design determined for this target region. An example application of the NFI approach using a hypothetical model shows how the spatial distribution of reliability can be used for a decision support system in groundwater remediation design. © 2008 ASCE.
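
    A minimal sketch of the index itself, assuming a per-node probability of success is already available from the simulation: count the nodes in a chosen target region that fall below the design requirement. The grid, probabilities, and region are invented for illustration.

```python
# Nodal failure index: nodes in a target region below the required success probability.
import numpy as np

rng = np.random.default_rng(7)
p_success = rng.uniform(0.80, 1.00, size=(50, 40))  # synthetic nodal probabilities
target = np.zeros_like(p_success, dtype=bool)
target[10:30, 5:25] = True                          # hypothetical area of interest

requirement = 0.95
nfi = int(np.count_nonzero((p_success < requirement) & target))
print(f"nodal failure index over the target region: {nfi} nodes")
```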

  16. Telehealth for "the digital illiterate"--elderly heart failure patients experiences.

    PubMed

    Lind, Leili; Karlsson, Daniel

    2014-01-01

    Telehealth solutions should be available also for elderly patients with no interest in using, or capacity to use, computers and smartphones. Fourteen elderly, severely ill heart failure patients in home care participated in a telehealth study and used digital pens for daily reporting of their health state--a technology never used before by this patient group. After the study, seven patients and two spouses were interviewed face-to-face. A qualitative content analysis of the interview material was performed. The informants had no experience of computers or the Internet and no interest in learning. Still, patients found the digital pen and the health diary form easy to use, thus effortlessly adapting to changes in care provision. They experienced an improved contact with the caregivers and had a sense of increased security despite a multimorbid state. Our study shows that, given that technologies are tailored to specific patient groups, even "the digital illiterate" may use the Internet.

  17. Causes and prevention of splitting/bursting failure of concrete crossties: a computational study

    DOT National Transportation Integrated Search

    2017-09-17

    Concrete splitting/bursting is a well-known failure mode of concrete crossties that can compromise the crosstie integrity and raise railroad maintenance and track safety concerns. This paper presents a computational study aimed at better understandin...

  18. Damage and failure modelling of hybrid three-dimensional textile composites: a mesh objective multi-scale approach

    PubMed Central

    Patel, Deepak K.

    2016-01-01

    This paper is concerned with predicting the progressive damage and failure of multi-layered hybrid textile composites subjected to uniaxial tensile loading, using a novel two-scale computational mechanics framework. These composites include three-dimensional woven textile composites (3DWTCs) with glass, carbon and Kevlar fibre tows. Progressive damage and failure of 3DWTCs at different length scales are captured in the present model by using a macroscale finite-element (FE) analysis at the representative unit cell (RUC) level, while a closed-form micromechanics analysis is implemented simultaneously at the subscale level using material properties of the constituents (fibre and matrix) as input. The N-layers concentric cylinder (NCYL) model (Zhang and Waas 2014 Acta Mech. 225, 1391–1417; Patel et al. submitted Acta Mech.) to compute local stress, strain and displacement fields in the fibre and matrix is used at the subscale. The 2-CYL fibre–matrix concentric cylinder model is extended to fibre and (N−1) matrix layers, keeping the volume fraction constant, and hence is called the NCYL model where the matrix damage can be captured locally within each discrete layer of the matrix volume. The influence of matrix microdamage at the subscale causes progressive degradation of fibre tow stiffness and matrix stiffness at the macroscale. The global RUC stiffness matrix remains positive definite until the strain softening response resulting from different failure modes (such as fibre tow breakage, tow splitting in the transverse direction due to matrix cracking inside tow and surrounding matrix tensile failure outside of fibre tows) is initiated. At this stage, the macroscopic post-peak softening response is modelled using the mesh objective smeared crack approach (Rots et al. 1985 HERON 30, 1–48; Heinrich and Waas 2012 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Honolulu, HI, 23–26 April 2012. AIAA 2012-1537). Manufacturing-induced geometric imperfections are included in the simulation, where the FE mesh of the unit cell is generated directly from micro-computed tomography (MCT) real data using the code Simpleware. Results from multi-scale analysis for both an idealized perfect geometry and one that includes geometric imperfections are compared with experimental results (Pankow et al. 2012 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Honolulu, HI, 23–26 April 2012. AIAA 2012-1572). This article is part of the themed issue ‘Multiscale modelling of the structural integrity of composite materials’. PMID:27242294

  19. Damage and failure modelling of hybrid three-dimensional textile composites: a mesh objective multi-scale approach.

    PubMed

    Patel, Deepak K; Waas, Anthony M

    2016-07-13

    This paper is concerned with predicting the progressive damage and failure of multi-layered hybrid textile composites subjected to uniaxial tensile loading, using a novel two-scale computational mechanics framework. These composites include three-dimensional woven textile composites (3DWTCs) with glass, carbon and Kevlar fibre tows. Progressive damage and failure of 3DWTCs at different length scales are captured in the present model by using a macroscale finite-element (FE) analysis at the representative unit cell (RUC) level, while a closed-form micromechanics analysis is implemented simultaneously at the subscale level using material properties of the constituents (fibre and matrix) as input. The N-layers concentric cylinder (NCYL) model (Zhang and Waas 2014 Acta Mech. 225, 1391-1417; Patel et al. submitted Acta Mech.) to compute local stress, strain and displacement fields in the fibre and matrix is used at the subscale. The 2-CYL fibre-matrix concentric cylinder model is extended to fibre and (N-1) matrix layers, keeping the volume fraction constant, and hence is called the NCYL model where the matrix damage can be captured locally within each discrete layer of the matrix volume. The influence of matrix microdamage at the subscale causes progressive degradation of fibre tow stiffness and matrix stiffness at the macroscale. The global RUC stiffness matrix remains positive definite until the strain softening response resulting from different failure modes (such as fibre tow breakage, tow splitting in the transverse direction due to matrix cracking inside tow and surrounding matrix tensile failure outside of fibre tows) is initiated. At this stage, the macroscopic post-peak softening response is modelled using the mesh objective smeared crack approach (Rots et al. 1985 HERON 30, 1-48; Heinrich and Waas 2012 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Honolulu, HI, 23-26 April 2012. AIAA 2012-1537). Manufacturing-induced geometric imperfections are included in the simulation, where the FE mesh of the unit cell is generated directly from micro-computed tomography (MCT) real data using the code Simpleware. Results from multi-scale analysis for both an idealized perfect geometry and one that includes geometric imperfections are compared with experimental results (Pankow et al. 2012 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Honolulu, HI, 23-26 April 2012. AIAA 2012-1572). This article is part of the themed issue 'Multiscale modelling of the structural integrity of composite materials'. © 2016 The Author(s).

  20. Computational analysis of an axial flow pediatric ventricular assist device.

    PubMed

    Throckmorton, Amy L; Untaroiu, Alexandrina; Allaire, Paul E; Wood, Houston G; Matherne, Gaynell Paul; Lim, David Scott; Peeler, Ben B; Olsen, Don B

    2004-10-01

    Longer-term (>2 weeks) mechanical circulatory support will provide an improved quality of life for thousands of pediatric cardiac failure patients per year in the United States. These pediatric patients suffer from severe congenital or acquired heart disease complicated by congestive heart failure. There are currently very few mechanical circulatory support systems available in the United States as viable options for this population. For that reason, we have designed an axial flow pediatric ventricular assist device (PVAD) with an impeller that is fully suspended by magnetic bearings. As a geometrically similar, smaller scaled version of our axial flow pump for the adult population, the PVAD has a design point of 1.5 L/min at 65 mm Hg to meet the full physiologic needs of pediatric patients. Conventional axial pump design equations and a nondimensional scaling technique were used to estimate the PVAD's initial dimensions, which allowed for the creation of computational models for performance analysis. A computational fluid dynamic analysis of the axial flow PVAD, which measures approximately 65 mm in length by 35 mm in diameter, shows that the pump will produce 1.5 L/min at 65 mm Hg at 8000 rpm. Fluid forces (approximately 1 N) were also determined for the suspension and motor design, and scalar stress values remained below 350 Pa with maximum particle residence times of approximately 0.08 milliseconds in the pump. This initial design demonstrated acceptable performance, thereby encouraging prototype manufacturing for experimental validation.
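
    The nondimensional scaling mentioned above can be illustrated with the classical pump affinity laws; the Python sketch below is only a generic illustration of that kind of scaling, and the adult-pump baseline numbers are hypothetical, not values from the paper.

        def scale_pump(Q1, H1, N1, D1, N2, D2):
            """Scale flow Q [L/min] and head H [mm Hg] between geometrically
            similar pumps using the affinity laws: Q ~ N*D^3, H ~ N^2*D^2."""
            Q2 = Q1 * (N2 / N1) * (D2 / D1) ** 3
            H2 = H1 * (N2 / N1) ** 2 * (D2 / D1) ** 2
            return Q2, H2

        # Hypothetical adult baseline: 6 L/min at 100 mm Hg, 6000 rpm, 12 mm rotor.
        Q2, H2 = scale_pump(Q1=6.0, H1=100.0, N1=6000.0, D1=0.012, N2=8000.0, D2=0.008)
        print(f"Scaled design point: {Q2:.2f} L/min at {H2:.0f} mm Hg")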

  1. Women and Computers: Effects of Stereotype Threat on Attribution of Failure

    ERIC Educational Resources Information Center

    Koch, Sabine C.; Muller, Stephanie M.; Sieverding, Monika

    2008-01-01

    This study investigated whether stereotype threat can influence women's attributions of failure in a computer task. Male and female college-age students (n = 86, 16-21 years old) from Germany were asked to work on a computer task and were told beforehand that in this task, either (a) men usually perform better than women do (negative threat…

  2. Reliability Quantification of Advanced Stirling Convertor (ASC) Components

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward

    2010-01-01

    The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system and component level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationship among the design variables based on physics, mechanics, material behavior models, interaction of different components and their respective disciplines such as structures, materials, fluid, thermal, mechanical, electrical, etc. In addition, these models are based on the available test data, which can be updated, and the analysis refined as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationship with other components has demonstrated itself to be a superior technique to conventional reliability approaches based on utilizing failure rates derived from similar equipment or simply expert judgment.

  3. Cervical Gross Tumor Volume Dose Predicts Local Control Using Magnetic Resonance Imaging/Diffusion-Weighted Imaging—Guided High-Dose-Rate and Positron Emission Tomography/Computed Tomography—Guided Intensity Modulated Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dyk, Pawel; Jiang, Naomi; Sun, Baozhou

    2014-11-15

    Purpose: Magnetic resonance imaging/diffusion weighted-imaging (MRI/DWI)-guided high-dose-rate (HDR) brachytherapy and ¹⁸F-fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT)-guided intensity modulated radiation therapy (IMRT) for the definitive treatment of cervical cancer is a novel treatment technique. The purpose of this study was to report our analysis of dose-volume parameters predicting gross tumor volume (GTV) control. Methods and Materials: We analyzed the records of 134 patients with International Federation of Gynecology and Obstetrics stages IB1-IVB cervical cancer treated with combined MRI-guided HDR and IMRT from July 2009 to July 2011. IMRT was targeted to the metabolic tumor volume and lymph nodes by use of FDG-PET/CT simulation. The GTV for each HDR fraction was delineated by use of T2-weighted or apparent diffusion coefficient maps from diffusion-weighted sequences. The D100, D90, and Dmean delivered to the GTV from HDR and IMRT were summed to EQD2. Results: One hundred twenty-five patients received all irradiation treatment as planned, and 9 did not complete treatment. All 134 patients are included in this analysis. Treatment failure in the cervix occurred in 24 patients (18.0%). Patients with cervix failures had a lower D100, D90, and Dmean than those who did not experience failure in the cervix. The respective doses to the GTV were 41, 58, and 136 Gy for failures compared with 67, 99, and 236 Gy for those who did not experience failure (P<.001). Probit analysis estimated the minimum D100, D90, and Dmean doses required for ≥90% local control to be 69, 98, and 260 Gy (P<.001). Conclusions: Total dose delivered to the GTV from combined MRI-guided HDR and PET/CT-guided IMRT is highly correlated with local tumor control. The findings can be directly applied in the clinic for dose adaptation to maximize local control.

  4. Computational mechanics research and support for aerodynamics and hydraulics at TFHRC, year 2 quarter 1 progress report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lottes, S.A.; Bojanowski, C.; Shen, J.

    2012-04-09

    The computational fluid dynamics (CFD) and computational structural mechanics (CSM) focus areas at Argonne's Transportation Research and Analysis Computing Center (TRACC) initiated a project to support and complement the experimental programs at the Turner-Fairbank Highway Research Center (TFHRC) with high performance computing based analysis capabilities in August 2010. The project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at TFHRC for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water effects on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risks that these loads pose to structural failure. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to improve design allowing for fish passage, modeling of the salt spray transport into bridge girders to address suitability of using weathering steel in bridges, and CFD analysis of the operation of the wind tunnel in the TFHRC wind engineering laboratory. This quarterly report documents technical progress on the project tasks for the period of October through December 2011.

  5. Computational mechanics research and support for aerodynamics and hydraulics at TFHRC, year 2 quarter 2 progress report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lottes, S.A.; Bojanowski, C.; Shen, J.

    2012-06-28

    The computational fluid dynamics (CFD) and computational structural mechanics (CSM) focus areas at Argonne's Transportation Research and Analysis Computing Center (TRACC) initiated a project to support and complement the experimental programs at the Turner-Fairbank Highway Research Center (TFHRC) with high performance computing based analysis capabilities in August 2010. The project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at TFHRC for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water effects on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risks that these loads pose to structural failure. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to improve design allowing for fish passage, modeling of the salt spray transport into bridge girders to address suitability of using weathering steel in bridges, and CFD analysis of the operation of the wind tunnel in the TFHRC wind engineering laboratory. This quarterly report documents technical progress on the project tasks for the period of January through March 2012.

  6. Towards Prognostics of Power MOSFETs: Accelerated Aging and Precursors of Failure

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Saxena, Abhinav; Wysocki, Philip; Saha, Sankalita; Goebel, Kai

    2010-01-01

    This paper presents research results dealing with power MOSFETs (metal oxide semiconductor field effect transistor) within the prognostics and health management of electronics. Experimental results are presented for the identification of the on-resistance as a precursor to failure of devices with die-attach degradation as a failure mechanism. Devices are aged under power cycling in order to trigger die-attach damage. In situ measurements of key electrical and thermal parameters are collected throughout the aging process and further used for analysis and computation of the on-resistance parameter. Experimental results show that the devices experience die-attach damage and that the on-resistance captures the degradation process in such a way that it could be used for the development of prognostics algorithms (data-driven or physics-based).
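
    As a concrete illustration of the on-resistance precursor, the short Python sketch below computes R_DS(on) = V_DS / I_D from in-situ measurements and tracks its relative drift over aging snapshots; the measurement values and the 5% alarm threshold are invented for illustration, not taken from the experiments.

        import numpy as np

        v_ds = np.array([0.51, 0.52, 0.55, 0.60, 0.68])  # drain-source voltage [V] per aging snapshot
        i_d = np.full(5, 10.0)                           # drain current [A], constant load
        r_on = v_ds / i_d                                # on-resistance estimates

        drift = (r_on - r_on[0]) / r_on[0]               # relative increase vs. pristine device
        print("R_on [ohm]:", np.round(r_on, 4))
        print("Relative drift:", np.round(drift, 3))

        # A simple data-driven rule might flag degradation once drift exceeds a
        # threshold established from run-to-failure data (illustrative: 5%).
        print("Degradation alarm:", drift > 0.05)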

  7. Expert systems for automated maintenance of a Mars oxygen production system

    NASA Technical Reports Server (NTRS)

    Ash, Robert L.; Huang, Jen-Kuang; Ho, Ming-Tsang

    1989-01-01

    A prototype expert system was developed for maintaining autonomous operation of a Mars oxygen production system. Normal operation conditions and failure modes according to certain desired criteria are tested and identified. Several schemes for failure detection and isolation, using forward chaining, backward chaining, and knowledge-based and rule-based reasoning, are devised to perform several housekeeping functions. These functions include self-health checkout, an emergency shutdown program, fault detection, and conventional control activities. An effort was made to derive the dynamic model of the system using the Bond-Graph technique in order to develop a model-based failure detection and isolation scheme by an estimation method. Finally, computer simulations and experimental results demonstrated the feasibility of the expert system, and a preliminary reliability analysis for the oxygen production system is also provided.

  8. Operations analysis (study 2.1): Program manual and users guide for the LOVES computer code

    NASA Technical Reports Server (NTRS)

    Wray, S. T., Jr.

    1975-01-01

    The information necessary to use the LOVES Computer Program in its existing state, or to modify the program to include studies not properly handled by the basic model, is provided. The Users Guide defines the basic elements assembled together to form the model for servicing satellites in orbit. Because the program is a simulation, the method of attack is to decompose the problem into a sequence of events, each occurring instantaneously and each creating one or more other events in the future. The main driving force of the simulation is the deterministic launch schedule of satellites and the subsequent failure of the various modules which make up the satellites. The LOVES Computer Program uses a random number generator to simulate the failure of module elements and therefore operates over a long span of time, typically 10 to 15 years. The sequence of events is varied by making several runs in succession with different random numbers, resulting in a Monte Carlo technique to determine statistical parameters of minimum value, average value, and maximum value.
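
    The Monte Carlo character of the simulation described above can be sketched in a few lines of Python; the fleet size, module count, failure rate, and horizon below are invented for illustration and do not reproduce the LOVES code.

        import random

        def one_run(n_satellites=10, modules_per_sat=4, mttf_years=5.0, horizon=15.0, rng=None):
            """Count module failures within the horizon for one random replication."""
            rng = rng or random.Random()
            failures = 0
            for _ in range(n_satellites * modules_per_sat):
                t_fail = rng.expovariate(1.0 / mttf_years)  # exponential time to failure
                if t_fail < horizon:
                    failures += 1
            return failures

        runs = [one_run(rng=random.Random(seed)) for seed in range(200)]
        print("min / mean / max module failures over 15 years:",
              min(runs), sum(runs) / len(runs), max(runs))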

  9. Computer codes for thermal analysis of a solid rocket motor nozzle

    NASA Technical Reports Server (NTRS)

    Chauhan, Rajinder Singh

    1988-01-01

    A number of computer codes are available for performing thermal analysis of solid rocket motor nozzles. The Aerotherm Chemical Equilibrium (ACE) computer program can be used to perform a one-dimensional gas expansion to determine the state of the gas at each location of a nozzle. The ACE outputs can be used as input to a computer program called Momentum/Energy Integral Technique (MEIT) for predicting boundary layer development, shear, and heating on the surface of the nozzle. The output from MEIT can be used as input to another computer program called the Aerotherm Charring Material Thermal Response and Ablation Program (CMA). This program is used to calculate the ablation or decomposition response of the nozzle material. A code called Failure Analysis Nonlinear Thermal and Structural Integrated Code (FANTASTIC) is also likely to be used for performing thermal analysis of solid rocket motor nozzles after the program is duly verified. A part of the verification work on FANTASTIC was done by using one- and two-dimensional heat transfer examples with known answers. An attempt was made to prepare input for performing thermal analysis of the CCT nozzle using the FANTASTIC computer code. The CCT nozzle problem will first be solved by using ACE, MEIT, and CMA. The same problem will then be solved using FANTASTIC. These results will then be compared for verification of FANTASTIC.

  10. Reliability and cost analysis methods

    NASA Technical Reports Server (NTRS)

    Suich, Ronald C.

    1991-01-01

    In the design phase of a system, how does a design engineer or manager choose between a subsystem with .990 reliability and a more costly subsystem with .995 reliability? When is the increased cost justified? High reliability is not necessarily an end in itself but may be desirable in order to reduce the expected cost due to subsystem failure. However, this may not be the wisest use of funds since the expected cost due to subsystem failure is not the only cost involved. The subsystem itself may be very costly. We should not consider either the cost of the subsystem or the expected cost due to subsystem failure separately but should minimize the total of the two costs, i.e., the total of the cost of the subsystem plus the expected cost due to subsystem failure. This final report discusses the Combined Analysis of Reliability, Redundancy, and Cost (CARRAC) methods which were developed under Grant Number NAG 3-1100 from the NASA Lewis Research Center. CARRAC methods and a CARRAC computer program employ five models which can be used to cover a wide range of problems. The models contain an option which can include repair of failed modules.
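
    A worked example of the trade-off described above, with hypothetical numbers, makes the point concrete: the total expected cost is the subsystem cost plus (1 - reliability) times the cost incurred if the subsystem fails.

        failure_cost = 10_000_000  # hypothetical expected cost of a subsystem failure [$]
        options = {
            "subsystem A": {"reliability": 0.990, "cost": 200_000},
            "subsystem B": {"reliability": 0.995, "cost": 300_000},
        }

        for name, opt in options.items():
            total = opt["cost"] + (1.0 - opt["reliability"]) * failure_cost
            print(f"{name}: total expected cost = ${total:,.0f}")

    With these particular numbers, subsystem B's extra $100,000 purchase price buys only a $50,000 reduction in expected failure cost, so the cheaper, less reliable subsystem A minimizes the total.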

  11. Advantage of the modified Lunn-McNeil technique over Kalbfleisch-Prentice technique in competing risks

    NASA Astrophysics Data System (ADS)

    Lukman, Iing; Ibrahim, Noor A.; Daud, Isa B.; Maarof, Fauziah; Hassan, Mohd N.

    2002-03-01

    Survival analysis algorithms are often applied in the data mining process. Cox regression is one of the survival analysis tools that has been used in many areas, and it can be used to analyze the failure times of crashed aircraft. Another survival analysis tool is competing risks analysis, where more than one cause of failure acts simultaneously. Lunn-McNeil analyzed competing risks in the survival model using Cox regression with censored data. The modified Lunn-McNeil technique is a simplification of the Lunn-McNeil technique. The Kalbfleisch-Prentice technique involves fitting separate models for each type of failure, treating the other failure types as censored. To compare the two techniques (the modified Lunn-McNeil and the Kalbfleisch-Prentice), a simulation study was performed. Samples with various sizes and censoring percentages were generated and fitted using both techniques. The study was conducted by comparing the inference of the models using root mean square error (RMSE), power tests, and Schoenfeld residual analysis. The power tests in this study were the likelihood ratio test, the Rao score test, and the Wald statistic. The Schoenfeld residual analysis was conducted to check the proportionality of the model through its covariates. The estimated parameters were computed for the cause-specific hazard situation. Results showed that the modified Lunn-McNeil technique was better than the Kalbfleisch-Prentice technique based on the RMSE measurement and Schoenfeld residual analysis. However, the Kalbfleisch-Prentice technique was better than the modified Lunn-McNeil technique based on the power test measurement.

  12. Agent autonomy approach to probabilistic physics-of-failure modeling of complex dynamic systems with interacting failure mechanisms

    NASA Astrophysics Data System (ADS)

    Gromek, Katherine Emily

    A novel computational and inference framework for physics-of-failure (PoF) reliability modeling of complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that system-level reliability modeling constitutes inferences from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e., PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference approach in the modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and autonomy properties of the intelligent agents, and modeling of interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as an agent's ability to self-activate, deactivate, or completely redefine its role in the analysis. This property of agents and the ability to model interacting failure mechanisms of the system elements make the agent autonomy approach fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.

  13. Academic-Community Hospital Comparison of Vulnerabilities in Door-to-Needle Process for Acute Ischemic Stroke.

    PubMed

    Prabhakaran, Shyam; Khorzad, Rebeca; Brown, Alexandra; Nannicelli, Anna P; Khare, Rahul; Holl, Jane L

    2015-10-01

    Although best practices have been developed for achieving door-to-needle (DTN) times ≤60 minutes for stroke thrombolysis, critical DTN process failures persist. We sought to compare these failures in the Emergency Department at an academic medical center and a community hospital. Failure modes, effects, and criticality analysis was used to identify system and process failures. Multidisciplinary teams involved in DTN care participated in moderated sessions at each site. As a result, DTN process maps were created, and potential failures and their causes, frequency, severity, and existing safeguards were identified. For each failure, a risk priority number and criticality score were calculated; failures were then ranked, with the highest scores representing the most critical failures and targets for intervention. We detected a total of 70 failures in 50 process steps and 76 failures in 42 process steps at the community hospital and academic medical center, respectively. At the community hospital, critical failures included (1) delay in registration because of Emergency Department overcrowding, (2) incorrect triage diagnosis among walk-in patients, and (3) delay in obtaining consent for thrombolytic treatment. At the academic medical center, critical failures included (1) incorrect triage diagnosis among walk-in patients, (2) delay in stroke team activation, and (3) delay in obtaining computed tomographic imaging. Although the identification of common critical failures suggests opportunities for a generalizable process redesign, differences in the criticality and nature of failures must be addressed at the individual hospital level to develop robust and sustainable solutions to reduce DTN time. © 2015 American Heart Association, Inc.

  14. Radiographic methods of wear analysis in total hip arthroplasty.

    PubMed

    Rahman, Luthfur; Cobb, Justin; Muirhead-Allwood, Sarah

    2012-12-01

    Polyethylene wear is an important factor in failure of total hip arthroplasty (THA). With increasing numbers of THAs being performed worldwide, particularly in younger patients, the burden of failure and revision arthroplasty is increasing, along with the associated costs and workload. Various radiographic methods of measuring polyethylene wear have been developed to assist in deciding when to monitor patients more closely and when to consider revision surgery. Radiographic methods that have been developed to measure polyethylene wear include manual and computer-assisted plain radiography, two- and three-dimensional techniques, and radiostereometric analysis. Some of these methods are important in both clinical and research settings. CT has the potential to provide additional information on component orientation and enables assessment of periprosthetic osteolysis, which is an important consequence of polyethylene wear.

  15. Heart failure analysis dashboard for patient's remote monitoring combining multiple artificial intelligence technologies.

    PubMed

    Guidi, G; Pettenati, M C; Miniati, R; Iadanza, E

    2012-01-01

    In this paper we describe a Heart Failure analysis Dashboard that, combined with a handy device for the automatic acquisition of a set of the patient's clinical parameters, supports telemonitoring functions. The Dashboard's intelligent core is a Computer Decision Support System designed to assist the clinical decisions of non-specialist caring personnel, and it is based on three functional parts: Diagnosis, Prognosis, and Follow-up management. Four Artificial Intelligence-based techniques are compared for providing the diagnosis function: a Neural Network, a Support Vector Machine, a Classification Tree, and a Fuzzy Expert System whose rules are produced by a Genetic Algorithm. State-of-the-art algorithms are used to support a score-based prognosis function. The patient's Follow-up is used to refine the diagnosis.

  16. An Analysis of Mathematics Interventions: Increased Time-on-Task Compared with Computer-Assisted Mathematics Instruction

    ERIC Educational Resources Information Center

    Calhoun, James M., Jr.

    2011-01-01

    Student achievement in mathematics is not progressing, as measured by state, national, and international assessments. Much of the research points to the mathematics curriculum and instruction as the root cause of student failure to achieve at levels comparable to other nations. Since mathematics is regarded as a gatekeeper to many educational…

  17. Experimental and finite element investigation of the buckling characteristics of a beaded skin panel for a hypersonic aircraft. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Siegel, W. H.

    1978-01-01

    As part of NASA's continuing research into hypersonics, an 85 square foot hypersonic wing test section of a proposed hypersonic research airplane was laboratory tested. The project reported on in this paper has carried the hypersonic wing test structure project one step further by testing a single beaded panel to failure. The primary interest was focused upon the buckling characteristics of the panel under pure compression with boundary conditions similar to those found in a wing-mounted condition. Three primary phases of analysis are included in the report. These phases include: experimental testing of the beaded panel to failure; finite element structural analysis of the beaded panel with the computer program NASTRAN; and a summary of the semiclassical buckling equations for the beaded panel under purely compressive loads. Comparisons between each of the analysis methods are also included.

  18. Progressive Fracture of Composite Structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Minnetyan, Levon

    2008-01-01

    A new approach is described for evaluating fracture in composite structures. This approach is independent of classical fracture mechanics parameters like fracture toughness. It relies on computational simulation and is programmed in a stand-alone integrated computer code. It is multiscale and multifunctional because it includes composite mechanics for the composite behavior and finite element analysis for predicting the structural response. It contains seven modules: layered composite mechanics (micro, macro, laminate), finite element analysis, an updating scheme, local fracture, global fracture, stress-based failure modes, and fracture progression. The computer code is called CODSTRAN (Composite Durability Structural Analysis). It is used in the present paper to evaluate the global fracture of four composite shell problems and one composite built-up structure. Results show that global fracture of the composite shells and the built-up composite structure is enhanced when internal pressure is combined with shear loads.

  19. Adaptive Crack Modeling with Interface Solid Elements for Plain and Fiber Reinforced Concrete Structures.

    PubMed

    Zhan, Yijian; Meschke, Günther

    2017-07-08

    The effective analysis of the nonlinear behavior of cement-based engineering structures not only demands physically-reliable models, but also computationally-efficient algorithms. Based on a continuum interface element formulation that is suitable to capture complex cracking phenomena in concrete materials and structures, an adaptive mesh processing technique is proposed for computational simulations of plain and fiber-reinforced concrete structures to progressively disintegrate the initial finite element mesh and to add degenerated solid elements into the interfacial gaps. In comparison with the implementation where the entire mesh is processed prior to the computation, the proposed adaptive cracking model allows simulating the failure behavior of plain and fiber-reinforced concrete structures with remarkably reduced computational expense.

  20. Adaptive Crack Modeling with Interface Solid Elements for Plain and Fiber Reinforced Concrete Structures

    PubMed Central

    Zhan, Yijian

    2017-01-01

    The effective analysis of the nonlinear behavior of cement-based engineering structures not only demands physically-reliable models, but also computationally-efficient algorithms. Based on a continuum interface element formulation that is suitable to capture complex cracking phenomena in concrete materials and structures, an adaptive mesh processing technique is proposed for computational simulations of plain and fiber-reinforced concrete structures to progressively disintegrate the initial finite element mesh and to add degenerated solid elements into the interfacial gaps. In comparison with the implementation where the entire mesh is processed prior to the computation, the proposed adaptive cracking model allows simulating the failure behavior of plain and fiber-reinforced concrete structures with remarkably reduced computational expense. PMID:28773130

  1. Reliability analysis of redundant systems. [a method to compute transition probabilities

    NASA Technical Reports Server (NTRS)

    Yeh, H. Y.

    1974-01-01

    A method is proposed to compute the transition probability (the probability of partial or total failure) of a parallel redundant system. The effects of system geometry, load direction, and degree of redundancy on the probability of complete survival of a parachute-like system are also studied. The results show that the probability of complete survival of a three-member parachute-like system is very sensitive to variation in the horizontal angle of the load. However, this sensitivity becomes insignificant as the degree of redundancy increases.
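
    For independent, identical members, the survival probabilities discussed above follow directly from the binomial distribution; the sketch below is a generic illustration of that calculation and is not the paper's method, which also accounts for geometry and load direction.

        from math import comb

        def survival_distribution(n, p):
            """P(exactly k of n independent members survive), for k = 0..n."""
            return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

        n, p = 3, 0.95                  # three-member system, 95% member reliability (illustrative)
        dist = survival_distribution(n, p)
        print("P(complete survival) =", dist[n])            # 0.95**3 ≈ 0.857
        print("P(partial or total failure) =", 1 - dist[n])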

  2. Experimental analysis of computer system dependability

    NASA Technical Reports Server (NTRS)

    Iyer, Ravishankar K.; Tang, Dong

    1993-01-01

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: the design phase, the prototype phase, and the operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
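
    Importance sampling, mentioned above as a way to accelerate Monte Carlo simulation, can be illustrated with a toy rare-event problem; the example below estimates a Gaussian tail probability and is not drawn from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n, threshold, shift = 100_000, 4.0, 4.0

        # "Failure" is X > 4 for X ~ N(0, 1). Sample from a proposal N(4, 1) that
        # concentrates points in the failure region, then reweight each sample by
        # the density ratio N(0, 1) / N(4, 1).
        x = rng.normal(loc=shift, scale=1.0, size=n)
        log_w = -0.5 * x**2 + 0.5 * (x - shift) ** 2
        estimate = np.mean((x > threshold) * np.exp(log_w))

        print("Importance-sampling estimate:", estimate)   # true value is about 3.17e-5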

  3. Effect of Discontinuities and Uncertainties on the Response and Failure of Composite Structures

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Perry, Ferman W.; Poteat, Marcia M. (Technical Monitor)

    2000-01-01

    The overall goal of this research was to assess the effect of discontinuities and uncertainties on the nonlinear response and failure of composite structures subjected to combined mechanical and thermal loads. The four key elements of the study were: (1) development of simple and efficient procedures for the accurate determination of transverse shear and transverse normal stresses in structural sandwiches as well as in unstiffened and stiffened composite panels and shells; (2) study the effects of transverse stresses on the response, damage initiation and propagation in composite and sandwich structures; (3) use of hierarchical sensitivity coefficients to identify the major parameters that affect the response and damage in each of the different levels in the hierarchy (micro-mechanical, layer, panel, subcomponent and component levels); and (4) application of fuzzy set techniques to identify the range and variation of possible responses. The computational models developed were used in conjunction with experiments, to understand the physical phenomena associated with the nonlinear response and failure of composite and sandwich structures. A toolkit was developed for use in conjunction with deterministic analysis programs to help the designer in assessing the effect of uncertainties in the different computational model parameters on the variability of the response quantities.

  4. Ballistic-Failure Mechanisms in Gas Metal Arc Welds of MIL A46100 Armor-Grade Steel: A Computational Investigation

    DTIC Science & Technology

    2014-06-12

    In our recent work, a multi-physics computational model for the... introduction of the sixth module in the present work in recognition of the fact that in thick steel GMAW weldments, the overall ballistic performance...

  5. Validation of Computerized Automatic Calculation of the Sequential Organ Failure Assessment Score

    PubMed Central

    Harrison, Andrew M.; Pickering, Brian W.; Herasevich, Vitaly

    2013-01-01

    Purpose. To validate the use of a computer program for the automatic calculation of the sequential organ failure assessment (SOFA) score, as compared to the gold standard of manual chart review. Materials and Methods. Adult admissions (age > 18 years) to the medical ICU with a length of stay greater than 24 hours were studied in the setting of an academic tertiary referral center. A retrospective cross-sectional analysis was performed using a derivation cohort to compare automatic calculation of the SOFA score to the gold standard of manual chart review. After critical appraisal of sources of disagreement, another analysis was performed using an independent validation cohort. Then, a prospective observational analysis was performed using an implementation of this computer program in AWARE Dashboard, which is an existing real-time patient EMR system for use in the ICU. Results. Good agreement between the manual and automatic SOFA calculations was observed for both the derivation (N=94) and validation (N=268) cohorts: 0.02 ± 2.33 and 0.29 ± 1.75 points, respectively. These results were validated in AWARE (N=60). Conclusion. This EMR-based automatic tool accurately calculates SOFA scores and can facilitate ICU decisions without the need for manual data collection. This tool can also be employed in a real-time electronic environment. PMID:23936639
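
    A minimal sketch of what such automatic subscore calculation can look like is given below; only two of the six SOFA organ systems are shown, the cutoffs are commonly cited values included purely for illustration, and a production implementation would have to be verified against the institution's full SOFA definition and EMR field mappings.

        def coagulation_score(platelets_k_per_uL: float) -> int:
            # Illustrative cutoffs: <20 -> 4, <50 -> 3, <100 -> 2, <150 -> 1, else 0.
            for limit, score in [(20, 4), (50, 3), (100, 2), (150, 1)]:
                if platelets_k_per_uL < limit:
                    return score
            return 0

        def renal_score(creatinine_mg_dL: float) -> int:
            # Illustrative cutoffs: >=5.0 -> 4, >=3.5 -> 3, >=2.0 -> 2, >=1.2 -> 1, else 0.
            if creatinine_mg_dL >= 5.0: return 4
            if creatinine_mg_dL >= 3.5: return 3
            if creatinine_mg_dL >= 2.0: return 2
            if creatinine_mg_dL >= 1.2: return 1
            return 0

        # Example with made-up lab values pulled automatically from the record.
        print(coagulation_score(85), renal_score(2.4))   # -> 2 2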

  6. Revisiting of Multiscale Static Analysis of Notched Laminates Using the Generalized Method of Cells

    NASA Technical Reports Server (NTRS)

    Naghipour Ghezeljeh, Paria; Arnold, Steven M.; Pineda, Evan J.

    2016-01-01

    Composite material systems generally exhibit a range of behavior on different length scales (from constituent level to macro); therefore, a multiscale framework is beneficial for the design and engineering of these material systems. The complex nature of the observed composite failure during experiments suggests the need for a three-dimensional (3D) multiscale model to attain a reliable prediction. However, the size of a multiscale three-dimensional finite element model can become prohibitively large and computationally costly. Two-dimensional (2D) models are preferred due to computational efficiency, especially if many different configurations have to be analyzed for an in-depth damage tolerance and durability design study. In this study, various 2D and 3D multiscale analyses will be employed to conduct a detailed investigation into the tensile failure of a given multidirectional, notched carbon fiber reinforced polymer laminate. Three-dimensional finite element analysis is typically considered more accurate than a 2D finite element model, as compared with experiments. Nevertheless, in the absence of adequate mesh refinement, large differences may be observed between a 2D and 3D analysis, especially for a shear-dominated layup. This observed difference has not been widely addressed in previous literature and is the main focus of this paper.

  7. Design of ceramic components with the NASA/CARES computer program

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Manderscheid, Jane M.; Gyekenyesi, John P.

    1990-01-01

    The ceramics analysis and reliability evaluation of structures (CARES) computer program is described. The primary function of the code is to calculate the fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. These components may be subjected to complex thermomechanical loadings, such as those found in heat engine applications. CARES uses results from MSC/NASTRAN or ANSYS finite-element analysis programs to evaluate how inherent surface and/or volume type flaws affect component reliability. CARES utilizes the Batdorf model and the two-parameter Weibull cumulative distribution function to describe the effects of multiaxial stress states on material strength. The principle of independent action (PIA) and the Weibull normal stress averaging models are also included. Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities are estimated from four-point bend bar or uniform uniaxial tensile specimen fracture strength data. Parameter estimation can be performed for a single or multiple failure modes by using a least-squares analysis or a maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests, 90 percent confidence intervals on the Weibull parameters, and Kanofsky-Srinivasan 90 percent confidence band values are also provided. Examples are provided to illustrate the various features of CARES.
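
    The flavor of a principle-of-independent-action (PIA) calculation with a two-parameter Weibull strength distribution can be sketched as follows; this is a toy illustration with made-up material parameters and element stresses, not the CARES code or its exact formulation.

        from math import exp

        m, sigma_0 = 10.0, 400.0   # Weibull modulus and scale parameter (illustrative; unit volume folded in)

        # Per-element volume [m^3] and principal stresses [MPa]; compressive values are ignored.
        elements = [
            (1e-8, (250.0, 60.0, 0.0)),
            (1e-8, (300.0, 20.0, 0.0)),
            (2e-8, (180.0, 0.0, 0.0)),
        ]

        # PIA risk of rupture: sum over elements and tensile principal stresses of V * (sigma / sigma_0)^m.
        risk = sum(V * sum((s / sigma_0) ** m for s in stresses if s > 0)
                   for V, stresses in elements)
        p_f = 1.0 - exp(-risk)
        print(f"Fast-fracture failure probability: {p_f:.3e}")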

  8. Discriminating between stabilizing and destabilizing protein design mutations via recombination and simulation.

    PubMed

    Johnson, Lucas B; Gintner, Lucas P; Park, Sehoo; Snow, Christopher D

    2015-08-01

    Accuracy of current computational protein design (CPD) methods is limited by inherent approximations in energy potentials and sampling. These limitations are often used to qualitatively explain design failures; however, relatively few studies provide specific examples or quantitative details that can be used to improve future CPD methods. Expanding the design method to include a library of sequences provides data that is well suited for discriminating between stabilizing and destabilizing design elements. Using thermophilic endoglucanase E1 from Acidothermus cellulolyticus as a model enzyme, we computationally designed a sequence with 60 mutations. The design sequence was rationally divided into structural blocks and recombined with the wild-type sequence. Resulting chimeras were assessed for activity and thermostability. Surprisingly, unlike previous chimera libraries, regression analysis based on one- and two-body effects was not sufficient for predicting chimera stability. Analysis of molecular dynamics simulations proved helpful in distinguishing stabilizing and destabilizing mutations. Reverting to the wild-type amino acid at destabilized sites partially regained design stability, and introducing predicted stabilizing mutations in wild-type E1 significantly enhanced thermostability. The ability to isolate stabilizing and destabilizing elements in computational design offers an opportunity to interpret previous design failures and improve future CPD methods. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Risk-based maintenance of ethylene oxide production facilities.

    PubMed

    Khan, Faisal I; Haddara, Mahmoud R

    2004-05-20

    This paper discusses a methodology for the design of an optimum inspection and maintenance program. The methodology, called risk-based maintenance (RBM), is based on integrating a reliability approach and a risk assessment strategy to obtain an optimum maintenance schedule. First, the likely equipment failure scenarios are formulated. Out of the many likely failure scenarios, the most probable ones are subjected to a detailed study. Detailed consequence analysis is done for the selected scenarios. Subsequently, these failure scenarios are subjected to a fault tree analysis to determine their probabilities. Finally, risk is computed by combining the results of the consequence and the probability analyses. The calculated risk is compared against known acceptable criteria. The frequencies of the maintenance tasks are obtained by minimizing the estimated risk. A case study involving an ethylene oxide production facility is presented. Out of the five most hazardous units considered, the pipeline used for the transportation of the ethylene is found to have the highest risk. Using available failure data and a lognormal reliability distribution function, human health risk factors are calculated. Both societal risk factors and individual risk factors exceeded the acceptable risk criteria. To determine an optimal maintenance interval, a reverse fault tree analysis was used. The maintenance interval was determined such that the original high risk is brought down to an acceptable level. A sensitivity analysis was also undertaken to study the impact of changing the distribution of the reliability model, as well as the error in the distribution parameters, on the maintenance interval.
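
    The core risk calculation can be illustrated with a deliberately simplified sketch; here an exponential reliability model is assumed (the paper uses a lognormal distribution), and all numbers are invented. The inspection interval is stretched until the estimated risk reaches the acceptance criterion.

        from math import exp

        failure_rate = 1.0e-3      # failures per day (illustrative)
        consequence = 5.0e6        # consequence of one failure, in cost-equivalent units (illustrative)
        acceptable_risk = 5.0e4    # acceptance criterion in the same units (illustrative)

        def risk(interval_days: float) -> float:
            p_fail = 1.0 - exp(-failure_rate * interval_days)   # probability of failure within the interval
            return p_fail * consequence

        interval = 1
        while risk(interval + 1) <= acceptable_risk:
            interval += 1
        print(f"Largest acceptable inspection interval: {interval} days (risk = {risk(interval):,.0f})")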

  10. Life-cycle costs of high-performance cells

    NASA Technical Reports Server (NTRS)

    Daniel, R.; Burger, D.; Reiter, L.

    1985-01-01

    A life cycle cost analysis of high efficiency cells was presented. Although high efficiency cells produce more power, they also cost more to make and are more susceptible to array hot-spot heating. Three different computer analysis programs were used: SAMICS (solar array manufacturing industry costing standards), PVARRAY (an array failure mode/degradation simulator), and LCP (lifetime cost and performance). The high efficiency cell modules were found to be more economical in this study, but parallel redundancy is recommended.

  11. Compilation of Abstracts of Theses Submitted by Candidates for Degrees: October 1990 to September 1991

    DTIC Science & Technology

    1991-09-30

    Fragments of the compiled thesis abstracts include: "...Tool (ASSET)"; Computer Science: Vicki Sue Abel, Lieutenant Commander, U.S. Navy, "VIEWER - A User Interface for Failure Region Analysis..."; "...California Current System using a Primitive Equation Model"; Charles C. McGlothin, Jr., Lieutenant, U.S. Navy, "Ambient Sound in the Ocean Induced by Heavy..."; and "...parameters, and ambient flow/oscillating flow combinations using VAX-3520 and NASA's Supercomputers. Extensive sensitivity analysis has been performed..."

  12. Fault tolerance in a supercomputer through dynamic repartitioning

    DOEpatents

    Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Takken, Todd E.

    2007-02-27

    A multiprocessor, parallel computer is made tolerant to hardware failures by providing extra groups of redundant standby processors and by designing the system so that these extra groups of processors can be swapped with any group which experiences a hardware failure. This swapping can be under software control, thereby permitting the entire computer to sustain a hardware failure but, after swapping in the standby processors, to still appear to software as a pristine, fully functioning system.

  13. Public Risk Assessment Program

    NASA Technical Reports Server (NTRS)

    Mendeck, Gavin

    2010-01-01

    The Public Entry Risk Assessment (PERA) program addresses risk to the public from shuttle or other spacecraft re-entry trajectories. Managing public risk to acceptable levels is a major component of safe spacecraft operation. PERA is given scenario inputs of vehicle trajectory, probability of failure along that trajectory, the resulting debris characteristics, and field size and distribution, and returns risk metrics that quantify the individual and collective risk posed by that scenario. Due to the large volume of data required to perform such a risk analysis, PERA was designed to streamline the analysis process by using innovative mathematical analysis of the risk assessment equations. Real-time analysis in the event of a shuttle contingency operation, such as damage to the Orbiter, is possible because PERA allows for a change to the probability of failure models, therefore providing a much quicker estimation of public risk. PERA also provides the ability to generate movie files showing how the entry risk changes as the entry develops. PERA was designed to streamline the computation of the enormous amounts of data needed for this type of risk assessment by using an average distribution of debris on the ground, rather than pinpointing the impact point of every piece of debris. This has reduced the amount of computational time significantly without reducing the accuracy of the results. PERA was written in MATLAB; a compiled version can run from a DOS or UNIX prompt.
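
    A generic expected-casualty calculation of the sort such tools report can be sketched as follows; this is not PERA's internal method, and every number below is invented for illustration.

        # E_c = P(failure) * sum over debris groups of (fragment count * population density * casualty area)
        p_failure = 1.0e-4                  # probability of failure on this trajectory segment (illustrative)
        population_density = 25.0 / 1.0e6   # people per square meter (25 people per km^2, illustrative)

        debris_groups = [                   # (number of fragments, casualty area per fragment [m^2])
            (120, 0.5),
            (30, 3.0),
            (5, 20.0),
        ]

        expected_casualties = p_failure * sum(
            count * population_density * area for count, area in debris_groups
        )
        print(f"Expected casualties for this scenario: {expected_casualties:.2e}")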

  14. Determination of fiber-matrix interface failure parameters from off-axis tests

    NASA Technical Reports Server (NTRS)

    Naik, Rajiv A.; Crews, John H., Jr.

    1993-01-01

    Critical fiber-matrix (FM) interface strength parameters were determined using a micromechanics-based approach together with failure data from off-axis tension (OAT) tests. The ply stresses at failure for a range of off-axis angles were used as input to a micromechanics analysis that was performed using the personal computer-based MICSTRAN code. FM interface stresses at the failure loads were calculated for both the square and the diamond array models. A simple procedure was developed to determine which array had the more severe FM interface stresses and the location of these critical stresses on the interface. For the cases analyzed, critical FM interface stresses were found to occur with the square array model and were located at a point where adjacent fibers were closest together. The critical FM interface stresses were used together with the Tsai-Wu failure theory to determine a failure criterion for the FM interface. This criterion was then used to predict the onset of ply cracking in angle-ply laminates for a range of laminate angles. Predictions for the onset of ply cracking in angle-ply laminates agreed with the test data trends.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meneses, Esteban; Ni, Xiang; Jones, Terry R

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that glooms the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.

  16. Clavicle anatomical osteosynthesis plate breakage - failure analysis report based on patient morphological parameters.

    PubMed

    Marinescu, Rodica; Antoniac, Vasile Iulian; Stoia, Dan Ioan; Lăptoiu, Dan Constantin

    2017-01-01

    The reported incidence of clavicle fracture is about 5% of fractures in adults; among them, those located in the middle third of the shaft represent more than 80% of cases. Due to the special morphological and biomechanical constraints of the clavicle, several methods for restoring morphological integrity in these fractures are described, including conservative, non-surgical treatment. The last 10 years of clinical studies in the field have favored surgical treatment for selected cases; several osteosynthesis implants are in use, mostly anatomical plates with specific advantages and documented complications. A failed anatomical clavicle plate was explanted and analyzed following a protocol using stereomicroscopy, scanning electron microscopy, and energy dispersive spectrometry. Based on the computed tomography (CT) scan determination of patient morphological parameters, a finite element analysis of the failure scenario was completed. The failure analysis proved that the plate breakage had occurred at the point of maximal elastic stress and minor deformation. The clinical implication is that no hole should remain free of a screw during clavicle plate fixation, and the implant should be chosen based on patient morphological parameters. In comminuted clavicle fractures, anatomic bridging with a locked plate technique may lead to implant failure due to increased stress in the midshaft area. Thorough knowledge of the anatomy and morphology of complex bones like the clavicle is necessary.

  17. Approximation of Failure Probability Using Conditional Sampling

    NASA Technical Reports Server (NTRS)

    Giesy. Daniel P.; Crespo, Luis G.; Kenney, Sean P.

    2008-01-01

    In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
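
    The idea of combining an analytically known bounding set with conditional sampling can be sketched on a toy problem; the limit-state function, the uniform parameter domain, and the bounding box below are all illustrative choices, not the authors' examples.

        import numpy as np

        rng = np.random.default_rng(1)

        def g(x):                                   # failure when g(x) > 0
            return x[..., 0] + x[..., 1] - 5.5

        # Parameters uniform on [0, 3] x [0, 3]; failure (x0 + x1 > 5.5) can only occur
        # inside the corner box B = [2.5, 3] x [2.5, 3], whose probability is known exactly.
        p_B = (0.5 / 3.0) ** 2                      # analytic probability of the bounding set

        samples_in_B = rng.uniform(2.5, 3.0, size=(50_000, 2))   # conditional sampling within B
        p_fail_given_B = np.mean(g(samples_in_B) > 0)

        p_fail = p_fail_given_B * p_B               # P(failure) = P(failure | B) * P(B)
        print(f"P(failure) ≈ {p_fail:.4e}")         # exact value is 0.125/9 ≈ 1.39e-2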

  18. Robust detection, isolation and accommodation for sensor failures

    NASA Technical Reports Server (NTRS)

    Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.

    1986-01-01

    The objective is to extend the recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and to estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are to be used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and process inputs and outputs, are used to generate these innovations. Thresholds used to determine failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed. It represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by the introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping of the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies which used thresholds that were selected empirically. Comparison of these two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method compared to previous techniques.

  19. Use of Modal Acoustic Emission to Monitor Damage Progression in Carbon Fiber/Epoxy Tows and Implications for Composite Structures

    NASA Technical Reports Server (NTRS)

    Waller, Jess M.; Saulsberry, Regor L.; Nichols, Charles T.; Wentzel, Daniel J.

    2010-01-01

    This slide presentation reviews the use of modal acoustic emission to monitor damage progression in carbon fiber/epoxy tows. There is a risk of catastrophic failure of composite overwrapped pressure vessels (COPVs) due to burst-before-leak (BBL) stress rupture (SR) failure of carbon-epoxy (C/Ep) COPVs. A lack of quantitative nondestructive evaluation (NDE) is causing problems in current and future spacecraft designs. It is therefore important to develop and demonstrate critical NDE that can be implemented during stages of the design process, since the observed rupture can occur with little or no advance warning. Therefore, a program was required to develop quantitative acoustic emission (AE) procedures specific to C/Ep overwraps, but which also have utility for monitoring damage accumulation in composite structures in general, and to lay the groundwork for establishing critical thresholds for accumulated damage in composite structures, such as COPVs, so that precautionary or preemptive engineering steps can be implemented to minimize or obviate the risk of catastrophic failure. A computed Felicity Ratio (FR) coupled with fast Fourier transform (FFT) frequency analysis shows promise as an analytical pass/fail criterion. The FR analysis and the waveform and FFT analyses are reviewed.
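
    The Felicity Ratio computation itself is simple enough to show directly; the load histories and AE onset loads below are made-up values, and any pass/fail threshold would have to come from test data on the specific composite.

        # FR = (load at which significant AE resumes on reload) / (previous maximum load)
        previous_max_loads = [10.0, 15.0, 20.0]   # kN reached in earlier load cycles (illustrative)
        ae_onset_loads = [9.8, 13.5, 16.0]        # kN at which AE resumes on each reload (illustrative)

        felicity_ratios = [onset / prev for onset, prev in zip(ae_onset_loads, previous_max_loads)]
        print([round(fr, 2) for fr in felicity_ratios])   # [0.98, 0.9, 0.8]

        # A Felicity Ratio falling well below 1.0 from cycle to cycle indicates accumulating damage.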

  20. Influence of microscale heterogeneity and microstructure on the tensile behavior of crystalline rocks

    NASA Astrophysics Data System (ADS)

    Mahabadi, O. K.; Tatone, B. S. A.; Grasselli, G.

    2014-07-01

    This study investigates the influence of microscale heterogeneity and microcracks on the failure behavior and mechanical response of a crystalline rock. The thin section analysis for obtaining the microcrack density is presented. Using micro X-ray computed tomography (μCT) scanning of failed laboratory specimens, the influence of heterogeneity and, in particular, biotite grains on the brittle fracture of the specimens is discussed and various failure patterns are characterized. Three groups of numerical simulations are presented, which demonstrate the role of microcracks and the influence of μCT-based and stochastically generated phase distributions. The mechanical response, stress distribution, and fracturing process obtained by the numerical simulations are also discussed. The simulation results illustrate that heterogeneity and microcracks should be considered to accurately predict the tensile strength and failure behavior of the sample.

  1. Validating FMEA output against incident learning data: A study in stereotactic body radiation therapy.

    PubMed

    Yang, F; Cao, N; Young, L; Howard, J; Logan, W; Arbuckle, T; Sponseller, P; Korssjoen, T; Meyer, J; Ford, E

    2015-06-01

    Though failure mode and effects analysis (FMEA) is becoming more widely adopted for risk assessment in radiation therapy, to our knowledge, its output has never been validated against data on errors that actually occur. The objective of this study was to perform FMEA of a stereotactic body radiation therapy (SBRT) treatment planning process and validate the results against data recorded within an incident learning system. FMEA on the SBRT treatment planning process was carried out by a multidisciplinary group including radiation oncologists, medical physicists, dosimetrists, and IT technologists. Potential failure modes were identified through a systematic review of the process map. Failure modes were rated for severity, occurrence, and detectability on a scale of one to ten, and a risk priority number (RPN) was computed. Failure modes were then compared with historical reports identified as relevant to SBRT planning within a departmental incident learning system that has been active for two and a half years. Differences between FMEA anticipated failure modes and existing incidents were identified. FMEA identified 63 failure modes. RPN values for the top 25% of failure modes ranged from 60 to 336. Analysis of the incident learning database identified 33 reported near-miss events related to SBRT planning. Combining both methods yielded a total of 76 possible process failures, of which 13 (17%) were missed by FMEA while 43 (57%) were identified by FMEA only. When scored for RPN, the 13 events missed by FMEA ranked within the lower half of all failure modes and exhibited significantly lower severity relative to those identified by FMEA (p = 0.02). FMEA, though valuable, is subject to certain limitations. In this study, FMEA failed to identify 17% of actual failure modes, though these were of lower risk. Similarly, an incident learning system alone fails to identify a large number of potentially high-severity process errors. Using FMEA in combination with incident learning may render an improved overview of risks within a process.
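
    The RPN scoring used in the study is the product of the three ratings; the sketch below illustrates the bookkeeping with invented failure modes and ratings, not the study's actual worksheet.

    ```python
    # Minimal RPN bookkeeping sketch; failure modes and 1-10 ratings are placeholders.
    failure_modes = [
        # (description, severity, occurrence, detectability)
        ("wrong CT dataset selected for planning", 8, 2, 3),
        ("incorrect prescription dose entered",    9, 2, 2),
        ("target contour not reviewed by MD",      7, 3, 4),
    ]

    scored = [(desc, s * o * d) for desc, s, o, d in failure_modes]   # RPN = S x O x D
    for desc, rpn in sorted(scored, key=lambda t: t[1], reverse=True):
        print(f"RPN {rpn:4d}  {desc}")
    ```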

  2. Computational mechanics research and support for aerodynamics and hydraulics at TFHRC, year 1 quarter 3 progress report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lottes, S.A.; Kulak, R.F.; Bojanowski, C.

    2011-08-26

    The computational fluid dynamics (CFD) and computational structural mechanics (CSM) focus areas at Argonne's Transportation Research and Analysis Computing Center (TRACC) initiated a project to support and complement the experimental programs at the Turner-Fairbank Highway Research Center (TFHRC) with high performance computing based analysis capabilities in August 2010. The project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at TFHRC for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water loads on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risks that these loads pose to structural failure. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to assess them for fish passage, modeling of the salt spray transport into bridge girders to address suitability of using weathering steel in bridges, vehicle stability under high wind loading, and the use of electromagnetic shock absorbers to improve vehicle stability under high wind conditions. This quarterly report documents technical progress on the project tasks for the period of April through June 2011.

  3. Experimental and theoretical investigation of fatigue life in reusable rocket thrust chambers

    NASA Technical Reports Server (NTRS)

    Hannum, N. P.; Kasper, H. J.; Pavli, A. J.

    1976-01-01

    During a test program to investigate low-cycle thermal fatigue, 13 rocket combustion chambers were fabricated and cyclically test fired to failure. Six oxygen-free, high-conductivity (OFHC) copper and seven Amzirc chambers were tested. The chamber liners were fabricated of copper or copper alloy and contained milled coolant channels. The chambers were completed by means of an electroformed nickel closeout. The oxidant/fuel ratio for the liquid oxygen and gaseous hydrogen propellants was 6.0. The failures in the OFHC copper chambers were not typical fatigue failures but are described as creep rupture enhanced by ratcheting. The coolant channels bulged toward the chamber centerline, resulting in progressive thinning of the wall during each cycle. The failures in the Amzirc alloy chambers were caused by low-cycle thermal fatigue. The lives were much shorter than were predicted by an analytical structural analysis computer program used in conjunction with fatigue life data from isothermal test specimens, due to the uneven distribution of Zr in the chamber material.

  4. A model for the progressive failure of laminated composite structural components

    NASA Technical Reports Server (NTRS)

    Allen, D. H.; Lo, D. C.

    1991-01-01

    Laminated continuous fiber polymeric composites are capable of sustaining substantial load induced microstructural damage prior to component failure. Because this damage eventually leads to catastrophic failure, it is essential to capture the mechanics of progressive damage in any cogent life prediction model. For the past several years the authors have been developing one solution approach to this problem. In this approach the mechanics of matrix cracking and delamination are accounted for via locally averaged internal variables which account for the kinematics of microcracking. Damage progression is predicted by using phenomenologically based damage evolution laws which depend on the load history. The result is a nonlinear and path dependent constitutive model which has previously been implemented in a finite element computer code for analysis of structural components. Using an appropriate failure model, this algorithm can be used to predict component life. In this paper the model will be utilized to demonstrate the ability to predict the load path dependence of the damage and stresses in plates subjected to fatigue loading.

  5. Measurement of multiaxial ply strength by an off-axis flexure test

    NASA Technical Reports Server (NTRS)

    Crews, John H., Jr.; Naik, Rajiv A.

    1992-01-01

    An off-axis flexure (OAF) test was performed to measure ply strength under multiaxial stress states. This test involves unidirectional off-axis specimens loaded in bending, using an apparatus that allows these anisotropic specimens to twist as well as flex without the complications of a resisting torque. A 3D finite element stress analysis verified that simple beam theory could be used to compute the specimen bending stresses at failure. Unidirectional graphite/epoxy specimens with fiber angles ranging from 90 deg to 15 deg have combined normal and shear stresses on their failure planes that are typical of 45 deg plies in structural laminates. Tests for a range of stress states with AS4/3501-6 specimens showed that both normal and shear stresses on the failure plane influenced cracking resistance. This OAF test may prove to be useful for generating data needed to predict ply cracking in composite structures and may also provide an approach for studying fiber-matrix interface failures under stress states typical of structures.

  6. Thermomechanical CSM analysis of a superheater tube in transient state

    NASA Astrophysics Data System (ADS)

    Taler, Dawid; Madejski, Paweł

    2011-12-01

    The paper presents a thermomechanical computational solid mechanics (CSM) analysis of a pipe of "double omega" cross section, used in the steam superheaters of circulating fluidized bed (CFB) boilers. The complex cross-section shape of the "double omega" tubes requires a more precise analysis in order to prevent failure as a result of excessive temperature and thermal stresses. The results have been obtained using the finite volume method for the transient state of the superheater. The calculation was carried out for a section of pipe made of low-alloy steel.

  7. Closed-Loop Evaluation of an Integrated Failure Identification and Fault Tolerant Control System for a Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine; Khong, Thuan

    2006-01-01

    Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. Such systems developed for failure detection, identification, and reconfiguration, as well as upset recovery, need to be evaluated over broad regions of the flight envelope or under extreme flight conditions, and should include various sources of uncertainty. To apply formal robustness analysis, formulation of linear fractional transformation (LFT) models of complex parameter-dependent systems is required, which represent system uncertainty due to parameter uncertainty and actuator faults. This paper describes a detailed LFT model formulation procedure from the nonlinear model of a transport aircraft by using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The closed-loop system is evaluated over the entire flight envelope based on the generated LFT model which can cover nonlinear dynamics. The robustness analysis results of the closed-loop fault tolerant control system of a transport aircraft are presented. A reliable flight envelope (safe flight regime) is also calculated from the robust performance analysis results, over which the closed-loop system can achieve the desired performance of command tracking and failure detection.

  8. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common mode failure modeling is also included.
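
    The flavor of a Monte Carlo reliability calculation with non-constant (Weibull) failure rates can be conveyed with a small sketch; the 2-out-of-3 redundant system structure, Weibull parameters, and mission time below are assumptions for illustration and are not taken from MC-HARP.

    ```python
    # Illustrative Monte Carlo reliability sketch (not MC-HARP): components with Weibull
    # failure times in a 2-out-of-3 redundant system, evaluated at an assumed mission time.
    import numpy as np

    rng = np.random.default_rng(2)

    shape, scale = 1.5, 5_000.0      # assumed Weibull shape and scale (hours)
    mission_time = 1_000.0           # assumed mission length (hours)
    n_trials = 200_000

    # Failure time of each of the 3 redundant channels in each trial.
    t_fail = scale * rng.weibull(shape, size=(n_trials, 3))

    # The system fails if at least 2 of the 3 channels fail before the mission ends.
    failed_channels = (t_fail < mission_time).sum(axis=1)
    unreliability = (failed_channels >= 2).mean()

    print(f"estimated system unreliability at T={mission_time:.0f} h: {unreliability:.2e}")
    ```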

  9. Design of advanced beams considering elasto-plastic behaviour of material

    NASA Astrophysics Data System (ADS)

    Tolun, S.

    1992-10-01

    The paper proposes a computational procedure for the precise calculation of the limit and ultimate (design) loads that an advanced aviation beam must carry without permanent distortion and without rupture. Among several stress-strain curve representations, one that is suitable for a particular material is chosen for the applied load, yield, and failure load calculations, and then a nonlinear analysis is performed.

  10. A Cognitive Analysis of Developmental Mathematics Students' Errors and Misconceptions in Real Number Computations and Evaluating Algebraic Expressions

    ERIC Educational Resources Information Center

    Titus, Freddie

    2010-01-01

    Fifty percent of college-bound students graduate from high school underprepared for mathematics at the post-secondary level. As a result, thirty-five percent of college students take developmental mathematics courses. What is even more shocking is the high failure rate (ranging from 35 to 42 percent) of students enrolled in developmental…

  11. Failure location prediction by finite element analysis for an additive manufactured mandible implant.

    PubMed

    Huo, Jinxing; Dérand, Per; Rännar, Lars-Erik; Hirsch, Jan-Michaél; Gamstedt, E Kristofer

    2015-09-01

    In order to reconstruct a patient with a bone defect in the mandible, a porous scaffold attached to a plate, both in a titanium alloy, was designed and manufactured using additive manufacturing. Regrettably, the implant fractured in vivo several months after surgery. The aim of this study was to investigate the failure of the implant and show a way of predicting the mechanical properties of the implant before surgery. All computed tomography data of the patient were preprocessed to remove metallic artefacts with metal deletion technique before mandible geometry reconstruction. The three-dimensional geometry of the patient's mandible was also reconstructed, and the implant was fixed to the bone model with screws in Mimics medical imaging software. A finite element model was established from the assembly of the mandible and the implant to study stresses developed during mastication. The stress distribution in the load-bearing plate was computed, and the location of main stress concentration in the plate was determined. Comparison between the fracture region and the location of the stress concentration shows that finite element analysis could serve as a tool for optimizing the design of mandible implants. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  12. Prediction of damage formation in hip arthroplasties by finite element analysis using computed tomography images.

    PubMed

    Abdullah, Abdul Halim; Todo, Mitsugu; Nakashima, Yasuharu

    2017-06-01

    Femoral bone fracture is one of the main causes of failure of hip arthroplasties (HA). Being subjected to abrupt and high impact forces in daily activities may lead to complex loading configurations such as bending and sideways falls. The objective of this study is to predict the risk of femoral bone fractures in total hip arthroplasty (THA) and resurfacing hip arthroplasty (RHA). A computed tomography (CT) based finite element analysis was conducted to demonstrate damage formation in a three dimensional model of HAs. The inhomogeneous model of the femoral bone was constructed from a 79 year old female patient with hip osteoarthritis complication. Two different femoral components were modeled with titanium alloy and cobalt chromium and inserted into the femoral bones to represent the THA and RHA models, respectively. The analysis included six configurations, which exhibited various loading and boundary conditions, including axial compression, torsion, lateral bending, stance and two types of falling configurations. The applied hip loadings were normalized to body weight (BW) and accumulated from 1 BW to 3 BW. Predictions of damage formation in the femoral models are discussed in terms of tensile failure as well as compressive yielding and failure elements. The results indicate that loading direction can forecast the pattern and location of fractures at varying magnitudes of loading. The lateral bending configuration experienced the highest damage formation in both THA and RHA models. The femoral neck and trochanteric regions were the common fracture locations in the RHA model in most configurations, while the predicted fracture locations in THA differed as per the Vancouver classification. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  13. Self-Aware Computing

    DTIC Science & Technology

    2009-06-01

    to floating point , to multi-level logic. 2 Overview Self-aware computation can be distinguished from existing computational models which are...systems have advanced to the point that the time is ripe to realize such a system. To illustrate, let us examine each of the key aspects of self...servers for each service, there are no single points of failure in the system. If an OS or user core has a failure, one of several introspection cores

  14. MO-G-BRE-09: Validating FMEA Against Incident Learning Data: A Study in Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, F; Cao, N; Young, L

    2014-06-15

    Purpose: Though FMEA (Failure Mode and Effects Analysis) is becoming more widely adopted for risk assessment in radiation therapy, to our knowledge it has never been validated against actual incident learning data. The objective of this study was to perform an FMEA analysis of an SBRT (Stereotactic Body Radiation Therapy) treatment planning process and validate this against data recorded within an incident learning system. Methods: FMEA on the SBRT treatment planning process was carried out by a multidisciplinary group including radiation oncologists, medical physicists, and dosimetrists. Potential failure modes were identified through a systematic review of the workflow process. Failure modes were rated for severity, occurrence, and detectability on a scale of 1 to 10 and RPN (Risk Priority Number) was computed. Failure modes were then compared with historical reports identified as relevant to SBRT planning within a departmental incident learning system that had been active for two years. Differences were identified. Results: FMEA identified 63 failure modes. RPN values for the top 25% of failure modes ranged from 60 to 336. Analysis of the incident learning database identified 33 reported near-miss events related to SBRT planning. FMEA failed to anticipate 13 of these events, among which 3 were registered with severity ratings of severe or critical in the incident learning system. Combining both methods yielded a total of 76 failure modes, and when scored for RPN the 13 events missed by FMEA ranked within the middle half of all failure modes. Conclusion: FMEA, though valuable, is subject to certain limitations, among them the limited ability to anticipate all potential errors for a given process. This FMEA exercise failed to identify a significant number of possible errors (17%). Integration of FMEA with retrospective incident data may be able to render an improved overview of risks within a process.

  15. An overview of the mathematical and statistical analysis component of RICIS

    NASA Technical Reports Server (NTRS)

    Hallum, Cecil R.

    1987-01-01

    Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.

  16. Control methods for aiding a pilot during STOL engine failure transients

    NASA Technical Reports Server (NTRS)

    Nelson, E. R.; Debra, D. B.

    1976-01-01

    Candidate autopilot control laws that control the engine failure transient sink rates were defined, demonstrating the engineering application of modern state variable control theory. The results of approximate modal analysis were compared to those derived from full state analyses provided by computer design solutions. The aircraft was described, and a state variable model of its longitudinal dynamic motion due to engine and control variations was defined. The classical fast and slow modes were assumed to be sufficiently different to define reduced order approximations of the aircraft motion amenable to hand analysis control definition methods. The original state equations of motion were also applied to a large scale state variable control design program, in particular OPTSYS. The resulting control laws were compared with respect to their relative responses, ease of application, and meeting the desired performance objectives.

  17. Human factors process failure modes and effects analysis (HF PFMEA) software tool

    NASA Technical Reports Server (NTRS)

    Chandler, Faith T. (Inventor); Relvini, Kristine M. (Inventor); Shedd, Nathaneal P. (Inventor); Valentino, William D. (Inventor); Philippart, Monica F. (Inventor); Bessette, Colette I. (Inventor)

    2011-01-01

    Methods, computer-readable media, and systems for automatically performing Human Factors Process Failure Modes and Effects Analysis for a process are provided. At least one task involved in a process is identified, where the task includes at least one human activity. The human activity is described using at least one verb. A human error potentially resulting from the human activity is automatically identified; the human error is related to the verb used in describing the task. The likelihood of occurrence, detection, and correction of the human error is identified, along with the severity of its effect. From the likelihood of occurrence and the severity, the risk of potential harm is identified. The risk of potential harm is compared with a risk threshold to identify the appropriateness of corrective measures.
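
    A hypothetical sketch of the kind of verb-driven error identification and risk-threshold comparison the abstract describes; the verb-to-error mapping, rating scheme, and threshold below are invented for illustration and do not reproduce the patented tool's logic.

    ```python
    # Hypothetical HF PFMEA-style bookkeeping; all names, ratings, and the threshold are invented.
    VERB_TO_ERRORS = {
        "connect": ["connects to wrong port", "fails to fully seat connector"],
        "record":  ["records wrong value", "omits entry"],
    }

    def assess(verb, likelihood_occurrence, likelihood_detection, severity, risk_threshold=100):
        """Return (error, risk score, needs_corrective_measure) for each potential human error."""
        results = []
        for error in VERB_TO_ERRORS.get(verb, []):
            # Higher detection likelihood lowers risk; an 11 - D weighting is one simple choice.
            risk = likelihood_occurrence * (11 - likelihood_detection) * severity
            results.append((error, risk, risk > risk_threshold))
        return results

    for error, risk, flagged in assess("connect", likelihood_occurrence=4,
                                       likelihood_detection=6, severity=7):
        print(f"{'FLAG' if flagged else 'ok  '}  risk={risk:3d}  {error}")
    ```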

  18. Advanced Composite Wind Turbine Blade Design Based on Durability and Damage Tolerance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abumeri, Galib; Abdi, Frank

    2012-02-16

    The objective of the program was to demonstrate and verify Certification-by-Analysis (CBA) capability for wind turbine blades made from advanced lightweight composite materials. The approach integrated durability and damage tolerance analysis with robust design and virtual testing capabilities to deliver superior, durable, low weight, low cost, long life, and reliable wind blade design. The GENOA durability and life prediction software suite was used as the primary simulation tool. First, a micromechanics-based computational approach was used to assess the durability of composite laminates with ply drop features commonly used in wind turbine applications. Ply drops occur in composite joints and closures of wind turbine blades to reduce skin thicknesses along the blade span. They increase localized stress concentration, which may cause premature delamination failure in the composite and reduced fatigue service life. Durability and damage tolerance (D&DT) were evaluated utilizing a multi-scale micro-macro progressive failure analysis (PFA) technique. PFA is finite element based and is capable of detecting all stages of material damage including initiation and propagation of delamination. It assesses multiple failure criteria and includes the effects of manufacturing anomalies (e.g., voids, fiber waviness). Two different approaches have been used within PFA. The first approach is Virtual Crack Closure Technique (VCCT) PFA while the second one is strength-based. Constituent stiffness and strength properties for glass and carbon based material systems were reverse engineered for use in D&DT evaluation of coupons with ply drops under static loading. Lamina and laminate properties calculated using manufacturing and composite architecture details closely matched published test data. Similarly, resin properties were determined for fatigue life calculation. The simulation not only reproduced static strength and fatigue life as observed in the test, it also showed composite damage and fracture modes that resemble those reported in the tests. The results show that computational simulation can be relied on to enhance the design of tapered composite structures such as the ones used in turbine wind blades. A computational simulation for durability, damage tolerance (D&DT) and reliability of composite wind turbine blade structures in the presence of uncertainties in material properties was performed. A composite turbine blade was first assessed with finite element based multi-scale progressive failure analysis to determine failure modes and locations as well as the fracture load. D&DT analyses were then validated with a static test performed at Sandia National Laboratories. The work was followed by a detailed weight analysis to identify the contribution of various materials to the overall weight of the blade. The methodology ensured that certain types of failure modes, such as delamination progression, are contained to reduce risk to the structure. Probabilistic analysis indicated that composite shear strength has a great influence on the blade ultimate load under static loading. Weight was reduced by 12% with robust design without loss in reliability or D&DT. Structural benefits obtained with the use of enhanced matrix properties through nanoparticle infusion were also assessed. Thin unidirectional fiberglass layers enriched with silica nanoparticles were applied to the outer surfaces of a wind blade to improve its overall structural performance and durability.
    The wind blade was a 9-meter prototype structure manufactured and tested subject to three-saddle static loading at Sandia National Laboratory (SNL). The blade manufacturing did not include the use of any nano-material. With silica nanoparticles in glass composite applied to the exterior surfaces of the blade, the durability and damage tolerance (D&DT) results from multi-scale PFA showed an increase in ultimate load of the blade by 9.2% as compared to baseline structural performance (without nano). The use of nanoparticles led to a delay in the onset of delamination. Load-displacement relationships obtained from testing of the blade with baseline neat material were compared to the ones from analytical simulation using neat resin and using silica nanoparticles in the resin. Multi-scale PFA results for the neat material construction closely matched those from the test for both load-displacement behavior and the location and type of damage and failure. AlphaSTAR demonstrated that wind blade structures made from advanced composite materials can be certified with multi-scale progressive failure analysis by following a building block verification approach.

  19. Multiscale Multifunctional Progressive Fracture of Composite Structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Minnetyan, L.

    2012-01-01

    A new approach is described for evaluating fracture in composite structures. This approach is independent of classical fracture mechanics parameters like fracture toughness. It relies on computational simulation and is programmed in a stand-alone integrated computer code. It is multiscale and multifunctional because it includes composite mechanics for the composite behavior and finite element analysis for predicting the structural response. It contains seven modules: layered composite mechanics (micro, macro, laminate), finite element, updating scheme, local fracture, global fracture, stress based failure modes, and fracture progression. The computer code is called CODSTRAN (Composite Durability Structural ANalysis). It is used in the present paper to evaluate the global fracture of four composite shell problems and one composite built-up structure. Results show that, for the composite shells, global fracture is enhanced when internal pressure is combined with shear loads. The old reference denotes that nothing has been added to this comprehensive report since then.

  20. 11 CFR 111.35 - If the respondent decides to challenge the alleged violation or proposed civil money penalty...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... staff; (4) Committee computer, software or Internet service provider failures; (5) A committee's failure... software despite the respondent seeking technical assistance from Commission personnel and resources; (2) A... Commission's or respondent's computer systems or Internet service provider; and (3) Severe weather or other...

  1. Progressive Damage Analysis of Laminated Composite (PDALC) (A Computational Model Implemented in the NASA COMET Finite Element Code). 2.0

    NASA Technical Reports Server (NTRS)

    Coats, Timothy W.; Harris, Charles E.; Lo, David C.; Allen, David H.

    1998-01-01

    A method for analysis of progressive failure in the Computational Structural Mechanics Testbed is presented in this report. The relationship employed in this analysis describes the matrix crack damage and fiber fracture via kinematics-based volume-averaged damage variables. Damage accumulation during monotonic and cyclic loads is predicted by damage evolution laws for tensile load conditions. The implementation of this damage model required the development of two testbed processors. While this report concentrates on the theory and usage of these processors, a complete listing of all testbed processors and inputs that are required for this analysis is included. Sample calculations for laminates subjected to monotonic and cyclic loads were performed to illustrate the damage accumulation, stress redistribution, and changes to the global response that occur during the loading history. Residual strength predictions made with this information compared favorably with experimental measurements.

  2. Progressive Damage Analysis of Laminated Composite (PDALC)-A Computational Model Implemented in the NASA COMET Finite Element Code

    NASA Technical Reports Server (NTRS)

    Lo, David C.; Coats, Timothy W.; Harris, Charles E.; Allen, David H.

    1996-01-01

    A method for analysis of progressive failure in the Computational Structural Mechanics Testbed is presented in this report. The relationship employed in this analysis describes the matrix crack damage and fiber fracture via kinematics-based volume-averaged variables. Damage accumulation during monotonic and cyclic loads is predicted by damage evolution laws for tensile load conditions. The implementation of this damage model required the development of two testbed processors. While this report concentrates on the theory and usage of these processors, a complete list of all testbed processors and inputs that are required for this analysis is included. Sample calculations for laminates subjected to monotonic and cyclic loads were performed to illustrate the damage accumulation, stress redistribution, and changes to the global response that occur during the load history. Residual strength predictions made with this information compared favorably with experimental measurements.

  3. Local buckling and crippling of composite stiffener sections

    NASA Technical Reports Server (NTRS)

    Bonanni, David L.; Johnson, Eric R.; Starnes, James H., Jr.

    1988-01-01

    Local buckling, postbuckling, and crippling (failure) of channel, zee, and I- and J-section stiffeners made of AS4/3502 graphite-epoxy unidirectional tape are studied by experiment and analysis. Thirty-six stiffener specimens were tested statically to failure in axial compression as intermediate length columns. Web width is 1.25 inches for all specimens, and the flange width-to-thickness ratio ranges from 7 to 28 for the specimens tested. The radius of the stiffener corners is either 0.125 or 0.250 inches. A sixteen-ply orthotropic layup, an eight-ply quasi-isotropic layup, and a sixteen-ply quasi-isotropic layup are examined. Geometrically nonlinear analyses of five specimens were performed with the STAGS finite element code. Analytical results are compared to experimental data. Inplane stresses from STAGS are used to conduct a plane stress failure analysis of these specimens. Also, the development of interlaminar stress equations from equilibrium for classical laminated plate theory is presented. An algorithm to compute high order displacement derivatives required by these equations based on the Discrete Fourier Transform (DFT) is discussed.

  4. Single living predicts a higher mortality in both women and men with chronic heart failure.

    PubMed

    Mard, Shan; Nielsen, Finn Erland

    2016-09-01

    We examined the impact of single living on all-cause mortality in patients with chronic heart failure and determined if this association was modified by gender. This historical cohort study included 637 patients who were admitted to the Department of Cardiology, Herlev Hospital, Denmark, between 1 July 2005 and 30 June 2007. Baseline clinical data were obtained from patient records. Data on survival rates were obtained from the Danish Civil Registration System. Cox proportional hazard analysis was used to compute the hazard ratio (HR) of all-cause mortality, controlling for confounding factors. The median follow-up time was 2.8 years. A total of 323 (50.7%) patients died during the follow-up period. After adjustment for confounding factors, risk of death was associated with being single (HR = 1.53 (95% confidence interval: 1.19-1.96)). In a gender-stratified analysis, the risk of death did not differ among single-living women and men. Single living is a prognostic determinant of all-cause mortality in men and women with chronic heart failure.
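
    A Cox proportional-hazards fit of the kind described can be reproduced with standard open-source tooling; the sketch below uses the lifelines package and its bundled Rossi recidivism dataset purely as a stand-in, since the study's software and patient data are not available.

    ```python
    # Minimal Cox proportional-hazards example with lifelines (an assumed tool choice);
    # the Rossi dataset stands in for the heart-failure cohort purely for illustration.
    from lifelines import CoxPHFitter
    from lifelines.datasets import load_rossi

    df = load_rossi()                   # columns include 'week' (duration) and 'arrest' (event)

    cph = CoxPHFitter()
    cph.fit(df, duration_col="week", event_col="arrest")
    cph.print_summary()                 # hazard ratios exp(coef) with 95% confidence intervals
    ```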

  5. An improved ant colony optimization algorithm with fault tolerance for job scheduling in grid computing systems

    PubMed Central

    Idris, Hajara; Junaidu, Sahalu B.; Adewumi, Aderemi O.

    2017-01-01

    The Grid scheduler schedules user jobs on the best available resource in terms of resource characteristics by optimizing job execution time. Resource failure in Grid is no longer an exception but a regularly occurring event, as resources are increasingly being used by the scientific community to solve computationally intensive problems which typically run for days or even months. It is therefore absolutely essential that these long-running applications are able to tolerate failures and avoid re-computations from scratch after resource failure has occurred, to satisfy the user’s Quality of Service (QoS) requirement. Job Scheduling with Fault Tolerance in Grid Computing using Ant Colony Optimization is proposed to ensure that jobs are executed successfully even when resource failure has occurred. The technique employed in this paper is the use of resource failure rate, as well as a checkpoint-based roll back recovery strategy. Check-pointing aims at reducing the amount of work that is lost upon failure of the system by immediately saving the state of the system. A comparison of the proposed approach with an existing Ant Colony Optimization (ACO) algorithm is discussed. The experimental results of the implemented Fault Tolerance scheduling algorithm show that there is an improvement in the user’s QoS requirement over the existing ACO algorithm, which has no fault tolerance integrated in it. The performance evaluation of the two algorithms was measured in terms of the three main scheduling performance metrics: makespan, throughput and average turnaround time. PMID:28545075

  6. A risk assessment method for multi-site damage

    NASA Astrophysics Data System (ADS)

    Millwater, Harry Russell, Jr.

    This research focused on developing probabilistic methods suitable for computing small probabilities of failure, e.g., 10^-6, of structures subject to multi-site damage (MSD). MSD is defined as the simultaneous development of fatigue cracks at multiple sites in the same structural element such that the fatigue cracks may coalesce to form one large crack. MSD is modeled as an array of collinear cracks with random initial crack lengths with the centers of the initial cracks spaced uniformly apart. The data used was chosen to be representative of aluminum structures. The structure is considered failed whenever any two adjacent cracks link up. A fatigue computer model is developed that can accurately and efficiently grow a collinear array of arbitrary length cracks from initial size until failure. An algorithm is developed to compute the stress intensity factors of all cracks considering all interaction effects. The probability of failure of two to 100 cracks is studied. Lower bounds on the probability of failure are developed based upon the probability of the largest crack exceeding a critical crack size. The critical crack size is based on the initial crack size that will grow across the ligament when the neighboring crack has zero length. The probability is evaluated using extreme value theory. An upper bound is based on the probability of the maximum sum of initial cracks being greater than a critical crack size. A weakest link sampling approach is developed that can accurately and efficiently compute small probabilities of failure. This methodology is based on predicting the weakest link, i.e., the two cracks to link up first, for a realization of initial crack sizes, and computing the cycles-to-failure using these two cracks. Criteria to determine the weakest link are discussed. Probability results using the weakest link sampling method are compared to Monte Carlo-based benchmark results. The results indicate that very small probabilities can be computed accurately in a few minutes using a Hewlett-Packard workstation.
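
    The extreme-value lower bound mentioned above (the largest initial crack exceeding a critical size) is straightforward to evaluate; the sketch below compares the closed-form bound with a direct Monte Carlo check for an assumed lognormal crack-size distribution and an assumed critical size, which are illustration values rather than the study's calibrated data.

    ```python
    # Illustrative extreme-value bound: probability that the largest of N initial cracks
    # exceeds a critical size, checked against brute-force Monte Carlo (assumed inputs).
    import numpy as np
    from scipy.stats import lognorm

    n_cracks = 50
    crack_dist = lognorm(s=0.5, scale=0.5)   # assumed initial crack-size distribution (mm)
    a_crit = 2.5                              # assumed critical crack size (mm)

    # Closed form via extreme-value reasoning: P(max a_i > a_crit) = 1 - F(a_crit)^N.
    p_lower_bound = 1.0 - crack_dist.cdf(a_crit) ** n_cracks

    # Monte Carlo check of the same quantity.
    rng = np.random.default_rng(3)
    samples = crack_dist.rvs(size=(100_000, n_cracks), random_state=rng)
    p_mc = (samples.max(axis=1) > a_crit).mean()

    print(f"analytical: {p_lower_bound:.3e}   Monte Carlo: {p_mc:.3e}")
    ```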

  7. Automated Detection of Events of Scientific Interest

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    A report presents a slightly different perspective of the subject matter of Fusing Symbolic and Numerical Diagnostic Computations (NPO-42512), which appears elsewhere in this issue of NASA Tech Briefs. Briefly, the subject matter is the X-2000 Anomaly Detection Language, which is a developmental computing language for fusing two diagnostic computer programs (one implementing a numerical analysis method, the other implementing a symbolic analysis method) into a unified event-based decision analysis software system for real-time detection of events. In the case of the cited companion NASA Tech Briefs article, the contemplated events that one seeks to detect would be primarily failures or other changes that could adversely affect the safety or success of a spacecraft mission. In the case of the instant report, the events to be detected could also include natural phenomena that could be of scientific interest. Hence, the use of the X-2000 Anomaly Detection Language could contribute to a capability for automated, coordinated use of multiple sensors and sensor-output-data-processing hardware and software to effect opportunistic collection and analysis of scientific data.

  8. Advanced techniques in reliability model representation and solution

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Nicol, David M.

    1992-01-01

    The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.

  9. Evaluation of reinitialization-free nonvolatile computer systems for energy-harvesting Internet of things applications

    NASA Astrophysics Data System (ADS)

    Onizawa, Naoya; Tamakoshi, Akira; Hanyu, Takahiro

    2017-08-01

    In this paper, reinitialization-free nonvolatile computer systems are designed and evaluated for energy-harvesting Internet of things (IoT) applications. In energy-harvesting applications, as power supplies generated from renewable power sources cause frequent power failures, data processed need to be backed up when power failures occur. Unless data are safely backed up before power supplies diminish, reinitialization processes are required when power supplies are recovered, which results in low energy efficiencies and slow operations. Using nonvolatile devices in processors and memories can realize a faster backup than a conventional volatile computer system, leading to a higher energy efficiency. To evaluate the energy efficiency upon frequent power failures, typical computer systems including processors and memories are designed using 90 nm CMOS or CMOS/magnetic tunnel junction (MTJ) technologies. Nonvolatile ARM Cortex-M0 processors with 4 kB MRAMs are evaluated using a typical computing benchmark program, Dhrystone, which shows a few order-of-magnitude reductions in energy in comparison with a volatile processor with SRAM.

  10. Multi-hop routing mechanism for reliable sensor computing.

    PubMed

    Chen, Jiann-Liang; Ma, Yi-Wei; Lai, Chia-Ping; Hu, Chia-Cheng; Huang, Yueh-Min

    2009-01-01

    Current research on routing in wireless sensor computing concentrates on increasing the service lifetime, enabling scalability for large numbers of sensors and supporting fault tolerance for battery exhaustion and broken nodes. A sensor node is naturally exposed to various sources of unreliable communication channels and node failures. Sensor nodes have many failure modes, and each failure degrades the network performance. This work develops a novel mechanism, called Reliable Routing Mechanism (RRM), based on a hybrid cluster-based routing protocol to specify the best reliable routing path for sensor computing. Table-driven intra-cluster routing and on-demand inter-cluster routing are combined by changing the relationship between clusters for sensor computing. Applying a reliable routing mechanism in sensor computing can improve routing reliability, maintain low packet loss, minimize management overhead and save energy consumption. Simulation results indicate that the reliability of the proposed RRM mechanism is around 25% higher than that of the Dynamic Source Routing (DSR) and ad hoc On-demand Distance Vector routing (AODV) mechanisms.

  11. Link failure detection in a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Megerian, Mark G.; Smith, Brian E.

    2010-11-09

    Methods, apparatus, and products are disclosed for link failure detection in a parallel computer including compute nodes connected in a rectangular mesh network, each pair of adjacent compute nodes in the rectangular mesh network connected together using a pair of links, that includes: assigning each compute node to either a first group or a second group such that adjacent compute nodes in the rectangular mesh network are assigned to different groups; sending, by each of the compute nodes assigned to the first group, a first test message to each adjacent compute node assigned to the second group; determining, by each of the compute nodes assigned to the second group, whether the first test message was received from each adjacent compute node assigned to the first group; and notifying a user, by each of the compute nodes assigned to the second group, whether the first test message was received.
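
    The alternating-group assignment described above amounts to a checkerboard two-coloring of the mesh; the sketch below simulates the test-message exchange on a small assumed mesh with one artificially broken link, purely to illustrate the idea rather than the patented implementation.

    ```python
    # Checkerboard grouping and test-message check on a small simulated mesh (assumed sizes).
    ROWS, COLS = 4, 4
    broken_links = {((1, 1), (1, 2))}          # a hypothetical failed link
    broken_links |= {(b, a) for a, b in broken_links}

    def group(node):
        r, c = node
        return "first" if (r + c) % 2 == 0 else "second"   # adjacent nodes get different groups

    def neighbors(node):
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < ROWS and 0 <= nc < COLS:
                yield (nr, nc)

    # First-group nodes send a test message over each link; second-group nodes report
    # any adjacent first-group node whose message never arrived.
    for node in ((r, c) for r in range(ROWS) for c in range(COLS)):
        if group(node) != "second":
            continue
        for nbr in neighbors(node):
            if group(nbr) == "first" and (nbr, node) in broken_links:
                print(f"node {node}: no test message from {nbr} -> link failure suspected")
    ```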

  12. Internal Progressive Failure in Deep-Seated Landslides

    NASA Astrophysics Data System (ADS)

    Yerro, Alba; Pinyol, Núria M.; Alonso, Eduardo E.

    2016-06-01

    Except for simple sliding motions, the stability of a slope does not depend only on the resistance of the basal failure surface. It is affected by the internal distortion of the moving mass, which plays an important role on the stability and post-failure behaviour of a landslide. The paper examines the stability conditions and the post-failure behaviour of a compound landslide whose geometry is inspired by one of the representative cross-sections of the Vajont landslide. The brittleness of the mobilized rock mass was described by a strain-softening Mohr-Coulomb model, whose parameters were derived from previous contributions. The analysis was performed by means of an MPM computer code, which is capable of modelling the whole instability procedure in a unified calculation. The gravity action has been applied to initialize the stress state. This step mobilizes part of the strength along a shearing band located just above the kink of the basal surface, leading to the formation of a kinematically admissible mechanism. The overall instability is triggered by an increase of water level. The increase of pore water pressures reduces the effective stresses within the slope and it leads to a progressive failure mechanism developing along an internal shearing band which controls the stability of the compound slope. The effect of the basal shearing resistance has been analysed during the post-failure stage. If no shearing strength is considered (as predicted by a thermal pressurization analysis), the model predicts a response similar to actual observations, namely a maximum sliding velocity of 25 m/s and a run-out close to 500 m.

  13. Application of failure mode and effects analysis (FMEA) to pretreatment phases in tomotherapy.

    PubMed

    Broggi, Sara; Cantone, Marie Claire; Chiara, Anna; Di Muzio, Nadia; Longobardi, Barbara; Mangili, Paola; Veronese, Ivan

    2013-09-06

    The aim of this paper was the application of the failure mode and effects analysis (FMEA) approach to assess the risks for patients undergoing radiotherapy treatments performed by means of a helical tomotherapy unit. FMEA was applied to the preplanning imaging, volume determination, and treatment planning stages of the tomotherapy process and consisted of three steps: 1) identification of the involved subprocesses; 2) identification and ranking of the potential failure modes, together with their causes and effects, using the risk probability number (RPN) scoring system; and 3) identification of additional safety measures to be proposed for process quality and safety improvement. RPN upper threshold for little concern of risk was set at 125. A total of 74 failure modes were identified: 38 in the stage of preplanning imaging and volume determination, and 36 in the stage of planning. The threshold of 125 for RPN was exceeded in four cases: one case only in the phase of preplanning imaging and volume determination, and three cases in the stage of planning. The most critical failures appeared related to (i) the wrong or missing definition and contouring of the overlapping regions, (ii) the wrong assignment of the overlap priority to each anatomical structure, (iii) the wrong choice of the computed tomography calibration curve for dose calculation, and (iv) the wrong (or not performed) choice of the number of fractions in the planning station. On the basis of these findings, in addition to the safety strategies already adopted in the clinical practice, novel solutions have been proposed for mitigating the risk of these failures and to increase patient safety.

  14. Computational mechanics research and support for aerodynamics and hydraulics at TFHRC year 1 quarter 4 progress report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lottes, S.A.; Kulak, R.F.; Bojanowski, C.

    2011-12-09

    The computational fluid dynamics (CFD) and computational structural mechanics (CSM) focus areas at Argonne's Transportation Research and Analysis Computing Center (TRACC) initiated a project to support and complement the experimental programs at the Turner-Fairbank Highway Research Center (TFHRC) with high performance computing based analysis capabilities in August 2010. The project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at TFHRC for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water effects on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risks that these loads pose to structural failure. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to assess them for fish passage, modeling of the salt spray transport into bridge girders to address suitability of using weathering steel in bridges, CFD analysis of the operation of the wind tunnel in the TFHRC wind engineering laboratory, vehicle stability under high wind loading, and the use of electromagnetic shock absorbers to improve vehicle stability under high wind conditions. This quarterly report documents technical progress on the project tasks for the period of July through September 2011.

  15. How Do Tissues Respond and Adapt to Stresses Around a Prosthesis? A Primer on Finite Element Stress Analysis for Orthopaedic Surgeons

    PubMed Central

    Brand, Richard A; Stanford, Clark M; Swan, Colby C

    2003-01-01

    Joint implant design clearly affects long-term outcome. While many implant designs have been empirically based, finite element analysis has the potential to identify beneficial and deleterious features prior to clinical trials. Finite element analysis is a powerful analytic tool allowing computation of the stress and strain distribution throughout an implant construct. Whether it is useful depends upon many assumptions and details of the model. Chief among them is whether the stresses or strains computed under a limited set of loading conditions relate to outcome, since ultimate failure is related to biological factors in addition to mechanical ones, and since the mechanical causes of failure are related to load history rather than a few loading conditions. Newer approaches can minimize this and the many other model limitations. If the surgeon is to critically and properly interpret the results in scientific articles and sales literature, he or she must have a fundamental understanding of finite element analysis. We outline here the major capabilities of finite element analysis, as well as the assumptions and limitations. PMID:14575244

  16. How oral environment simulation affects ceramic failure behavior.

    PubMed

    Lodi, Ediléia; Weber, Kátia R; Benetti, Paula; Corazza, Pedro H; Della Bona, Álvaro; Borba, Márcia

    2018-05-01

    Investigating the mechanical behavior of ceramics in a clinically simulated scenario contributes to the development of new and tougher materials, improving the clinical performance of restorations. The optimal in vitro environment for testing is unclear. The purpose of this in vitro study was to investigate the failure behavior of a leucite-reinforced glass-ceramic under compression loading and fatigue in different simulated oral environment conditions. Fifty-three plate-shaped ceramic specimens were produced from computer-aided design and computer-aided manufacturing (CAD-CAM) blocks and adhesively cemented onto a dentin analog substrate. For the monotonic test (n=23), a gradual compressive load (0.5 mm/min) was applied to the center of the specimens, immersed in 37ºC water, using a universal testing machine. The initial crack was detected with an acoustic system. The fatigue test was performed in a mechanical cycling machine (37ºC water, 2 Hz) using the boundary technique (n=30). Two lifetimes were evaluated (1×10^6 and 2×10^6 cycles). Failure analysis was performed using transillumination. The Weibull distribution was used to evaluate the compressive load data. A cumulative damage model with an inverse power law (IPL) lifetime-stress relationship was used to fit the fatigue data. A characteristic failure load of 1615 N and a Weibull modulus of 5 were obtained with the monotonic test. The estimated probability of failure (Pf) for 1×10^6 cycles at 100 N was 31%, at 150 N it was 55%, and at 200 N it was 75%. For 2×10^6 cycles, the Pf increased approximately 20% in comparison with the values predicted for 1×10^6 cycles, which was not significant. The most frequent failure mode was a radial crack from the intaglio surface. For fatigue, combined failure modes were also found (radial crack combined with cone crack or chipping). Fatigue affects the fracture load and failure mode of leucite-reinforced glass-ceramic. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
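
    Using the two Weibull parameters quoted above (characteristic failure load of about 1615 N and modulus of about 5), the monotonic probability of failure at a given load follows directly from the two-parameter Weibull form; the loads evaluated below are arbitrary illustration points, and the fatigue behavior is not captured by this simple expression.

    ```python
    # Two-parameter Weibull failure probability using the parameters quoted in the abstract.
    import math

    eta, m = 1615.0, 5.0   # characteristic load (N) and Weibull modulus from the abstract

    def prob_failure(load):
        """Cumulative probability of (monotonic) failure at a single applied load."""
        return 1.0 - math.exp(-((load / eta) ** m))

    for load in (400.0, 800.0, 1615.0):
        print(f"P_f at {load:6.0f} N = {prob_failure(load):.3f}")   # ~0.632 at the characteristic load
    ```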

  17. Circadian body temperature variability is an indicator of poor prognosis in cardiomyopathic hamsters.

    PubMed

    Ahmed, Amany; Gondi, Sreedevi; Cox, Casey; Wang, Suwei; Stupin, Igor V; Shankar, K J; Munir, Shahzeb M; Sobash, Ed; Brewer, Alan; Ferguson, James J; Elayda, Macarthur A; Casscells, S Ward; Wilson, James M

    2010-03-01

    Low body temperature is an independent predictor of poor prognosis in patients with congestive heart failure. The cardiomyopathic hamster develops progressive biventricular dysfunction, resulting in heart failure death at 9 months to 1 year of life. Our goal was to use cardiomyopathic hamsters to examine the relationship between body temperature and heart failure decompensation and death. To this end, we implanted temperature and activity transducers with telemetry into the peritoneal space of 46 male Bio-TO-2 Syrian cardiomyopathic hamsters. Multiple techniques, including computing mean temperature, frequency domain analysis, and nonlinear analysis, were used to determine the most useful method for predicting poor prognosis. Data from 44 hamsters were included in our final analysis. We detected a decline in core body temperature in 98% of the hamsters 8+/-4 days before death (P < .001). We examined the dominant frequency of temperature variation (ie, the circadian rhythm) by using cosinor analysis, which revealed a significant decrease in the amplitude of the body temperature circadian rhythm 8 weeks before death (0.28 degrees C; 95% CI, 0.26-0.31) compared to baseline (0.36 degrees C; 95% CI, 0.34-0.39; P=.005). The decline in the circadian temperature variation preceded all other evidence of decompensation. We conclude that a decrease in the amplitude of the body temperature circadian rhythm precedes fatal decompensation in cardiomyopathic hamsters. Continuous temperature monitoring may be useful in predicting preclinical decompensation in patients with heart failure and in identifying opportunities for therapeutic intervention. Copyright (c) 2010 Elsevier Inc. All rights reserved.
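
    Cosinor analysis of the kind referred to above fits a single 24-hour cosine by least squares and reports the mesor, amplitude, and acrophase; the sketch below applies it to a synthetic temperature series, not the hamster telemetry data.

    ```python
    # Single-component cosinor fit (24 h period) on a synthetic temperature series.
    import numpy as np

    rng = np.random.default_rng(4)
    t_hours = np.arange(0, 72, 0.5)                          # three days, 30-minute samples
    temp = (37.0 + 0.35 * np.cos(2 * np.pi * (t_hours - 3) / 24)
            + rng.normal(0, 0.05, t_hours.size))             # assumed synthetic signal

    omega = 2 * np.pi / 24.0
    X = np.column_stack([np.ones_like(t_hours), np.cos(omega * t_hours), np.sin(omega * t_hours)])
    mesor, beta_c, beta_s = np.linalg.lstsq(X, temp, rcond=None)[0]

    amplitude = np.hypot(beta_c, beta_s)                     # circadian amplitude (deg C)
    acrophase = np.arctan2(-beta_s, beta_c)                  # phase of the fitted cosine (rad)
    print(f"mesor={mesor:.2f} C, amplitude={amplitude:.2f} C, acrophase={acrophase:.2f} rad")
    ```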

  18. Robust failure detection filters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sanmartin, A. M.

    1985-01-01

    The robustness of detection filters applied to the detection of actuator failures on a free-free beam is analyzed. This analysis is based on computer simulation tests of the detection filters in the presence of different types of model mismatch, and on frequency response functions of the transfers corresponding to the model mismatch. The robustness of detection filters based on a model of the beam containing a large number of structural modes varied dramatically with the placement of some of the filter poles. The dynamics of these filters were very hard to analyze. The design of detection filters with a number of modes equal to the number of sensors was trivial. They can be configured to detect any number of actuator failure events. The dynamics of these filters were very easy to analyze and their robustness properties were much improved. A change of the output transformation allowed the filter to perform satisfactorily with realistic levels of model mismatch.

  19. Biaxial tests of flat graphite/epoxy laminates

    NASA Technical Reports Server (NTRS)

    Liebowitz, H.; Jones, D. L.

    1981-01-01

    The influence of biaxially applied loads on the strength of composite materials containing holes was analyzed. The analysis was performed through the development of a three-dimensional, finite element computer program that is capable of evaluating fiber breakage, delamination, and matrix failure. Realistic failure criteria were established for each of the failure modes, and the influence of biaxial loading on damage accumulation under monotonically increasing loading was examined in detail. Both static and fatigue tests of specially designed biaxial specimens containing central holes were performed. Static tests were performed to obtain an understanding of the influence of biaxial loads on the fracture strength of composite materials and to provide correlation with the analytical predictions. The predicted distributions and types of damage are in reasonable agreement with the experimental results. A number of fatigue tests were performed to determine the influence of cyclic biaxial loads on the fatigue life and residual strength of several composite laminates.

  20. Liver transplantation for fulminant hepatitis at Stanford University.

    PubMed

    Lu, Amy; Monge, Humberto; Drazan, Kenneth; Millan, Maria; Esquivel, Carlos O

    2002-01-01

    To review the clinical characteristics and outcomes of 26 patients evaluated for liver transplantation for fulminant hepatic failure at Stanford University and Lucile Packard Children's Hospital in an attempt to identify risk factors and prognostic predictors of survival. A retrospective review of the records of 26 consecutive patients who were evaluated for possible liver transplantation for acute liver failure from May 1, 1995, to January 1, 2000. Pretransplant patient demographics and clinical characteristics were collected, and the data were analyzed by univariate and multivariate analysis. Clinical assessment of encephalopathy did not predict outcome. Patients with abnormal computed tomography (CT) of the brain had a twofold increase in mortality compared with those patients with normal studies (p = 0.03). Patients requiring mechanical ventilation and continuous venovenous hemofiltration (CVVH) also had a poor prognosis. Predictors of poor outcome after fulminant hepatic failure include abnormal CT scan, mechanical ventilation, and requirement for hemofiltration.

  1. Failure detection and isolation analysis of a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Motyka, P.; Landey, M.; Mckern, R.

    1981-01-01

    The objective of this study was to define and develop failure detection and isolation (FDI) algorithms for a dual fail/operational redundant strapdown inertial navigation system. The FDI techniques chosen include provisions for hard and soft failure detection in the context of flight control and navigation. Analyses were done to determine error detection and switching levels for the inertial navigation system, which is intended for a conventional takeoff or landing (CTOL) operating environment. In addition, investigations of false alarms and missed alarms were included for the FDI techniques developed, along with the analyses of filters to be used in conjunction with FDI processing. Two specific FDI algorithms were compared: the generalized likelihood test and the edge vector test. A deterministic digital computer simulation was used to compare and evaluate the algorithms and FDI systems.
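    Neither the generalized likelihood test nor the edge vector test is reproduced here, but the core idea of redundancy-based failure detection can be sketched with a simple parity-space residual: measurements from a redundant sensor set are projected onto the left null space of the measurement geometry, so the residual stays near zero when all sensors agree and grows when one fails. The geometry, noise levels, and injected fault below are illustrative assumptions.

    ```python
    import numpy as np

    def parity_residual(H, measurements):
        """Project measurements onto the left null space of H (the parity space)."""
        u, s, _ = np.linalg.svd(H)
        rank = int(np.sum(s > 1e-10))
        V = u[:, rank:].T          # parity matrix: V @ H == 0
        return V @ measurements

    # Illustrative example: four redundant single-axis sensors measuring one quantity
    H = np.ones((4, 1))
    z = H.flatten() * 2.0 + 0.01 * np.random.randn(4)
    z_failed = z.copy()
    z_failed[2] += 1.0             # bias failure injected on sensor 3
    print("healthy residual norm:", np.linalg.norm(parity_residual(H, z)))
    print("failed residual norm: ", np.linalg.norm(parity_residual(H, z_failed)))
    ```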

  2. Man-rated flight software for the F-8 DFBW program

    NASA Technical Reports Server (NTRS)

    Bairnsfather, R. R.

    1975-01-01

    The design, implementation, and verification of the flight control software used in the F-8 DFBW program are discussed. Since the DFBW utilizes an Apollo computer and hardware, the procedures, controls, and basic management techniques employed are based on those developed for the Apollo software system. Program Assembly Control, simulator configuration control, erasable-memory load generation, change procedures and anomaly reporting are discussed. The primary verification tools--the all-digital simulator, the hybrid simulator, and the Iron Bird simulator--are described, as well as the program test plans and their implementation on the various simulators. Failure-effects analysis and the creation of special failure-generating software for testing purposes are described. The quality of the end product is evidenced by the F-8 DFBW flight test program in which 42 flights, totaling 58 hours of flight time, were successfully made without any DFCS inflight software, or hardware, failures.

  3. Mesoscale analysis of failure in quasi-brittle materials: comparison between lattice model and acoustic emission data.

    PubMed

    Grégoire, David; Verdon, Laura; Lefort, Vincent; Grassl, Peter; Saliba, Jacqueline; Regoin, Jean-Pierre; Loukili, Ahmed; Pijaudier-Cabot, Gilles

    2015-10-25

    The purpose of this paper is to analyse the development and the evolution of the fracture process zone during fracture and damage in quasi-brittle materials. A model taking into account the material details at the mesoscale is used to describe the failure process at the scale of the heterogeneities. This model is used to compute histograms of the relative distances between damaged points. These numerical results are compared with experimental data, where the damage evolution is monitored using acoustic emissions. Histograms of the relative distances between damage events in the numerical calculations and acoustic events in the experiments exhibit good agreement. It is shown that the mesoscale model provides relevant information from the point of view of both global responses and the local failure process. © 2015 The Authors. International Journal for Numerical and Analytical Methods in Geomechanics published by John Wiley & Sons Ltd.

  4. 75 FR 27422 - Airworthiness Directives; Hawker Beechcraft Corporation (Type Certificate No. A00010WI Previously...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-17

    ... system, pilot primary flight display, audio panel, or the 1 air data computer. This failure could lead to... include the autopilot, anti-skid system, hydraulic indicator, spoiler system, pilot primary flight display, audio panel, or the 1 air data computer. This failure could lead to a significant increase in pilot...

  5. Comparison of slope stability in two Brazilian municipal landfills.

    PubMed

    Gharabaghi, B; Singh, M K; Inkratas, C; Fleming, I R; McBean, E

    2008-01-01

    The implementation of landfill gas to energy (LFGTE) projects has greatly assisted in reducing the greenhouse gases and air pollutants, leading to an improved local air quality and reduced health risks. The majority of cities in developing countries still dispose of their municipal waste in uncontrolled 'open dumps.' Municipal solid waste landfill construction practices and operating procedures in these countries pose a challenge to implementation of LFGTE projects because of concern about damage to the gas collection infrastructure (horizontal headers and vertical wells) caused by minor, relatively shallow slumps and slides within the waste mass. While major slope failures can and have occurred, such failures in most cases have been shown to involve contributory factors or triggers such as high pore pressures, weak foundation soil or failure along weak geosynthetic interfaces. Many researchers who have studied waste mechanics propose that the shear strength of municipal waste is sufficient such that major deep-seated catastrophic failures under most circumstances require such contributory factors. Obviously, evaluation of such potential major failures requires expert analysis by geotechnical specialists with detailed site-specific information regarding foundation soils, interface shearing resistances and pore pressures both within the waste and in clayey barrier layers or foundation soils. The objective of this paper is to evaluate the potential use of very simple stability analyses which can be used to study the potential for slumps and slides within the waste mass and which may represent a significant constraint on construction and development of the landfill, on reclamation and closure and on the feasibility of a LFGTE project. The stability analyses rely on site-specific but simple estimates of the unit weight of waste and the pore pressure conditions and use "generic" published shear strength envelopes for municipal waste. Application of the slope stability analysis method is presented in a case study of two Brazilian landfill sites; the Cruz das Almas Landfill in Maceio and the Muribeca Landfill in Recife. The Muribeca site has never recorded a slope failure and is much larger and better-maintained when compared to the Maceio site at which numerous minor slumps and slides have been observed. Conventional limit-equilibrium analysis was used to calculate factors of safety for stability of the landfill side slopes. Results indicate that the Muribeca site is more stable with computed factors of safety values in the range 1.6-2.4 compared with computed values ranging from 0.9 to 1.4 for the Maceio site at which slope failures have been known to occur. The results suggest that this approach may be useful as a screening-level tool when considering the feasibility of implementing LFGTE projects.

  6. A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Kegelmeyer, W. Philip, Jr.; Wong, Matthew H.

    The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques represent an approach that is particularly well suited to this problem, and provide an effective technique for discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverages semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis are discussed with respect to the domain of HPC job data. The framework developed automates the synthesis of graphs from a database of job information. It also provides a front end, enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.
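    The graph synthesis step described above can be illustrated with a small sketch: jobs become nodes, and an edge weight counts the compute nodes two jobs shared. The job records, field names, and query are invented for illustration and are not the report's ontology or implementation.

    ```python
    import networkx as nx

    # Toy job records: (job_id, compute nodes used, outcome)
    jobs = [
        ("job1", {"n01", "n02"}, "ok"),
        ("job2", {"n02", "n03"}, "failed"),
        ("job3", {"n07"}, "ok"),
    ]

    G = nx.Graph()
    for job_id, nodes, outcome in jobs:
        G.add_node(job_id, outcome=outcome)

    # Edge weight = number of compute nodes two jobs have in common
    for i, (id_a, nodes_a, _) in enumerate(jobs):
        for id_b, nodes_b, _ in jobs[i + 1:]:
            shared = len(nodes_a & nodes_b)
            if shared:
                G.add_edge(id_a, id_b, weight=shared)

    # Simple query: jobs that shared hardware with a failed job
    suspects = {nbr for n, d in G.nodes(data=True) if d["outcome"] == "failed"
                for nbr in G.neighbors(n)}
    print("jobs sharing nodes with a failure:", suspects)
    ```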

  7. Aeromechanics Analysis of a Distortion-Tolerant Fan with Boundary Layer Ingestion

    NASA Technical Reports Server (NTRS)

    Bakhle, Milind A.; Reddy, T. S. R.; Coroneos, Rula M.; Min, James B.; Provenza, Andrew J.; Duffy, Kirsten P.; Stefko, George L.; Heinlein, Gregory S.

    2018-01-01

    A propulsion system with Boundary Layer Ingestion (BLI) has the potential to significantly reduce aircraft engine fuel burn. But a critical challenge is to design a fan that can operate continuously with a persistent BLI distortion without aeromechanical failure -- flutter or high cycle fatigue due to forced response. High-fidelity computational aeromechanics analysis can be very valuable to support the design of a fan that has satisfactory aeromechanic characteristics and good aerodynamic performance and operability. Detailed aeromechanics analyses together with careful monitoring of the test article are necessary to avoid unexpected problems or failures during testing. In the present work, an aeromechanics analysis based on a three-dimensional, time-accurate, Reynolds-averaged Navier-Stokes computational fluid dynamics code is used to study the performance and aeromechanical characteristics of the fan in both circumferentially-uniform and circumferentially-varying distorted flows. Pre-test aeromechanics analyses are used to prepare for the wind tunnel test and comparisons are made with measured blade vibration data after the test. The analysis shows that the fan has low levels of aerodynamic damping at various operating conditions examined. In the test, the fan remained free of flutter except at one near-stall operating condition. Analysis could not be performed at this low mass flow rate operating condition since it fell beyond the limit of numerical stability of the analysis code. The measured resonant forced response at a specific low-response crossing indicated that the analysis under-predicted this response and work is in progress to understand possible sources of differences and to analyze other larger resonant responses. Follow-on work is also planned with a coupled inlet-fan aeromechanics analysis that will more accurately represent the interactions between the fan and BLI distortion.

  8. Inlay-retained cantilever fixed dental prostheses to substitute a single premolar: impact of zirconia framework design after dynamic loading.

    PubMed

    Shahin, Ramez; Tannous, Fahed; Kern, Matthias

    2014-08-01

    The purpose of this in-vitro study was to evaluate the influence of the framework design on the durability of inlay-retained cantilever fixed dental prostheses (IR-FDPs), made from zirconia ceramic, after artificial ageing. Forty-eight caries-free human premolars were prepared as abutments for all-ceramic cantilevered IR-FDPs using six framework designs: occlusal-distal (OD) inlay, OD inlay with an oral retainer wing, OD inlay with two retainer wings, mesial-occlusal-distal (MOD) inlay, MOD inlay with an oral retainer ring, and veneer partial coping with a distal box (VB). Zirconia IR-FDPs were fabricated via computer-aided design/computer-aided manufacturing (CAD/CAM) technology. The bonding surfaces were air-abraded (50 μm alumina/0.1 MPa), and the frameworks were bonded with adhesive resin cement. Specimens were stored for 150 d in a 37°C water bath during which they were thermocycled between 5 and 55°C for 37,500 cycles; thereafter, they were exposed to 600,000 cycles of dynamic loading with a 5-kg load in a chewing simulator. All surviving specimens were loaded onto the pontic and tested until failure using a universal testing machine. The mean failure load of the groups ranged from 260.8 to 746.7 N. Statistical analysis showed that both MOD groups exhibited significantly higher failure loads compared with the other groups (i.e. the three OD groups and the VB group) and that there was no significant difference in the failure load among the OD groups and the VB group. In conclusion, zirconia IR-FDPs with a modified design exhibited promising failure modes. © 2014 Eur J Oral Sci.

  9. Multi-Scale Hierarchical and Topological Design of Structures for Failure Resistance

    DTIC Science & Technology

    2013-10-04

    materials, simulation, 3D printing , advanced manufacturing, design, fracture Markus J. Buehler Massachusetts Institute of Technology (MIT) 77...by Mineralized Natural Materials: Computation, 3D printing , and Testing, Advanced Functional Materials, (09 2013): 0. doi: 10.1002/adfm.201300215 10...have made substantial progress. Recent work focuses on the analysis of topological effects of composite design, 3D printing of bioinspired and

  10. Aleatory Uncertainty and Scale Effects in Computational Damage Models for Failure and Fragmentation

    DTIC Science & Technology

    2014-09-01

    larger specimens, small specimens have, on average, higher strengths. Equivalently, because curves for small specimens fall below those of larger...the material strength associated with each realization parameter R in Equation (7), and strength distribution curves associated with multiple...effects in brittle media [58], which applies micromorphological dimensional analysis to obtain a universal curve which closely fits rate-dependent

  11. A guide to onboard checkout. Volume 5: Data management

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The baseline data management subsystem for a space station is discussed. The subsystem consists of equipment necessary to transfer, store, and process data to and from users and subsystems. It acquires and conditions a wide variety of input data from experiments, vehicle subsystems sensors, uplinked ground communications, and astronaut-activated controls. Computer techniques for failure analysis, reliability, and maintenance checkout onboard the space station are considered.

  12. A Multiscale Progressive Failure Modeling Methodology for Composites that Includes Fiber Strength Stochastics

    NASA Technical Reports Server (NTRS)

    Ricks, Trenton M.; Lacy, Thomas E., Jr.; Bednarcyk, Brett A.; Arnold, Steven M.; Hutchins, John W.

    2014-01-01

    A multiscale modeling methodology was developed for continuous fiber composites that incorporates a statistical distribution of fiber strengths into coupled multiscale micromechanics/finite element (FE) analyses. A modified two-parameter Weibull cumulative distribution function, which accounts for the effect of fiber length on the probability of failure, was used to characterize the statistical distribution of fiber strengths. A parametric study using the NASA Micromechanics Analysis Code with the Generalized Method of Cells (MAC/GMC) was performed to assess the effect of variable fiber strengths on local composite failure within a repeating unit cell (RUC) and subsequent global failure. The NASA code FEAMAC and the ABAQUS finite element solver were used to analyze the progressive failure of a unidirectional SCS-6/TIMETAL 21S metal matrix composite tensile dogbone specimen at 650 °C. Multiscale progressive failure analyses were performed to quantify the effect of spatially varying fiber strengths on the RUC-averaged and global stress-strain responses and failure. The ultimate composite strengths and distribution of failure locations (predominately within the gage section) reasonably matched the experimentally observed failure behavior. The predicted composite failure behavior suggests that use of macroscale models that exploit global geometric symmetries is inappropriate for cases where the actual distribution of local fiber strengths displays no such symmetries. This issue has not received much attention in the literature. Moreover, the model discretization at a specific length scale can have a profound effect on the computational costs associated with multiscale simulations and on the development of models that yield accurate yet tractable results.
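    The length-dependent Weibull description of fiber strength mentioned above can be sketched as P_f(sigma; L) = 1 - exp(-(L/L0)(sigma/sigma0)^m), which both evaluates failure probability and, when inverted, draws random fiber strengths for a simulation. The exact modification used with MAC/GMC may differ, and the parameter values below are illustrative, not those of the SCS-6 fibers.

    ```python
    import numpy as np

    def fiber_failure_probability(stress, length, sigma0, m, ref_length=1.0):
        """Length-scaled two-parameter Weibull CDF: P_f = 1 - exp(-(L/L0)*(s/s0)^m)."""
        return 1.0 - np.exp(-(length / ref_length) * (stress / sigma0) ** m)

    def sample_fiber_strengths(n, length, sigma0, m, ref_length=1.0, seed=0):
        """Draw random fiber strengths by inverting the CDF (illustrative values)."""
        u = np.random.default_rng(seed).uniform(size=n)
        return sigma0 * (-np.log(1.0 - u) * ref_length / length) ** (1.0 / m)

    # Longer fibers are statistically weaker for the same Weibull parameters
    print(fiber_failure_probability(3000.0, length=5.0, sigma0=4000.0, m=8.0))
    print(sample_fiber_strengths(3, length=5.0, sigma0=4000.0, m=8.0))
    ```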

  13. Prediction of postoperative outcome after hepatectomy with a new bedside test for maximal liver function capacity.

    PubMed

    Stockmann, Martin; Lock, Johan F; Riecke, Björn; Heyne, Karsten; Martus, Peter; Fricke, Michael; Lehmann, Sina; Niehues, Stefan M; Schwabe, Michael; Lemke, Arne-Jörn; Neuhaus, Peter

    2009-07-01

    To validate the LiMAx test, a new bedside test for the determination of maximal liver function capacity based on ¹³C-methacetin kinetics. To investigate the diagnostic performance of different liver function tests and scores including the LiMAx test for the prediction of postoperative outcome after hepatectomy. Liver failure is a major cause of mortality after hepatectomy. Preoperative prediction of residual liver function has been limited so far. Sixty-four patients undergoing hepatectomy were analyzed in a prospective observational study. Volumetric analysis of the liver was carried out using preoperative computed tomography and intraoperative measurements. Perioperative factors associated with morbidity and mortality were analyzed. Cutoff values of the LiMAx test were evaluated by receiver operating characteristic (ROC) analysis. Residual LiMAx demonstrated an excellent linear correlation with residual liver volume (r = 0.94, P < 0.001) after hepatectomy. The multivariate analysis revealed LiMAx on postoperative day 1 as the only predictor of liver failure (P = 0.003) and mortality (P = 0.004). The AUROC of the LiMAx test was 0.99 for the prediction of both liver failure and liver failure-related death. Preoperative volume/function analysis combining CT volumetry and LiMAx allowed an accurate calculation of the remnant liver function capacity prior to surgery (r = 0.85, P < 0.001). Residual liver function is the major factor influencing the outcome of patients after hepatectomy and can be predicted preoperatively by a combination of LiMAx and CT volumetry.

  14. The Effect of Fiber Strength Stochastics and Local Fiber Volume Fraction on Multiscale Progressive Failure of Composites

    NASA Technical Reports Server (NTRS)

    Ricks, Trenton M.; Lacy, Jr., Thomas E.; Bednarcyk, Brett A.; Arnold, Steven M.

    2013-01-01

    Continuous fiber unidirectional polymer matrix composites (PMCs) can exhibit significant local variations in fiber volume fraction as a result of processing conditions that can lead to further local differences in material properties and failure behavior. In this work, the coupled effects of both local variations in fiber volume fraction and the empirically-based statistical distribution of fiber strengths on the predicted longitudinal modulus and local tensile strength of a unidirectional AS4 carbon fiber/Hercules 3502 epoxy composite were investigated using the special purpose NASA Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC); local effective composite properties were obtained by homogenizing the material behavior over repeating unit cells (RUCs). The predicted effective longitudinal modulus was relatively insensitive to small (8%) variations in local fiber volume fraction. The composite tensile strength, however, was highly dependent on the local distribution in fiber strengths. The RUC-averaged constitutive response can be used to characterize lower length scale material behavior within a multiscale analysis framework that couples the NASA code FEAMAC and the ABAQUS finite element solver. Such an approach can be effectively used to analyze the progressive failure of PMC structures whose failure initiates at the RUC level. Consideration of the effect of local variations in constituent properties and morphologies on progressive failure of PMCs is a central aspect of the application of Integrated Computational Materials Engineering (ICME) principles for composite materials.

  15. Skin-Stiffener Debond Prediction Based on Computational Fracture Analysis

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Minguet, Pierre J.; Gates, Tom (Technical Monitor)

    2005-01-01

    Interlaminar fracture mechanics has proven useful for characterizing the onset of delaminations in composites and has been used with limited success primarily to investigate onset in fracture toughness specimens and laboratory size coupon type specimens. Future acceptance of the methodology by industry and certification authorities however, requires the successful demonstration of the methodology on structural level. For this purpose a panel was selected that is reinforced with stringers. Shear loading causes the panel to buckle and the resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. For finite element analysis, the panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot and the panel in the vicinity of the embedded defect were modeled with a local 3D solid model. Across the width of the stringer foot the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with the mixed-mode failure criterion of the graphite/epoxy material. For small applied loads the failure index is well below one across the entire width. With increasing load the failure index approaches one first near the edge of the stringer foot from which delamination is expected to grow. With increasing delamination lengths the buckling pattern of the panel changes and the failure index increases which suggests that rapid delamination growth from the initial defect is to be expected.
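    The abstract does not state which mixed-mode criterion was correlated with the computed strain energy release rates; a common choice for graphite/epoxy interfaces is the Benzeggagh-Kenane (B-K) form, sketched below, where a failure index of one or more indicates predicted delamination onset. The toughness values and exponent are illustrative assumptions, not the panel's material data.

    ```python
    def bk_failure_index(G_I, G_II, G_Ic, G_IIc, eta):
        """Mixed-mode failure index using the Benzeggagh-Kenane criterion:
        Gc = G_Ic + (G_IIc - G_Ic) * (G_II / (G_I + G_II))**eta; index = (G_I + G_II) / Gc."""
        G_total = G_I + G_II
        if G_total == 0.0:
            return 0.0
        mode_mixity = G_II / G_total
        G_c = G_Ic + (G_IIc - G_Ic) * mode_mixity ** eta
        return G_total / G_c

    # Illustrative toughness values (J/m^2) for a graphite/epoxy interface
    print(bk_failure_index(G_I=120.0, G_II=300.0, G_Ic=240.0, G_IIc=740.0, eta=2.0))
    ```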

  16. Using Controlled Landslide Initiation Experiments to Test Limit-Equilibrium Analyses of Slope Stability

    NASA Astrophysics Data System (ADS)

    Reid, M. E.; Iverson, R. M.; Brien, D. L.; Iverson, N. R.; Lahusen, R. G.; Logan, M.

    2004-12-01

    Most studies of landslide initiation employ limit equilibrium analyses of slope stability. Owing to a lack of detailed data, however, few studies have tested limit-equilibrium predictions against physical measurements of slope failure. We have conducted a series of field-scale, highly controlled landslide initiation experiments at the USGS debris-flow flume in Oregon; these experiments provide exceptional data to test limit equilibrium methods. In each of seven experiments, we attempted to induce failure in a 0.65m thick, 2m wide, 6m3 prism of loamy sand placed behind a retaining wall in the 31° sloping flume. We systematically investigated triggering of sliding by groundwater injection, by prolonged moderate-intensity sprinkling, and by bursts of high intensity sprinkling. We also used vibratory compaction to control soil porosity and thereby investigate differences in failure behavior of dense and loose soils. About 50 sensors were monitored at 20 Hz during the experiments, including nests of tiltmeters buried at 7 cm spacing to define subsurface failure geometry, and nests of tensiometers and pore-pressure sensors to define evolving pore-pressure fields. In addition, we performed ancillary laboratory tests to measure soil porosity, shear strength, hydraulic conductivity, and compressibility. In loose soils (porosity of 0.52 to 0.55), abrupt failure typically occurred along the flume bed after substantial soil deformation. In denser soils (porosity of 0.41 to 0.44), gradual failure occurred within the soil prism. All failure surfaces had a maximum length to depth ratio of about 7. In even denser soil (porosity of 0.39), we could not induce failure by sprinkling. The internal friction angle of the soils varied from 28° to 40° with decreasing porosity. We analyzed stability at failure, given the observed pore-pressure conditions just prior to large movement, using a 1-D infinite-slope method and a more complete 2-D Janbu method. Each method provides a static Factor of Safety (FS), and in theory failure occurs when FS ≤ 1. Using the 1-D analysis, all experiments having failure had FS well below 1 (typically 0.5-0.8). Using the 2-D analysis for these same conditions, FS was less than but closer to 1 (typically 0.8-0.9). For the experiment with no failure, the 2-D FS was, reassuringly, > 1. These results indicate that the 2-D Janbu analysis is more accurate than the 1-D infinite-slope method for computing limit-equilibrium slope stability in shallow slides with limited areal extent.
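    The 1-D infinite-slope calculation referred to above reduces to a short closed-form expression; a minimal sketch is given below. The soil parameters are plausible for the experiments' scale but are illustrative assumptions, not the measured values.

    ```python
    import math

    def infinite_slope_fs(c_eff, phi_deg, gamma, depth, slope_deg, pore_pressure):
        """1-D infinite-slope factor of safety:
        FS = (c' + (gamma*z*cos^2(b) - u)*tan(phi')) / (gamma*z*sin(b)*cos(b))."""
        beta = math.radians(slope_deg)
        phi = math.radians(phi_deg)
        normal_stress = gamma * depth * math.cos(beta) ** 2 - pore_pressure
        resisting = c_eff + normal_stress * math.tan(phi)
        driving = gamma * depth * math.sin(beta) * math.cos(beta)
        return resisting / driving

    # Illustrative numbers: 0.65 m of cohesionless loamy sand on a 31 degree slope
    print(infinite_slope_fs(c_eff=0.0, phi_deg=34.0, gamma=17e3,
                            depth=0.65, slope_deg=31.0, pore_pressure=3e3))
    # ~0.7, i.e. well below 1 once pore pressures rise
    ```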

  17. Automated real-time data acquisition and analysis of cardiorespiratory function.

    PubMed

    Moorman, R C; Mackenzie, C F; Ho, G H; Barnas, G M; Wilson, P D; Matjasko, M J

    1991-01-01

    Microcomputer generation of an automated record without complexity or operator intervention is desirable in many circumstances. We developed a microcomputer system specifically designed for simplified automated collection of cardiorespiratory data in research and clinical environments. We tested the system during possible extreme clinical conditions by comparison with a patient simulator. Ranges used were heart rate of 35-182 beats per minute, systemic blood pressures of 65-147 mmHg and venous blood pressures of 14-37 mmHg, all with superimposed respiratory variation of 0-24 mmHg. We also tested multiple electrocardiographic dysrhythmias. The results showed that there were no clinically relevant differences in vascular pressures, heart rate, and other variables between computer processed and simulator values. Manually and computer recorded physiological variables were compared to simulator values and the results show that computer values were more accurate. The system was used routinely in 21 animal research experiments over a 4 month period employing a total of 270 collection periods. The file system integrity was tested and found to be satisfactory, even during power failures. Unlike other data collection systems this one (1) requires little or no operator intervention and training, (2) has been rigorously tested for accuracy using a wide variety of extreme patient conditions, (3) has had computer derived values measured against a standardized reference, (4) is reliable against external sources of computer failure, and (5) has screen and printout presentations with quick and easily understandable formats.

  18. FTMP - A highly reliable Fault-Tolerant Multiprocessor for aircraft

    NASA Technical Reports Server (NTRS)

    Hopkins, A. L., Jr.; Smith, T. B., III; Lala, J. H.

    1978-01-01

    The FTMP (Fault-Tolerant Multiprocessor) is a complex multiprocessor computer that employs a form of redundancy related to systems considered by Mathur (1971), in which each major module can substitute for any other module of the same type. Despite the conceptual simplicity of the redundancy form, the implementation has many intricacies owing partly to the low target failure rate, and partly to the difficulty of eliminating single-fault vulnerability. An extensive analysis of the computer through the use of such modeling techniques as Markov processes and combinatorial mathematics shows that for random hard faults the computer can meet its requirements. It is also shown that the maintenance scheduled at intervals of 200 hr or more can be adequate most of the time.

  19. Loading tests of a wing structure for a hypersonic aircraft

    NASA Technical Reports Server (NTRS)

    Fields, R. A.; Reardon, L. F.; Siegel, W. H.

    1980-01-01

    Room-temperature loading tests were conducted on a wing structure designed with a beaded panel concept for a Mach 8 hypersonic research airplane. Strain, stress, and deflection data were compared with the results of three finite-element structural analysis computer programs and with design data. The test program data were used to evaluate the structural concept and the methods of analysis used in the design. A force stiffness technique was utilized in conjunction with load conditions which produced various combinations of panel shear and compression loading to determine the failure envelope of the buckling-critical beaded panels. The force-stiffness data did not result in any predictions of buckling failure. It was, therefore, concluded that the panels were conservatively designed as a result of design constraints and assumptions of panel eccentricities. The analysis programs calculated strains and stresses competently. Comparisons between calculated and measured structural deflections showed good agreement. The test program offered a positive demonstration of the beaded panel concept subjected to room-temperature load conditions.

  20. Quantitative risk assessment system (QRAS)

    NASA Technical Reports Server (NTRS)

    Tan, Zhibin (Inventor); Mosleh, Ali (Inventor); Weinstock, Robert M (Inventor); Smidts, Carol S (Inventor); Chang, Yung-Hsien (Inventor); Groen, Francisco J (Inventor); Swaminathan, Sankaran (Inventor)

    2001-01-01

    A quantitative risk assessment system (QRAS) builds a risk model of a system for which risk of failure is being assessed, then analyzes the risk of the system corresponding to the risk model. The QRAS performs sensitivity analysis of the risk model by altering fundamental components and quantifications built into the risk model, then re-analyzes the risk of the system using the modifications. More particularly, the risk model is built by building a hierarchy, creating a mission timeline, quantifying failure modes, and building/editing event sequence diagrams. Multiplicities, dependencies, and redundancies of the system are included in the risk model. For analysis runs, a fixed baseline is first constructed and stored. This baseline contains the lowest level scenarios, preserved in event tree structure. The analysis runs, at any level of the hierarchy and below, access this baseline for risk quantitative computation as well as ranking of particular risks. A standalone Tool Box capability exists, allowing the user to store application programs within QRAS.

  1. Integrating Insults: Using Fault Tree Analysis to Guide Schizophrenia Research across Levels of Analysis.

    PubMed

    MacDonald III, Angus W; Zick, Jennifer L; Chafee, Matthew V; Netoff, Theoden I

    2015-01-01

    The grand challenges of schizophrenia research are linking the causes of the disorder to its symptoms and finding ways to overcome those symptoms. We argue that the field will be unable to address these challenges within psychiatry's standard neo-Kraepelinian (DSM) perspective. At the same time the current corrective, based in molecular genetics and cognitive neuroscience, is also likely to flounder due to its neglect for psychiatry's syndromal structure. We suggest adopting a new approach long used in reliability engineering, which also serves as a synthesis of these approaches. This approach, known as fault tree analysis, can be combined with extant neuroscientific data collection and computational modeling efforts to uncover the causal structures underlying the cognitive and affective failures in people with schizophrenia as well as other complex psychiatric phenomena. By making explicit how causes combine from basic faults to downstream failures, this approach makes affordances for: (1) causes that are neither necessary nor sufficient in and of themselves; (2) within-diagnosis heterogeneity; and (3) between diagnosis co-morbidity.
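    As a purely engineering-side illustration of how a fault tree combines basic-fault probabilities into a downstream failure probability, the sketch below evaluates simple AND/OR gates under an independence assumption; the tree and numbers are invented and carry no clinical meaning.

    ```python
    def and_gate(probs):
        """All inputs must occur (independent faults): product of probabilities."""
        p = 1.0
        for q in probs:
            p *= q
        return p

    def or_gate(probs):
        """At least one input occurs: 1 - product of complements."""
        p = 1.0
        for q in probs:
            p *= (1.0 - q)
        return 1.0 - p

    # Toy tree: the top-level failure occurs if (A AND B) OR C
    p_a, p_b, p_c = 0.10, 0.20, 0.05
    p_top = or_gate([and_gate([p_a, p_b]), p_c])
    print(f"top-event probability: {p_top:.3f}")   # 0.069
    ```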

  2. Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasenkamp, Daren; Sim, Alexander; Wehner, Michael

    Extensive computing power has been used to tackle issues such as climate changes, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of the cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup that is comparable to running the same analysis task using MPI. However, compared to MPI based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure; as long as a single VM is running, it can make progress, while as soon as one MPI node fails the whole analysis job fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.

  3. Adhesive Cementation Promotes Higher Fatigue Resistance to Zirconia Crowns.

    PubMed

    Campos, F; Valandro, L F; Feitosa, S A; Kleverlaan, C J; Feilzer, A J; de Jager, N; Bottino, M A

    The aim of this study was to investigate the influence of the cementation strategy on the fatigue resistance of zirconia crowns. The null hypothesis was that the cementation strategy would not affect the fatigue resistance of the crowns. Seventy-five simplified molar tooth crown preparations were machined in glass fiber-filled epoxy resin. Zirconia crowns were designed (thickness=0.7 mm), milled by computer-aided design/computer-aided manufacturing, and sintered, as recommended. Crowns were cemented onto the resin preparations using five cementation strategies (n=15): ZP, luting with zinc phosphate cement; PN, luting with Panavia F resin cement; AL, air particle abrasion with alumina particles (125 μm) as the crown inner surface pretreatment + Panavia F; CJ, tribochemical silica coating as crown inner surface pretreatment + Panavia F; and GL, application of a thin layer of porcelain glaze followed by etching with hydrofluoric acid and silanization as crown inner surface pretreatment + Panavia F. Resin cement was activated for 30 seconds for each surface. Specimens were tested until fracture in a stepwise stress fatigue test (10,000 cycles in each step, 600 to 1400 N, frequency of 1.4 Hz). The mode of failure was analyzed by stereomicroscopy and scanning electron microscopy. Data were analyzed by Kaplan-Meier and Mantel-Cox (log rank) tests and a pairwise comparison (p<0.05) and by Weibull analysis. The CJ group had the highest load mean value for failure (1200 N), followed by the PN (1026 N), AL (1026 N), and GL (1013 N) groups, while the ZP group had the lowest mean value (706 N). Adhesively cemented groups (CJ, AL, PN, and GL) needed a higher number of cycles for failure than the group ZP did. The groups' Weibull moduli (CJ=5.9; AL=4.4; GL=3.9; PN=3.7; ZP=2.1) were different, considering the number of cycles for failure data. The predominant mode of failure was a fracture that initiated in the cement/zirconia layer. Finite element analysis showed the different stress distribution for the two models. Adhesive cementation of zirconia crowns improves fatigue resistance.

  4. A hierarchy of computationally derived surgical and patient influences on metal on metal press-fit acetabular cup failure.

    PubMed

    Clarke, S G; Phillips, A T M; Bull, A M J; Cobb, J P

    2012-06-01

    The impact of anatomical variation and surgical error on excessive wear and loosening of the acetabular component of large diameter metal-on-metal hip arthroplasties was measured using a multi-factorial analysis through 112 different simulations. Each surgical scenario was subject to eight different daily loading activities using finite element analysis. Excessive wear appears to be predominantly dependent on cup orientation, with inclination error having a higher influence than version error, according to the study findings. Acetabular cup loosening, as inferred from initial implant stability, appears to depend predominantly on factors concerning the area of cup-bone contact, specifically the level of cup seating achieved and the individual patient's anatomy. The extent of press fit obtained at time of surgery did not appear to influence either mechanism of failure in this study. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Analysis of Composite Panel-Stiffener Debonding Using a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Minguet, Pierre J.

    2006-01-01

    Interlaminar fracture mechanics has proven useful for characterizing the onset of delaminations in composites and has been used with limited success primarily to investigate onset in fracture toughness specimens and laboratory size coupon type specimens. Future acceptance of the methodology by industry and certification authorities, however, requires the successful demonstration of the methodology at the structural level. For this purpose a panel was selected that was reinforced with stringers. Shear loading causes the panel to buckle and the resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. For finite element analysis, the panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot and the panel in the vicinity of the embedded defect were modeled with a local 3D solid model. A failure index was calculated by correlating the computed mixed-mode strain energy release rates with the mixed-mode failure criterion of the graphite/epoxy material.

  6. Assessment of Intralaminar Progressive Damage and Failure Analysis Using an Efficient Evaluation Framework

    NASA Technical Reports Server (NTRS)

    Hyder, Imran; Schaefer, Joseph; Justusson, Brian; Wanthal, Steve; Leone, Frank; Rose, Cheryl

    2017-01-01

    Reducing the timeline for development and certification of composite structures has been a long-standing objective of the aerospace industry. This timeline can be further exacerbated when attempting to integrate new fiber-reinforced composite materials due to the large amount of testing required at every level of design. Computational progressive damage and failure analysis (PDFA) attempts to mitigate this effect; however, new PDFA methods have been slow to be adopted in industry since material model evaluation techniques have not been fully defined. This study presents an efficient evaluation framework which uses a piecewise verification and validation (V&V) approach for PDFA methods. Specifically, the framework is applied to evaluate PDFA research codes within the context of intralaminar damage. Methods are incrementally taken through various V&V exercises specifically tailored to study PDFA intralaminar damage modeling capability. Finally, methods are evaluated against a defined set of success criteria to highlight successes and limitations.

  7. Software For Design And Analysis Of Tanks And Cylindrical Shells

    NASA Technical Reports Server (NTRS)

    Luz, Paul L.; Graham, Jerry B.

    1995-01-01

    Skin-stringer Tank Analysis Spreadsheet System (STASS) computer program developed for use as preliminary design software tool that enables quick-turnaround design and analysis of structural domes and cylindrical barrel sections in propellant tanks or other cylindrical shells. Determines minimum required skin thicknesses for domes and cylindrical shells to withstand material failure due to applied pressures (ullage and/or hydrostatic) and runs buckling analyses on cylindrical shells and skin-stringers. Implemented as workbook program, using Microsoft Excel v4.0 on Macintosh II. Also implemented using Microsoft Excel v4.0 for Microsoft Windows v3.1 IBM PC.
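    The record does not give STASS's internal equations; as a hint of the kind of pressure-driven skin sizing it performs, the thin-wall hoop-stress relation t >= SF * p * r / sigma_allow is sketched below. The safety factor, pressure, radius, and allowable stress are illustrative assumptions.

    ```python
    def min_cylinder_skin_thickness(pressure_pa, radius_m, allowable_stress_pa,
                                    safety_factor=1.4):
        """Thin-wall hoop-stress sizing: t >= SF * p * r / sigma_allow."""
        return safety_factor * pressure_pa * radius_m / allowable_stress_pa

    # Example: 0.35 MPa ullage pressure, 2 m tank radius, 280 MPa allowable stress
    t = min_cylinder_skin_thickness(0.35e6, 2.0, 280e6)
    print(f"minimum skin thickness: {t * 1000:.2f} mm")   # ~3.5 mm
    ```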

  8. The second Sandia Fracture Challenge. Predictions of ductile failure under quasi-static and moderate-rate dynamic loading

    DOE PAGES

    Boyce, B. L.; Kramer, S. L. B.; Bosiljevac, T. R.; ...

    2016-03-14

    Ductile failure of structural metals is relevant to a wide range of engineering scenarios. Computational methods are employed to anticipate the critical conditions of failure, yet they sometimes provide inaccurate and misleading predictions. Challenge scenarios, such as the one presented in the current work, provide an opportunity to assess the blind, quantitative predictive ability of simulation methods against a previously unseen failure problem. Instead of evaluating the predictions of a single simulation approach, the Sandia Fracture Challenge relied on numerous volunteer teams with expertise in computational mechanics to apply a broad range of computational methods, numerical algorithms, and constitutive models to the challenge. This exercise is intended to evaluate the state of health of technologies available for failure prediction. In the first Sandia Fracture Challenge, a wide range of issues were raised in ductile failure modeling, including a lack of consistency in failure models, the importance of shear calibration data, and difficulties in quantifying the uncertainty of prediction [see Boyce et al. (Int J Fract 186:5–68, 2014) for details of these observations]. This second Sandia Fracture Challenge investigated the ductile rupture of a Ti–6Al–4V sheet under both quasi-static and modest-rate dynamic loading (failure in ~ 0.1 s). Like the previous challenge, the sheet had an unusual arrangement of notches and holes that added geometric complexity and fostered a competition between tensile- and shear-dominated failure modes. The teams were asked to predict the fracture path and quantitative far-field failure metrics such as the peak force and displacement to cause crack initiation. Fourteen teams contributed blind predictions, and the experimental outcomes were quantified in three independent test labs. In addition, shortcomings were revealed in this second challenge such as inconsistency in the application of appropriate boundary conditions, need for a thermomechanical treatment of the heat generation in the dynamic loading condition, and further difficulties in model calibration based on limited real-world engineering data. As with the prior challenge, this work not only documents the ‘state-of-the-art’ in computational failure prediction of ductile tearing scenarios, but also provides a detailed dataset for non-blind assessment of alternative methods.

  10. More About Software for No-Loss Computing

    NASA Technical Reports Server (NTRS)

    Edmonds, Iarina

    2007-01-01

    A document presents some additional information on the subject matter of "Integrated Hardware and Software for No- Loss Computing" (NPO-42554), which appears elsewhere in this issue of NASA Tech Briefs. To recapitulate: The hardware and software designs of a developmental parallel computing system are integrated to effectuate a concept of no-loss computing (NLC). The system is designed to reconfigure an application program such that it can be monitored in real time and further reconfigured to continue a computation in the event of failure of one of the computers. The design provides for (1) a distributed class of NLC computation agents, denoted introspection agents, that effects hierarchical detection of anomalies; (2) enhancement of the compiler of the parallel computing system to cause generation of state vectors that can be used to continue a computation in the event of a failure; and (3) activation of a recovery component when an anomaly is detected.

  11. Is Computer-Aided Instruction an Effective Tier-One Intervention for Kindergarten Students at Risk for Reading Failure in an Applied Setting?

    ERIC Educational Resources Information Center

    Kreskey, Donna DeVaughn; Truscott, Stephen D.

    2016-01-01

    This study investigated the use of computer-aided instruction (CAI) as an intervention for kindergarten students at risk for reading failure. Headsprout Early Reading (Headsprout 2005), a type of CAI, provides internet-based, reading instruction incorporating the critical components of reading instruction cited by the National Reading Panel (NRP…

  12. A Comparison of Success and Failure Rates between Computer-Assisted and Traditional College Algebra Sections

    ERIC Educational Resources Information Center

    Herron, Sherry; Gandy, Rex; Ye, Ningjun; Syed, Nasser

    2012-01-01

    A unique aspect of the implementation of a computer algebra system (CAS) at a comprehensive university in the U.S. allowed us to compare the student success and failure rates to the traditional method of teaching college algebra. Due to space limitations, the university offered sections of both CAS and traditional simultaneously and, upon…

  13. Memory management and compiler support for rapid recovery from failures in computer systems

    NASA Technical Reports Server (NTRS)

    Fuchs, W. K.

    1991-01-01

    This paper describes recent developments in the use of memory management and compiler technology to support rapid recovery from failures in computer systems. The techniques described include cache coherence protocols for user transparent checkpointing in multiprocessor systems, compiler-based checkpoint placement, compiler-based code modification for multiple instruction retry, and forward recovery in distributed systems utilizing optimistic execution.

  14. Closed-form solution of decomposable stochastic models

    NASA Technical Reports Server (NTRS)

    Sjogren, Jon A.

    1990-01-01

    Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems), or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.
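    SHARPE's closed-form symbolic machinery is not reproduced here, but the decomposition idea can be sketched numerically: solve each subsystem's Markov model separately, then combine the marginal state probabilities, including a combined failure state (both subsystems degraded) that is not a failure state of either subsystem alone. The generator matrix, rates, and failure rule are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def state_probs(Q, p0, t):
        """Transient state probabilities of a CTMC with generator Q at time t."""
        return p0 @ expm(Q * t)

    # Each subsystem: states 0 = up, 1 = degraded, 2 = failed (rates per hour)
    Q = np.array([[-2e-3, 1.5e-3, 0.5e-3],
                  [0.0,  -1e-3,   1e-3  ],
                  [0.0,   0.0,    0.0   ]])
    p0 = np.array([1.0, 0.0, 0.0])
    t = 200.0

    pA = state_probs(Q, p0, t)    # subsystem A solved on its own
    pB = state_probs(Q, p0, t)    # subsystem B solved on its own

    # Combined system fails if either subsystem has failed, or both are degraded
    # (a failure state arising from non-failure states of the subsystems).
    p_fail = (pA[2] + pB[2] - pA[2] * pB[2]) + pA[1] * pB[1]
    print(f"combined failure probability at {t:.0f} h: {p_fail:.4f}")
    ```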

  15. Probabilistic Structural Analysis Program

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.

    2010-01-01

    NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and lifting methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.
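    NASA/NESSUS relies on efficient methods such as the advanced mean value and adaptive importance sampling algorithms; the quantity they approximate can be illustrated with a plain Monte Carlo estimate of P(g < 0) for a simple strength-minus-load limit state. The distributions and parameters below are illustrative assumptions, not a NESSUS model.

    ```python
    import numpy as np

    def monte_carlo_pof(n_samples=1_000_000, seed=1):
        """Estimate P(g < 0) for the limit state g = R - S (strength minus load)."""
        rng = np.random.default_rng(seed)
        strength = rng.normal(loc=500.0, scale=40.0, size=n_samples)              # MPa
        load = rng.lognormal(mean=np.log(350.0), sigma=0.10, size=n_samples)      # MPa
        return np.mean(strength - load < 0.0)

    print(f"estimated probability of failure: {monte_carlo_pof():.2e}")
    ```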

  16. Safety and feasibility of STAT RAD: Improvement of a novel rapid tomotherapy-based radiation therapy workflow by failure mode and effects analysis.

    PubMed

    Jones, Ryan T; Handsfield, Lydia; Read, Paul W; Wilson, David D; Van Ausdal, Ray; Schlesinger, David J; Siebers, Jeffrey V; Chen, Quan

    2015-01-01

    The clinical challenge of radiation therapy (RT) for painful bone metastases requires clinicians to consider both treatment efficacy and patient prognosis when selecting a radiation therapy regimen. The traditional RT workflow requires several weeks for common palliative RT schedules of 30 Gy in 10 fractions or 20 Gy in 5 fractions. At our institution, we have created a new RT workflow termed "STAT RAD" that allows clinicians to perform computed tomographic (CT) simulation, planning, and highly conformal single fraction treatment delivery within 2 hours. In this study, we evaluate the safety and feasibility of the STAT RAD workflow. A failure mode and effects analysis (FMEA) was performed on the STAT RAD workflow, including development of a process map, identification of potential failure modes, description of the cause and effect, temporal occurrence, and team member involvement in each failure mode, and examination of existing safety controls. A risk probability number (RPN) was calculated for each failure mode. As necessary, workflow adjustments were then made to safeguard failure modes of significant RPN values. After workflow alterations, RPN numbers were again recomputed. A total of 72 potential failure modes were identified in the pre-FMEA STAT RAD workflow, of which 22 met the RPN threshold for clinical significance. Workflow adjustments included the addition of a team member checklist, changing simulation from megavoltage CT to kilovoltage CT, alteration of patient-specific quality assurance testing, and allocating increased time for critical workflow steps. After these modifications, only 1 failure mode maintained RPN significance; patient motion after alignment or during treatment. Performing the FMEA for the STAT RAD workflow before clinical implementation has significantly strengthened the safety and feasibility of STAT RAD. The FMEA proved a valuable evaluation tool, identifying potential problem areas so that we could create a safer workflow. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
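    The RPN bookkeeping described above is simple to sketch: each failure mode's severity, occurrence, and detectability ratings are multiplied and compared against a significance threshold. The mode names, ratings, and threshold below are illustrative, not the study's actual FMEA table.

    ```python
    # Each failure mode: (description, severity, occurrence, detectability), 1-10 scales
    failure_modes = [
        ("patient motion after alignment", 8, 4, 7),
        ("wrong CT dataset loaded",        9, 2, 2),
        ("plan not independently checked", 6, 3, 3),
    ]

    RPN_THRESHOLD = 100   # illustrative significance cutoff

    for name, severity, occurrence, detectability in failure_modes:
        rpn = severity * occurrence * detectability
        flag = "SIGNIFICANT" if rpn >= RPN_THRESHOLD else "ok"
        print(f"{name:32s} RPN = {rpn:4d}  {flag}")
    ```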

  17. Free-Swinging Failure Tolerance for Robotic Manipulators

    NASA Technical Reports Server (NTRS)

    English, James

    1997-01-01

    Under this GSRP fellowship, software-based failure-tolerance techniques were developed for robotic manipulators. The focus was on failures characterized by the loss of actuator torque at a joint, called free-swinging failures. The research results spanned many aspects of the free-swinging failure-tolerance problem, from preparing for an expected failure to discovery of postfailure capabilities to establishing efficient methods to realize those capabilities. Developed algorithms were verified using computer-based dynamic simulations, and these were further verified using hardware experiments at Johnson Space Center.

  18. Predictive modeling of dynamic fracture growth in brittle materials with machine learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Bryan A.; Rougier, Esteban; O’Malley, Daniel

    We use simulation data from a high-fidelity Finite-Discrete Element Model to build an efficient Machine Learning (ML) approach to predict fracture growth and coalescence. Our goal is for the ML approach to be used as an emulator in place of the computationally intensive high-fidelity models in an uncertainty quantification framework where thousands of forward runs are required. The failure of materials with various fracture configurations (size, orientation, and the number of initial cracks) is explored and used as data to train our ML model. This novel approach has shown promise in predicting spatial (path to failure) and temporal (time to failure) aspects of brittle material failure. Predictions of where dominant fracture paths formed within a material were ~85% accurate, and the time of material failure deviated from the actual failure time by an average of ~16%. Additionally, the ML model achieves a reduction in computational cost by multiple orders of magnitude.
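
    A minimal sketch of the emulator idea follows, assuming synthetic data in place of the finite-discrete element simulations: a regression model is trained to map features of the initial crack configuration (number, length, and orientation of cracks) to time to failure. The feature set, the invented ground-truth relation, and the random-forest model choice are illustrative assumptions only.

```python
# Emulator sketch: a regression model maps features of the initial crack configuration
# to time to failure, standing in for training on finite-discrete element output.
# The features, the "ground truth" relation, and the model choice are invented here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
n_cracks   = rng.integers(2, 20, size=n)      # number of initial cracks
mean_len   = rng.uniform(0.5, 5.0, size=n)    # mean crack length (mm)
mean_angle = rng.uniform(0.0, 90.0, size=n)   # mean crack orientation (deg)

# Invented surrogate "truth": denser, longer, well-aligned crack fields fail sooner.
time_to_failure = (100.0 / (n_cracks * mean_len)) * (1.0 + np.cos(np.radians(mean_angle))) \
                  + rng.normal(0.0, 0.1, size=n)

X = np.column_stack([n_cracks, mean_len, mean_angle])
X_tr, X_te, y_tr, y_te = train_test_split(X, time_to_failure, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
rel_err = np.abs(model.predict(X_te) - y_te) / y_te
print(f"mean relative error in predicted time to failure: {rel_err.mean():.1%}")
```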

  19. Predictive modeling of dynamic fracture growth in brittle materials with machine learning

    DOE PAGES

    Moore, Bryan A.; Rougier, Esteban; O’Malley, Daniel; ...

    2018-02-22

    We use simulation data from a high-fidelity Finite-Discrete Element Model to build an efficient Machine Learning (ML) approach to predict fracture growth and coalescence. Our goal is for the ML approach to be used as an emulator in place of the computationally intensive high-fidelity models in an uncertainty quantification framework where thousands of forward runs are required. The failure of materials with various fracture configurations (size, orientation, and the number of initial cracks) is explored and used as data to train our ML model. This novel approach has shown promise in predicting spatial (path to failure) and temporal (time to failure) aspects of brittle material failure. Predictions of where dominant fracture paths formed within a material were ~85% accurate, and the time of material failure deviated from the actual failure time by an average of ~16%. Additionally, the ML model achieves a reduction in computational cost by multiple orders of magnitude.

  20. Commercialization of NESSUS: Status

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Millwater, Harry R.

    1991-01-01

    A plan was initiated in 1988 to commercialize the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) probabilistic structural analysis software. The goal of the on-going commercialization effort is to begin the transfer of Probabilistic Structural Analysis Method (PSAM) developed technology into industry and to develop additional funding resources in the general area of structural reliability. The commercialization effort is summarized. The SwRI NESSUS Software System is a general purpose probabilistic finite element computer program using state of the art methods for predicting stochastic structural response due to random loads, material properties, part geometry, and boundary conditions. NESSUS can be used to assess structural reliability, to compute probability of failure, to rank the input random variables by importance, and to provide a more cost effective design than traditional methods. The goal is to develop a general probabilistic structural analysis methodology to assist in the certification of critical components in the next generation Space Shuttle Main Engine.

  1. Efficient reliability analysis of structures with the rotational quasi-symmetric point- and the maximum entropy methods

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Dang, Chao; Kong, Fan

    2017-10-01

    This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
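
    The last step of such moment-based methods, turning a fitted density for the performance function into a failure probability, is a one-dimensional integral over the failure domain g <= 0 and can be sketched briefly. The snippet below substitutes a kernel density estimate for the paper's fractional-moment maximum-entropy density, and the performance function itself is an invented example; it is meant only to show that final integral and its comparison against brute-force MCS.

```python
# Once a density p(g) has been fit to the performance function g(X), the probability
# of failure is the integral of p(g) over g <= 0. For brevity this sketch uses a
# Gaussian kernel density estimate in place of the fractional-moment maximum-entropy
# density of the paper; the limit state below is an invented example.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

def g(x1, x2):
    # invented performance function: failure when g <= 0
    return 3.0 + 0.1 * (x1 - x2) ** 2 - (x1 + x2) / np.sqrt(2.0)

x = rng.normal(size=(200_000, 2))
samples = g(x[:, 0], x[:, 1])

pdf = gaussian_kde(samples)
pf_pdf = pdf.integrate_box_1d(-np.inf, 0.0)   # integral of fitted density over failure domain
pf_mcs = np.mean(samples <= 0.0)              # brute-force Monte Carlo reference
print(f"Pf from fitted density: {pf_pdf:.3e}, Pf from MCS: {pf_mcs:.3e}")
```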

  2. Learning from Failures: Archiving and Designing with Failure and Risk

    NASA Technical Reports Server (NTRS)

    VanWie, Michael; Bohm, Matt; Barrientos, Francesca; Turner, Irem; Stone, Robert

    2005-01-01

    Identifying and mitigating risks during conceptual design remains an ongoing challenge. This work presents the results of collaborative efforts between The University of Missouri-Rolla and NASA Ames Research Center to examine how an early stage mission design team at NASA addresses risk, and how a computational support tool can assist these designers in their tasks. Results of our observations are given in addition to a brief example of our implementation of a repository-based computational tool that allows users to browse and search through archived failure and risk data as related to either physical artifacts or functionality.

  3. Gaussian process surrogates for failure detection: A Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Wang, Hongqiao; Lin, Guang; Li, Jinglai

    2016-05-01

    An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation in which the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
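
    A stripped-down version of the surrogate idea (without the paper's Bayesian experimental-design step for choosing the sampling points) is sketched below: a Gaussian process is fit to a small, fixed budget of limit-state evaluations, and the failure probability is then estimated by Monte Carlo on the cheap surrogate. The limit state g is an invented stand-in for an expensive computer model, and the design of experiments here is just uniform random sampling.

```python
# Gaussian process surrogate for failure probability estimation: fit a GP to a small
# number of "expensive" limit-state evaluations, then run Monte Carlo on the surrogate.
# The limit state, training budget, and kernel are illustrative assumptions; the
# paper's contribution (Bayesian design of the training points) is not reproduced here.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

def g(x):
    # stand-in for an expensive computer model; failure when g <= 0
    return 2.5 - x[:, 0] ** 2 / 4.0 - x[:, 1]

X_train = rng.uniform(-4.0, 4.0, size=(60, 2))        # a budget of 60 model runs
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, g(X_train))

X_mc = rng.normal(size=(100_000, 2))                  # cheap Monte Carlo on the surrogate
pf_surrogate = np.mean(gp.predict(X_mc) <= 0.0)
pf_direct = np.mean(g(X_mc) <= 0.0)                   # possible here only because g is cheap
print(f"Pf (surrogate) = {pf_surrogate:.3e}, Pf (direct MC) = {pf_direct:.3e}")
```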

  4. Structures and Dynamics Division: Research and technology plans for FY 1983 and accomplishments for FY 1982

    NASA Technical Reports Server (NTRS)

    Bales, K. S.

    1983-01-01

    The objectives, expected results, approach, and milestones for research projects of the IPAD Project Office and the impact dynamics, structural mechanics, and structural dynamics branches of the Structures and Dynamics Division are presented. Research facilities are described. Topics covered include computer-aided design; general aviation/transport crash dynamics; aircraft ground performance; composite structures; failure analysis; space vehicle dynamics; and large space structures.

  5. Data-Driven Correlation Analysis Between Observed 3D Fatigue-Crack Path and Computed Fields from High-Fidelity, Crystal-Plasticity, Finite-Element Simulations

    NASA Astrophysics Data System (ADS)

    Pierson, Kyle D.; Hochhalter, Jacob D.; Spear, Ashley D.

    2018-05-01

    Systematic correlation analysis was performed between simulated micromechanical fields in an uncracked polycrystal and the known path of an eventual fatigue-crack surface based on experimental observation. Concurrent multiscale finite-element simulation of cyclic loading was performed using a high-fidelity representation of grain structure obtained from near-field high-energy x-ray diffraction microscopy measurements. An algorithm was developed to parameterize and systematically correlate the three-dimensional (3D) micromechanical fields from simulation with the 3D fatigue-failure surface from experiment. For comparison, correlation coefficients were also computed between the micromechanical fields and hypothetical, alternative surfaces. The correlation of the fields with hypothetical surfaces was found to be consistently weaker than that with the known crack surface, suggesting that the micromechanical fields of the cyclically loaded, uncracked microstructure might provide some degree of predictiveness for microstructurally small fatigue-crack paths, although the extent of such predictiveness remains to be tested. In general, gradients of the field variables exhibit stronger correlations with crack path than the field variables themselves. Results from the data-driven approach implemented here can be leveraged in future model development for prediction of fatigue-failure surfaces (for example, to facilitate univariate feature selection required by convolution-based models).

  6. Controlled impact demonstration airframe bending bridges

    NASA Technical Reports Server (NTRS)

    Soltis, S. J.

    1986-01-01

    The calibration of the KRASH and DYCAST models for transport aircraft is discussed. The FAA uses computer analysis techniques to predict the response of controlled impact demonstration (CID) during impact. The moment bridges can provide a direct correlation between the predictive loads or moments that the models will predict and what was experienced during the actual impact. Another goal is to examine structural failure mechanisms and correlate with analytical predictions. The bending bridges did achieve their goals and objectives. The data traces do provide some insight with respect to airframe loads and structural response. They demonstrate quite clearly what's happening to the airframe. A direct quantification of metal airframe loads was measured by the moment bridges. The measured moments can be correlated with the KRASH and DYCAST computer models. The bending bridge data support airframe failure mechanisms analysis and provide residual airframe strength estimation. It did not appear as if any of the bending bridges on the airframe exceeded limit loads. (The observed airframe fracture was due to the fuselage encounter with the tomahawk which tore out the keel beam.) The airframe bridges can be used to estimate the impact conditions and those estimates are correlating with some of the other data measurements. Structural response, frequency and structural damping are readily measured by the moment bridges.

  7. Probabilistic Fracture Mechanics Analysis of the Orbiter's LH2 Feedline Flowliner

    NASA Technical Reports Server (NTRS)

    Bonacuse, Peter J. (Technical Monitor); Hudak, Stephen J., Jr.; Huyse, Luc; Chell, Graham; Lee, Yi-Der; Riha, David S.; Thacker, Ben; McClung, Craig; Gardner, Brian; Leverant, Gerald R.

    2005-01-01

    Work performed by Southwest Research Institute (SwRI) as part of an Independent Technical Assessment (ITA) for the NASA Engineering and Safety Center (NESC) is summarized. The ITA goal was to establish a flight rationale in light of a history of fatigue cracking due to flow-induced vibrations in the feedline flowliners that supply liquid hydrogen to the space shuttle main engines. Prior deterministic analyses using worst-case assumptions predicted failure in a single flight. The current work formulated statistical models for dynamic loading and cryogenic fatigue crack growth properties, instead of using worst-case assumptions. Weight function solutions for bivariant stressing were developed to determine accurate crack "driving-forces". Monte Carlo simulations showed that low flowliner probabilities of failure (POF = 0.001 to 0.0001) are achievable, provided pre-flight inspections for cracks are performed with adequate probability of detection (POD); specifically, 20/75 mils with 50%/99% POD. Measurements to confirm assumed POD curves are recommended. Since the computed POFs are very sensitive to the cyclic loads/stresses and the analysis of strain gage data revealed inconsistencies with the previous assumption of a single dominant vibration mode, further work to reconcile this difference is recommended. It is possible that the unaccounted vibrational modes in the flight spectra could increase the computed POFs.

  8. Reproducing Kernel Particle Method in Plasticity of Pressure-Sensitive Material with Reference to Powder Forming Process

    NASA Astrophysics Data System (ADS)

    Khoei, A. R.; Samimi, M.; Azami, A. R.

    2007-02-01

    In this paper, an application of the reproducing kernel particle method (RKPM) is presented in plasticity behavior of pressure-sensitive material. The RKPM technique is implemented in large deformation analysis of powder compaction process. The RKPM shape function and its derivatives are constructed by imposing the consistency conditions. The essential boundary conditions are enforced by the use of the penalty approach. The support of the RKPM shape function covers the same set of particles during powder compaction, hence no instability is encountered in the large deformation computation. A double-surface plasticity model is developed in numerical simulation of pressure-sensitive material. The plasticity model includes a failure surface and an elliptical cap, which closes the open space between the failure surface and hydrostatic axis. The moving cap expands in the stress space according to a specified hardening rule. The cap model is presented within the framework of large deformation RKPM analysis in order to predict the non-uniform relative density distribution during powder die pressing. Numerical computations are performed to demonstrate the applicability of the algorithm in modeling of powder forming processes and the results are compared to those obtained from finite element simulation to demonstrate the accuracy of the proposed model.

  9. Operational experience in underwater photogrammetry

    NASA Astrophysics Data System (ADS)

    Leatherdale, John D.; John Turner, D.

    Underwater photogrammetry has become established as a cost-effective technique for inspection and maintenance of platforms and pipelines for the offshore oil industry. A commercial service based in Scotland operates in the North Sea, USA, Brazil, West Africa and Australia. 70 mm cameras and flash units are built for the purpose and analytical plotters and computer graphics systems are used for photogrammetric measurement and analysis of damage, corrosion, weld failures and redesign of underwater structures. Users are seeking simple, low-cost systems for photogrammetric analysis which their engineers can use themselves.

  10. Computational Approach for Developing Blood Pump

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan

    2002-01-01

    This viewgraph presentation provides an overview of the computational approach to developing a ventricular assist device (VAD) which utilizes NASA aerospace technology. The VAD is used as a temporary support to sick ventricles for those who suffer from late stage congestive heart failure (CHF). The need for donor hearts is much greater than their availability, and the VAD is seen as a bridge-to-transplant. The computational issues confronting the design of a more advanced, reliable VAD include the modelling of viscous incompressible flow. A computational approach provides the possibility of quantifying the flow characteristics, which is especially valuable for analyzing compact design with highly sensitive operating conditions. Computational fluid dynamics (CFD) and rocket engine technology have been applied to modify the design of a VAD which enabled human transplantation. The computing requirement for this project is still large, however, and the unsteady analysis of the entire system from natural heart to aorta involves several hundred revolutions of the impeller. Further study is needed to assess the impact of mechanical VADs on the human body.

  11. A review on recent contribution of meshfree methods to structure and fracture mechanics applications.

    PubMed

    Daxini, S D; Prajapati, J M

    2014-01-01

    Meshfree methods are viewed as next generation computational techniques. With evident limitations of conventional grid based methods, like FEM, in dealing with problems of fracture mechanics, large deformation, and simulation of manufacturing processes, meshfree methods have gained much attention by researchers. A number of meshfree methods have been proposed till now for analyzing complex problems in various fields of engineering. Present work attempts to review recent developments and some earlier applications of well-known meshfree methods like EFG and MLPG to various types of structure mechanics and fracture mechanics applications like bending, buckling, free vibration analysis, sensitivity analysis and topology optimization, single and mixed mode crack problems, fatigue crack growth, and dynamic crack analysis and some typical applications like vibration of cracked structures, thermoelastic crack problems, and failure transition in impact problems. Due to complex nature of meshfree shape functions and evaluation of integrals in domain, meshless methods are computationally expensive as compared to conventional mesh based methods. Some improved versions of original meshfree methods and other techniques suggested by researchers to improve computational efficiency of meshfree methods are also reviewed here.

  12. Impact of different variables on the outcome of patients with clinically confined prostate carcinoma: prediction of pathologic stage and biochemical failure using an artificial neural network.

    PubMed

    Ziada, A M; Lisle, T C; Snow, P B; Levine, R F; Miller, G; Crawford, E D

    2001-04-15

    The advent of advanced computing techniques has provided the opportunity to analyze clinical data using artificial intelligence techniques. This study was designed to determine whether a neural network could be developed using preoperative prognostic indicators to predict the pathologic stage and time of biochemical failure for patients who undergo radical prostatectomy. The preoperative information included TNM stage, prostate size, prostate specific antigen (PSA) level, biopsy results (Gleason score and percentage of positive biopsy), as well as patient age. All 309 patients underwent radical prostatectomy at the University of Colorado Health Sciences Center. The data from all patients were used to train a multilayer perceptron artificial neural network. The failure rate was defined as a rise in the PSA level > 0.2 ng/mL. The biochemical failure rate in the data base used was 14.2%. Univariate and multivariate analyses were performed to validate the results. The neural network statistics for the validation set showed a sensitivity and specificity of 79% and 81%, respectively, for the prediction of pathologic stage with an overall accuracy of 80% compared with an overall accuracy of 67% using the multivariate regression analysis. The sensitivity and specificity for the prediction of failure were 67% and 85%, respectively, demonstrating a high confidence in predicting failure. The overall accuracy rates for the artificial neural network and the multivariate analysis were similar. Neural networks can offer a convenient vehicle for clinicians to assess the preoperative risk of disease progression for patients who are about to undergo radical prostatectomy. Continued investigation of this approach with larger data sets seems warranted. Copyright 2001 American Cancer Society.
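
    The kind of model described above can be sketched in a few lines. The snippet below trains a small multilayer perceptron on simulated pre-operative features (the names mirror those in the abstract, but the data, the outcome model, and the network size are invented) and reports sensitivity and specificity on a held-out split; it illustrates the workflow only and is not a reconstruction of the study's network.

```python
# Illustrative sketch only: a small MLP trained on synthetic pre-operative features to
# predict a binary outcome, with sensitivity and specificity on a held-out set.
# All data, the "true" risk model, and the network architecture are invented.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(4)
n = 1000
psa     = rng.lognormal(mean=2.0, sigma=0.6, size=n)  # PSA (ng/mL), assumed distribution
gleason = rng.integers(4, 10, size=n)                 # biopsy Gleason score
pct_pos = rng.uniform(0.0, 1.0, size=n)               # fraction of positive biopsy cores
age     = rng.normal(63.0, 7.0, size=n)

# Invented outcome model: higher PSA, Gleason, and biopsy burden raise the risk.
logit = -6.0 + 0.08 * psa + 0.5 * gleason + 2.0 * pct_pos + 0.01 * (age - 60.0)
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([psa, gleason, pct_pos, age])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0))
clf.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```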

  13. Application of failure mode and effects analysis (FMEA) to pretreatment phases in tomotherapy

    PubMed Central

    Broggi, Sara; Cantone, Marie Claire; Chiara, Anna; Muzio, Nadia Di; Longobardi, Barbara; Mangili, Paola

    2013-01-01

    The aim of this paper was the application of the failure mode and effects analysis (FMEA) approach to assess the risks for patients undergoing radiotherapy treatments performed by means of a helical tomotherapy unit. FMEA was applied to the preplanning imaging, volume determination, and treatment planning stages of the tomotherapy process and consisted of three steps: 1) identification of the involved subprocesses; 2) identification and ranking of the potential failure modes, together with their causes and effects, using the risk priority number (RPN) scoring system; and 3) identification of additional safety measures to be proposed for process quality and safety improvement. The RPN upper threshold for little concern of risk was set at 125. A total of 74 failure modes were identified: 38 in the stage of preplanning imaging and volume determination, and 36 in the stage of planning. The threshold of 125 for RPN was exceeded in four cases: one case only in the phase of preplanning imaging and volume determination, and three cases in the stage of planning. The most critical failures appeared related to (i) the wrong or missing definition and contouring of the overlapping regions, (ii) the wrong assignment of the overlap priority to each anatomical structure, (iii) the wrong choice of the computed tomography calibration curve for dose calculation, and (iv) the wrong (or not performed) choice of the number of fractions in the planning station. On the basis of these findings, in addition to the safety strategies already adopted in the clinical practice, novel solutions have been proposed for mitigating the risk of these failures and to increase patient safety. PACS number: 87.55.Qr PMID:24036868

  14. Structural Analysis Made 'NESSUSary'

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Everywhere you look, chances are something that was designed and tested by a computer will be in plain view. Computers are now utilized to design and test just about everything imaginable, from automobiles and airplanes to bridges and boats, and elevators and escalators to streets and skyscrapers. Computer-design engineering first emerged in the 1970s, in the automobile and aerospace industries. Since computers were in their infancy, however, architects and engineers during the time were limited to producing only designs similar to hand-drafted drawings. (At the end of the 1970s, a typical computer-aided design system was a 16-bit minicomputer with a price tag of $125,000.) Eventually, computers became more affordable and related software became more sophisticated, offering designers the "bells and whistles" to go beyond the limits of basic drafting and rendering, and venture into more skillful applications. One of the major advancements was the ability to test the objects being designed for the probability of failure. This advancement was especially important for the aerospace industry, where complicated and expensive structures are designed. The ability to perform reliability and risk assessment without using extensive hardware testing is critical to design and certification. In 1984, NASA initiated the Probabilistic Structural Analysis Methods (PSAM) project at Glenn Research Center to develop analysis methods and computer programs for the probabilistic structural analysis of select engine components for current Space Shuttle and future space propulsion systems. NASA envisioned that these methods and computational tools would play a critical role in establishing increased system performance and durability, and assist in structural system qualification and certification. Not only was the PSAM project beneficial to aerospace, it paved the way for a commercial risk-probability tool that is evaluating risks in diverse, down-to-Earth applications.

  15. New techniques for the analysis of manual control systems. [mathematical models of human operator behavior

    NASA Technical Reports Server (NTRS)

    Bekey, G. A.

    1971-01-01

    Studies are summarized on the application of advanced analytical and computational methods to the development of mathematical models of human controllers in multiaxis manual control systems. Specific accomplishments include the following: (1) The development of analytical and computer methods for the measurement of random parameters in linear models of human operators. (2) Discrete models of human operator behavior in a multiple display situation were developed. (3) Sensitivity techniques were developed which make possible the identification of unknown sampling intervals in linear systems. (4) The adaptive behavior of human operators following particular classes of vehicle failures was studied and a model structure proposed.

  16. Determination of a tissue-level failure evaluation standard for rat femoral cortical bone utilizing a hybrid computational-experimental method.

    PubMed

    Fan, Ruoxun; Liu, Jie; Jia, Zhengbin; Deng, Ying; Liu, Jun

    2018-01-01

    Macro-level failure in bone structure could be diagnosed by pain or physical examination. However, diagnosing tissue-level failure in a timely manner is challenging due to the difficulty in observing the interior mechanical environment of bone tissue. Because most fractures begin with tissue-level failure in bone tissue caused by continually applied loading, people attempt to monitor the tissue-level failure of bone and provide corresponding measures to prevent fracture. Many tissue-level mechanical parameters of bone could be predicted or measured; however, the value of the parameter may vary among different specimens belonging to a kind of bone structure even at the same age and anatomical site. These variations cause difficulty in representing tissue-level bone failure. Therefore, determining an appropriate tissue-level failure evaluation standard is necessary to represent tissue-level bone failure. In this study, the yield and failure processes of rat femoral cortical bones were primarily simulated through a hybrid computational-experimental method. Subsequently, the tissue-level strains and the ratio between tissue-level failure and yield strains in cortical bones were predicted. The results indicated that certain differences existed in tissue-level strains; however, slight variations in the ratio were observed among different cortical bones. Therefore, the ratio between tissue-level failure and yield strains for a kind of bone structure could be determined. This ratio may then be regarded as an appropriate tissue-level failure evaluation standard to represent the mechanical status of bone tissue.

  17. Unitized Stiffened Composite Textile Panels: Manufacturing, Characterization, Experiments, and Analysis

    NASA Astrophysics Data System (ADS)

    Kosztowny, Cyrus Joseph Robert

    Use of carbon fiber textiles in complex manufacturing methods creates new implementations of structural components by increasing performance, lowering manufacturing costs, and making composites overall more attractive across industry. Advantages of textile composites include high area output, ease of handling during the manufacturing process, lower production costs per material used resulting from automation, and provide post-manufacturing assembly mainstreaming because significantly more complex geometries such as stiffened shell structures can be manufactured with fewer pieces. One significant challenge with using stiffened composite structures is stiffener separation under compression. Axial compression loading conditions have frequently observed catastrophic structural failure due to stiffeners separating from the shell skin. Characterizing stiffener separation behavior is often costly computationally and experimentally. The objectives of this research are to demonstrate unitized stiffened textile composite panels can be manufactured to produce quality test specimens, that existing characterization techniques applied to state-of-the-art high-performance composites provide valuable information in modeling such structures, that the unitized structure concept successfully removes stiffener separation as a primary structural failure mode, and that modeling textile material failure modes are sufficient to accurately capture postbuckling and final failure responses of the stiffened structures. The stiffened panels in this study have taken the integrally stiffened concept to an extent such that the stiffeners and skin are manufactured at the same time, as one single piece, and from the same composite textile layers. Stiffener separation is shown to be removed as a primary structural failure mode for unitized stiffened composite textile panels loaded under axial compression well into the postbuckling regime. Instead of stiffener separation, a material damaging and failure model effectively captures local post-peak material response via incorporating a mesoscale model using a multiscaling framework with a smeared crack element-based failure model in the macroscale stiffened panel. Material damage behavior is characterized by simple experimental tests and incorporated into the post-peak stiffness degradation law in the smeared crack implementation. Computational modeling results are in overall excellent agreement compared to the experimental responses.

  18. Effect of Crystal Orientation on Analysis of Single-Crystal, Nickel-Based Turbine Blade Superalloys

    NASA Technical Reports Server (NTRS)

    Swanson, G. R.; Arakere, N. K.

    2000-01-01

    High-cycle fatigue-induced failures in turbine and turbopump blades are a pervasive problem. Single-crystal nickel turbine blades are used because of their superior creep, stress rupture, melt resistance, and thermomechanical fatigue capabilities. Single-crystal materials have highly orthotropic properties, making the position of the crystal lattice relative to the part geometry a significant and complicating factor. A fatigue failure criterion based on the maximum shear stress amplitude on the 24 octahedral and 6 cube slip systems is presented for single-crystal nickel superalloys (FCC crystal). This criterion greatly reduces the scatter in uniaxial fatigue data for PWA 1493 at 1,200 F in air. Additionally, single-crystal turbine blades used in the Space Shuttle main engine high pressure fuel turbopump/alternate turbopump are modeled using a three-dimensional finite element (FE) model. This model accounts for material orthotropy and crystal orientation. Fatigue life of the blade tip is computed using FE stress results and the failure criterion that was developed. Stress analysis results in the blade attachment region are also presented. Results demonstrate that control of crystallographic orientation has the potential to significantly increase a component's resistance to fatigue crack growth without adding additional weight or cost.
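
    The slip-system bookkeeping behind such a criterion can be illustrated with a short sketch: a stress tensor expressed in the crystal frame is resolved onto each slip system via Schmid's law, tau = n . sigma . m, and the largest magnitude is reported. The full criterion described above works with shear stress amplitudes over a load cycle on 24 octahedral and 6 cube systems; the sketch below resolves a single, invented stress state on the 12 primary {111}<110> octahedral systems only.

```python
# Resolve a crystal-frame stress tensor onto the 12 primary FCC octahedral slip
# systems ({111} planes, <110> directions) via Schmid's law and report the maximum
# resolved shear stress. The stress state is an invented example; the fatigue
# criterion in the abstract additionally uses cube systems and cyclic amplitudes.
import numpy as np

# four {111} plane normals and six <110> directions (one representative per +/- pair)
normals = [np.array(v, float) for v in [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]]
directions = [np.array(v, float) for v in
              [(1, 1, 0), (1, -1, 0), (1, 0, 1), (1, 0, -1), (0, 1, 1), (0, 1, -1)]]

# keep only slip directions lying in the slip plane (n . m == 0): 12 systems total
systems = [(n / np.linalg.norm(n), m / np.linalg.norm(m))
           for n in normals for m in directions if abs(np.dot(n, m)) < 1e-12]
assert len(systems) == 12

sigma = np.zeros((3, 3))   # invented stress state: 600 MPa uniaxial tension along [001]
sigma[2, 2] = 600.0

taus = np.array([abs(n @ sigma @ m) for n, m in systems])   # resolved shear stresses
print(f"max resolved shear stress = {taus.max():.1f} MPa "
      f"on {np.isclose(taus, taus.max()).sum()} of 12 systems")
```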

  19. A new casemix adjustment index for hospital mortality among patients with congestive heart failure.

    PubMed

    Polanczyk, C A; Rohde, L E; Philbin, E A; Di Salvo, T G

    1998-10-01

    Comparative analysis of hospital outcomes requires reliable adjustment for casemix. Although congestive heart failure is one of the most common indications for hospitalization, congestive heart failure casemix adjustment has not been widely studied. The purposes of this study were (1) to describe and validate a new congestive heart failure-specific casemix adjustment index to predict in-hospital mortality and (2) to compare its performance to the Charlson comorbidity index. Data from all 4,608 admissions to the Massachusetts General Hospital from January 1990 to July 1996 with a principal ICD-9-CM discharge diagnosis of congestive heart failure were evaluated. Massachusetts General Hospital patients were randomly divided in a derivation and a validation set. By logistic regression, odds ratios for in-hospital death were computed and weights were assigned to construct a new predictive index in the derivation set. The performance of the index was tested in an internal Massachusetts General Hospital validation set and in a non-Massachusetts General Hospital external validation set incorporating data from all 1995 New York state hospital discharges with a primary discharge diagnosis of congestive heart failure. Overall in-hospital mortality was 6.4%. Based on the new index, patients were assigned to six categories with incrementally increasing hospital mortality rates ranging from 0.5% to 31%. By logistic regression, "c" statistics of the congestive heart failure-specific index (0.83 and 0.78, derivation and validation set) were significantly superior to the Charlson index (0.66). Similar incrementally increasing hospital mortality rates were observed in the New York database with the congestive heart failure-specific index ("c" statistics 0.75). In an administrative database, this congestive heart failure-specific index may be a more adequate casemix adjustment tool to predict hospital mortality in patients hospitalized for congestive heart failure.
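
    The methodology summarized above (derive a weighted score from logistic regression, then check discrimination with the c statistic on a validation set) can be sketched in a few lines. Everything in the snippet below, including the predictors, their effects, and the way coefficients are rounded into integer weights, is simulated for illustration and is not the published index.

```python
# Methodology sketch (not the published index): fit a logistic model for in-hospital
# death on a derivation set, convert coefficients to crude integer weights for a
# score, and check discrimination with the c statistic (ROC AUC) on a validation set.
# Predictors, their effects, and all data below are simulated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 4000
X = np.column_stack([
    rng.integers(0, 2, n),                     # renal dysfunction (0/1), assumed predictor
    rng.integers(0, 2, n),                     # hypotension on admission (0/1)
    rng.integers(0, 2, n),                     # prior heart failure admission (0/1)
    (rng.normal(75, 10, n) > 85).astype(int),  # age > 85 (0/1)
])
logit = -3.5 + X @ np.array([1.2, 1.6, 0.5, 0.8])   # invented "true" risk model
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.5, stratify=y, random_state=0)

model = LogisticRegression().fit(X_dev, y_dev)
weights = np.round(2.0 * model.coef_[0]).astype(int)   # crude integer points per risk factor
score_val = X_val @ weights

print("integer weights:", weights)
print(f"c statistic, full model: {roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]):.2f}")
print(f"c statistic, integer score: {roc_auc_score(y_val, score_val):.2f}")
```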

  20. Computer-Related Success and Failure: A Longitudinal Field Study of the Factors Influencing Computer-Related Performance.

    ERIC Educational Resources Information Center

    Rozell, E. J.; Gardner, W. L., III

    1999-01-01

    A model of the intrapersonal processes impacting computer-related performance was tested using data from 75 manufacturing employees in a computer training course. Gender, computer experience, and attributional style were predictive of computer attitudes, which were in turn related to computer efficacy, task-specific performance expectations, and…

  1. Performance analysis of the word synchronization properties of the outer code in a TDRSS decoder

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Lin, S.

    1984-01-01

    A self-synchronizing coding scheme for NASA's TDRSS satellite system is a concatenation of a (2,1,7) inner convolutional code with a (255,223) Reed-Solomon outer code. Both symbol and word synchronization are achieved without requiring that any additional symbols be transmitted. An important parameter which determines the performance of the word sync procedure is the ratio of the decoding failure probability to the undetected error probability. Ideally, the former should be as small as possible compared to the latter when the error correcting capability of the code is exceeded. A computer simulation of a (255,223) Reed-Solomon code was carried out. Results for decoding failure probability and for undetected error probability are tabulated and compared.
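
    A small worked sketch of the quantities involved follows. A (255,223) Reed-Solomon code corrects up to t = 16 symbol errors; when more than t symbols are in error, the decoder either declares a decoding failure or, rarely, delivers an undetected error. Splitting that event into the two outcomes requires knowledge of the code's weight structure (which is what the reported simulation addressed); the snippet below only computes the probability that the correction capability is exceeded, for an assumed symbol error rate.

```python
# Probability that a (255, 223) Reed-Solomon codeword contains more symbol errors
# than the decoder can correct, as a function of an assumed symbol error rate.
# Distinguishing decoding failure from undetected error within that event requires
# the code's weight distribution and is not attempted here.
from math import comb

n_sym, k_sym = 255, 223
t = (n_sym - k_sym) // 2          # correction capability: 16 symbol errors

def p_exceed_capability(p_symbol_error: float) -> float:
    """Probability that more than t of the n symbols are received in error."""
    return 1.0 - sum(comb(n_sym, i) * p_symbol_error**i * (1 - p_symbol_error)**(n_sym - i)
                     for i in range(t + 1))

for p in (0.01, 0.03, 0.05):
    print(f"symbol error rate {p:.2f}: P(> {t} errors) = {p_exceed_capability(p):.3e}")
```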

  2. Matrix Dominated Failure of Fiber-Reinforced Composite Laminates Under Static and Dynamic Loading

    NASA Astrophysics Data System (ADS)

    Schaefer, Joseph Daniel

    Hierarchical material systems provide the unique opportunity to connect material knowledge to solving specific design challenges. Representing the quickest growing class of hierarchical materials in use, fiber-reinforced polymer composites (FRPCs) offer superior strength and stiffness-to-weight ratios, damage tolerance, and decreasing production costs compared to metals and alloys. However, the implementation of FRPCs has historically been fraught with inadequate knowledge of the material failure behavior due to incomplete verification of recent computational constitutive models and improper (or non-existent) experimental validation, which has severely slowed creation and development. As noted by the recent Materials Genome Initiative and the Worldwide Failure Exercise, current state-of-the-art qualification programs endure a 20-year gap between material conceptualization and implementation due to the lack of effective partnership between computational coding (simulation) and experimental characterization. Qualification processes are primarily experiment driven; the anisotropic nature of composites predisposes matrix-dominant properties to be sensitive to strain rate, which necessitates extensive testing. To decrease the qualification time, a framework that practically combines theoretical prediction of material failure with limited experimental validation is required. In this work, the Northwestern Failure Theory (NU Theory) for composite lamina is presented as the theoretical basis from which the failure of unidirectional and multidirectional composite laminates is investigated. From an initial experimental characterization of basic lamina properties, the NU Theory is employed to predict the matrix-dependent failure of composites under any state of biaxial stress from quasi-static to 1000 s^-1 strain rates. It was found that the number of experiments required to characterize the strain-rate-dependent failure of a new composite material was reduced by an order of magnitude, and the resulting strain-rate-dependence was applicable for a large class of materials. The presented framework provides engineers with the capability to quickly identify fiber and matrix combinations for a given application and determine the failure behavior over the range of practical loading cases. The failure-mode-based NU Theory may be especially useful when partnered with computational approaches (which often employ micromechanics to determine constituent and constitutive response) to provide accurate validation of the matrix-dominated failure modes experienced by laminates during progressive failure.

  3. Probabilistic evaluation of uncertainties and risks in aerospace components

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Shiao, M. C.; Nagpal, V. K.; Chamis, C. C.

    1992-01-01

    This paper summarizes a methodology developed at NASA Lewis Research Center which computationally simulates the structural, material, and load uncertainties associated with Space Shuttle Main Engine (SSME) components. The methodology was applied to evaluate the scatter in static, buckling, dynamic, fatigue, and damage behavior of the SSME turbopump blade. Also calculated are the probability densities of typical critical blade responses, such as effective stress, natural frequency, damage initiation, most probable damage path, etc. Risk assessments were performed for different failure modes, and the effect of material degradation on the fatigue and damage behaviors of a blade was calculated using a multi-factor interaction equation. Failure probabilities for different fatigue cycles were computed and the uncertainties associated with damage initiation and damage propagation due to different load cycles were quantified. Evaluations on the effects of mistuned blades on a rotor were made; uncertainties in the excitation frequency were found to significantly amplify the blade responses of a mistuned rotor. The effects of the number of blades on a rotor were studied. The autocorrelation function of displacements and the probability density function of the first passage time for deterministic and random barriers for structures subjected to random processes also were computed. A brief discussion was included on the future direction of probabilistic structural analysis.

  4. Sensitivity analysis of limit state functions for probability-based plastic design

    NASA Technical Reports Server (NTRS)

    Frangopol, D. M.

    1984-01-01

    The evaluation of the total probability of a plastic collapse failure P sub f for a highly redundant structure of random interdependent plastic moments acted on by random interdependent loads is a difficult and computationally very costly process. The evaluation of reasonable bounds to this probability requires the use of second moment algebra which involves many statistical parameters. A computer program which selects the best strategy for minimizing the interval between upper and lower bounds of P sub f is now in its final stage of development. The sensitivity of the resulting bounds of P sub f to the various uncertainties involved in the computational process is analyzed. Response sensitivities for both mode and system reliability of an ideal plastic portal frame are shown.
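
    In the simplest second-moment setting, the failure probability of a single collapse mode follows directly from the means and standard deviations of resistance and load effect; system bounds then combine such mode probabilities. The sketch below covers only one linear mode with invented numbers, not the bounding strategy of the program described above.

```python
# Second-moment sketch for a single linear collapse mode with limit state
# g = M_R - M_S (plastic resistance minus load-effect moment). All numbers are
# invented; the variables are assumed normal and independent for this illustration.
from math import sqrt
from statistics import NormalDist

mu_R, sigma_R = 480.0, 48.0     # plastic moment capacity (kN*m), assumed
mu_S, sigma_S = 300.0, 60.0     # load-effect moment (kN*m), assumed

beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)   # Cornell reliability index
pf_mode = NormalDist().cdf(-beta)                      # single-mode failure probability

print(f"beta = {beta:.2f}, mode Pf = {pf_mode:.2e}")
# System bounds (e.g., Ditlevsen bounds) then combine such mode probabilities and
# their correlations to bracket the collapse probability of the full frame.
```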

  5. Manned systems utilization analysis (study 2.1). Volume 3: LOVES computer simulations, results, and analyses

    NASA Technical Reports Server (NTRS)

    Stricker, L. T.

    1975-01-01

    The LOVES computer program was employed to analyze the geosynchronous portion of NASA's 1973 automated satellite mission model from 1980 to 1990. The objectives of the analyses were: (1) to demonstrate the capability of the LOVES code to provide the depth and accuracy of data required to support the analyses; and (2) to trade off the concept of space servicing automated satellites composed of replaceable modules against the concept of replacing expendable satellites upon failure. The computer code proved to be an invaluable tool in analyzing the logistic requirements of the various test cases required in the tradeoff. It is indicated that the concept of space servicing offers the potential for substantial savings in the cost of operating automated satellite systems.

  6. Free-Swinging Failure Tolerance for Robotic Manipulators. Degree awarded by Purdue Univ.

    NASA Technical Reports Server (NTRS)

    English, James

    1997-01-01

    Under this GSRP fellowship, software-based failure-tolerance techniques were developed for robotic manipulators. The focus was on failures characterized by the loss of actuator torque at a joint, called free-swinging failures. The research results spanned many aspects of the free-swinging failure-tolerance problem, from preparing for an expected failure to discovery of postfailure capabilities to establishing efficient methods to realize those capabilities. Developed algorithms were verified using computer-based dynamic simulations, and these were further verified using hardware experiments at Johnson Space Center.

  7. Multiscale Fiber Kinking: Computational Micromechanics and a Mesoscale Continuum Damage Mechanics Models

    NASA Technical Reports Server (NTRS)

    Herraez, Miguel; Bergan, Andrew C.; Gonzalez, Carlos; Lopes, Claudio S.

    2017-01-01

    In this work, the fiber kinking phenomenon, which is known as the failure mechanism that takes place when a fiber reinforced polymer is loaded under longitudinal compression, is studied. A computational micromechanics model is employed to interrogate the assumptions of a recently developed mesoscale continuum damage mechanics (CDM) model for fiber kinking based on the deformation gradient decomposition (DGD) and the LaRC04 failure criteria.

  8. Continuous fiber ceramic matrix composites for heat engine components

    NASA Technical Reports Server (NTRS)

    Tripp, David E.

    1988-01-01

    High strength at elevated temperatures, low density, resistance to wear, and abundance of nonstrategic raw materials make structural ceramics attractive for advanced heat engine applications. Unfortunately, ceramics have a low fracture toughness and fail catastrophically because of overload, impact, and contact stresses. Ceramic matrix composites provide the means to achieve improved fracture toughness while retaining desirable characteristics, such as high strength and low density. Materials scientists and engineers are trying to develop the ideal fibers and matrices to achieve the optimum ceramic matrix composite properties. A need exists for the development of failure models for the design of ceramic matrix composite heat engine components. Phenomenological failure models are currently the most frequently used in industry, but they are deterministic and do not adequately describe ceramic matrix composite behavior. Semi-empirical models were proposed, which relate the failure of notched composite laminates to the stress a characteristic distance away from the notch. Shear lag models describe composite failure modes at the micromechanics level. The enhanced matrix cracking stress occurs at the same applied stress level predicted by the two models of steady state cracking. Finally, statistical models take into consideration the distribution in composite failure strength. The intent is to develop these models into computer algorithms for the failure analysis of ceramic matrix composites under monotonically increasing loads. The algorithms will be included in a postprocessor to general purpose finite element programs.

  9. Phased-mission system analysis using Boolean algebraic methods

    NASA Technical Reports Server (NTRS)

    Somani, Arun K.; Trivedi, Kishor S.

    1993-01-01

    Most reliability analysis techniques and tools assume that a system is used for a mission consisting of a single phase. However, multiple phases are natural in many missions. The failure rates of components, system configuration, and success criteria may vary from phase to phase. In addition, the duration of a phase may be deterministic or random. Recently, several researchers have addressed the problem of reliability analysis of such systems using a variety of methods. A new technique for phased-mission system reliability analysis based on Boolean algebraic methods is described. Our technique is computationally efficient and is applicable to a large class of systems for which the failure criterion in each phase can be expressed as a fault tree (or an equivalent representation). Our technique avoids the state-space explosion that commonly plagues Markov chain-based analysis. A phase algebra to account for the effects of variable configurations and success criteria from phase to phase was developed. Our technique yields exact (as opposed to approximate) results. The use of our technique is demonstrated by means of an example, and numerical results are presented to show the effects of mission phases on the system reliability.
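
    To make the phased-mission idea concrete, the sketch below evaluates a tiny two-phase mission by Monte Carlo as a stand-in for the exact Boolean/phase-algebra technique of the abstract: two components have different assumed failure rates in each phase, component state carries over between phases, phase 1 requires both components, and phase 2 requires at least one. All rates and durations are invented.

```python
# Two-phase mission evaluated by Monte Carlo (a stand-in for the exact Boolean
# method): component state persists across phases (no repair), phase 1 needs both
# components, phase 2 needs at least one. Rates and durations are assumptions.
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000

phase_durations = np.array([10.0, 40.0])   # hours per phase, assumed
lam = np.array([[1.0e-3, 2.0e-3],          # component A failure rate in phase 1, 2 (per hour)
                [1.5e-3, 0.5e-3]])         # component B failure rate in phase 1, 2 (per hour)

p_survive = np.exp(-lam * phase_durations)          # per-component survival of each phase

up = np.ones((n, 2), dtype=bool)                    # component states, carried across phases
mission_ok = np.ones(n, dtype=bool)
for ph, criterion in enumerate([np.all, np.any]):   # phase 1: AND, phase 2: OR
    up &= rng.uniform(size=(n, 2)) < p_survive[:, ph]
    mission_ok &= criterion(up, axis=1)

print(f"mission reliability ~= {mission_ok.mean():.4f}")
```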

  10. Reliability, Risk and Cost Trade-Offs for Composite Designs

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.

    1996-01-01

    Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally-occurring uncertainties including those in constituent material properties, fabrication variables, structure geometry and loading conditions. The probability density function of the first buckling load for a set of uncertain variables is computed. The probabilistic sensitivity factors of uncertain variables to the first buckling load are calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of probabilistic sensitivity factors. With optimum design parameters such as the mean and coefficient of variation (representing range of scatter) of uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of fiber modulus (design parameter) in the longitudinal direction.

  11. Failure Impact Analysis of Key Management in AMI Using Cybernomic Situational Assessment (CSA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Sheldon, Frederick T; Hauser, Katie R

    2013-01-01

    In earlier work, we presented a computational framework for quantifying the security of a system in terms of the average loss a stakeholder stands to sustain as a result of threats to the system. We named this system the Cyberspace Security Econometrics System (CSES). In this paper, we refine the framework and apply it to cryptographic key management within the Advanced Metering Infrastructure (AMI) as an example. The stakeholders, requirements, components, and threats are determined. We then populate the matrices with justified values by addressing the AMI at a higher level, rather than trying to consider every piece of hardware and software involved. We accomplish this task by leveraging the recently established NISTIR 7628 guideline for smart grid security. This allowed us to choose the stakeholders, requirements, components, and threats realistically. We reviewed the literature and selected an industry technical working group to select three representative threats from a collection of 29 threats. From this subset, we populate the stakes, dependency, and impact matrices, and the threat vector with realistic numbers. Each stakeholder's mean failure cost is then computed.
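
    The arithmetic at the end of that framework is a chain of matrix products, sketched below with invented placeholder values (two stakeholders, two requirements, two components, three threats): a stakes matrix, a dependency matrix, an impact matrix, and a threat vector multiply out to one mean failure cost per stakeholder. None of the numbers are from the AMI study.

```python
# Mean-failure-cost arithmetic: stakes (stakeholders x requirements) times dependency
# (requirements x components) times impact (components x threats) times the threat
# emergence vector yields one mean failure cost per stakeholder. All entries are
# invented placeholders chosen only to make the dimensions and units concrete.
import numpy as np

stakes = np.array([[900.0, 300.0],      # utility:  loss ($/unit time) if [confidentiality, integrity] fails
                   [ 50.0, 120.0]])     # customer: same, per requirement

dependency = np.array([[0.7, 0.3],      # P(requirement fails | component fails)
                       [0.2, 0.8]])     # rows: requirements; cols: [key mgmt, meter firmware]

impact = np.array([[0.6, 0.1, 0.0],     # P(component fails | threat materializes)
                   [0.1, 0.5, 0.2]])    # cols: three representative threats

threat = np.array([0.01, 0.02, 0.005])  # P(threat materializes per unit time)

mfc = stakes @ dependency @ impact @ threat
for name, cost in zip(["utility", "customer"], mfc):
    print(f"mean failure cost, {name}: ${cost:,.2f} per unit time")
```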

  12. In-Situ Observations of Longitudinal Compression Damage in Carbon-Epoxy Cross Ply Laminates Using Fast Synchrotron Radiation Computed Tomography

    NASA Technical Reports Server (NTRS)

    Bergan, Andrew C.; Garcea, Serafina C.

    2017-01-01

    The role of longitudinal compressive failure mechanisms in notched cross-ply laminates is studied experimentally with in-situ synchrotron radiation based computed tomography. Carbon/epoxy specimens loaded monotonically in uniaxial compression exhibited a quasi-stable failure process, which was captured with computed tomography scans recorded continuously with a temporal resolution of 2.4 seconds and a spatial resolution of 1.1 microns per voxel. A detailed chronology of the initiation and propagation of longitudinal matrix splitting cracks, in-plane and out-of-plane kink bands, shear-driven fiber failure, delamination, and transverse matrix cracks is provided with a focus on kink bands as the dominant failure mechanism. An automatic segmentation procedure is developed to identify the boundary surfaces of a kink band. The segmentation procedure enables 3-dimensional visualization of the kink band and conveys the orientation, inclination, and spatial variation of the kink band. The kink band inclination and length are examined using the segmented data, revealing tunneling and spatial variations not apparent from studying the 2-dimensional section data.

  13. Investigation of possible wellbore cement failures during hydraulic fracturing operations

    EPA Pesticide Factsheets

    Researchers used the peer-reviewed TOUGH+ geomechanics computational software and simulation system to investigate the possibility of fractures and shear failure along vertical wells during hydraulic fracturing operations.

  14. Control optimization, stabilization and computer algorithms for aircraft applications

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.

  15. ACARA user's manual

    NASA Technical Reports Server (NTRS)

    Stalnaker, Dale K.

    1993-01-01

    ACARA (Availability, Cost, and Resource Allocation) is a computer program which analyzes system availability, lifecycle cost (LCC), and resupply scheduling using Monte Carlo analysis to simulate component failure and replacement. This manual was written to: (1) explain how to prepare and enter input data for use in ACARA; (2) explain the user interface, menus, input screens, and input tables; (3) explain the algorithms used in the program; and (4) explain each table and chart in the output.
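
    As a toy illustration of the kind of simulation such a program performs (the sketch below is not ACARA and ignores cost, resupply scheduling, and multi-component systems), the snippet estimates the availability of a single component that fails at exponentially distributed times and is replaced after a fixed delay; the rates and durations are invented.

```python
# Monte Carlo availability of a single component with exponential times to failure
# and a fixed replacement delay; availability = uptime / mission time. All numbers
# are invented placeholders, and the model is far simpler than ACARA's.
import numpy as np

rng = np.random.default_rng(7)

MTBF = 2000.0            # hours, assumed mean time between failures
REPLACE_DELAY = 150.0    # hours, assumed wait for a spare plus replacement
MISSION = 10 * 8760.0    # ten years of operation

def simulate_once() -> float:
    t, uptime = 0.0, 0.0
    while t < MISSION:
        run = rng.exponential(MTBF)          # operating time until the next failure
        uptime += min(run, MISSION - t)
        t += run + REPLACE_DELAY             # downtime until replacement completes
    return uptime / MISSION

availability = np.mean([simulate_once() for _ in range(500)])
print(f"estimated availability = {availability:.4f}  "
      f"(analytic MTBF/(MTBF+delay) = {MTBF / (MTBF + REPLACE_DELAY):.4f})")
```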

  16. Availability and mean time between failures of redundant systems with random maintenance of subsystems

    NASA Technical Reports Server (NTRS)

    Schneeweiss, W.

    1977-01-01

    It is shown how the availability and MTBF (Mean Time Between Failures) of a redundant system with subsystems maintained at the points of so-called stationary renewal processes can be determined from the distributions of the intervals between maintenance actions and of the failure-free operating intervals of the subsystems. The results make it possible, for example, to determine the frequency and duration of hidden failure states in computers which are incidentally corrected during the repair of observed failures.

  17. Failure mode analysis of silicon-based intracortical microelectrode arrays in non-human primates

    PubMed Central

    Barrese, James C; Rao, Naveen; Paroo, Kaivon; Triebwasser, Corey; Vargas-Irwin, Carlos; Franquemont, Lachlan; Donoghue, John P

    2016-01-01

    Objective. Brain–computer interfaces (BCIs) using chronically implanted intracortical microelectrode arrays (MEAs) have the potential to restore lost function to people with disabilities if they work reliably for years. Current sensors fail to provide reliably useful signals over extended periods of time for reasons that are not clear. This study reports a comprehensive retrospective analysis from a large set of implants of a single type of intracortical MEA in a single species, with a common set of measures in order to evaluate failure modes. Approach. Since 1996, 78 silicon MEAs were implanted in 27 monkeys (Macaca mulatta). We used two approaches to find reasons for sensor failure. First, we classified the time course leading up to complete recording failure as acute (abrupt) or chronic (progressive). Second, we evaluated the quality of electrode recordings over time based on signal features and electrode impedance. Failure modes were divided into four categories: biological, material, mechanical, and unknown. Main results. Recording duration ranged from 0 to 2104 days (5.75 years), with a mean of 387 days and a median of 182 days (n = 78). Sixty-two arrays failed completely with a mean time to failure of 332 days (median = 133 days) while nine array experiments were electively terminated for experimental reasons (mean = 486 days). Seven remained active at the close of this study (mean = 753 days). Most failures (56%) occurred within a year of implantation, with acute mechanical failures the most common class (48%), largely because of connector issues (83%). Among grossly observable biological failures (24%), a progressive meningeal reaction that separated the array from the parenchyma was most prevalent (14.5%). In the absence of acute interruptions, electrode recordings showed a slow progressive decline in spike amplitude, noise amplitude, and number of viable channels that predicts complete signal loss by about eight years. Impedance measurements showed systematic early increases, which did not appear to affect recording quality, followed by a slow decline over years. The combination of slowly falling impedance and signal quality in these arrays indicates that insulating material failure is the most significant factor. Significance. This is the first long-term failure mode analysis of an emerging BCI technology in a large series of non-human primates. The classification system introduced here may be used to standardize how neuroprosthetic failure modes are evaluated. The results demonstrate the potential for these arrays to record for many years, but achieving reliable sensors will require replacing connectors with implantable wireless systems, controlling the meningeal reaction, and improving insulation materials. These results will focus future research in order to create clinical neuroprosthetic sensors, as well as valuable research tools, that are able to safely provide reliable neural signals for over a decade. PMID:24216311

  18. Failure mode analysis of silicon-based intracortical microelectrode arrays in non-human primates

    NASA Astrophysics Data System (ADS)

    Barrese, James C.; Rao, Naveen; Paroo, Kaivon; Triebwasser, Corey; Vargas-Irwin, Carlos; Franquemont, Lachlan; Donoghue, John P.

    2013-12-01

    Objective. Brain-computer interfaces (BCIs) using chronically implanted intracortical microelectrode arrays (MEAs) have the potential to restore lost function to people with disabilities if they work reliably for years. Current sensors fail to provide reliably useful signals over extended periods of time for reasons that are not clear. This study reports a comprehensive retrospective analysis from a large set of implants of a single type of intracortical MEA in a single species, with a common set of measures in order to evaluate failure modes. Approach. Since 1996, 78 silicon MEAs were implanted in 27 monkeys (Macaca mulatta). We used two approaches to find reasons for sensor failure. First, we classified the time course leading up to complete recording failure as acute (abrupt) or chronic (progressive). Second, we evaluated the quality of electrode recordings over time based on signal features and electrode impedance. Failure modes were divided into four categories: biological, material, mechanical, and unknown. Main results. Recording duration ranged from 0 to 2104 days (5.75 years), with a mean of 387 days and a median of 182 days (n = 78). Sixty-two arrays failed completely with a mean time to failure of 332 days (median = 133 days) while nine array experiments were electively terminated for experimental reasons (mean = 486 days). Seven remained active at the close of this study (mean = 753 days). Most failures (56%) occurred within a year of implantation, with acute mechanical failures the most common class (48%), largely because of connector issues (83%). Among grossly observable biological failures (24%), a progressive meningeal reaction that separated the array from the parenchyma was most prevalent (14.5%). In the absence of acute interruptions, electrode recordings showed a slow progressive decline in spike amplitude, noise amplitude, and number of viable channels that predicts complete signal loss by about eight years. Impedance measurements showed systematic early increases, which did not appear to affect recording quality, followed by a slow decline over years. The combination of slowly falling impedance and signal quality in these arrays indicates that insulating material failure is the most significant factor. Significance. This is the first long-term failure mode analysis of an emerging BCI technology in a large series of non-human primates. The classification system introduced here may be used to standardize how neuroprosthetic failure modes are evaluated. The results demonstrate the potential for these arrays to record for many years, but achieving reliable sensors will require replacing connectors with implantable wireless systems, controlling the meningeal reaction, and improving insulation materials. These results will focus future research in order to create clinical neuroprosthetic sensors, as well as valuable research tools, that are able to safely provide reliable neural signals for over a decade.
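
    The "complete signal loss by about eight years" figure in the two records above rests on extrapolating a slow decline in signal quality. A minimal sketch of that style of extrapolation is shown below; the amplitude values and noise floor are invented placeholders, not data or regression details from the study.

```python
import numpy as np

# Hypothetical yearly mean spike amplitudes (microvolts); these numbers are
# invented for illustration and are not data from the study above.
years = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
amplitude_uv = np.array([120.0, 105.0, 92.0, 78.0, 65.0])

# Fit a linear trend and extrapolate to a noise floor, analogous to
# predicting the time of complete signal loss.
slope, intercept = np.polyfit(years, amplitude_uv, 1)
noise_floor_uv = 20.0
years_to_loss = (noise_floor_uv - intercept) / slope

print(f"decline ~ {slope:.1f} uV/year; predicted loss at ~{years_to_loss:.1f} years")
```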

  19. A Study of Failure in Small Pressurized Cylindrical Shells Containing a Crack

    NASA Technical Reports Server (NTRS)

    Barwell, Craig A.; Eber, Lorenz; Fyfe, Ian M.

    1998-01-01

    The deformation in the vicinity of axial cracks in thin pressurized cylinders is examined using small experimental models. The loading applied was either symmetric or unsymmetric about the crack plane, the latter being caused by structural constraints such as stringers. The objective was twofold: first, to provide experimental results that allow computer modeling techniques to be evaluated for deformations significantly different from those experienced by flat plates; and second, to examine the deformations and conditions associated with the onset of crack kinking, which often precedes crack curving. The stresses which control crack growth in a cylindrical geometry depend on conditions introduced by the axial bulging, which is an integral part of this type of failure. For the symmetric geometry, both the hoop and radial strain just ahead of the crack, r = a, were measured and these results compared with those obtained from a variety of structural analysis codes, in particular STAGS [1], ABAQUS, and ANSYS. In addition to these measurements, the pressures at the onset of stable and unstable crack growth were obtained and the corresponding crack deformations measured as the pressures were increased to failure. For the unsymmetric cases, measurements were taken of the crack kinking angle and the displacements in the vicinity of the crack. In general, the strains ahead of the crack showed good agreement between the three computer codes and between the codes and the experiments. In the case of crack behavior, it was determined that modeling stable tearing with a crack-tip opening displacement fracture criterion could be successfully combined with the finite-element analysis techniques used in structural analysis codes. The analytic results obtained in this study were very compatible with the experimental observations of crack growth. Measured crack kinking angles also showed good agreement with theories based on the maximum principal stress criterion.

  20. Cardiac image modelling: Breadth and depth in heart disease.

    PubMed

    Suinesiaputra, Avan; McCulloch, Andrew D; Nash, Martyn P; Pontre, Beau; Young, Alistair A

    2016-10-01

    With the advent of large-scale imaging studies and big health data, and the corresponding growth in analytics, machine learning and computational image analysis methods, there are now exciting opportunities for deepening our understanding of the mechanisms and characteristics of heart disease. Two emerging fields are computational analysis of cardiac remodelling (shape and motion changes due to disease) and computational analysis of physiology and mechanics to estimate biophysical properties from non-invasive imaging. Many large cohort studies now underway around the world have been specifically designed based on non-invasive imaging technologies in order to gain new information about the development of heart disease from asymptomatic to clinical manifestations. These give an unprecedented breadth to the quantification of population variation and disease development. Also, for the individual patient, it is now possible to determine biophysical properties of myocardial tissue in health and disease by interpreting detailed imaging data using computational modelling. For these population and patient-specific computational modelling methods to develop further, we need open benchmarks for algorithm comparison and validation, open sharing of data and algorithms, and demonstration of clinical efficacy in patient management and care. The combination of population and patient-specific modelling will give new insights into the mechanisms of cardiac disease, in particular the development of heart failure, congenital heart disease, myocardial infarction, contractile dysfunction and diastolic dysfunction. Copyright © 2016. Published by Elsevier B.V.

  1. Probabilistic inspection strategies for minimizing service failures

    NASA Technical Reports Server (NTRS)

    Brot, Abraham

    1994-01-01

    The INSIM computer program, which simulates the 'limited fatigue life' environment in which aircraft structures generally operate, is described. The use of INSIM to develop inspection strategies that aim to minimize service failures is demonstrated. Damage-tolerance methodology, inspection thresholds, and customized inspections are simulated using the probability of failure as the driving parameter.

  2. A diagnosis system using object-oriented fault tree models

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; Patterson-Hine, F. A.

    1990-01-01

    Spaceborne computing systems must provide reliable, continuous operation for extended periods. Due to weight, power, and volume constraints, these systems must manage resources very effectively. A fault diagnosis algorithm is described which enables fast and flexible diagnoses in the dynamic distributed computing environments planned for future space missions. The algorithm uses a knowledge base that is easily changed and updated to reflect current system status. Augmented fault trees represented in an object-oriented form provide deep system knowledge that is easy to access and revise as a system changes. Given such a fault tree, a set of failure events that have occurred, and a set of failure events that have not occurred, this diagnosis system uses forward and backward chaining to propagate causal and temporal information about other failure events in the system being diagnosed. Once the system has established temporal and causal constraints, it reasons backward from heuristically selected failure events to find a set of basic failure events which are a likely cause of the occurrence of the top failure event in the fault tree. The diagnosis system has been implemented in Common Lisp using Flavors.
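
    The record above describes backward chaining over object-oriented fault trees in Lisp Flavors. The sketch below is a much-simplified Python stand-in for that idea: given a small AND/OR fault tree and a set of events known not to have occurred, it backward-chains to the basic events that could still explain a failure. Class and event names are invented; this is not the original diagnosis system.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """Fault-tree node: a basic event or an AND/OR gate over child events."""
    name: str
    gate: str = "BASIC"          # "BASIC", "AND", or "OR"
    children: list = field(default_factory=list)

def candidate_causes(event, absent):
    """Backward-chain from a failure event to the basic events that could
    explain it, pruning any branch known not to have occurred."""
    if event.name in absent:
        return set()
    if event.gate == "BASIC":
        return {event.name}
    causes = set()
    for child in event.children:
        causes |= candidate_causes(child, absent)
    return causes

# Toy tree: the system fails if (pump OR valve) fails AND the sensor fails.
pump = Event("pump_fault")
valve = Event("valve_fault")
sensor = Event("sensor_fault")
flow_loss = Event("flow_loss", "OR", [pump, valve])
system_failure = Event("system_failure", "AND", [flow_loss, sensor])

print(candidate_causes(system_failure, absent={"valve_fault"}))
# -> {'pump_fault', 'sensor_fault'}
```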

  3. The investigation of tethered satellite system dynamics

    NASA Technical Reports Server (NTRS)

    Lorenzini, E.

    1984-01-01

    Tethered satellite system (TSS) dynamics were studied. The dynamic response of the TSS during the entire stationkeeping phase for the first electrodynamic mission was investigated. An out-of-plane swing amplitude and the tether's bowing were observed. The dynamics of the slack tether was studied, and the computer code SLACK2 was improved both in capabilities and computational speed. The hazard related to tether breakage or plasma contactor failure was examined. Preliminary values of the potential difference after the failure and of the drop of the electric field along the tether axis have been computed. The update of the satellite rotational dynamics model was initiated.

  4. Preoperative short hookwire placement for small pulmonary lesions: evaluation of technical success and risk factors for initial placement failure.

    PubMed

    Iguchi, Toshihiro; Hiraki, Takao; Matsui, Yusuke; Fujiwara, Hiroyasu; Masaoka, Yoshihisa; Tanaka, Takashi; Sato, Takuya; Gobara, Hideo; Toyooka, Shinichi; Kanazawa, Susumu

    2018-05-01

    To retrospectively evaluate the technical success of computed tomography fluoroscopy-guided short hookwire placement before video-assisted thoracoscopic surgery and to identify the risk factors for initial placement failure. In total, 401 short hookwire placements for 401 lesions (mean diameter 9.3 mm) were reviewed. Technical success was defined as correct positioning of the hookwire. Possible risk factors for initial placement failure (i.e., requirement for placement of an additional hookwire or to abort the attempt) were evaluated using logistic regression analysis for all procedures, and for procedures performed via the conventional route separately. Of the 401 initial placements, 383 were successful and 18 failed. Short hookwires were finally placed for 399 of 401 lesions (99.5%). Univariate logistic regression analyses revealed that in all 401 procedures only the transfissural approach was a significant independent predictor of initial placement failure (odds ratio, OR, 15.326; 95% confidence interval, CI, 5.429-43.267; p < 0.001), and for the 374 procedures performed via the conventional route only lesion size was a significant independent predictor of failure (OR 0.793, 95% CI 0.631-0.996; p = 0.046). The technical success of preoperative short hookwire placement was extremely high. The transfissural approach was a predictor of initial placement failure for all procedures, and small lesion size was a predictor of initial placement failure for procedures performed via the conventional route. • Technical success of preoperative short hookwire placement was extremely high. • The transfissural approach was a significant independent predictor of initial placement failure for all procedures. • Small lesion size was a significant independent predictor of initial placement failure for procedures performed via the conventional route.
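
    For readers unfamiliar with how odds ratios like those quoted above are produced, the sketch below runs a univariate logistic regression with statsmodels and reports an odds ratio with its 95% confidence interval. The data are randomly generated placeholders, not the study's 401 procedures.

```python
import numpy as np
import statsmodels.api as sm

# Invented toy data standing in for "transfissural approach" (0/1) versus
# initial placement failure (0/1); not the data from the study above.
rng = np.random.default_rng(0)
transfissural = rng.integers(0, 2, size=400)
failure = rng.binomial(1, np.where(transfissural == 1, 0.25, 0.03))

# Univariate logistic regression: failure ~ intercept + transfissural.
X = sm.add_constant(transfissural.astype(float))
fit = sm.Logit(failure, X).fit(disp=0)

odds_ratio = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```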

  5. Stress and Reliability Analysis of a Metal-Ceramic Dental Crown

    NASA Technical Reports Server (NTRS)

    Anusavice, Kenneth J; Sokolowski, Todd M.; Hojjatie, Barry; Nemeth, Noel N.

    1996-01-01

    Interaction of mechanical and thermal stresses with the flaws and microcracks within the ceramic region of metal-ceramic dental crowns can result in catastrophic or delayed failure of these restorations. The objective of this study was to determine the combined influence of induced functional stresses and pre-existing flaws and microcracks on the time-dependent probability of failure of a metal-ceramic molar crown. A three-dimensional finite element model of a porcelain-fused-to-metal (PFM) molar crown was developed using the ANSYS finite element program. The crown consisted of a body porcelain, opaque porcelain, and a metal substrate. The model had a 300 Newton load applied perpendicular to one cusp, a load of 300 Newton applied at 30 degrees from the perpendicular load case, directed toward the center, and a 600 Newton vertical load. Ceramic specimens were subjected to a biaxial flexure test and the load-to-failure of each specimen was measured. The results of the finite element stress analysis and the flexure tests were incorporated in the NASA-developed CARES/LIFE program to determine the Weibull and fatigue parameters and the time-dependent fracture reliability of the PFM crown. CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program.
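
    The core relation behind CARES-type reliability estimates is the two-parameter Weibull probability of failure; a minimal sketch is given below. Real CARES/LIFE analyses integrate the stress field over the component and add time-dependent (slow crack growth) terms; the modulus and scale values here are placeholders, not measured porcelain properties.

```python
import numpy as np

def weibull_failure_probability(stress_mpa, weibull_modulus, scale_mpa):
    """Two-parameter Weibull probability of failure for a uniformly stressed
    ceramic element: Pf = 1 - exp(-(sigma / sigma0)**m)."""
    s = np.asarray(stress_mpa, dtype=float)
    return 1.0 - np.exp(-(s / scale_mpa) ** weibull_modulus)

# Placeholder parameters, not measured porcelain properties.
print(weibull_failure_probability([40.0, 60.0, 80.0],
                                  weibull_modulus=10.0, scale_mpa=75.0))
```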

  6. Efficient Probability of Failure Calculations for QMU using Computational Geometry LDRD 13-0144 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Scott A.; Ebeida, Mohamed Salah; Romero, Vicente J.

    2015-09-01

    This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.

  7. Centralized Cryptographic Key Management and Critical Risk Assessment - CRADA Final Report For CRADA Number NFE-11-03562

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, R. K.; Peters, Scott

    The Department of Energy Office of Electricity Delivery and Energy Reliability (DOE-OE) Cyber Security for Energy Delivery Systems (CSEDS) industry-led program (DE-FOA-0000359), entitled "Innovation for Increasing Cyber Security for Energy Delivery Systems (12CSEDS)," awarded a contract to Sypris Electronics LLC to develop a Cryptographic Key Management System for the smart grid (Scalable Key Management Solutions for Critical Infrastructure Protection). As a result of that award, Oak Ridge National Laboratory (ORNL) and Sypris Electronics, LLC entered into a CRADA (NFE-11-03562). ORNL provided its Cyber Security Econometrics System (CSES) as a tool to be modified and used as a metric to address risks and vulnerabilities in the management of cryptographic keys within the Advanced Metering Infrastructure (AMI) domain of the electric sector. ORNL concentrated its analysis on the AMI domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) Working Group 1 (WG1) has documented 29 failure scenarios. The computational infrastructure of this metric involves system stakeholders, security requirements, system components, and security threats. To compute this metric, we estimated the stakes that each stakeholder associates with each security requirement, as well as stochastic matrices that represent the probability of a threat causing a component failure and the probability of a component failure causing a security requirement violation. We applied this model to estimate the security of the AMI by leveraging the recently established National Institute of Standards and Technology Interagency Report (NISTIR) 7628 guidelines for smart grid security and the International Electrotechnical Commission (IEC) 62351, Part 9, to identify the life cycle for cryptographic key management, resulting in a vector that assigned to each stakeholder an estimate of their average loss in terms of dollars per day of system operation. To further address probabilities of threats, information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. The strategy for the game was developed by analyzing five representative electric sector failure scenarios contained in the AMI functional domain from NESCOR WG1. From these five selected scenarios, we characterized three specific threat categories affecting confidentiality, integrity, and availability (CIA). The analysis using our ABGT simulation demonstrated how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.

  8. Cryptographic Key Management and Critical Risk Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K

    The Department of Energy Office of Electricity Delivery and Energy Reliability (DOE-OE) CyberSecurity for Energy Delivery Systems (CSEDS) industry-led program (DE-FOA-0000359), entitled "Innovation for Increasing CyberSecurity for Energy Delivery Systems (12CSEDS)," awarded a contract to Sypris Electronics LLC to develop a Cryptographic Key Management System for the smart grid (Scalable Key Management Solutions for Critical Infrastructure Protection). As a result of that award, Oak Ridge National Laboratory (ORNL) and Sypris Electronics, LLC entered into a CRADA (NFE-11-03562). ORNL provided its Cyber Security Econometrics System (CSES) as a tool to be modified and used as a metric to address risks and vulnerabilities in the management of cryptographic keys within the Advanced Metering Infrastructure (AMI) domain of the electric sector. ORNL concentrated its analysis on the AMI domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) Working Group 1 (WG1) has documented 29 failure scenarios. The computational infrastructure of this metric involves system stakeholders, security requirements, system components, and security threats. To compute this metric, we estimated the stakes that each stakeholder associates with each security requirement, as well as stochastic matrices that represent the probability of a threat causing a component failure and the probability of a component failure causing a security requirement violation. We applied this model to estimate the security of the AMI by leveraging the recently established National Institute of Standards and Technology Interagency Report (NISTIR) 7628 guidelines for smart grid security and the International Electrotechnical Commission (IEC) 62351, Part 9, to identify the life cycle for cryptographic key management, resulting in a vector that assigned to each stakeholder an estimate of their average loss in terms of dollars per day of system operation. To further address probabilities of threats, information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. The strategy for the game was developed by analyzing five representative electric sector failure scenarios contained in the AMI functional domain from NESCOR WG1. From these five selected scenarios, we characterized three specific threat categories affecting confidentiality, integrity, and availability (CIA). The analysis using our ABGT simulation demonstrated how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.

  9. A method for interactive satellite failure diagnosis: Towards a connectionist solution

    NASA Technical Reports Server (NTRS)

    Bourret, P.; Reggia, James A.

    1989-01-01

    Various kinds of processes which allow one to make a diagnosis are analyzed. The analysis then focuses on one of these processes used for satellite failure diagnosis. This process consists of sending the satellite instructions about system status alterations: to mask the effects of one possible component failure or to look for additional abnormal measures. A formal model of this process is given. This model is an extension of a previously defined connectionist model which allows computation of ratios between the likelihoods of observed manifestations according to various diagnostic hypotheses. The expected mean value of these likelihood measures for each possible status of the satellite can be computed in a similar way. Therefore, it is possible to select the most appropriate status according to three different purposes: to confirm a hypothesis, to eliminate a hypothesis, or to choose between two hypotheses. Finally, a first connectionist schema for computing these expected mean values is given.

  10. A preliminary design for flight testing the FINDS algorithm

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.

    1986-01-01

    This report presents a preliminary design for flight testing the FINDS (Fault Inferring Nonlinear Detection System) algorithm on a target flight computer. The FINDS software was ported onto the target flight computer by reducing the code size by 65%. Several modifications were made to the computational algorithms resulting in a near real-time execution speed. Finally, a new failure detection strategy was developed resulting in a significant improvement in the detection time performance. In particular, low level MLS, IMU and IAS sensor failures are detected instantaneously with the new detection strategy, while accelerometer and the rate gyro failures are detected within the minimum time allowed by the information generated in the sensor residuals based on the point mass equations of motion. All of the results have been demonstrated by using five minutes of sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment.

  11. Availability Performance Analysis of Thermal Power Plants

    NASA Astrophysics Data System (ADS)

    Bhangu, Navneet Singh; Singh, Rupinder; Pahuja, G. L.

    2018-03-01

    This case study presents an availability evaluation method for thermal power plants for conducting performance analysis in the Indian environment. A generic availability model is proposed for a maintained system (thermal plants) using reliability block diagrams and fault tree analysis. The availability indices are evaluated under a realistic working environment using the inclusion-exclusion principle. A four-year failure database has been used to compute availability for different combinations of plant capacity, that is, full working state, reduced capacity, or failure state. Availability is found to be very low even at full rated capacity (440 MW), which is not acceptable, especially in the prevailing energy scenario. One probable reason for this may be differences in the age and health of existing thermal power plants, which require special attention to each unit on a case-by-case basis. The maintenance techniques in use are conventional (50 years old) and ill-suited to modern equipment, which further aggravates the problem of low availability. This study outlines a procedure for finding critical plants, units, and subsystems and helps in deciding a preventive maintenance program.
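
    As a small illustration of the inclusion-exclusion principle named above, the sketch below computes the availability of a redundant (parallel) block of independent units and cross-checks it against the product form. The unit availabilities are invented, not plant data.

```python
from itertools import combinations

def parallel_availability(unit_availabilities):
    """P(at least one unit is up) for independent units, expanded with the
    inclusion-exclusion principle."""
    a = list(unit_availabilities)
    n = len(a)
    total = 0.0
    for k in range(1, n + 1):
        for combo in combinations(range(n), k):
            term = 1.0
            for i in combo:
                term *= a[i]
            total += (-1) ** (k + 1) * term
    return total

# Two redundant units with invented availabilities.
print(parallel_availability([0.92, 0.88]))   # inclusion-exclusion expansion
print(1.0 - (1.0 - 0.92) * (1.0 - 0.88))     # product-form cross-check
```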

  12. Integrating Insults: Using Fault Tree Analysis to Guide Schizophrenia Research across Levels of Analysis

    PubMed Central

    MacDonald III, Angus W.; Zick, Jennifer L.; Chafee, Matthew V.; Netoff, Theoden I.

    2016-01-01

    The grand challenges of schizophrenia research are linking the causes of the disorder to its symptoms and finding ways to overcome those symptoms. We argue that the field will be unable to address these challenges within psychiatry’s standard neo-Kraepelinian (DSM) perspective. At the same time, the current corrective, based in molecular genetics and cognitive neuroscience, is also likely to flounder due to its neglect of psychiatry’s syndromal structure. We suggest adopting a new approach long used in reliability engineering, which also serves as a synthesis of these approaches. This approach, known as fault tree analysis, can be combined with extant neuroscientific data collection and computational modeling efforts to uncover the causal structures underlying the cognitive and affective failures in people with schizophrenia as well as other complex psychiatric phenomena. By making explicit how causes combine from basic faults to downstream failures, this approach makes affordances for: (1) causes that are neither necessary nor sufficient in and of themselves; (2) within-diagnosis heterogeneity; and (3) between-diagnosis co-morbidity. PMID:26779007
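
    To make the "causes combine from basic faults to downstream failures" idea concrete, the sketch below propagates basic fault probabilities through AND/OR gates under an independence assumption. The gate structure and probabilities are invented for illustration and are not drawn from the article.

```python
import math

def or_gate(probs):
    """P(at least one input fault occurs), assuming independent inputs."""
    return 1.0 - math.prod(1.0 - p for p in probs)

def and_gate(probs):
    """P(all input faults occur), assuming independent inputs."""
    return math.prod(probs)

# Invented example: a downstream failure requires both a synaptic-level fault
# and a circuit-level fault, each of which can arise from alternative basic faults.
synaptic_fault = or_gate([0.10, 0.05])
circuit_fault = or_gate([0.08, 0.02])
print(f"P(downstream failure) = {and_gate([synaptic_fault, circuit_fault]):.4f}")
```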

  13. Real-time failure control (SAFD)

    NASA Technical Reports Server (NTRS)

    Panossian, Hagop V.; Kemp, Victoria R.; Eckerling, Sherry J.

    1990-01-01

    The Real Time Failure Control program involves development of a failure detection algorithm, referred to as the System for Failure and Anomaly Detection (SAFD), for the Space Shuttle Main Engine (SSME). This failure detection approach is signal-based, and it entails monitoring SSME measurement signals based on predetermined and computed mean values and standard deviations. Twenty-four engine measurements are included in the algorithm, and provisions are made to add more parameters if needed. Six major areas of research are presented: (1) SAFD algorithm development; (2) SAFD simulations; (3) Digital Transient Model failure simulation; (4) closed-loop simulation; (5) current SAFD limitations; and (6) planned enhancements.
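
    A toy stand-in for the signal-based monitoring idea described above (not the SSME SAFD algorithm itself): flag a failure when a channel stays outside predetermined mean ± k·σ bands for several consecutive samples. The function name, thresholds, and synthetic channel are all illustrative assumptions.

```python
import numpy as np

def redline_check(signal, mean, sigma, k=3.0, persistence=3):
    """Declare a failure when a channel stays outside mean +/- k*sigma for
    `persistence` consecutive samples; return the sample index, else None."""
    out_of_band = np.abs(np.asarray(signal, dtype=float) - mean) > k * sigma
    run = 0
    for i, bad in enumerate(out_of_band):
        run = run + 1 if bad else 0
        if run >= persistence:
            return i
    return None

# Synthetic channel with an injected step anomaly (illustrative values only).
rng = np.random.default_rng(1)
channel = rng.normal(100.0, 2.0, size=200)
channel[150:] += 15.0
print("failure declared at sample", redline_check(channel, mean=100.0, sigma=2.0))
```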

  14. Probabilistic finite elements for fatigue and fracture analysis

    NASA Astrophysics Data System (ADS)

    Belytschko, Ted; Liu, Wing Kam

    Attention is focused on the development of the Probabilistic Finite Element Method (PFEM), which combines the finite element method with statistics and reliability methods, and on its application to linear and nonlinear structural mechanics problems and fracture mechanics problems. A computational tool based on the Stochastic Boundary Element Method is also given for the reliability analysis of curvilinear fatigue crack growth. The existing PFEMs have been applied to solve two types of problems: (1) determination of the response uncertainty in terms of the means, variances, and correlation coefficients; and (2) determination of the probability of failure associated with prescribed limit states.

  15. Composite structural materials

    NASA Technical Reports Server (NTRS)

    Ansell, G. S.; Loewy, R. G.; Wiberley, S. E.

    1979-01-01

    A multifaceted program is described in which aeronautical, mechanical, and materials engineers interact to develop composite aircraft structures. Topics covered include: (1) the design of an advanced composite elevator and a proposed spar and rib assembly; (2) optimizing fiber orientation in the vicinity of heavily loaded joints; (3) failure mechanisms and delamination; (4) the construction of an ultralight sailplane; (5) computer-aided design; finite element analysis programs, preprocessor development, and array preprocessor for SPAR; (6) advanced analysis methods for composite structures; (7) ultrasonic nondestructive testing; (8) physical properties of epoxy resins and composites; (9) fatigue in composite materials, and (10) transverse thermal expansion of carbon/epoxy composites.

  16. Probabilistic finite elements for fatigue and fracture analysis

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Liu, Wing Kam

    1992-01-01

    Attention is focused on the development of the Probabilistic Finite Element Method (PFEM), which combines the finite element method with statistics and reliability methods, and on its application to linear and nonlinear structural mechanics problems and fracture mechanics problems. A computational tool based on the Stochastic Boundary Element Method is also given for the reliability analysis of curvilinear fatigue crack growth. The existing PFEMs have been applied to solve two types of problems: (1) determination of the response uncertainty in terms of the means, variances, and correlation coefficients; and (2) determination of the probability of failure associated with prescribed limit states.

  17. Progressive Failure Analysis Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Sleight, David W.

    1999-01-01

    A progressive failure analysis method has been developed for predicting the failure of laminated composite structures under geometrically nonlinear deformations. The progressive failure analysis uses C(exp 1) shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms, and several options are available to degrade the material properties after failures. The progressive failure analysis method is implemented in the COMET finite element analysis code and can predict the damage and response of laminated composite structures from initial loading to final failure. The different failure criteria and material degradation methods are compared and assessed by performing analyses of several laminated composite structures. Results from the progressive failure method indicate good correlation with the existing test data, except in structural applications where interlaminar stresses are important, which may cause failure mechanisms such as debonding or delaminations.
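
    A schematic of one check-and-degrade step of such a progressive failure loop, using the maximum strain criterion with a simple ply-discount degradation, is sketched below. The allowables, stiffnesses, and degradation factor are invented placeholders; this is not the COMET implementation.

```python
def max_strain_failure(strains, allowables):
    """Return the failure modes exceeded by the ply strains, if any.
    strains = (eps11, eps22, gamma12); allowables maps mode -> limit strain."""
    eps11, eps22, gamma12 = strains
    checks = {
        "fiber": abs(eps11) > allowables["eps11_max"],
        "matrix": abs(eps22) > allowables["eps22_max"],
        "shear": abs(gamma12) > allowables["gamma12_max"],
    }
    return [mode for mode, failed in checks.items() if failed]

def degrade(properties, failed_modes, factor=0.01):
    """Knock down the stiffnesses associated with each failed mode (ply discount)."""
    new = dict(properties)
    if "fiber" in failed_modes:
        new["E1"] *= factor
    if "matrix" in failed_modes:
        new["E2"] *= factor
    if "shear" in failed_modes:
        new["G12"] *= factor
    return new

# One check-and-degrade step; stiffnesses and allowables are illustrative only.
ply = {"E1": 140e9, "E2": 10e9, "G12": 5e9}
allow = {"eps11_max": 0.012, "eps22_max": 0.006, "gamma12_max": 0.02}
modes = max_strain_failure((0.004, 0.007, 0.010), allow)
print(modes, degrade(ply, modes))
```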

  18. Stress Behaviour in Compression of Contact-Monolithic Joint of Self-Supporting Wall of Large Panel Multi-Storey Building

    NASA Astrophysics Data System (ADS)

    Derbentsev, I.; Karyakin, A. A.; Volodin, A.

    2017-11-01

    The article deals with the behaviour of a contact-monolithic joint of large-panel buildings under compression. It gives a detailed analysis and descriptions of the stages of failure of such joints based on the results of tests and computational modelling. The article is of interest to specialists who deal with computational modelling or research of large-panel multi-storey buildings. The text gives valuable information on the values of their bearing capacity and flexibility, the eccentricity of load transfer from the upper panel to the lower one, and the value of thrust passed to a ceiling panel. Recommendations are given to estimate all the above-listed parameters.

  19. Y2K compliance readiness and contingency planning.

    PubMed

    Stahl, S; Cohan, D

    1999-09-01

    As the millennium approaches, discussion of "Y2K compliance" will shift to discussion of "Y2K readiness." While "compliance" focuses on the technological functioning of one's own computers, "readiness" focuses on the operational planning required in a world of interdependence, in which the functionality of one's own computers is only part of the story. "Readiness" includes the ability to cope with potential Y2K failures of vendors, suppliers, staff, banks, utility companies, and others. Administrators must apply their traditional skills of analysis, inquiry and diligence to the manifold imaginable challenges which Y2K will thrust upon their facilities. The SPICE template can be used as a systematic tool to guide planning for this historic event.

  20. Innovation for the Common Man: Avoiding the Pitfalls of Implementing New Technologies.

    ERIC Educational Resources Information Center

    Troxel, Steve

    Citing the failure of film, radio, and television to revolutionize the American education system, this paper identifies reasons for those failures and suggests ways to avoid similar failure in the diffusion of computer use in education and the diffusion of "datafication" into the homes of rural America. Four steps are identified to facilitate the…

  1. Analysis of Composite Panel-Stiffener Debonding Using a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Ratcliffe, James; Minguet, Pierre J.

    2007-01-01

    Interlaminar fracture mechanics has proven useful for characterizing the onset of delaminations in composites and has been used successfully primarily to investigate onset in fracture toughness specimens and laboratory-size coupon-type specimens. Future acceptance of the methodology by industry and certification authorities, however, requires the successful demonstration of the methodology on the structural level. For this purpose, a panel was selected that is reinforced with stiffeners. Shear loading causes the panel to buckle, and the resulting out-of-plane deformations initiate skin/stiffener separation at the location of an embedded defect. A small section of the stiffener foot, web, and noodle as well as the panel skin in the vicinity of the delamination front were modeled with a local 3D solid model. Across the width of the stiffener foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. Computed failure indices were compared to corresponding results where the entire web was modeled with shell elements and only a small section of the stiffener foot and panel were modeled locally with solid elements. Including the stiffener web in the local 3D solid model increased the computed failure index. Further including the noodle and transition radius in the local 3D solid model changed the local distribution across the width. The magnitude of the failure index decreased with increasing transition radius and noodle area. For the transition radii modeled, the material properties used for the noodle area had a negligible effect on the results. The results of this study are intended to be used as a guide for conducting finite element and fracture mechanics analyses of delamination and debonding in complex structures such as integrally stiffened panels.
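
    One common way to turn computed mixed-mode energy release rates into a failure index is to divide the total energy release rate by a mixed-mode toughness such as the Benzeggagh-Kenane (B-K) interpolation; a sketch under that assumption is given below. The record does not state which mixed-mode criterion was used, and the material constants here are placeholders.

```python
def bk_toughness(GIc, GIIc, mode_ratio, eta):
    """Benzeggagh-Kenane mixed-mode critical energy release rate,
    Gc = GIc + (GIIc - GIc) * (GII/GT)**eta."""
    return GIc + (GIIc - GIc) * mode_ratio ** eta

def failure_index(GI, GII, GIc, GIIc, eta):
    """Failure index = GT / Gc(mixed mode); values >= 1 imply delamination growth."""
    GT = GI + GII
    mode_ratio = GII / GT if GT > 0.0 else 0.0
    return GT / bk_toughness(GIc, GIIc, mode_ratio, eta)

# Energy release rates (J/m^2) at one point along a delamination front;
# all material constants are placeholders, not characterized values.
print(f"failure index = {failure_index(120.0, 180.0, GIc=210.0, GIIc=800.0, eta=2.0):.2f}")
```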

  2. Semiparametric regression analysis of interval-censored competing risks data.

    PubMed

    Mao, Lu; Lin, Dan-Yu; Zeng, Donglin

    2017-09-01

    Interval-censored competing risks data arise when each study subject may experience an event or failure from one of several causes and the failure time is not observed directly but rather is known to lie in an interval between two examinations. We formulate the effects of possibly time-varying (external) covariates on the cumulative incidence or sub-distribution function of competing risks (i.e., the marginal probability of failure from a specific cause) through a broad class of semiparametric regression models that captures both proportional and non-proportional hazards structures for the sub-distribution. We allow each subject to have an arbitrary number of examinations and accommodate missing information on the cause of failure. We consider nonparametric maximum likelihood estimation and devise a fast and stable EM-type algorithm for its computation. We then establish the consistency, asymptotic normality, and semiparametric efficiency of the resulting estimators for the regression parameters by appealing to modern empirical process theory. In addition, we show through extensive simulation studies that the proposed methods perform well in realistic situations. Finally, we provide an application to a study on HIV-1 infection with different viral subtypes. © 2017, The International Biometric Society.

  3. Joint scale-change models for recurrent events and failure time.

    PubMed

    Xu, Gongjun; Chiou, Sy Han; Huang, Chiung-Yu; Wang, Mei-Cheng; Yan, Jun

    2017-01-01

    Recurrent event data arise frequently in various fields such as biomedical sciences, public health, engineering, and social sciences. In many instances, the observation of the recurrent event process can be stopped by the occurrence of a correlated failure event, such as treatment failure and death. In this article, we propose a joint scale-change model for the recurrent event process and the failure time, where a shared frailty variable is used to model the association between the two types of outcomes. In contrast to the popular Cox-type joint modeling approaches, the regression parameters in the proposed joint scale-change model have marginal interpretations. The proposed approach is robust in the sense that no parametric assumption is imposed on the distribution of the unobserved frailty and that we do not need the strong Poisson-type assumption for the recurrent event process. We establish consistency and asymptotic normality of the proposed semiparametric estimators under suitable regularity conditions. To estimate the corresponding variances of the estimators, we develop a computationally efficient resampling-based procedure. Simulation studies and an analysis of hospitalization data from the Danish Psychiatric Central Register illustrate the performance of the proposed method.

  4. Dam failure analysis for the Lago El Guineo Dam, Orocovis, Puerto Rico

    USGS Publications Warehouse

    Gómez-Fragoso, Julieta; Heriberto Torres-Sierra,

    2016-08-09

    The U.S. Geological Survey, in cooperation with the Puerto Rico Electric Power Authority, completed hydrologic and hydraulic analyses to assess the potential hazard to human life and property associated with the hypothetical failure of the Lago El Guineo Dam. The Lago El Guineo Dam is within the headwaters of the Río Grande de Manatí and impounds a drainage area of about 4.25 square kilometers.The hydrologic assessment was designed to determine the outflow hydrographs and peak discharges for Lago El Guineo and other subbasins in the Río Grande de Manatí hydrographic basin for three extreme rainfall events: (1) a 6-hour probable maximum precipitation event, (2) a 24-hour probable maximum precipitation event, and (3) a 24-hour, 100-year recurrence rainfall event. The hydraulic study simulated a dam failure of Lago El Guineo Dam using flood hydrographs generated from the hydrologic study. The simulated dam failure generated a hydrograph that was routed downstream from Lago El Guineo Dam through the lower reaches of the Río Toro Negro and the Río Grande de Manatí to determine water-surface profiles developed from the event-based hydrologic scenarios and “sunny day” conditions. The Hydrologic Engineering Center’s Hydrologic Modeling System (HEC–HMS) and Hydrologic Engineering Center’s River Analysis System (HEC–RAS) computer programs, developed by the U.S. Army Corps of Engineers, were used for the hydrologic and hydraulic modeling, respectively. The flow routing in the hydraulic analyses was completed using the unsteady flow module available in the HEC–RAS model.Above the Lago El Guineo Dam, the simulated inflow peak discharges from HEC–HMS resulted in about 550 and 414 cubic meters per second for the 6- and 24-hour probable maximum precipitation events, respectively. The 24-hour, 100-year recurrence storm simulation resulted in a peak discharge of about 216 cubic meters per second. For the hydrologic analysis, no dam failure conditions are considered within the model. The results of the hydrologic simulations indicated that for all hydrologic conditions scenarios, the Lago El Guineo Dam would not experience overtopping. For the dam breach hydraulic analysis, failure by piping was the selected hypothetical failure mode for the Lago El Guineo Dam.Results from the simulated dam failure of the Lago El Guineo Dam using the HEC–RAS model for the 6- and 24-hour probable maximum precipitation events indicated peak discharges below the dam of 1,342.43 and 1,434.69 cubic meters per second, respectively. Dam failure during the 24-hour, 100-year recurrence rainfall event resulted in a peak discharge directly downstream from Lago El Guineo Dam of 1,183.12 cubic meters per second. Dam failure during sunny-day conditions (no precipitation) produced a peak discharge at Lago El Guineo Dam of 1,015.31 cubic meters per second assuming the initial water-surface elevation was at the morning-glory spillway invert elevation.The results of the hydraulic analysis indicate that the flood would extend to many inhabited areas along the stream banks from the Lago El Guineo Dam to the mouth of the Río Grande as a result of the simulated failure of the Lago El Guineo Dam. Low-lying regions in the vicinity of Ciales, Manatí, and Barceloneta, Puerto Rico, are among the regions that would be most affected by failure of the Lago El Guineo Dam. 
Effects of the flood control (levee) structure constructed in 2000 to provide protection to the low-lying populated areas of Barceloneta, Puerto Rico, were considered in the hydraulic analysis of dam failure. The results indicate that overtopping can be expected in the aforementioned levee during 6- and 24-hour probable maximum precipitation events. The levee was not overtopped during dam failure scenarios under the 24-hour, 100-year recurrence rainfall event or sunny-day conditions.

  5. Transmission expansion with smart switching under demand uncertainty and line failures

    DOE PAGES

    Schumacher, Kathryn M.; Chen, Richard Li-Yang; Cohn, Amy E. M.

    2016-06-07

    One of the major challenges in deciding where to build new transmission lines is that there is uncertainty regarding future loads, renewable generation output, and equipment failures. We propose a robust optimization model whose transmission expansion solutions ensure that demand can be met over a wide range of conditions. Specifically, we require feasible operation for all loads and renewable generation levels within given ranges, and for all single transmission line failures. Furthermore, we consider transmission switching as an allowable recovery action. This relatively inexpensive method of redirecting power flows improves resiliency, but introduces computational challenges. Lastly, we present a novel algorithm to solve this model. Computational results are discussed.

  6. Probabilistic sizing of laminates with uncertainties

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Liaw, D. G.; Chamis, C. C.

    1993-01-01

    A reliability based design methodology for laminate sizing and configuration for a special case of composite structures is described. The methodology combines probabilistic composite mechanics with probabilistic structural analysis. The uncertainties of constituent materials (fiber and matrix) to predict macroscopic behavior are simulated using probabilistic theory. Uncertainties in the degradation of composite material properties are included in this design methodology. A multi-factor interaction equation is used to evaluate load and environment dependent degradation of the composite material properties at the micromechanics level. The methodology is integrated into a computer code IPACS (Integrated Probabilistic Assessment of Composite Structures). Versatility of this design approach is demonstrated by performing a multi-level probabilistic analysis to size the laminates for design structural reliability of random type structures. The results show that laminate configurations can be selected to improve the structural reliability from three failures in 1000, to no failures in one million. Results also show that the laminates with the highest reliability are the least sensitive to the loading conditions.

  7. Latent Subgroup Analysis of a Randomized Clinical Trial Through a Semiparametric Accelerated Failure Time Mixture Model

    PubMed Central

    Altstein, L.; Li, G.

    2012-01-01

    Summary This paper studies a semiparametric accelerated failure time mixture model for estimation of a biological treatment effect on a latent subgroup of interest with a time-to-event outcome in randomized clinical trials. Latency is induced because membership is observable in one arm of the trial and unidentified in the other. This method is useful in randomized clinical trials with all-or-none noncompliance when patients in the control arm have no access to active treatment and in, for example, oncology trials when a biopsy used to identify the latent subgroup is performed only on subjects randomized to active treatment. We derive a computational method to estimate model parameters by iterating between an expectation step and a weighted Buckley-James optimization step. The bootstrap method is used for variance estimation, and the performance of our method is corroborated in simulation. We illustrate our method through an analysis of a multicenter selective lymphadenectomy trial for melanoma. PMID:23383608

  8. RI 1170 advanced strapdown gyro

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The major components of the RI 1170 gyroscope are described. A detailed functional description of the electronics, including block diagrams and photographs of output waveshapes within the loop electronics, is presented. An electronic data flow diagram is included. Those gyro subassemblies that were originally planned and subsequently changed or modified for one reason or another are discussed in detail. Variations to the original design included the capacitive pickoffs, torquer flexleads, magnetic suspension, gas bearings, electronic design, and packaging. The selection of components and changes from the original design and components selected are discussed. Device failures experienced throughout the program are reported, and design corrections to eliminate the failure modes are noted. Major design deficiencies, such as those of the MSE electronics, are described in detail. Modifications made to the gas bearing parts and design improvements to the wheel are noted. Changes to the gas bearing prints are included, as well as a mathematical analysis of the 1170 gas bearing wheel by computer analysis. The mean free-path effects on gas bearing performance are summarized.

  9. Critical joints in large composite aircraft structure

    NASA Technical Reports Server (NTRS)

    Nelson, W. D.; Bunin, B. L.; Hart-Smith, L. J.

    1983-01-01

    A program was conducted at Douglas Aircraft Company to develop the technology for critical structural joints of composite wing structure that meets design requirements for a 1990 commercial transport aircraft. The prime objective of the program was to demonstrate the ability to reliably predict the strength of large bolted composite joints. Ancillary testing of 180 specimens generated data on strength and load-deflection characteristics which provided input to the joint analysis. Load-sharing between fasteners in multirow bolted joints was computed by the nonlinear analysis program A4EJ. This program was used to predict strengths of 20 additional large subcomponents representing strips from a wing root chordwise splice. In most cases, the predictions were accurate to within a few percent of the test results. In some cases, the observed mode of failure was different than anticipated. The highlight of the subcomponent testing was the consistent ability to achieve gross-section failure strains close to 0.005. That represents a considerable improvement over the state of the art.

  10. Terrestrial Laser Scanner for assessing rockfall susceptibility in the Cilento rocky coast (Southern Italy)

    NASA Astrophysics Data System (ADS)

    Sorrentino, Valerio; Matasci, Battista; Abellan, Antonio; Jaboyedoff, Michel; Marino, Ermanno; Pignalosa, Antonio; Santo, Antonio

    2016-04-01

    Rockfalls and other types of landslides are the dominant processes causing the retreat of sea cliffs. Coastal areas constitute an important tourist attraction, and a large number of people rest beneath the cliffs on a daily basis, considerably increasing the risk associated with rockfalls. We present an approach to assess rockfall susceptibility at the cliff scale based on terrestrial laser scanner (TLS) point clouds. The test area is a coastal cliff situated in the southern part of the Cilento (Centola Municipality, Campania Region), in which a natural arch was formed. This cliff consists of a heavily fractured carbonate rock mass with strong structural control. In June 2015, TLS data were acquired with a long-range RIEGL VZ1000® scanner. The structural analysis of the cliff was performed in the field and using Coltop 3D software on the point cloud. As a result, 10 discontinuity sets (joints, faults, and bedding planes) were identified, and their characteristics such as orientation, spacing, and persistence were measured. The kinematically unstable areas were highlighted using a script that computes an index of susceptibility to rockfalls based on the spatial distribution of failure mechanisms. The susceptibility index computation is based on the average surface that every joint set (or combination of two joint sets in the case of wedge failure) forms on the topography according to its spacing, trace length, and incidence angle. This susceptibility index also depends on the steepness of the joint set (or of the intersection line in the case of wedge failure). As a result, the most important discontinuity sets in terms of potential planar failure, wedge failure, and toppling were identified, and an assessment of rockfall susceptibility at the cliff scale was achieved. Results show that the kinematically feasible failures are not equally distributed along the cliff but concentrated in certain areas. The most susceptible areas for planar failure are related to the discontinuity set K10 (71/097), whereas for toppling the highest susceptibility is reached with K1 (60/218). Concerning wedge failure, the combination of K10 and K1 yields the highest susceptibility values. The failures also show clustering with higher density, which is probably related to regional structures. More detailed investigations of the rockfall susceptibility and failure mechanisms will be performed during the forthcoming months. The relationship with regional structures will also be investigated in more detail. Perspectives also include applying the methodology to the other side of the natural arch in order to provide a global susceptibility assessment of the area.

  11. A study of RSI under combined stresses

    NASA Technical Reports Server (NTRS)

    Kibler, J. J.; Rosen, B. W.

    1974-01-01

    The behavior of typical rigidized surface insulation material (RSI) under combined loading states was investigated. In particular, the thermal stress states induced during reentry of the space shuttle were of prime concern. A typical RSI tile was analyzed for reentry thermal stresses under computed thermal gradients for a model of the RSI material. The results of the thermal stress analyses were then used to aid in defining typical combined stress states for the failure analysis of RSI.

  12. Fatigue failure of materials under broad band random vibrations

    NASA Technical Reports Server (NTRS)

    Huang, T. C.; Lanz, R. W.

    1971-01-01

    The fatigue life of material under the multifactor influence of broad-band random excitations has been investigated. Parameters which affect the fatigue life are postulated to be peak stress, variance of stress, and the natural frequency of the system. Experimental data were processed by a hybrid computer. Based on the experimental results and regression analysis, a best-fitting predictive model has been found. All values of the experimental fatigue lives are within the 95% confidence intervals of the predictive equation.

  13. One-Dimensional Model for Mud Flows.

    DTIC Science & Technology

    1985-10-01

    ...law relation between the Chezy coefficient and the flow Reynolds number. Jeyapalan et al. [2], in their analysis of mine tailing dam failures... RESULTS: The model is compared with several dambreak experiments performed by Jeyapalan et al. [3]. In these ... 0.34 seconds per computational node. [Figure: experimental results (Jeyapalan et al. [3]) and numerical results versus time for Tests 2, 6, and 7.]

  14. Evaluation of a Progressive Failure Analysis Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Sleight, David W.; Knight, Norman F., Jr.; Wang, John T.

    1997-01-01

    A progressive failure analysis methodology has been developed for predicting the nonlinear response and failure of laminated composite structures. The progressive failure analysis uses C(exp 1) plate and shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms. The progressive failure analysis model is implemented in a general-purpose finite element code and can predict the damage and response of laminated composite structures from initial loading to final failure.

  15. Analysis and Characterization of Damage Utilizing an Orthotropic Generalized Composite Material Model Suitable for Use in Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Rajan, Subramaniam; Blankenhorn, Gunther

    2016-01-01

    The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased usage in the aerospace and automotive communities. In order to address a series of issues identified by the aerospace community as being desirable to include in a next generation composite impact model, an orthotropic, macroscopic constitutive model incorporating both plasticity and damage suitable for implementation within the commercial LS-DYNA computer code is being developed. The plasticity model is based on extending the Tsai-Wu composite failure model into a strain hardening-based orthotropic plasticity model with a non-associative flow rule. The evolution of the yield surface is determined based on tabulated stress-strain curves in the various normal and shear directions and is tracked using the effective plastic strain. To compute the evolution of damage, a strain equivalent semi-coupled formulation is used in which a load in one direction results in a stiffness reduction in multiple material coordinate directions. A detailed analysis is carried out to ensure that the strain equivalence assumption is appropriate for the derived plasticity and damage formulations that are employed in the current model. Procedures to develop the appropriate input curves for the damage model are presented and the process required to develop an appropriate characterization test matrix is discussed.

  16. Analysis and Characterization of Damage Utilizing an Orthotropic Generalized Composite Material Model Suitable for Use in Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Rajan, Subramaniam; Blankenhorn, Gunther

    2016-01-01

    The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased usage in the aerospace and automotive communities. In order to address a series of issues identified by the aerospace community as being desirable to include in a next generation composite impact model, an orthotropic, macroscopic constitutive model incorporating both plasticity and damage suitable for implementation within the commercial LS-DYNA computer code is being developed. The plasticity model is based on extending the Tsai-Wu composite failure model into a strain hardening-based orthotropic plasticity model with a non-associative flow rule. The evolution of the yield surface is determined based on tabulated stress-strain curves in the various normal and shear directions and is tracked using the effective plastic strain. To compute the evolution of damage, a strain equivalent semi-coupled formulation is used in which a load in one direction results in a stiffness reduction in multiple material coordinate directions. A detailed analysis is carried out to ensure that the strain equivalence assumption is appropriate for the derived plasticity and damage formulations that are employed in the current model. Procedures to develop the appropriate input curves for the damage model are presented and the process required to develop an appropriate characterization test matrix is discussed.
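
    The two records above extend the Tsai-Wu criterion into a plasticity yield function. As background, a minimal sketch of the classical plane-stress Tsai-Wu failure index is shown below; the lamina strengths are placeholders, not values characterized for the impact model.

```python
import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Plane-stress Tsai-Wu failure index (>= 1 indicates failure).
    Xc and Yc are compressive strength magnitudes (positive values);
    F12 uses the common -0.5*sqrt(F11*F22) estimate."""
    F1, F2 = 1.0 / Xt - 1.0 / Xc, 1.0 / Yt - 1.0 / Yc
    F11, F22, F66 = 1.0 / (Xt * Xc), 1.0 / (Yt * Yc), 1.0 / S ** 2
    F12 = -0.5 * math.sqrt(F11 * F22)
    return (F1 * s1 + F2 * s2 + F11 * s1 ** 2 + F22 * s2 ** 2
            + F66 * t12 ** 2 + 2.0 * F12 * s1 * s2)

# Placeholder lamina strengths in MPa, not a characterized impact-model material.
print(f"Tsai-Wu index = {tsai_wu_index(900.0, 20.0, 40.0, Xt=1500.0, Xc=1200.0, Yt=50.0, Yc=200.0, S=70.0):.2f}")
```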

  17. Impact of the lower third molar presence and position on the fragility of mandibular angle and condyle: A Three-dimensional finite element study.

    PubMed

    Antic, Svetlana; Vukicevic, Arso M; Milasinovic, Marko; Saveljic, Igor; Jovicic, Gordana; Filipovic, Nenad; Rakocevic, Zoran; Djuric, Marija

    2015-07-01

    The aim of the present study was to investigate the influences of the presence and position of a lower third molar (M3) on the fragility of mandibular angle and condyle, using finite element analysis. From computed tomographic scans of a human mandible with normally erupted M3, two additional virtual models were generated: a mandibular model with partially impacted M3 and a model without M3. Two cases of impact were considered: a frontal and a lateral blow. The results are based on the chromatic analysis of the distributed von Mises and principal stresses, and calculation of their failure indices. In the frontal blow, the angle region showed the highest stress in the case with partially impacted M3, and the condylar region in the case without M3. Compressive stresses were dominant but caused no failure. Tensile stresses were recorded in the retromolar areas, but caused failure only in the case with partially impacted M3. In the lateral blow, the stress concentrated at the point of impact, in the ipsilateral and contralateral angle and condylar regions. The highest stresses were recorded in the case with partially impacted M3. Tensile stresses caused the failure on the ipsilateral side, whereas compressive stresses on the contralateral side. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  18. Innovative design of composite structures: The use of curvilinear fiber format in structural design of composites

    NASA Technical Reports Server (NTRS)

    Charette, R. F.; Hyer, M. W.

    1990-01-01

    The influence is investigated of a curvilinear fiber format on load carrying capacity of a layered fiber reinforced plate with a centrally located hole. A curvilinear fiber format is descriptive of layers in a laminate having fibers which are aligned with the principal stress directions in those layers. Laminates of five curvilinear fiber format designs and four straightline fiber format designs are considered. A quasi-isotropic laminate having a straightline fiber format is used to define a baseline design for comparison with the other laminate designs. Four different plate geometries are considered and differentiated by two values of hole diameter/plate width equal to 1/6 and 1/3, and two values of plate length/plate width equal to 2 and 1. With the plates under uniaxial tensile loading on two opposing edges, alignment of fibers in the curvilinear layers with the principal stress directions is determined analytically by an iteration procedure. In-plane tensile load capacity is computed for all of the laminate designs using a finite element analysis method. A maximum strain failure criterion and the Tsai-Wu failure criterion are applied to determine failure loads and failure modes. Resistance to buckling of the laminate designs to uniaxial compressive loading is analyzed using the commercial code Engineering Analysis Language. Results indicate that the curvilinear fiber format laminates have higher in-plane tensile load capacity and comparable buckling resistance relative to the straightline fiber format laminates.

  19. Probabilistic assessment of dynamic system performance. Part 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belhadj, Mohamed

    1993-01-01

    Accurate prediction of dynamic system failure behavior can be important for the reliability and risk analyses of nuclear power plants, as well as for their backfitting to satisfy given constraints on overall system reliability, or optimization of system performance. Global analysis of dynamic systems through investigating the variations in the structure of the attractors of the system and the domains of attraction of these attractors as a function of the system parameters is also important for nuclear technology in order to understand the fault-tolerance as well as the safety margins of the system under consideration and to ensure safe operation of nuclear reactors. Such a global analysis would be particularly relevant to future reactors with inherent or passive safety features that are expected to rely on natural phenomena rather than active components to achieve and maintain safe shutdown. Conventionally, failure and global analysis of dynamic systems necessitate the utilization of different methodologies which have computational limitations on the system size that can be handled. Using a Chapman-Kolmogorov interpretation of system dynamics, a theoretical basis is developed that unifies these methodologies as special cases and which can be used for a comprehensive safety and reliability analysis of dynamic systems.

  20. Sustainability of transport structures - some aspects of the nonlinear reliability assessment

    NASA Astrophysics Data System (ADS)

    Pukl, Radomír; Sajdlová, Tereza; Strauss, Alfred; Lehký, David; Novák, Drahomír

    2017-09-01

    Efficient techniques for both nonlinear numerical analysis of concrete structures and advanced stochastic simulation methods have been combined in order to offer an advanced tool for realistic assessment of the behaviour, failure and safety of transport structures. The utilized approach is based on randomization of the non-linear finite element analysis of the structural models. Degradation aspects such as carbonation of concrete can be accounted for in order to predict the durability of the investigated structure and its sustainability. Results can serve as a rational basis for the performance and sustainability assessment based on advanced nonlinear computer analysis of the structures of transport infrastructure such as bridges or tunnels. In the stochastic simulation the input material parameters obtained from material tests, including their randomness and uncertainty, are represented as random variables or fields. Appropriate identification of material parameters is crucial for the virtual failure modelling of structures and structural elements. An inverse analysis approach using artificial neural networks and virtual stochastic simulation is applied to determine the fracture mechanical parameters of the structural material and its numerical model. Structural response, reliability and sustainability have been investigated on different types of transport structures made from various materials using the above-mentioned methodology and tools.

  1. Colovesical fistula causing an uncommon reason for failure of computed tomography colonography: a case report.

    PubMed

    Neroladaki, Angeliki; Breguet, Romain; Botsikas, Diomidis; Terraz, Sylvain; Becker, Christoph D; Montet, Xavier

    2012-07-23

    Computed tomography colonography, or virtual colonoscopy, is a good alternative to optical colonoscopy. However, suboptimal patient preparation or colon distension may reduce the diagnostic accuracy of this imaging technique. We report the case of an 83-year-old Caucasian woman who presented with a five-month history of pneumaturia and fecaluria and an acute episode of macrohematuria, leading to a high clinical suspicion of a colovesical fistula. The fistula was confirmed by standard contrast-enhanced computed tomography. Optical colonoscopy was performed to exclude the presence of an underlying colonic neoplasm. Since optical colonoscopy was incomplete, computed tomography colonography was performed, but also failed due to inadequate colon distension. The insufflated air directly accumulated within the bladder via the large fistula. Clinicians should consider colovesical fistula as a potential reason for computed tomography colonography failure.

  2. Assessment of reliability of CAD-CAM tooth-colored implant custom abutments.

    PubMed

    Guilherme, Nuno Marques; Chung, Kwok-Hung; Flinn, Brian D; Zheng, Cheng; Raigrodski, Ariel J

    2016-08-01

    Information is lacking about the fatigue resistance of computer-aided design and computer-aided manufacturing (CAD-CAM) tooth-colored implant custom abutment materials. The purpose of this in vitro study was to investigate the reliability of different types of CAD-CAM tooth-colored implant custom abutments. Zirconia (Lava Plus), lithium disilicate (IPS e.max CAD), and resin-based composite (Lava Ultimate) abutments were fabricated using CAD-CAM technology and bonded to machined titanium-6 aluminum-4 vanadium (Ti-6Al-4V) alloy inserts for conical connection implants (NobelReplace Conical Connection RP 4.3×10 mm; Nobel Biocare). Three groups (n=19) were assessed: group ZR, CAD-CAM zirconia/Ti-6Al-4V bonded abutments; group RC, CAD-CAM resin-based composite/Ti-6Al-4V bonded abutments; and group LD, CAD-CAM lithium disilicate/Ti-6Al-4V bonded abutments. Fifty-seven implant abutments were secured to implants and embedded in autopolymerizing acrylic resin according to ISO standard 14801. Static failure load (n=5) and fatigue failure load (n=14) were tested. Weibull cumulative damage analysis was used to calculate step-stress reliability at 150-N and 200-N loads with 2-sided 90% confidence limits. Representative fractured specimens were examined using stereomicroscopy and scanning electron microscopy to observe fracture patterns. Weibull plots revealed β values of 2.59 for group ZR, 0.30 for group RC, and 0.58 for group LD, indicating a wear-out or cumulative fatigue pattern for group ZR and load as the failure accelerating factor for groups RC and LD. Fractographic observation disclosed that failures initiated in the interproximal area where the lingual tensile stresses meet the compressive facial stresses for the early failure specimens. Plastic deformation of titanium inserts with fracture was observed for zirconia abutments in fatigue resistance testing. Significantly higher reliability was found in group ZR, and no significant differences in reliability were determined between groups RC and LD. Differences were found in the failure characteristics of group ZR between static and fatigue loading. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
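
    For readers unfamiliar with the Weibull shape parameter (β) quoted above, the short sketch below fits a two-parameter Weibull distribution to simulated cycles-to-failure data; the sample values and the use of scipy are illustrative assumptions, not the study's analysis, which used a Weibull cumulative damage (step-stress) model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated cycles-to-failure for 14 specimens (assumed shape and scale).
true_beta, true_eta = 2.6, 150_000.0
cycles_to_failure = true_eta * rng.weibull(true_beta, size=14)

# Two-parameter Weibull fit (location fixed at zero).
beta_hat, _, eta_hat = stats.weibull_min.fit(cycles_to_failure, floc=0)
print(f"Estimated shape beta = {beta_hat:.2f}, scale eta = {eta_hat:.0f}")

# Interpretation used in the abstract: beta > 1 indicates wear-out
# (cumulative fatigue); beta < 1 indicates early, load-dominated failures.
```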

  3. Factors Influencing Progressive Failure Analysis Predictions for Laminated Composite Structure

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.

    2008-01-01

    Progressive failure material modeling methods used for structural analysis including failure initiation and material degradation are presented. Different failure initiation criteria and material degradation models are described that define progressive failure formulations. These progressive failure formulations are implemented in a user-defined material model for use with a nonlinear finite element analysis tool. The failure initiation criteria include the maximum stress criteria, maximum strain criteria, the Tsai-Wu failure polynomial, and the Hashin criteria. The material degradation model is based on the ply-discounting approach where the local material constitutive coefficients are degraded. Applications and extensions of the progressive failure analysis material model address two-dimensional plate and shell finite elements and three-dimensional solid finite elements. Implementation details are described in the present paper. Parametric studies for laminated composite structures are discussed to illustrate the features of the progressive failure modeling methods that have been implemented and to demonstrate their influence on progressive failure analysis predictions.
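
    As a compact illustration of two of the ingredients named above, the sketch below applies a maximum stress initiation check to a single ply and then degrades the ply constitutive coefficients by a ply-discounting factor. The allowables, stresses and discount factor are made-up numbers, not values from the paper or the user-defined material model it describes.

```python
import numpy as np

# Ply stresses in material coordinates: [sigma_11, sigma_22, tau_12] (MPa).
stress = np.array([1450.0, 62.0, 75.0])
allowables = {"Xt": 1500.0,   # fiber-direction tensile strength
              "Yt": 40.0,     # transverse tensile strength
              "S": 70.0}      # in-plane shear strength

def max_stress_modes(s, a):
    """Failure modes triggered by the maximum stress criterion (tension/shear only)."""
    modes = []
    if s[0] > a["Xt"]:
        modes.append("fiber")
    if s[1] > a["Yt"]:
        modes.append("matrix")
    if abs(s[2]) > a["S"]:
        modes.append("shear")
    return modes

# Ply engineering constants [E1, E2, G12] before degradation (GPa).
stiffness = np.array([140.0, 10.0, 5.0])
discount = 1e-3   # assumed residual stiffness fraction once a mode has failed

modes = max_stress_modes(stress, allowables)
for mode in modes:
    index = {"fiber": 0, "matrix": 1, "shear": 2}[mode]
    stiffness[index] *= discount

print("Triggered modes:", modes)
print("Degraded [E1, E2, G12] (GPa):", stiffness)
```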

  4. Grid site availability evaluation and monitoring at CMS

    DOE PAGES

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; ...

    2017-10-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from a hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup, scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection/reaction to failures and a more dynamic handling of computing resources. Furthermore, enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  5. Grid site availability evaluation and monitoring at CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from a hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup, scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection/reaction to failures and a more dynamic handling of computing resources. Furthermore, enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  6. Grid site availability evaluation and monitoring at CMS

    NASA Astrophysics Data System (ADS)

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; Lammel, Stephan; Sciabà, Andrea

    2017-10-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from a hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup, scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection/reaction to failures and a more dynamic handling of computing resources. Enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  7. A computational analysis of the ballistic performance of light-weight hybrid composite armors

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Pandurangan, B.; Koudela, K. L.; Cheeseman, B. A.

    2006-11-01

    The ability of hybrid light-weight fiber-reinforced polymer-matrix composite laminate armor to withstand the impact of a fragment simulating projectile (FSP) is investigated using a non-linear dynamics transient computational analysis. The hybrid armor is constructed using various combinations and stacking sequences of a high-strength/high-stiffness carbon fiber-reinforced epoxy (CFRE) and a high-ductility/high-toughness Kevlar fiber-reinforced epoxy (KFRE) composite laminates of different thicknesses. The results obtained indicate that at a fixed thickness of the armor both the stacking sequence and the number of CFRE/KFRE laminates substantially affect the ballistic performance of the armor. Specifically, it is found that the armor consisting of one layer of KFRE and one layer of CFRE, with KFRE laminate constituting the outer surface of the armor, possesses the maximum resistance towards the projectile-induced damage and failure. The results obtained are rationalized using an analysis of the elastic wave reflection and transmission behavior at the inter-laminate and laminate/air interfaces.

  8. Identifying failure in a tree network of a parallel computer

    DOEpatents

    Archer, Charles J.; Pinnow, Kurt W.; Wallenfelt, Brian P.

    2010-08-24

    Methods, parallel computers, and products are provided for identifying failure in a tree network of a parallel computer. The parallel computer includes one or more processing sets including an I/O node and a plurality of compute nodes. For each processing set embodiments include selecting a set of test compute nodes, the test compute nodes being a subset of the compute nodes of the processing set; measuring the performance of the I/O node of the processing set; measuring the performance of the selected set of test compute nodes; calculating a current test value in dependence upon the measured performance of the I/O node of the processing set, the measured performance of the set of test compute nodes, and a predetermined value for I/O node performance; and comparing the current test value with a predetermined tree performance threshold. If the current test value is below the predetermined tree performance threshold, embodiments include selecting another set of test compute nodes. If the current test value is not below the predetermined tree performance threshold, embodiments include selecting from the test compute nodes one or more potential problem nodes and testing individually potential problem nodes and links to potential problem nodes.
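
    The claimed procedure can be paraphrased in a few lines of code. The sketch below is a hypothetical reading of the abstract, not the patented implementation: the helper measure_performance, the form of the current test value and the rule for flagging problem nodes are all assumptions.

```python
import random

def measure_performance(node):
    # Stand-in for a real micro-benchmark; returns a noisy throughput figure.
    return node["bandwidth"] * random.uniform(0.9, 1.1)

def identify_failure(io_node, compute_nodes, io_reference, tree_threshold):
    """Narrow a tree-network failure down to candidate problem nodes."""
    remaining = list(compute_nodes)
    while remaining:
        # Select a subset of the processing set's compute nodes as test nodes.
        test_nodes = random.sample(remaining, max(1, len(remaining) // 2))
        io_perf = measure_performance(io_node)
        test_perf = sum(measure_performance(n) for n in test_nodes) / len(test_nodes)

        # Assumed form of the "current test value": measured throughput relative
        # to the predetermined value for I/O node performance.
        current_test_value = (io_perf * test_perf) / io_reference

        if current_test_value < tree_threshold:
            # Below the tree performance threshold: select another set of test nodes.
            remaining = [n for n in remaining if n not in test_nodes]
            continue

        # Otherwise flag potential problem nodes for individual testing; if none
        # stand out, discard this set and keep searching.
        suspects = [n for n in test_nodes if measure_performance(n) < 0.5 * io_reference]
        if suspects:
            return suspects
        remaining = [n for n in remaining if n not in test_nodes]
    return []

# Tiny usage example: one healthy I/O node and one degraded compute node.
io = {"name": "io0", "bandwidth": 1.0}
nodes = [{"name": f"n{i}", "bandwidth": 1.0} for i in range(8)]
nodes[5]["bandwidth"] = 0.2
print(identify_failure(io, nodes, io_reference=1.0, tree_threshold=0.15))
```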

  9. Underestimated prevalence of heart failure in hospital inpatients: a comparison of ICD codes and discharge letter information.

    PubMed

    Kaspar, Mathias; Fette, Georg; Güder, Gülmisal; Seidlmayer, Lea; Ertl, Maximilian; Dietrich, Georg; Greger, Helmut; Puppe, Frank; Störk, Stefan

    2018-04-17

    Heart failure is the predominant cause of hospitalization and amongst the leading causes of death in Germany. However, accurate estimates of prevalence and incidence are lacking. Reported figures originating from different information sources are compromised by factors like economic reasons or documentation quality. We implemented a clinical data warehouse that integrates various information sources (structured parameters, plain text, data extracted by natural language processing) and enables reliable approximations to the real number of heart failure patients. Performance of ICD-based diagnosis in detecting heart failure was compared across the years 2000-2015 with (a) advanced definitions based on algorithms that integrate various sources of the hospital information system, and (b) a physician-based reference standard. Applying these methods for detecting heart failure in inpatients revealed that relying on ICD codes resulted in a marked underestimation of the true prevalence of heart failure, ranging from 44% in the validation dataset to 55% (single year) and 31% (all years) in the overall analysis. Percentages changed over the years, indicating secular changes in coding practice and efficiency. Performance was markedly improved using search and permutation algorithms from the initial expert-specified query (F1 score of 81%) to the computer-optimized query (F1 score of 86%) or, alternatively, optimizing precision or sensitivity depending on the search objective. Estimating prevalence of heart failure using ICD codes as the sole data source yielded unreliable results. Diagnostic accuracy was markedly improved using dedicated search algorithms. Our approach may be transferred to other hospital information systems.
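
    For reference, the F1 scores quoted above combine precision and recall as their harmonic mean. The counts in the snippet below are hypothetical, chosen only to show the computation, and are not taken from the study.

```python
# Hypothetical confusion counts of heart-failure cases flagged by a query
# against a physician-based reference standard.
true_positives = 430
false_positives = 70
false_negatives = 95

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)
f1 = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.2f}, recall={recall:.2f}, F1={f1:.2f}")
```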

  10. CT fluoroscopy-guided renal tumour cutting needle biopsy: retrospective evaluation of diagnostic yield, safety, and risk factors for diagnostic failure.

    PubMed

    Iguchi, Toshihiro; Hiraki, Takao; Matsui, Yusuke; Fujiwara, Hiroyasu; Sakurai, Jun; Masaoka, Yoshihisa; Gobara, Hideo; Kanazawa, Susumu

    2018-01-01

    To evaluate retrospectively the diagnostic yield, safety, and risk factors for diagnostic failure of computed tomography (CT) fluoroscopy-guided renal tumour biopsy. Biopsies were performed for 208 tumours (mean diameter 2.3 cm; median diameter 2.1 cm; range 0.9-8.5 cm) in 199 patients. One hundred and ninety-nine tumours were ≤4 cm. All 208 initial procedures were divided into diagnostic success and failure groups. Multiple variables related to the patients, lesions, and procedures were assessed to determine the risk factors for diagnostic failure. After performing 208 initial and nine repeat biopsies, 180 malignancies and 15 benign tumours were pathologically diagnosed, whereas 13 were not diagnosed. In 117 procedures, 118 Grade I and one Grade IIIa adverse events (AEs) occurred. Neither Grade ≥IIIb AEs nor tumour seeding were observed within a median follow-up period of 13.7 months. Logistic regression analysis revealed only small tumour size (≤1.5 cm; odds ratio 3.750; 95% confidence interval 1.362-10.326; P = 0.011) to be a significant risk factor for diagnostic failure. CT fluoroscopy-guided renal tumour biopsy is a safe procedure with a high diagnostic yield. A small tumour size (≤1.5 cm) is a significant risk factor for diagnostic failure. • CT fluoroscopy-guided renal tumour biopsy has a high diagnostic yield. • CT fluoroscopy-guided renal tumour biopsy is safe. • Small tumour size (≤1.5 cm) is a risk factor for diagnostic failure.

  11. The preparedness of hospital Health Information Services for system failures due to internal disasters.

    PubMed

    Lee, Cheens; Robinson, Kerin M; Wendt, Kate; Williamson, Dianne

    The unimpeded functioning of hospital Health Information Services (HIS) is essential for patient care, clinical governance, organisational performance measurement, funding and research. In an investigation of hospital Health Information Services' preparedness for internal disasters, all hospitals in the state of Victoria with the following characteristics were surveyed: they have a Health Information Service/ Department; there is a Manager of the Health Information Service/Department; and their inpatient capacity is greater than 80 beds. Fifty percent of the respondents have experienced an internal disaster within the past decade, the majority affecting the Health Information Service. The most commonly occurring internal disasters were computer system failure and floods. Two-thirds of the hospitals have internal disaster plans; the most frequently occurring scenarios provided for are computer system failure, power failure and fire. More large hospitals have established back-up systems than medium- and small-size hospitals. Fifty-three percent of hospitals have a recovery plan for internal disasters. Hospitals typically self-rate as having a 'medium' level of internal disaster preparedness. Overall, large hospitals are better prepared for internal disasters than medium and small hospitals, and preparation for disruption of computer systems and medical record services is relatively high on their agendas.

  12. Probabilistic finite elements for fatigue and fracture analysis

    NASA Astrophysics Data System (ADS)

    Belytschko, Ted; Liu, Wing Kam

    1993-04-01

    An overview of the probabilistic finite element method (PFEM) developed by the authors and their colleagues in recent years is presented. The primary focus is placed on the development of PFEM for both structural mechanics problems and fracture mechanics problems. The perturbation techniques are used as major tools for the analytical derivation. The following topics are covered: (1) representation and discretization of random fields; (2) development of PFEM for the general linear transient problem and nonlinear elasticity using Hu-Washizu variational principle; (3) computational aspects; (4) discussions of the application of PFEM to the reliability analysis of both brittle fracture and fatigue; and (5) a stochastic computational tool based on stochastic boundary element (SBEM). Results are obtained for the reliability index and corresponding probability of failure for: (1) fatigue crack growth; (2) defect geometry; (3) fatigue parameters; and (4) applied loads. These results show that initial defect is a critical parameter.

  13. Probabilistic finite elements for fatigue and fracture analysis

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Liu, Wing Kam

    1993-01-01

    An overview of the probabilistic finite element method (PFEM) developed by the authors and their colleagues in recent years is presented. The primary focus is placed on the development of PFEM for both structural mechanics problems and fracture mechanics problems. The perturbation techniques are used as major tools for the analytical derivation. The following topics are covered: (1) representation and discretization of random fields; (2) development of PFEM for the general linear transient problem and nonlinear elasticity using Hu-Washizu variational principle; (3) computational aspects; (4) discussions of the application of PFEM to the reliability analysis of both brittle fracture and fatigue; and (5) a stochastic computational tool based on stochastic boundary element (SBEM). Results are obtained for the reliability index and corresponding probability of failure for: (1) fatigue crack growth; (2) defect geometry; (3) fatigue parameters; and (4) applied loads. These results show that initial defect is a critical parameter.

  14. Modeling rock specimens through 3D printing: Tentative experiments and prospects

    NASA Astrophysics Data System (ADS)

    Jiang, Quan; Feng, Xiating; Song, Lvbo; Gong, Yahua; Zheng, Hong; Cui, Jie

    2016-02-01

    Current developments in 3D printing (3DP) technology provide the opportunity to produce rock-like specimens and geotechnical models through additive manufacturing, that is, from a file viewed with a computer to a real object. This study investigated the serviceability of 3DP products as substitutes for rock specimens and rock-type materials in experimental analysis of deformation and failure in the laboratory. These experiments were performed on two types of materials as follows: (1) compressive experiments on printed sand-powder specimens in different shapes and structures, including intact cylinders, cylinders with small holes, and cuboids with pre-existing cracks, and (2) compressive and shearing experiments on printed polylactic acid cylinders and molded shearing blocks. These tentative tests for 3DP technology have exposed its advantages in producing complicated specimens with special external forms and internal structures, the mechanical similarity of its product to rock-type material in terms of deformation and failure, and its precision in mapping shapes from the original body to the trial sample (such as a natural rock joint). These experiments and analyses also successfully demonstrate the potential and prospects of 3DP technology to assist in the deformation and failure analysis of rock-type materials, as well as in the simulation of similar material modeling experiments.

  15. Experimental and Numerical Analysis of Triaxially Braided Composites Utilizing a Modified Subcell Modeling Approach

    NASA Technical Reports Server (NTRS)

    Cater, Christopher; Xiao, Xinran; Goldberg, Robert K.; Kohlman, Lee W.

    2015-01-01

    A combined experimental and analytical approach was performed for characterizing and modeling triaxially braided composites with a modified subcell modeling strategy. Tensile coupon tests were conducted on a [0deg/60deg/-60deg] braided composite at angles of 0deg, 30deg, 45deg, 60deg and 90deg relative to the axial tow of the braid. It was found that measured coupon strength varied significantly with the angle of the applied load and each coupon direction exhibited unique final failures. The subcell modeling approach implemented into the finite element software LS-DYNA was used to simulate the various tensile coupon test angles. The modeling approach was successful in predicting both the coupon strength and reported failure mode for the 0deg, 30deg and 60deg loading directions. The model over-predicted the strength in the 90deg direction; however, the experimental results show a strong influence of free edge effects on damage initiation and failure. In the absence of these local free edge effects, the subcell modeling approach showed promise as a viable and computationally efficient analysis tool for triaxially braided composite structures. Future work will focus on validation of the approach for predicting the impact response of the braided composite against flat panel impact tests.

  16. Experimental and Numerical Analysis of Triaxially Braided Composites Utilizing a Modified Subcell Modeling Approach

    NASA Technical Reports Server (NTRS)

    Cater, Christopher; Xiao, Xinran; Goldberg, Robert K.; Kohlman, Lee W.

    2015-01-01

    A combined experimental and analytical approach was performed for characterizing and modeling triaxially braided composites with a modified subcell modeling strategy. Tensile coupon tests were conducted on a [0deg/60deg/-60deg] braided composite at angles [0deg, 30deg, 45deg, 60deg and 90deg] relative to the axial tow of the braid. It was found that measured coupon strength varied significantly with the angle of the applied load and each coupon direction exhibited unique final failures. The subcell modeling approach implemented into the finite element software LS-DYNA was used to simulate the various tensile coupon test angles. The modeling approach was successful in predicting both the coupon strength and reported failure mode for the 0deg, 30deg and 60deg loading directions. The model over-predicted the strength in the 90deg direction; however, the experimental results show a strong influence of free edge effects on damage initiation and failure. In the absence of these local free edge effects, the subcell modeling approach showed promise as a viable and computationally efficient analysis tool for triaxially braided composite structures. Future work will focus on validation of the approach for predicting the impact response of the braided composite against flat panel impact tests.

  17. Crysalis: an integrated server for computational analysis and design of protein crystallization.

    PubMed

    Wang, Huilin; Feng, Liubin; Zhang, Ziding; Webb, Geoffrey I; Lin, Donghai; Song, Jiangning

    2016-02-24

    The failure of multi-step experimental procedures to yield diffraction-quality crystals is a major bottleneck in protein structure determination. Accordingly, several bioinformatics methods have been successfully developed and employed to select crystallizable proteins. Unfortunately, the majority of existing in silico methods only allow the prediction of crystallization propensity, seldom enabling computational design of protein mutants that can be targeted for enhancing protein crystallizability. Here, we present Crysalis, an integrated crystallization analysis tool that builds on support-vector regression (SVR) models to facilitate computational protein crystallization prediction, analysis, and design. More specifically, the functionality of this new tool includes: (1) rapid selection of target crystallizable proteins at the proteome level, (2) identification of site non-optimality for protein crystallization and systematic analysis of all potential single-point mutations that might enhance protein crystallization propensity, and (3) annotation of target protein based on predicted structural properties. We applied the design mode of Crysalis to identify site non-optimality for protein crystallization on a proteome-scale, focusing on proteins currently classified as non-crystallizable. Our results revealed that site non-optimality is based on biases related to residues, predicted structures, physicochemical properties, and sequence loci, which provides in-depth understanding of the features influencing protein crystallization. Crysalis is freely available at http://nmrcen.xmu.edu.cn/crysalis/.

  18. Crysalis: an integrated server for computational analysis and design of protein crystallization

    PubMed Central

    Wang, Huilin; Feng, Liubin; Zhang, Ziding; Webb, Geoffrey I.; Lin, Donghai; Song, Jiangning

    2016-01-01

    The failure of multi-step experimental procedures to yield diffraction-quality crystals is a major bottleneck in protein structure determination. Accordingly, several bioinformatics methods have been successfully developed and employed to select crystallizable proteins. Unfortunately, the majority of existing in silico methods only allow the prediction of crystallization propensity, seldom enabling computational design of protein mutants that can be targeted for enhancing protein crystallizability. Here, we present Crysalis, an integrated crystallization analysis tool that builds on support-vector regression (SVR) models to facilitate computational protein crystallization prediction, analysis, and design. More specifically, the functionality of this new tool includes: (1) rapid selection of target crystallizable proteins at the proteome level, (2) identification of site non-optimality for protein crystallization and systematic analysis of all potential single-point mutations that might enhance protein crystallization propensity, and (3) annotation of target protein based on predicted structural properties. We applied the design mode of Crysalis to identify site non-optimality for protein crystallization on a proteome-scale, focusing on proteins currently classified as non-crystallizable. Our results revealed that site non-optimality is based on biases related to residues, predicted structures, physicochemical properties, and sequence loci, which provides in-depth understanding of the features influencing protein crystallization. Crysalis is freely available at http://nmrcen.xmu.edu.cn/crysalis/. PMID:26906024

  19. Causal Attributions of Success and Failure Made by Undergraduate Students in an Introductory-Level Computer Programming Course

    ERIC Educational Resources Information Center

    Hawi, N.

    2010-01-01

    The purpose of this research is to identify the causal attributions of business computing students in an introductory computer programming course, in the computer science department at Notre Dame University, Louaize. Forty-five male and female undergraduates who completed the computer programming course that extended for a 13-week semester…

  20. A Proposal for the Creation of a Diagnostics and Power Port Standard

    NASA Technical Reports Server (NTRS)

    Willeke, Thomas

    2005-01-01

    This paper discusses contingency planning for communication failure caused by lost hardware during Moon and Mars exploration missions. The author proposes the creation of a Diagnostics and Power Port (DPP) for use in the event of total communication failure. The DPP would provide a number of different power channels and replicate computer diagnostic capabilities to find the root cause of a failure.

  1. Reliability Evaluation of Computer Systems

    DTIC Science & Technology

    1979-04-01

    detection mechanisms. The model provided values for the system availability, mean time before failure (MTBF), and the proportion of time that the system...Stanford University Computer Science 311 (also Electrical Engineering 482), Advanced Computer Organization. Graduate course in computer architecture

  2. Computational medical imaging and hemodynamics framework for functional analysis and assessment of cardiovascular structures.

    PubMed

    Wong, Kelvin K L; Wang, Defeng; Ko, Jacky K L; Mazumdar, Jagannath; Le, Thu-Thao; Ghista, Dhanjoo

    2017-03-21

    Cardiac dysfunction constitutes a common cardiovascular health issue in society, and has been an investigation topic of strong focus by researchers in the medical imaging community. Diagnostic modalities based on echocardiography, magnetic resonance imaging, chest radiography and computed tomography are common techniques that provide cardiovascular structural information to diagnose heart defects. However, functional information of cardiovascular flow, which can in fact be used to support the diagnosis of many cardiovascular diseases with a myriad of hemodynamics performance indicators, remains unexplored to its full potential. Some of these indicators constitute important cardiac functional parameters affecting the cardiovascular abnormalities. With the advancement of computer technology that facilitates high speed computational fluid dynamics, the realization of a support diagnostic platform of hemodynamics quantification and analysis can be achieved. This article reviews the state-of-the-art medical imaging and high fidelity multi-physics computational analyses that together enable reconstruction of cardiovascular structures and hemodynamic flow patterns within them, such as of the left ventricle (LV) and carotid bifurcations. The combined medical imaging and hemodynamic analysis enables us to study the mechanisms of cardiovascular disease-causing dysfunctions, such as how (1) cardiomyopathy causes left ventricular remodeling and loss of contractility leading to heart failure, and (2) modeling of LV construction and simulation of intra-LV hemodynamics can enable us to determine the optimum procedure of surgical ventriculation to restore its contractility and health. This combined medical imaging and hemodynamics framework can potentially extend medical knowledge of cardiovascular defects and associated hemodynamic behavior and their surgical restoration, by means of an integrated medical image diagnostics and hemodynamic performance analysis framework.

  3. Discovering Tradeoffs, Vulnerabilities, and Dependencies within Water Resources Systems

    NASA Astrophysics Data System (ADS)

    Reed, P. M.

    2015-12-01

    There is a growing recognition and interest in using emerging computational tools for discovering the tradeoffs that emerge across complex combinations of infrastructure options, adaptive operations, and signposts. As a field concerned with "deep uncertainties", it is logically consistent to include a more direct acknowledgement that our choices for dealing with computationally demanding simulations, advanced search algorithms, and sensitivity analysis tools are themselves subject to failures that could adversely bias our understanding of how systems' vulnerabilities change with proposed actions. Balancing simplicity versus complexity in our computational frameworks is nontrivial given that we are often exploring high impact irreversible decisions. It is not always clear that accepted models even encompass important failure modes. Moreover, as they become more complex and computationally demanding, the benefits and consequences of simplifications are often untested. This presentation discusses our efforts to address these challenges through our "many-objective robust decision making" (MORDM) framework for the design and management of water resources systems. The MORDM framework has four core components: (1) elicited problem conception and formulation, (2) parallel many-objective search, (3) interactive visual analytics, and (4) negotiated selection of robust alternatives. Problem conception and formulation is the process of abstracting a practical design problem into a mathematical representation. We build on the emerging work in visual analytics to exploit interactive visualization of both the design space and the objective space in multiple heterogeneous linked views that permit exploration and discovery. Many-objective search produces tradeoff solutions from potentially competing problem formulations that can each consider up to ten conflicting objectives based on current computational search capabilities. Negotiated design selection uses interactive visualization, reformulation, and optimization to discover desirable designs for implementation. Multi-city urban water supply portfolio planning will be used to illustrate the MORDM framework.

  4. Cost-of-illness studies based on massive data: a prevalence-based, top-down regression approach.

    PubMed

    Stollenwerk, Björn; Welchowski, Thomas; Vogl, Matthias; Stock, Stephanie

    2016-04-01

    Despite the increasing availability of routine data, no analysis method has yet been presented for cost-of-illness (COI) studies based on massive data. We aim, first, to present such a method and, second, to assess the relevance of the associated gain in numerical efficiency. We propose a prevalence-based, top-down regression approach consisting of five steps: aggregating the data; fitting a generalized additive model (GAM); predicting costs via the fitted GAM; comparing predicted costs between prevalent and non-prevalent subjects; and quantifying the stochastic uncertainty via error propagation. To demonstrate the method, it was applied, in the context of chronic lung disease, to aggregated German sickness fund data (from 1999) covering over 7.3 million insured. To assess the gain in numerical efficiency, the computational time of the innovative approach has been compared with corresponding GAMs applied to simulated individual-level data. Furthermore, the probability of model failure was modeled via logistic regression. Applying the innovative method was reasonably fast (19 min). In contrast, with patient-level data, computational time increased disproportionately with sample size. Furthermore, using patient-level data was accompanied by a substantial risk of model failure (about 80 % for 6 million subjects). The gain in computational efficiency of the innovative COI method seems to be of practical relevance. Furthermore, it may yield more precise cost estimates.
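
    The five-step approach can be outlined in code. The sketch below uses simulated data and substitutes a Gamma GLM with log link for the generalized additive model actually fitted by the authors; the column names, the aggregation scheme and the omitted error-propagation step are illustrative simplifications.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Step 1: aggregate simulated individual records into cells (age group x sex x prevalence).
individuals = pd.DataFrame({
    "age_group": rng.integers(0, 8, 50_000),
    "female": rng.integers(0, 2, 50_000),
    "prevalent": rng.integers(0, 2, 50_000),
})
individuals["cost"] = rng.gamma(2.0, 500.0 + 800.0 * individuals["prevalent"])
cells = (individuals
         .groupby(["age_group", "female", "prevalent"], as_index=False)
         .agg(mean_cost=("cost", "mean"), n=("cost", "size")))

# Step 2: fit a cost model on the aggregated cells (cell sizes as weights);
# a Gamma GLM with log link stands in for the GAM.
X = sm.add_constant(cells[["age_group", "female", "prevalent"]])
model = sm.GLM(cells["mean_cost"], X,
               family=sm.families.Gamma(link=sm.families.links.Log()),
               freq_weights=cells["n"]).fit()

# Steps 3 and 4: predict costs with and without the disease and compare.
excess_cost = (model.predict(X.assign(prevalent=1)) -
               model.predict(X.assign(prevalent=0))).mean()
print(f"Estimated excess cost per prevalent subject: {excess_cost:.0f}")

# Step 5 (propagating the prediction uncertainty) is omitted in this sketch.
```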

  5. Ablation study of tungsten-based nuclear thermal rocket fuel

    NASA Astrophysics Data System (ADS)

    Smith, Tabitha Elizabeth Rose

    The research described in this thesis has been performed in order to support the materials research and development efforts of NASA Marshall Space Flight Center (MSFC) on Tungsten-based Nuclear Thermal Rocket (NTR) fuel. The NTR was developed to a point of flight readiness nearly six decades ago and has been undergoing gradual modification and upgrading since then. Due to the simplicity in design of the NTR, and also in the modernization of the materials fabrication processes of nuclear fuel since the 1960s, the fuel of the NTR has been upgraded continuously. Tungsten-based fuel is of great interest to the NTR community, seeking to determine its advantages over the Carbide-based fuel of the previous NTR programs. The materials development and fabrication process contains failure testing, which is currently being conducted at MSFC in the form of heating the material externally and internally to replicate operation within the nuclear reactor of the NTR, such as with hot gas and RF coils. In order to expand on these efforts, experiments and computational studies of Tungsten and a Tungsten Zirconium Oxide sample provided by NASA have been conducted for this dissertation within a plasma arc-jet, meant to induce ablation on the material. Mathematical analysis was also conducted, for purposes of verifying experiments and making predictions. The computational method utilizes Anisimov's kinetic method of plasma ablation, including a thermal conduction parameter from the Chapman-Enskog expansion of the Maxwell-Boltzmann equations, and has been modified to include a tangential velocity component. Experimental data match the computational results: plasma ablation at an angle shows nearly half the ablation of plasma ablation at no angle. Fuel failure analysis of two NASA samples post-testing was conducted, and suggestions have been made for future materials fabrication processes. These studies, including the computational kinetic model at an angle and the ablation of the NASA sample, could be applied to an atmospheric reentry body, reentering on a ballistic trajectory at hypersonic velocities.

  6. [Survival analysis with competing risks: estimating failure probability].

    PubMed

    Llorca, Javier; Delgado-Rodríguez, Miguel

    2004-01-01

    To show the impact of competing risks of death on survival analysis. We provide an example of survival time without chronic rejection after heart transplantation, where death before rejection acts as a competing risk. Using a computer simulation, we compare the Kaplan-Meier estimator and the multiple decrement model. The Kaplan-Meier method overestimated the probability of rejection. Next, we illustrate the use of the multiple decrement model to analyze secondary end points (in our example: death after rejection). Finally, we discuss Kaplan-Meier assumptions and why they fail in the presence of competing risks. Survival analysis should be adjusted for competing risks of death to avoid overestimation of the risk of rejection produced with the Kaplan-Meier method.
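
    A small simulation in the spirit of the comparison described above makes the bias concrete: when death before rejection is treated as censoring, one minus the Kaplan-Meier estimate exceeds the cumulative incidence (multiple decrement) estimate of rejection. The event rates, horizon and sample size below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
t_reject = rng.exponential(scale=10.0, size=n)   # latent time to chronic rejection
t_death = rng.exponential(scale=8.0, size=n)     # latent time to death before rejection
time = np.minimum(t_reject, t_death)
event = np.where(t_reject <= t_death, 1, 2)      # 1 = rejection, 2 = competing death

order = np.argsort(time)
time, event = time[order], event[order]
horizon = 5.0

# Multiple decrement / cumulative incidence: the all-cause survival just before
# each event time multiplies the rejection hazard increment (Aalen-Johansen).
overall_surv, cuminc_reject, at_risk = 1.0, 0.0, n
for t, e in zip(time, event):
    if t > horizon:
        break
    if e == 1:
        cuminc_reject += overall_surv / at_risk
    overall_surv *= 1.0 - 1.0 / at_risk
    at_risk -= 1

# Naive Kaplan-Meier for rejection: competing deaths treated as censoring.
naive_surv, at_risk = 1.0, n
for t, e in zip(time, event):
    if t > horizon:
        break
    if e == 1:
        naive_surv *= 1.0 - 1.0 / at_risk
    at_risk -= 1

print(f"1 - Kaplan-Meier estimate of rejection by t={horizon}: {1 - naive_surv:.3f}")
print(f"Cumulative incidence of rejection by t={horizon}:      {cuminc_reject:.3f}")
```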

  7. CONFIG: Integrated engineering of systems and their operation

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Ryan, Dan; Fleming, Land

    1994-01-01

    This article discusses CONFIG 3, a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operations of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. CONFIG supports integration among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. CONFIG is designed to support integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems.

  8. Large-eddy simulation, fuel rod vibration and grid-to-rod fretting in pressurized water reactors

    DOE PAGES

    Christon, Mark A.; Lu, Roger; Bakosi, Jozsef; ...

    2016-10-01

    Grid-to-rod fretting (GTRF) in pressurized water reactors is a flow-induced vibration phenomenon that results in wear and fretting of the cladding material on fuel rods. GTRF is responsible for over 70% of the fuel failures in pressurized water reactors in the United States. Predicting the GTRF wear and concomitant interval between failures is important because of the large costs associated with reactor shutdown and replacement of fuel rod assemblies. The GTRF-induced wear process involves turbulent flow, mechanical vibration, tribology, and time-varying irradiated material properties in complex fuel assembly geometries. This paper presents a new approach for predicting GTRF-induced fuel rod wear that uses high-resolution implicit large-eddy simulation to drive nonlinear transient dynamics computations. The GTRF fluid–structure problem is separated into the simulation of the turbulent flow field in the complex-geometry fuel-rod bundles using implicit large-eddy simulation, the calculation of statistics of the resulting fluctuating structural forces, and the nonlinear transient dynamics analysis of the fuel rod. Ultimately, the methods developed here can be used, in conjunction with operational management, to improve reactor core designs in which fuel rod failures are minimized or potentially eliminated. Furthermore, the robustness of the behavior of both the structural forces computed from the turbulent flow simulations and the results from the transient dynamics analyses highlights the progress made towards achieving a predictive simulation capability for the GTRF problem.

  9. Large-eddy simulation, fuel rod vibration and grid-to-rod fretting in pressurized water reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christon, Mark A.; Lu, Roger; Bakosi, Jozsef

    Grid-to-rod fretting (GTRF) in pressurized water reactors is a flow-induced vibration phenomenon that results in wear and fretting of the cladding material on fuel rods. GTRF is responsible for over 70% of the fuel failures in pressurized water reactors in the United States. Predicting the GTRF wear and concomitant interval between failures is important because of the large costs associated with reactor shutdown and replacement of fuel rod assemblies. The GTRF-induced wear process involves turbulent flow, mechanical vibration, tribology, and time-varying irradiated material properties in complex fuel assembly geometries. This paper presents a new approach for predicting GTRF-induced fuel rod wear that uses high-resolution implicit large-eddy simulation to drive nonlinear transient dynamics computations. The GTRF fluid–structure problem is separated into the simulation of the turbulent flow field in the complex-geometry fuel-rod bundles using implicit large-eddy simulation, the calculation of statistics of the resulting fluctuating structural forces, and the nonlinear transient dynamics analysis of the fuel rod. Ultimately, the methods developed here can be used, in conjunction with operational management, to improve reactor core designs in which fuel rod failures are minimized or potentially eliminated. Furthermore, the robustness of the behavior of both the structural forces computed from the turbulent flow simulations and the results from the transient dynamics analyses highlights the progress made towards achieving a predictive simulation capability for the GTRF problem.

  10. The Influence of Reconstruction Kernel on Bone Mineral and Strength Estimates Using Quantitative Computed Tomography and Finite Element Analysis.

    PubMed

    Michalski, Andrew S; Edwards, W Brent; Boyd, Steven K

    2017-10-17

    Quantitative computed tomography has been posed as an alternative imaging modality to investigate osteoporosis. We examined the influence of computed tomography convolution back-projection reconstruction kernels on the analysis of bone quantity and estimated mechanical properties in the proximal femur. Eighteen computed tomography scans of the proximal femur were reconstructed using both a standard smoothing reconstruction kernel and a bone-sharpening reconstruction kernel. Following phantom-based density calibration, we calculated typical bone quantity outcomes of integral volumetric bone mineral density, bone volume, and bone mineral content. Additionally, we performed finite element analysis in a standard sideways fall on the hip loading configuration. Significant differences for all outcome measures, except integral bone volume, were observed between the 2 reconstruction kernels. Volumetric bone mineral density measured using images reconstructed by the standard kernel was significantly lower (6.7%, p < 0.001) when compared with images reconstructed using the bone-sharpening kernel. Furthermore, the whole-bone stiffness and the failure load measured in images reconstructed by the standard kernel were significantly lower (16.5%, p < 0.001, and 18.2%, p < 0.001, respectively) when compared with the image reconstructed by the bone-sharpening kernel. These data suggest that for future quantitative computed tomography studies, a standardized reconstruction kernel will maximize reproducibility, independent of the use of a quantitative calibration phantom. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.

  11. Free versus perforator-pedicled propeller flaps in lower extremity reconstruction: What is the safest coverage? A meta-analysis.

    PubMed

    Bekara, Farid; Herlin, Christian; Somda, Serge; de Runz, Antoine; Grolleau, Jean Louis; Chaput, Benoit

    2018-01-01

    Currently, reconstructive surgeons increasingly consider the failure rates of perforator propeller flaps, especially in the distal third of the lower leg, to be too high and prefer to return to free flaps as the first-line option, with failure rates frequently lower than 5%. We therefore performed a systematic review with meta-analysis comparing free flaps (perforator-based or not) and pedicled-propeller flaps to address the question "what is the safest coverage for the distal third of the lower limb?" This review was conducted according to PRISMA criteria. From 1991 to 2015, MEDLINE®, PubMed Central, Embase and Cochrane Library were searched. The pooled estimations were performed by meta-analysis. The homogeneity Q statistic and the I2 index were computed. We included 36 articles for free flaps (1,226 flaps) and 19 articles for pedicled-propeller flaps (302 flaps). The overall failure rate was 3.9% [95%CI:2.6-5.3] for free flaps and 2.77% [95%CI:0.0-5.6] for pedicled-propeller flaps (P = 0.36). The complication rates were 19.0% for free flaps and 21.4% for pedicled-propeller flaps (P = 0.37). In more detail, we noted for free flaps versus pedicled-propeller flaps: partial necrosis (2.70 vs. 6.88%, P = 0.001), wound dehiscence (2.38 vs. 0.26%, P = 0.018), infection (4.45 vs. 1.22%, P = 0.009). The coverage failure rate was 5.24% [95%CI:3.68-6.81] versus 2.99% [95%CI:0.38-5.60] without significant difference (P = 0.016). In the lower limb, complications are not rare and many teams consider free flaps to be safer. In this meta-analysis we provide evidence that the failure and overall complication rates of perforator propeller flaps are comparable with those of free flaps. Although partial necrosis is significantly more frequent for pedicled-propeller flaps than for free flaps, the success of coverage in practice appears similar. © 2016 Wiley Periodicals, Inc. Microsurgery, 38:109-119, 2018.
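
    As a reminder of the statistics referred to above, the snippet below pools hypothetical per-study failure proportions with inverse-variance weights and computes Cochran's Q and the I2 index; the study counts are invented and the fixed-effect pooling is a simplification of the methods actually used in the meta-analysis.

```python
import numpy as np

failures = np.array([2, 1, 4, 0, 3])      # hypothetical flap failures per study
totals = np.array([40, 25, 90, 30, 60])   # hypothetical flaps per study

# Continuity correction for the zero cell, then per-study proportion and variance.
f = failures + 0.5
n = totals + 1.0
p = f / n
var = p * (1.0 - p) / n

w = 1.0 / var                             # inverse-variance (fixed-effect) weights
p_pooled = np.sum(w * p) / np.sum(w)

Q = np.sum(w * (p - p_pooled) ** 2)       # Cochran's homogeneity statistic
df = len(p) - 1
I2 = max(0.0, (Q - df) / Q) * 100.0       # share of variability due to heterogeneity

print(f"Pooled failure rate: {100.0 * p_pooled:.1f}% (Q = {Q:.2f}, I2 = {I2:.0f}%)")
```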

  12. Monolithic ceramic analysis using the SCARE program

    NASA Technical Reports Server (NTRS)

    Manderscheid, Jane M.

    1988-01-01

    The Structural Ceramics Analysis and Reliability Evaluation (SCARE) computer program calculates the fast fracture reliability of monolithic ceramic components. The code is a post-processor to the MSC/NASTRAN general purpose finite element program. The SCARE program automatically accepts the MSC/NASTRAN output necessary to compute reliability. This includes element stresses, temperatures, volumes, and areas. The SCARE program computes two-parameter Weibull strength distributions from input fracture data for both volume and surface flaws. The distributions can then be used to calculate the reliability of geometrically complex components subjected to multiaxial stress states. Several fracture criteria and flaw types are available for selection by the user, including out-of-plane crack extension theories. The theoretical basis for the reliability calculations was proposed by Batdorf. These models combine linear elastic fracture mechanics (LEFM) with Weibull statistics to provide a mechanistic failure criterion. Other fracture theories included in SCARE are the normal stress averaging technique and the principle of independent action. The objective of this presentation is to summarize these theories, including their limitations and advantages, and to provide a general description of the SCARE program, along with example problems.
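
    The volume-flaw reliability calculation that SCARE performs can be sketched for the simplest case. The snippet below combines per-element stresses and volumes with two-parameter Weibull constants to obtain a fast-fracture survival probability; the numbers and the simple maximum-principal-stress treatment are assumptions, whereas SCARE itself offers the Batdorf, normal stress averaging and principle-of-independent-action criteria mentioned in the abstract.

```python
import numpy as np

m = 10.0          # Weibull modulus from fracture data (assumed)
sigma_0 = 600.0   # Weibull scale parameter, MPa (assumed; volume units folded in)

# Per-element maximum principal stress (MPa) and volume (mm^3), e.g. from a FE run.
stress = np.array([250.0, 310.0, 180.0, 95.0])
volume = np.array([100.0, 50.0, 200.0, 400.0])

# Only tensile stresses contribute to fast fracture in this simple treatment.
tensile = np.clip(stress, 0.0, None)
risk_of_rupture = np.sum(volume * (tensile / sigma_0) ** m)

reliability = np.exp(-risk_of_rupture)
print(f"Fast-fracture survival probability: {reliability:.4f}")
```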

  13. Fundamental analysis of the failure of polymer-based fiber reinforced composites

    NASA Technical Reports Server (NTRS)

    Kanninen, M. F.; Rybicki, E. F.; Griffith, W. I.; Broek, D.

    1976-01-01

    A mathematical model is described which will permit predictions of the strength of fiber reinforced composites containing known flaws to be made from the basic properties of their constituents. The approach was to embed a local heterogeneous region (LHR) surrounding the crack tip into an anisotropic elastic continuum. The model should (1) permit an explicit analysis of the micromechanical processes involved in the fracture process, and (2) remain simple enough to be useful in practical computations. Computations for arbitrary flaw size and orientation under arbitrary applied load combinations were performed for unidirectional composites with linear elastic-brittle constituent behavior. The mechanical properties were nominally those of graphite epoxy. With the rupture properties arbitrarily varied to test the capability of the model to reflect real fracture modes in fiber composites, it was shown that fiber breakage, matrix crazing, crack bridging, matrix-fiber debonding, and axial splitting can all occur during a period of (gradually) increasing load prior to catastrophic fracture. The computations reveal qualitatively the sequential nature of the stable crack process that precedes fracture.

  14. Research on computer aided testing of pilot response to critical in-flight events

    NASA Technical Reports Server (NTRS)

    Giffin, W. C.; Rockwell, T. H.; Smith, P. J.

    1984-01-01

    Experiments on pilot decision making are described. The development of models of pilot decision making in critical in-flight events (CIFE) is emphasized. Progress is reported on the development of: (1) a frame-system representation describing how pilots use their knowledge in a fault diagnosis task; (2) assessment of script norms, distance measures, and Markov models developed from computer-aided testing (CAT) data; and (3) performance ranking of subject data. It is demonstrated that interactive computer-aided testing, whether by touch CRTs or personal computers, is a useful research and training device for measuring pilot information management in diagnosing system failures in simulated flight situations. Performance is dictated by knowledge of aircraft subsystems, initial pilot structuring of the failure symptoms, and efficient testing of plausible causal hypotheses.

  15. Uncertainty analysis as essential step in the establishment of the dynamic Design Space of primary drying during freeze-drying.

    PubMed

    Mortier, Séverine Thérèse F C; Van Bockstal, Pieter-Jan; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas

    2016-06-01

    Large molecules, such as biopharmaceuticals, are considered the key driver of growth for the pharmaceutical industry. Freeze-drying is the preferred way to stabilise these products when needed. However, it is an expensive, inefficient, time- and energy-consuming process. During freeze-drying, there are only two main process variables to be set, i.e. the shelf temperature and the chamber pressure, preferably in a dynamic way. This manuscript focuses on the essential use of uncertainty analysis for the determination and experimental verification of the dynamic primary drying Design Space for pharmaceutical freeze-drying. Traditionally, the chamber pressure and shelf temperature are kept constant during primary drying, leading to suboptimal process conditions. In this paper it is demonstrated how a mechanistic model of the primary drying step gives the opportunity to determine the optimal dynamic values for both process variables during processing, resulting in a dynamic Design Space with a well-known risk of failure. This allows running the primary drying step as time-efficiently as possible, thereby guaranteeing that the temperature at the sublimation front does not exceed the collapse temperature. The Design Space is the multidimensional combination and interaction of input variables and process parameters leading to the expected product specifications with a controlled (i.e., high) probability. Therefore, inclusion of parameter uncertainty is an essential part of the definition of the Design Space, although it is often neglected. To quantitatively assess the inherent uncertainty in the parameters of the mechanistic model, an uncertainty analysis was performed to establish the borders of the dynamic Design Space, i.e. a time-varying shelf temperature and chamber pressure, associated with a specific risk of failure. A risk of failure acceptance level of 0.01%, i.e. a 'zero-failure' situation, results in an increased primary drying process time compared with the deterministic dynamic Design Space; however, the risk of failure is under control. Experimental verification revealed that only a risk of failure acceptance level of 0.01% yielded a guaranteed zero-defect quality end-product. The computed process settings with a risk of failure acceptance level of 0.01% resulted in a decrease of more than half of the primary drying time in comparison with a regular, conservative cycle with fixed settings. Copyright © 2016. Published by Elsevier B.V.
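
    The sketch below illustrates, in Python, how parameter uncertainty can be propagated by Monte Carlo to estimate a risk of failure for a candidate shelf-temperature/pressure setting. The linearized front-temperature function, the dried-layer resistance distribution, the collapse temperature, and all numerical values are stand-ins for illustration only, not the mechanistic model of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def sublimation_front_temperature(shelf_temp, chamber_pressure, rp):
            """Placeholder for the mechanistic primary-drying model: returns the
            product temperature at the sublimation front (degrees C). The real
            model solves coupled heat/mass transfer; this linear stand-in only
            demonstrates how uncertainty is propagated."""
            return shelf_temp - 28.0 + 0.6 * chamber_pressure + 0.8 * rp

        def risk_of_failure(shelf_temp, chamber_pressure, t_collapse=-32.0, n=100_000):
            """Monte Carlo estimate of the probability that the front temperature
            exceeds the collapse temperature, given an assumed uncertainty on the
            dried-layer resistance parameter rp."""
            rp_samples = rng.normal(loc=12.0, scale=2.5, size=n)   # assumed uncertain parameter
            t_front = sublimation_front_temperature(shelf_temp, chamber_pressure, rp_samples)
            return np.mean(t_front > t_collapse)

        # a setting belongs to the dynamic Design Space if its risk stays below the acceptance level
        print(risk_of_failure(shelf_temp=-20.0, chamber_pressure=8.0))  # pressure in Pa, hypothetical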

  16. Elastic Rock Heterogeneity Controls Brittle Rock Failure during Hydraulic Fracturing

    NASA Astrophysics Data System (ADS)

    Langenbruch, C.; Shapiro, S. A.

    2014-12-01

    For interpretation and inversion of microseismic data it is important to understand which properties of the reservoir rock control the occurrence probability of brittle rock failure and associated seismicity during hydraulic stimulation. This is especially important when inverting for key properties like permeability and fracture conductivity. Although it has become accepted that seismic events are triggered by fluid flow and the resulting perturbation of the stress field in the reservoir rock, the magnitude of stress perturbations capable of triggering failure in rocks can be highly variable. The controlling physical mechanism of this variability is still under discussion. We compare the occurrence of microseismic events at the Cotton Valley gas field to elastic rock heterogeneity obtained from measurements along the treatment wells. The heterogeneity is characterized by scale-invariant fluctuations of elastic properties. We observe that the elastic heterogeneity of the rock formation controls the occurrence of brittle failure. In particular, we find that the density of events increases with the Brittleness Index (BI) of the rock, which is defined as a combination of Young's modulus and Poisson's ratio. We evaluate the physical meaning of the BI. By applying geomechanical investigations we characterize the influence of fluctuating elastic properties in rocks on the probability of brittle rock failure. Our analysis is based on the computation of stress fluctuations caused by elastic heterogeneity of rocks. We find that elastic rock heterogeneity causes stress fluctuations of significant magnitude. Moreover, the stress changes necessary to open and reactivate fractures in rocks are strongly related to fluctuations of elastic moduli. Our analysis gives a physical explanation of the observed relation between elastic heterogeneity of the rock formation and the occurrence of brittle failure during hydraulic reservoir stimulations. A crucial factor for understanding seismicity in unconventional reservoirs is the role of anisotropy of rocks. We evaluate an elastic VTI rock model corresponding to a shale gas reservoir in the Horn River Basin to understand the relation between stress, event occurrence, and elastic heterogeneity in anisotropic rocks.
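
    The abstract only states that the BI combines Young's modulus and Poisson's ratio. One common normalization (Rickman-style) is sketched below in Python; the particular formula, the normalization bounds, and the log-derived property values are assumptions for illustration, not necessarily the definition used by the authors.

        import numpy as np

        def brittleness_index(E, nu, E_min=1.0, E_max=8.0, nu_min=0.15, nu_max=0.40):
            """Rickman-style Brittleness Index: high Young's modulus and low
            Poisson's ratio map to high brittleness. E is in units consistent
            with the assumed bounds (here, 10^6 psi); nu is dimensionless."""
            E = np.asarray(E, dtype=float)
            nu = np.asarray(nu, dtype=float)
            e_term = 100.0 * (E - E_min) / (E_max - E_min)          # normalized stiffness term
            nu_term = 100.0 * (nu - nu_max) / (nu_min - nu_max)     # normalized Poisson term (inverted)
            return 0.5 * (e_term + nu_term)

        # hypothetical log-derived properties along a treatment well
        print(brittleness_index([3.5, 6.0], [0.30, 0.20]))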

  17. Aerospace Applications of Weibull and Monte Carlo Simulation with Importance Sampling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1998-01-01

    Recent developments in reliability modeling and computer technology have made it practical to use the Weibull time to failure distribution to model the system reliability of complex fault-tolerant computer-based systems. These system models are becoming increasingly popular in space systems applications as a result of mounting data that support the decreasing Weibull failure distribution and the expectation of increased system reliability. This presentation introduces the new reliability modeling developments and demonstrates their application to a novel space system application. The application is a proposed guidance, navigation, and control (GN&C) system for use in a long duration manned spacecraft for a possible Mars mission. Comparisons to the constant failure rate model are presented and the ramifications of doing so are discussed.
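
    The Python sketch below illustrates the basic ingredient of such models: Monte Carlo estimation of mission reliability for a redundant unit whose component lifetimes follow a Weibull distribution with a decreasing failure rate (shape < 1). The 2-of-3 voting arrangement and all numbers are hypothetical, and the importance-sampling refinement mentioned in the title is not shown.

        import numpy as np

        rng = np.random.default_rng(42)

        def mission_reliability(shape, scale, mission_time,
                                n_components=3, k_required=2, n_samples=200_000):
            """Plain Monte Carlo estimate of the reliability of a k-out-of-n
            redundant unit with Weibull(shape, scale) component lifetimes."""
            lifetimes = scale * rng.weibull(shape, size=(n_samples, n_components))
            survivors = np.sum(lifetimes > mission_time, axis=1)
            return np.mean(survivors >= k_required)

        # hypothetical GN&C channel: decreasing failure rate (shape 0.8), 2-of-3 voting, long mission
        print(mission_reliability(shape=0.8, scale=50_000.0, mission_time=20_000.0))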

  18. The ac propulsion system for an electric vehicle, phase 1

    NASA Astrophysics Data System (ADS)

    Geppert, S.

    1981-08-01

    A functional prototype of an electric vehicle ac propulsion system was built, consisting of an 18.65 kW rated ac induction traction motor, a pulse width modulated (PWM) transistorized inverter, a two-speed mechanically shifted automatic transmission, and an overall drive/vehicle controller. Design and developmental steps, and test results of the individual components and the complete system on an instrumented test frame, are described. Computer models were developed for the inverter, motor, and a representative vehicle. A preliminary reliability model and failure modes and effects analysis are given.

  19. The ac propulsion system for an electric vehicle, phase 1

    NASA Technical Reports Server (NTRS)

    Geppert, S.

    1981-01-01

    A functional prototype of an electric vehicle ac propulsion system was built, consisting of an 18.65 kW rated ac induction traction motor, a pulse width modulated (PWM) transistorized inverter, a two-speed mechanically shifted automatic transmission, and an overall drive/vehicle controller. Design and developmental steps, and test results of the individual components and the complete system on an instrumented test frame, are described. Computer models were developed for the inverter, motor, and a representative vehicle. A preliminary reliability model and failure modes and effects analysis are given.

  20. Analytical redundancy management mechanization and flight data analysis for the F-8 digital fly-by-wire aircraft flight control sensors

    NASA Technical Reports Server (NTRS)

    Deckert, J. C.

    1983-01-01

    The details are presented of an onboard digital computer algorithm designed to reliably detect and isolate the first failure in a duplex set of flight control sensors aboard the NASA F-8 digital fly-by-wire aircraft. The algorithm's successful flight test program is summarized, and specific examples are presented of algorithm behavior in response to software-induced signal faults, both with and without aircraft parameter modeling errors.
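
    The Python sketch below illustrates the general idea of first-failure detection and isolation in a duplex sensor pair using analytical redundancy: when the two hardware readings disagree, an analytically derived estimate of the same quantity is used to isolate the failed sensor. The thresholds, function name, and decision logic are illustrative assumptions, not the flight algorithm described above.

        def duplex_sensor_fdi(sensor_a, sensor_b, estimate, disagree_tol, fail_tol):
            """Detect and isolate the first failure in a duplex sensor pair.
            estimate: analytic estimate of the measured quantity (e.g. derived
            from other sensors through the aircraft model).
            Returns a status string and the value selected for use by the control laws."""
            if abs(sensor_a - sensor_b) <= disagree_tol:
                return 'both healthy', 0.5 * (sensor_a + sensor_b)
            # sensors disagree: isolate the one farther from the analytic estimate
            if abs(sensor_a - estimate) > fail_tol and abs(sensor_b - estimate) <= fail_tol:
                return 'sensor A failed', sensor_b
            if abs(sensor_b - estimate) > fail_tol and abs(sensor_a - estimate) <= fail_tol:
                return 'sensor B failed', sensor_a
            return 'undetermined', estimate

        # hypothetical readings: sensor B has drifted away from both A and the analytic estimate
        print(duplex_sensor_fdi(10.2, 14.8, 10.0, disagree_tol=1.0, fail_tol=2.0))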

  1. Survival analysis in telemetry studies: The staggered entry design

    USGS Publications Warehouse

    Pollock, K.H.; Winterstein, S.R.; Bunck, C.M.; Curtis, P.D.

    1989-01-01

    A simple description of the Kaplan-Meier procedure is presented, with an example using northern bobwhite quail survival data. The Kaplan-Meier procedure was then generalized to allow gradual (or staggered) entry of animals into the study and to allow for animals being lost (or censored) due to radio failure, radio loss, or emigration from the study area. Additionally, the applicability and generalization of the log-rank test, a test to compare two survival distributions, was demonstrated. A computer program was developed and is available from the authors.
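
    A minimal Python sketch of the basic Kaplan-Meier estimator with right-censoring is given below; the data are hypothetical, and the staggered-entry generalization of the paper (which additionally adjusts the risk set for animals not yet entered at each time) is not implemented.

        import numpy as np

        def kaplan_meier(times, events):
            """Kaplan-Meier survival estimate.

            times  : time from entry to death or censoring
            events : 1 = death observed, 0 = censored (radio failure, radio loss, emigration)
            Returns the distinct event times and the estimated survival curve."""
            times = np.asarray(times, dtype=float)
            events = np.asarray(events, dtype=int)
            order = np.argsort(times)
            times, events = times[order], events[order]
            surv, out_t, out_s = 1.0, [], []
            n_at_risk = len(times)
            for t in np.unique(times):
                at_this_time = times == t
                deaths = np.sum(events[at_this_time])
                if deaths > 0:
                    surv *= 1.0 - deaths / n_at_risk      # multiply by conditional survival
                    out_t.append(t)
                    out_s.append(surv)
                n_at_risk -= np.sum(at_this_time)          # remove deaths and censored animals
            return out_t, out_s

        # hypothetical radio-tagged bobwhite data: weeks tracked and death indicator
        print(kaplan_meier([2, 3, 3, 5, 7, 8, 8, 10], [1, 0, 1, 1, 0, 1, 0, 0]))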

  2. Reliability assessment of multiple quantum well avalanche photodiodes

    NASA Technical Reports Server (NTRS)

    Yun, Ilgu; Menkara, Hicham M.; Wang, Yang; Oguzman, Isamil H.; Kolnik, Jan; Brennan, Kevin F.; May, Gray S.; Wagner, Brent K.; Summers, Christopher J.

    1995-01-01

    The reliability of doped-barrier AlGaAs/GaAs multi-quantum well avalanche photodiodes fabricated by molecular beam epitaxy is investigated via accelerated life tests. Dark current and breakdown voltage were the parameters monitored. The activation energy of the degradation mechanism and the median device lifetime were determined. Device failure probability as a function of time was computed using the lognormal model. Analysis using the electron beam induced current method revealed the degradation to be caused by ionic impurities or contamination in the passivation layer.
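
    For reference, the lognormal failure-probability model mentioned above takes the form F(t) = Phi((ln t - ln t50)/sigma). A short Python sketch with hypothetical median lifetime and shape parameter is shown below.

        from math import log, sqrt, erf

        def lognormal_failure_probability(t, t50, sigma):
            """Cumulative failure probability under the lognormal model:
            F(t) = Phi((ln t - ln t50) / sigma), with t50 the median lifetime
            and sigma the lognormal shape parameter."""
            z = (log(t) - log(t50)) / sigma
            return 0.5 * (1.0 + erf(z / sqrt(2.0)))

        # e.g. probability a diode has failed by 1e5 hours if t50 = 5e5 h and sigma = 1.2 (hypothetical)
        print(lognormal_failure_probability(1.0e5, 5.0e5, 1.2))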

  3. [The application of computers to the analysis of the contraceptive efficacy of IUDs in rural women of Guangdong province].

    PubMed

    Jia, G H

    1989-12-01

    This paper discusses the use and effectiveness of the type-O IUD among rural married women in Guangdong province. The continuation rate of the type-O IUD is 71.7 per 100 women at one year. The main cause of failure was expulsion. A combination of univariate and multivariate analytic methods was used. On the whole, the important factors were gravidity and parity, number of induced abortions, and the level of medical technique.

  4. Analysis methods for Kevlar shield response to rotor fragments

    NASA Technical Reports Server (NTRS)

    Gerstle, J. H.

    1977-01-01

    Several empirical and analytical approaches to rotor burst shield sizing are compared and principal differences in metal and fabric dynamic behavior are discussed. The application of transient structural response computer programs to predict Kevlar containment limits is described. For preliminary shield sizing, present analytical methods are useful if insufficient test data for empirical modeling are available. To provide other information useful for engineering design, analytical methods require further developments in material characterization, failure criteria, loads definition, and post-impact fragment trajectory prediction.

  5. Use of Failure in IS Development Statistics: Lessons for IS Curriculum Design

    ERIC Educational Resources Information Center

    Longenecker, Herbert H., Jr.; Babb, Jeffry; Waguespack, Leslie; Tastle, William; Landry, Jeff

    2016-01-01

    The evolution of computing education reflects the history of the professional practice of computing. Keeping computing education current has been a major challenge due to the explosive advances in technologies. Academic programs in Information Systems, a long-standing computing discipline, develop and refine the theory and practice of computing…

  6. 25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... percentage of the handle. (b) Computer applications. For any computer applications utilized, alternate.... In case of computer failure between the pari-mutuel book and the hub, no tickets shall be manually... writer/cashier shall sign on and the computer shall document gaming operation name (or identification...

  7. 25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... percentage of the handle. (b) Computer applications. For any computer applications utilized, alternate.... In case of computer failure between the pari-mutuel book and the hub, no tickets shall be manually... writer/cashier shall sign on and the computer shall document gaming operation name (or identification...

  8. 25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... percentage of the handle. (b) Computer applications. For any computer applications utilized, alternate.... In case of computer failure between the pari-mutuel book and the hub, no tickets shall be manually... writer/cashier shall sign on and the computer shall document gaming operation name (or identification...

  9. Soft-error tolerance and energy consumption evaluation of embedded computer with magnetic random access memory in practical systems using computer simulations

    NASA Astrophysics Data System (ADS)

    Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko

    2017-08-01

    We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.

  10. Detection of Failure in Asynchronous Motor Using Soft Computing Method

    NASA Astrophysics Data System (ADS)

    Vinoth Kumar, K.; Sony, Kevin; Achenkunju John, Alan; Kuriakose, Anto; John, Ano P.

    2018-04-01

    This paper investigates stator short-winding (inter-turn) failures in an asynchronous motor and their effects on the motor current spectrum. A fuzzy-logic, model-based technique can help to detect asynchronous motor failures. Fuzzy logic resembles human reasoning in that it enables inferences to be drawn from vague data through linguistic rules. A dynamic model of the asynchronous motor is developed, with a fuzzy-logic classifier used to investigate stator inter-turn failures as well as open-phase failures. A hardware implementation was carried out with LabVIEW for the online monitoring of faults.
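
    As a toy illustration of a fuzzy-logic fault classifier, the Python sketch below fuzzifies two current-derived inputs with triangular membership functions and combines two min/max rules into a crisp fault-severity degree. The input features, membership breakpoints, and rules are assumptions for illustration and are not taken from the paper.

        import numpy as np

        def trimf(x, a, b, c):
            """Triangular membership function with breakpoints a <= b <= c."""
            return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

        def stator_fault_degree(neg_seq_current, current_unbalance):
            """Two inputs (negative-sequence current in A, current unbalance in %)
            are fuzzified and combined by min (AND) rules; a Sugeno-style weighted
            average gives a fault severity in [0, 1]."""
            i_high = trimf(neg_seq_current, 0.5, 2.0, 4.0)
            i_low = trimf(neg_seq_current, -1.0, 0.0, 1.0)
            u_high = trimf(current_unbalance, 2.0, 6.0, 12.0)
            u_low = trimf(current_unbalance, -2.0, 0.0, 3.0)
            severe = min(i_high, u_high)    # rule: high neg-seq AND high unbalance -> faulty (output 1)
            healthy = min(i_low, u_low)     # rule: low neg-seq AND low unbalance -> healthy (output 0)
            return (1.0 * severe + 0.0 * healthy) / (severe + healthy + 1e-9)

        print(stator_fault_degree(1.5, 5.0))   # hypothetical measurement indicating a likely fault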

  11. Payload maintenance cost model for the space telescope

    NASA Technical Reports Server (NTRS)

    White, W. L.

    1980-01-01

    An optimum maintenance cost model for the space telescope over a fifteen-year mission cycle was developed. Failure rates and configurations were documented and subsequently updated. The reliability of the space telescope at one year, two and one half years, and five years was determined using these failure rates and configurations. The failure rates and configurations were also used in a maintenance simulation computer model, which simulates the failure patterns over the fifteen-year mission life of the space telescope. Cost algorithms associated with the maintenance options indicated by the failure patterns were developed and integrated into the model.

  12. Failure warning of hydrous sandstone based on electroencephalogram technique

    NASA Astrophysics Data System (ADS)

    Tao, Kai; Zheng, Wei

    2018-06-01

    Sandstone is a type of rock mass that exists widely in nature. Moisture is an important factor leading to sandstone structural failure. The major failure assessment methods for hydrous sandstone at present cannot satisfy real-time and portability requirements and, in particular, lack a warning function. In this study, acoustic emission (AE) and computed tomography (CT) techniques are combined for real-time failure assessment of hydrous sandstone. Eight visual warning colors are screened according to different failure states, and an electroencephalogram (EEG) experiment is conducted to demonstrate their diverse excitations of the human brain's concentration.

  13. Computer Simulation Is an Undervalued Tool for Genetic Analysis: A Historical View and Presentation of SHIMSHON – A Web-Based Genetic Simulation Package

    PubMed Central

    Greenberg, David A.

    2011-01-01

    Computer simulation methods are under-used tools in genetic analysis because simulation approaches have been portrayed as inferior to analytic methods. Even when simulation is used, its advantages are not fully exploited. Here, I present SHIMSHON, our package of genetic simulation programs that have been developed, tested, used for research, and used to generate data for Genetic Analysis Workshops (GAW). These simulation programs, now web-accessible, can be used by anyone to answer questions about designing and analyzing genetic disease studies for locus identification. This work has three foci: (1) the historical context of SHIMSHON's development, suggesting why simulation has not been more widely used so far. (2) Advantages of simulation: computer simulation helps us to understand how genetic analysis methods work. It has advantages for understanding disease inheritance and methods for gene searches. Furthermore, simulation methods can be used to answer fundamental questions that either cannot be answered by analytical approaches or cannot even be defined until the problems are identified and studied, using simulation. (3) I argue that, because simulation was not accepted, there was a failure to grasp the meaning of some simulation-based studies of linkage. This may have contributed to perceived weaknesses in linkage analysis; weaknesses that did not, in fact, exist. PMID:22189467

  14. Application of transient CFD-procedures for S-shape computation in pump-turbines with and without FSI

    NASA Astrophysics Data System (ADS)

    Casartelli, E.; Mangani, L.; Ryan, O.; Schmid, A.

    2016-11-01

    CFD has been part of the product development process for hydraulic machines for more than three decades. Besides the actual design process, in which the most appropriate geometry for a certain task is sought iteratively, several steady-state simulations and related analyses are performed with the help of CFD. Basic transient CFD analysis is becoming more and more routine for rotor-stator interaction assessment, but in general unsteady CFD is still not standard because of the large computational effort. Especially for FSI simulations, where mesh motion is involved, a considerable amount of computational time is needed for mesh handling and deformation as well as for resolving the related unsteady flow field. Therefore this kind of CFD computation is still unusual and mostly performed during trouble-shooting analysis rather than in the standard development process, i.e. in order to understand what went wrong instead of preventing failure or, even better, increasing the available knowledge. In this paper the application of an efficient and particularly robust algorithm for fast computations with moving meshes is presented for the analysis of transient effects encountered during highly dynamic procedures in the operation of a pump-turbine, such as runaway at fixed GV position and load rejection with GV motion imposed as one-way FSI. In both cases the computations extend through the S-shape of the machine into the turbine-brake and reverse-pump domains, showing that such exotic computations can be performed on a more regular basis, even if they remain quite time consuming. Besides the presentation of the procedure and global results, some highlights of the encountered flow physics are also given.

  15. The use of fractography to supplement analysis of bone mechanical properties in different strains of mice.

    PubMed

    Wise, L M; Wang, Z; Grynpas, M D

    2007-10-01

    Fractography has not been fully developed as a useful technique in assessing failure mechanisms of bone. While fracture surfaces of osteonal bone have been explored, this may not apply to conventional mechanical testing of mouse bone. Thus, the focus of this work was to develop and evaluate the efficacy of a fractography protocol for use in supplementing the interpretation of failure mechanisms in mouse bone. Micro-computed tomography and three-point bending were performed on femora of two groups of 6-month-old mice (C57BL/6 and a mixed strain background of 129SV/C57BL6). SEM images of fracture surfaces were collected, and areas of "tension", "compression" and "transition" were identified. Percent areas of roughness were identified and estimated within areas of "tension" and "compression" and subsequently compared to surface roughness measurements generated from an optical profiler. Porosity parameters were determined on the tensile side. Linear regression analysis was performed to evaluate correlations between certain parameters. Results show that 129 mice exhibit significantly increased bone mineral density (BMD), number of "large" pores, failure strength, elastic modulus and energy to failure compared to B6 mice (p<0.001). Both 129 and B6 mice exhibit significantly (p<0.01) more percent areas of tension (49+/-1%, 42+/-2%; respectively) compared to compression (26+/-2%, 31+/-1%; respectively). In terms of "roughness", B6 mice exhibit significantly less "rough" areas (30+/-4%) compared to "smooth" areas (70+/-4%) on the tensile side only (p<0.001). Qualitatively, 129 mice demonstrate more evidence of bone toughening through fiber bridging and loosely connected fiber bundles. The number of large pores is positively correlated with failure strength (p=0.004), elastic modulus (p=0.002) and energy to failure (p=0.041). Percent area of tensile surfaces is positively correlated with failure strength (p<0.001), elastic modulus (p=0.016) and BMD (p=0.037). Percent area of rough compressive surfaces is positively correlated with energy to failure (p=0.039). Evaluation of fracture surfaces has helped to explain why 129 mice have increased mechanical properties compared to B6 mice, namely via toughening mechanisms on the compressive side of failure. Several correlations exist between fractography parameters and mechanical behavior, supporting the utility of fractography with skeletal mouse models.

  16. Orthogonal series generalized likelihood ratio test for failure detection and isolation. [for aircraft control

    NASA Technical Reports Server (NTRS)

    Hall, Steven R.; Walker, Bruce K.

    1990-01-01

    A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.
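
    For context, a textbook scalar GLR test for a constant sensor bias of unknown size and onset time is sketched below in Python; this is the classical GLR idea the abstract compares against, not the orthogonal-series (OSGLR) variant itself, and the threshold and residual data are hypothetical.

        import numpy as np

        def glr_bias_detection(residuals, sigma, threshold):
            """Generalized likelihood ratio test for a constant bias appearing at an
            unknown time k in a white residual sequence with known std sigma.
            Returns (alarm flag, most likely onset index)."""
            r = np.asarray(residuals, dtype=float)
            best_stat, best_k = 0.0, None
            for k in range(len(r)):
                seg = r[k:]
                # GLR statistic for a bias starting at k (bias magnitude maximized out):
                stat = np.sum(seg) ** 2 / (len(seg) * sigma ** 2)
                if stat > best_stat:
                    best_stat, best_k = stat, k
            return best_stat > threshold, best_k

        # hypothetical sensor residuals with a bias appearing halfway through
        rng = np.random.default_rng(1)
        res = np.concatenate([rng.normal(0.0, 0.1, 50), rng.normal(0.3, 0.1, 50)])
        print(glr_bias_detection(res, sigma=0.1, threshold=20.0))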

  17. Mixed-mode cyclic debonding of adhesively bonded composite joints. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Rezaizadeh, M. A.; Mall, S.

    1985-01-01

    A combined experimental-analytical investigation to characterize the cyclic failure mechanism of a simple composite-to-composite bonded joint is conducted. Cracked lap shear (CLS) specimens of graphite/epoxy adherends bonded with EC-3445 adhesive are tested under combined mode 1 and mode 2 loading. In all specimens tested, fatigue failure occurs in the form of cyclic debonding. The cyclic debond growth rates are measured. A finite element analysis is employed to compute the mode 1, mode 2, and total strain energy release rates (i.e., GI, GII, and GT). A wide range of mixed-mode loading, i.e., GI/GII ranging from 0.03 to 0.38, is obtained. The total strain energy release rate, GT, appeared to be the driving parameter for cyclic debonding in the tested composite bonded system.

  18. Assessment of spare reliability for multi-state computer networks within tolerable packet unreliability

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Kuei; Huang, Cheng-Fu

    2015-04-01

    From a quality of service viewpoint, the transmission packet unreliability and transmission time are both critical performance indicators in a computer system when assessing the Internet quality for supervisors and customers. A computer system is usually modelled as a network topology where each branch denotes a transmission medium and each vertex represents a station of servers. Almost every branch has multiple capacities/states due to failure, partial failure, maintenance, etc. This type of network is known as a multi-state computer network (MSCN). This paper proposes an efficient algorithm that computes the system reliability, i.e., the probability that a specified amount of data can be sent through k (k ≥ 2) disjoint minimal paths within both the tolerable packet unreliability and time threshold. Furthermore, two routing schemes are established in advance to indicate the main and spare minimal paths to increase the system reliability (referred to as spare reliability). Thus, the spare reliability can be readily computed according to the routing scheme.

  19. Test of the FDTD accuracy in the analysis of the scattering resonances associated with high-Q whispering-gallery modes of a circular cylinder.

    PubMed

    Boriskin, Artem V; Boriskina, Svetlana V; Rolland, Anthony; Sauleau, Ronan; Nosich, Alexander I

    2008-05-01

    Our objective is the assessment of the accuracy of a conventional finite-difference time-domain (FDTD) code in the computation of the near- and far-field scattering characteristics of a circular dielectric cylinder. We excite the cylinder with an electric or magnetic line current and demonstrate the failure of the two-dimensional FDTD algorithm to accurately characterize the emission rate and the field patterns near high-Q whispering-gallery-mode resonances. This is proven by comparison with the exact series solutions. The computational errors in the emission rate are then studied at the resonances still detectable with FDTD, i.e., having Q-factors up to 10^3.

  20. Conversion-Integration of MSFC Nonlinear Signal Diagnostic Analysis Algorithms for Realtime Execution of MSFC's MPP Prototype System

    NASA Technical Reports Server (NTRS)

    Jong, Jen-Yi

    1996-01-01

    NASA's advanced propulsion system, the Space Shuttle Main Engine/Advanced Technology Development (SSME/ATD) engine, has been undergoing extensive flight certification and developmental testing, which involves large numbers of health monitoring measurements. To enhance engine safety and reliability, detailed analysis and evaluation of the measurement signals are mandatory to assess its dynamic characteristics and operational condition. Efficient and reliable signal detection techniques will reduce the risk of catastrophic system failures and expedite the evaluation of both flight and ground test data, thereby reducing launch turn-around time. During the development of the SSME, ASRI participated in the research and development of several advanced nonlinear signal diagnostic methods for health monitoring and failure prediction in turbomachinery components. However, due to the intensive computational requirements associated with such advanced analysis tasks, current SSME dynamic data analysis and diagnostic evaluation is performed off-line following flight or ground test, with a typical diagnostic turnaround time of one to two days. The objective of MSFC's MPP Prototype System is to eliminate such 'diagnostic lag time' by achieving signal processing and analysis in real time. Such an on-line diagnostic system can provide sufficient lead time to initiate corrective action and also enable efficient scheduling of inspection, maintenance, and repair activities. The major objective of this project was to convert and implement a number of advanced nonlinear diagnostic DSP algorithms in a format consistent with that required for integration into the Vanderbilt Multigraph Architecture (MGA) Model Based Programming environment. This effort will allow the real-time execution of these algorithms using the MSFC MPP Prototype System. ASRI has completed the software conversion and integration of a sequence of nonlinear signal analysis techniques specified in the SOW for real-time execution on MSFC's MPP Prototype. This report documents and summarizes the results of the contract tasks, provides the complete computer source code (including all FORTRAN/C utilities), and lists all other supporting software libraries that are required for operation.

  1. Bricklayer Static Analysis

    NASA Astrophysics Data System (ADS)

    Harris, Christopher

    In the U.S., science and math are taking the spotlight in education, and rightfully so, as they directly impact economic progress. Curiously absent is computer science, which despite its numerous job opportunities and growth does not receive as much focus. This thesis develops a source code analysis framework, using language translation and machine learning classifiers, to analyze programs written in Bricklayer for the purposes of programmatically identifying the relative success or failure of a student's Bricklayer program, helping teachers scale the number of students they can support, and providing better messaging. The thesis uses a set of student programs as a case study to demonstrate the possibilities of the framework.

  2. CELFE: Coupled Eulerian-Lagrangian Finite Element program for high velocity impact. Part 1: Theory and formulation. [hydroelasto-viscoplastic model

    NASA Technical Reports Server (NTRS)

    Lee, C. H.

    1978-01-01

    A 3-D finite element program capable of simulating the dynamic behavior in the vicinity of the impact point, together with predicting the dynamic response in the remaining part of the structural component subjected to high velocity impact, is discussed. The finite element algorithm is formulated in a general moving coordinate system. In the vicinity of the impact point, contained by a moving failure front, the relative velocity of the coordinate system will approach the material particle velocity. The dynamic behavior inside this region is described by an Eulerian formulation based on a hydroelasto-viscoplastic model. The failure front, which can be regarded as the boundary of the impact zone, is described by a transition layer. The layer changes the representation from the Eulerian mode to the Lagrangian mode outside the failure front by varying the relative velocity of the coordinate system to zero. The dynamic response in the remaining part of the structure, described by the Lagrangian formulation, is treated using advanced structural analysis. An interfacing algorithm for coupling CELFE with NASTRAN is constructed to provide computational capabilities for large structures.

  3. Modelling river bank erosion processes and mass failure mechanisms using 2-D depth averaged numerical model

    NASA Astrophysics Data System (ADS)

    Die Moran, Andres; El kadi Abderrezzak, Kamal; Tassi, Pablo; Herouvet, Jean-Michel

    2014-05-01

    Bank erosion is a key process that may cause a large number of economic and environmental problems (e.g. land loss, damage to structures and aquatic habitat). Stream bank erosion (toe erosion and mass failure) represents an important form of channel morphology change and a significant source of sediment. With the advances made in computational techniques, two-dimensional (2-D) numerical models have become valuable tools for investigating flow and sediment transport in open channels at large temporal and spatial scales. However, the implementation of the mass failure process in 2-D numerical models is still a challenging task. In this paper, a simple, innovative algorithm is implemented in the Telemac-Mascaret modeling platform to handle bank failure: failure occurs when the actual slope of a given bed element is higher than the internal friction angle. The unstable bed elements are rotated around an appropriate axis, ensuring mass conservation. Mass failure of a bank due to slope instability is applied at the end of each sediment transport evolution iteration, once the bed evolution due to bed load (and/or suspended load) has been computed, but before the global sediment mass balance is verified. This bank failure algorithm is successfully tested using two laboratory experimental cases. Then, bank failure in a 1:40 scale physical model of the Rhine River composed of non-uniform material is simulated. The main features of the bank erosion and failure are correctly reproduced in the numerical simulations, namely the mass wasting at the bank toe, followed by failure at the bank head, and subsequent transport of the mobilised material in an aggradation front. The volumes of eroded material obtained are of the same order of magnitude as the volumes measured during the laboratory tests.
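
    A much-simplified 1-D Python sketch of the slope-check idea is given below: where the local bed slope exceeds the internal friction angle, material is moved from the high node to the low node so the pair returns to the friction slope, conserving mass. The actual Telemac-Mascaret implementation works on 2-D elements rotated about an axis; the profile, spacing, and friction angle here are hypothetical.

        import numpy as np

        def apply_bank_failure(z, dx, phi_deg):
            """One sweep over a 1-D bed profile z (m) with node spacing dx (m):
            any pair of neighbouring nodes steeper than the friction angle phi is
            relaxed back to the friction slope by a mass-conserving transfer."""
            z = np.asarray(z, dtype=float).copy()
            tan_phi = np.tan(np.radians(phi_deg))
            for i in range(len(z) - 1):
                slope = (z[i] - z[i + 1]) / dx
                if abs(slope) > tan_phi:
                    excess = (abs(slope) - tan_phi) * dx / 2.0   # height to transfer
                    hi, lo = (i, i + 1) if slope > 0 else (i + 1, i)
                    z[hi] -= excess                               # erode the upper node
                    z[lo] += excess                               # deposit on the lower node
            return z

        # hypothetical bank profile (m), 0.5 m spacing, internal friction angle 30 degrees
        print(apply_bank_failure([2.0, 1.9, 1.2, 0.3, 0.2], dx=0.5, phi_deg=30.0))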

  4. Computational mechanics research and support for aerodynamics and hydraulics at TFHRC. Quarterly report January through March 2011. Year 1 Quarter 2 progress report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lottes, S. A.; Kulak, R. F.; Bojanowski, C.

    2011-05-19

    This project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at the Turner-Fairbank Highway Research Center for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water loads on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risks that these loads pose to structural failure. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to assess them for fish passage, modeling of the salt spray transport into bridge girders to address suitability of using weathering steel in bridges, vehicle stability under high wind loading, and the use of electromagnetic shock absorbers to improve vehicle stability under high wind conditions. This quarterly report documents technical progress on the project tasks for the period of January through March 2011.

  5. Mid-term survival analysis of closed wedge high tibial osteotomy: A comparative study of computer-assisted and conventional techniques.

    PubMed

    Bae, Dae Kyung; Song, Sang Jun; Kim, Kang Il; Hur, Dong; Jeong, Ho Yeon

    2016-03-01

    The purpose of the present study was to compare the clinical and radiographic results and survival rates between computer-assisted and conventional closing wedge high tibial osteotomies (HTOs). Data from a consecutive cohort comprising 75 computer-assisted HTOs and 75 conventional HTOs were retrospectively reviewed. The Knee Society knee and function scores, Hospital for Special Surgery (HSS) score, and femorotibial angle (FTA) were compared between the two groups. Survival rates, with procedure failure as the endpoint, were also compared. The knee and function scores at one year postoperatively were slightly better in the computer-assisted group than in the conventional group (90.1 vs. 86.1 and 82.0 vs. 76.0, respectively). The HSS scores at one year postoperatively were slightly better for the computer-assisted HTOs than for the conventional HTOs (89.5 vs. 81.8). The inlier rate of the postoperative FTA was higher in the computer-assisted group than in the conventional HTO group (88.0% vs. 58.7%), and the mean postoperative FTA was greater in the computer-assisted group than in the conventional HTO group (valgus 9.0° vs. valgus 7.6°, p<0.001). The five- and 10-year survival rates were 97.1% and 89.6%, respectively. No difference was detected in nine-year survival rates between the two groups (p=0.369), although the clinical and radiographic results were better in the computer-assisted group than in the conventional HTO group. Mid-term survival rates did not differ between computer-assisted and conventional HTOs. A comparative analysis of longer-term survival rates is required to demonstrate the long-term benefit of computer-assisted HTO. III. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Developing Crash-Resistant Electronic Services.

    ERIC Educational Resources Information Center

    Almquist, Arne J.

    1997-01-01

    Libraries' dependence on computers can lead to frustrations for patrons and staff during downtime caused by computer system failures. Advice for reducing the number of crashes is provided, focusing on improved training for systems staff, better management of library systems, and the development of computer systems using quality components which…

  7. Point-shear wave elastography predicts liver hypertrophy after portal vein embolization and postoperative liver failure.

    PubMed

    Hocquelet, A; Frulio, N; Gallo, G; Laurent, C; Papadopoulos, P; Salut, C; Denys, A; Trillaud, H

    2018-06-01

    To correlate point-shear wave elastography (SWE) with liver hypertrophy after right portal vein embolization (RPVE) and to determine its usefulness in predicting postoperative liver failure in patients undergoing partial liver resection. Point-SWE was performed the day before RPVE in 56 patients (41 men) with a median age of 66 years. The percentage (%) of future remnant liver (FRL) volume increase was defined as (%FRLpost - %FRLpre) / %FRLpre × 100 and assessed on computed tomography performed 4 weeks after RPVE. Median (range) %FRLpre and %FRLpost were, respectively, 31.5% (12-48%) and 41% (23-61%) (P<0.001), with a median %FRL volume increase of 25.6% (-8; 123%). SWE correlated with the %FRL volume increase (r=-0.510; P<0.001). SWV (P=0.003) and %FRLpre (P<0.001) were associated with the %FRL volume increase in the multivariate regression analysis. Forty-three patients (77%) were operated on. Postoperative liver failure occurred in 14 patients (32.5%). Median SWE was different between the groups with (1.68 m/s) and without liver failure (1.07 m/s) (P=0.018). The AUROC of SWE for predicting liver failure was 0.724, with a best cut-off of 1.31 m/s, corresponding to a sensitivity of 21%, specificity of 96%, positive predictive value of 75%, and negative predictive value of 72%. SWE was the single independent preoperative variable associated with liver failure. SWE assessed by point-SWE is a simple and useful tool to predict the FRL volume increase and postoperative liver failure in a population of patients with liver tumor. Copyright © 2018 Société française de radiologie. Published by Elsevier Masson SAS. All rights reserved.

  8. Cycles till failure of silver-zinc cells with competing failure modes - Preliminary data analysis

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.; Leibecki, H. F.; Bozek, J. M.

    1980-01-01

    The analysis of data on cycles to failure of silver-zinc electrochemical cells with competing failure modes is presented. The test ran 129 cells through charge-discharge cycles until failure; the preliminary data analysis consisted of a response surface estimate of life. Batteries fail through a low-voltage condition or an internal shorting condition; a competing failure modes analysis was made using maximum likelihood estimation for the extreme value life distribution. Extensive residual plotting and probability plotting were used to verify data quality and the selection of the model.
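
    The sketch below illustrates, in Python, the core of a competing-modes maximum likelihood fit: cycles to failure by the other mode are treated as right-censored for the mode being fitted. A Weibull form (the smallest-extreme-value distribution of log-life) is used here, and the data, starting values, and parameterization are hypothetical, not the study's actual model or results.

        import numpy as np
        from scipy.optimize import minimize

        def neg_log_lik(params, t, observed):
            """Weibull negative log-likelihood with right-censoring: cells that
            failed by the competing mode contribute only their survival term."""
            shape, scale = np.exp(params)                    # exp-transform keeps parameters positive
            z = t / scale
            logpdf = np.log(shape / scale) + (shape - 1.0) * np.log(z) - z ** shape
            logsurv = -z ** shape
            return -np.sum(np.where(observed, logpdf, logsurv))

        # hypothetical cycles to failure and an indicator that the low-voltage mode caused failure
        cycles = np.array([120.0, 150.0, 200.0, 90.0, 300.0, 250.0, 175.0, 220.0])
        low_voltage = np.array([1, 0, 1, 1, 0, 1, 0, 1], dtype=bool)   # False = shorted (censored here)

        fit = minimize(neg_log_lik, x0=np.log([1.5, 200.0]), args=(cycles, low_voltage))
        print("MLE shape and scale for the low-voltage mode:", np.exp(fit.x))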

  9. Planetary-Scale Geospatial Data Analysis Techniques in Google's Earth Engine Platform (Invited)

    NASA Astrophysics Data System (ADS)

    Hancher, M.

    2013-12-01

    Geoscientists have more and more access to new tools for large-scale computing. With any tool, some tasks are easy and other tasks hard. It is natural to look to new computing platforms to increase the scale and efficiency of existing techniques, but there is a more exciting opportunity to discover and develop a new vocabulary of fundamental analysis idioms that are made easy and effective by these new tools. Google's Earth Engine platform is a cloud computing environment for earth data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog includes a nearly complete archive of scenes from Landsat 4, 5, 7, and 8 that have been processed by the USGS, as well as a wide variety of other remotely-sensed and ancillary data products. Earth Engine supports a just-in-time computation model that enables real-time preview during algorithm development and debugging as well as during experimental data analysis and open-ended data exploration. Data processing operations are performed in parallel across many computers in Google's datacenters. The platform automatically handles many traditionally onerous data management tasks, such as data format conversion, reprojection, resampling, and associating image metadata with pixel data. Early applications of Earth Engine have included the development of Google's global cloud-free fifteen-meter base map and global multi-decadal time-lapse animations, as well as numerous large and small experimental analyses by scientists from a range of academic, government, and non-governmental institutions, working in a wide variety of application areas including forestry, agriculture, urban mapping, and species habitat modeling. Patterns in the successes and failures of these early efforts have begun to emerge, sketching the outlines of a new set of simple and effective approaches to geospatial data analysis.
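
    For flavor, a small Earth Engine Python API sketch of the just-in-time model is shown below: the composite and NDVI are described client-side and computed server-side in parallel, and only the final reduceRegion call pulls a result back. This assumes the earthengine-api package is installed and authenticated (newer versions may require a Cloud project in ee.Initialize()); the site coordinates are hypothetical, and the collection ID and band names reflect the public Landsat 8 catalog and may need adjusting.

        import ee

        ee.Initialize()

        point = ee.Geometry.Point(-122.08, 37.42)                      # hypothetical site
        collection = (ee.ImageCollection('LANDSAT/LC08/C02/T1_TOA')
                      .filterBounds(point)
                      .filterDate('2020-01-01', '2020-12-31'))

        # Server-side, parallel computation: median composite and NDVI
        composite = collection.median()
        ndvi = composite.normalizedDifference(['B5', 'B4'])

        # Only this call pulls a (small) result back to the client
        print(ndvi.reduceRegion(ee.Reducer.mean(), point.buffer(500), scale=30).getInfo())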

  10. Computers in medicine: liability issues for physicians.

    PubMed

    Hafner, A W; Filipowicz, A B; Whitely, W P

    1989-07-01

    Physicians routinely use computers to store, access, and retrieve medical information. As computer use becomes even more widespread in medicine, failure to utilize information systems may be seen as a violation of professional custom and lead to findings of professional liability. Even when a technology is not widespread, failure to incorporate it into medical practice may give rise to liability if the technology is accessible to the physician and reduces risk to the patient. Improvement in the availability of medical information sources imposes a greater burden on the physician to keep current and to obtain informed consent from patients. To routinely perform computer-assisted literature searches for informed consent and diagnosis is 'good medicine'. Clinical and diagnostic applications of computer technology now include computer-assisted decision making with the aid of sophisticated databases. Although such systems will expand the knowledge base and competence of physicians, malfunctioning software raises a major liability question. Also, complex computer-driven technology is used in direct patient care. Defective or improperly used hardware or software can lead to patient injury, thus raising additional complicated questions of professional liability and product liability.

  11. Palliative care consultations for heart failure patients: how many, when, and why?

    PubMed

    Bakitas, Marie; Macmartin, Meredith; Trzepkowski, Kenneth; Robert, Alina; Jackson, Lisa; Brown, Jeremiah R; Dionne-Odom, James N; Kono, Alan

    2013-03-01

    In preparation for development of a palliative care intervention for patients with heart failure (HF) and their caregivers, we aimed to characterize the HF population receiving palliative care consultations (PCCs). Reviewing charts from January 2006 to April 2011, we analyzed HF patient data including demographic and clinical characteristics, Seattle Heart Failure scores, and PCCs. Using Atlas qualitative software, we conducted a content analysis of PCC notes to characterize palliative care assessment and treatment recommendations. There were 132 HF patients with PCCs, of whom 37% were New York Heart Association functional class III and 50% functional class IV. Retrospectively computed Seattle Heart Failure scores predicted a 1-year mortality of 29% [interquartile range (IQR) 19-45] and a median life expectancy of 2.8 years [IQR 1.6-4.2]. Of the 132 HF patients, 115 (87%) had died by the time of the audit. In that cohort the actual median time from PCC to death was 21 [IQR 3-125] days. Reasons documented for PCCs included goals of care (80%), decision making (24%), hospice referral/discussion (24%), and symptom management (8%). Despite recommendations, PCCs are not being initiated until the last month of life. Earlier referral for PCC may allow for integration of a broader array of palliative care services. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Elastic and failure response of imperfect three-dimensional metallic lattices: the role of geometric defects induced by Selective Laser Melting

    NASA Astrophysics Data System (ADS)

    Liu, Lu; Kamm, Paul; García-Moreno, Francisco; Banhart, John; Pasini, Damiano

    2017-10-01

    This paper examines three-dimensional metallic lattices with regular octet and rhombicuboctahedron units fabricated with geometric imperfections via Selective Laser Sintering. We use X-ray computed tomography to capture the morphology, location, and distribution of process-induced defects with the aim of studying their role in the elastic response, damage initiation, and failure evolution under quasi-static compression. Testing results from in-situ compression tomography show that each lattice exhibits a distinct failure mechanism that is governed not only by cell topology but also by geometric defects induced by additive manufacturing. Extracted from X-ray tomography images, the statistical distributions of three sets of defects, namely strut waviness, strut thickness variation, and strut oversizing, are used to develop numerical models of statistically representative lattices with imperfect geometry. Elastic and failure responses are predicted within 10% agreement of the experimental data. In addition, a computational study is presented to shed light on the relationship between the amplitude of selected defects and the reduction of elastic properties compared to their nominal values. The evolution of failure mechanisms is also explained with respect to strut oversizing, a parameter that can critically cause failure mode transitions that are not visible in defect-free lattices.

  13. Refusal to participate in heart failure studies: do age and gender matter?

    PubMed Central

    Harrison, Jordan M; Jung, Miyeon; Lennie, Terry A; Moser, Debra K; Smith, Dean G; Dunbar, Sandra B; Ronis, David L; Koelling, Todd M; Giordani, Bruno; Riley, Penny L; Pressler, Susan J

    2018-01-01

    Aims and objectives: The objective of this retrospective study was to evaluate reasons heart failure patients decline study participation, to inform interventions to improve enrollment. Background: Failure to enrol older heart failure patients (age > 65) and women in studies may lead to sampling bias, threatening study validity. Design: This study was a retrospective analysis of refusal data from four heart failure studies that enrolled 788 patients in four states. Methods: Chi-square tests and a pooled t-test were computed to analyse refusal data (n = 300) obtained from heart failure patients who were invited to participate in one of the four studies but declined. Results: Refusal reasons from 300 patients (66% men, mean age 65.33) included: not interested (n = 163), too busy (n = 64), travel burden (n = 50), too sick (n = 38), family problems (n = 14), too much commitment (n = 13) and privacy concerns (n = 4). Chi-square analyses showed no differences in the frequency of reasons (p > 0.05) between men and women. Patients who refused were older, on average, than study participants. Conclusions: Some reasons were patient-dependent; others were study-dependent. With 'not interested' as the most common reason, cited by over 50% of patients who declined, recruitment measures should be targeted at stimulating patients' interest. Additional efforts may be needed to recruit older participants. However, reasons for refusal were consistent regardless of gender. Relevance to clinical practice: Heart failure researchers should proactively approach a greater proportion of women and patients over age 65. With no gender differences in type of reasons for refusal, similar recruitment strategies can be used for men and women. However, enrolment of a representative proportion of women in heart failure studies has proven elusive and may require significant effort from researchers. Employing strategies to stimulate interest in studies is essential for recruiting heart failure patients, who overwhelmingly cited lack of interest as the top reason for refusal. PMID:26914834

  14. A real time microcomputer implementation of sensor failure detection for turbofan engines

    NASA Technical Reports Server (NTRS)

    Delaat, John C.; Merrill, Walter C.

    1989-01-01

    An algorithm was developed which detects, isolates, and accommodates sensor failures using analytical redundancy. The performance of this algorithm was demonstrated on a full-scale F100 turbofan engine. The algorithm was implemented in real time on a microprocessor-based controls computer which includes parallel processing and high-order language programming. Parallel processing was used to achieve the required computational power for the real-time implementation. A high-order language was used in order to reduce the programming and maintenance costs of the algorithm implementation software. The sensor failure algorithm was combined with an existing multivariable control algorithm to give a complete control implementation with sensor analytical redundancy. The real-time microprocessor implementation of the algorithm, which resulted in the successful completion of the engine demonstration, is described.

  15. A dual-mode generalized likelihood ratio approach to self-reorganizing digital flight control system design

    NASA Technical Reports Server (NTRS)

    Bueno, R.; Chow, E.; Gershwin, S. B.; Willsky, A. S.

    1975-01-01

    The research is reported on the problems of failure detection and reliable system design for digital aircraft control systems. Failure modes, cross detection probability, wrong time detection, application of performance tools, and the GLR computer package are discussed.

  16. Advanced cloud fault tolerance system

    NASA Astrophysics Data System (ADS)

    Sumangali, K.; Benny, Niketa

    2017-11-01

    Cloud computing has become a prevalent on-demand service on the internet to store, manage and process data. A pitfall that accompanies cloud computing is the set of failures that can be encountered in the cloud. To overcome these failures, we require a fault tolerance mechanism to abstract faults away from users. We have proposed a fault-tolerant architecture, which is a combination of proactive and reactive fault tolerance. This architecture essentially increases the reliability and the availability of the cloud. In the future, we would like to compare evaluations of our proposed architecture with existing architectures and further improve it.

  17. Bayesian design of decision rules for failure detection

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Willsky, A. S.

    1984-01-01

    The formulation of the decision making process of a failure detection algorithm as a Bayes sequential decision problem provides a simple conceptualization of the decision rule design problem. As the optimal Bayes rule is not computable, a methodology that is based on the Bayesian approach and aimed at a reduced computational requirement is developed for designing suboptimal rules. A numerical algorithm is constructed to facilitate the design and performance evaluation of these suboptimal rules. The result of applying this design methodology to an example shows that this approach is potentially a useful one.

  18. Data processing device test apparatus and method therefor

    DOEpatents

    Wilcox, Richard Jacob; Mulig, Jason D.; Eppes, David; Bruce, Michael R.; Bruce, Victoria J.; Ring, Rosalinda M.; Cole, Jr., Edward I.; Tangyunyong, Paiboon; Hawkins, Charles F.; Louie, Arnold Y.

    2003-04-08

    A method and apparatus for testing data processing devices are implemented. The test mechanism isolates critical paths by correlating a scanning microscope image with a selected speed path failure. A trigger signal having a preselected value is generated at the start of each pattern vector. The sweep of the scanning microscope is controlled by a computer, which also receives and processes the image signals returned from the microscope. The value of the trigger signal is correlated with a set of pattern lines being driven on the device under test (DUT). The trigger is either asserted or negated depending on the detection of a pattern line failure and the particular line that failed. In response to the detection of the particular speed path failure being characterized, and to the trigger signal, the control computer overlays a mask on the image of the DUT. The overlaid image provides a visual correlation of the failure with the structural elements of the DUT at the level of resolution of the microscope itself.

  19. Numerical simulations of SHPB experiments for the dynamic compressive strength and failure of ceramics

    NASA Astrophysics Data System (ADS)

    Anderson, Charles E., Jr.; O'Donoghue, Padraic E.; Lankford, James; Walker, James D.

    1992-06-01

    Complementary to a study of the compressive strength of ceramic as a function of strain rate and confinement, numerical simulations of the split-Hopkinson pressure bar (SHPB) experiments have been performed using the two-dimensional wave propagation computer program HEMP. The numerical effort had two main thrusts. Firstly, the interpretation of the experimental data relies on several assumptions. The numerical simulations were used to investigate the validity of these assumptions. The second part of the effort focused on computing the idealized constitutive response of a ceramic within the SHPB experiment. These numerical results were then compared against experimental data. Idealized models examined included a perfectly elastic material, an elastic-perfectly plastic material, and an elastic material with failure. Post-failure material was modeled as having either no strength, or a strength proportional to the mean stress. The effects of confinement were also studied. Conclusions concerning the dynamic behavior of a ceramic up to and after failure are drawn from the numerical study.

  20. Economic consequences of aviation system disruptions: A reduced-form computable general equilibrium analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Zhenhua; Rose, Adam Z.; Prager, Fynnwin

    The state-of-the-art approach to economic consequence analysis (ECA) is computable general equilibrium (CGE) modeling. However, such models contain thousands of equations and cannot readily be incorporated into computerized systems used by policy analysts to yield estimates of the economic impacts of various types of transportation system failures due to natural hazards, human-related attacks, or technological accidents. This paper presents a reduced-form approach to simplify the analytical content of CGE models to make them more transparent and enhance their utilization potential. The reduced-form CGE analysis is conducted by first running simulations one hundred times, varying key parameters, such as the magnitude of the initial shock, duration, location, remediation, and resilience, according to a Latin Hypercube sampling procedure. Statistical analysis is then applied to the “synthetic data” results in the form of both ordinary least squares and quantile regression. The analysis yields linear equations that are incorporated into a computerized system and utilized along with Monte Carlo simulation methods for propagating uncertainties in economic consequences. Although our demonstration and discussion focuses on aviation system disruptions caused by terrorist attacks, the approach can be applied to a broad range of threat scenarios.
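
    A minimal Python sketch of the reduced-form workflow is given below: a Latin Hypercube sample of scenario parameters, a stand-in for the full CGE run, and an ordinary least squares fit on the resulting synthetic data. The parameter names, bounds, and stand-in loss function are hypothetical and only illustrate the mechanics; quantile regression is omitted.

        import numpy as np
        from scipy.stats import qmc

        rng = np.random.default_rng(7)

        # 1. Latin Hypercube sample of scenario parameters (hypothetical bounds:
        #    shock magnitude in $B, duration in days, resilience fraction)
        sampler = qmc.LatinHypercube(d=3, seed=7)
        unit = sampler.random(n=100)
        lower, upper = np.array([0.1, 1.0, 0.0]), np.array([5.0, 90.0, 0.9])
        X = qmc.scale(unit, lower, upper)

        # 2. Stand-in for the full CGE model: total economic loss for each sampled scenario.
        #    In the actual study each row would require one CGE simulation.
        y = 0.8 * X[:, 0] * X[:, 1] * (1.0 - X[:, 2]) + rng.normal(0.0, 1.0, 100)

        # 3. Reduced form: ordinary least squares on the synthetic data yields a linear
        #    equation cheap enough to embed in a policy-analysis tool.
        A = np.column_stack([np.ones(len(y)), X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        print("reduced-form coefficients (intercept, shock, duration, resilience):", coef)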
