Composite load spectra for select space propulsion structural components
NASA Technical Reports Server (NTRS)
Newell, J. F.; Ho, H. W.; Kurth, R. E.
1991-01-01
The work performed to develop composite load spectra (CLS) for the Space Shuttle Main Engine (SSME) using probabilistic methods is described. Three probabilistic methods were implemented for the engine system influence model. RASCAL was chosen as the principal method because most component load models were implemented with it. Validation of RASCAL showed that accuracy comparable to the Monte Carlo method can be obtained if a sufficiently large bin size is used. Generic probabilistic models were developed and implemented for load calculations using the probabilistic methods discussed above. Each engine mission, whether a real flight or a test, has three mission phases: the engine start transient, the steady state phase, and the engine cutoff transient. Power level and engine operating inlet conditions change during a mission. The load calculation module provides steady-state and quasi-steady-state calculation procedures with a duty-cycle-data option; the quasi-steady-state procedure is used for the engine transient phases. In addition, a few generic probabilistic load models were developed for special conditions. These include the fixed transient spike model, the Poisson arrival transient spike model, and the rare event model. These generic probabilistic load models provide sufficient latitude for simulating loads under specific conditions. For the SSME, four components were selected for application of the CLS program: turbine blades, transfer ducts, the LOX posts, and the high-pressure oxidizer turbopump (HPOTP) discharge duct. The simulated loads include static and dynamic pressure loads for all four components, centrifugal force for the turbine blades, thermal loads (temperatures) for all four components, and structural vibration loads for the ducts and LOX posts.
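The bin-size remark above can be illustrated with a toy discrete-probability calculation in the spirit of RASCAL. The sketch below is a generic binned convolution of two assumed normal load components, not the RASCAL implementation; the load names and distributions are invented for illustration:

```python
import numpy as np

def binned_sum_distribution(samples_a, samples_b, n_bins):
    """Approximate the distribution of A + B by discretizing each variable
    into n_bins bins and convolving the bin probabilities, a generic
    discrete-probability approach in the spirit of RASCAL."""
    def discretize(samples):
        counts, edges = np.histogram(samples, bins=n_bins)
        centers = 0.5 * (edges[:-1] + edges[1:])
        return centers, counts / counts.sum()

    ca, pa = discretize(samples_a)
    cb, pb = discretize(samples_b)
    values = (ca[:, None] + cb[None, :]).ravel()   # all pairwise bin-center sums
    weights = (pa[:, None] * pb[None, :]).ravel()  # product probabilities
    return values, weights

rng = np.random.default_rng(0)
a = rng.normal(100.0, 10.0, 50_000)  # an assumed pressure-load component
b = rng.normal(50.0, 5.0, 50_000)    # an assumed thermal-load component

vals, wts = binned_sum_distribution(a, b, n_bins=64)
binned_mean = float(np.sum(vals * wts))
mc_mean = float(np.mean(a + b))      # direct Monte Carlo estimate
print(f"binned mean = {binned_mean:.2f}, Monte Carlo mean = {mc_mean:.2f}")
```

With enough bins the binned estimate tracks the Monte Carlo result closely, which is the accuracy trade-off the abstract refers to.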
Combined loading criteria influence on structural performance
NASA Technical Reports Server (NTRS)
Kuchta, B. J.; Sealey, D. M.; Howell, L. J.
1972-01-01
An investigation was conducted to determine the influence of combined loading criteria on space shuttle structural performance. The study consisted of four primary phases: (1) determination of the sensitivity of structural weight to various loading parameters associated with the space shuttle; (2) determination of the sensitivity of structural weight to various levels of loading parameter variability and probability; (3) determination of shuttle mission loading parameter variability and probability as a function of design evolution, and identification of those loading parameters for which inadequate data exist; and (4) determination of rational methods of combining both deterministic time-varying and probabilistic loading parameters to provide realistic design criteria. The study results are presented.
Probabilistic simulation of stress concentration in composite laminates
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Murthy, P. L. N.; Liaw, L.
1993-01-01
A computational methodology is described to probabilistically simulate the stress concentration factors in composite laminates. This new approach consists of coupling probabilistic composite mechanics with probabilistic finite element structural analysis. The probabilistic composite mechanics is used to probabilistically describe all the uncertainties inherent in composite material properties while probabilistic finite element is used to probabilistically describe the uncertainties associated with methods to experimentally evaluate stress concentration factors such as loads, geometry, and supports. The effectiveness of the methodology is demonstrated by using it to simulate the stress concentration factors in composite laminates made from three different composite systems. Simulated results match experimental data for probability density and for cumulative distribution functions. The sensitivity factors indicate that the stress concentration factors are influenced by local stiffness variables, by load eccentricities and by initial stress fields.
Probabilistic Simulation of Stress Concentration in Composite Laminates
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Murthy, P. L. N.; Liaw, D. G.
1994-01-01
A computational methodology is described to probabilistically simulate the stress concentration factors (SCF's) in composite laminates. This new approach consists of coupling probabilistic composite mechanics with probabilistic finite element structural analysis. The composite mechanics is used to probabilistically describe all the uncertainties inherent in composite material properties, whereas the finite element is used to probabilistically describe the uncertainties associated with methods to experimentally evaluate SCF's, such as loads, geometry, and supports. The effectiveness of the methodology is demonstrated by using it to simulate the SCF's in three different composite laminates. Simulated results match experimental data for probability density and for cumulative distribution functions. The sensitivity factors indicate that the SCF's are influenced by local stiffness variables, by load eccentricities, and by initial stress fields.
Probabilistic structural analysis methods for space propulsion system components
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1986-01-01
The development of a three-dimensional inelastic analysis methodology for the Space Shuttle main engine (SSME) structural components is described. The methodology is composed of: (1) composite load spectra, (2) probabilistic structural analysis methods, (3) the probabilistic finite element theory, and (4) probabilistic structural analysis. The methodology has led to significant technical progress in several important aspects of probabilistic structural analysis. The program and accomplishments to date are summarized.
Probabilistic load simulation: Code development status
NASA Astrophysics Data System (ADS)
Newell, J. F.; Ho, H.
1991-05-01
The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are induced in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system that also includes probabilistic structural analyses, reliability, and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to implement a knowledge-based system coupled with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs. The load simulation module performs the numerical computation to generate the probabilistic loads with load information supplied from the CLS knowledge base.
Probabilistic sizing of laminates with uncertainties
NASA Technical Reports Server (NTRS)
Shah, A. R.; Liaw, D. G.; Chamis, C. C.
1993-01-01
A reliability-based design methodology for laminate sizing and configuration for a special case of composite structures is described. The methodology combines probabilistic composite mechanics with probabilistic structural analysis. The uncertainties of the constituent materials (fiber and matrix) are simulated using probabilistic theory to predict macroscopic behavior. Uncertainties in the degradation of composite material properties are included in this design methodology. A multi-factor interaction equation is used to evaluate load- and environment-dependent degradation of the composite material properties at the micromechanics level. The methodology is integrated into the computer code IPACS (Integrated Probabilistic Assessment of Composite Structures). Versatility of this design approach is demonstrated by performing a multi-level probabilistic analysis to size the laminates for design structural reliability. The results show that laminate configurations can be selected to improve the structural reliability from three failures in 1000 to no failures in one million. Results also show that the laminates with the highest reliability are the least sensitive to the loading conditions.
Probabilistic Analysis of a Composite Crew Module
NASA Technical Reports Server (NTRS)
Mason, Brian H.; Krishnamurthy, Thiagarajan
2011-01-01
An approach for conducting reliability-based analysis (RBA) of a Composite Crew Module (CCM) is presented. The goal is to identify and quantify the benefits of probabilistic design methods for the CCM and future space vehicles. The coarse finite element model from a previous NASA Engineering and Safety Center (NESC) project is used as the baseline deterministic analysis model to evaluate the performance of the CCM using a strength-based failure index. The first step in the probabilistic analysis process is the determination of the uncertainty distributions for key parameters in the model. Analytical data from water landing simulations are used to develop an uncertainty distribution, but such data were unavailable for other load cases. The uncertainty distributions for the other load scale factors and the strength allowables are generated based on assumed coefficients of variation. Probability of first-ply failure is estimated using three methods: the first order reliability method (FORM), Monte Carlo simulation, and conditional sampling. Results for the three methods were consistent. The reliability is shown to be driven by first-ply failure in one region of the CCM at the high altitude abort load set. The final predicted probability of failure is on the order of 10^-11 due to the conservative nature of the factors of safety on the deterministic loads.
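The Monte Carlo step described above can be sketched generically. The distributions and coefficients of variation below are invented stand-ins, not the CCM study's values:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Invented stand-ins for the study's random inputs: a load scale factor
# and a strength allowable, each with an assumed coefficient of variation.
load_scale = rng.normal(1.0, 0.10, n)   # load scale factor, CoV = 10%
strength = rng.normal(1.3, 0.065, n)    # strength allowable, CoV = 5%

# Strength-based failure index: FI = load / strength; first-ply failure
# is counted whenever FI >= 1.
failure_index = load_scale / strength
p_fail = float(np.mean(failure_index >= 1.0))
print(f"Monte Carlo estimate of first-ply failure probability: {p_fail:.1e}")
```

For a probability as small as the study's 10^-11, plain Monte Carlo would need far too many samples, which is why FORM and conditional sampling are used alongside it.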
Dynamic Probabilistic Instability of Composite Structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2009-01-01
A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower is the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio and the fiber longitudinal modulus, dynamic load and loading rate are the dominant uncertainties, in that order.
NASA Technical Reports Server (NTRS)
Rajagopal, Kadambi R.; DebChaudhury, Amitabha; Orient, George
2000-01-01
This report describes a probabilistic structural analysis performed to determine the probabilistic structural response under fluctuating random pressure loads for the Space Shuttle Main Engine (SSME) turnaround vane. It uses a newly developed frequency- and distance-dependent correlation model that has features to model the decay phenomena along the flow and across the flow, with the capability to introduce a phase delay. The analytical results are compared using two computer codes, SAFER (Spectral Analysis of Finite Element Responses) and NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), and with experimentally observed strain gage data. The computer code NESSUS, with an interface to a subset of the Composite Load Spectra (CLS) code, is used for the probabilistic analysis. A fatigue code was used to calculate fatigue damage due to the random pressure excitation. The random variables modeled include engine system primitive variables that influence the operating conditions, convection velocity coefficient, stress concentration factor, structural damping, and thickness of the inner and outer vanes. The need for an appropriate correlation model in addition to the magnitude of the PSD is emphasized. The study demonstrates that correlation characteristics even under random pressure loads are capable of causing resonance-like effects for some modes. The study identifies the important variables that contribute to the structural alternate stress response and drive the fatigue damage for the new design. Since the alternate stress for the new redesign is less than the endurance limit for the material, the damage due to high-cycle fatigue is negligible.
Probabilistic structural analysis methods of hot engine structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Hopkins, D. A.
1989-01-01
Development of probabilistic structural analysis methods for hot engine structures at Lewis Research Center is presented. Three elements of the research program are: (1) composite load spectra methodology; (2) probabilistic structural analysis methodology; and (3) probabilistic structural analysis application. Recent progress includes: (1) quantification of the effects of uncertainties for several variables on high pressure fuel turbopump (HPFT) turbine blade temperature, pressure, and torque of the space shuttle main engine (SSME); (2) the evaluation of the cumulative distribution function for various structural response variables based on assumed uncertainties in primitive structural variables; and (3) evaluation of the failure probability. Collectively, the results demonstrate that the structural durability of hot engine structural components can be effectively evaluated in a formal probabilistic/reliability framework.
Alternate Methods in Refining the SLS Nozzle Plug Loads
NASA Technical Reports Server (NTRS)
Burbank, Scott; Allen, Andrew
2013-01-01
Numerical analysis has shown that the SLS nozzle environmental barrier (nozzle plug) design is inadequate for the prelaunch condition, which consists of two dominant loads: 1) the main engine startup pressure and 2) an environmentally induced pressure. Efforts to reduce load conservatisms included a dynamic analysis which showed a 31% higher safety factor compared to the standard static analysis. The environmental load is typically approached with a deterministic method using the worst possible combinations of pressures and temperatures. An alternate probabilistic approach, utilizing the distributions of pressures and temperatures, resulted in a 54% reduction in the environmental pressure load. A Monte Carlo simulation of the environmental load that used five years of historical pressure and temperature data supported the results of the probabilistic analysis, indicating the probabilistic load is reflective of a 3-sigma condition (1 in 370 probability). Utilizing the probabilistic load analysis eliminated excessive conservatisms and will prevent a future overdesign of the nozzle plug. Employing a similar probabilistic approach to other design and analysis activities can result in realistic yet adequately conservative solutions.
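The contrast between stacking worst-case extremes and taking a joint 3-sigma (1-in-370) quantile can be sketched with synthetic data. The load model and distributions below are assumptions for illustration, not the SLS nozzle-plug model:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500_000

# Hypothetical stand-ins for historical ambient data (the study used
# five years of measured pressures and temperatures).
pressure = rng.normal(101.3, 0.8, n)      # ambient pressure, kPa
temperature = rng.normal(293.0, 8.0, n)   # ambient temperature, K

# Illustrative environmental load model: load grows with pressure and
# falls with temperature (an assumed functional form, not the SLS model).
load = pressure * (300.0 / temperature)

# Deterministic worst case: stack the individual 3-sigma extremes.
worst_case = (101.3 + 3 * 0.8) * (300.0 / (293.0 - 3 * 8.0))

# Probabilistic load: the value exceeded with ~1-in-370 probability,
# i.e. the (1 - 1/370) quantile of the simulated load distribution.
prob_load = float(np.quantile(load, 1.0 - 1.0 / 370.0))
print(f"probabilistic load {prob_load:.1f} vs stacked worst case {worst_case:.1f}")
```

Because both extremes rarely occur together, the joint quantile sits well below the stacked worst case, which is the kind of conservatism reduction the abstract reports.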
Life Predicted in a Probabilistic Design Space for Brittle Materials With Transient Loads
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Palfi, Tamas; Reh, Stefan
2005-01-01
Analytical techniques have progressively become more sophisticated, and now we can consider the probabilistic nature of the entire space of random input variables on the lifetime reliability of brittle structures. This was demonstrated with NASA's CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code combined with the commercially available ANSYS/Probabilistic Design System (ANSYS/PDS), a probabilistic analysis tool that is an integral part of the ANSYS finite-element analysis program. ANSYS/PDS allows probabilistic loads, component geometry, and material properties to be considered in the finite-element analysis. CARES/Life predicts the time-dependent probability of failure of brittle material structures under generalized thermomechanical loading, such as that found in a turbine engine hot section. Glenn researchers coupled ANSYS/PDS with CARES/Life to assess the effects of the stochastic variables of component geometry, loading, and material properties on the predicted life of the component for fully transient thermomechanical loading and cyclic loading.
Meso-Scale Modeling of Spall in a Heterogeneous Two-Phase Material
DOE Office of Scientific and Technical Information (OSTI.GOV)
Springer, Harry Keo
2008-07-11
The influence of the heterogeneous second-phase particle structure and applied loading conditions on the ductile spall response of a model two-phase material was investigated. Quantitative metallography, three-dimensional (3D) meso-scale simulations (MSS), and small-scale spall experiments provided the foundation for this study. Nodular ductile iron (NDI) was selected as the model two-phase material for this study because it contains a large and readily identifiable second-phase particle population. Second-phase particles serve as the primary void nucleation sites in NDI and are, therefore, central to its ductile spall response. A mathematical model was developed for the NDI second-phase volume fraction that accounted for the non-uniform particle size and spacing distributions within the framework of a length-scale dependent Gaussian probability distribution function (PDF). This model was based on novel multiscale sampling measurements. A methodology was also developed for the computer generation of representative particle structures based on their mathematical description, enabling 3D MSS. MSS were used to investigate the effects of second-phase particle volume fraction and particle size, loading conditions, and physical domain size of simulation on the ductile spall response of a model two-phase material. MSS results reinforce existing model predictions, where the spall strength metric (SSM) logarithmically decreases with increasing particle volume fraction. While SSM predictions are nearly independent of applied load conditions at lower loading rates, which is consistent with previous studies, loading dependencies are observed at higher loading rates. There is also a logarithmic decrease in SSM with increasing (initial) void size. A model was developed to account for the effects of loading rate, particle size, matrix sound-speed, and, in the NDI-specific case, the probabilistic particle volume fraction model.
Small-scale spall experiments were designed and executed for the purpose of validating closely-coupled 3D MSS. While the spall strength is nearly independent of specimen thickness, the fragment morphology varies widely. Detailed MSS demonstrate that the interactions between the tensile release waves are altered by specimen thickness and that these interactions are primarily responsible for fragment formation. MSS also provided insights on the regional amplification of damage, which enables the development of predictive void evolution models.
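The reported logarithmic dependence of the spall strength metric on particle volume fraction can be written as a simple fitted form. The coefficients below are illustrative placeholders, not values fitted to the NDI data:

```python
import numpy as np

# A hypothetical fit of the reported trend: the spall strength metric (SSM)
# decreases logarithmically with second-phase particle volume fraction f.
# s0, k, and f_ref are illustrative coefficients, not NDI fit values.
def spall_strength_metric(f, s0=3.2, k=0.45, f_ref=0.01):
    return s0 - k * np.log(f / f_ref)

fractions = np.array([0.01, 0.05, 0.10, 0.15])
ssm = spall_strength_metric(fractions)
for f, s in zip(fractions, ssm):
    print(f"f = {f:.2f}  ->  SSM = {s:.2f} (arbitrary units)")
```

Any functional form of this family reproduces the qualitative MSS result: each doubling of the particle volume fraction costs a fixed decrement of spall strength.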
Pérez, M A
2012-12-01
Probabilistic analyses allow the effect of uncertainty in system parameters to be determined. In the literature, many researchers have investigated static loading effects on dental implants. However, the intrinsic variability and uncertainty of most of the main problem parameters are not accounted for. The objective of this research was to apply a probabilistic computational approach to predict the fatigue life of three different commercial dental implants considering the variability and uncertainty in their fatigue material properties and loading conditions. For one of the commercial dental implants, the influence of its diameter on the fatigue life performance was also studied. This stochastic technique was based on the combination of a probabilistic finite element method (PFEM) and a cumulative damage approach known as the B-model. After 6 million loading cycles, local failure probabilities of 0.3, 0.4 and 0.91 were predicted for the Lifecore, Avinent and GMI implants, respectively (3.75 mm diameter). The influence of the diameter for the GMI implant was studied, and the results predicted a local failure probability of 0.91 and 0.1 for the 3.75 mm and 5 mm diameters, respectively. In all cases the highest failure probability was located at the upper screw-threads. Therefore, the probabilistic methodology proposed herein may be a useful tool for performing a qualitative comparison between different commercial dental implants.
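A minimal Monte Carlo version of such a probabilistic fatigue-life estimate, using an assumed Basquin S-N relation with random material strength and load amplitude, might look like the sketch below. None of the values are from the cited implants, and the B-model itself is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Assumed Basquin S-N relation N_f = (S / A)**(1/b), with a lognormal
# fatigue strength coefficient A and a normally distributed load
# amplitude S. All parameter values are invented for illustration.
A = rng.lognormal(mean=np.log(900.0), sigma=0.08, size=n)  # MPa
b = -0.09                                                  # fatigue exponent
S = rng.normal(200.0, 20.0, n)                             # MPa load amplitude

cycles_to_failure = (S / A) ** (1.0 / b)
p_fail_6m = float(np.mean(cycles_to_failure <= 6_000_000))
print(f"failure probability at 6 million cycles: {p_fail_6m:.2f}")
```

Evaluating the same expression at several cycle counts traces out the probability-of-failure curve that the study reports per implant.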
Inferential Framework for Autonomous Cryogenic Loading Operations
NASA Technical Reports Server (NTRS)
Luchinsky, Dmitry G.; Khasin, Michael; Timucin, Dogan; Sass, Jared; Perotti, Jose; Brown, Barbara
2017-01-01
We address the problem of autonomous cryogenic management of loading operations on the ground and in space. As a step toward a solution of this problem, we develop a probabilistic framework for inferring correlation parameters of two-fluid cryogenic flow. The simulation of two-phase cryogenic flow is performed using a nearly-implicit scheme. A concise set of cryogenic correlations is introduced. The proposed approach is applied to an analysis of the cryogenic flow in the experimental Propellant Loading System built at NASA KSC. An efficient simultaneous optimization of a large number of model parameters is demonstrated, and good agreement with the experimental data is obtained.
NASA Astrophysics Data System (ADS)
Hoffmann, K.; Srouji, R. G.; Hansen, S. O.
2017-12-01
The technology development within the structural design of long-span bridges in Norwegian fjords has created a need for reformulating the calculation format and the physical quantities used to describe the properties of wind and the associated wind-induced effects on bridge decks. Parts of a new probabilistic format describing the incoming, undisturbed wind are presented. It is expected that a fixed probabilistic format will facilitate a more physically consistent and precise description of the wind conditions, which in turn increases the accuracy and considerably reduces uncertainties in wind load assessments. Because the format is probabilistic, a quantification of the level of safety and uncertainty in predicted wind loads is readily accessible. A simple buffeting response calculation demonstrates the use of probabilistic wind data in the assessment of wind loads and responses. Furthermore, vortex-induced fatigue damage is discussed in relation to probabilistic wind turbulence data and response measurements from wind tunnel tests.
Non-Deterministic Dynamic Instability of Composite Shells
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2004-01-01
A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics, and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower is the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio and the fiber longitudinal modulus, dynamic load and loading rate are the dominant uncertainties, in that order.
A probabilistic watershed-based framework was developed to encompass wadeable streams within all three ecoregions of West Virginia, with the exclusion noted below. In Phase I of the project (year 2001), we developed and applied a probabilistic watershed-based sampling framework ...
Probabilistic Meteorological Characterization for Turbine Loads
NASA Astrophysics Data System (ADS)
Kelly, M.; Larsen, G.; Dimitrov, N. K.; Natarajan, A.
2014-06-01
Beyond the existing, limited IEC prescription to describe fatigue loads on wind turbines, we look towards probabilistic characterization of the loads via analogous characterization of the atmospheric flow, particularly for today's "taller" turbines with rotors well above the atmospheric surface layer. Based on data from multiple sites as well as on theoretical bases from boundary-layer meteorology and atmospheric turbulence, we offer probabilistic descriptions of shear and turbulence intensity, elucidating the connection of each to the other as well as to atmospheric stability and terrain. These are used as input to loads calculation, and with a statistical loads output description, they allow for improved design and loads calculations.
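A toy version of such a probabilistic site characterization, with an assumed Weibull wind-speed distribution and a lognormal turbulence intensity whose median decreases with wind speed (all parameter values and the load proxy are invented, not the paper's fitted model), could look like:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

# Assumed joint description: 10-min mean wind speed U is Weibull
# distributed; turbulence intensity TI is lognormal with a median that
# decreases with U, a shape typical of onshore sites.
U = rng.weibull(2.0, n) * 9.0                          # m/s
ti_median = 0.09 * (1.0 + 3.0 / np.maximum(U, 1.0))    # median TI vs U
TI = rng.lognormal(np.log(ti_median), 0.25)

# A crude fatigue-load proxy rising with U * TI (illustrative only).
del_proxy = (U * TI) ** 1.5
p90 = float(np.quantile(del_proxy, 0.90))  # e.g. a 90% design quantile
print(f"90% quantile of the load proxy: {p90:.2f} (arbitrary units)")
```

The point of the joint description is visible here: design-driving load quantiles come from the tail of the combined U-TI distribution, not from either marginal alone.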
Moving Aerospace Structural Design Practice to a Load and Resistance Factor Approach
NASA Technical Reports Server (NTRS)
Larsen, Curtis E.; Raju, Ivatury S.
2016-01-01
Aerospace structures are traditionally designed using the factor of safety (FOS) approach. The limit load on the structure is determined and the structure is then designed for FOS times the limit load - the ultimate load. Probabilistic approaches utilize distributions for loads and strengths. Failures are predicted to occur in the region of intersection of the two distributions. The load and resistance factor design (LRFD) approach judiciously combines these two approaches by intensive calibration studies on loads and strength to result in structures that are efficient and reliable. This paper discusses these three approaches.
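The "region of intersection of the two distributions" can be made concrete with a small Monte Carlo sketch; the load and strength distributions below are assumed for illustration only, not from any flight program:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

limit_load = 100.0
fos = 1.4   # a common ultimate factor of safety

# Assumed scatter: applied load around the limit load, as-built strength
# around the FOS-sized design value.
load = rng.normal(limit_load, 8.0, n)
strength = rng.normal(fos * limit_load, 7.0, n)

# In the FOS view the design simply "passes" because the nominal strength
# equals fos * limit_load. In the probabilistic view, failures occur in
# the overlap region of the two distributions:
p_fail = float(np.mean(load >= strength))
print(f"estimated P(failure) = {p_fail:.1e}")
```

LRFD sits between the two views: the load and resistance factors are calibrated so that designs checked deterministically achieve a target failure probability like the one estimated here.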
Prados-Privado, María; Gehrke, Sérgio A; Rojo, Rosa; Prados-Frutos, Juan Carlos
2018-06-11
The aim of this study was to fully characterize the mechanical behavior of an external hexagonal implant connection (ø3.5 mm, 10-mm length) with an in vitro study, a three-dimensional finite element analysis, and a probabilistic fatigue study. Ten implant-abutment assemblies were randomly divided into two groups; five were subjected to a fracture test to obtain the maximum fracture load, and the remaining were exposed to a fatigue test with 360,000 cycles of 150 ± 10 N. After mechanical cycling, all samples were attached to the torque-testing machine and the removal torque was measured in newton-centimeters. A finite element analysis (FEA) was then executed in ANSYS® to verify all results obtained in the mechanical tests. Finally, due to the randomness of the fatigue phenomenon, a probabilistic fatigue model was computed to obtain the probability of failure associated with each cycle load. FEA demonstrated that the fracture corresponded with a maximum stress of 2454 MPa obtained in the in vitro fracture test. Mean life was verified by the three methods. Results obtained by the FEA, the in vitro test, and the probabilistic approaches were in agreement. Under these conditions, no failure of mechanical etiology is expected to occur up to 100,000 cycles.
The composite load spectra project
NASA Technical Reports Server (NTRS)
Newell, J. F.; Ho, H.; Kurth, R. E.
1990-01-01
Probabilistic methods and generic load models capable of simulating the load spectra that are induced in space propulsion system components are being developed. Four engine component types (the transfer ducts, the turbine blades, the liquid oxygen posts and the turbopump oxidizer discharge duct) were selected as representative hardware examples. The composite load spectra that simulate the probabilistic loads for these components are typically used as the input loads for a probabilistic structural analysis. The knowledge-based system approach used for the composite load spectra project provides an ideal environment for incremental development. The intelligent database paradigm employed in developing the expert system provides a smooth coupling between the numerical processing and the symbolic (information) processing. Large volumes of engine load information and engineering data are stored in database format and managed by a database management system. Numerical procedures for probabilistic load simulation and database management functions are controlled by rule modules. Rules were hard-wired as decision trees into rule modules to perform process control tasks. There are modules to retrieve load information and models. There are modules to select loads and models to carry out quick load calculations or make an input file for full duty-cycle time dependent load simulation. The composite load spectra load expert system implemented today is capable of performing intelligent rocket engine load spectra simulation. Further development of the expert system will provide tutorial capability for users to learn from it.
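The hard-wired decision-tree rule modules described above can be caricatured as a tiny dispatcher. The function name, phase labels, and procedure names below are hypothetical, not actual CLS identifiers:

```python
# A toy dispatcher standing in for a CLS process-control rule module:
# given a mission phase, pick a load-calculation procedure.
def select_procedure(phase: str, has_duty_cycle_data: bool) -> str:
    if phase in ("start transient", "cutoff transient"):
        # Transient phases use the quasi-steady-state procedure.
        return "quasi-steady-state"
    if phase == "steady state":
        # Steady state can optionally be driven by duty-cycle data.
        return "duty-cycle-data" if has_duty_cycle_data else "steady-state"
    raise ValueError(f"unknown mission phase: {phase}")

print(select_procedure("start transient", False))   # quasi-steady-state
print(select_procedure("steady state", True))       # duty-cycle-data
```

In the real system such rules sit beside the database management layer, selecting load models and preparing input files rather than returning strings.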
NASA Technical Reports Server (NTRS)
Cruse, T. A.
1987-01-01
The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Burnside, O. H.; Wu, Y.-T.; Polch, E. Z.; Dias, J. B.
1988-01-01
The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.
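The cumulative-probability-of-exceedance-with-confidence-bounds output described above can be sketched for a single threshold; the response distribution and threshold below are assumed stand-ins for a structural response variable:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2_000

# Hypothetical response samples (e.g. a maximum stress, MPa, from
# repeated probabilistic analyses); any response variable would do.
response = rng.lognormal(mean=np.log(300.0), sigma=0.15, size=n)

threshold = 400.0
# Probability of exceedance and a ~95% normal-approximation confidence
# band on that estimate.
p_exceed = float(np.mean(response > threshold))
half_width = 1.96 * np.sqrt(p_exceed * (1 - p_exceed) / n)
lo, hi = p_exceed - half_width, p_exceed + half_width
print(f"P(stress > {threshold}) = {p_exceed:.3f} (95% CI {lo:.3f}..{hi:.3f})")
```

Sweeping the threshold over a grid of values yields the full exceedance CDF with pointwise confidence bounds, which is the solution form the abstract specifies.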
Probabilistic Evaluation of Advanced Ceramic Matrix Composite Structures
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2003-01-01
The objective of this report is to summarize the deterministic and probabilistic structural evaluation results of two structures made with advanced ceramic matrix composites (CMC): an internally pressurized tube and a uniformly loaded flange. The deterministic structural evaluation includes stress, displacement, and buckling analyses. It is carried out using the finite element code MHOST, developed for the 3-D inelastic analysis of structures that are made with advanced materials. The probabilistic evaluation is performed using the integrated probabilistic assessment of composite structures computer code IPACS. The effects of uncertainties in primitive variables related to the material, fabrication process, and loadings on the material property and structural response behavior are quantified. The primitive variables considered are: thermo-mechanical properties of fiber and matrix, fiber and void volume ratios, use temperature, and pressure. The probabilistic structural analysis and probabilistic strength results are used by IPACS to perform reliability and risk evaluation of the two structures. The results show that the sensitivity information obtained for the two composite structures from the computational simulation can be used to alter the design process to meet desired service requirements. In addition to detailed probabilistic analysis of the two structures, the following were performed specifically on the CMC tube: (1) predicted the failure load and the buckling load, (2) performed coupled non-deterministic multi-disciplinary structural analysis, and (3) demonstrated that probabilistic sensitivities can be used to select a reduced set of design variables for optimization.
Probabilistic structural analysis using a general purpose finite element program
NASA Astrophysics Data System (ADS)
Riha, D. S.; Millwater, H. R.; Thacker, B. H.
1992-07-01
This paper presents an accurate and efficient method to predict the probabilistic response of structural quantities, such as stress, displacement, natural frequencies, and buckling loads, by combining the capabilities of MSC/NASTRAN, including design sensitivity analysis, with fast probability integration. Two probabilistic structural analysis examples have been performed and verified by comparison with Monte Carlo simulation of the analytical solution. The first example consists of a cantilevered plate with several point loads. The second example is a probabilistic buckling analysis of a simply supported composite plate under in-plane loading. The coupling of MSC/NASTRAN and fast probability integration is shown to be orders of magnitude more efficient than Monte Carlo simulation, with excellent accuracy.
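The Monte Carlo baseline used for verification can be illustrated with a far simpler stand-in than the MSC/NASTRAN models. The sketch below (all distributions and values are illustrative assumptions, not the paper's examples) samples the tip deflection of a cantilever beam under a random end load and random modulus, and estimates an exceedance probability directly:

```python
import random

def tip_deflection(P, E, L=1.0, I=1.0e-6):
    """Tip deflection of a cantilever under an end load: delta = P*L**3 / (3*E*I)."""
    return P * L**3 / (3.0 * E * I)

def exceedance_probability(threshold, n=200_000, seed=42):
    """Monte Carlo estimate of P(tip deflection > threshold) with normally
    distributed load and Young's modulus (assumed, illustrative values)."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n):
        P = rng.gauss(1000.0, 100.0)  # end load, N (assumed distribution)
        E = rng.gauss(70.0e9, 3.5e9)  # Young's modulus, Pa (assumed)
        if tip_deflection(P, E) > threshold:
            count += 1
    return count / n

p = exceedance_probability(threshold=5.5e-3)  # threshold in meters
```

Fast probability integration methods reach comparable tail estimates with orders of magnitude fewer function evaluations, which is the efficiency gain the paper reports.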
Concurrent Probabilistic Simulation of High Temperature Composite Structural Response
NASA Technical Reports Server (NTRS)
Abdi, Frank
1996-01-01
A computational structural/material analysis and design tool which would meet industry's future demand for expedience and reduced cost is presented. This unique software, 'GENOA', is dedicated to parallel and high-speed analysis to perform probabilistic evaluation of high temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives achieved in the development were: (1) utilization of the power of parallel processing and static/dynamic load balancing optimization to make the complex simulation of structure, material, and processing of high temperature composites affordable; (2) computational integration and synchronization of probabilistic mathematics, structural/material mechanics, and parallel computing; (3) implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism and to increase convergence rates through high- and low-level processor assignment; (4) creation of the framework for a portable parallel architecture for machine-independent Multiple Instruction Multiple Data (MIMD), Single Instruction Multiple Data (SIMD), hybrid, and distributed-workstation types of computers; and (5) market evaluation. The results of the Phase 2 effort provide a good basis for continuation and warrant a Phase 3 government and industry partnership.
NASA Astrophysics Data System (ADS)
Wang, Yaping; Lin, Shunjiang; Yang, Zhibin
2017-05-01
In the traditional three-phase power flow calculation of the low voltage distribution network, the load model is described as constant power. Since this model cannot reflect the characteristics of actual loads, the result of the traditional calculation always differs from the actual situation. In this paper, a load model in which dynamic load, represented by air conditioners, is paralleled with static load, represented by lighting, is used to describe the characteristics of residential loads, and a three-phase power flow calculation model is proposed. The power flow calculation model includes the power balance equations of the three phases (A, B, C), the current balance equations of phase 0, and the torque balance equations of the induction motors in air conditioners. An alternating iterative algorithm, which solves the induction motor torque balance equations together with the balance equations of each node, is then proposed to solve the three-phase power flow model. This method is applied to an actual low voltage distribution network of residential loads, and calculations for three different operating states of air conditioners demonstrate the effectiveness of the proposed model and algorithm.
Composite Load Spectra for Select Space Propulsion Structural Components
NASA Technical Reports Server (NTRS)
Ho, Hing W.; Newell, James F.
1994-01-01
Generic load models are described with multiple levels of progressive sophistication to simulate the composite (combined) load spectra (CLS) that are induced in space propulsion system components representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, and liquid oxygen (LOX) posts. These generic (coupled) models combine deterministic models for dynamic, acoustic, high-pressure, high-rotational-speed, and other load simulation using statistically varying coefficients. These coefficients are then determined using advanced probabilistic simulation methods, with and without strategically selected experimental data. The entire simulation process is included in a CLS computer code. Applications of the computer code to various components, in conjunction with PSAM (Probabilistic Structural Analysis Method), to perform probabilistic load and life prediction evaluations are also described to illustrate the effectiveness of the coupled model approach.
Probabilistic Structural Analysis of SSME Turbopump Blades: Probabilistic Geometry Effects
NASA Technical Reports Server (NTRS)
Nagpal, V. K.
1985-01-01
A probabilistic study was initiated to evaluate the effects of geometric and material property tolerances on the structural response of turbopump blades. To complete this study, a number of important probabilistic variables were identified which are considered to affect the structural response of the blade. In addition, a methodology was developed to statistically quantify the influence of these probabilistic variables in an optimized way. The identified variables include random geometric and material property perturbations, different loadings, and a probabilistic combination of these loadings. The influences of these probabilistic variables are to be quantified by evaluating the blade structural response. Studies of the geometric perturbations were conducted for a flat plate geometry as well as for a Space Shuttle Main Engine blade geometry, using a special-purpose code based on the finite element approach. Analyses indicate that the variances of the perturbations about given mean values have a significant influence on the response.
NASA Astrophysics Data System (ADS)
Halder, A.; Miller, F. J.
1982-03-01
A probabilistic model to evaluate the risk of liquefaction at a site and to limit or eliminate damage during earthquake-induced liquefaction is proposed. The model is extended to consider three dimensional nonhomogeneous soil properties. The parameters relevant to the liquefaction phenomenon are identified, including: (1) soil parameters; (2) parameters required to consider laboratory test and sampling effects; and (3) loading parameters. The fundamentals of risk-based design concepts pertinent to liquefaction are reviewed. A detailed statistical evaluation of the soil parameters in the proposed liquefaction model is provided, and the uncertainty associated with the estimation of in situ relative density is evaluated for both direct and indirect methods. It is found that, in evaluating liquefaction potential, the uncertainties in the load parameters could be higher than those in the resistance parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. S. Schroeder; R. W. Youngblood
The Risk-Informed Safety Margin Characterization (RISMC) pathway of the Light Water Reactor Sustainability Program is developing simulation-based methods and tools for analyzing safety margin from a modern perspective. [1] There are multiple definitions of 'margin.' One class of definitions defines margin in terms of the distance between a point estimate of a given performance parameter (such as peak clad temperature), and a point-value acceptance criterion defined for that parameter (such as 2200 F). The present perspective on margin is that it relates to the probability of failure, and not just the distance between a nominal operating point and a criterion. In this work, margin is characterized through a probabilistic analysis of the 'loads' imposed on systems, structures, and components, and their 'capacity' to resist those loads without failing. Given the probabilistic load and capacity spectra, one can assess the probability that load exceeds capacity, leading to component failure. Within the project, we refer to a plot of these probabilistic spectra as 'the logo.' Refer to Figure 1 for a notional illustration. The implications of referring to 'the logo' are (1) RISMC is focused on being able to analyze loads and spectra probabilistically, and (2) calling it 'the logo' tacitly acknowledges that it is a highly simplified picture: meaningful analysis of a given component failure mode may require development of probabilistic spectra for multiple physical parameters, and in many practical cases, 'load' and 'capacity' will not vary independently.
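The 'logo' picture, probabilistic load and capacity spectra and the probability that load exceeds capacity, reduces to a one-line computation for simple assumed distributions. The sketch below (independent lognormal spectra and all parameter values are illustrative assumptions, not RISMC results; the 1800/2200 medians merely echo the 2200 F criterion above) estimates the exceedance probability by Monte Carlo and checks it against the closed form:

```python
import math
import random

def p_load_exceeds_capacity(mu_l, s_l, mu_c, s_c, n=200_000, seed=1):
    """Monte Carlo estimate of P(load > capacity) for independent lognormal
    load and capacity spectra (illustrative distributions)."""
    rng = random.Random(seed)
    hits = sum(rng.lognormvariate(mu_l, s_l) > rng.lognormvariate(mu_c, s_c)
               for _ in range(n))
    return hits / n

def p_closed_form(mu_l, s_l, mu_c, s_c):
    # For independent lognormals: Phi((mu_l - mu_c) / sqrt(s_l**2 + s_c**2)),
    # where Phi is the standard normal CDF.
    z = (mu_l - mu_c) / math.hypot(s_l, s_c)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mc = p_load_exceeds_capacity(math.log(1800.0), 0.10, math.log(2200.0), 0.05)
exact = p_closed_form(math.log(1800.0), 0.10, math.log(2200.0), 0.05)
```

As the abstract notes, real analyses are harder: multiple physical parameters and correlated load and capacity break the independence assumption used here.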
Simulation of probabilistic wind loads and building analysis
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Chamis, Christos C.
1991-01-01
Probabilistic wind loads likely to occur on a structure during its design life are predicted. Described here is a suitable multifactor interactive equation (MFIE) model and its use in the Composite Load Spectra (CLS) computer program to simulate the wind pressure cumulative distribution functions on four sides of a building. The simulated probabilistic wind pressure load was applied to a building frame, and cumulative distribution functions of sway displacements and reliability against overturning were obtained using NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), a stochastic finite element computer code. The geometry of the building and the properties of building members were also considered as random in the NESSUS analysis. The uncertainties of wind pressure, building geometry, and member section properties were quantified in terms of their respective sensitivities on the structural response.
Probabilistic SSME blades structural response under random pulse loading
NASA Technical Reports Server (NTRS)
Shiao, Michael; Rubinstein, Robert; Nagpal, Vinod K.
1987-01-01
The purpose is to develop models of random impacts on a Space Shuttle Main Engine (SSME) turbopump blade and to predict the probabilistic structural response of the blade to these impacts. The random loading is caused by the impact of debris. The probabilistic structural response is characterized by distribution functions for stress and displacements as functions of the loading parameters which determine the random pulse model. These parameters include pulse arrival, amplitude, and location. The analysis can be extended to predict level crossing rates. This requires knowledge of the joint distribution of the response and its derivative. The model of random impacts chosen allows the pulse arrivals, pulse amplitudes, and pulse locations to be random. Specifically, the pulse arrivals are assumed to be governed by a Poisson process, which is characterized by a mean arrival rate. The pulse intensity is modelled as a normally distributed random variable with a zero mean chosen independently at each arrival. The standard deviation of the distribution is a measure of pulse intensity. Several different models were used for the pulse locations. For example, three points near the blade tip were chosen at which pulses were allowed to arrive with equal probability. Again, the locations were chosen independently at each arrival. The structural response was analyzed both by direct Monte Carlo simulation and by a semi-analytical method.
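The random pulse model described above, Poisson arrivals, zero-mean normal amplitudes, and locations chosen independently at each arrival, can be sketched directly. The rate, intensity, and location names below are illustrative assumptions, not the report's values:

```python
import random

def simulate_pulse_history(rate, sigma, locations, t_end, seed=7):
    """One realization of the random pulse model: Poisson arrivals with mean
    rate `rate`, zero-mean normal amplitudes with standard deviation `sigma`,
    and a location drawn uniformly and independently at each arrival."""
    rng = random.Random(seed)
    t, pulses = 0.0, []
    while True:
        t += rng.expovariate(rate)         # exponential inter-arrival time
        if t > t_end:
            break
        amplitude = rng.gauss(0.0, sigma)  # pulse intensity
        location = rng.choice(locations)   # e.g. three points near the tip
        pulses.append((t, amplitude, location))
    return pulses

pulses = simulate_pulse_history(rate=50.0, sigma=1.0,
                                locations=("tip_a", "tip_b", "tip_c"),
                                t_end=10.0)
```

Feeding many such realizations through a structural model and collecting the response is the direct Monte Carlo route mentioned in the abstract; the semi-analytical method avoids the per-realization structural solve.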
NASA Technical Reports Server (NTRS)
Merchant, D. H.
1976-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.
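The extreme-value idea, the limit load as the largest load in a mission and the design limit load as a chosen value of its distribution, can be sketched by brute force. Per-event normal load peaks and all numbers below are illustrative assumptions, not the report's method:

```python
import random

def design_limit_load(n_events, mu, sigma, percentile,
                      n_missions=5_000, seed=3):
    """Design limit load as a chosen percentile of the mission-maximum load:
    simulate the largest of `n_events` normally distributed load peaks per
    mission, then read the percentile off the sorted mission maxima."""
    rng = random.Random(seed)
    maxima = sorted(max(rng.gauss(mu, sigma) for _ in range(n_events))
                    for _ in range(n_missions))
    k = min(int(percentile * n_missions), n_missions - 1)
    return maxima[k]

limit_99 = design_limit_load(n_events=200, mu=100.0, sigma=10.0,
                             percentile=0.99)
```

Extreme-value theory replaces this simulation with an analytic (e.g. Gumbel-type) distribution of the mission maximum, which is what makes the frequency-domain route in the report tractable.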
Processing of probabilistic information in weight perception and motor prediction.
Trampenau, Leif; van Eimeren, Thilo; Kuhtz-Buschbeck, Johann
2017-02-01
We studied the effects of probabilistic cues, i.e., of information of limited certainty, in the context of an action task (GL: grip-lift) and of a perceptual task (WP: weight perception). Normal subjects (n = 22) saw four different probabilistic visual cues, each of which announced the likely weight of an object. In the GL task, the object was grasped and lifted with a pinch grip, and the peak force rates indicated that the grip and load forces were scaled predictively according to the probabilistic information. The WP task provided the expected heaviness related to each probabilistic cue; the participants gradually adjusted the object's weight until its heaviness matched the expected weight for a given cue. Subjects were randomly assigned to two groups: one started with the GL task and the other with the WP task. The four different probabilistic cues influenced weight adjustments in the WP task and peak force rates in the GL task in a similar manner. The interpretation and utilization of the probabilistic information were critically influenced by the initial task. Participants who started with the WP task classified the four probabilistic cues into four distinct categories and applied these categories to the subsequent GL task. Conversely, participants who started with the GL task applied three distinct categories to the four cues and retained this classification in the following WP task. The initial strategy, once established, determined how the probabilistic information was interpreted and implemented.
Probabilistic structural analysis of a truss typical for space station
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.
1990-01-01
A three-bay, space, cantilever truss is probabilistically evaluated using the computer code NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) to identify and quantify the uncertainties, and respective sensitivities, associated with corresponding uncertainties in the primitive variables (structural, material, and loads parameters) that define the truss. The distribution of each of these primitive variables is described in terms of one of several available distributions, such as the Weibull, exponential, normal, log-normal, etc. The cumulative distribution functions (CDFs) of the response functions considered, and the sensitivities associated with the primitive variables for a given response, are investigated. These sensitivities help in determining the dominant primitive variables for that response.
Composite load spectra for select space propulsion structural components
NASA Technical Reports Server (NTRS)
Newell, J. F.; Kurth, R. E.; Ho, H.
1986-01-01
A multiyear program is performed with the objective to develop generic load models with multiple levels of progressive sophistication to simulate the composite (combined) load spectra that are induced in space propulsion system components, representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, and liquid oxygen (LOX) posts. Progress of the first year's effort includes completion of a sufficient portion of each task -- probabilistic models, code development, validation, and an initial operational code. From its inception, this code has had an expert-system philosophy that can be extended throughout the program and in the future. The initial operational code is applicable only to turbine blade type loadings. The probabilistic model included in the operational code has fitting routines for loads that utilize a modified Discrete Probabilistic Distribution method termed RASCAL, a barrier-crossing method, and a Monte Carlo method. An initial load model was developed by Battelle that is currently used for the slowly varying duty-cycle type loading. The intent is to use the model and related codes essentially in the current form for all loads that are based on measured or calculated data that have followed a slowly varying profile.
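The discrete-distribution idea behind RASCAL, and the bin-size accuracy trade-off noted in the validation work, can be illustrated with a toy version: bin two independent load contributions, convolve the binned PMFs to get the combined load distribution, and compare a tail probability against a direct Monte Carlo check. This sketches only the general idea, not the RASCAL implementation; the two Gaussian "loads" are illustrative assumptions:

```python
import random

def discretize(samples, n_bins):
    """Bin samples into a discrete distribution (bin centers, probabilities)."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins
    probs = [0.0] * n_bins
    for x in samples:
        i = min(int((x - lo) / width), n_bins - 1)
        probs[i] += 1.0 / len(samples)
    centers = [lo + (i + 0.5) * width for i in range(n_bins)]
    return centers, probs

def combine(c1, p1, c2, p2):
    """PMF of the sum of two independent discretized loads (convolution)."""
    out = {}
    for x, px in zip(c1, p1):
        for y, py in zip(c2, p2):
            out[x + y] = out.get(x + y, 0.0) + px * py
    return out

rng = random.Random(11)
a = [rng.gauss(100.0, 5.0) for _ in range(50_000)]  # e.g. a pressure load
b = [rng.gauss(40.0, 3.0) for _ in range(50_000)]   # e.g. a thermal load
pmf = combine(*discretize(a, 64), *discretize(b, 64))
p_disc = sum(p for v, p in pmf.items() if v > 150.0)      # binned tail
p_mc = sum(x + y > 150.0 for x, y in zip(a, b)) / len(a)  # direct check
```

With enough bins the binned tail tracks the Monte Carlo estimate closely, which is the bin-size effect the validation study describes.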
A look-ahead probabilistic contingency analysis framework incorporating smart sampling techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Etingov, Pavel V.; Ren, Huiying
2016-07-18
This paper describes a framework of incorporating smart sampling techniques in a probabilistic look-ahead contingency analysis application. The predictive probabilistic contingency analysis helps to reflect the impact of uncertainties caused by variable generation and load on potential violations of transmission limits.
Probabilistic liquefaction triggering based on the cone penetration test
Moss, R.E.S.; Seed, R.B.; Kayen, R.E.; Stewart, J.P.; Tokimatsu, K.
2005-01-01
Performance-based earthquake engineering requires a probabilistic treatment of potential failure modes in order to accurately quantify the overall stability of the system. This paper is a summary of the application portions of the probabilistic liquefaction triggering correlations recently proposed by Moss and co-workers. To enable probabilistic treatment of liquefaction triggering, the variables comprising the seismic load and the liquefaction resistance were treated as inherently uncertain. Supporting data from an extensive Cone Penetration Test (CPT)-based liquefaction case history database were used to develop a probabilistic correlation. The methods used to measure the uncertainty of the load and resistance variables, how the interactions of these variables were treated using Bayesian updating, and how reliability analysis was applied to produce curves of equal probability of liquefaction are presented. The normalization for effective overburden stress, the magnitude-correlated duration weighting factor, and the non-linear shear mass participation factor used are also discussed.
Dynamic Stability of Uncertain Laminated Beams Under Subtangential Loads
NASA Technical Reports Server (NTRS)
Goyal, Vijay K.; Kapania, Rakesh K.; Adelman, Howard (Technical Monitor); Horta, Lucas (Technical Monitor)
2002-01-01
Because of the inherent complexity of fiber-reinforced laminated composites, it can be challenging to manufacture composite structures according to their exact design specifications, resulting in unwanted material and geometric uncertainties. In this research, we focus on the deterministic and probabilistic stability analysis of laminated structures subject to subtangential loading, a combination of conservative and nonconservative tangential loads, using the dynamic criterion. Thus a shear-deformable laminated beam element, including warping effects, is derived to study the deterministic and probabilistic response of laminated beams. This twenty-one-degree-of-freedom element can be used for solving both static and dynamic problems. In the first-order shear deformable model used here, we have employed a more accurate method to obtain the transverse shear correction factor. The dynamic version of the principle of virtual work for laminated composites is expressed in its nondimensional form, and the element tangent stiffness and mass matrices are obtained using analytical integration. The stability is studied by giving the structure a small disturbance about an equilibrium configuration and observing whether the resulting response remains small. In order to study the dynamic behavior with uncertainties included in the problem, three models were developed: exact Monte Carlo simulation, sensitivity-based Monte Carlo simulation, and probabilistic FEA. These methods were integrated into the developed finite element analysis. Perturbation and sensitivity analysis have also been used to study nonconservative problems, as well as the stability analysis, using the dynamic criterion.
Composite load spectra for select space propulsion structural components
NASA Technical Reports Server (NTRS)
Newell, J. F.; Kurth, R. E.; Ho, H.
1991-01-01
The objective of this program is to develop generic load models with multiple levels of progressive sophistication to simulate the composite (combined) load spectra that are induced in space propulsion system components, representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, liquid oxygen posts, and system ducting. The first approach will consist of using state-of-the-art probabilistic methods to describe the individual loading conditions and combinations of these loading conditions to synthesize the composite load spectra simulation. The second approach will consist of developing coupled models for composite load spectra simulation, which combine deterministic models for dynamic, acoustic, high-pressure, high-rotational-speed, and other load simulation using statistically varying coefficients. These coefficients will then be determined using advanced probabilistic simulation methods, with and without strategically selected experimental data.
Fracture mechanics analysis of cracked structures using weight function and neural network method
NASA Astrophysics Data System (ADS)
Chen, J. G.; Zang, F. G.; Yang, Y.; Shi, K. K.; Fu, X. L.
2018-06-01
Stress intensity factors (SIFs) due to thermal-mechanical loads have been established using the weight function method. Two reference stress states were used to determine the coefficients in the weight function. Results were evaluated against data from the literature and show good agreement. The SIFs for cracks subjected to arbitrary loads can therefore be determined quickly using the obtained weight function, and the presented method can be used for probabilistic fracture mechanics analysis. A probabilistic methodology combining Monte Carlo simulation with a neural network (MCNN) has been developed. The results indicate that an accurate probabilistic characterization of KI can be obtained using the developed method. The probability of failure increases with increasing loads, and the relationship between them is nonlinear.
Barratt, Paul A; Selfe, James
2018-06-01
To improve outcomes of physiotherapy treatment for patients with lateral epicondylalgia. A systematic audit and quality improvement project over three phases, each of one year's duration. Salford Royal NHS Foundation Trust Teaching Hospital musculoskeletal physiotherapy out-patients department. n = 182. Phase one: individual discretion; phase two: strengthening as a core treatment, with individual discretion regarding prescription and implementation; phase three: standardised protocol using high-load isometric exercise, progressing on to slow combined concentric and eccentric strengthening. Global Rating of Change Scale, pain-free grip strength, Patient-Rated Tennis Elbow Evaluation, Tampa Scale of Kinesiophobia-11. Phase three demonstrated a reduction in the average number of treatments of 42% whilst improving the number of responders to treatment by 8% compared with phase one. Complete cessation of non-evidence-based treatments was also observed by phase three. Strengthening should be a core treatment for LE, and load setting needs to be sufficient. In phase three of the audit, a standardised tendon loading programme using patient-specific high-load isometric exercises into discomfort/pain demonstrated a higher percentage of responders compared with previous phases. Copyright © 2017 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
Probabilistic finite elements for fracture mechanics
NASA Technical Reports Server (NTRS)
Besterfield, Glen
1988-01-01
The probabilistic finite element method (PFEM) is developed for probabilistic fracture mechanics (PFM). A finite element which has the near crack-tip singular strain embedded in the element is used. Probabilistic distributions, such as the expectation, covariance, and correlation of stress intensity factors, are calculated for random load, random material, and random crack length. The method is computationally quite efficient and can be expected to determine the probability of fracture or reliability.
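A minimal probabilistic fracture mechanics computation in the same spirit, though by plain Monte Carlo rather than PFEM, and with illustrative distributions assumed here, estimates the probability that the stress intensity factor of a surface crack, K = Y·σ·√(πa), exceeds the fracture toughness under random load and crack size:

```python
import math
import random

def fracture_probability(k_ic, n=100_000, seed=5):
    """Monte Carlo probability of fracture: draw random stress and crack
    depth, compute K = Y * stress * sqrt(pi * a), and count exceedances of
    the fracture toughness k_ic (distributions and values illustrative)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        stress = rng.gauss(200.0, 20.0)                # applied stress, MPa
        a = rng.lognormvariate(math.log(0.002), 0.25)  # crack depth, m
        k = 1.12 * stress * math.sqrt(math.pi * a)     # SIF, MPa*sqrt(m)
        if k > k_ic:
            failures += 1
    return failures / n

p_f = fracture_probability(k_ic=25.0)
```

PFEM reaches the same moments (expectation, covariance) of the stress intensity factor without sampling, which is the efficiency claim in the abstract.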
Probabilistic Prediction of Lifetimes of Ceramic Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Gyekenyesi, John P.; Jadaan, Osama M.; Palfi, Tamas; Powers, Lynn; Reh, Stefan; Baker, Eric H.
2006-01-01
ANSYS/CARES/PDS is a software system that combines the ANSYS Probabilistic Design System (PDS) software with a modified version of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) Version 6.0 software. [A prior version of CARES/Life was reported in Program for Evaluation of Reliability of Ceramic Parts (LEW-16018), NASA Tech Briefs, Vol. 20, No. 3 (March 1996), page 28.] CARES/Life models effects of stochastic strength, slow crack growth, and stress distribution on the overall reliability of a ceramic component. The essence of the enhancement in CARES/Life 6.0 is the capability to predict the probability of failure using results from transient finite-element analysis. ANSYS PDS models the effects of uncertainty in material properties, dimensions, and loading on the stress distribution and deformation. ANSYS/CARES/PDS accounts for the effects of probabilistic strength, probabilistic loads, probabilistic material properties, and probabilistic tolerances on the lifetime and reliability of the component. Even failure probability becomes a stochastic quantity that can be tracked as a response variable. ANSYS/CARES/PDS enables tracking of all stochastic quantities in the design space, thereby enabling more precise probabilistic prediction of lifetimes of ceramic components.
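The stochastic-strength core of a CARES/Life-style reliability model is the Weibull distribution of ceramic strength. A minimal sketch, assuming a two-parameter Weibull for a uniformly stressed unit volume (the parameter values are illustrative, not from the software):

```python
import math

def weibull_failure_probability(stress, sigma_0, m):
    """Two-parameter Weibull probability of failure for a uniformly stressed
    unit volume: Pf = 1 - exp(-(stress/sigma_0)**m), where sigma_0 is the
    characteristic strength and m the Weibull modulus (values illustrative)."""
    return 1.0 - math.exp(-((stress / sigma_0) ** m))

# A higher Weibull modulus m means less strength scatter and a sharper
# transition from survival to failure around sigma_0.
pf_low_m = weibull_failure_probability(stress=300.0, sigma_0=400.0, m=5.0)
pf_high_m = weibull_failure_probability(stress=300.0, sigma_0=400.0, m=20.0)
```

In the full software, the stress entering this expression is itself a random response of the finite-element model, which is how failure probability becomes the tracked stochastic quantity described above.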
Probabilistic structural analysis methods of hot engine structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Hopkins, D. A.
1989-01-01
Development of probabilistic structural analysis methods for hot engine structures is a major activity at Lewis Research Center. Recent activities have focused on extending the methods to include the combined uncertainties in several factors on structural response. This paper briefly describes recent progress on composite load spectra models, probabilistic finite element structural analysis, and probabilistic strength degradation modeling. Progress is described in terms of fundamental concepts, computer code development, and representative numerical results.
Probabilistic structural analysis of space propulsion system LOX post
NASA Technical Reports Server (NTRS)
Newell, J. F.; Rajagopal, K. R.; Ho, H. W.; Cunniff, J. M.
1990-01-01
The probabilistic structural analysis program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress; Cruse et al., 1988) is applied to characterize the dynamic loading and response of the Space Shuttle main engine (SSME) LOX post. The design and operation of the SSME are reviewed; the LOX post structure is described; and particular attention is given to the generation of composite load spectra, the finite-element model of the LOX post, and the steps in the NESSUS structural analysis. The results are presented in extensive tables and graphs, and it is shown that NESSUS correctly predicts the structural effects of changes in the temperature loading. The probabilistic approach also facilitates (1) damage assessments for a given failure model (based on gas temperature, heat-shield gap, and material properties) and (2) correlation of the gas temperature with operational parameters such as engine thrust.
Krejsa, Martin; Janas, Petr; Yilmaz, Işık; Marschalko, Marian; Bouchal, Tomas
2013-01-01
The load-carrying system of each construction should fulfill several conditions which represent reliability criteria in the assessment procedure. It is the theory of structural reliability which determines the probability of a construction keeping its required properties. Using this theory, it is possible to apply probabilistic computations based on probability theory and mathematical statistics. These methods have become increasingly popular; they are used, in particular, in designs of load-carrying structures with a required level of reliability when at least some input variables in the design are random. The objective of this paper is to indicate the current scope which might be covered by a new method, Direct Optimized Probabilistic Calculation (DOProC), in assessments of the reliability of load-carrying structures. DOProC uses a purely numerical approach without any simulation techniques. This provides more accurate solutions to probabilistic tasks and, in some cases, considerably faster completion of computations. DOProC can be used to solve a number of probabilistic computations efficiently. A very good sphere of application for DOProC is the assessment of bolt reinforcement in underground and mining workings. For the purposes above, a special software application, 'Anchor', has been developed. PMID:23935412
Reliability, Risk and Cost Trade-Offs for Composite Designs
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.
1996-01-01
Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties, including those in constituent material properties, fabrication variables, structure geometry, and loading conditions. The probability density function of the first buckling load for a set of uncertain variables is computed. The probabilistic sensitivity factors of the uncertain variables with respect to the first buckling load are calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to the requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of the probabilistic sensitivity factors. With optimum design parameters, such as the mean and coefficient of variation (representing the range of scatter) of the uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of the fiber modulus (design parameter) in the longitudinal direction.
Probabilistic application of a fugacity model to predict triclosan fate during wastewater treatment.
Bock, Michael; Lyndall, Jennifer; Barber, Timothy; Fuchsman, Phyllis; Perruchon, Elyse; Capdevielle, Marie
2010-07-01
The fate and partitioning of the antimicrobial compound, triclosan, in wastewater treatment plants (WWTPs) is evaluated using a probabilistic fugacity model to predict the range of triclosan concentrations in effluent and secondary biosolids. The WWTP model predicts 84% to 92% triclosan removal, which is within the range of measured removal efficiencies (typically 70% to 98%). Triclosan is predominantly removed by sorption and subsequent settling of organic particulates during primary treatment and by aerobic biodegradation during secondary treatment. Median modeled removal efficiency due to sorption is 40% for all treatment phases and 31% in the primary treatment phase. Median modeled removal efficiency due to biodegradation is 48% for all treatment phases and 44% in the secondary treatment phase. Important factors contributing to variation in predicted triclosan concentrations in effluent and biosolids include influent concentrations, solids concentrations in settling tanks, and factors related to solids retention time. Measured triclosan concentrations in biosolids and non-United States (US) effluent are consistent with model predictions. However, median concentrations in US effluent are over-predicted with this model, suggesting that differences in some aspect of treatment practices not incorporated in the model (e.g., disinfection methods) may affect triclosan removal from effluent. Model applications include predicting changes in environmental loadings associated with new triclosan applications and supporting risk analyses for biosolids-amended land and effluent receiving waters. (c) 2010 SETAC.
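The probabilistic removal estimate described above can be sketched with a toy Monte Carlo in Python. The sorption and biodegradation ranges below are illustrative assumptions chosen to echo the reported medians, not the paper's calibrated fugacity parameters:

```python
import random

def simulate_removal(n=10_000, seed=1):
    """Hypothetical sketch of a probabilistic WWTP removal estimate:
    sample a sorbed-and-settled fraction (primary treatment) and a
    biodegraded fraction of the remainder (secondary treatment).
    Not the authors' fugacity model; ranges are illustrative."""
    random.seed(seed)
    removals = []
    for _ in range(n):
        f_sorb = random.uniform(0.25, 0.45)  # removed with settled solids
        f_bio = random.uniform(0.55, 0.85)   # remainder biodegraded
        removals.append(f_sorb + (1.0 - f_sorb) * f_bio)
    removals.sort()
    return removals[n // 2]                  # median overall removal

median_removal = simulate_removal()
```

Propagating distributions rather than point values in this way is what lets the model report a removal range (84% to 92%) instead of a single efficiency.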
Development of a probabilistic PCB-bioaccumulation model for six fish species in the Hudson River
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stackelberg, K. von; Menzie, C.
1995-12-31
In 1984 the US Environmental Protection Agency (USEPA) completed a Feasibility Study on the Hudson River that investigated remedial alternatives and issued a Record of Decision (ROD) later that year. In December 1989 USEPA decided to reassess the No Action decision for Hudson River sediments. This reassessment consists of three phases: Interim Characterization and Evaluation (Phase 1); Further Site Characterization and Analysis (Phase 2); and Feasibility Study (Phase 3). A Phase 1 report was completed in August 1991. The team then completed a Final Work Plan for Phase 2 in September 1992. This work plan identified various PCB fate and transport modeling activities to support the Hudson River PCB Reassessment Remedial Investigation and Feasibility Study (RI/FS). This talk describes the development of probabilistic bioaccumulation models for the uptake of PCBs, on a congener-specific basis, in six fish species. The authors have developed a framework for relating body burdens of PCBs in fish to exposure concentrations in Hudson River water and sediments. This framework is used to understand historical and current relationships as well as to predict fish body burdens for future conditions under specific remediation and no-action scenarios. The framework incorporates a probabilistic approach to predict distributions of PCB body burdens for selected fish species. These models can predict single population statistics, such as the average expected PCB levels under specific scenarios, as well as the distribution of expected concentrations.
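The core probabilistic step in a bioaccumulation framework of this kind can be sketched as a body burden equal to water concentration times a bioaccumulation factor (BAF), with the BAF treated as a lognormal random variable. All parameter values below are illustrative, not from the Hudson River study:

```python
import math
import random

def body_burden_distribution(c_water_ugL, n=20_000, seed=7):
    """Sketch of a probabilistic bioaccumulation step: fish body burden
    (ug/kg) = water concentration (ug/L) x BAF (L/kg), with a lognormal
    BAF. Illustrative parameters: median BAF 1e5 L/kg, log-sd 0.5."""
    random.seed(seed)
    mu, sigma = math.log(1.0e5), 0.5
    burdens = sorted(c_water_ugL * math.exp(random.gauss(mu, sigma))
                     for _ in range(n))
    return {"median": burdens[n // 2],
            "p95": burdens[int(0.95 * n)]}

stats = body_burden_distribution(c_water_ugL=1.0e-3)  # 1 ng/L in water
```

Reporting the full distribution, rather than only the mean, is what allows such a framework to answer both "expected" and "upper-percentile" questions under a remediation scenario.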
Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow
NASA Astrophysics Data System (ADS)
Gupta, Atma Ram; Kumar, Ashwani
2017-12-01
Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial, and residential sectors. The load pattern is highly dependent on consumer behavior and on temporal factors such as the season of the year, the day of the week, and the time of day. Deterministic radial distribution load flow studies take the load as constant, but load varies continually and with a high degree of uncertainty, so there is a need to model a probable realistic load. Monte Carlo simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load, and then solving a deterministic radial load flow for each set of values. The probabilistic solution is reconstructed from the deterministic data obtained in each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and comparing the voltage profile and losses with probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
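The sample-then-solve loop described above can be sketched in a few lines. The deterministic kernel here is a single-section radial feeder solved by fixed-point iteration in per unit; the feeder impedance and load statistics are illustrative assumptions, not the paper's test system:

```python
import random

def feeder_voltage(p_mw, q_mvar, r=0.02, x=0.04, v_src=1.0, iters=20):
    """Fixed-point solve of a one-section radial feeder in per unit:
    V = V_src - Z * conj(S / V). A teaching sketch of the deterministic
    load-flow kernel, not a full multi-bus solver."""
    v = complex(v_src, 0.0)
    z = complex(r, x)
    s = complex(p_mw, q_mvar)
    for _ in range(iters):
        v = complex(v_src, 0.0) - z * (s / v).conjugate()
    return abs(v)

def monte_carlo_voltage(n=5000, seed=3):
    """Sample P and Q from a mean and standard deviation (illustrative
    values) and collect the resulting voltage distribution."""
    random.seed(seed)
    volts = sorted(feeder_voltage(max(0.0, random.gauss(0.8, 0.10)),
                                  max(0.0, random.gauss(0.3, 0.05)))
                   for _ in range(n))
    return volts[n // 2], volts[int(0.05 * n)]  # median, 5th percentile

v_med, v_p05 = monte_carlo_voltage()
```

The probabilistic voltage profile (median, percentiles) is then read directly off the collection of deterministic solutions, exactly as the abstract describes.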
On the Accuracy of Probabilistic Buckling Load Prediction
NASA Technical Reports Server (NTRS)
Arbocz, Johann; Starnes, James H.; Nemeth, Michael P.
2001-01-01
The buckling strength of thin-walled stiffened or unstiffened, metallic or composite shells is of major concern in aeronautical and space applications. The difficulty of predicting the behavior of axially compressed thin-walled cylindrical shells continues to worry design engineers as we enter the third millennium. Thanks to extensive research programs in the late sixties and early seventies and the contributions of many eminent scientists, it is known that buckling strength calculations are affected by uncertainties in the definition of the parameters of the problem, such as the definition of loads, material properties, geometric variables, edge support conditions, and the accuracy of the engineering models and analysis tools used in the design phase. The NASA design criteria monographs from the late sixties account for these design uncertainties by the use of a lump-sum safety factor. This so-called empirical knockdown factor gamma usually results in overly conservative designs. Recently, new reliability-based probabilistic design procedures for buckling-critical imperfect shells have been proposed. They essentially consist of a stochastic approach that introduces an improved 'scientific knockdown factor' lambda(sub a), which is not as conservative as the traditional empirical one. In order to incorporate probabilistic methods into a high-fidelity analysis approach, one must be able to assess the accuracy of the various steps that must be executed to complete a reliability calculation. In the present paper, the effect of the size of the experimental input sample on the predicted value of the scientific knockdown factor lambda(sub a), calculated by the first-order, second-moment method, is investigated.
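A first-order, second-moment estimate of a reliability-based knockdown factor can be sketched as follows: treat the measured knockdown samples as normal with their sample mean and standard deviation, and take lambda(sub a) as the quantile exceeded with the target reliability. This is a simplified illustration of the method's flavor, with made-up test data, not the paper's procedure or dataset:

```python
import statistics

def scientific_knockdown(samples, target_reliability=0.99):
    """First-order, second-moment sketch: fit a normal distribution to
    measured knockdown factors and return the value exceeded with the
    target reliability. Illustrative only."""
    mu = statistics.mean(samples)
    sd = statistics.stdev(samples)
    z = statistics.NormalDist().inv_cdf(1.0 - target_reliability)
    return mu + z * sd  # z is negative, so lambda_a < mu

# Hypothetical knockdown factors from a small experimental sample.
measured = [0.62, 0.58, 0.65, 0.60, 0.63, 0.59, 0.61, 0.64]
lam = scientific_knockdown(measured)
```

The paper's central question maps directly onto this sketch: with only a handful of samples, the estimates of mu and sd (and hence lambda(sub a)) carry substantial statistical uncertainty of their own.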
Concentrating the phase of a coherent state by means of probabilistic amplification
NASA Astrophysics Data System (ADS)
Usuga, Mario A.; Müller, Christian R.; Wittmann, Christoffer; Marek, Petr; Filip, Radim; Marquardt, Christoph; Leuchs, Gerd; Andersen, Ulrik L.
2011-10-01
We discuss the recent implementation of phase concentration of an optical coherent state by use of a probabilistic noiseless amplifier. The operation of the amplifier is described pictorially with phase space diagrams, and the experimental results are outlined.
Use of model calibration to achieve high accuracy in analysis of computer networks
Frogner, Bjorn; Guarro, Sergio; Scharf, Guy
2004-05-11
A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.
Influence of load type on power factor and harmonic composition of three-phase rectifier current
NASA Astrophysics Data System (ADS)
Nikolayzin, N. V.; Vstavskaya, E. V.; Konstantinov, V. I.; Konstantinova, O. V.
2018-05-01
This article investigates the harmonic composition of the current consumed by a three-phase rectifier when it operates with different types of load. The results are compared with the applicable Standard requirements.
Probabilistic models for reactive behaviour in heterogeneous condensed phase media
NASA Astrophysics Data System (ADS)
Baer, M. R.; Gartling, D. K.; DesJardin, P. E.
2012-02-01
This work presents statistically-based models to describe reactive behaviour in heterogeneous energetic materials. Mesoscale effects are incorporated in continuum-level reactive flow descriptions using probability density functions (pdfs) that are associated with thermodynamic and mechanical states. A generalised approach is presented that includes multimaterial behaviour by treating the volume fraction as a random kinematic variable. Model simplifications are then sought to reduce the complexity of the description without compromising the statistical approach. Reactive behaviour is first considered for non-deformable media having a random temperature field as an initial state. A pdf transport relationship is derived and an approximate moment approach is incorporated in finite element analysis to model an example application whereby a heated fragment impacts a reactive heterogeneous material which leads to a delayed cook-off event. Modelling is then extended to include deformation effects associated with shock loading of a heterogeneous medium whereby random variables of strain, strain-rate and temperature are considered. A demonstrative mesoscale simulation of a non-ideal explosive is discussed that illustrates the joint statistical nature of the strain and temperature fields during shock loading to motivate the probabilistic approach. This modelling is derived in a Lagrangian framework that can be incorporated in continuum-level shock physics analysis. Future work will consider particle-based methods for a numerical implementation of this modelling approach.
Robertazzi, Thomas G.; Skiena, Steven; Wang, Kai
2017-08-08
Provided are an apparatus and method for load balancing of a three-phase electric power distribution system having a multi-phase feeder, including obtaining topology information for the feeder that identifies supply points for customer loads and the feeder sections between the supply points, obtaining customer information that includes the peak customer load at each supply point, performing a phase-balancing analysis, and recommending phase assignments at the customer load supply points.
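A phase-balancing analysis of this general kind can be illustrated with a simple greedy heuristic: sort customer peak loads in descending order and attach each to the currently lightest phase. This is a textbook sketch for intuition, not the patented analysis:

```python
def assign_phases(customer_loads_kw):
    """Greedy phase-assignment sketch: largest loads first, each placed
    on the currently lightest of phases A/B/C. Heuristic illustration."""
    totals = {"A": 0.0, "B": 0.0, "C": 0.0}
    plan = []
    for load in sorted(customer_loads_kw, reverse=True):
        phase = min(totals, key=totals.get)  # lightest phase so far
        totals[phase] += load
        plan.append((load, phase))
    return plan, totals

# Hypothetical customer peak loads in kW.
plan, totals = assign_phases([9.0, 7.5, 6.0, 5.5, 4.0, 3.5, 2.0, 1.5])
```

For these inputs the per-phase totals come out within about 1 kW of each other; a production tool would additionally respect feeder topology and re-wiring costs.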
NASA Technical Reports Server (NTRS)
Fayssal, Safie; Weldon, Danny
2008-01-01
The United States National Aeronautics and Space Administration (NASA) is in the midst of a space exploration program called Constellation to send crew and cargo to the International Space Station, to the Moon, and beyond. As part of the Constellation program, a new launch vehicle, Ares I, is being developed by NASA Marshall Space Flight Center. Designing a launch vehicle with high reliability and increased safety requires a significant effort in understanding design variability and design uncertainty at the various levels of the design (system, element, subsystem, component, etc.) and throughout the various design phases (conceptual, preliminary design, etc.). In a previous paper [1] we discussed a probabilistic functional failure analysis approach intended mainly to support system requirements definition, system design, and element design during the early design phases. This paper provides an overview of the application of probabilistic engineering methods to support detailed subsystem/component design and development as part of the "Design for Reliability and Safety" approach for the new Ares I launch vehicle. Specifically, the paper discusses probabilistic engineering design analysis cases that had a major impact on the design and manufacturing of Space Shuttle hardware. The cases represent important lessons learned from the Space Shuttle Program and clearly demonstrate the significance of probabilistic engineering analysis in better understanding design deficiencies and identifying potential design improvements for Ares I. The paper also discusses the probabilistic functional failure analysis approach applied during the early design phases of Ares I and the forward plans for probabilistic design analysis in the detailed design and development phases.
Probabilistic Design and Analysis Framework
NASA Technical Reports Server (NTRS)
Strack, William C.; Nagpal, Vinod K.
2010-01-01
PRODAF is a software package designed to aid analysts and designers in conducting probabilistic analysis of components and systems. PRODAF can integrate multiple analysis programs to ease the tedious process of conducting a complex analysis that requires the use of multiple software packages. The work uses a commercial finite element analysis (FEA) program with modules from NESSUS to conduct a probabilistic analysis of a hypothetical turbine blade, disk, and shaft model. PRODAF applies the response surface method at the component level and extrapolates the component-level responses to the system level. Hypothetical components of a gas turbine engine are first deterministically modeled using FEA. Variations in selected geometrical dimensions and loading conditions are analyzed to determine their effects on the stress state within each component. Geometric variations include the chord length and height for the blade; the inner radius, outer radius, and thickness are varied for the disk. Probabilistic analysis is carried out using developing software packages such as System Uncertainty Analysis (SUA) and PRODAF. PRODAF was used with a commercial deterministic FEA program, in conjunction with modules from the probabilistic analysis program NESTEM, to perturb loads and geometries to provide a reliability and sensitivity analysis. PRODAF simplified the handling of data among the various programs involved, and will work with many commercial and open-source deterministic programs, probabilistic programs, or modules.
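The response surface method named above can be sketched in miniature: run the expensive deterministic model at a few design points, fit a cheap surrogate, and do the Monte Carlo on the surrogate. The `fea_stress` function and all numbers below are stand-ins, not PRODAF or NESSUS outputs:

```python
import numpy as np

def fea_stress(thickness_mm):
    """Stand-in for an expensive finite element stress run (hypothetical
    stress-vs-thickness relation, MPa)."""
    return 500.0 / thickness_mm + 20.0 * thickness_mm

# 1) A few deterministic runs at design points (the costly step).
pts = np.array([2.0, 2.5, 3.0, 3.5, 4.0])
coeffs = np.polyfit(pts, [fea_stress(t) for t in pts], 2)  # quadratic surface

# 2) Cheap Monte Carlo on the surrogate instead of the FEA code.
rng = np.random.default_rng(0)
t_samples = rng.normal(3.0, 0.15, 50_000)      # uncertain thickness
stress = np.polyval(coeffs, t_samples)
p_exceed = float(np.mean(stress > 240.0))      # P(stress > limit)
```

Because each surrogate evaluation is essentially free, tens of thousands of samples cost no more than a handful of real FEA runs, which is the whole appeal of the method.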
Phase transitions in coupled map lattices and in associated probabilistic cellular automata.
Just, Wolfram
2006-10-01
Analytical tools are applied to investigate piecewise linear coupled map lattices in terms of probabilistic cellular automata. The so-called disorder condition of probabilistic cellular automata is closely related with attracting sets in coupled map lattices. The importance of this condition for the suppression of phase transitions is illustrated by spatially one-dimensional systems. Invariant densities and temporal correlations are calculated explicitly. Ising type phase transitions are found for one-dimensional coupled map lattices acting on repelling sets and for a spatially two-dimensional Miller-Huse-like system with stable long time dynamics. Critical exponents are calculated within a finite size scaling approach. The relevance of detailed balance of the resulting probabilistic cellular automaton for the critical behavior is pointed out.
This paper presents a probabilistic framework for the assessment of groundwater pollution potential by pesticides in two adjacent agricultural watersheds in the Mid-Atlantic Coastal Plain. Indices for estimating stream vulnerability to pollutant loads from the surficial aquifer...
The pdf approach to turbulent polydispersed two-phase flows
NASA Astrophysics Data System (ADS)
Minier, Jean-Pierre; Peirano, Eric
2001-10-01
The purpose of this paper is to develop a probabilistic approach to turbulent polydispersed two-phase flows. The two-phase flows considered are composed of a continuous phase, which is a turbulent fluid, and a dispersed phase, which represents an ensemble of discrete particles (solid particles, droplets or bubbles). Gathering the difficulties of turbulent flows and of particle motion, the challenge is to work out a general modelling approach that meets three requirements: to treat accurately the physically relevant phenomena, to provide enough information to address issues of complex physics (combustion, polydispersed particle flows, …) and to remain tractable for general non-homogeneous flows. The present probabilistic approach models the statistical dynamics of the system and consists in simulating the joint probability density function (pdf) of a number of fluid and discrete particle properties. A new point is that both the fluid and the particles are included in the pdf description. The derivation of the joint pdf model for the fluid and for the discrete particles is worked out in several steps. The mathematical properties of stochastic processes are first recalled. The various hierarchies of pdf descriptions are detailed and the physical principles that are used in the construction of the models are explained. The Lagrangian one-particle probabilistic description is developed first for the fluid alone, then for the discrete particles and finally for the joint fluid and particle turbulent systems. In the case of the probabilistic description for the fluid alone or for the discrete particles alone, numerical computations are presented and discussed to illustrate how the method works in practice and the kind of information that can be extracted from it. Comments on the current modelling state and propositions for future investigations which try to link the present work with other ideas in physics are made at the end of the paper.
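The one-particle Lagrangian description discussed above is typically realized through stochastic differential equations. A minimal sketch (illustrative parameters, not the paper's model): the fluid velocity seen by the particle follows a Langevin (Ornstein-Uhlenbeck) process, and the particle velocity relaxes toward it with a Stokes-drag time scale:

```python
import math
import random

def simulate_particle(T=2.0, dt=1e-3, tau_L=0.2, sigma_u=0.5,
                      tau_p=0.05, seed=4):
    """One-particle Lagrangian sketch: Ornstein-Uhlenbeck fluid velocity
    u_f (integral time tau_L, rms sigma_u) plus Stokes-drag relaxation of
    the particle velocity u_p (response time tau_p). Explicit Euler
    integration; all parameters are illustrative."""
    random.seed(seed)
    u_f, u_p, x_p = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        u_f += (-u_f / tau_L) * dt + \
               sigma_u * math.sqrt(2.0 * dt / tau_L) * random.gauss(0, 1)
        u_p += (u_f - u_p) / tau_p * dt   # Stokes drag toward fluid velocity
        x_p += u_p * dt
    return x_p, u_p

x_p, u_p = simulate_particle()
```

Simulating a large ensemble of such trajectories is equivalent to solving the joint fluid-particle pdf transport equation, which is the sense in which the stochastic particles "carry" the pdf.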
Rapid removal of nitrobenzene in a three-phase ozone loaded system with gas-liquid-liquid
Li, Shiyin; Zhu, Jiangpeng; Wang, Guoxiang; Ni, Lixiao; Zhang, Yong; Green, Christopher T.
2015-01-01
This study explores the removal rate of nitrobenzene (NB) using a new gas-liquid-liquid (G-L-L) three-phase ozone-loaded system consisting of a gaseous ozone phase, an aqueous solvent phase, and a fluorinated solvent phase (perfluorodecalin, or FDC). The removal rate of NB was quantified in relation to six factors: 1) initial pH, 2) initial NB dosage, 3) gaseous ozone dosage, 4) free radical scavenger, 5) FDC pre-aerated with gaseous ozone, and 6) reuse of FDC. The NB removal rate is positively affected by the first three factors. Compared with the conventional gas-liquid (water) (G-L) two-phase ozonation system, the free radical scavenger (tertiary butyl alcohol) has much less influence on the removal rate of NB in the G-L-L system. The ozone-loaded FDC acts as an ozone reservoir and serves as the main reactive phase in the G-L-L three-phase system. The reuse of FDC has little influence on the removal rate of NB. These experimental results suggest that the oxidation efficiency of ozonation in the G-L-L three-phase system is better than that in the conventional G-L two-phase system.
Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components
NASA Technical Reports Server (NTRS)
1991-01-01
The fourth year of technical developments on the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) system for Probabilistic Structural Analysis Methods is summarized. The effort focused on the continued expansion of the Probabilistic Finite Element Method (PFEM) code, the implementation of the Probabilistic Boundary Element Method (PBEM), and the implementation of the Probabilistic Approximate Methods (PAppM) code. The principal focus for the PFEM code is the addition of a multilevel structural dynamics capability. The strategy includes probabilistic loads, treatment of material, geometry uncertainty, and full probabilistic variables. Enhancements are included for the Fast Probability Integration (FPI) algorithms and the addition of Monte Carlo simulation as an alternate. Work on the expert system and boundary element developments continues. The enhanced capability in the computer codes is validated by applications to a turbine blade and to an oxidizer duct.
Coupled Multi-Disciplinary Optimization for Structural Reliability and Affordability
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2003-01-01
A computational simulation method is presented for Non-Deterministic Multidisciplinary Optimization of engine composite materials and structures. A hypothetical engine duct made with ceramic matrix composites (CMC) is evaluated probabilistically in the presence of combined thermo-mechanical loading. The structure is tailored by quantifying the uncertainties in all relevant design variables such as fabrication, material, and loading parameters. The probabilistic sensitivities are used to select critical design variables for optimization. In this paper, two approaches for non-deterministic optimization are presented. The non-deterministic minimization of combined failure stress criterion is carried out by: (1) performing probabilistic evaluation first and then optimization and (2) performing optimization first and then probabilistic evaluation. The first approach shows that the optimization feasible region can be bounded by a set of prescribed probability limits and that the optimization follows the cumulative distribution function between those limits. The second approach shows that the optimization feasible region is bounded by 0.50 and 0.999 probabilities.
Probabilistic Assessment of National Wind Tunnel
NASA Technical Reports Server (NTRS)
Shah, A. R.; Shiao, M.; Chamis, C. C.
1996-01-01
A preliminary probabilistic structural assessment of the critical section of National Wind Tunnel (NWT) is performed using NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) computer code. Thereby, the capabilities of NESSUS code have been demonstrated to address reliability issues of the NWT. Uncertainties in the geometry, material properties, loads and stiffener location on the NWT are considered to perform the reliability assessment. Probabilistic stress, frequency, buckling, fatigue and proof load analyses are performed. These analyses cover the major global and some local design requirements. Based on the assumed uncertainties, the results reveal the assurance of minimum 0.999 reliability for the NWT. Preliminary life prediction analysis results show that the life of the NWT is governed by the fatigue of welds. Also, reliability based proof test assessment is performed.
NASA Astrophysics Data System (ADS)
Diniş, C. M.; Cunţan, C. D.; Rob, R. O. S.; Popa, G. N.
2018-01-01
The paper presents the analysis of a power factor correction system with capacitor banks, without series coils, used to improve the power factor of three-phase and single-phase inductive loads. In the experimental measurements, the Roederstein ESTAmat RPR power factor controller, which can command up to twelve capacitor banks, was used with six capacitor banks. Six delta-connected capacitor banks with approximately equal reactive powers were used for the experiments. The experimental measurements were carried out with a three-phase power quality analyzer in three cases: one case without the controller, with all capacitor banks permanently connected in parallel with the network, and two cases with the power factor controller (one with the power factor set-point at 0.92 and the other at 1). When performing experiments with the power factor controller, a current transformer was used to measure the current on one phase (a more heavily or more lightly loaded phase).
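The sizing calculation behind such a controller follows from the standard relation Qc = P (tan phi1 - tan phi2). A small sketch (illustrative load and bank ratings, not the ESTAmat RPR's internal algorithm) showing how many equal banks must be switched in to reach a set-point:

```python
import math

def capacitor_kvar(p_kw, pf_initial, pf_target):
    """Reactive power a capacitor bank must supply to raise the power
    factor of a load of p_kw: Qc = P * (tan(phi1) - tan(phi2))."""
    phi1 = math.acos(pf_initial)
    phi2 = math.acos(pf_target)
    return p_kw * (math.tan(phi1) - math.tan(phi2))

def banks_to_switch(q_needed_kvar, bank_kvar):
    """Number of equal-rating banks a controller would switch in
    (rounded up to whole banks)."""
    return math.ceil(q_needed_kvar / bank_kvar)

q = capacitor_kvar(100.0, 0.70, 0.92)       # 100 kW load, 0.70 -> 0.92
n = banks_to_switch(q, bank_kvar=12.5)      # six 12.5 kvar banks available
```

With discrete banks the achieved power factor lands near, not exactly on, the set-point, which is why controllers tolerate a band around the target.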
Probabilistic Dynamic Buckling of Smart Composite Shells
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2003-01-01
A computational simulation method is presented to evaluate the deterministic and nondeterministic dynamic buckling of smart composite shells. The combined use of composite mechanics, finite element computer codes, and probabilistic analysis enable the effective assessment of the dynamic buckling load of smart composite shells. A universal plot is generated to estimate the dynamic buckling load of composite shells at various load rates and probabilities. The shell structure is also evaluated with smart fibers embedded in the plies right below the outer plies. The results show that, on the average, the use of smart fibers improved the shell buckling resistance by about 10 percent at different probabilities and delayed the buckling occurrence time. The probabilistic sensitivity results indicate that uncertainties in the fiber volume ratio and ply thickness have major effects on the buckling load while uncertainties in the electric field strength and smart material volume fraction have moderate effects. For the specific shell considered in this evaluation, the use of smart composite material is not recommended because the shell buckling resistance can be improved by simply re-arranging the orientation of the outer plies, as shown in the dynamic buckling analysis results presented in this report.
Probabilistic Dynamic Buckling of Smart Composite Shells
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2007-01-01
A computational simulation method is presented to evaluate the deterministic and nondeterministic dynamic buckling of smart composite shells. The combined use of intraply hybrid composite mechanics, finite element computer codes, and probabilistic analysis enable the effective assessment of the dynamic buckling load of smart composite shells. A universal plot is generated to estimate the dynamic buckling load of composite shells at various load rates and probabilities. The shell structure is also evaluated with smart fibers embedded in the plies right next to the outer plies. The results show that, on the average, the use of smart fibers improved the shell buckling resistance by about 10% at different probabilities and delayed the buckling occurrence time. The probabilistic sensitivity results indicate that uncertainties in the fiber volume ratio and ply thickness have major effects on the buckling load while uncertainties in the electric field strength and smart material volume fraction have moderate effects. For the specific shell considered in this evaluation, the use of smart composite material is not recommended because the shell buckling resistance can be improved by simply re-arranging the orientation of the outer plies, as shown in the dynamic buckling analysis results presented in this report.
Incorporating seismic phase correlations into a probabilistic model of global-scale seismology
NASA Astrophysics Data System (ADS)
Arora, Nimar
2013-04-01
We present a probabilistic model of seismic phases whereby the attributes of the body-wave phases are correlated to those of the first-arriving P phase. This model has been incorporated into NET-VISA (Network processing Vertically Integrated Seismic Analysis), a probabilistic generative model of seismic events, their transmission, and their detection on a global seismic network. In the earlier version of NET-VISA, seismic phases were assumed to be independent of each other. Although this did not, for the most part, affect the quality of the inferred seismic bulletin, it did result in a few instances of anomalous phase association, for example, an S phase with a smaller slowness than the corresponding P phase. We demonstrate that the phase attributes are indeed highly correlated; for example, the uncertainty in the S phase travel time is significantly reduced given the P phase travel time. Our new model exploits these correlations to produce better calibrated probabilities for the events, as well as fewer anomalous associations.
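The uncertainty-reduction effect described above can be illustrated with the standard bivariate-Gaussian conditioning formula: given a strong P/S correlation, observing the P travel time shrinks the S travel-time standard deviation by a factor sqrt(1 - rho^2). The numbers below are toy values, not NET-VISA's calibrated parameters:

```python
def s_travel_time_posterior(p_travel, mu_p, mu_s, sd_p, sd_s, rho):
    """Conditional Gaussian model of the S travel time given an observed
    P travel time -- the kind of phase correlation NET-VISA exploits.
    Returns the conditional mean and standard deviation."""
    mean = mu_s + rho * (sd_s / sd_p) * (p_travel - mu_p)
    sd = sd_s * (1.0 - rho ** 2) ** 0.5
    return mean, sd

# Toy marginals (seconds): a P arrival 5 s later than its mean shifts
# the expected S arrival, and rho = 0.9 cuts the S sd by more than half.
m, s = s_travel_time_posterior(p_travel=485.0, mu_p=480.0, mu_s=870.0,
                               sd_p=4.0, sd_s=7.0, rho=0.9)
```

In an association context, this tighter conditional distribution is what lets the model down-weight physically implausible pairings such as an S phase arriving too close behind its P.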
Recent developments of the NESSUS probabilistic structural analysis computer program
NASA Technical Reports Server (NTRS)
Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.
1992-01-01
The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability and risk analysis of structures considering cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.
NASA Astrophysics Data System (ADS)
He, Jingjing; Wang, Dengjiang; Zhang, Weifang
2015-03-01
This study presents an experimental and modeling study of damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate (PZT) piezoelectric ceramic wafer-type sensors are employed to perform in-situ non-destructive testing during fatigue cyclic loading. A multi-feature integration method is developed to quantify the crack size using the signal features of correlation coefficient, amplitude change, and phase change. In addition, a probability of detection (POD) model is constructed to quantify the reliability of the developed sizing method. Using the developed crack size quantification method and the resulting POD curve, probabilistic fatigue life prediction can be performed to provide comprehensive information for decision-making. The effectiveness of the overall methodology is demonstrated and validated using several aircraft lap joint specimens from different manufacturers and under different loading conditions.
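A common functional form for a POD model is the log-normal curve POD(a) = Phi((ln a - mu)/sigma), from which quantities like the 90%-detectable crack size a90 follow directly. The parameters below are illustrative (a50 = 1.2 mm), not fitted to the paper's specimens:

```python
import math

def pod(crack_mm, mu=math.log(1.2), sigma=0.4):
    """Log-normal probability-of-detection curve:
    POD(a) = Phi((ln a - mu)/sigma), via the error function."""
    z = (math.log(crack_mm) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def a90(mu=math.log(1.2), sigma=0.4):
    """Crack size detected with 90% probability (z_0.90 = 1.2816)."""
    return math.exp(mu + 1.2816 * sigma)

p = pod(1.2)   # at a50, POD = 0.5 by construction
a = a90()
```

In probabilistic life prediction, a POD curve of this kind converts a sensor reading history into the probability that a growing crack has already been caught by a given inspection.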
1993-09-01
[Fragmentary table-of-contents and body excerpt from the source report:] Additional Energy Losses for Mis-Sized Transformers (Per Transformer); Balancing Three-Phase Loads; Balancing Feeder Circuit Loads; Power Factor Correction; Optimal Transformer Sizing; Conductor Sizing. The excerpt notes that load imbalance directly affects the amount of neutral-line power loss in the system, and that most Army three-phase loads are distribution transformers spread out over a site.
Model of Mixing Layer With Multicomponent Evaporating Drops
NASA Technical Reports Server (NTRS)
Bellan, Josette; Le Clercq, Patrick
2004-01-01
A mathematical model of a three-dimensional mixing layer laden with evaporating fuel drops composed of many chemical species has been derived. The study is motivated by the fact that typical real petroleum fuels contain hundreds of chemical species. Previously, for the sake of computational efficiency, spray studies were performed using either models based on a single representative species or models based on surrogate fuels of at most 15 species. The present multicomponent model makes it possible to perform more realistic simulations by accounting for hundreds of chemical species in a computationally efficient manner. The model is used to perform Direct Numerical Simulations in continuing studies directed toward understanding the behavior of liquid petroleum fuel sprays. The model includes governing equations formulated in an Eulerian and a Lagrangian reference frame for the gas and the drops, respectively. This representation is consistent with the expected volumetrically small loading of the drops in gas (of the order of 10^-3), although the mass loading can be substantial because of the high ratio (of the order of 10^3) between the densities of liquid and gas. The drops are treated as point sources of mass, momentum, and energy; this representation is consistent with the drop size being smaller than the Kolmogorov scale. Unsteady drag, added-mass effects, Basset history forces, and collisions between the drops are neglected, and the gas is assumed calorically perfect. The model incorporates the concept of continuous thermodynamics, according to which the chemical composition of a fuel is described probabilistically, by use of a distribution function. Distribution functions generally depend on many parameters. However, for mixtures of homologous species, the distribution can be approximated with acceptable accuracy as a sole function of the molecular weight. The mixing layer is initially laden with drops in its lower stream, and the drops are colder than the gas. Drop evaporation leads to a change in the gas-phase composition, which, like the composition of the drops, is described in a probabilistic manner.
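The continuous-thermodynamics description mentioned above is commonly realized as a shifted gamma distribution over molecular weight. A sketch with illustrative diesel-like parameters (not the paper's fitted values):

```python
import math, random

random.seed(1)

def gamma_fuel_pdf(w, alpha=18.5, beta=10.0, gamma=92.0):
    # Single-variable (molecular weight) gamma distribution used in
    # continuous thermodynamics to represent a multicomponent fuel.
    # alpha/beta/gamma are illustrative values, not calibrated ones.
    if w <= gamma:
        return 0.0
    x = w - gamma
    return x**(alpha - 1) * math.exp(-x / beta) / (beta**alpha * math.gamma(alpha))

def mean_mw(alpha=18.5, beta=10.0, gamma=92.0):
    # Mean molecular weight of the mixture: theta = alpha*beta + gamma.
    return alpha * beta + gamma

def sample_mw(n=50_000, alpha=18.5, beta=10.0, gamma=92.0):
    # Draw pseudo-species molecular weights, as one might when coupling
    # the continuous description to a discrete spray solver.
    return [gamma + random.gammavariate(alpha, beta) for _ in range(n)]
```

Evaporation then evolves the distribution parameters (mean and variance of the liquid and vapor composition) rather than tracking hundreds of individual species, which is where the computational saving comes from.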
Analysis of the geometrical-probabilistic models of electrocrystallization
NASA Astrophysics Data System (ADS)
Isaev, V. A.; Grishenkova, O. V.; Zaykov, Yu. P.
2016-08-01
The formation of a three-dimensional electrode deposit under potentiostatic conditions, including the stages of nucleation, growth, and overlap of growing new-phase clusters and their diffusion zones, is considered. The models of electrochemical phase formation for kinetics- and diffusion-controlled growth are analyzed, and the correctness of the approximations used in these models is estimated. The possibility of application of these models to an analysis of the electrodeposition of silicon from molten salts is discussed.
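The overlap of growing clusters and diffusion zones described above is classically handled by the Kolmogorov-Johnson-Mehl-Avrami correction. A minimal sketch, assuming extended-fraction kinetics of the form k*t^n with illustrative constants:

```python
import math

def avrami_fraction(t, k=0.5, n=3):
    # KJMA overlap correction: clusters nucleate and grow as if
    # independent, giving an "extended" transformed fraction
    # theta_ext = k * t**n; the actual covered fraction is
    #   theta = 1 - exp(-theta_ext).
    # n = 3 corresponds to instantaneous nucleation with 3D
    # kinetics-controlled growth; k lumps nucleus density and
    # growth rate (both illustrative here).
    return 1.0 - math.exp(-k * t**n)
```

For diffusion-controlled growth the exponent and prefactor change (the diffusion zones, not the clusters themselves, overlap), which is exactly the distinction the geometrical-probabilistic models above are built to analyze.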
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Clifford Kuofei
Chemical transport through human skin can play a significant role in human exposure to toxic chemicals in the workplace, as well as to chemical/biological warfare agents in the battlefield. The viability of transdermal drug delivery also relies on chemical transport processes through the skin. Models of percutaneous absorption are needed for risk-based exposure assessments and drug-delivery analyses, but previous mechanistic models have been largely deterministic. A probabilistic, transient, three-phase model of percutaneous absorption of chemicals has been developed to assess the relative importance of uncertain parameters and processes that may be important to risk-based assessments. Penetration routes through the skin that were modeled include the following: (1) intercellular diffusion through the multiphase stratum corneum; (2) aqueous-phase diffusion through sweat ducts; and (3) oil-phase diffusion through hair follicles. Uncertainty distributions were developed for the model parameters, and a Monte Carlo analysis was performed to simulate probability distributions of mass fluxes through each of the routes. Sensitivity analyses using stepwise linear regression were also performed to identify model parameters that were most important to the simulated mass fluxes at different times. This probabilistic analysis of percutaneous absorption (PAPA) method has been developed to improve risk-based exposure assessments and transdermal drug-delivery analyses, where parameters and processes can be highly uncertain.
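A toy version of the three-route Monte Carlo with a correlation-based sensitivity check. All distributions and parameter values below are invented for illustration; they are not the model's calibrated inputs, and the real model is transient rather than steady-state:

```python
import random

random.seed(2)

def route_flux(D, K, L, c=1.0):
    # Steady-state diffusive flux through one route (Fick's first law):
    # diffusivity D, partition coefficient K, path length L, dose c.
    return K * D * c / L

def pearson(xs, ys):
    # Sample Pearson correlation, used as a crude sensitivity measure.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx)**2 for x in xs) ** 0.5
    sy = sum((y - my)**2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def simulate(n=20_000):
    # Hypothetical lognormal uncertainty on each route's diffusivity;
    # the three routes act in parallel, so the fluxes add.
    totals, sc = [], []
    for _ in range(n):
        d_sc = random.lognormvariate(-20.0, 0.5)   # stratum corneum
        d_sw = random.lognormvariate(-21.5, 0.5)   # sweat ducts
        d_hf = random.lognormvariate(-21.0, 0.5)   # hair follicles
        f = route_flux(d_sc, 1.0, 1e-5) + route_flux(d_sw, 1.0, 1e-4) \
          + route_flux(d_hf, 1.0, 1e-4)
        totals.append(f)
        sc.append(d_sc)
    return totals, sc

totals, sc = simulate()
r_sc = pearson(sc, totals)   # high correlation -> dominant parameter
```

With these made-up numbers the stratum corneum route dominates the total flux, so its diffusivity shows the strongest correlation, which is the qualitative outcome a stepwise-regression sensitivity analysis would report.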
Evolution and stability of altruist strategies in microbial games
NASA Astrophysics Data System (ADS)
Adami, Christoph; Schossau, Jory; Hintze, Arend
2012-01-01
When microbes compete for limited resources, they often engage in chemical warfare using bacterial toxins. This competition can be understood in terms of evolutionary game theory (EGT). We study the predictions of EGT for the bacterial “suicide bomber” game in terms of the phase portraits of population dynamics, for parameter combinations that cover all interesting two-player games, and seven of the 38 possible phase portraits of the three-player game. We compare these predictions to simulations of these competitions in finite well-mixed populations, but also allowing for probabilistic rather than pure strategies, as well as Darwinian adaptation over tens of thousands of generations. We find that Darwinian evolution of probabilistic strategies stabilizes games of the rock-paper-scissors type that emerge for parameters describing realistic bacterial populations, and point to ways in which the population fixed point can be selected by changing those parameters.
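The phase portraits discussed above follow from the replicator equation of EGT. A sketch integrating it for the standard rock-paper-scissors payoff matrix (in the bacterial game the entries would come from toxin cost and kill parameters instead):

```python
def replicator_step(x, payoff, dt=0.01):
    # One explicit Euler step of the replicator equation
    #   dx_i/dt = x_i * ((A x)_i - x . A x)
    fitness = [sum(payoff[i][j] * x[j] for j in range(3)) for i in range(3)]
    avg = sum(x[i] * fitness[i] for i in range(3))
    return [max(0.0, x[i] + dt * x[i] * (fitness[i] - avg)) for i in range(3)]

# Standard rock-paper-scissors payoffs (win +1, lose -1, tie 0).
RPS = [[0, -1, 1],
       [1, 0, -1],
       [-1, 1, 0]]

x = [0.5, 0.3, 0.2]
for _ in range(5000):
    x = replicator_step(x, RPS)
    s = sum(x)                     # renormalize against Euler drift
    x = [xi / s for xi in x]
```

For this zero-sum matrix the interior point (1/3, 1/3, 1/3) is a fixed point and trajectories cycle around it; the paper's finding is that evolving probabilistic strategies can stabilize such cycling in realistic parameter regimes.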
Probabilistic Structural Evaluation of Uncertainties in Radiator Sandwich Panel Design
NASA Technical Reports Server (NTRS)
Kuguoglu, Latife; Ludwiczak, Damian
2006-01-01
The Jupiter Icy Moons Orbiter (JIMO) Space System is part of the NASA's Prometheus Program. As part of the JIMO engineering team at NASA Glenn Research Center, the structural design of the JIMO Heat Rejection Subsystem (HRS) is evaluated. An initial goal of this study was to perform sensitivity analyses to determine the relative importance of the input variables on the structural responses of the radiator panel. The desire was to let the sensitivity analysis information identify the important parameters. The probabilistic analysis methods illustrated here support this objective. The probabilistic structural performance evaluation of a HRS radiator sandwich panel was performed. The radiator panel structural performance was assessed in the presence of uncertainties in the loading, fabrication process variables, and material properties. The stress and displacement contours of the deterministic structural analysis at mean probability were computed and the results presented. This was followed by a probabilistic evaluation to determine the effect of the primitive variables on the radiator panel structural performance. Based on uncertainties in material properties, structural geometry and loading, the results of the displacement and stress analysis are used as an input file for the probabilistic analysis of the panel. The sensitivity of the structural responses, such as maximum displacement and maximum tensile and compressive stresses of the facesheet in x and y directions and maximum von Mises stresses of the tube, to the loading and design variables is determined under the boundary condition where all edges of the radiator panel are pinned. Based on this study, design-critical material and geometric parameters of the considered sandwich panel are identified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staschus, K.
1985-01-01
In this dissertation, efficient algorithms for electric-utility capacity expansion planning with renewable energy are developed. The algorithms include a deterministic phase that quickly finds a near-optimal expansion plan using derating and a linearized approximation to the time-dependent availability of nondispatchable energy sources. A probabilistic second phase needs comparatively few computer-time consuming probabilistic simulation iterations to modify this solution towards the optimal expansion plan. For the deterministic first phase, two algorithms, based on a Lagrangian Dual decomposition and a Generalized Benders Decomposition, are developed. The probabilistic second phase uses a Generalized Benders Decomposition approach. Extensive computational tests of the algorithms are reported. Among the deterministic algorithms, the one based on Lagrangian Duality proves fastest. The two-phase approach is shown to save up to 80% in computing time as compared to a purely probabilistic algorithm. The algorithms are applied to determine the optimal expansion plan for the Tijuana-Mexicali subsystem of the Mexican electric utility system. A strong recommendation to push conservation programs in the desert city of Mexicali results from this implementation.
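A deterministic first phase of the kind described typically rests on a screening-curve style cost comparison. A toy version with hypothetical technology costs (these numbers are invented, not the Tijuana-Mexicali data, and the real algorithms use decomposition rather than enumeration):

```python
# Hypothetical technologies: (name, fixed cost $/kW-yr, variable cost $/kWh)
TECHS = [("peaker",    60.0, 0.120),
         ("combined",  120.0, 0.060),
         ("baseload",  300.0, 0.020)]

def cheapest(hours):
    # Screening-curve choice: annualized cost of 1 kW of capacity that
    # runs `hours` hours per year is fixed + variable * hours; pick the
    # technology minimizing it for that utilization level.
    return min(TECHS, key=lambda t: t[1] + t[2] * hours)[0]
```

With these numbers the peaker wins below about 1000 h/yr, the combined-cycle unit between 1000 and 4500 h/yr, and baseload above that. A probabilistic second phase would then correct such choices with production-costing simulations accounting for forced outages and nondispatchable-source availability.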
Elasto-limited plastic analysis of structures for probabilistic conditions
NASA Astrophysics Data System (ADS)
Movahedi Rad, M.
2018-06-01
By applying plastic analysis and design methods, significant savings in material can be obtained. However, as a result of this benefit, excessive plastic deformations and large residual displacements might develop, which in turn might lead to unserviceability and collapse of the structure. In this study, for the deterministic problem the residual deformation of structures is limited by a constraint on the complementary strain energy of the residual forces. For the probabilistic problem, the bound on the complementary strain energy of the residual forces is given randomly, and the critical stresses are updated during the iteration. Limit curves are presented for the plastic limit load factors. The results show that these constraints have significant effects on the load factors. The formulations of the deterministic and probabilistic problems lead to mathematical programming problems, which are solved using a nonlinear algorithm.
Probabilistic Simulation of Combined Thermo-Mechanical Cyclic Fatigue in Composites
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2011-01-01
A methodology to compute probabilistically-combined thermo-mechanical fatigue life of polymer matrix laminated composites has been developed and is demonstrated. Matrix degradation effects caused by long-term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress-dependent multifactor-interaction relationship developed at NASA Glenn Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability-integration method is used to compute probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) computed and their significance in the reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/-45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes has been studied. The results show that, at low mechanical-cyclic loads and low thermal-cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness. Whereas at high mechanical-cyclic loads and high thermal-cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.
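The multifactor-interaction relationship referred to above has the general product form P/P0 = Π((A_F - A)/(A_F - A_0))^a over the participating factors. A sketch with made-up temperature and mechanical-cycle terms (the exponents and limits here are illustrative, not NASA Glenn's calibrated values):

```python
def mfim_ratio(terms):
    # Multifactor-interaction model: each term is a tuple
    # (final value, current value, reference value, exponent);
    # the degraded-to-reference property ratio P/P0 is the product of
    # ((final - current) / (final - reference)) ** exponent.
    ratio = 1.0
    for final, current, ref, expo in terms:
        ratio *= ((final - current) / (final - ref)) ** expo
    return ratio

# Illustrative matrix-strength degradation: a temperature term and a
# mechanical-cycles term (all numbers invented for the sketch).
terms = [(430.0, 150.0, 21.0, 0.5),     # T_F, T, T_0 in deg C
         (1.0e7, 1.0e5, 1.0, 0.25)]     # N_F, N, N_0 in cycles
ratio = mfim_ratio(terms)               # fraction of reference strength left
```

Each factor drives the property to zero as its current value approaches the final (failure) value, which is how thermal and mechanical cyclic loads are folded into one aging law before the fast probability integration is applied.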
Probabilistic Simulation for Combined Cycle Fatigue in Composites
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2010-01-01
A methodology to compute probabilistic fatigue life of polymer matrix laminated composites has been developed and demonstrated. Matrix degradation effects caused by long term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress dependent multifactor interaction relationship developed at NASA Glenn Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability integration method is used to compute probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) computed and their significance in the reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/- 45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes has been studied. The results show that, at low mechanical cyclic loads and low thermal cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness. Whereas at high mechanical cyclic loads and high thermal cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.
Probabilistic Structural Analysis Methods (PSAM) for Select Space Propulsion System Components
NASA Technical Reports Server (NTRS)
1999-01-01
Probabilistic Structural Analysis Methods (PSAM) are described for the probabilistic structural analysis of engine components for current and future space propulsion systems. Components for these systems are subjected to stochastic thermomechanical launch loads. Uncertainties or randomness also occur in material properties, structural geometry, and boundary conditions. Material property stochasticity, such as in modulus of elasticity or yield strength, exists in every structure and is a consequence of variations in material composition and manufacturing processes. Procedures are outlined for computing the probabilistic structural response or reliability of the structural components. The response variables include static or dynamic deflections, strains, and stresses at one or several locations, natural frequencies, fatigue or creep life, etc. Sample cases illustrate how the PSAM methods and codes simulate input uncertainties and compute probabilistic response or reliability using a finite element model with probabilistic methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, K.A.
1992-12-01
This study investigated the behavior of the SCS6/Ti-15-3 metal matrix composite with a quasi-isotropic layup when tested under static and fatigue conditions. Specimens were subjected to in-phase and out-of-phase thermo-mechanical and isothermal fatigue loading. In-phase and isothermal loading produced a fiber dominated failure while the out-of-phase loading produced a matrix dominated failure. Also, fiber domination in all three profiles was present at higher maximum applied loads, and all three profiles demonstrated matrix domination at lower maximum applied loads. Thus, failure is both profile dependent and load dependent. Additional analyses, using laminated plate theory, Halpin-Tsai equations, METCAN, and the Linear Life Fraction Model (LLFM), showed: the as-received specimens contained plies where a portion of the fibers are debonded from the matrix; during fatigue cycling, the 90 deg. plies and a percentage of the 45 deg. plies failed immediately, with greater damage becoming evident with additional cycles; and the LLFM suggests that there may be a non-linear combination of fiber and matrix domination for in-phase and isothermal cycling.
NASA Technical Reports Server (NTRS)
Bast, Callie C.; Jurena, Mark T.; Godines, Cody R.; Chamis, Christos C. (Technical Monitor)
2001-01-01
This project included both research and education objectives. The goal of this project was to advance innovative research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction for improved reliability and safety of structural components of aerospace and aircraft propulsion systems. Research and education partners included Glenn Research Center (GRC) and Southwest Research Institute (SwRI) along with the University of Texas at San Antonio (UTSA). SwRI enhanced the NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) code and provided consulting support for NESSUS-related activities at UTSA. NASA funding supported three undergraduate students, two graduate students, a summer course instructor and the Principal Investigator. Matching funds from UTSA provided for the purchase of additional equipment for the enhancement of the Advanced Interactive Computational SGI Lab established during the first year of this Partnership Award to conduct the probabilistic finite element summer courses. The research portion of this report presents the culmination of work performed through the use of the probabilistic finite element program NESSUS and an embedded Material Strength Degradation (MSD) model. Probabilistic structural analysis provided for quantification of uncertainties associated with the design, thus enabling increased system performance and reliability. The structure examined was a Space Shuttle Main Engine (SSME) fuel turbopump blade. The blade material analyzed was Inconel 718, since the MSD model was previously calibrated for this material. Reliability analysis encompassing the effects of high temperature and high cycle fatigue, yielded a reliability value of 0.99978 using a fully correlated random field for the blade thickness.
The reliability did not change significantly for a change in distribution type except for a change in distribution from Gaussian to Weibull for the centrifugal load. The sensitivity factors determined to be most dominant were the centrifugal loading and the initial strength of the material. These two sensitivity factors were influenced most by a change in distribution type from Gaussian to Weibull. The education portion of this report describes short-term and long-term educational objectives. Such objectives serve to integrate research and education components of this project resulting in opportunities for ethnic minority students, principally Hispanic. The primary vehicle to facilitate such integration was the teaching of two probabilistic finite element method courses to undergraduate engineering students in the summers of 1998 and 1999.
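The distribution-sensitivity observation above (Gaussian versus Weibull for the centrifugal load) can be illustrated with a toy Monte Carlo reliability check. All numbers below are invented for the sketch; they are not the SSME blade or Inconel 718 values:

```python
import random

random.seed(3)

def reliability(load_sampler, strength_mu=900.0, strength_sd=45.0, n=100_000):
    # Monte Carlo reliability: fraction of trials in which sampled
    # material strength (MPa, Gaussian here) exceeds the sampled
    # centrifugal stress from `load_sampler`.
    ok = sum(1 for _ in range(n)
             if random.gauss(strength_mu, strength_sd) > load_sampler())
    return ok / n

# Two load models with comparable mean and spread but different shapes
# (values illustrative): reliability differences come from the tails.
gauss_load = lambda: random.gauss(600.0, 60.0)
weib_load  = lambda: random.weibullvariate(625.0, 12.0)   # scale, shape

r_gauss = reliability(gauss_load)
r_weib  = reliability(weib_load)
```

Because failure lives in the upper tail of the load distribution, swapping the distribution family while holding mean and spread roughly fixed can shift the computed reliability noticeably, which is the effect the study reports for the centrifugal load.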
NASA Technical Reports Server (NTRS)
McGhee, David S.; Peck, Jeff A.; McDonald, Emmett J.
2012-01-01
This paper examines Probabilistic Sensitivity Analysis (PSA) methods and tools in an effort to understand their utility in vehicle loads and dynamic analysis. Specifically, this study addresses how these methods may be used to establish limits on payload mass and cg location and requirements on adaptor stiffnesses while maintaining vehicle loads and frequencies within established bounds. To this end, PSA methods and tools are applied to a realistic, but manageable, integrated launch vehicle analysis where payload and payload adaptor parameters are modeled as random variables. This analysis is used to study both Regional Response PSA (RRPSA) and Global Response PSA (GRPSA) methods, with a primary focus on sampling based techniques. For contrast, some most-probable-point (MPP) based approaches are also examined.
Long-term strength and damage accumulation in laminates
NASA Astrophysics Data System (ADS)
Dzenis, Yuris A.; Joshi, Shiv P.
1993-04-01
A modified version of the probabilistic model developed by the authors for damage evolution analysis of laminates subjected to random loading is utilized to predict the long-term strength of laminates. The model assumes that each ply in a laminate consists of a large number of mesovolumes. Probabilistic variation functions for mesovolume stiffnesses as well as strengths are used in the analysis. Stochastic strains are calculated using lamination theory and random function theory. Deterioration of ply stiffnesses is calculated on the basis of the probabilities of mesovolume failures using the theory of excursions of a random process beyond limits. Long-term strength and damage accumulation in a Kevlar/epoxy laminate under tension and complex in-plane loading are investigated. Effects of the mean level and stochastic deviation of loading on damage evolution and time-to-failure of the laminate are discussed. Long-term cumulative damage at the time of final failure is greater at low loading levels than at high loading levels. The effect of the deviation in loading is more pronounced at lower mean loading levels.
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Liu, Wing Kam
1987-01-01
In the Probabilistic Finite Element Method (PFEM), finite element methods have been efficiently combined with second-order perturbation techniques to provide an effective method for informing the designer of the range of response which is likely in a given problem. The designer must provide as input the statistical character of the input variables, such as yield strength, load magnitude, and Young's modulus, by specifying their mean values and their variances. The output then consists of the mean response and the variance in the response. Thus the designer is given a much broader picture of the predicted performance than with simply a single response curve. These methods are applicable to a wide class of problems, provided that the scale of randomness is not too large and the probabilistic density functions possess decaying tails. By incorporating the computational techniques we have developed in the past 3 years for efficiency, the probabilistic finite element methods are capable of handling large systems with many sources of uncertainties. Sample results are given for an elastic-plastic ten-bar structure and an elastic-plastic plane continuum with a circular hole subject to cyclic loading, with the yield stress modeled as a random field.
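The second-order perturbation idea can be shown on the simplest possible "finite element": an axial bar with random Young's modulus. A sketch comparing the perturbation mean and first-order variance against Monte Carlo (the geometry and statistics are invented for illustration):

```python
import random

random.seed(4)

F, L, A = 1.0e4, 2.0, 1.0e-3           # load (N), length (m), area (m^2)
mu_E, sd_E = 200.0e9, 10.0e9           # Young's modulus mean/std (Pa)

def u(E):
    # Tip displacement of an axially loaded bar: u = F L / (E A).
    return F * L / (E * A)

# Perturbation (Taylor expansion of u about mu_E):
#   mean  ~ u(mu) + 0.5 * u''(mu) * var_E      (second order)
#   var   ~ u'(mu)**2 * var_E                  (first order)
c = F * L / A
mean_pert = u(mu_E) + 0.5 * (2.0 * c / mu_E**3) * sd_E**2
var_pert = (c / mu_E**2) ** 2 * sd_E**2

# Monte Carlo check of the perturbation estimates.
samples = [u(random.gauss(mu_E, sd_E)) for _ in range(200_000)]
mean_mc = sum(samples) / len(samples)
var_mc = sum((s - mean_mc) ** 2 for s in samples) / len(samples)
```

As the abstract notes, this works because the coefficient of variation is modest (5% here); for large randomness the truncated Taylor expansion degrades and sampling methods take over.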
Design for cyclic loading endurance of composites
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Murthy, Pappu L. N.; Chamis, Christos C.; Liaw, Leslie D. G.
1993-01-01
The application of the computer code IPACS (Integrated Probabilistic Assessment of Composite Structures) to aircraft wing type structures is described. The code performs a complete probabilistic analysis for composites taking into account the uncertainties in geometry, boundary conditions, material properties, laminate lay-ups, and loads. Results of the analysis are presented in terms of the cumulative distribution function (CDF) and probability density function (PDF) of the fatigue life of a wing type composite structure under different hygrothermal environments subjected to random pressure loading. The sensitivity of the fatigue life to a number of critical structural/material variables is also computed from the analysis.
Assuring Life in Composite Systems
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2008-01-01
A computational simulation method is presented to assure life in composite systems by using dynamic buckling of smart composite shells as an example. The combined use of composite mechanics, finite element computer codes, and probabilistic analysis enables the effective assessment of the dynamic buckling load of smart composite shells. A universal plot is generated to estimate the dynamic buckling load of composite shells at various load rates and probabilities. The shell structure is also evaluated with smart fibers embedded in the plies right below the outer plies. The results show that, on the average, the use of smart fibers improved the shell buckling resistance by about 9% at different probabilities and delayed the buckling occurrence time. The probabilistic sensitivity results indicate that uncertainties in the fiber volume ratio and ply thickness have major effects on the buckling load. The uncertainties in the electric field strength and smart material volume fraction have moderate effects, and thereby a moderate influence on the assured life of the shell.
Reliability and risk assessment of structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1991-01-01
Development of reliability and risk assessment of structural components and structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) the evaluation of the various uncertainties in terms of cumulative distribution functions for various structural response variables based on known or assumed uncertainties in primitive structural variables; (2) evaluation of the failure probability; (3) reliability and risk-cost assessment; and (4) an outline of an emerging approach for eventual certification of man-rated structures by computational methods. Collectively, the results demonstrate that the structural durability/reliability of man-rated structural components and structures can be effectively evaluated by using formal probabilistic methods.
Probabilistic simulation of uncertainties in thermal structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Shiao, Michael
1990-01-01
Development of probabilistic structural analysis methods for hot structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) quantification of the effects of uncertainties for several variables on high pressure fuel turbopump (HPFT) blade temperature, pressure, and torque of the Space Shuttle Main Engine (SSME); (2) the evaluation of the cumulative distribution function for various structural response variables based on assumed uncertainties in primitive structural variables; (3) evaluation of the failure probability; (4) reliability and risk-cost assessment, and (5) an outline of an emerging approach for eventual hot structures certification. Collectively, the results demonstrate that the structural durability/reliability of hot structural components can be effectively evaluated in a formal probabilistic framework. In addition, the approach can be readily extended to computationally simulate certification of hot structures for aerospace environments.
Probabilistic Fatigue Life Updating for Railway Bridges Based on Local Inspection and Repair.
Lee, Young-Joo; Kim, Robin E; Suh, Wonho; Park, Kiwon
2017-04-24
Railway bridges are exposed to repeated train loads, which may cause fatigue failure. As critical links in a transportation network, railway bridges are expected to survive for a target period of time, but sometimes they fail earlier than expected. To guarantee the target bridge life, bridge maintenance activities such as local inspection and repair should be undertaken properly. However, this is a challenging task because there are various sources of uncertainty associated with aging bridges, train loads, environmental conditions, and maintenance work. Therefore, to perform optimal risk-based maintenance of railway bridges, it is essential to estimate the probabilistic fatigue life of a railway bridge and update the life information based on the results of local inspections and repair. Recently, a system reliability approach was proposed to evaluate the fatigue failure risk of structural systems and update the prior risk information in various inspection scenarios. However, this approach can handle only a constant-amplitude load and has limitations in considering a cyclic load with varying amplitude levels, which is the major loading pattern generated by train traffic. In addition, it is not feasible to update the prior risk information after bridges are repaired. In this research, the system reliability approach is further developed so that it can handle a varying-amplitude load and update the system-level risk of fatigue failure for railway bridges after inspection and repair. The proposed method is applied to a numerical example of an in-service railway bridge, and the effects of inspection and repair on the probabilistic fatigue life are discussed.
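The varying-amplitude train loading discussed above enters fatigue life through a damage accumulation rule. A sketch of Palmgren-Miner accumulation over a hypothetical annual stress-range spectrum (the S-N constants are illustrative, and the paper's system-reliability updating after inspection and repair is not reproduced here):

```python
def basquin_cycles_to_failure(stress_range, C=1.0e13, m=3.0):
    # S-N (Basquin-type) relation N = C * S**-m.  C and m are
    # illustrative constants for a welded steel detail, not fitted ones.
    return C * stress_range ** -m

def miner_damage(block_spectrum):
    # Palmgren-Miner linear damage for a variable-amplitude spectrum:
    # block_spectrum is a list of (stress range in MPa, cycles per year);
    # damage is the sum of n_i / N_i over the blocks.
    return sum(n / basquin_cycles_to_failure(s) for s, n in block_spectrum)

# Hypothetical annual train-load spectrum at one bridge detail.
spectrum = [(80.0, 2.0e5), (50.0, 1.0e6), (30.0, 5.0e6)]
annual_damage = miner_damage(spectrum)
fatigue_life_years = 1.0 / annual_damage   # deterministic point estimate
```

A probabilistic treatment would randomize C, the spectrum, and the damage limit, then condition the resulting life distribution on inspection outcomes via the POD of the inspection method, which is the updating step the paper develops at the system level.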
Remembrance of inferences past: Amortization in human hypothesis generation.
Dasgupta, Ishita; Schulz, Eric; Goodman, Noah D; Gershman, Samuel J
2018-05-21
Bayesian models of cognition assume that people compute probability distributions over hypotheses. However, the required computations are frequently intractable or prohibitively expensive. Since people often encounter many closely related distributions, selective reuse of computations (amortized inference) is a computationally efficient use of the brain's limited resources. We present three experiments that provide evidence for amortization in human probabilistic reasoning. When sequentially answering two related queries about natural scenes, participants' responses to the second query systematically depend on the structure of the first query. This influence is sensitive to the content of the queries, appearing only when the queries are related. Using a cognitive load manipulation, we find evidence that people amortize summary statistics of previous inferences, rather than storing the entire distribution. These findings support the view that the brain trades off accuracy and computational cost to make efficient use of its limited cognitive resources in approximating probabilistic inference. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Velazquez, Antonio; Swartz, R. Andrew
2012-04-01
Wind energy is an increasingly important component of this nation's renewable energy portfolio; however, safe and economical wind turbine operation is a critical need to ensure continued adoption. Safe operation of wind turbine structures requires information regarding not only their condition but also their operational environment. Given the difficulty inherent in SHM processes for wind turbines (damage detection, location, and characterization), some uncertainty in condition assessment is expected. Furthermore, given the stochastic nature of the loading on turbine structures, a probabilistic framework is appropriate for characterizing their risk of failure at a given time. Such information will be invaluable to turbine controllers, allowing them to operate the structures within acceptable risk profiles. This study explores the characterization of the turbine loading and response envelopes for critical failure modes of the turbine blade structures. A framework is presented to develop an analytical estimation of the loading environment (including loading effects) based on the dynamic behavior of the blades. This behavior is influenced by along- and across-wind aeroelastic effects, wind shear gradient, tower shadow effects, and centrifugal stiffening effects. The proposed solution includes methods that are based on modal decomposition of the blades and require frequent updates to the estimated modal properties to account for the time-varying nature of the turbine and its environment. The estimated demand statistics are compared to a code-based resistance curve to determine a probabilistic estimate of the risk of blade failure given the loading environment.
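The final step, comparing a stochastic demand to a code-based resistance, reduces to estimating an exceedance probability. A minimal Monte Carlo sketch, with an assumed Gumbel peak-load model and an assumed capacity value (neither is from the study):

```python
import math
import random

random.seed(1)

CAPACITY = 3.0e6  # N*m, assumed code-based blade-root resistance

def peak_demand(mu=1.8e6, beta=2.5e5):
    # assumed Gumbel (extreme-value) model for the peak bending moment
    # in a time window, sampled by inverse-CDF
    u = random.uniform(1e-12, 1.0 - 1e-12)
    return mu - beta * math.log(-math.log(u))

n = 200_000
p_fail = sum(peak_demand() > CAPACITY for _ in range(n)) / n
print(f"estimated P(demand > capacity) = {p_fail:.4%}")
```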
NASA Technical Reports Server (NTRS)
Fragola, Joseph R.; Maggio, Gaspare; Frank, Michael V.; Gerez, Luis; Mcfadden, Richard H.; Collins, Erin P.; Ballesio, Jorge; Appignani, Peter L.; Karns, James J.
1995-01-01
Volume 5 is Appendix C, Auxiliary Shuttle Risk Analyses, and contains the following reports: Probabilistic Risk Assessment of Space Shuttle Phase 1 - Space Shuttle Catastrophic Failure Frequency Final Report; Risk Analysis Applied to the Space Shuttle Main Engine - Demonstration Project for the Main Combustion Chamber Risk Assessment; An Investigation of the Risk Implications of Space Shuttle Solid Rocket Booster Chamber Pressure Excursions; Safety of the Thermal Protection System of the Space Shuttle Orbiter - Quantitative Analysis and Organizational Factors; Space Shuttle Main Propulsion Pressurization System Probabilistic Risk Assessment, Final Report; and Space Shuttle Probabilistic Risk Assessment Proof-of-Concept Study - Auxiliary Power Unit and Hydraulic Power Unit Analysis Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Jun-li; Han, Xiaochun; Heuser, Brent J.
2016-04-01
High-energy synchrotron X-ray diffraction was used to study the mechanical response of the f.c.c. delta hydride phase, the intermetallic precipitate with the hexagonal C14 Laves structure, and the alpha-Zr phase in Zircaloy-4 material with a hydride rim/blister structure near one surface, during an in-situ uniaxial tension experiment at 200 degrees C. The f.c.c. delta hydride was the only hydride phase observed in the rim/blister structure. Conventional Rietveld refinement was applied to measure the macro-strain-equivalent response of the three phases. Two regions were delineated in the applied load versus lattice strain measurement: a linear elastic strain region and a region that exhibited load partitioning. Load partitioning was quantified by von Mises analysis. The three phases were observed to have similar elastic moduli at 200 degrees C.
SHM-Based Probabilistic Fatigue Life Prediction for Bridges Based on FE Model Updating
Lee, Young-Joo; Cho, Soojin
2016-01-01
Fatigue life prediction for a bridge should be based on the current condition of the bridge, and various sources of uncertainty, such as material properties, anticipated vehicle loads and environmental conditions, make the prediction very challenging. This paper presents a new approach for probabilistic fatigue life prediction for bridges using finite element (FE) model updating based on structural health monitoring (SHM) data. Recently, various types of SHM systems have been used to monitor and evaluate the long-term structural performance of bridges. For example, SHM data can be used to estimate the degradation of an in-service bridge, which makes it possible to update the initial FE model. The proposed method consists of three steps: (1) identifying the modal properties of a bridge, such as mode shapes and natural frequencies, based on the ambient vibration under passing vehicles; (2) updating the structural parameters of an initial FE model using the identified modal properties; and (3) predicting the probabilistic fatigue life using the updated FE model. The proposed method is demonstrated by application to a numerical model of a bridge, and the impact of FE model updating on the bridge fatigue life is discussed. PMID:26950125
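Step (2), updating structural parameters from identified modal properties, can be illustrated with the simplest possible case: a single global stiffness scale fitted by least squares. The frequencies below are invented for illustration, not taken from the paper.

```python
# made-up frequencies: initial FE model vs. identified from ambient vibration
f_model = [2.10, 6.45, 11.80]      # Hz
f_measured = [1.95, 6.02, 11.12]   # Hz

# for a linear model, f_i^2 scales with a global stiffness factor alpha;
# minimizing sum_i (f_meas_i^2 - alpha * f_mod_i^2)^2 has a closed form
num = sum((fm ** 2) * (fx ** 2) for fm, fx in zip(f_measured, f_model))
den = sum(fx ** 4 for fx in f_model)
alpha = num / den

f_updated = [alpha ** 0.5 * fx for fx in f_model]
print(f"stiffness scale alpha = {alpha:.3f}")
print("updated frequencies (Hz):", [round(f, 2) for f in f_updated])
```

Real FE updating adjusts many parameters against several modes (and often mode shapes), but the residual-minimization idea is the same.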
High Cycle Fatigue (HCF) Science and Technology Program 2002 Annual Report
2003-08-01
Turbine Engine Airfoils, Phase I 4.3 Probabilistic Design of Turbine Engine Airfoils, Phase II 4.4 Probabilistic Blade Design System 4.5...XTL17/SE2 7.4 Conclusion 8.0 TEST AND EVALUATION 8.1 Characterization Test Protocol 8.2 Demonstration Test Protocol 8.3 Development of Multi ...transparent and opaque overlays for processing. The objective of the SBIR Phase I program was to identify and evaluate promising methods for
Probabilistic evaluation of fuselage-type composite structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1992-01-01
A methodology is developed to computationally simulate the uncertain behavior of composite structures. The uncertain behavior includes buckling loads, natural frequencies, displacements, stress/strain etc., which are the consequences of the random variation (scatter) of the primitive (independent random) variables in the constituent, ply, laminate and structural levels. This methodology is implemented in the IPACS (Integrated Probabilistic Assessment of Composite Structures) computer code. A fuselage-type composite structure is analyzed to demonstrate the code's capability. The probability distribution functions of the buckling loads, natural frequency, displacement, strain and stress are computed. The sensitivity of each primitive (independent random) variable to a given structural response is also identified from the analyses.
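The kind of propagation IPACS performs can be shown in miniature: sample scattered primitive variables and push them through a response function to get a response distribution. The plate-buckling formula and all scatter values here are assumptions for illustration, not the IPACS model.

```python
import math
import random

random.seed(2)

def buckling_load(E, t, k=4.0, nu=0.3, b=0.5):
    # classical simply-supported plate buckling: N_cr = k * pi^2 * D / b^2
    D = E * t ** 3 / (12.0 * (1.0 - nu ** 2))   # bending stiffness
    return k * math.pi ** 2 * D / b ** 2

# assumed scatter in two primitive variables: modulus E and thickness t
samples = sorted(
    buckling_load(random.gauss(70e9, 3.5e9), random.gauss(2e-3, 0.1e-3))
    for _ in range(50_000)
)
mean = sum(samples) / len(samples)
p01 = samples[len(samples) // 100]   # ~1st percentile: a design-relevant lower bound
print(f"mean N_cr = {mean:.0f} N/m, 1st percentile = {p01:.0f} N/m")
```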
NASA Technical Reports Server (NTRS)
Onwubiko, Chin-Yere; Onyebueke, Landon
1996-01-01
The structural design, or the design of machine elements, has traditionally been based on deterministic design methodology. The deterministic method considers all design parameters to be known with certainty. This methodology is, therefore, inadequate for designing complex structures that are subjected to a variety of complex, severe loading conditions. A nonlinear behavior that is dependent on stress, stress rate, temperature, number of load cycles, and time is observed in all components subjected to complex conditions. These complex conditions introduce uncertainties; hence, the actual safety margin remains unknown. In the deterministic methodology, the contingency of failure is discounted; hence, a high factor of safety is used. The deterministic method may be most useful in situations where the design structures are simple. The probabilistic method is concerned with the probability of non-failure performance of structures or machine elements. It is much more useful in situations where the design is characterized by complex geometry, the possibility of catastrophic failure, or sensitive loads and material properties. Also included: Comparative Study of the Use of AGMA Geometry Factors and Probabilistic Design Methodology in the Design of Compact Spur Gear Set.
NASA Astrophysics Data System (ADS)
Jung, Byung Ik; Cho, Yong Sun; Park, Hyoung Min; Chung, Dong Chul; Choi, Hyo Sang
2013-01-01
The South Korean power grid has a network structure for the flexible operation of the system. The continuously increasing power demand necessitated the increase of power facilities, which decreased the impedance in the power system. As a result, the size of the fault current in the event of a system fault increased. Because this increased fault current threatens the breaking capacity of the circuit breaker, the main protective device, a solution to this problem is needed. The superconducting fault current limiter (SFCL) has been designed to address this problem. The SFCL supports the stable operation of the circuit breaker through its excellent fault-current-limiting operation [1-5]. In this paper, the quench and fault current limiting characteristics of the flux-coupling-type SFCL with one three-phase transformer were compared with those of the same SFCL type but with three single-phase transformers. In the case of the three-phase transformer, the superconducting elements of both the fault and sound phases were quenched, whereas in the case of the single-phase transformers, only that of the fault phase was quenched. For the fault current limiting rate, both cases showed similar rates for the single line-to-ground fault, but for the three-wire earth fault, the fault current limiting rate of the single-phase transformers was over 90% whereas that of the three-phase transformer was about 60%. It appears that when the three-phase transformer was used, the limiting rate decreased because the fluxes generated by the fault currents of each phase were linked in one core. When the power loads of the superconducting elements were compared by fault type, the initial (half-cycle) load was greater when the single-phase transformers were applied, whereas for the three-phase transformer, the power load was slightly lower at the initial stage but became greater after the half fault cycle.
Kerner, Boris S; Klenov, Sergey L; Schreckenberg, Michael
2014-05-01
Physical features of induced phase transitions in a metastable free flow at an on-ramp bottleneck in three-phase and two-phase cellular automaton (CA) traffic-flow models have been revealed. It turns out that at given flow rates at the bottleneck, to induce a moving jam (F → J transition) in the metastable free flow through the application of a time-limited on-ramp inflow impulse, the same critical amplitude of the impulse is required in both two-phase and three-phase CA models. If an impulse smaller than this critical one is applied, neither the F → J transition nor any other phase transition can occur in the two-phase CA model. We have found that, in contrast with the two-phase CA model, in the three-phase CA model, if the same smaller impulse is applied, then a phase transition from free flow to synchronized flow (F → S transition) can be induced at the bottleneck. This explains why, rather than the F → J transition, traffic breakdown at a highway bottleneck in the three-phase theory is governed by an F → S transition, as observed in real measured traffic data. None of the two-phase traffic-flow theories incorporates an F → S transition in a metastable free flow at the bottleneck, which is the main feature of the three-phase theory. On the one hand, this shows the incommensurability of three-phase and two-phase traffic-flow theories. On the other hand, this clarifies why none of the two-phase traffic-flow theories can explain the set of fundamental empirical features of traffic breakdown at highway bottlenecks.
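The two-phase CA family the authors contrast against descends from the Nagel-Schreckenberg model; its update rule (accelerate, respect the gap, random slowdown) is only a few lines. The parameters and initial state below are illustrative, not the models of the paper.

```python
import random

random.seed(3)

V_MAX, P_SLOW = 5, 0.3   # illustrative NaSch-style parameters

def step(pos, vel, road_len):
    """One parallel update of all cars on a ring road (cells as integers)."""
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])
    new_pos, new_vel = pos[:], vel[:]
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % n]
        gap = (pos[ahead] - pos[i] - 1) % road_len
        v = min(vel[i] + 1, V_MAX, gap)           # accelerate, but keep safe gap
        if v > 0 and random.random() < P_SLOW:    # random slowdown
            v -= 1
        new_vel[i] = v
        new_pos[i] = (pos[i] + v) % road_len
    return new_pos, new_vel

pos, vel = list(range(0, 100, 10)), [0] * 10   # 10 cars on a 100-cell ring
for _ in range(50):
    pos, vel = step(pos, vel, 100)
print("mean speed after 50 steps:", sum(vel) / len(vel))
```

At this low density the model stays in free flow; three-phase CA models add a synchronization gap and speed adaptation on top of rules like these.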
Probabilistic fatigue methodology for six nines reliability
NASA Technical Reports Server (NTRS)
Everett, R. A., Jr.; Bartlett, F. D., Jr.; Elber, Wolf
1990-01-01
Fleet readiness and flight safety strongly depend on the degree of reliability that can be designed into rotorcraft flight-critical components. The current U.S. Army fatigue life specification for new rotorcraft is the so-called six nines reliability, or a probability of failure of one in a million. The progress of a round robin which was established by the American Helicopter Society (AHS) Subcommittee for Fatigue and Damage Tolerance is reviewed to investigate reliability-based fatigue methodology. The participants in this cooperative effort are from the U.S. Army Aviation Systems Command (AVSCOM) and the rotorcraft industry. One phase of the joint activity examined fatigue reliability under uniquely defined conditions for which only one answer was correct. The other phases were set up to learn how the different industry methods of defining fatigue strength affected the mean fatigue life and reliability calculations. Hence, constant amplitude and spectrum fatigue test data were provided so that each participant could perform their standard fatigue life analysis. As a result of this round robin, the probabilistic logic which includes both fatigue strength and spectrum loading variability in developing a consistent reliability analysis was established. In this first study, the reliability analysis was limited to the linear cumulative damage approach. However, it is expected that superior fatigue life prediction methods will ultimately be developed through this open AHS forum. To that end, these preliminary results were useful in identifying some topics for additional study.
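The linear cumulative damage approach the round robin settled on combines Miner's rule with a random S-N curve; a compact Monte Carlo sketch follows. The S-N form, its scatter, and the load spectrum are all invented for illustration (a real six-nines demonstration would need far tighter scatter and variance-reduction techniques).

```python
import random

random.seed(4)

# assumed S-N model N(S) = A * S^(-B) with lognormal scatter on A
B = 4.0
LOG10_A_MEAN, LOG10_A_STD = 16.0, 0.2

# assumed annual load spectrum: (stress amplitude MPa, cycles per year)
SPECTRUM = [(400.0, 2_000), (300.0, 20_000), (200.0, 150_000)]

def damage_per_year():
    a = 10.0 ** random.gauss(LOG10_A_MEAN, LOG10_A_STD)
    # Miner's rule: damage = sum_i n_i / N(S_i)
    return sum(n * s ** B / a for s, n in SPECTRUM)

n_mc, years = 100_000, 10.0
p_fail = sum(damage_per_year() * years >= 1.0 for _ in range(n_mc)) / n_mc
print(f"P(Miner damage >= 1 within {years:.0f} yr) ~ {p_fail:.3f}")
```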
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuchs, E.F.; You, Y.; Roesler, D.J.
This paper proposes a new model for three-phase transformers with three legs, with and without tank, under DC bias, based on electric and magnetic circuit theory. For the calculation of the nonsinusoidal no-load currents, a combination of time and frequency domains is used. The analysis shows that (1) asymmetric three-phase transformers with three legs generate magnetizing currents with triplen harmonics not being of the zero-sequence type. (2) The wave shapes of the three magnetizing currents of (asymmetric) transformers are dependent on the phase sequence. (3) The magnetic history of transformer magnetization -- due to residual magnetization and hysteresis of the tank -- cannot be ignored if a DC bias is present and the magnetic influence of the tank is relatively strong, e.g., for oil-cooled transformers. (4) Symmetric three-phase transformers with three legs generate no-load currents without triplen harmonics. (5) The effects of DC bias currents (e.g., reactive power demand, harmonic distortion) can be suppressed by employing symmetric three-phase transformers with three legs including the tank. Measurements corroborate computational results; thus this nonlinear model is valid and accurate.
Acoustic emission based damage localization in composites structures using Bayesian identification
NASA Astrophysics Data System (ADS)
Kundu, A.; Eaton, M. J.; Al-Jumali, S.; Sikdar, S.; Pullin, R.
2017-05-01
Acoustic emission based damage detection in composite structures relies on detecting ultra-high-frequency packets of acoustic waves emitted from damage sources (such as fibre breakage and fatigue fracture, amongst others) with a network of distributed sensors. This non-destructive monitoring scheme requires solving an inverse problem in which the measured signals are linked back to the location of the source. This in turn enables rapid deployment of mitigative measures. The presence of a significant amount of uncertainty associated with the operating conditions and measurements makes the problem of damage identification quite challenging. The uncertainties stem from the fact that the measured signals are affected by irregular geometries, manufacturing imprecision, imperfect boundary conditions, and existing damage/structural degradation, amongst others. This work aims to tackle these uncertainties within a framework of automated probabilistic damage detection. The method trains a probabilistic model of the parametrized input and output of the acoustic emission system with experimental data to give probabilistic descriptors of damage locations. A response surface modelling the acoustic emission as a function of parametrized damage signals collected from sensors is calibrated with a training dataset using Bayesian inference. This is used to deduce damage locations in the online monitoring phase. During online monitoring, the spatially correlated time data are utilized in conjunction with the calibrated acoustic emission model to infer the probabilistic description of the acoustic emission source within a hierarchical Bayesian inference framework. The methodology is tested on a composite structure consisting of a carbon fibre panel with stiffeners, and damage source behaviour has been experimentally simulated using standard H-N sources.
The methodology presented in this study would be applicable in the current form to structural damage detection under varying operational loads and would be investigated in future studies.
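The inverse problem of locating a source from sensor data can be sketched as a grid search maximizing a Gaussian likelihood over arrival-time differences. The sensor layout, wave speed, and noise level below are assumptions for the demo, not the experimental setup of the paper (which calibrates a response surface rather than using a geometric model).

```python
import math

SENSORS = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # m, assumed layout
C = 5000.0        # m/s, assumed wave speed in the panel
SIGMA_T = 2e-6    # s, assumed timing noise

def predicted_dt(src):
    # arrival-time differences relative to sensor 0 for a candidate source
    t = [math.dist(src, s) / C for s in SENSORS]
    return [ti - t[0] for ti in t[1:]]

true_src = (0.30, 0.62)
measured = predicted_dt(true_src)      # noise-free measurements for the demo

best, best_ll = None, -math.inf
for i in range(101):
    for j in range(101):
        cand = (i / 100, j / 100)
        resid = sum((m - p) ** 2 for m, p in zip(measured, predicted_dt(cand)))
        ll = -resid / (2 * SIGMA_T ** 2)   # Gaussian log-likelihood (up to a constant)
        if ll > best_ll:
            best, best_ll = cand, ll
print("MAP source estimate:", best)
```

With noisy data, the same likelihood evaluated over the whole grid gives the probabilistic description of the source location rather than a single point.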
NASA Astrophysics Data System (ADS)
Tippett, Michael K.; Ranganathan, Meghana; L'Heureux, Michelle; Barnston, Anthony G.; DelSole, Timothy
2017-05-01
Here we examine the skill of three-, five-, and seven-category monthly ENSO probability forecasts (1982-2015) from single and multi-model ensemble integrations of the North American Multimodel Ensemble (NMME) project. Three-category forecasts are typical and provide probabilities for the ENSO phase (El Niño, La Niña, or neutral). Additional forecast categories indicate the likelihood of ENSO conditions being weak, moderate, or strong. The level of skill observed for differing numbers of forecast categories can help to determine the appropriate degree of forecast precision. However, the dependence of the skill score itself on the number of forecast categories must be taken into account. For reliable forecasts with the same quality, the ranked probability skill score (RPSS) is fairly insensitive to the number of categories, while the logarithmic skill score (LSS) is an information measure and increases as categories are added. The ignorance skill score decreases to zero as forecast categories are added, regardless of skill level. For all models, forecast formats, and skill scores, the northern spring predictability barrier explains much of the dependence of skill on target month and forecast lead. RPSS values for monthly ENSO forecasts show little dependence on the number of categories. However, the LSS of multimodel ensemble forecasts with five and seven categories shows statistically significant advantages over the three-category forecasts for the targets and leads that are least affected by the spring predictability barrier. These findings indicate that current prediction systems are capable of providing more detailed probabilistic forecasts of ENSO phase and amplitude than are typically provided.
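The ranked probability score underlying the RPSS compares cumulative forecast and observed category probabilities; a worked three-category example with invented probabilities:

```python
def rps(forecast_probs, obs_category):
    """RPS = sum over categories of (cumulative forecast - cumulative obs)^2."""
    cum_f = cum_o = score = 0.0
    for k, p in enumerate(forecast_probs):
        cum_f += p
        cum_o += 1.0 if k == obs_category else 0.0
        score += (cum_f - cum_o) ** 2
    return score

forecast = [0.6, 0.3, 0.1]          # P(La Nina), P(neutral), P(El Nino) -- invented
climatology = [1 / 3, 1 / 3, 1 / 3]
observed = 0                        # La Nina occurred

rps_f = rps(forecast, observed)
rps_c = rps(climatology, observed)
rpss = 1.0 - rps_f / rps_c          # skill relative to climatology
print(f"RPS(forecast)={rps_f:.3f}  RPS(clim)={rps_c:.3f}  RPSS={rpss:.3f}")
```

Because the RPS uses cumulative probabilities, splitting a category into weak/moderate/strong sub-categories changes the score only slightly, which is why the RPSS is fairly insensitive to the number of categories.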
Probabilistic assessment of uncertain adaptive hybrid composites
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.
1994-01-01
Adaptive composite structures using actuation materials, such as piezoelectric fibers, were assessed probabilistically utilizing intraply hybrid composite mechanics in conjunction with probabilistic composite structural analysis. Uncertainties associated with the actuation material as well as the uncertainties in the regular (traditional) composite material properties were quantified and considered in the assessment. Static and buckling analyses were performed for rectangular panels with various boundary conditions and different control arrangements. The probability density functions of the structural behavior, such as maximum displacement and critical buckling load, were computationally simulated. The results of the assessment indicate that improved design and reliability can be achieved with actuation material.
Probabilistic safety assessment of the design of tall buildings under extreme load
DOE Office of Scientific and Technical Information (OSTI.GOV)
Králik, Juraj, E-mail: juraj.kralik@stuba.sk
2016-06-08
The paper describes some experiences from the deterministic and probabilistic analysis of the safety of a tall building structure. The methods and requirements of Eurocode EN 1990, standard ISO 2394, and the JCSS are presented. The uncertainties of the model and of the resistance of the structures are considered using simulation methods. The MONTE CARLO, LHS, and RSM probabilistic methods are compared with the deterministic results. The effectiveness of probability-based design of structures using finite element methods is demonstrated on an example of the probability analysis of the safety of tall buildings.
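The Monte Carlo versus LHS comparison can be reproduced in miniature on a linear limit state g = R - S with a known exact answer. All distributions are illustrative, not the building model of the paper.

```python
import math
import random

random.seed(5)

# limit state g = R - S with normal resistance R and load S (illustrative)
MU_R, SD_R, MU_S, SD_S = 5.0, 0.6, 3.0, 0.5

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_inv(p):
    # inverse normal CDF by bisection (slow but dependency-free)
    lo, hi = -8.0, 8.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n = 10_000

# crude Monte Carlo
mc = sum(random.gauss(MU_R, SD_R) < random.gauss(MU_S, SD_S) for _ in range(n)) / n

# Latin hypercube sampling: one draw per equiprobable stratum, pairing shuffled
r_vals = [MU_R + SD_R * norm_inv((i + random.random()) / n) for i in range(n)]
s_vals = [MU_S + SD_S * norm_inv((i + random.random()) / n) for i in range(n)]
random.shuffle(s_vals)
lhs = sum(r < s for r, s in zip(r_vals, s_vals)) / n

exact = norm_cdf(-(MU_R - MU_S) / math.sqrt(SD_R ** 2 + SD_S ** 2))
print(f"exact {exact:.4f}  crude MC {mc:.4f}  LHS {lhs:.4f}")
```

Repeating this with many seeds shows the LHS estimate scattering less around the exact value, which is the practical argument for LHS over crude Monte Carlo at a fixed sample budget.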
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A probabilistic structural analysis method (PSAM) is described which makes an approximate calculation of the structural response of a system, including the associated probabilistic distributions, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The method employs the fast probability integration (FPI) algorithm of Wu and Wirsching. Typical solution strategies are illustrated by formulations for a representative critical component chosen from the Space Shuttle Main Engine (SSME) as part of a major NASA-sponsored program on PSAM. Typical results are presented to demonstrate the role of the methodology in engineering design and analysis.
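An approximate-methods calculation in this spirit can be sketched as a mean-value first-order estimate: linearize the response at the mean point, then convert the resulting safety index to a failure probability. The thin-wall stress response and all numbers are illustrative, not the SSME component model or the FPI algorithm itself.

```python
import math

def stress(p, r, t):
    return p * r / t          # thin-wall hoop stress, an assumed response function

means = {"p": 2.0e6, "r": 0.5, "t": 8.0e-3}   # Pa, m, m (illustrative)
sds   = {"p": 0.2e6, "r": 0.005, "t": 0.4e-3}

def partial(var, h=1e-6):
    # central-difference partial derivative at the mean point
    up = dict(means); up[var] *= (1 + h)
    dn = dict(means); dn[var] *= (1 - h)
    return (stress(**up) - stress(**dn)) / (2 * means[var] * h)

mu_z = stress(**means)
sd_z = math.sqrt(sum((partial(v) * sds[v]) ** 2 for v in means))

allowable = 1.9e8  # Pa, assumed strength
beta = (allowable - mu_z) / sd_z                   # first-order safety index
pf = 0.5 * (1 + math.erf(-beta / math.sqrt(2)))    # Pf ~ Phi(-beta)
print(f"mu={mu_z:.3e} Pa  sd={sd_z:.3e} Pa  beta={beta:.2f}  Pf~{pf:.1e}")
```

The appeal is cost: one function evaluation per variable per derivative, instead of the thousands of structural solutions a Monte Carlo estimate of a small probability would need.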
Probabilistic safety assessment of the design of tall buildings under extreme load
NASA Astrophysics Data System (ADS)
Králik, Juraj
2016-06-01
The paper describes some experiences from the deterministic and probabilistic analysis of the safety of a tall building structure. The methods and requirements of Eurocode EN 1990, standard ISO 2394, and the JCSS are presented. The uncertainties of the model and of the resistance of the structures are considered using simulation methods. The MONTE CARLO, LHS, and RSM probabilistic methods are compared with the deterministic results. The effectiveness of probability-based design of structures using finite element methods is demonstrated on an example of the probability analysis of the safety of tall buildings.
2016-05-11
the phases of the system load and ground, so to size the voltage divider appropriately Vsys is set equal to the maximum phase-to-ground voltage. The...civilian and military systems is increasing due to technological improvements in power conversion and changing requirements in system loads. The development...of high-power pulsed loads on naval platforms, such as the Laser Weapon System (LaWS) and the electromagnetic railgun, calls for the ability to
Probabilistic analysis of structures involving random stress-strain behavior
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Thacker, B. H.; Harren, S. V.
1991-01-01
The present methodology for analysis of structures with random stress-strain behavior characterizes the uniaxial stress-strain curve in terms of (1) elastic modulus, (2) engineering stress at initial yield, (3) initial plastic-hardening slope, (4) engineering stress at the point of ultimate load, and (5) engineering strain at the point of ultimate load. The methodology is incorporated into the Numerical Evaluation of Stochastic Structures Under Stress code for probabilistic structural analysis. The illustrative problem of a thick cylinder under internal pressure, where both the internal pressure and the stress-strain curve are random, is addressed by means of the code. The response value is the cumulative distribution function of the equivalent plastic strain at the inner radius.
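The five-parameter characterization can be written down directly as a piecewise stress-strain curve with random parameters. The distributions and the piecewise-linear form below are illustrative assumptions, not the NESSUS material model.

```python
import random

random.seed(6)

def sample_curve():
    # the five random parameters of the uniaxial characterization (assumed values)
    E  = random.gauss(200e9, 10e9)    # (1) elastic modulus, Pa
    sy = random.gauss(350e6, 20e6)    # (2) engineering stress at initial yield
    H  = random.gauss(2.0e9, 0.2e9)   # (3) initial plastic-hardening slope
    su = random.gauss(500e6, 25e6)    # (4) engineering stress at ultimate load
    eu = random.gauss(0.15, 0.01)     # (5) engineering strain at ultimate load
    return E, sy, H, su, eu

def stress_at(strain, params):
    E, sy, H, su, eu = params
    ey = sy / E                       # yield strain
    if strain <= ey:
        return E * strain             # elastic branch
    return min(sy + H * (strain - ey), su)   # hardening branch, capped at ultimate

curves = [sample_curve() for _ in range(10_000)]
stresses = [stress_at(0.05, p) for p in curves]
mean_s = sum(stresses) / len(stresses)
print(f"mean stress at 5% strain: {mean_s / 1e6:.0f} MPa")
```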
Load partitioning in Al{sub 2}O{sub 3}-Al composites with three-dimensional periodic architecture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, M. L.; Rao, R.; Almer, J. D.
2009-05-01
Interpenetrating composites are created by infiltration of liquid aluminum into three-dimensional (3-D) periodic Al{sub 2}O{sub 3} preforms with simple tetragonal symmetry produced by direct-write assembly. Volume-averaged lattice strains in the Al{sub 2}O{sub 3} phase of the composite are measured by synchrotron X-ray diffraction for various uniaxial compression stresses up to -350 MPa. Load transfer, found by diffraction to occur from the metal phase to the ceramic phase, is in general agreement with simple rule-of-mixture models and in better agreement with more complex, 3-D finite-element models that account for metal plasticity and details of the geometry of both phases. Spatially resolved diffraction measurements show variations in load transfer at two different positions within the composite.
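The simple rule-of-mixtures comparison mentioned above, under an iso-strain assumption, takes one line per phase. The moduli and volume fractions below are typical handbook values assumed for illustration, not the specimen's.

```python
E_AL, E_AL2O3 = 70e9, 370e9          # Pa, assumed phase moduli
V_AL2O3 = 0.45                       # assumed ceramic volume fraction
V_AL = 1.0 - V_AL2O3

applied = -350e6                     # Pa, uniaxial compression as in the study

# iso-strain rule of mixtures: both phases see the same strain,
# so phase stresses scale with their moduli
E_comp = V_AL * E_AL + V_AL2O3 * E_AL2O3
strain = applied / E_comp
sigma_al = E_AL * strain
sigma_ceramic = E_AL2O3 * strain

print(f"composite strain : {strain * 1e6:.0f} microstrain")
print(f"stress in Al     : {sigma_al / 1e6:.0f} MPa")
print(f"stress in Al2O3  : {sigma_ceramic / 1e6:.0f} MPa")
```

The stiffer ceramic phase carries a disproportionate share of the stress, which is the load transfer the diffraction measurements detect via the ceramic lattice strains.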
Quantifying uncertainty in stable isotope mixing models
Davis, Paul; Syme, James; Heikoop, Jeffrey; ...
2015-05-19
Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing-fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.
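When the number of sources exceeds the number of tracers by exactly one, the mixing fractions are fully determined by mass balance; a three-source, two-tracer example with invented source signatures (the probabilistic methods above are needed precisely when this determinate case does not apply):

```python
# sum f_i = 1; sum f_i * d15N_i = d15N_sample; sum f_i * d18O_i = d18O_sample
sources = {                      # (d15N, d18O) per mil -- invented signatures
    "fertilizer": (0.0, 22.0),
    "soil N":     (5.0, 3.0),
    "manure":     (12.0, 6.0),
}
sample = (6.5, 7.0)

names = list(sources)
a = [[1.0, 1.0, 1.0],
     [sources[n][0] for n in names],
     [sources[n][1] for n in names]]
b = [1.0, sample[0], sample[1]]

# Gauss-Jordan elimination with partial pivoting on the 3x3 system
for col in range(3):
    piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
    a[col], a[piv] = a[piv], a[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(3):
        if r != col and a[r][col]:
            m = a[r][col] / a[col][col]
            a[r] = [x - m * y for x, y in zip(a[r], a[col])]
            b[r] -= m * b[col]

fracs = {n: b[i] / a[i][i] for i, n in enumerate(names)}
print({n: round(f, 3) for n, f in fracs.items()})
```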
Probabilistic Sizing and Verification of Space Ceramic Structures
NASA Astrophysics Data System (ADS)
Denaux, David; Ballhause, Dirk; Logut, Daniel; Lucarelli, Stefano; Coe, Graham; Laine, Benoit
2012-07-01
Sizing of ceramic parts is best optimised using a probabilistic approach which takes into account the preexisting flaw distribution in the ceramic part to compute a probability of failure of the part depending on the applied load, instead of a maximum allowable load as for a metallic part. This requires extensive knowledge of the material itself but also an accurate control of the manufacturing process. In the end, risk reduction approaches such as proof testing may be used to lower the final probability of failure of the part. Sizing and verification of ceramic space structures have been performed by Astrium for more than 15 years, both with Zerodur and SiC: Silex telescope structure, Seviri primary mirror, Herschel telescope, Formosat-2 instrument, and other ceramic structures flying today. Throughout this period of time, Astrium has investigated and developed experimental ceramic analysis tools based on the Weibull probabilistic approach. In the scope of the ESA/ESTEC study: “Mechanical Design and Verification Methodologies for Ceramic Structures”, which is to be concluded in the beginning of 2012, existing theories, technical state-of-the-art from international experts, and Astrium experience with probabilistic analysis tools have been synthesized into a comprehensive sizing and verification method for ceramics. Both classical deterministic and more optimised probabilistic methods are available, depending on the criticality of the item and on optimisation needs. The methodology, based on proven theory, has been successfully applied to demonstration cases and has shown its practical feasibility.
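The Weibull probabilistic approach mentioned above gives the failure probability in closed form for a uniform stress state, and shows why proof testing lowers the in-service risk. The Weibull modulus, characteristic strength, and volume below are illustrative, not values for Zerodur or SiC.

```python
import math

def p_failure(sigma, m, sigma_0, v, v_0=1.0):
    """Two-parameter Weibull: P_f = 1 - exp(-(V/V0) * (sigma/sigma_0)^m)."""
    return 1.0 - math.exp(-(v / v_0) * (sigma / sigma_0) ** m)

M, SIGMA_0, V = 10.0, 300.0, 0.5   # assumed modulus, char. strength (MPa), volume

for stress in (100.0, 150.0, 200.0):
    print(f"stress {stress:>5.0f} MPa -> P_f = {p_failure(stress, M, SIGMA_0, V):.2e}")

# proof testing: under the weakest-link model, a part that survived a 180 MPa
# proof load cannot fail at a lower service stress in the same stress state,
# so the conditional in-service failure probability collapses
pf_service = p_failure(150.0, M, SIGMA_0, V)
pf_proof = p_failure(180.0, M, SIGMA_0, V)
pf_after_proof = max(0.0, (pf_service - pf_proof) / (1.0 - pf_proof))
print(f"P_f at 150 MPa after 180 MPa proof test: {pf_after_proof:.1e}")
```

In practice the stress state is non-uniform and the failure probability is assembled by integrating the Weibull risk over the finite element stress field, but the volume and stress scaling is the same.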
Learning to breathe? Feedforward regulation of the inspiratory motor drive.
Zaman, Jonas; Van den Bergh, Omer; Fannes, Stien; Van Diest, Ilse
2014-09-15
Claims have been made that breathing is in part controlled by feedforward regulation. In a classical conditioning paradigm, we investigated anticipatory increases in the inspiratory motor drive as measured by inspiratory occlusion pressure (P100). In an acquisition phase, an experimental group (N=13) received a low-intensity resistive load (5 cmH2O/l/s) for three consecutive inspirations as Conditioned Stimulus (CS), preceding a load of a stronger intensity (20 cmH2O/l/s) for three subsequent inspirations as unconditioned stimulus (US). The control group (N=11) received the low-intensity load for six consecutive inspirations. In a post-acquisition phase both groups received the low-intensity load for six consecutive inspirations. Responses to the CS-load only differed between groups during the first acquisition trials and a strong increase in P100 during the US-loads was observed, which habituated across the experiment. Our results suggest that the disruption caused by adding low to moderate resistive loads to three consecutive inspirations results in a short-lasting anticipatory increase in inspiratory motor drive. Copyright © 2014 Elsevier B.V. All rights reserved.
Reprint of "Learning to breathe? Feedforward regulation of the inspiratory motor drive".
Zaman, Jonas; Van den Bergh, Omer; Fannes, Stien; Van Diest, Ilse
2014-12-01
Claims have been made that breathing is in part controlled by feedforward regulation. In a classical conditioning paradigm, we investigated anticipatory increases in the inspiratory motor drive as measured by inspiratory occlusion pressure (P100). In an acquisition phase, an experimental group (N = 13) received a low-intensity resistive load (5 cmH2O/l/s) for three consecutive inspirations as Conditioned Stimulus (CS), preceding a load of a stronger intensity (20 cmH2O/l/s) for three subsequent inspirations as unconditioned stimulus (US). The control group (N = 11) received the low-intensity load for six consecutive inspirations. In a post-acquisition phase both groups received the low-intensity load for six consecutive inspirations. Responses to the CS-load only differed between groups during the first acquisition trials and a strong increase in P100 during the US-loads was observed, which habituated across the experiment. Our results suggest that the disruption caused by adding low to moderate resistive loads to three consecutive inspirations results in a short-lasting anticipatory increase in inspiratory motor drive. Copyright © 2014 Elsevier B.V. All rights reserved.
CARES/Life Software for Designing More Reliable Ceramic Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Powers, Lynn M.; Baker, Eric H.
1997-01-01
Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion, and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software eases this task by providing a tool to optimize the design and manufacture of brittle material components using probabilistic reliability analysis techniques. Probabilistic component design involves predicting the probability of failure for a thermomechanically loaded component from specimen rupture data. Typically, these experiments are performed using many simple geometry flexural or tensile test specimens. A static, dynamic, or cyclic load is applied to each specimen until fracture. Statistical strength and SCG (fatigue) parameters are then determined from these data. Using these parameters and the results obtained from a finite element analysis, the time-dependent reliability for a complex component geometry and loading is then predicted. Appropriate design changes are made until an acceptable probability of failure has been reached.
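The core probabilistic design calculation the abstract describes can be sketched as follows: estimate two-parameter Weibull strength statistics from specimen rupture data, then predict the probability of failure at a service stress. This is a minimal illustration using median-rank regression; CARES/Life itself uses more sophisticated estimators and integrates the failure risk over the component volume from finite element results.

```python
import math

def weibull_fit(strengths):
    """Estimate the Weibull modulus m and scale s0 from specimen rupture
    strengths using median-rank linear regression (illustrative only)."""
    x = sorted(strengths)
    n = len(x)
    xs, ys = [], []
    for i, s in enumerate(x, start=1):
        F = (i - 0.3) / (n + 0.4)  # Bernard's median-rank approximation
        xs.append(math.log(s))
        ys.append(math.log(-math.log(1.0 - F)))
    mx = sum(xs) / n
    my = sum(ys) / n
    # Least-squares slope of y on x is the Weibull modulus m
    m = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
        sum((a - mx) ** 2 for a in xs)
    s0 = math.exp(mx - my / m)
    return m, s0

def failure_probability(stress, m, s0):
    """Weibull probability of failure at a given applied stress."""
    return 1.0 - math.exp(-((stress / s0) ** m))
```

For strengths that lie exactly on a Weibull line, the regression recovers the parameters exactly; real rupture data scatter around it.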
Structural reliability methods: Code development status
NASA Astrophysics Data System (ADS)
Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.
1991-05-01
The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
Structural reliability methods: Code development status
NASA Technical Reports Server (NTRS)
Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.
1991-01-01
The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
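As a minimal illustration of the fast-probability-integration idea, the first-order reliability of a linear limit state with independent normal variables has a closed form; NESSUS/FPI generalizes this to nonlinear limit states and non-normal variables, so the sketch below is only the simplest special case.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def form_linear(mu_r, sd_r, mu_s, sd_s):
    """First-order reliability for the linear limit state g = R - S with
    independent normal resistance R and load S: the reliability index
    beta and the probability of failure P(g <= 0)."""
    beta = (mu_r - mu_s) / math.hypot(sd_r, sd_s)
    return beta, phi(-beta)
```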
Unknown loads affect force production capacity in early phases of bench press throws.
Hernández Davó, J L; Sabido Solana, R; Sarabia Marín, J M; Sánchez Martos, Á; Moya Ramón, M
2015-10-01
Explosive strength training aims to improve force generation in the early phases of movement because of its importance in sport performance. The present study examined the influence of a lack of knowledge about the load lifted on explosive parameters during bench press throws. Thirteen healthy young men (22.8±2.0 years) participated in the study. Participants performed bench press throws with three different loads (30, 50 and 70% of 1 repetition maximum) in two different conditions (known and unknown loads). In the unknown condition, loads were changed within sets in each repetition and participants did not know the load, whereas in the known condition the load did not change within sets and participants had knowledge of the load lifted. Results of repeated-measures ANOVA revealed that the unknown condition involved higher power in the first 30, 50, 100 and 150 ms with all three loads, higher values of rate of force development in those first instants, and differences in time to reach maximal rate of force development with 50 and 70% of 1 repetition maximum. This study showed that unknown conditions elicit higher values of explosive parameters in the early phases of bench press throws; this kind of methodology could therefore be considered in explosive strength training.
The probabilistic nature of preferential choice.
Rieskamp, Jörg
2008-11-01
Previous research has developed a variety of theories explaining when and why people's decisions under risk deviate from the standard economic view of expected utility maximization. These theories are limited in their predictive accuracy in that they do not explain the probabilistic nature of preferential choice, that is, why an individual makes different choices in nearly identical situations, or why the magnitude of these inconsistencies varies in different situations. To illustrate the advantage of probabilistic theories, three probabilistic theories of decision making under risk are compared with their deterministic counterparts. The probabilistic theories are (a) a probabilistic version of a simple choice heuristic, (b) a probabilistic version of cumulative prospect theory, and (c) decision field theory. By testing the theories with the data from three experimental studies, the superiority of the probabilistic models over their deterministic counterparts in predicting people's decisions under risk becomes evident. When testing the probabilistic theories against each other, decision field theory provides the best account of the observed behavior.
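The contrast between deterministic and probabilistic choice theories can be illustrated with a logit (softmax) choice rule layered on expected utility: instead of always picking the higher-utility gamble, the model assigns a choice probability that grows with the utility difference. The power utility and sensitivity parameters below are illustrative placeholders, not values from the article.

```python
import math

def expected_utility(gamble, alpha=0.88):
    """Expected utility of a gamble given as [(probability, outcome), ...]
    with a power utility function (alpha is an illustrative parameter)."""
    return sum(p * (x ** alpha) for p, x in gamble)

def choice_probability(gamble_a, gamble_b, sensitivity=1.0):
    """Logit (softmax) rule: the probability of choosing A over B grows
    with the utility difference but never reaches 0 or 1, capturing the
    choice inconsistency that deterministic theories cannot predict."""
    d = sensitivity * (expected_utility(gamble_a) - expected_utility(gamble_b))
    return 1.0 / (1.0 + math.exp(-d))
```

With `sensitivity` large, the rule approaches the deterministic theory; with it small, choices approach random guessing.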
New approaches to provide ride-through for critical loads in electric power distribution systems
NASA Astrophysics Data System (ADS)
Montero-Hernandez, Oscar C.
2001-07-01
The extensive use of electronic circuits has enabled modernization, automation, miniaturization, high quality, low cost, and other achievements regarding electric loads in recent decades. However, modern electronic circuits and systems are extremely sensitive to disturbances from the electric power supply, and the rate at which these disturbances occur is considerable, as has been documented in recent years. In response to these power quality concerns, this dissertation proposes new approaches to provide ride-through for critical loads during voltage disturbances, with emphasis on voltage sags. First, a new approach based on an AC-DC-AC system is proposed to provide ride-through for critical loads connected in buildings and/or an industrial system. In this approach, a three-phase IGBT inverter with a built-in DC-link voltage regulator is suitably controlled along with static bypass switches to provide continuous power to critical loads. During a disturbance, the input utility source is disconnected and the power from the inverter is connected to the load. The remaining voltage in the AC supply is converted to DC and compensated before being applied to the inverter and the load. After normal utility conditions are detected, power from the utility is restored to the critical load. To achieve extended ride-through capability, a second approach is introduced in which DC-link voltage regulation is performed by a DC-DC buck-boost converter. This approach can mitigate voltage variations both below and above the nominal value. In the third approach presented in this dissertation, a three-phase AC-AC boost converter is investigated. This converter boosts the utility input voltages right before they are applied to the load.
The proposed Pulse Width Modulation (PWM) control strategy ensures independent control of each phase and compensates for both single-phase and poly-phase voltage sags. Algorithms capable of detecting voltage disturbances such as voltage sags, voltage swells, flicker, frequency changes, and harmonics in a fast and reliable way are investigated and developed in this dissertation as an essential part of the approaches described above. Simulation and experimental work validates the feasibility of all approaches under the most common voltage disturbances, such as single-phase and three-phase voltage sags.
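A simplified example of the disturbance-detection component: flag a voltage sag when a sliding one-cycle RMS of the sampled waveform drops below 0.9 per unit (the conventional sag threshold). This is a textbook RMS detector used here for illustration; the dissertation develops faster and more elaborate algorithms.

```python
import math

def rms_sag_detector(samples, nominal_rms, samples_per_cycle, threshold=0.9):
    """Return the first window start index at which the one-cycle RMS of
    the sampled waveform drops below threshold * nominal_rms, or None if
    no sag is detected."""
    for start in range(len(samples) - samples_per_cycle + 1):
        window = samples[start:start + samples_per_cycle]
        rms = math.sqrt(sum(v * v for v in window) / samples_per_cycle)
        if rms < threshold * nominal_rms:
            return start
    return None
```

A drawback of the plain one-cycle RMS is detection delay (up to a cycle), which is why faster algorithms matter for ride-through control.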
A probabilistic bridge safety evaluation against floods.
Liao, Kuo-Wei; Muto, Yasunori; Chen, Wei-Lun; Wu, Bang-Ho
2016-01-01
To further capture the influences of uncertain factors on river bridge safety evaluation, a probabilistic approach is adopted. Because this is a nonlinear, system-level problem, MPP-based reliability analyses are not suitable. A sampling approach such as a Monte Carlo simulation (MCS) or importance sampling is often adopted. To enhance the efficiency of the sampling approach, this study utilizes Bayesian least squares support vector machines to construct a response surface followed by an MCS, providing a more precise safety index. Although there are several factors impacting the flood-resistant reliability of a bridge, previous experiences and studies show that the reliability of the bridge itself plays a key role. Thus, the goal of this study is to analyze the system reliability of a selected bridge that includes five limit states. The random variables considered here include the water surface elevation, water velocity, local scour depth, soil property and wind load. Because the first three variables are deeply affected by river hydraulics, a probabilistic HEC-RAS-based simulation is performed to capture the uncertainties in those random variables. The accuracy and variation of our solutions are confirmed by a direct MCS to ensure the applicability of the proposed approach. The results of a numerical example indicate that the proposed approach can efficiently provide an accurate bridge safety evaluation and maintain satisfactory variation.
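The sampling step can be sketched as a plain Monte Carlo estimate of the failure probability and the corresponding safety index. In the study the expensive hydraulic/structural model is first replaced by a Bayesian LS-SVM response surface; the limit state below is a cheap hypothetical stand-in, so it is sampled directly.

```python
import random
from statistics import NormalDist

def mcs_safety_index(limit_state, sampler, n=100_000, seed=1):
    """Crude Monte Carlo estimate of the failure probability P(g <= 0)
    and the corresponding safety (reliability) index beta."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n) if limit_state(sampler(rng)) <= 0.0)
    pf = fails / n
    beta = -NormalDist().inv_cdf(pf) if 0 < pf < 1 else float("inf")
    return pf, beta

# Hypothetical limit state: capacity R ~ N(8, 1) against demand S ~ N(5, 1)
def g(x):
    r, s = x
    return r - s

def sample(rng):
    return rng.gauss(8.0, 1.0), rng.gauss(5.0, 1.0)
```

For this linear normal case the exact safety index is 3/sqrt(2) ≈ 2.12, so the sampled estimate can be checked directly.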
Probabilistic Structural Analysis Program
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.
2010-01-01
NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and lifing methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.
Probabilistic evaluation of SSME structural components
NASA Astrophysics Data System (ADS)
Rajagopal, K. R.; Newell, J. F.; Ho, H.
1991-05-01
The application is described of Composite Load Spectra (CLS) and Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) family of computer codes to the probabilistic structural analysis of four Space Shuttle Main Engine (SSME) space propulsion system components. These components are subjected to environments that are influenced by many random variables. The applications consider a wide breadth of uncertainties encountered in practice, while simultaneously covering a wide area of structural mechanics. This has been done consistent with the primary design requirement for each component. The probabilistic application studies are discussed using finite element models that have been typically used in the past in deterministic analysis studies.
Methods for Combining Payload Parameter Variations with Input Environment
NASA Technical Reports Server (NTRS)
Merchant, D. H.; Straayer, J. W.
1975-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the methods are also presented.
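The extreme-value idea can be sketched directly: fit a Gumbel (extreme-value type I) distribution to per-mission maximum loads and take the design limit load as a chosen non-exceedance quantile. The method-of-moments fit and the 99% level below are illustrative choices, not the report's procedure.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_from_peaks(peaks):
    """Method-of-moments fit of a Gumbel distribution to a sample of
    per-mission maximum loads; returns (location, scale)."""
    n = len(peaks)
    mean = sum(peaks) / n
    var = sum((x - mean) ** 2 for x in peaks) / (n - 1)
    scale = math.sqrt(6.0 * var) / math.pi
    location = mean - EULER_GAMMA * scale
    return location, scale

def design_limit_load(location, scale, p=0.99):
    """Design limit load: the load with probability p of not being
    exceeded in a mission (the Gumbel quantile function)."""
    return location - scale * math.log(-math.log(p))
```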
Wind/tornado design criteria, development to achieve required probabilistic performance goals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, D.S.
1991-06-01
This paper describes the strategy for developing new design criteria for a critical facility to withstand loading induced by the wind/tornado hazard. The proposed design requirements for resisting wind/tornado loads are based on probabilistic performance goals. The proposed design criteria were prepared by a Working Group consisting of six experts in wind/tornado engineering and meteorology. Utilizing their best technical knowledge and judgment in the wind/tornado field, they met and discussed the methodologies and reviewed available data. A review of the available wind/tornado hazard model for the site, structural response evaluation methods, and conservative acceptance criteria led to proposed design criteria that have a high probability of achieving the required performance goals.
Wind/tornado design criteria, development to achieve required probabilistic performance goals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, D.S.
This paper describes the strategy for developing new design criteria for a critical facility to withstand loading induced by the wind/tornado hazard. The proposed design requirements for resisting wind/tornado loads are based on probabilistic performance goals. The proposed design criteria were prepared by a Working Group consisting of six experts in wind/tornado engineering and meteorology. Utilizing their best technical knowledge and judgment in the wind/tornado field, they met and discussed the methodologies and reviewed available data. A review of the available wind/tornado hazard model for the site, structural response evaluation methods, and conservative acceptance criteria led to proposed design criteria that have a high probability of achieving the required performance goals.
Probabilistic short-term volcanic hazard in phases of unrest: A case study for tephra fallout
NASA Astrophysics Data System (ADS)
Selva, Jacopo; Costa, Antonio; Sandri, Laura; Macedonio, Giovanni; Marzocchi, Warner
2014-12-01
During volcanic crises, volcanologists usually estimate the impact of possible imminent eruptions through deterministic modeling of the effects of one or a few preestablished scenarios. Although such an approach may bring important information to the decision makers, the sole use of deterministic scenarios does not allow scientists to take all uncertainties properly into consideration, and it cannot be used to assess the risk quantitatively because the latter unavoidably requires a probabilistic approach. We present a model based on the concept of Bayesian event tree (hereinafter named BET_VH_ST, standing for Bayesian event tree for short-term volcanic hazard), for short-term near-real-time probabilistic volcanic hazard analysis formulated for any potential hazardous phenomenon accompanying an eruption. The specific goal of BET_VH_ST is to produce a quantitative assessment of the probability of exceedance of any potential level of intensity for a given volcanic hazard due to eruptions within restricted time windows (hours to days) in any area surrounding the volcano, accounting for all natural and epistemic uncertainties. BET_VH_ST properly assesses the conditional probability at each level of the event tree accounting for any relevant information derived from the monitoring system, theoretical models, and the past history of the volcano, propagating any relevant epistemic uncertainty underlying these assessments. As an application example of the model, we apply BET_VH_ST to assess short-term volcanic hazard related to tephra loading during Major Emergency Simulation Exercise, a major exercise at Mount Vesuvius that took place from 19 to 23 October 2006, consisting of a blind simulation of Vesuvius reactivation, from the early warning phase up to the final eruption, including the evacuation of a sample of about 2000 people from the area at risk.
The results show that BET_VH_ST is able to produce short-term forecasts of the impact of tephra fall during a rapidly evolving crisis, accurately accounting for and propagating all uncertainties and enabling rational decision making under uncertainty.
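The event-tree backbone of BET_VH_ST reduces, at its simplest, to multiplying conditional probabilities along a branch (e.g., unrest is magmatic, an eruption occurs, it has a given size, the tephra load exceeds a threshold at a site). The sketch below omits the epistemic uncertainty distributions that BET_VH_ST attaches to each node; the values are illustrative.

```python
def event_tree_probability(branch_probs):
    """Probability of a full scenario as the product of the conditional
    probabilities at each node along one branch of the event tree."""
    p = 1.0
    for node in branch_probs:
        p *= node
    return p
```

For example, conditional node probabilities of 0.8, 0.5, and 0.3 combine to an absolute scenario probability of 0.12.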
Biases in probabilistic category learning in relation to social anxiety
Abraham, Anna; Hermann, Christiane
2015-01-01
Instrumental learning paradigms are rarely employed to investigate the mechanisms underlying acquired fear responses in social anxiety. Here, we adapted a probabilistic category learning paradigm to assess information processing biases as a function of the degree of social anxiety traits in a sample of healthy individuals without a diagnosis of social phobia. Participants were presented with three pairs of neutral faces with differing probabilistic accuracy contingencies (A/B: 80/20, C/D: 70/30, E/F: 60/40). Upon making their choice, negative and positive feedback was conveyed using angry and happy faces, respectively. The highly socially anxious group showed a strong tendency to be more accurate at learning the probability contingency associated with the most ambiguous stimulus pair (E/F: 60/40). Moreover, when pairing the most positively reinforced stimulus or the most negatively reinforced stimulus with all the other stimuli in a test phase, the highly socially anxious group avoided the most negatively reinforced stimulus significantly more than the control group. The results are discussed with reference to avoidance learning and hypersensitivity to negative socially evaluative information associated with social anxiety. PMID:26347685
Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components
NASA Technical Reports Server (NTRS)
1991-01-01
This annual report summarizes the work completed during the third year of technical effort on the referenced contract. Principal developments continue to focus on the Probabilistic Finite Element Method (PFEM), which has been under development for three years. Essentially all of the linear capabilities within the PFEM code are in place. Major progress was achieved in the application and verification phase. An EXPERT module architecture was designed and partially implemented. EXPERT is a user interface module which incorporates an expert system shell for the implementation of a rule-based interface utilizing the experience and expertise of the user community. The Fast Probability Integration (FPI) Algorithm continues to demonstrate outstanding performance characteristics for the integration of probability density functions for multiple variables. Additionally, an enhanced Monte Carlo simulation algorithm was developed and demonstrated for a variety of numerical strategies.
Probabilistic structural analysis methods and applications
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Wu, Y.-T.; Dias, B.; Rajagopal, K. R.
1988-01-01
An advanced algorithm for simulating the probabilistic distribution of structural responses due to statistical uncertainties in loads, geometry, material properties, and boundary conditions is reported. The method effectively combines an advanced algorithm for calculating probability levels for multivariate problems (fast probability integration) together with a general-purpose finite-element code for stress, vibration, and buckling analysis. Application is made to a space propulsion system turbine blade for which the geometry and material properties are treated as random variables.
NASA Astrophysics Data System (ADS)
Králik, Juraj
2017-07-01
The paper presents a probabilistic and sensitivity analysis of the efficiency of the damping devices protecting the nuclear power plant cover against the drop of a TK C30 nuclear fuel container. A three-dimensional finite element idealization of the nuclear power plant structure is used. A steel pipe damper system is proposed to dissipate the kinetic energy of the container's free fall. Experimental results on the behavior of the shock damper's basic element under impact loads are presented. The Newmark integration method is used to solve the dynamic equations. The sensitivity and probabilistic analyses of the damping devices were carried out in the AntHILL and ANSYS software.
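For reference, one step of the Newmark method for a single degree of freedom looks as follows (average-acceleration variant, beta = 1/4, gamma = 1/2, unconditionally stable for linear problems); the paper applies the same scheme to the full finite element system.

```python
def newmark_step(m, c, k, u, v, a, f_next, dt, beta=0.25, gamma=0.5):
    """Advance m*u'' + c*u' + k*u = f by one time step dt using the
    implicit Newmark method; returns (u, v, a) at the next step."""
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
    f_eff = (f_next
             + m * (u / (beta * dt ** 2) + v / (beta * dt)
                    + (1 / (2 * beta) - 1) * a)
             + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                    + dt * (gamma / (2 * beta) - 1) * a))
    u_next = f_eff / k_eff
    v_next = (gamma / (beta * dt) * (u_next - u)
              + (1 - gamma / beta) * v
              + dt * (1 - gamma / (2 * beta)) * a)
    a_next = ((u_next - u) / (beta * dt ** 2) - v / (beta * dt)
              - (1 / (2 * beta) - 1) * a)
    return u_next, v_next, a_next
```

A useful property for checking an implementation: for an undamped linear oscillator the average-acceleration variant conserves energy exactly (only the period is slightly distorted).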
Probabilistic simulation of the human factor in structural reliability
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1993-01-01
A formal approach is described in an attempt to computationally simulate the probable ranges of uncertainties of the human factor in structural probabilistic assessments. A multi-factor interaction equation (MFIE) model has been adopted for this purpose. Human factors such as marital status, professional status, home life, job satisfaction, work load and health are considered to demonstrate the concept. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Probabilistic sensitivity studies are subsequently performed to assess the suitability of the MFIE and the validity of the whole approach. Results obtained show that uncertainties range from five to thirty percent for the most optimistic case, assuming one hundred percent for no error.
Probabilistic simulation of the human factor in structural reliability
NASA Astrophysics Data System (ADS)
Chamis, Christos C.; Singhal, Surendra N.
1994-09-01
The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).
Probabilistic Simulation of the Human Factor in Structural Reliability
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Singhal, Surendra N.
1994-01-01
The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).
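The multifactor interaction equation has the general product form shown below, with one term per factor raised to an empirical exponent; the factor bounds and exponents here are illustrative placeholders rather than the values used in the studies.

```python
def mfie(factors):
    """Multi-factor interaction equation: each factor contributes a term
    [(limit - current) / (limit - reference)] ** exponent, and the overall
    performance ratio is the product of all terms (1.0 = reference
    performance, i.e. no degradation)."""
    ratio = 1.0
    for limit, current, reference, exponent in factors:
        ratio *= ((limit - current) / (limit - reference)) ** exponent
    return ratio
```

A factor at its reference value contributes 1.0; as it approaches its limit, its term, and hence the predicted performance, drops toward zero.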
DISCOUNTING OF DELAYED AND PROBABILISTIC LOSSES OVER A WIDE RANGE OF AMOUNTS
Green, Leonard; Myerson, Joel; Oliveira, Luís; Chang, Seo Eun
2014-01-01
The present study examined delay and probability discounting of hypothetical monetary losses over a wide range of amounts (from $20 to $500,000) in order to determine how amount affects the parameters of the hyperboloid discounting function. In separate conditions, college students chose between immediate payments and larger, delayed payments and between certain payments and larger, probabilistic payments. The hyperboloid function accurately described both types of discounting, and amount of loss had little or no systematic effect on the degree of discounting. Importantly, the amount of loss also had little systematic effect on either the rate parameter or the exponent of the delay and probability discounting functions. The finding that the parameters of the hyperboloid function remain relatively constant across a wide range of amounts of delayed and probabilistic loss stands in contrast to the robust amount effects observed with delayed and probabilistic rewards. At the individual level, the degree to which delayed losses were discounted was uncorrelated with the degree to which probabilistic losses were discounted, and delay and probability loaded on two separate factors, similar to what is observed with delayed and probabilistic rewards. Taken together, these findings argue that although delay and probability discounting involve fundamentally different decision-making mechanisms, nevertheless the discounting of delayed and probabilistic losses share an insensitivity to amount that distinguishes it from the discounting of delayed and probabilistic gains. PMID:24745086
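The hyperboloid discounting function used throughout the study is V = A / (1 + kD)^s, where A is the amount, D the delay, k the rate parameter, and s the exponent; for probability discounting, D is replaced by the odds against winning, theta = (1 - p) / p. A direct transcription:

```python
def hyperboloid_value(amount, delay, k, s):
    """Green & Myerson hyperboloid: subjective value of a delayed amount,
    V = A / (1 + k * D) ** s.  For probability discounting, pass the odds
    against winning in place of the delay."""
    return amount / (1.0 + k * delay) ** s

def odds_against(p):
    """Odds against winning for a probabilistic outcome with probability p."""
    return (1.0 - p) / p
```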
NASA Astrophysics Data System (ADS)
Kala, J.; Bajer, M.; Barnat, J.; Smutný, J.
2010-12-01
Pedestrian-induced vibration is a serviceability criterion. This loading is significant for lightweight footbridge structures, but it is also established as a basic loading for the ceilings of various ordinary buildings. This action varies widely. To verify the differing conclusions of various authors, measurements of the vertical pressure exerted during walking were performed. The article also presents the approaches of different design codes.
Load compensation as a function of state during sleep onset.
Gora, J; Kay, A; Colrain, I M; Kleiman, J; Trinder, J
1998-06-01
Ventilation decreases and airway resistance increases with the loss of electroencephalogram alpha activity at sleep onset. The aim of this study was to determine whether reflexive load compensation is lost immediately on the loss of alpha activity. Six healthy male subjects were studied under two conditions (load and control-no load), in three states (continuous alpha, continuous theta, and immediately after a transition from alpha to theta), and in two phases (early and late sleep onset). Ventilation and respiratory timing were measured. A comparison of loaded with control conditions indicated that loading had no effect on inspiratory minute ventilation during continuous alpha (differential effect of 0.00 l/min) and only a small, nonsignificant effect in theta immediately after phase 2 transitions (0.31 l/min), indicating a preservation of load compensation at these times. However, there were significant decreases in inspiratory minute ventilation on loaded trials during continuous theta in phase 2 (0.77 l/min) and phase 3 (1.15 l/min) and during theta immediately after a transition in phase 3 (0.87 l/min), indicating a lack of reflexive load compensation. The results indicate that, because reflex load compensation is state dependent, state-related changes in airway resistance contribute to state-related changes in ventilation during sleep onset. However, this effect was slightly delayed with transitions into theta early in sleep.
Reliability analysis of composite structures
NASA Technical Reports Server (NTRS)
Kan, Han-Pin
1992-01-01
A probabilistic static stress analysis methodology has been developed to estimate the reliability of a composite structure. Closed form stress analysis methods are the primary analytical tools used in this methodology. These structural mechanics methods are used to identify independent variables whose variations significantly affect the performance of the structure. Once these variables are identified, scatter in their values is evaluated and statistically characterized. The scatter in applied loads and the structural parameters are then fitted to appropriate probabilistic distribution functions. Numerical integration techniques are applied to compute the structural reliability. The predicted reliability accounts for scatter due to variability in material strength, applied load, fabrication and assembly processes. The influence of structural geometry and mode of failure are also considerations in the evaluation. Example problems are given to illustrate various levels of analytical complexity.
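The numerical-integration step the abstract mentions is the classic load/strength interference integral: the probability of failure is the chance that the applied load exceeds the strength, P(R < S) = ∫ f_S(s) F_R(s) ds. A trapezoidal-rule sketch using normal distributions (an assumption for illustration; the methodology fits whatever distributions the scatter data support):

```python
from statistics import NormalDist

def stress_strength_pf(load, strength, lo, hi, n=20_000):
    """Probability of failure P(R < S) by trapezoidal integration of
    f_S(s) * F_R(s) over [lo, hi], where `load` and `strength` are
    distribution objects exposing .pdf and .cdf."""
    h = (hi - lo) / n

    def integrand(s):
        return load.pdf(s) * strength.cdf(s)

    total = 0.5 * (integrand(lo) + integrand(hi))
    total += sum(integrand(lo + i * h) for i in range(1, n))
    return total * h
```

For two normals the exact answer is Phi((mu_S - mu_R) / sqrt(sd_S^2 + sd_R^2)), which gives a direct check on the quadrature.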
Commercialization of NESSUS: Status
NASA Technical Reports Server (NTRS)
Thacker, Ben H.; Millwater, Harry R.
1991-01-01
A plan was initiated in 1988 to commercialize the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) probabilistic structural analysis software. The goal of the on-going commercialization effort is to begin the transfer of Probabilistic Structural Analysis Method (PSAM) developed technology into industry and to develop additional funding resources in the general area of structural reliability. The commercialization effort is summarized. The SwRI NESSUS Software System is a general purpose probabilistic finite element computer program using state of the art methods for predicting stochastic structural response due to random loads, material properties, part geometry, and boundary conditions. NESSUS can be used to assess structural reliability, to compute probability of failure, to rank the input random variables by importance, and to provide a more cost effective design than traditional methods. The goal is to develop a general probabilistic structural analysis methodology to assist in the certification of critical components in the next generation Space Shuttle Main Engine.
Develop Probabilistic Tsunami Design Maps for ASCE 7
NASA Astrophysics Data System (ADS)
Wei, Y.; Thio, H. K.; Chock, G.; Titov, V. V.
2014-12-01
A national standard for engineering design for tsunami effects has not existed before, and this significant risk is mostly ignored in engineering design. The American Society of Civil Engineers (ASCE) 7 Tsunami Loads and Effects Subcommittee is completing a chapter for the 2016 edition of the ASCE/SEI 7 Standard. Chapter 6, Tsunami Loads and Effects, would become the first national tsunami design provisions. These provisions will apply to essential facilities and critical infrastructure. This standard for tsunami loads and effects will apply to designs as part of tsunami preparedness. The provisions will also be significant as a post-tsunami recovery tool for planning and evaluating reconstruction. Maps of 2,500-year probabilistic tsunami inundation for Alaska, Washington, Oregon, California, and Hawaii need to be developed for use with the ASCE design provisions. These new tsunami design zone maps will define the coastal zones where structures of greater importance would be designed for tsunami resistance and community resilience. The NOAA Center for Tsunami Research (NCTR) has developed 75 tsunami inundation models as part of the operational tsunami model forecast capability for the U.S. coastline. NCTR, UW, and URS are collaborating with ASCE to develop the 2,500-year tsunami design maps for the Pacific states using these tsunami models. This ensures the probabilistic criteria are established in ASCE's tsunami design maps. URS established a Probabilistic Tsunami Hazard Assessment approach consisting of a large number of tsunami scenarios that include both epistemic uncertainty and aleatory variability (Thio et al., 2010). Their study provides 2,500-year offshore tsunami heights at the 100-m water depth, along with the disaggregated earthquake sources. NOAA's tsunami models are used to identify a group of sources that produce these 2,500-year tsunami heights.
The tsunami inundation limits and runup heights derived from these sources establish the tsunami design map for the study site. ASCE's Energy Grade Line Analysis then uses these modeling constraints to derive hydrodynamic forces for structures within the tsunami design zone. The probabilistic tsunami design maps will be validated by comparison to state inundation maps under the coordination of the National Tsunami Hazard Mitigation Program.
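The Energy Grade Line Analysis mentioned above tracks a total energy head E = h + u²/(2g) along a transect from the shoreline to the inundation limit. A minimal sketch of the depth/velocity split is shown below; the prescribed Froude number and the sample value of E are assumptions for illustration, not values from the standard:

```python
import math

def egl_depth_velocity(E, Fr, g=9.81):
    """Split a total energy head E = h + u^2/(2g) into flow depth h and
    velocity u, given a prescribed Froude number Fr = u / sqrt(g*h)."""
    # With u^2 = Fr^2 * g * h, E = h * (1 + Fr^2 / 2), so:
    h = E / (1.0 + Fr**2 / 2.0)
    u = Fr * math.sqrt(g * h)
    return h, u

# Illustrative: 6 m of energy head at a point where Fr = 1 is prescribed.
h, u = egl_depth_velocity(6.0, 1.0)
```

The depth and velocity recombine exactly into the assumed head, which is the consistency property the procedure relies on.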
Uncertainty characterization approaches for risk assessment of DBPs in drinking water: a review.
Chowdhury, Shakhawat; Champagne, Pascale; McLellan, P James
2009-04-01
The management of risk from disinfection by-products (DBPs) in drinking water has become a critical issue over the last three decades. The areas of concern for risk management studies include (i) human health risk from DBPs, (ii) disinfection performance, (iii) technical feasibility (maintenance, management and operation) of treatment and disinfection approaches, and (iv) cost. Human health risk assessment is typically considered to be the most important phase of the risk-based decision-making or risk management studies. The factors associated with health risk assessment and other attributes are generally prone to considerable uncertainty. Probabilistic and non-probabilistic approaches have both been employed to characterize uncertainties associated with risk assessment. The probabilistic approaches include sampling-based methods (typically Monte Carlo simulation and stratified sampling) and asymptotic (approximate) reliability analysis (first- and second-order reliability methods). Non-probabilistic approaches include interval analysis, fuzzy set theory and possibility theory. However, it is generally accepted that no single method is suitable for the entire spectrum of problems encountered in uncertainty analyses for risk assessment. Each method has its own set of advantages and limitations. In this paper, the feasibility and limitations of different uncertainty analysis approaches are outlined for risk management studies of drinking water supply systems. The findings assist in the selection of suitable approaches for uncertainty analysis in risk management studies associated with DBPs and human health risk.
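As an illustration of the sampling-based (Monte Carlo) end of the spectrum surveyed above, the sketch below propagates input distributions through a standard lifetime-risk expression. Every parameter value, distribution choice, and the slope factor are hypothetical, not taken from the review:

```python
import math
import random

def mc_lifetime_risk(n=50_000, seed=1):
    """Monte Carlo sketch: lifetime cancer risk = C * IR * SF / BW,
    with illustrative input distributions for a DBP in drinking water."""
    random.seed(seed)
    risks = []
    for _ in range(n):
        conc = random.lognormvariate(math.log(0.04), 0.5)  # DBP conc., mg/L
        intake = random.triangular(1.0, 3.0, 2.0)          # ingestion, L/day
        bw = random.normalvariate(70.0, 10.0)              # body weight, kg
        sf = 6.2e-3                        # slope factor, (mg/kg/day)^-1
        risks.append(conc * intake * sf / bw)
    risks.sort()
    # Report the median and 95th-percentile risk.
    return risks[n // 2], risks[int(0.95 * n)]
```

An interval-analysis alternative would instead carry [min, max] bounds through the same expression, trading distributional detail for freedom from distributional assumptions.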
NASA Astrophysics Data System (ADS)
Grunthal, Gottfried; Stromeyer, Dietrich; Bosse, Christian; Cotton, Fabrice; Bindi, Dino
2017-04-01
The seismic load parameters for the upcoming National Annex to Eurocode 8 result from the reassessment of the seismic hazard supported by the German Institute for Civil Engineering. This 2016 version of the hazard assessment for Germany as the target area was based on a comprehensive incorporation of all accessible uncertainties in models and parameters into the approach, and on the provision of a rational framework for treating the uncertainties in a transparent way. The developed seismic hazard model represents significant improvements: it is based on updated and extended databases, comprehensive ranges of models, robust methods and a set of ground-motion prediction equations of the latest generation. The output specifications were designed according to user-oriented needs as suggested by two review teams supervising the entire project. In particular, seismic load parameters were calculated for rock conditions with a vS30 of 800 m/s for three hazard levels (10%, 5% and 2% probability of occurrence or exceedance within 50 years), in the form of, e.g., uniform hazard spectra (UHS) based on 19 spectral periods in the range 0.01-3 s, and seismic hazard maps for spectral response accelerations at different spectral periods or for macroseismic intensities. The developed hazard model consists of a logic tree with 4040 end branches and essential innovations employed to capture epistemic uncertainties and aleatory variabilities. The computation scheme enables the sound calculation of the mean and any quantile of the required seismic load parameters. Means, medians and 84th percentiles of load parameters were provided together with the full calculation model to clearly illustrate the uncertainties of such a probabilistic assessment for a region of a low-to-moderate level of seismicity. The regional variations of these uncertainties (e.g. ratios between the mean and median hazard estimates) were analyzed and discussed.
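For any single site and load parameter, the mean-and-quantile outputs described above reduce to weighted statistics over the logic-tree end branches. A minimal sketch of that reduction follows; the branch values and weights are invented for illustration:

```python
def logic_tree_stats(values, weights, q=0.84):
    """Weighted mean and q-quantile of a hazard parameter across
    logic-tree end branches (values and weights are illustrative)."""
    pairs = sorted(zip(values, weights))
    total = sum(weights)
    mean = sum(v * w for v, w in pairs) / total
    # Walk the weighted CDF until the requested quantile is reached.
    cum = 0.0
    for v, w in pairs:
        cum += w / total
        if cum >= q:
            return mean, v
    return mean, pairs[-1][0]

# Three hypothetical branches for, say, PGA in g, with relative weights.
mean_pga, pga_84 = logic_tree_stats([0.1, 0.2, 0.3], [1.0, 2.0, 1.0])
```

In a full assessment this is done per site and per spectral period over all 4040 end branches rather than three.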
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renlund, Anita Mariana; Tappan, Alexander Smith; Miller, Jill C.
The HMX {beta}-{delta} solid-solid phase transition, which occurs as HMX is heated near 170 C, is linked to increased reactivity and sensitivity to initiation. Thermally damaged energetic materials (EMs) containing HMX therefore may present a safety concern. Information about the phase transition is vital to predictive safety models for HMX and HMX-containing EMs. We report work on monitoring the phase transition with real-time Raman spectroscopy aimed towards obtaining a better understanding of physical properties of HMX through the phase transition. HMX samples were confined in a cell of minimal free volume in a displacement-controlled or load-controlled arrangement. The cell was heated and then cooled at controlled rates while real-time Raman spectroscopic measurements were performed. Raman spectroscopy provides a clear distinction between the phases of HMX because the vibrational transitions of the molecule change with conformational changes associated with the phase transition. Temperature of phase transition versus load data are presented for both the heating and cooling cycles in the load-controlled apparatus, and general trends are discussed. A weak dependence of the temperature of phase transition on load was discovered during the heating cycle, with higher loads causing the phase transition to occur at a higher temperature. This was especially true in the temperature of completion of phase transition data as opposed to the temperature of onset of phase transition data. A stronger dependence on load was observed in the cooling cycle, with higher loads causing the reverse phase transition to occur at a higher temperature during cooling. Also, higher loads tended to cause the phase transition to occur over a longer period of time in the heating cycle and over a shorter period of time in the cooling cycle. All three of the pure HMX phases ({alpha}, {beta} and {delta}) were detected on cooling of the heated samples, either in pure form or as a mixture.
Assessing Footwear Effects from Principal Features of Plantar Loading during Running.
Trudeau, Matthieu B; von Tscharner, Vinzenz; Vienneau, Jordyn; Hoerzer, Stefan; Nigg, Benno M
2015-09-01
The effects of footwear on the musculoskeletal system are commonly assessed by interpreting the resultant force at the foot during the stance phase of running. However, this approach overlooks loading patterns across the entire foot. An alternative technique for assessing foot loading across different footwear conditions is possible using comprehensive analysis tools that extract different foot loading features, thus enhancing the functional interpretation of the differences across interventions. The purpose of this article was to use pattern recognition techniques to develop and apply a novel comprehensive method for assessing the effects of different footwear interventions on plantar loading. A principal component analysis was used to extract different loading features from the stance phase of running, and a support vector machine (SVM) was used to determine whether and how these loading features differed across three shoe conditions. The results revealed distinct loading features at the foot during the stance phase of running. The loading features determined from the principal component analysis allowed successful classification of all three shoe conditions using the SVM. Several differences were found in the location and timing of the loading across each pairwise shoe comparison using the output from the SVM. The proposed analysis approach can successfully be used to compare different loading patterns with a much greater resolution than has been reported previously. This study has several important applications. One such application is that it would not be meaningful for a user to select a shoe, or for a manufacturer to alter a shoe's construction, if the classification across shoe conditions had not been significant.
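The pipeline described, PCA feature extraction followed by classification, can be sketched on synthetic waveforms. To keep the example dependency-light, a nearest-centroid rule stands in for the SVM, and the "shoe conditions" are artificial phase-shifted signals, not plantar-pressure data:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
X, y = [], []
# Three synthetic "shoe conditions": slightly phase-shifted loading curves.
for label, shift in enumerate((0.0, 0.1, 0.2)):
    for _ in range(20):
        X.append(np.sin(2 * np.pi * (t + shift)) + 0.1 * rng.standard_normal(t.size))
        y.append(label)
X, y = np.asarray(X), np.asarray(y)

# PCA via SVD: project each waveform onto the first 5 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:5].T

# Nearest-centroid classification in PC space (stand-in for the SVM).
centroids = np.stack([scores[y == k].mean(axis=0) for k in range(3)])
pred = np.argmin(((scores[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

The point of the sketch is the structure: condition differences that are invisible in a single resultant value become separable once the waveform is decomposed into loading features.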
PROBABILISTIC SAFETY ASSESSMENT OF OPERATIONAL ACCIDENTS AT THE WASTE ISOLATION PILOT PLANT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rucker, D.F.
2000-09-01
This report presents a probabilistic safety assessment of radioactive doses as consequences of accident scenarios, to complement the deterministic assessment presented in the Waste Isolation Pilot Plant (WIPP) Safety Analysis Report (SAR). The International Commission on Radiological Protection (ICRP) recommends that both assessments be conducted to ensure that ''an adequate level of safety has been achieved and that no major contributors to risk are overlooked'' (ICRP 1993). To that end, the probabilistic assessment for the WIPP accident scenarios addresses the wide range of assumptions, e.g. the range of values representing the radioactive source of an accident, that could possibly have been overlooked by the SAR. Routine releases of radionuclides from the WIPP repository to the environment during waste emplacement operations are expected to be essentially zero. In contrast, potential accidental releases from postulated accident scenarios during waste handling and emplacement could be substantial, which necessitates radiological air monitoring and confinement barriers (DOE 1999). The WIPP SAR calculated doses from accidental releases to the on-site (at 100 m from the source) and off-site (at the Exclusive Use Boundary and Site Boundary) public by a deterministic approach. This approach, as demonstrated in the SAR, uses single-point values of key parameters to assess the 50-year, whole-body committed effective dose equivalent (CEDE). The basic assumptions used in the SAR to formulate the CEDE are retained for this report's probabilistic assessment. However, for the probabilistic assessment, single-point parameter values were replaced with probability density functions (PDFs) sampled over an expected range. Monte Carlo simulations were run, in which 10,000 iterations were performed by randomly selecting one value for each parameter and calculating the dose. 
Statistical information was then derived from the 10,000-iteration batch, including the 5%, 50%, and 95% dose likelihoods and the sensitivity of the calculated doses to each assumption. As one would intuitively expect, the doses from the probabilistic assessment for most scenarios were found to be much less than those of the deterministic assessment. The lower dose of the probabilistic assessment can be attributed to a ''smearing'' of values from the high and low ends of the PDF spectrum of the various input parameters. The analysis also found a potential weakness in the deterministic analysis used in the SAR: a detail of drum loading was not taken into consideration. Waste emplacement operations thus far have handled drums from each shipment as a single unit, i.e. drums from each shipment are kept together. Shipments typically come from a single waste stream, and therefore the curie loading of each drum can be considered nearly identical to that of its neighbor. Calculations show that if large numbers of drums are used in the accident scenario assessment, e.g. 28 drums in the waste hoist failure scenario (CH5), then the probabilistic dose assessment calculations will diverge from the deterministically determined doses. As currently calculated, the deterministic dose assessment assumes one drum loaded to the maximum allowable (80 PE-Ci) and the remaining drums at 10% of the maximum. The effective average drum curie content is therefore less in the deterministic assessment than in the probabilistic assessment for a large number of drums. EEG recommends that the WIPP SAR calculations be revisited and updated to include a probabilistic safety assessment.
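The drum-loading issue raised above can be made concrete with a toy source-term comparison. The 80 PE-Ci maximum, the 10% assumption, and the 28-drum scenario are the figures quoted in the abstract; the uniform PDF is an arbitrary stand-in for the actual distributions used in the assessment:

```python
import random

def source_term(n_drums=28, seed=7):
    """Illustrative comparison of deterministic vs. sampled drum curie loading."""
    # Deterministic: one drum at the 80 PE-Ci maximum, the rest at 10% of it.
    deterministic = 80.0 + (n_drums - 1) * 8.0
    # Probabilistic: every drum sampled from an assumed uniform PDF up to the max.
    random.seed(seed)
    probabilistic = sum(random.uniform(0.0, 80.0) for _ in range(n_drums))
    return deterministic, probabilistic
```

With many drums, the sampled total clusters near n_drums times the PDF mean, so it pulls away from the deterministic total, which is the divergence the abstract describes.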
NASA Technical Reports Server (NTRS)
Pool, Kirby V.
1989-01-01
This volume summarizes the analysis used to assess the structural life of the Space Shuttle Main Engine (SSME) High Pressure Fuel Turbo-Pump (HPFTP) Third Stage Impeller. This analysis was performed in three phases, all using the DIAL finite element code. The first phase was a static stress analysis to determine the mean (non-varying) stress and static margin of safety for the part. The loads involved were steady state pressure and centrifugal force due to spinning. The second phase of the analysis was a modal survey to determine the vibrational modes and natural frequencies of the impeller. The third phase was a dynamic response analysis to determine the alternating component of the stress due to time varying pressure impulses at the outlet (diffuser) side of the impeller. The results of the three phases of the analysis show that the Third Stage Impeller operates very near the upper limits of its capability at full power level (FPL) loading. The static loading alone creates stresses in some areas of the shroud which exceed the yield point of the material. Additional cyclic loading due to the dynamic force could lead to a significant reduction in the life of this part. The cyclic stresses determined in the dynamic response phase of this study are based on an assumption regarding the magnitude of the forcing function.
SRB attrition rate study of the aft skirt due to water impact cavity collapse loading
NASA Technical Reports Server (NTRS)
Crockett, C. D.
1976-01-01
A methodology was presented so that realistic attrition prediction could aid in selecting an optimum design option for minimizing the effects of updated loads on the Space Shuttle Solid Rocket Booster (SRB) aft skirt. The updated loads resulted in water impact attrition rates greater than 10 percent for the aft skirt structure. Adding weight to reinforce the aft skirt was undesirable. The refined method treats the occurrences of the load distribution probabilistically, radially and longitudinally, with respect to the critical structural response.
Effect of Cyclic Thermo-Mechanical Loads on Fatigue Reliability in Polymer Matrix Composites
NASA Technical Reports Server (NTRS)
Shah, A. R.; Murthy, P. L. N.; Chamis, C. C.
1996-01-01
A methodology to compute the probabilistic fatigue life of polymer matrix laminated composites has been developed and demonstrated. Matrix degradation effects caused by long-term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress dependent multi-factor interaction relationship developed at NASA Lewis Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability integration method is used to compute the probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) are computed, and their significance in reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/- 45/90)(sub s) graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes, has been studied. The results show that, at low mechanical cyclic loads and low thermal cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness, whereas at high mechanical cyclic loads and high thermal cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of the matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.
NASA Astrophysics Data System (ADS)
Dai, Hong-Yi; Kuang, Le-Man; Li, Cheng-Zu
2005-07-01
We propose a scheme to probabilistically teleport an unknown arbitrary three-level two-particle state by using two partially entangled three-level two-particle states as the quantum channel. The classical communication cost required in the ideal probabilistic teleportation process is also calculated. This scheme can be directly generalized to teleport an unknown arbitrary three-level K-particle state by using K partially entangled three-level two-particle states as the quantum channel. The project was supported by the National Fundamental Research Program of China under Grant No. 2001CB309310 and the National Natural Science Foundation of China under Grant Nos. 10404039 and 10325523.
Sim, Taeyong; Choi, Ahnryul; Lee, Soeun; Mun, Joung Hwan
2017-10-01
The transition phase of a golf swing is considered a decisive instant required for a powerful swing. At the same time, however, the low back torsional loads during this phase can have a considerable effect on golf-related low back pain (LBP). Previous efforts to quantify the transition phase were hampered by problems with accuracy due to methodological limitations. In this study, the vector-coding technique (VCT) was proposed as a comprehensive methodology to quantify the precise transition phase and examine low back torsional load. Towards this end, transition phases were assessed and compared using three different methods (VCT, lead hand speed and X-factor stretch); then, low back torsional load during the transition phase was examined. As a result, the importance of accurate transition phase quantification has been documented. The largest torsional loads were observed in healthy professional golfers (10.23 ± 1.69 N·kg⁻¹), followed by professional golfers with a history of LBP (7.93 ± 1.79 N·kg⁻¹), healthy amateur golfers (1.79 ± 1.05 N·kg⁻¹) and amateur golfers with a history of LBP (0.99 ± 0.87 N·kg⁻¹), an order matching that of each group's transition phase magnitudes. These results indicate the relationship between the transition phase and LBP history and the dependency of the torsional load magnitude on the transition phase.
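A common formulation of vector coding computes a coupling angle from the frame-to-frame changes of two segment-angle series (e.g., pelvis vs. thorax rotation). The sketch below shows that generic formulation, not necessarily the specific VCT variant used in this study:

```python
import math

def coupling_angles(proximal, distal):
    """Vector-coding sketch: coupling angle (0-360 deg) between two
    segment-angle time series, from their frame-to-frame changes."""
    gammas = []
    for i in range(1, len(proximal)):
        dx = proximal[i] - proximal[i - 1]   # change of proximal segment angle
        dy = distal[i] - distal[i - 1]       # change of distal segment angle
        gammas.append(math.degrees(math.atan2(dy, dx)) % 360.0)
    return gammas
```

A coupling angle near 45° indicates the two segments rotating in phase at equal rates; transitions show up as systematic departures from that in-phase band.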
Probability-Based Design Criteria of the ASCE 7 Tsunami Loads and Effects Provisions (Invited)
NASA Astrophysics Data System (ADS)
Chock, G.
2013-12-01
Mitigation of tsunami risk requires a combination of emergency preparedness for evacuation in addition to providing structural resilience of critical facilities, infrastructure, and key resources necessary for immediate response and economic and social recovery. Critical facilities would include emergency response, medical, tsunami refuges and shelters, ports and harbors, lifelines, transportation, telecommunications, power, financial institutions, and major industrial/commercial facilities. The Tsunami Loads and Effects Subcommittee of the ASCE/SEI 7 Standards Committee is developing a proposed new Chapter 6, Tsunami Loads and Effects, for the 2016 edition of the ASCE 7 Standard. ASCE 7 provides the minimum design loads and requirements for structures subject to building codes such as the International Building Code utilized in the USA. In this paper we provide a review emphasizing the intent of these new code provisions and explain the design methodology. The ASCE 7 provisions for Tsunami Loads and Effects enable a set of analysis and design methodologies that are consistent with performance-based engineering based on probabilistic criteria. The ASCE 7 Tsunami Loads and Effects chapter will be initially applicable only to the states of Alaska, Washington, Oregon, California, and Hawaii. Ground shaking effects and subsidence from a preceding local offshore Maximum Considered Earthquake will also be considered prior to tsunami arrival for Alaska and states in the Pacific Northwest regions governed by nearby offshore subduction earthquakes. For national tsunami design provisions to achieve a consistent reliability standard of structural performance for community resilience, a new generation of tsunami inundation hazard maps for design is required. The lesson of recent tsunamis is that historical records alone do not provide a sufficient measure of the potential heights of future tsunamis. 
Engineering design must consider the occurrence of events greater than scenarios in the historical record, and should properly be based on the underlying seismicity of subduction zones. Therefore, Probabilistic Tsunami Hazard Analysis (PTHA) consistent with source seismicity must be performed in addition to consideration of historical event scenarios. A method of Probabilistic Tsunami Hazard Analysis has been established that is generally consistent with Probabilistic Seismic Hazard Analysis in the treatment of uncertainty. These new tsunami design zone maps will define the coastal zones where structures of greater importance would be designed for tsunami resistance and community resilience. Structural member acceptability criteria will be based on performance objectives for a 2,500-year Maximum Considered Tsunami. The approach developed by the ASCE Tsunami Loads and Effects Subcommittee of the ASCE 7 Standard would result in the first national unification of tsunami hazard criteria for design codes reflecting the modern approach of Performance-Based Engineering.
Chebabhi, A; Fellah, M K; Kessal, A; Benkhoris, M F
2015-07-01
In this paper the performances of three reference-current and DC bus voltage control techniques for a Three-Phase Four-Wire Four-Leg SAPF are compared under balanced and unbalanced load conditions. The main goals are to minimize harmonics, reduce the magnitude of the neutral current, eliminate the zero-sequence current components caused by single-phase nonlinear loads and compensate the reactive power, and, on the other hand, to improve performances such as robustness, stabilization and trajectory pursuit, and to reduce the response time. The three techniques are analyzed mathematically and simulation results are compared. The techniques considered for the comparative study are PI control, sliding mode control and backstepping control. Synchronous reference frame (SRF) theory in the dq0 axes is used to generate the reference currents of the inverter. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
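The SRF reference-current generation step rests on the Park transform into the dq0 frame, where the fundamental positive-sequence load current becomes a DC value and harmonics appear as ripple the filter must supply. A minimal sketch, assuming the amplitude-invariant (2/3) scaling:

```python
import math

def abc_to_dq0(ia, ib, ic, theta):
    """Park (SRF) transform of three-phase currents into the dq0 frame,
    amplitude-invariant form; theta is the synchronous reference angle."""
    k = 2.0 / 3.0
    d = k * (ia * math.cos(theta)
             + ib * math.cos(theta - 2 * math.pi / 3)
             + ic * math.cos(theta + 2 * math.pi / 3))
    q = -k * (ia * math.sin(theta)
              + ib * math.sin(theta - 2 * math.pi / 3)
              + ic * math.sin(theta + 2 * math.pi / 3))
    z = k * 0.5 * (ia + ib + ic)   # zero-sequence component
    return d, q, z
```

For a balanced fundamental set, d is constant, q and z vanish; single-phase nonlinear loads inject a nonzero z, which is the component the fourth leg is there to cancel.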
Probabilistic lifetime strength of aerospace materials via computational simulation
NASA Technical Reports Server (NTRS)
Boyce, Lola; Keating, Jerome P.; Lovelace, Thomas B.; Bast, Callie C.
1991-01-01
The results of a second year effort of a research program are presented. The research included development of methodology that provides probabilistic lifetime strength of aerospace materials via computational simulation. A probabilistic phenomenological constitutive relationship, in the form of a randomized multifactor interaction equation, is postulated for strength degradation of structural components of aerospace propulsion systems subjected to a number of effects of primitive variables. These primitive variables often originate in the environment and may include stress from loading, temperature, chemical, or radiation attack. This multifactor interaction constitutive equation is included in the computer program, PROMISS. Also included in the research is the development of methodology to calibrate the constitutive equation using actual experimental materials data together with the multiple linear regression of that data.
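The multifactor interaction constitutive relationship has the general product form S/S0 = Π_i ((A_iF - A_i)/(A_iF - A_i0))^a_i, where A_iF is the ultimate value of primitive variable i, A_i0 a reference value, and a_i an empirical exponent. A sketch of evaluating that form follows; the term values are illustrative, not PROMISS calibration data:

```python
def mfim_strength_ratio(terms):
    """Multifactor interaction equation sketch:
    S/S0 = prod over terms of ((A_f - A) / (A_f - A0)) ** a,
    one term per primitive variable (temperature, stress, etc.)."""
    ratio = 1.0
    for a_f, a0, a, expo in terms:  # ultimate, reference, current value, exponent
        ratio *= ((a_f - a) / (a_f - a0)) ** expo
    return ratio

# Single illustrative temperature term: melting point 1500, reference 300,
# current 900, exponent 0.5 -> strength ratio (600/1200)**0.5.
r = mfim_strength_ratio([(1500.0, 300.0, 900.0, 0.5)])
```

In the randomized version, the A_i and a_i become random variables, which is what makes the resulting lifetime strength probabilistic.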
Quantification of uncertainties in the performance of smart composite structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1993-01-01
A composite wing with spars, bulkheads, and built-in control devices is evaluated using a method for the probabilistic assessment of smart composite structures. Structural responses (such as change in angle of attack, vertical displacements, and stresses in regular plies with traditional materials and in control plies with mixed traditional and actuation materials) are probabilistically assessed to quantify their respective scatter. Probabilistic sensitivity factors are computed to identify those parameters that have a significant influence on a specific structural response. Results show that the uncertainties in the responses of smart composite structures can be quantified. Responses such as structural deformation, ply stresses, frequencies, and buckling loads in the presence of defects can be reliably controlled to satisfy specified design requirements.
Quantitative microbiological risk assessment in food industry: Theory and practical application.
Membré, Jeanne-Marie; Boué, Géraldine
2018-04-01
The objective of this article is to provide scientific background as well as practical hints and tips to guide risk assessors and modelers who want to develop a quantitative Microbiological Risk Assessment (MRA) in an industrial context. MRA aims at determining the public health risk associated with biological hazards in a food. Its implementation in industry makes it possible to compare the efficiency of different risk reduction measures, and more precisely different operational settings, by predicting their effect on the final model output. The first stage in MRA is to clearly define the purpose and scope with stakeholders, risk assessors and modelers. Then, a probabilistic model is developed; this comprises, schematically, three important phases. Firstly, the model structure has to be defined, i.e. the connections between different operational processing steps. An important step in the food industry is thermal processing, which leads to microbial inactivation. Growth of surviving heat-treated microorganisms and/or post-process contamination during the storage phase is also important to take into account. Secondly, mathematical equations are determined to estimate the change in microbial load after each processing step. This phase includes the construction of model inputs by collecting data or eliciting experts. Finally, the model outputs are obtained by simulation procedures; they have to be interpreted and communicated to targeted stakeholders. In this latter phase, tools such as what-if scenarios provide an essential added value. These different MRA phases are illustrated through two examples covering important issues in industry. The first one covers process optimization in a food safety context, the second one covers shelf-life determination in a food quality context. 
Although both contexts required the same methodology, they do not have the same endpoint: the foie gras case study, illustrating a safety application, extends up to human health, whereas the brioche case study, illustrating a quality application, extends up to the food portion. Copyright © 2017 Elsevier Ltd. All rights reserved.
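The thermal-processing and storage phases described above can be sketched as a small Monte Carlo chain, here with a log-linear (Bigelow) inactivation model. Every parameter value and distribution below is illustrative rather than drawn from either case study:

```python
import random

def mc_final_load(n=10_000, seed=3):
    """MRA sketch: final microbial load (log10 CFU/g) after log-linear
    thermal inactivation and storage growth, Monte Carlo over inputs."""
    random.seed(seed)
    d_ref, z, t_ref = 0.2, 10.0, 70.0   # D-value (min) at 70 C, z-value (C)
    hold_time = 5.0                     # holding time at process temp, min
    out = []
    for _ in range(n):
        n0 = random.normalvariate(2.0, 0.5)       # initial load, log10 CFU/g
        temp = random.normalvariate(72.0, 1.0)    # process temperature, C
        d = d_ref * 10 ** ((t_ref - temp) / z)    # Bigelow secondary model
        log_reduction = hold_time / d
        growth = random.uniform(0.0, 1.0)         # storage growth, log10
        out.append(n0 - log_reduction + growth)
    out.sort()
    return out[n // 2], out[int(0.95 * n)]        # median and 95th percentile
```

A what-if scenario is then just a re-run with one operational setting changed (e.g., a lower process temperature) and a comparison of the resulting output distributions.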
Nanoscale elastic modulus variation in loaded polymeric micelle reactors.
Solmaz, Alim; Aytun, Taner; Deuschle, Julia K; Ow-Yang, Cleva W
2012-07-17
Tapping mode atomic force microscopy (TM-AFM) enables mapping of chemical composition at the nanoscale by taking advantage of the variation in phase angle shift arising from an embedded second phase. We demonstrate that phase contrast can be attributed to the variation in elastic modulus during the imaging of zinc acetate (ZnAc)-loaded reverse polystyrene-block-poly(2-vinylpyridine) (PS-b-P2VP) diblock co-polymer micelles less than 100 nm in diameter. Three sample configurations were characterized: (i) a 31.6 μm thick polystyrene (PS) support film for eliminating the substrate contribution, (ii) an unfilled PS-b-P2VP micelle supported by the same PS film, and (iii) a ZnAc-loaded PS-b-P2VP micelle supported by the same PS film. Force-indentation (F-I) curves were measured over unloaded micelles on the PS film and over loaded micelles on the PS film, using standard tapping mode probes of three different spring constants, the same cantilevers used for imaging of the samples before and after loading. For calibration of the tip geometry, nanoindentation was performed on the bare PS film. The resulting elastic modulus values extracted by applying the Hertz model were 8.26 ± 3.43 GPa over the loaded micelles and 4.17 ± 1.65 GPa over the unloaded micelles, confirming that phase contrast images of a monolayer of loaded micelles represent maps of the nanoscale chemical and mechanical variation. By calibrating the tip geometry indirectly using a known soft material, we are able to use the same standard tapping mode cantilevers for both imaging and indentation.
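A common way to extract a reduced modulus from force-indentation data is a least-squares fit of the Hertz relation F = (4/3) E* √R δ^(3/2). The sketch below is a generic fit under assumed consistent nN/nm units (so the fitted modulus comes out in GPa), not the authors' analysis pipeline:

```python
def hertz_modulus(forces_nN, indents_nm, tip_radius_nm):
    """Least-squares fit of F = (4/3) * E * sqrt(R) * d**1.5 through the
    origin, returning the reduced modulus E* in GPa (nN/nm^2 = GPa)."""
    num = den = 0.0
    for f, d in zip(forces_nN, indents_nm):
        x = (4.0 / 3.0) * tip_radius_nm ** 0.5 * d ** 1.5
        num += f * x
        den += x * x
    return num / den
```

Fitting force against the (4/3)√R δ^(3/2) regressor makes the modulus a linear least-squares coefficient, so no nonlinear solver is needed.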
Advanced Software for Analysis of High-Speed Rolling-Element Bearings
NASA Technical Reports Server (NTRS)
Poplawski, J. V.; Rumbarger, J. H.; Peters, S. M.; Galatis, H.; Flower, R.
2003-01-01
COBRA-AHS is a package of advanced software for analysis of rigid or flexible shaft systems supported by rolling-element bearings operating at high speeds under complex mechanical and thermal loads. These loads can include centrifugal and thermal loads generated by motions of bearing components. COBRA-AHS offers several improvements over prior commercial bearing-analysis programs: It includes innovative probabilistic fatigue-life-estimating software that provides for computation of three-dimensional stress fields and incorporates stress-based (in contradistinction to prior load-based) mathematical models of fatigue life. It interacts automatically with the ANSYS finite-element code to generate finite-element models for estimating distributions of temperature and temperature-induced changes in dimensions in iterative thermal/dimensional analyses: thus, for example, it can be used to predict changes in clearances and thermal lockup. COBRA-AHS provides an improved graphical user interface that facilitates the iterative cycle of analysis and design by providing analysis results quickly in graphical form, enabling the user to control interactive runs without leaving the program environment, and facilitating transfer of plots and printed results for inclusion in design reports. Additional features include roller-edge stress prediction and influence of shaft and housing distortion on bearing performance.
NASA Astrophysics Data System (ADS)
Barati, M.; Arbab Chirani, S.; Kadkhodaei, M.; Saint-Sulpice, L.; Calloch, S.
2017-02-01
The behaviors of shape memory alloys (SMAs) strongly depend on the presence of different phases: austenite, thermally-induced martensite and stress-induced martensite. Consequently, it is important to know the volume fraction of each phase and its evolution during thermomechanical loadings. In this work, a three-phase proportioning method based on the electric resistivity variation of a CuAlBe SMA is proposed. Simple thermomechanical loadings (i.e. pseudoplasticity and pseudoelasticity), one-way shape memory effect, recovery stress, assisted two-way memory effect at different levels of stress and cyclic pseudoelasticity tests are investigated. Based on the electric resistivity results, the evolution of the microstructure during each loading path is determined. The origin of the residual strain observed during the considered thermomechanical loadings is discussed. Special attention is paid to the two-way shape memory effect generated after the considered cyclic loadings and its relation to the developed residual strain. These results permit the identification and validation of macroscopic models of SMA behaviors.
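In the simplest two-phase limit, a linear resistivity mixture rule lets a measured resistivity be read directly as a phase fraction. The sketch below shows only that limiting case; the linear rule and the values are assumptions, and the paper's method resolves three phases from the resistivity evolution rather than two:

```python
def phase_fraction(rho, rho_austenite, rho_martensite):
    """Two-phase mixture-rule sketch: invert
    rho = z * rho_martensite + (1 - z) * rho_austenite
    for the martensite fraction z, clamped to [0, 1]."""
    z = (rho - rho_austenite) / (rho_martensite - rho_austenite)
    return min(max(z, 0.0), 1.0)
```

Extending this to three phases adds one more unknown, which is why the paper needs the additional information carried by the resistivity evolution along each loading path.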
Limited-scope probabilistic safety analysis for the Los Alamos Meson Physics Facility (LAMPF)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharirli, M.; Rand, J.L.; Sasser, M.K.
1992-01-01
The reliability of instrumentation and safety systems is a major issue in the operation of accelerator facilities. A probabilistic safety analysis was performed on the key safety and instrumentation systems at the Los Alamos Meson Physics Facility (LAMPF). In Phase I of this unique study, the Personnel Safety System (PSS) and the Current Limiters (XLs) were analyzed through fault tree analysis, failure modes and effects analysis, and criticality analysis. Phase II of the program was done to update and reevaluate the safety systems after the Phase I recommendations were implemented. This paper provides a brief review of the studies involved in Phases I and II of the program.
1980-09-01
Comparison of Computed and Measured Accelerations in a Dynamically Loaded Tactical...
A probabilistic reliability model for the XM 753 projectile rocket motor to bulkhead joint under extreme loading conditions is constructed.
Convergence Time and Phase Transition in a Non-monotonic Family of Probabilistic Cellular Automata
NASA Astrophysics Data System (ADS)
Ramos, A. D.; Leite, A.
2017-08-01
In dynamical systems, some of the most important questions are related to phase transitions and convergence time. We consider a one-dimensional probabilistic cellular automaton whose components assume two possible states, zero and one, and interact with their two nearest neighbors at each time step. Under the local interaction, if a component is in the same state as its two neighbors, it does not change its state. Otherwise, a component in state zero turns into a one with probability α, and a component in state one turns into a zero with probability 1−β. For certain values of α and β, we show that the process always converges weakly to δ₀, the measure concentrated on the configuration where all the components are zeros. Moreover, the mean time of this convergence is finite, and we describe an upper bound in this case, which is a linear function of the initial distribution. We also demonstrate an application of our results to the percolation PCA. Finally, we use mean-field approximation and Monte Carlo simulations to show the coexistence of three distinct behaviours for some values of the parameters α and β.
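The local rule described above can be sketched as a short Monte Carlo simulation. The lattice size, parameter values and boundary condition (periodic) below are illustrative choices, not taken from the paper:

```python
import numpy as np

def pca_step(state, alpha, beta, rng):
    """One synchronous update of the 1-D probabilistic cellular automaton.

    A cell keeps its state when it matches both nearest neighbours
    (periodic boundary); otherwise a 0 becomes 1 with probability alpha
    and a 1 becomes 0 with probability 1 - beta.
    """
    left = np.roll(state, 1)
    right = np.roll(state, -1)
    frozen = (state == left) & (state == right)
    u = rng.random(state.shape)
    flip_up = (state == 0) & ~frozen & (u < alpha)        # 0 -> 1
    flip_down = (state == 1) & ~frozen & (u < 1.0 - beta)  # 1 -> 0
    new = state.copy()
    new[flip_up] = 1
    new[flip_down] = 0
    return new

def time_to_all_zeros(n=200, alpha=0.05, beta=0.3, max_steps=10_000, seed=0):
    """Return the first step at which the configuration is all zeros
    (max_steps if it has not converged by then)."""
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, size=n)
    for t in range(max_steps):
        if not state.any():
            return t
        state = pca_step(state, alpha, beta, rng)
    return max_steps
```

With α = 0 the all-zeros configuration is absorbing and reached quickly, illustrating the weak convergence to δ₀ discussed in the abstract.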
Asano, Masanari; Khrennikov, Andrei; Ohya, Masanori; Tanaka, Yoshiharu; Yamato, Ichiro
2016-05-28
We compare the contextual probabilistic structures of the seminal two-slit experiment (quantum interference experiment), the system of three interacting bodies and Escherichia coli lactose–glucose metabolism. We show that they have the same non-Kolmogorov probabilistic structure resulting from multi-contextuality. There are plenty of statistical data with non-Kolmogorov features; in particular, the probabilistic behaviour of neither quantum nor biological systems can be described classically. Biological systems (even cells and proteins) are macroscopic systems and one may try to present a more detailed model of interactions in such systems that lead to quantum-like probabilistic behaviour. The system of interactions between three bodies is one of the simplest metaphoric examples for such interactions. By proceeding further in this way (by playing with n-body systems) we shall be able to find metaphoric mechanical models for complex bio-interactions, e.g. signalling between cells, leading to non-Kolmogorov probabilistic data. © 2016 The Author(s).
NASA Astrophysics Data System (ADS)
Cheng, Hu; Zhang, Junran; Li, Yanchun; Li, Gong; Li, Xiaodong; Liu, Jing
2018-01-01
We have designed and implemented a novel DLD for controlling pressure and the compression/decompression rate. Combined with symmetric diamond anvil cells (DACs), the DLD adopts three piezoelectric (PE) actuators and three static load screws to remotely control pressure in an accurate and consistent manner at room temperature. This device allows us to create different loading mechanisms and frames for a variety of existing and commonly used diamond cells, rather than designing specialized or dedicated diamond cells with various drives. The sample pressure compression/decompression rates that we have achieved are up to 58.6 and 43.3 TPa/s, respectively. The minimum load time is less than 1 ms. The DLD is a powerful tool for exploring the effects of rapid (de)compression on the structure and properties of materials.
Kumar, Navneet; Raj Chelliah, Thanga; Srivastava, S P
2015-07-01
Model Based Control (MBC) is an energy-optimal controller used in vector-controlled Induction Motor (IM) drives to adjust the excitation of the motor in accordance with torque and speed. MBC offers energy conservation, especially at part-load operation, but it creates ripples in torque and speed during load transitions, leading to poor dynamic performance of the drive. This study investigates the opportunity for improving the dynamic performance of a three-phase IM operating with MBC and proposes three control schemes: (i) MBC with a low-pass filter, (ii) torque-producing current (iqs) injection at the output of the speed controller, and (iii) a Variable Structure Speed Controller (VSSC). The pre- and post-transition operation of MBC is also analyzed. The dynamic performance of a 1-hp, three-phase squirrel-cage IM with a mine-hoist load diagram is tested. Test results are provided for the conventional field-oriented (constant flux) control and MBC (adjustable excitation) with the proposed schemes. The effectiveness of the proposed schemes is also illustrated for parametric variations. The test results and subsequent analysis confirm that the motor dynamics improve significantly with all three proposed schemes in terms of overshoot/undershoot peak amplitude of torque and DC link power, in addition to energy saving during load transitions. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Wood, M. E.
1980-01-01
Four-wire wye-connected AC power systems exhibit peculiar steady-state fault characteristics when the fourth wire of three-phase induction motors is connected. The loss of one phase of the power source due to a series or shunt fault results in currents higher than anticipated on the remaining two phases. A theoretical approach to computing the fault currents and voltages is developed. A FORTRAN program is included in the appendix.
Evaluation of Sex-Specific Movement Patterns in Judo Using Probabilistic Neural Networks.
Miarka, Bianca; Sterkowicz-Przybycien, Katarzyna; Fukuda, David H
2017-10-01
The purpose of the present study was to create a probabilistic neural network to clarify the understanding of movement patterns in international judo competitions by gender. Analysis of 773 male and 638 female bouts was utilized to identify movements during the approach, gripping, attack (including biomechanical designations), groundwork, defense, and pause phases. Probabilistic neural network and chi-square (χ²) tests modeled and compared frequencies (p ≤ .05). Women (mean [interquartile range]: 9.9 [4; 14]) attacked more than men (7.0 [3; 10]) while attempting a greater number of arm/leg lever (women: 2.7 [1; 6]; men: 4.0 [0; 4]) and trunk/leg lever (women: 0.8 [0; 1]; men: 2.4 [0; 4]) techniques but fewer maximal length-moment arm techniques (women: 0.7 [0; 1]; men: 1.0 [0; 2]). Male athletes displayed one-handed gripping of the back and sleeve, whereas female athletes executed a greater number of groundwork techniques. An optimized probabilistic neural network model, using patterns from the gripping, attack, groundwork, and pause phases, produced an overall prediction accuracy of 76% for discrimination between men and women.
Experimental study of a quantum random-number generator based on two independent lasers
NASA Astrophysics Data System (ADS)
Sun, Shi-Hai; Xu, Feihu
2017-12-01
A quantum random-number generator (QRNG) can produce true randomness by utilizing the inherent probabilistic nature of quantum mechanics. Recently, the spontaneous-emission quantum phase noise of the laser has been widely deployed for quantum random-number generation, due to its high rate, its low cost, and the feasibility of chip-scale integration. Here, we perform a comprehensive experimental study of a phase-noise-based QRNG with two independent lasers, each of which operates in either continuous-wave (CW) or pulsed mode. We implement the QRNG by operating the two lasers in three configurations, namely, CW + CW, CW + pulsed, and pulsed + pulsed, and demonstrate their trade-offs, strengths, and weaknesses.
Damage Arresting Composites for Shaped Vehicles - Phase II Final Report
NASA Technical Reports Server (NTRS)
Velicki, Alex; Yovanof, Nicolette; Baraja, Jaime; Linton, Kim; Li, Victor; Hawley, Arthur; Thrash, Patrick; DeCoux, Steve; Pickell, Robert
2011-01-01
This report describes the development of a novel structural concept, Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS), that addresses the demanding fuselage loading requirements for the Hybrid Wing or Blended Wing Body (BWB) airplane configuration. In addition to the analytical studies, a three specimen test program was also completed to assess the concept under axial tension loading, axial compression loading, and internal pressure loading.
Probabilistic and Possibilistic Analyses of the Strength of a Bonded Joint
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson; Krishnamurthy, T.; Smith, Steven A.
2001-01-01
The effects of uncertainties on the strength of a single lap shear joint are explained. Probabilistic and possibilistic methods are used to account for uncertainties. Linear and geometrically nonlinear finite element analyses are used in the studies. To evaluate the strength of the joint, fracture in the adhesive and material strength failure in the strap are considered. The study shows that linear analyses yield conservative predictions for failure loads. The possibilistic approach for treating uncertainties appears to be viable for preliminary design, but with several qualifications.
Charroud, Céline; Steffener, Jason; Le Bars, Emmanuelle; Deverdun, Jérémy; Bonafe, Alain; Abdennour, Meriem; Portet, Florence; Molino, François; Stern, Yaakov; Ritchie, Karen; Menjot de Champfleur, Nicolas; Akbaraly, Tasnime N
2015-11-01
Changes in working memory are sensitive indicators of both normal and pathological brain aging and associated disability. The present study aims to further understanding of working memory in normal aging using a large cohort of healthy elderly in order to examine three separate phases of information processing in relation to changes in task load activation. Using covariance analysis, increasing and decreasing neural activation was observed on fMRI in response to a delayed item recognition task in 337 cognitively healthy elderly persons as part of the CRESCENDO (Cognitive REServe and Clinical ENDOphenotypes) study. During three phases of the task (stimulation, retention, probe), increased activation was observed with increasing task load in bilateral regions of the prefrontal cortex, parietal lobule, cingulate gyrus, insula and in deep gray matter nuclei, suggesting an involvement of central executive and salience networks. Decreased activation associated with increasing task load was observed during the stimulation phase, in bilateral temporal cortex, parietal lobule, cingulate gyrus and prefrontal cortex. This spatial distribution of decreased activation is suggestive of the default mode network. These findings support the hypothesis of an increased activation in salience and central executive networks and a decreased activation in default mode network concomitant to increasing task load. Copyright © 2015 Elsevier Inc. All rights reserved.
An investigation into the probabilistic combination of quasi-static and random accelerations
NASA Technical Reports Server (NTRS)
Schock, R. W.; Tuell, L. P.
1984-01-01
The development of design load factors for aerospace and aircraft components and experiment support structures that are subject to simultaneous vehicle dynamic vibration (quasi-static) and acoustically generated random vibration requires the selection of a combination methodology. Typically, the procedure is to define the quasi-static and randomly generated responses separately and arithmetically add or root-sum-square them to get combined accelerations. Since the combination of a probabilistic and a deterministic function yields a probabilistic function, a viable alternative approach is to determine the characteristics of the combined acceleration probability density function and select an appropriate percentile level for the combined acceleration. This paper develops this mechanism and provides graphical data for selecting combined accelerations at the most popular percentile levels.
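Under the simplifying assumption that the random part is zero-mean Gaussian, the percentile idea reduces to shifting the quasi-static level by z·σ. The sketch below is illustrative (the paper derives the combined density more generally), with the conventional arithmetic-sum and root-sum-square combinations included for comparison:

```python
from statistics import NormalDist

def combined_percentile(a_qs, sigma_r, p=0.9987):
    """Percentile of the combined acceleration when a deterministic
    quasi-static level a_qs is superposed on zero-mean Gaussian random
    vibration with standard deviation sigma_r.  p = 0.9987 corresponds
    roughly to the familiar 3-sigma level."""
    z = NormalDist().inv_cdf(p)
    return a_qs + z * sigma_r

def arithmetic_sum(a_qs, a_rand_peak):
    """Conventional bookkeeping: add the quasi-static level and a
    chosen random peak directly."""
    return a_qs + a_rand_peak

def root_sum_square(a_qs, a_rand_peak):
    """Conventional bookkeeping: root-sum-square the two contributions."""
    return (a_qs**2 + a_rand_peak**2) ** 0.5
```

For a 5 g quasi-static level and σ = 2 g, the 0.9987-percentile combined acceleration is about 11 g, between the RSS and arithmetic-sum values for a 3σ random peak.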
Assessment of Optimal Flexibility in Ensemble of Frequency Responsive Loads
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kundu, Soumya; Hansen, Jacob; Lian, Jianming
2018-04-19
The potential of electrical loads to provide grid ancillary services is often limited by the uncertainties associated with load behavior. Knowledge of the uncertainties expected in a load control program would yield better-informed control policies, opening up the possibility of extracting the maximal load control potential without affecting grid operations. In the context of frequency-responsive load control, a probabilistic uncertainty analysis framework is presented to quantify the expected error between the target and actual load response under uncertainties in the load dynamics. A closed-form expression for an optimal demand flexibility, minimizing the expected error between actual and committed flexibility, is provided. Analytical results are validated through Monte Carlo simulations of ensembles of electric water heaters.
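A minimal Monte Carlo sketch of the kind of ensemble simulation the abstract validates against. The device parameters, availability probability and naive greedy dispatch rule are assumed for illustration and are not the paper's model:

```python
import random

def expected_tracking_error(target_kw, n_devices=1000, p_avail=0.8,
                            rated_kw=4.5, sigma_kw=0.3,
                            trials=2000, seed=42):
    """Monte Carlo estimate of E|target - actual| for an ensemble of
    on/off loads (e.g. electric water heaters).  Each device is
    independently available with probability p_avail and draws its
    power from a Gaussian around the rated value; devices are switched
    on greedily until the target is met."""
    rng = random.Random(seed)
    total_err = 0.0
    for _ in range(trials):
        actual = 0.0
        need = target_kw
        for _ in range(n_devices):
            if need <= 0.0:
                break
            if rng.random() < p_avail:
                p = max(0.0, rng.gauss(rated_kw, sigma_kw))
                actual += p
                need -= p
        total_err += abs(target_kw - actual)
    return total_err / trials
```

The expected error here is bounded by roughly one device's rated power, since the only slack in the greedy rule is the final switching decision.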
Probabilistic structural mechanics research for parallel processing computers
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.
1991-01-01
Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods was hampered by their computationally intense nature. Solution of PSM problems requires repeated analyses of structures that are often large, and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make solution of large scale PSM problems practical.
Dwivedi, Dipankar; Mohanty, Binayak P.; Lesikar, Bruce J.
2013-01-01
Microbes have been identified as a major contaminant of water resources, and Escherichia coli (E. coli) is a commonly used indicator organism. It is well recognized that the fate of E. coli in surface water systems is governed by multiple physical, chemical, and biological factors. The aim of this work is to provide insight into those factors, and their interactions, that are critical in estimating E. coli loads in surface streams. Various models exist to predict E. coli loads in streams, but they tend to be system- or site-specific, or overly complex without enhancing our understanding of these factors. Hence, based on available data, a Bayesian Neural Network (BNN) is presented for estimating E. coli loads from physical, chemical, and biological factors in streams. The BNN has the dual advantages of coping with the absence of consistent, quality data and of avoiding explicit determination of mechanistic model parameters by employing a probabilistic framework. This study evaluates whether the BNN model can be an effective alternative to mechanistic models for E. coli load estimation in streams. For this purpose, a comparison with a traditional model (LOADEST, USGS) is conducted. The models are compared on estimated E. coli loads using available water quality data from Plum Creek, Texas. All the model efficiency measures suggest that the BNN model's E. coli load estimates are better overall than the LOADEST model's on all three occasions (three-fold cross validation). Thirteen factors were considered with an exhaustive feature selection technique, which indicated that six of the thirteen are important for estimating E. coli loads: the physical factors temperature and dissolved oxygen, the chemical factors phosphate and ammonia, and the biological factors suspended solids and chlorophyll.
The results highlight that the LOADEST model estimates E. coli loads better in the smaller ranges, whereas the BNN model estimates them better in the higher ranges. Hence, the BNN model can be used to design targeted monitoring programs and implement regulatory standards through TMDL programs. PMID:24511166
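The abstract does not name which model efficiency measures were used; the Nash–Sutcliffe efficiency is a common choice for comparing load estimates against observations and is easy to sketch:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1.0 for a perfect fit, 0.0 for
    a model no better than the mean of the observations, negative for
    a worse one.  A common (assumed, not confirmed) efficiency measure
    for load-estimation comparisons like BNN vs. LOADEST."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

Computed per fold of a three-fold cross validation, this gives one score per model per fold, directly comparable across models.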
Input Power Characteristics of a Three-Phase Thyristor Converter
DOT National Transportation Integrated Search
1973-10-01
A phase delay rectifier operating into a passive resistive load was instrumented in the laboratory. Techniques for accurate measurement of power, displacement reactive power, harmonic components, and distortion reactive power are presented. The chara...
The purpose of this SOP is to describe the procedures undertaken to calculate the dermal exposure using a probabilistic approach. This SOP uses data that have been properly coded and certified with appropriate QA/QC procedures by the University of Arizona NHEXAS and Battelle Labo...
The purpose of this SOP is to describe the procedures undertaken to calculate the inhalation exposures to chlorpyrifos and diazinon using the probabilistic approach. This SOP uses data that have been properly coded and certified with appropriate QA/QC procedures by the University...
Liew, Bernard X W; Morris, Susan; Netto, Kevin
2016-06-01
Investigating the impact of incremental load magnitude on running joint power and kinematics is important for understanding the energy cost burden and potential injury-causative mechanisms associated with load carriage. It was hypothesized that incremental load magnitude would result in phase-specific, joint power and kinematic changes within the stance phase of running, and that these relationships would vary at different running velocities. Thirty-one participants performed running while carrying three load magnitudes (0%, 10%, 20% body weight), at three velocities (3, 4, 5m/s). Lower limb trajectories and ground reaction forces were captured, and global optimization was used to derive the variables. The relationships between load magnitude and joint power and angle vectors, at each running velocity, were analyzed using Statistical Parametric Mapping Canonical Correlation Analysis. Incremental load magnitude was positively correlated to joint power in the second half of stance. Increasing load magnitude was also positively correlated with alterations in three dimensional ankle angles during mid-stance (4.0 and 5.0m/s), knee angles at mid-stance (at 5.0m/s), and hip angles during toe-off (at all velocities). Post hoc analyses indicated that at faster running velocities (4.0 and 5.0m/s), increasing load magnitude appeared to alter power contribution in a distal-to-proximal (ankle→hip) joint sequence from mid-stance to toe-off. In addition, kinematic changes due to increasing load influenced both sagittal and non-sagittal plane lower limb joint angles. This study provides a list of plausible factors that may influence running energy cost and injury risk during load carriage running. Copyright © 2016 Elsevier B.V. All rights reserved.
Simultaneous DC and three phase output using hybrid converter
NASA Astrophysics Data System (ADS)
Surenderanath, S.; Rathnavel, P.; Prakash, G.; Rayavel, P.
2018-04-01
This paper introduces new hybrid converter topologies that can simultaneously supply three-phase AC as well as DC from a single DC source. The new hybrid converter is derived from the single-switch-controlled boost converter by replacing the controlled switch with a voltage source inverter (VSI). This hybrid converter has the advantages of a reduced number of switches compared with the conventional design having separate converters for three-phase AC and DC loads, and it provides DC and three-phase AC outputs with increased reliability, resulting from the inherent shoot-through protection in the inverter stage. The proposed converter, studied in this paper, is called a Boost-Derived Hybrid Converter (BDHC) as it is obtained from the conventional boost topology. A dsPIC-based feedback controller is designed to regulate the DC as well as AC outputs. The proposed converter can supply DC and AC loads at 95 V and 35 V (line to ground), respectively, from a 48 V DC source.
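The DC side of a boost-derived converter follows the ideal boost relation V_dc = V_in / (1 − D). A sketch consistent with the 48 V → 95 V figures quoted above, assuming an ideal (lossless, continuous-conduction) converter:

```python
def boost_dc_output(vin, d):
    """Ideal boost-converter steady-state relation: Vdc = Vin / (1 - D),
    where D is the effective duty ratio (0 <= D < 1)."""
    return vin / (1.0 - d)

def duty_for_target(vin, vdc):
    """Duty ratio needed to reach a target DC output from a given input."""
    return 1.0 - vin / vdc
```

For the quoted operating point, duty_for_target(48, 95) ≈ 0.495, i.e. roughly half the switching period is spent in the boost (shoot-through) state.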
Stochastic Controls on Nitrate Transport and Cycling
NASA Astrophysics Data System (ADS)
Botter, G.; Settin, T.; Alessi Celegon, E.; Marani, M.; Rinaldo, A.
2005-12-01
In this paper, the impact of nutrient inputs on basin-scale nitrate losses is investigated in a probabilistic framework by means of a continuous, geomorphologically based Monte Carlo approach, which explicitly tackles the random character of the processes controlling nitrate generation, transformation and transport in river basins. This is obtained by coupling the stochastic generation of climatic and rainfall series with simplified hydrologic and biogeochemical models operating at the hillslope scale. Special attention is devoted to the spatial and temporal variability of nitrogen sources of agricultural origin and to the effect of temporally distributed rainfall fields on the ensuing nitrate leaching. The influence of random climatic variables on biogeochemical processes affecting the nitrogen cycle in the soil-water system (e.g. plant uptake, nitrification and denitrification, mineralization) is also considered. The approach developed has been applied to a catchment located in north-eastern Italy and is used to provide probabilistic estimates of the NO₃ load transferred downstream, which is received and accumulated in the Venice lagoon. We found that the nitrogen load introduced by fertilization significantly affects the pdf of the nitrate content in the soil moisture, leading to prolonged risk of increased nitrate leaching from the soil. The model allowed the estimation of the impact of different practices on the probabilistic structure of the basin-scale hydrologic and chemical response. As a result, the return period of the water volumes and of the nitrate loads released into the Venice lagoon has been linked directly to the ongoing climatic, pluviometric and agricultural regimes, with relevant implications for environmental planning activities aimed at achieving sustainable management practices.
Probabilistic fatigue life prediction of metallic and composite materials
NASA Astrophysics Data System (ADS)
Xiang, Yibing
Fatigue is one of the most common failure modes for engineering structures, such as aircraft, rotorcraft and aviation transports. Both metallic materials and composite materials are widely used and affected by fatigue damage. Huge uncertainties arise from material properties, measurement noise, imperfect models, future anticipated loads and environmental conditions. These uncertainties are critical issues for accurate remaining useful life (RUL) prediction for engineering structures in service. Probabilistic fatigue prognosis considering various uncertainties is of great importance for structural safety. The objective of this study is to develop probabilistic fatigue life prediction models for metallic materials and composite materials. A fatigue model based on crack growth analysis and the equivalent initial flaw size concept is proposed for metallic materials. Following this, the developed model is extended to include structural geometry effects (notch effect), environmental effects (corroded specimens) and manufacturing effects (shot peening effects). Due to inhomogeneity and anisotropy, the fatigue model suitable for metallic materials cannot be directly applied to composite materials. A composite fatigue life prediction model is proposed based on a mixed-mode delamination growth model and a stiffness degradation law. After the development of deterministic fatigue models for metallic and composite materials, a general probabilistic life prediction methodology is developed. The proposed methodology employs an efficient Inverse First-Order Reliability Method (IFORM) for uncertainty propagation in fatigue life prediction. An equivalent stress transformation has been developed to enhance the computational efficiency under realistic random amplitude loading. A systematic reliability-based maintenance optimization framework is proposed for fatigue risk management and mitigation of engineering structures.
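The crack-growth-plus-equivalent-initial-flaw-size idea can be sketched with Paris' law integrated in closed form and a random initial flaw size. The constants below (C, m, flaw-size distribution, geometry factor Y = 1) are illustrative assumptions, not calibrated to any material or to this thesis:

```python
import math
import random

def cycles_to_failure(a0, ac, dsigma, C=1e-12, m=3.0):
    """Closed-form Paris-law cycle count to grow a crack from a0 to ac
    under constant stress range dsigma (consistent units assumed,
    geometry factor Y = 1, valid for m != 2):
        da/dN = C * (dsigma * sqrt(pi * a))**m
    """
    exp = 1.0 - m / 2.0
    return (ac**exp - a0**exp) / (C * (dsigma * math.sqrt(math.pi))**m * exp)

def life_distribution(n=5000, seed=7):
    """Monte Carlo sketch of a probabilistic life distribution: the
    equivalent initial flaw size is drawn from an (assumed) lognormal
    distribution, and each draw is propagated through the deterministic
    crack-growth model."""
    rng = random.Random(seed)
    lives = []
    for _ in range(n):
        a0 = rng.lognormvariate(math.log(1e-4), 0.3)  # flaw size, metres
        lives.append(cycles_to_failure(a0, ac=0.01, dsigma=100.0))
    return lives
```

Percentiles of the resulting life sample then stand in for the reliability statements a method like IFORM computes more efficiently without full sampling.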
NASA Astrophysics Data System (ADS)
Gao, Yi
The development and utilization of wind energy for satisfying electrical demand has received considerable attention in recent years due to its tremendous environmental, social and economic benefits, together with public support and government incentives. Electric power generation from wind energy behaves quite differently from that of conventional sources. The fundamentally different operating characteristics of wind energy facilities therefore affect power system reliability in a different manner than those of conventional systems. The reliability impact of such a highly variable energy source is an important aspect that must be assessed when the wind power penetration is significant. The focus of the research described in this thesis is on the utilization of state sampling Monte Carlo simulation in wind integrated bulk electric system reliability analysis and the application of these concepts in system planning and decision making. Load forecast uncertainty is an important factor in long range planning and system development. This thesis describes two approximate approaches developed to reduce the number of steps in a load duration curve which includes load forecast uncertainty, and to provide reasonably accurate generating and bulk system reliability index predictions. The developed approaches are illustrated by application to two composite test systems. A method of generating correlated random numbers with uniform distributions and a specified correlation coefficient in the state sampling method is proposed and used to conduct adequacy assessment in generating systems and in bulk electric systems containing correlated wind farms in this thesis. The studies described show that it is possible to use the state sampling Monte Carlo simulation technique to quantitatively assess the reliability implications associated with adding wind power to a composite generation and transmission system including the effects of multiple correlated wind sites. 
This is an important development as it permits correlated wind farms to be incorporated in large practical system studies without requiring excessive increases in computer solution time. The procedures described in this thesis for creating monthly and seasonal wind farm models should prove useful in situations where time period models are required to incorporate scheduled maintenance of generation and transmission facilities. There is growing interest in combining deterministic considerations with probabilistic assessment in order to evaluate the quantitative system risk and conduct bulk power system planning. A relatively new approach that incorporates deterministic and probabilistic considerations in a single risk assessment framework has been designated as the joint deterministic-probabilistic approach. The research work described in this thesis illustrates that the joint deterministic-probabilistic approach can be effectively used to integrate wind power in bulk electric system planning. The studies described in this thesis show that the application of the joint deterministic-probabilistic method provides more stringent results for a system with wind power than the traditional deterministic N-1 method because the joint deterministic-probabilistic technique is driven by the deterministic N-1 criterion with an added probabilistic perspective which recognizes the power output characteristics of a wind turbine generator.
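One common way to generate correlated random numbers with uniform distributions and a specified correlation coefficient, as needed for the correlated-wind-farm state sampling above, is a Gaussian copula. This is a plausible sketch of the technique, not necessarily the thesis's exact method:

```python
import math
import random
from statistics import NormalDist

def correlated_uniforms(n, rho, seed=0):
    """Generate n pairs of uniform(0,1) variates with dependence
    induced by a Gaussian copula with correlation rho.  Note that the
    product-moment correlation of the resulting uniforms is
    (6/pi)*asin(rho/2), slightly below rho itself."""
    rng = random.Random(seed)
    nd = NormalDist()
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho**2) * rng.gauss(0.0, 1.0)
        # Map each correlated normal through its CDF to get a uniform.
        pairs.append((nd.cdf(z1), nd.cdf(z2)))
    return pairs
```

Each uniform pair can then drive the state sampling of two wind sites whose hourly outputs are statistically dependent.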
NASA Astrophysics Data System (ADS)
Gronewold, A. D.; Wolpert, R. L.; Reckhow, K. H.
2007-12-01
Most probable number (MPN) and colony-forming-unit (CFU) are two estimates of fecal coliform bacteria concentration commonly used as measures of water quality in United States shellfish harvesting waters. The MPN is the maximum likelihood estimate (or MLE) of the true fecal coliform concentration based on counts of non-sterile tubes in serial dilution of a sample aliquot, indicating bacterial metabolic activity. The CFU is the MLE of the true fecal coliform concentration based on the number of bacteria colonies emerging on a growth plate after inoculation from a sample aliquot. Each estimating procedure has intrinsic variability and is subject to additional uncertainty arising from minor variations in experimental protocol. Several versions of each procedure (using different sized aliquots or different numbers of tubes, for example) are in common use, each with its own levels of probabilistic and experimental error and uncertainty. It has been observed empirically that the MPN procedure is more variable than the CFU procedure, and that MPN estimates are somewhat higher on average than CFU estimates, on split samples from the same water bodies. We construct a probabilistic model that provides a clear theoretical explanation for the observed variability in, and discrepancy between, MPN and CFU measurements. We then explore how this variability and uncertainty might propagate into shellfish harvesting area management decisions through a two-phased modeling strategy. First, we apply our probabilistic model in a simulation-based analysis of future water quality standard violation frequencies under alternative land use scenarios, such as those evaluated under guidelines of the total maximum daily load (TMDL) program. 
Second, we apply our model to water quality data from shellfish harvesting areas which at present are closed (either conditionally or permanently) to shellfishing, to determine if alternative laboratory analysis procedures might have led to different management decisions. Our research results indicate that the (often large) observed differences between MPN and CFU values for the same water body are well within the ranges predicted by our probabilistic model. Our research also indicates that the probability of violating current water quality guidelines at specified true fecal coliform concentrations depends on the laboratory procedure used. As a result, quality-based management decisions, such as opening or closing a shellfishing area, may also depend on the laboratory procedure used.
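As a concrete illustration of the MPN as a maximum-likelihood estimate, the sketch below maximizes the serial-dilution likelihood by bisection on its score function. The tube volumes and counts in the usage note are a generic 3-tube, 3-dilution example for illustration, not data from the study.

```python
import math

def mpn_mle(dilutions, positives, tubes, lo=1e-6, hi=1e3, iters=200):
    """Maximum-likelihood estimate of concentration (organisms per unit
    volume) from a serial-dilution assay.  A tube receiving volume v is
    non-sterile with probability 1 - exp(-c*v); the MLE solves
    d(log-likelihood)/dc = 0."""
    def dloglik(c):
        s = 0.0
        for v, p, n in zip(dilutions, positives, tubes):
            s += p * v * math.exp(-c * v) / (1.0 - math.exp(-c * v))
            s -= (n - p) * v
        return s
    if all(p == 0 for p in positives):
        return 0.0              # all tubes sterile: MLE is zero
    if all(p == n for p, n in zip(positives, tubes)):
        return float('inf')     # all tubes positive: no finite MLE
    for _ in range(iters):      # geometric bisection on the score
        mid = math.sqrt(lo * hi)
        if dloglik(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)


# Usage: 3 tubes each at 10, 1, and 0.1 mL, with 3, 2, 0 positives.
print(mpn_mle([10.0, 1.0, 0.1], [3, 2, 0], [3, 3, 3]))  # approx 0.93 per mL
```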
Load- and skill-related changes in segmental contributions to a weightlifting movement.
Enoka, R M
1988-04-01
An exemplary short duration, high-power, weightlifting event was examined to determine whether the ability to lift heavier loads and whether variations in the level of skill were accompanied by quantitative changes in selected aspects of lower extremity joint power-time histories. Six experienced weightlifters, three skilled and three less skilled, performed the double-knee-bend execution of the pull in Olympic weightlifting, a movement which lasted almost 1 s. Analysis-of-variance statistics were performed on selected peak and average values of power generated by the three skilled subjects as they lifted three loads (69, 77, and 86% of their competition maximum). The results indicated that the skilled subjects lifted heavier loads by increasing the average power, but not the peak power, about the knee and ankle joints. In addition, the changes with load were more subtle than a mere quantitative scaling and also seemed to be associated with a skill element in the form of variation in the duration of the phases of power production and absorption. Similarly, statistical differences (independent t-test) due to skill did not involve changes in the magnitude of power but rather the temporal organization of the movement. Thus, the ability to successfully execute the double-knee-bend movement depends on an athlete's ability to both generate a sufficient magnitude of joint power and to organize the phases of power production and absorption into an appropriate temporal sequence.
High-frequency ac power distribution in Space Station
NASA Technical Reports Server (NTRS)
Tsai, Fu-Sheng; Lee, Fred C. Y.
1990-01-01
A utility-type 20-kHz ac power distribution system for the Space Station, employing resonant power-conversion techniques, is presented. The system converts raw dc voltage from photovoltaic cells or three-phase LF ac voltage from a solar dynamic generator into a regulated 20-kHz ac voltage for distribution among various loads. The results of EASY5 computer simulations of the local and global performance show that the system has fast response and good transient behavior. The ac bus voltage is effectively regulated using the phase-control scheme, which is demonstrated with both line and load variations. The feasibility of paralleling the driver-module outputs is illustrated with the driver modules synchronized and sharing a common feedback loop. An HF sinusoidal ac voltage is generated in the three-phase ac input case, when the driver modules are phased 120 deg away from one another and their outputs are connected in series.
Szczepanski, Caroline R.; Stansbury, Jeffrey W.
2014-01-01
A mechanism for polymerization shrinkage and stress reduction was developed for heterogeneous networks formed via ambient, photo-initiated polymerization-induced phase separation (PIPS). The material system used consists of a bulk homopolymer matrix of triethylene glycol dimethacrylate (TEGDMA) modified with one of three non-reactive, linear prepolymers (poly-methyl, ethyl and butyl methacrylate). At higher prepolymer loading levels (10–20 wt%) an enhanced reduction in both shrinkage and polymerization stress is observed. The onset of gelation in these materials is delayed to a higher degree of methacrylate conversion (~15–25%), providing more time for phase structure evolution by thermodynamically driven monomer diffusion between immiscible phases prior to network macro-gelation. The resulting phase structure was probed by introducing a fluorescently tagged prepolymer into the matrix. The phase structure evolves from a dispersion of prepolymer at low loading levels to a fully co-continuous heterogeneous network at higher loadings. The bulk modulus in phase separated networks is equivalent or greater than that of poly(TEGDMA), despite a reduced polymerization rate and cross-link density in the prepolymer-rich domains. PMID:25418999
NASA Technical Reports Server (NTRS)
Wolf, Michael
2012-01-01
A document describes an algorithm created to estimate the mass placed on a sample verification sensor (SVS) designed for lunar or planetary robotic sample return missions. A novel SVS measures the capacitance between a rigid bottom plate and an elastic top membrane in seven locations. As additional sample material (soil and/or small rocks) is placed on the top membrane, the deformation of the membrane increases the capacitance. The mass estimation algorithm addresses both the calibration of each SVS channel, and also addresses how to combine the capacitances read from each of the seven channels into a single mass estimate. The probabilistic approach combines the channels according to the variance observed during the training phase, and provides not only the mass estimate, but also a value for the certainty of the estimate. SVS capacitance data is collected for known masses under a wide variety of possible loading scenarios, though in all cases, the distribution of sample within the canister is expected to be approximately uniform. A capacitance-vs-mass curve is fitted to this data, and is subsequently used to determine the mass estimate for the single channel's capacitance reading during the measurement phase. This results in seven different mass estimates, one for each SVS channel. Moreover, the variance of the calibration data is used to place a Gaussian probability distribution function (pdf) around this mass estimate. To blend these seven estimates, the seven pdfs are combined into a single Gaussian distribution function, providing the final mean and variance of the estimate. This blending technique essentially takes the final estimate as an average of the estimates of the seven channels, weighted by the inverse of the channel's variance.
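The blending step described in the abstract, averaging the per-channel Gaussian estimates weighted by inverse variance, is the standard product-of-Gaussians fusion and might be sketched as follows. The function name and the numbers in the example are illustrative, not taken from the document.

```python
def fuse_gaussians(means, variances):
    """Combine independent Gaussian estimates into one Gaussian by
    inverse-variance weighting: the fused mean is the weighted average
    of the channel means, and the fused variance shrinks as channels
    are added (more channels -> more certainty)."""
    weights = [1.0 / v for v in variances]
    wsum = sum(weights)
    mean = sum(w * m for w, m in zip(weights, means)) / wsum
    return mean, 1.0 / wsum


# Two channels agreeing at 10 units halve the variance of one channel:
print(fuse_gaussians([10.0, 10.0], [1.0, 1.0]))  # (10.0, 0.5)
```

A noisy channel (large variance) automatically contributes little to the fused estimate, which matches the behavior the abstract describes.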
Realization of station for testing asynchronous three-phase motors
NASA Astrophysics Data System (ADS)
Wróbel, A.; Surma, W.
2016-08-01
Nowadays the construction and operation of machines is unimaginable without electric motors [13-15]. The proposed test station is designed to allow testing of asynchronous three-phase motors. The station consists of a tested motor and a second motor acting as a load, the two machines being coupled by a mechanical clutch [2]. The value of the load is recorded by a measuring shaft fitted with a strain-gauge bridge. This concept allows the basic parameters of the motors to be studied and motor parameters to be visualized, under both vector and scalar control, during varying loads on the drive system. In addition, it will be possible to register the varying physical parameters of a working electric motor controlled by a frequency converter or by a contactor. The station is designed as a teaching and research stand for characterizing the motors. Selection of inverter parameters will also be possible.
Pupillary dilation as an index of task demands.
Cabestrero, Raúl; Crespo, Antonio; Quirós, Pilar
2009-12-01
To analyze how pupillary responses reflect mental effort and allocation of processing resources under several load conditions, the pupil diameter of 18 participants was recorded during an auditory digit-span recall task under three load conditions: Low (5 digits), Moderate (8 digits), and Overload (11 digits). Under all load conditions, a significant linear enlargement in pupil diameter was observed as each digit was presented. Significant dilations from the end of the presentation phase to the beginning of the recall phase were also observed, but only under low and moderate loads. Contrary to previous research, under the Overload condition no reduction in pupil diameter was observed when resource limits were exceeded; rather, a plateau was observed from the presentation of the ninth digit until the beginning of the recall phase. Overall, pupillometric data seem to indicate that participants may keep processing actively even though resources are exceeded.
Parallel Computing for Probabilistic Response Analysis of High Temperature Composites
NASA Technical Reports Server (NTRS)
Sues, R. H.; Lua, Y. J.; Smith, M. D.
1994-01-01
The objective of this Phase I research was to establish the required software and hardware strategies to achieve large scale parallelism in solving PCM problems. To meet this objective, several investigations were conducted. First, we identified the multiple levels of parallelism in PCM and the computational strategies to exploit these parallelisms. Next, several software and hardware efficiency investigations were conducted. These involved the use of three different parallel programming paradigms and solution of two example problems on both a shared-memory multiprocessor and a distributed-memory network of workstations.
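The outermost level of parallelism in probabilistic analyses of this kind is usually the Monte Carlo sampling loop: each realization is independent, so the same code can run serially or be mapped onto a shared-memory multiprocessor or a network of workstations. The sketch below is a generic illustration of that idea, not the report's PCM software; the toy "response" and all parameter names are invented for the example.

```python
import random

def sample_response(args):
    """One Monte Carlo realization: draw a random material property and
    return a toy compliance-like response (stand-in for a full analysis)."""
    seed, mean_e, cov = args
    rng = random.Random(seed)
    modulus = rng.gauss(mean_e, cov * mean_e)
    return 1.0 / modulus

def probabilistic_response(n_samples, mean_e=100.0, cov=0.05, mapper=map):
    """Samples are independent, so `mapper` can be the serial built-in
    `map`, or e.g. multiprocessing.Pool().map on a shared-memory machine,
    without changing the analysis code itself."""
    jobs = [(i, mean_e, cov) for i in range(n_samples)]
    responses = list(mapper(sample_response, jobs))
    return sum(responses) / n_samples, responses


mean, _ = probabilistic_response(2000)
print(mean)  # close to 1/100 for a 5% coefficient of variation
```

Injecting the `mapper` is one simple way to keep the software strategy (which parallel paradigm) separate from the analysis itself.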
Force Limited Vibration Testing: Computation C2 for Real Load and Probabilistic Source
NASA Astrophysics Data System (ADS)
Wijker, J. J.; de Boer, A.; Ellenbroek, M. H. M.
2014-06-01
To prevent over-testing of the test item during random vibration testing, Scharton proposed and discussed force limited vibration testing (FLVT) in a number of publications. Besides the random vibration specification, the total mass, and the turnover frequency of the load (test item), the factor C2 is a very important parameter in FLVT. A number of computational methods to estimate C2 are described in the literature, namely the simple and the complex two-degrees-of-freedom systems (STDFS and CTDFS, respectively). Both the STDFS and the CTDFS describe, in a very simplified manner, the load and the source (the adjacent structure that transfers the excitation forces to the test item, e.g. a spacecraft supporting an instrument). The motivation of this work is to establish a method for computing a realistic value of C2, so that a representative force-limited random vibration test can be performed when the description of the adjacent structure (source) is more or less unknown. Marchand formulated a conservative estimate of C2 based on the maximum modal effective mass and damping of the test item (load) when no description of the supporting structure (source) is available [13]. Marchand also gave a formal description of C2 in terms of the maximum PSD of the acceleration and the maximum PSD of the force, both at the interface between load and source, in combination with the apparent mass and total mass of the load. This method is very convenient for computing the factor C2; however, finite element models are needed to compute the PSD spectra of both the acceleration and the force at the load-source interface. Stevens presented the coupled systems modal approach (CSMA), in which simplified asparagus patch models (parallel-oscillator representations) of load and source, consisting of modal effective masses and the spring stiffnesses associated with the natural frequencies, are coupled.
When the random acceleration vibration specification is given, the CSMA method is suitable for computing the value of the parameter C2. When no mathematical model of the source is available, estimates of the value of C2 can be found in the literature. In this paper a probabilistic mathematical representation of the unknown source is proposed, such that the asparagus patch model of the source can be approximated. The value of C2 can then be computed in conjunction with the CSMA method, knowing the apparent mass of the load and the random acceleration specification at the load-source interface. Strength and stiffness design rules for spacecraft, instrumentation, units, etc., as given in ECSS standards and handbooks, launch vehicle user's manuals, papers, and books, are applied, and a probabilistic description of the design parameters is foreseen. As an example, a simple experiment has been worked out.
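For context, the semi-empirical force limit that the C2 factor feeds into takes a simple form: the force-spec PSD equals C2 times the total mass squared times the acceleration-spec PSD below the load's fundamental frequency, rolling off above it. The sketch below shows that form only; the rolloff exponent and the numbers in the example are illustrative assumptions (NASA-HDBK-7004 describes the actual practice), not this paper's method.

```python
def force_limit_psd(freqs, accel_psd, c2, m0, f0, rolloff=2.0):
    """Semi-empirical force-limit PSD: S_FF = C2 * M0^2 * S_AA below the
    turnover frequency f0, attenuated by (f0/f)**rolloff above it."""
    out = []
    for f, saa in zip(freqs, accel_psd):
        sff = c2 * m0 ** 2 * saa
        if f > f0:
            sff *= (f0 / f) ** rolloff
        out.append(sff)
    return out


# Flat 0.04 g^2/Hz spec, 10 kg load, C2 = 2, turnover at 100 Hz:
print(force_limit_psd([50.0, 100.0, 200.0], [0.04, 0.04, 0.04],
                      c2=2.0, m0=10.0, f0=100.0))  # [8.0, 8.0, 2.0]
```

This makes plain why a realistic C2 matters: it scales the entire force limit, and hence how much over-testing is prevented.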
NASA Astrophysics Data System (ADS)
Shanmugharaj, A. M.; Bhowmick, Anil K.
2004-01-01
The rheological properties of styrene-butadiene rubber (SBR) loaded with dual phase filler were measured using a Monsanto Processability Tester (MPT) at three different temperatures (100°C, 110°C and 130°C) and four different shear rates (61.3, 306.3, 613, and 1004.5 s^-1). The effect of electron beam modification of dual phase filler in absence and presence of trimethylol propane triacrylate (TMPTA) or triethoxysilylpropyltetrasulphide (Si-69) on melt flow properties of SBR was also studied. The viscosity of all the systems decreases with shear rate, indicating their pseudoplastic or shear thinning nature. The higher shear viscosity for the SBR loaded with the electron beam modified filler is explained in terms of variation in structure of the filler upon electron beam irradiation. Die swell of the modified filler loaded SBR is slightly higher than that of the unmodified filler loaded rubber, which is explained by calculating normal stress difference for the systems. Activation energy of the modified filler loaded SBR systems is also slightly higher than that of the control filler loaded SBR system.
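Shear-thinning behaviour of the kind reported above is commonly summarized by the Ostwald-de Waele power law, eta = K * gamma^(n-1), with n < 1 indicating pseudoplasticity. A minimal least-squares fit in log-log space might look like the sketch below; this is a generic illustration, not the authors' analysis, and the data in the test are synthetic.

```python
import math

def power_law_fit(shear_rates, viscosities):
    """Fit eta = K * gamma**(n-1) by ordinary least squares on
    log(eta) vs log(gamma); returns (K, n).  n < 1 means the material
    is shear thinning."""
    xs = [math.log(g) for g in shear_rates]
    ys = [math.log(e) for e in viscosities]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    n = slope + 1.0
    k = math.exp(ybar - slope * xbar)
    return k, n
```

Applied to viscosities measured at the four shear rates quoted in the abstract, such a fit yields one (K, n) pair per compound and temperature, from which flow activation energies can then be compared.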
NASA Astrophysics Data System (ADS)
Singh, K. K.; Rawat, Prashant
2018-05-01
This paper investigates the mechanical response of a three-phased (glass/MWCNT/epoxy) composite laminate under three different loadings. Flexural strength, short-beam strength, and low-velocity impact (LVI) tests were performed to find the optimum doping percentage for maximum enhancement of mechanical properties. In this work, MWCNTs were used as a secondary reinforcement for the three-phased composite plate, with doping in the range 0-4 wt% of the thermosetting matrix system. A symmetric, eight-layered glass/epoxy laminate with zero bending-extension coupling was fabricated using a hybrid method, i.e. the hand lay-up technique followed by vacuum bagging. Range analysis of the MWCNT content highlighted the enhancement in flexural and short-beam strength and the improvement in damage tolerance under LVI loading, while at higher doping percentages agglomeration of MWCNTs was observed. The results of the mechanical tests suggest an optimum doping value for maximum strength and damage resistance of the laminate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fukami, Tadashi; Imamura, Michinori; Kaburaki, Yuichi
1995-12-31
A new single-phase capacitor self-excited induction generator with a self-regulating feature is presented. The new generator consists of a squirrel-cage three-phase induction machine and three capacitors connected in series and parallel with a single-phase load. The voltage regulation of this generator is very small due to the effect of the three capacitors. Moreover, since a Y-connected stator winding is employed, the waveform of the output voltage is sinusoidal. In this paper the system configuration and the operating principle of the new generator are explained, and the basic characteristics are investigated by means of a simple analysis and experiments with a laboratory machine.
Design for Reliability and Safety Approach for the NASA New Launch Vehicle
NASA Technical Reports Server (NTRS)
Safie, Fayssal, M.; Weldon, Danny M.
2007-01-01
The United States National Aeronautics and Space Administration (NASA) is in the midst of a space exploration program, called Constellation, intended to send crew and cargo to the International Space Station (ISS), to the Moon, and beyond. As part of the Constellation program, NASA is developing new launch vehicles aimed at significantly increasing safety and reliability, reducing the cost of accessing space, and providing a growth path for manned space exploration. Achieving these goals requires a rigorous process that addresses reliability, safety, and cost upfront and throughout all phases of the program life cycle. This paper discusses the "Design for Reliability and Safety" approach for the new NASA crew launch vehicle called ARES I. The ARES I is being developed by NASA Marshall Space Flight Center (MSFC) in support of the Constellation program. The ARES I consists of three major elements: a solid First Stage (FS), an Upper Stage (US), and a liquid Upper Stage Engine (USE). Stacked on top of the ARES I is the Crew Exploration Vehicle (CEV), which consists of a Launch Abort System (LAS), Crew Module (CM), Service Module (SM), and a Spacecraft Adapter (SA). The CEV development is being led by NASA Johnson Space Center (JSC). Designing for high reliability and safety requires a good integrated working environment and a sound technical design approach. The "Design for Reliability and Safety" approach addressed in this paper covers both the environment and the technical process put in place to support the ARES I design. To address the integrated working environment, the ARES I project office has established a risk-based design group called the "Operability Design and Analysis" (OD&A) group, which brings the engineering, design, and safety organizations together to optimize the system design for safety, reliability, and cost.
On the technical side, the ARES I project has, through the OD&A environment, implemented a probabilistic approach to analyze and evaluate design uncertainties and understand their impact on safety, reliability, and cost. This paper focuses on the various probabilistic approaches pursued by the ARES I project. Specifically, it discusses an integrated functional probabilistic analysis approach that addresses upfront some key areas in support of the ARES I Design Analysis Cycle (DAC) pre-Preliminary Design (PD) phase. This functional approach is a probabilistic, physics-based approach that combines failure probabilities with system dynamics and engineering failure-impact models to identify key system risk drivers and potential system design requirements. The paper also discusses other probabilistic risk assessment approaches planned by the ARES I project to support the PD phase and beyond.
The Extravehicular Suit Impact Load Attenuation Study for Use in Astronaut Bone Fracture Prediction
NASA Technical Reports Server (NTRS)
Lewandowski, Beth E.; Gilkey, Kelly M.; Sulkowski, Christina M.; Samorezov, Sergey; Myers, Jerry G.
2011-01-01
The NASA Integrated Medical Model (IMM) assesses the risk, including likelihood and impact of occurrence, of all credible in-flight medical conditions. Fracture of the proximal femur is a traumatic injury that would likely result in loss of mission if it were to happen during spaceflight. The low gravity exposure causes decreases in bone mineral density, which heightens the concern. Researchers at the NASA Glenn Research Center have quantified bone fracture probability during spaceflight with a probabilistic model. It was assumed that a pressurized extravehicular activity (EVA) suit would attenuate load during a fall, but no supporting data was available. The suit impact load attenuation study was performed to collect analogous data. METHODS: A pressurized EVA suit analog test bed was used to study how the offset, defined as the gap between the suit and the astronaut's body, the impact load magnitude, and the suit operating pressure affect the attenuation of impact load. The attenuation data was incorporated into the probabilistic model of bone fracture as a function of these factors, replacing a load attenuation value based on commercial hip protectors. RESULTS: Load attenuation was more dependent on offset than on pressurization or load magnitude, especially at small offsets. Load attenuation factors for offsets between 0.1 - 1.5 cm were 0.69 +/- 0.15, 0.49 +/- 0.22 and 0.35 +/- 0.18 for mean impact forces of 4827, 6400 and 8467 N, respectively. Load attenuation factors for offsets of 2.8 - 5.3 cm were 0.93 +/- 0.2, 0.94 +/- 0.1 and 0.84 +/- 0.5, for the same mean impact forces. Reductions were observed in the 95th percentile confidence interval of the bone fracture probability predictions. CONCLUSIONS: The reduction in uncertainty and improved confidence in bone fracture predictions increased the fidelity and credibility of the fracture risk model and its benefit to mission design and operational decisions.
NASA Technical Reports Server (NTRS)
Thomas, J. M.; Hanagud, S.
1975-01-01
The results of two questionnaires sent to engineering experts are statistically analyzed and compared with objective data from Saturn V design and testing. Engineers were asked how likely it was for structural failure to occur at load increments above and below analysts' stress limit predictions. They were requested to estimate the relative probabilities of different failure causes, and of failure at each load increment given a specific cause. Three mathematical models are constructed based on the experts' assessment of causes. The experts' overall assessment of prediction strength fits the Saturn V data better than the models do, but a model test option (T-3) based on the overall assessment gives more design change likelihood to overstrength structures than does an older standard test option. T-3 compares unfavorably with the standard option in a cost optimum structural design problem. The report reflects a need for subjective data when objective data are unavailable.
Valente, Giordano; Taddei, Fulvia; Jonkers, Ilse
2013-09-03
The weakness of hip abductor muscles is related to lower-limb joint osteoarthritis, and joint overloading may increase the risk for disease progression. The relationship between muscle strength, structural joint deterioration and joint loading makes the latter an important parameter in the study of onset and follow-up of the disease. Since the relationship between hip abductor weakness and joint loading still remains an open question, the purpose of this study was to adopt a probabilistic modeling approach to give insights into how the weakness of hip abductor muscles, to the extent that normal gait remains unaltered, affects ipsilateral joint contact forces. A generic musculoskeletal model was scaled to each healthy subject included in the study, and the maximum force-generating capacity of each hip abductor muscle in the model was perturbed to evaluate how all physiologically possible configurations of hip abductor weakness affected the joint contact forces during walking. In general, the muscular system was able to compensate for abductor weakness. The reduced force-generating capacity of the abductor muscles affected joint contact forces to a mild extent, with 50th percentile mean differences up to 0.5 BW (maximum 1.7 BW). There were greater increases in the peak knee joint loads than in loads at the hip or ankle. Gluteus medius, particularly the anterior compartment, was the abductor muscle with the most influence on hip and knee loads. Further studies should assess if these increases in joint loading may affect initiation and progression of osteoarthritis. Copyright © 2013 Elsevier Ltd. All rights reserved.
Wang, Zhong; An, Yu-Guang; Xu, Guang-Ju; Wang, Xiao-Zhe
2011-07-01
The polycyclic aromatic hydrocarbons (PAHs) were measured using glass-fiber filters and an XAD-2 collector, followed by ultrasonic extraction, Soxhlet extraction, and GC-MS analysis. Exhaust emissions of a DI single-cylinder diesel engine fueled with pure diesel, biodiesel, and a 50% biodiesel blend (B50) were measured. The results indicate that the particle-phase PAH emissions of the diesel engine decrease with increasing load. The gas-phase PAH emissions decrease with increasing load at first, then rise again as the load increases further. The particle-phase and gas-phase PAH emissions of biodiesel decrease, and their mean concentrations are lower than those of diesel. The total PAH emission concentration of biodiesel is 41.1-70.1 microg/m3; the mean total PAH concentration of biodiesel is 33.3% lower than that of diesel. The mass proportion of three-ring PAHs in the total PAH emissions of the three tested fuels is about 44%, and biodiesel increases the proportion of three-ring PAHs. The toxic equivalence of the PAH emissions of biodiesel is much lower than that of diesel, making it less harmful to humans than diesel fuel.
Bredbenner, Todd L.; Eliason, Travis D.; Francis, W. Loren; McFarland, John M.; Merkle, Andrew C.; Nicolella, Daniel P.
2014-01-01
Cervical spinal injuries are a significant concern in all trauma injuries. Recent military conflicts have demonstrated the substantial risk of spinal injury for the modern warfighter. Finite element models used to investigate injury mechanisms often fail to examine the effects of variation in geometry or material properties on mechanical behavior. The goals of this study were to model geometric variation for a set of cervical spines, to extend this model to a parametric finite element model, and, as a first step, to validate the parametric model against experimental data for low-loading conditions. Individual finite element models were created using cervical spine (C3–T1) computed tomography data for five male cadavers. Statistical shape modeling (SSM) was used to generate a parametric finite element model incorporating variability of spine geometry, and soft-tissue material property variation was also included. The probabilistic loading response of the parametric model was determined under flexion-extension, axial rotation, and lateral bending and validated by comparison to experimental data. Based on qualitative and quantitative comparison of the experimental loading response and model simulations, we suggest that the model performs adequately under relatively low-level loading conditions in multiple loading directions. In conclusion, SSM methods coupled with finite element analyses within a probabilistic framework, along with the ability to statistically validate the overall model performance, provide innovative and important steps toward describing the differences in vertebral morphology, spinal curvature, and variation in material properties. We suggest that these methods, with additional investigation and validation under injurious loading conditions, will lead to understanding and mitigating the risks of injury in the spine and other musculoskeletal structures. PMID:25506051
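At its core, a statistical shape model (SSM) of the kind described above represents any plausible geometry as the mean shape plus a weighted sum of principal variation modes. The sketch below shows only that reconstruction step over flattened coordinate vectors; it is a generic illustration under that assumption, not the authors' cervical spine pipeline.

```python
def ssm_instance(mean_shape, modes, coeffs):
    """Generate one shape instance from a statistical shape model:
    shape = mean + sum_k (b_k * mode_k), where each mode is a flattened
    vector of coordinate offsets (e.g. from PCA of training shapes) and
    b_k are the sampled shape coefficients."""
    shape = list(mean_shape)
    for b, mode in zip(coeffs, modes):
        shape = [x + b * m for x, m in zip(shape, mode)]
    return shape


# Toy 2-point, 2-mode model (coordinates flattened as x1, y1, x2, y2):
mean = [0.0, 0.0, 1.0, 0.0]
modes = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
print(ssm_instance(mean, modes, [2.0, -1.0]))  # [2.0, -1.0, 1.0, 0.0]
```

In a probabilistic framework, the coefficients b_k are sampled from the training-population distribution, and each sampled shape is meshed and analyzed, which is what lets geometric variability propagate into the finite element response.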
Reddy, Lena Felice; Waltz, James A; Green, Michael F; Wynn, Jonathan K; Horan, William P
2016-07-01
Although individuals with schizophrenia show impaired feedback-driven learning on probabilistic reversal learning (PRL) tasks, the specific factors that contribute to these deficits remain unknown. Recent work has suggested several potential causes including neurocognitive impairments, clinical symptoms, and specific types of feedback-related errors. To examine this issue, we administered a PRL task to 126 stable schizophrenia outpatients and 72 matched controls, and patients were retested 4 weeks later. The task involved an initial probabilistic discrimination learning phase and subsequent reversal phases in which subjects had to adjust their responses to sudden shifts in the reinforcement contingencies. Patients showed poorer performance than controls for both the initial discrimination and reversal learning phases of the task, and performance overall showed good test-retest reliability among patients. A subgroup analysis of patients (n = 64) and controls (n = 49) with good initial discrimination learning revealed no between-group differences in reversal learning, indicating that the patients who were able to achieve all of the initial probabilistic discriminations were not impaired in reversal learning. Regarding potential contributors to impaired discrimination learning, several factors were associated with poor PRL, including higher levels of neurocognitive impairment, poor learning from both positive and negative feedback, and higher levels of indiscriminate response shifting. The results suggest that poor PRL performance in schizophrenia can be the product of multiple mechanisms. © The Author 2016. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.
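The structure of a probabilistic reversal learning task can be made concrete with a toy simulation: one option is rewarded with high probability, the other with low probability, and the contingency flips partway through. The delta-rule learner and all parameter values below are illustrative assumptions for exposition, not the task or model used in the study.

```python
import random

def simulate_prl(trials=200, reversal_at=100, p_reward=0.8,
                 alpha=0.3, seed=1):
    """Toy PRL task: a greedy delta-rule learner tracks the value of two
    options; choosing the currently 'best' option is rewarded with
    probability p_reward, the other with 1 - p_reward.  The contingency
    reverses at trial `reversal_at`.  Returns the fraction of trials on
    which the objectively best option was chosen."""
    rng = random.Random(seed)
    q = [0.5, 0.5]
    correct = 0
    for t in range(trials):
        best = 0 if t < reversal_at else 1
        choice = 0 if q[0] >= q[1] else 1
        p = p_reward if choice == best else 1.0 - p_reward
        reward = 1.0 if rng.random() < p else 0.0
        q[choice] += alpha * (reward - q[choice])   # delta-rule update
        correct += (choice == best)
    return correct / trials
```

A learner that adjusts quickly after the reversal scores well above chance; impaired feedback-driven learning or indiscriminate response shifting, as discussed in the abstract, would show up as depressed accuracy in the reversal phases.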
NASA Astrophysics Data System (ADS)
Lee, D. B.; Jerolmack, D. J.
2017-12-01
Bed-load transport is notoriously unpredictable, in part due to stochastic fluctuations in grain entrainment and deposition. A general statistical mechanical framework has been proposed by Furbish and colleagues to formally derive average bed-load flux from grain-scale motion, and its application requires an intimate understanding of the probabilistic motion of individual grains. Recent work by Ancey et al. suggests that, near threshold, particles are entrained collectively. If so, understanding the scales of correlation is a necessary step to complete the probabilistic framework describing bed-load flux. We perform a series of experiments in a steep-sloped channel that directly quantifies fluctuations in grain motion as a function of the feed rate of particles (marbles). As the feed rate is increased, the necessary averaging time is decreased (i.e. transport grows less variable in time). Collective grain motion is defined as spatially clustered movement of several grains at once. We find that entrainment of particles is generally collective, but that these entrained particles deposit independently of each other. The size distribution of collective motion events follows an exponential decay that is consistent across sediment feed rates. To first order, changing feed rate does not change the kinematics of mobile grains, just the frequency of motion. For transport within a given region of the bed, we show that the total displacement of all entrained grains is proportional to the kinetic energy deposited into the bed by impacting grains. Individual grain-bed impacts are the likely cause of both collective and individual grain entrainment. The picture that emerges is similar to generic avalanching dynamics in sandpiles: "avalanches" (collective entrainment events) of a characteristic size relax with a characteristic timescale regardless of feed rate, but the frequency of avalanches increases in proportion to the feed rate. 
At high enough feed rates the avalanches merge, leading to progressively smoother and continuous transport. As most bed-load transport occurs in the intermittent regime, the length scale of collective entrainment should be considered a fundamental addition to a probabilistic framework that hopes to infer flux from grain motion.
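The avalanche picture sketched above, collective entrainment events of a characteristic size arriving at a frequency proportional to feed rate, can be written as a minimal stochastic simulation. This is a schematic of the described dynamics under simple Poisson and exponential assumptions, not the authors' experimental analysis.

```python
import random

def simulate_transport(feed_rate, mean_event_size, duration, seed=0):
    """Entrainment events arrive as a Poisson process with rate
    proportional to the feed rate; each event mobilizes an exponentially
    distributed number of grains with a fixed characteristic mean,
    independent of feed rate.  Returns total grains transported."""
    rng = random.Random(seed)
    t, total = 0.0, 0
    while True:
        t += rng.expovariate(feed_rate)       # waiting time to next event
        if t > duration:
            break
        total += max(1, round(rng.expovariate(1.0 / mean_event_size)))
    return total
```

Raising the feed rate in this sketch increases only the event frequency, not the event-size distribution, so transport becomes smoother and more continuous as avalanches begin to overlap, mirroring the behavior described in the abstract.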
Statistical modelling of networked human-automation performance using working memory capacity.
Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja
2014-01-01
This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models.
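The simplest of the three modelling approaches named above, classical linear regression with working-memory interaction terms, can be sketched as follows. The coefficient names, signs, and values are purely illustrative assumptions; the study's fitted models are not reproduced here.

```python
def predict_performance(task_load, msg_quality, wm_capacity, coeffs):
    """Linear model of supervisory performance in which the effects of
    task load and network message quality are modulated by working
    memory (WM) capacity via interaction terms."""
    b0, b_load, b_q, b_wm, b_load_wm, b_q_wm = coeffs
    return (b0
            + b_load * task_load
            + b_q * msg_quality
            + b_wm * wm_capacity
            + b_load_wm * task_load * wm_capacity
            + b_q_wm * msg_quality * wm_capacity)


# Illustrative coefficients: load hurts, quality and WM capacity help,
# and higher WM capacity softens the load penalty.
coeffs = (0.9, -0.05, 0.02, 0.03, 0.01, 0.0)
print(predict_performance(4, 1.0, 2.0, coeffs))
```

The Gaussian-process and Bayesian-network approaches mentioned in the abstract relax this model's linearity and, respectively, allow probabilistic inference about unknown task conditions or WM capacities.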
Analog-Based Postprocessing of Navigation-Related Hydrological Ensemble Forecasts
NASA Astrophysics Data System (ADS)
Hemri, S.; Klein, B.
2017-11-01
Inland waterway transport benefits from probabilistic forecasts of water levels, as they allow operators to optimize the ship load and, hence, to minimize transport costs. Probabilistic state-of-the-art hydrologic ensemble forecasts inherit biases and dispersion errors from the atmospheric ensemble forecasts they are driven with. The use of statistical postprocessing techniques like ensemble model output statistics (EMOS) allows for a reduction of these systematic errors by fitting a statistical model based on training data. In this study, training periods for EMOS are selected based on forecast analogs, i.e., historical forecasts that are similar to the forecast to be verified. Due to the strong autocorrelation of water levels, forecast analogs have to be selected based on entire forecast hydrographs in order to guarantee similar hydrograph shapes. Custom-tailored measures of similarity for forecast hydrographs comprise hydrological series distance (SD), the hydrological matching algorithm (HMA), and dynamic time warping (DTW). Verification against observations reveals that EMOS forecasts for water level at three gauges along the river Rhine with training periods selected based on SD, HMA, and DTW compare favorably with reference EMOS forecasts, which are based either on seasonal training periods or on training periods obtained by dividing the hydrological forecast trajectories into runoff regimes.
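Of the three hydrograph-similarity measures the abstract names, dynamic time warping is the most widely documented. The following is a minimal sketch of the classic DTW distance between two hydrographs; the operational variant used for river forecasts may differ (e.g., windowing constraints or step weights), so treat this as illustrative only.

```python
import math

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D series.

    Fills the (n+1) x (m+1) cumulative-cost table; each cell extends the
    cheapest of the three admissible warping steps (match, insert, delete).
    """
    n, m = len(a), len(b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Two hydrographs with the same peak shape but a small timing offset
# score as much more similar than a plain pointwise comparison would.
h1 = [1.0, 2.0, 4.0, 3.0, 2.0]
h2 = [1.0, 1.0, 2.0, 4.0, 3.0, 2.0]
d_shifted = dtw_distance(h1, h2)
```

Because DTW aligns the two series before comparing them, `d_shifted` is zero here even though the series differ pointwise, which is exactly why shape-based analog selection needs it.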
Probabilistic model of bridge vehicle loads in port area based on in-situ load testing
NASA Astrophysics Data System (ADS)
Deng, Ming; Wang, Lei; Zhang, Jianren; Wang, Rei; Yan, Yanhong
2017-11-01
Vehicle load is an important factor affecting the safety and usability of bridges. A statistical analysis is carried out in this paper to investigate the vehicle load data of the Tianjin Haibin highway in Tianjin port, China, collected by a Weigh-in-Motion (WIM) system. Following this, the effect of the vehicle load on a test bridge is calculated and compared with the result calculated according to HL-93 (AASHTO LRFD). Results show that the overall vehicle load follows a weighted sum of four normal distributions. The maximum vehicle load during the design reference period follows a type I extreme-value distribution. The vehicle load effect also follows a weighted sum of four normal distributions, and the standard value of the vehicle load is recommended as 1.8 times the value calculated according to HL-93.
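The two distributional findings connect naturally: drawing single-vehicle loads from a four-component normal mixture and taking block maxima is exactly the setting in which extreme-value theory predicts a type I (Gumbel) distribution for the design-period maximum. A sketch with hypothetical component parameters (the paper's fitted values are not reproduced here):

```python
import random

# Hypothetical mixture parameters (weights, means in kN, std devs in kN) --
# illustrative placeholders, not the values fitted from the WIM data.
weights = [0.4, 0.3, 0.2, 0.1]
means   = [50.0, 150.0, 300.0, 550.0]
stds    = [15.0, 40.0, 60.0, 80.0]

def sample_vehicle_load(rng):
    """Draw one load from the four-component normal mixture:
    pick a component by weight, then sample that normal."""
    k = rng.choices(range(4), weights=weights)[0]
    return rng.normalvariate(means[k], stds[k])

rng = random.Random(0)
# The maximum over a large block of vehicles: by extreme-value theory,
# repeating this over many blocks yields an approximately Gumbel
# (type I extremum) distribution of block maxima.
block_max = max(sample_vehicle_load(rng) for _ in range(10_000))
```

With many such blocks, fitting a Gumbel distribution to the collected `block_max` values mirrors the paper's design-reference-period analysis.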
Probabilistic Finite Elements (PFEM) structural dynamics and fracture mechanics
NASA Technical Reports Server (NTRS)
Liu, Wing-Kam; Belytschko, Ted; Mani, A.; Besterfield, G.
1989-01-01
The purpose of this work is to develop computationally efficient methodologies for assessing the effects of randomness in loads, material properties, and other aspects of a problem by a finite element analysis. The resulting group of methods is called probabilistic finite elements (PFEM). The overall objective of this work is to develop methodologies whereby the lifetime of a component can be predicted, accounting for the variability in the material and geometry of the component, the loads, and other aspects of the environment; and the range of response expected in a particular scenario can be presented to the analyst in addition to the response itself. Emphasis has been placed on methods which are not statistical in character; that is, they do not involve Monte Carlo simulations. The reason for this choice of direction is that Monte Carlo simulations of complex nonlinear response require a tremendous amount of computation. The focus of efforts so far has been on nonlinear structural dynamics. However, in the continuation of this project, emphasis will be shifted to probabilistic fracture mechanics so that the effect of randomness in crack geometry and material properties can be studied interactively with the effect of random load and environment.
A Three-Phase Microgrid Restoration Model Considering Unbalanced Operation of Distributed Generation
Wang, Zeyu; Wang, Jianhui; Chen, Chen
2016-12-07
Recent severe outages highlight the urgency of improving grid resiliency in the U.S. Microgrid formation schemes are proposed to restore critical loads after outages occur. Most distribution networks have unbalanced configurations that are not represented in sufficient detail by single-phase models. This study provides a microgrid formation plan that adopts a three-phase network model to represent unbalanced distribution networks. The problem formulation has a quadratic objective function with mixed-integer linear constraints. The three-phase network model enables us to examine the three-phase power outputs of distributed generators (DGs), preventing unbalanced operation that might trip DGs. Because the DG unbalanced operation constraint is non-convex, an iterative process is presented that checks whether the unbalanced operation limits for DGs are satisfied after each iteration of optimization. We also develop a relatively conservative linear approximation of the unbalanced operation constraint to handle larger networks. Compared with the iterative solution process, the conservative linear approximation is able to accelerate the solution process at the cost of sacrificing optimality to a limited extent. Simulations on the IEEE 34-node and IEEE 123-node test feeders indicate that the proposed method yields more practical microgrid formation results. In addition, this paper explores the coordinated operation of DGs and energy storage (ES) installations. The unbalanced three-phase outputs of ESs combined with the relatively balanced outputs of DGs can supply unbalanced loads. The case study also validates the DG-ES coordination.
The purpose of this SOP is to describe the procedures undertaken to calculate the ingestion exposure using composite food chemical residue values from the day of direct measurements. The calculation is based on the probabilistic approach. This SOP uses data that have been proper...
NASA Astrophysics Data System (ADS)
Amien, S.; Yoga, W.; Fahmi, F.
2018-02-01
Synchronous generators are a major component of electrical energy generating systems, and in practice the load supplied by a generator is often unbalanced. This paper discusses the effect of balanced and unbalanced load conditions on synchronous generator temperature, comparing the measurement results for both states of the generator. Unbalanced loads can be caused by various asymmetric disturbances in the power system and by failures of load-forecasting studies, so that the load distribution in each phase is not the same, causing excessive heating of the generator. Data were collected using an infrared thermometer and the resistance calculation method. Comparing resistive, inductive, and capacitive loads, the highest temperature under balanced load occurred when the generator carried a resistive load (T = 31.9 °C at t = 65 minutes), while under unbalanced load the highest temperature occurred with a capacitive load (T = 40.1 °C at t = 60 minutes). Understanding this behavior makes it possible to maintain the generator for a longer operating life.
NASA Astrophysics Data System (ADS)
Ghaemi, Mehrdad; Javadi, Nabi
2017-11-01
The phase diagrams of the three-layer Ising model on the honeycomb lattice with a diluted surface have been constructed using probabilistic cellular automata based on the Glauber algorithm. The effects of the exchange interactions on the phase diagrams have been investigated. A general mathematical expression for the critical temperature is obtained in terms of the relative coupling r = J1/J and Δs = (Js/J) - 1, where J and Js represent the nearest-neighbor coupling within the inner and surface layers, respectively, and each magnetic site in the surface layer is coupled with the nearest-neighbor site in the inner layer via the exchange coupling J1. In the case of antiferromagnetic coupling between the surface layer and the inner layer, the system reveals many interesting phenomena, such as the possible existence of a compensation line below the critical temperature.
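The probabilistic cellular automaton underlying such simulations updates each spin with the Glauber (heat-bath) rule. A minimal single-spin sketch of that rule follows; the paper's three-layer honeycomb model with couplings J, Js, and J1 is, of course, more elaborate than this generic ferromagnet.

```python
import math

def glauber_flip_probability(spin, neighbor_sum, J, T):
    """Heat-bath (Glauber) probability of flipping one Ising spin:
    p = 1 / (1 + exp(dE / T)), with dE = 2 * J * spin * neighbor_sum
    (Boltzmann constant absorbed into the temperature T)."""
    dE = 2.0 * J * spin * neighbor_sum
    return 1.0 / (1.0 + math.exp(dE / T))

# A spin aligned with its neighbours pays energy to flip (dE > 0) and so
# flips rarely at low T; a spin feeling zero net field flips with p = 1/2.
p_aligned = glauber_flip_probability(+1, 4, J=1.0, T=1.0)
p_neutral = glauber_flip_probability(+1, 0, J=1.0, T=1.0)
```

Sweeping this update over every lattice site, and scanning T, is what lets the automaton locate critical and compensation temperatures.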
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jimenez, O.; Departamento de Fisica, Facultad de Ciencias Basicas, Universidad de Antofagasta, Casilla 170, Antofagasta; Bergou, J.
We study the probabilistic cloning of three symmetric states. These states are defined by a single complex quantity, the inner product among them. We show that three different probabilistic cloning machines are necessary to optimally clone all possible families of three symmetric states. We also show that the optimal cloning probability of generating M copies out of one original can be cast as the quotient between the success probability of unambiguously discriminating one and M copies of symmetric states.
Research on a Method of Geographical Information Service Load Balancing
NASA Astrophysics Data System (ADS)
Li, Heyuan; Li, Yongxing; Xue, Zhiyong; Feng, Tao
2018-05-01
With the development of geographical information service technologies, achieving intelligent scheduling and highly concurrent access to geographical information service resources through load balancing is a focal point of current study. This paper presents a dynamic load-balancing algorithm. In the algorithm, types of geographical information service are matched with the corresponding server group; the RED algorithm is then combined with a double-threshold method to judge the load state of each server node; finally, the service is scheduled by weighted probability over a given period. An experimental system built on a server cluster demonstrates the effectiveness of the presented method.
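The final scheduling step can be pictured as weighted-probability dispatch. The sketch below chooses a node with probability proportional to its remaining capacity; this is a generic illustration, and the paper's scheduler additionally gates nodes through the RED-style double-threshold load check before weighting.

```python
import random

def pick_server(servers, rng=random):
    """Weighted-probabilistic dispatch: choose a node with probability
    proportional to its free capacity (capacity - current load)."""
    free = [max(s["capacity"] - s["load"], 0.0) for s in servers]
    if sum(free) == 0:
        raise RuntimeError("all nodes saturated")
    return rng.choices(servers, weights=free)[0]

# Hypothetical node records for illustration.
nodes = [
    {"name": "gis-1", "capacity": 100.0, "load": 90.0},
    {"name": "gis-2", "capacity": 100.0, "load": 20.0},
]
# gis-2 has 8x the free capacity of gis-1, so over many requests it
# receives roughly 8/9 of the traffic.
choice = pick_server(nodes, random.Random(1))
```

Refreshing the `load` fields each scheduling period turns this one-shot choice into the periodic weighted scheduling the abstract describes.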
Molecular dynamics simulation of shock-wave loading of copper and titanium
NASA Astrophysics Data System (ADS)
Bolesta, A. V.; Fomin, V. M.
2017-10-01
At extreme pressures and temperatures, common materials form new dense phases with compacted atomic arrangements. By classical molecular dynamics simulation we observe that FCC copper undergoes a phase transformation to a BCC structure. The transition occurs under shock-wave loading at pressures above 80 GPa and corresponding temperatures above 2000 K. We calculate the phase diagram, show that at these pressures and low temperature the FCC phase of copper is still stable, and discuss the thermodynamic reason for the phase transformation in the high-temperature shock-wave regime. Titanium also forms a new hexagonal phase at high pressure. We calculate the structure of the shock wave in titanium and observe that the shock front splits into three parts: elastic, plastic, and phase transformation. The possibility of using a phase transition behind a shock wave, with subsequent unloading, for designing nanocrystalline materials with reduced grain size is also shown.
Probabilistic Flexural Fatigue in Plain and Fiber-Reinforced Concrete
Ríos, José D.
2017-01-01
The objective of this work is two-fold. First, we attempt to fit the experimental data on the flexural fatigue of plain and fiber-reinforced concrete with a probabilistic model (Saucedo, Yu, Medeiros, Zhang and Ruiz, Int. J. Fatigue, 2013, 48, 308–318). This model was validated for compressive fatigue at various loading frequencies, but not for flexural fatigue. Since the model is probabilistic, it is not necessarily related to the specific mechanism of fatigue damage, but rather generically explains the fatigue distribution in concrete (plain or reinforced with fibers) for damage under compression, tension or flexion. In this work, more than 100 series of flexural fatigue tests in the literature are fit with excellent results. Since the distribution of monotonic tests was not available in the majority of cases, a two-step procedure is established to estimate the model parameters based solely on fatigue tests. The coefficient of regression was more than 0.90 except for particular cases where not all tests were strictly performed under the same loading conditions, which confirms the applicability of the model to flexural fatigue data analysis. Moreover, the model parameters are closely related to fatigue performance, which demonstrates the predictive capacity of the model. For instance, the scale parameter is related to flexural strength, which improves with the addition of fibers. Similarly, fiber increases the scattering of fatigue life, which is reflected by the decreasing shape parameter. PMID:28773123
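Probabilistic fatigue models of this family assign a Weibull distribution to fatigue life, with the scale parameter tied to strength and the shape parameter to scatter. The sketch below uses the generic Weibull CDF with made-up parameters to show the effect the abstract describes; it is not the fitted Saucedo model itself.

```python
import math

def weibull_cdf(n, shape, scale, loc=0.0):
    """Three-parameter Weibull CDF, the distribution family that
    probabilistic fatigue models of this kind assign to fatigue life N:
    F(N) = 1 - exp(-((N - loc)/scale)^shape) for N > loc."""
    if n <= loc:
        return 0.0
    return 1.0 - math.exp(-((n - loc) / scale) ** shape)

# Illustrative parameters only (not values from the paper): at a fixed
# number of cycles well below the scale parameter, a lower shape
# parameter (more scatter in fatigue life) gives a higher failure
# probability -- the fiber effect the abstract mentions.
p_narrow = weibull_cdf(2e5, shape=3.0, scale=1e6)
p_wide   = weibull_cdf(2e5, shape=0.8, scale=1e6)
```

The comparison `p_wide > p_narrow` is the numerical face of "fiber increases the scattering of fatigue life, reflected by the decreasing shape parameter."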
Classic articles and workbook: EPRI monographs on simulation of electric power production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stremel, J.P.
1991-12-01
This monograph republishes several articles, including a seminal one on probabilistic production costing for electric power generation. That article is given in the original French along with an English translation. Another article, written by R. Booth, gives a popular explanation of the theory, and a workbook by B. Manhire is included that carries through a simple example step by step. The classical analysis of non-probabilistic generator dispatch by L.K. Kirchmayer is republished along with an introductory essay by J.P. Stremel that puts the monograph material in perspective. The article in French was written by H. Baleriaux, E. Jamoulle, and Fr. Linard de Guertechin and first published in Brussels in 1967. It derived a method for calculating the expected value of production costs by modifying a load duration curve through the use of probability factors that account for unplanned random generator outages. Although the paper showed how pumped-storage plants could be included and how linear programming could be applied, the convolution technique used in the probabilistic calculations is the part most widely applied. The tutorial paper by Booth was written in a light style, and its lucidity helped popularize the method. The workbook by Manhire also shows how the calculation can be shortened significantly by using cumulants to approximate the load duration curve.
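The convolution technique mentioned folds each unit's forced-outage probability into the load duration curve: after committing a unit of capacity c with forced outage rate q, the equivalent curve becomes F'(x) = (1 - q) F(x) + q F(x - c). A minimal discretized sketch of that single step (parameter names and the example curve are illustrative):

```python
def convolve_unit(ldc, capacity, outage_prob, step):
    """Baleriaux-style convolution step on a discretized load duration
    curve. ldc[i] ~ probability that the equivalent load exceeds i*step.
    Returns the equivalent curve after one unit is committed:
    F'(x) = (1 - q) * F(x) + q * F(x - capacity)."""
    shift = int(round(capacity / step))
    out = []
    for i in range(len(ldc)):
        # F(x) is 1 for x <= 0 (load always exceeds a non-positive level)
        below = ldc[i - shift] if i >= shift else 1.0
        out.append((1.0 - outage_prob) * ldc[i] + outage_prob * below)
    return out

# Toy curve on a 1-MW grid; committing a 2-MW unit with a 10% forced
# outage rate lifts the curve, reflecting the load left for later units.
ldc = [1.0, 0.8, 0.5, 0.2, 0.0]
eldc = convolve_unit(ldc, capacity=2.0, outage_prob=0.1, step=1.0)
```

Applying this step unit by unit in loading order, and integrating each unit's slice of the curve, yields the expected production costs the 1967 article derives.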
Adaptive Control of Four-Leg VSC Based DSTATCOM in Distribution System
NASA Astrophysics Data System (ADS)
Singh, Bhim; Arya, Sabha Raj
2014-01-01
This work discusses the experimental performance of a four-leg Distribution Static Compensator (DSTATCOM) using an adaptive-filter-based approach. The approach is used to estimate reference supply currents by extracting the fundamental active power components of three-phase distorted load currents. This control algorithm is implemented on an assembled DSTATCOM for harmonics elimination, neutral current compensation, and load balancing under nonlinear loads. Experimental results are discussed, showing that the DSTATCOM is an effective solution that performs satisfactorily under load dynamics.
Development/Modernization of an Advanced Non-Light Water Reactor Probabilistic Risk Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henneke, Dennis W.; Robinson, James
In 2015, GE Hitachi Nuclear Energy (GEH) teamed with Argonne National Laboratory (Argonne) to perform Research and Development (R&D) of next-generation Probabilistic Risk Assessment (PRA) methodologies for the modernization of an advanced non-Light Water Reactor (non-LWR) PRA. This effort built upon a PRA developed in the early 1990s for GEH’s Power Reactor Inherently Safe Module (PRISM) Sodium Fast Reactor (SFR). The work had four main tasks: internal events development, modeling the risk from the reactor for hazards occurring at-power internal to the plant; an all hazards scoping review to analyze the risk at a high level from external hazards such as earthquakes and high winds; an all modes scoping review to understand the risk at a high level from operating modes other than at-power; and risk insights to integrate the results from each of the three phases above. To achieve these objectives, GEH and Argonne used and adapted proven PRA methodologies and techniques to build a modern non-LWR all hazards/all modes PRA. The teams also advanced non-LWR PRA methodologies, which is an important outcome of this work. This report summarizes the project outcomes in two major phases. The first phase presents the methodologies developed for non-LWR PRAs. The methodologies are grouped by scope, from Internal Events At-Power (IEAP) to hazards analysis to modes analysis. The second phase presents details of the PRISM PRA model, which was developed as a validation of the non-LWR methodologies. The PRISM PRA was performed in detail for IEAP, and at a broader level for hazards and modes. In addition to contributing methodologies, this project developed risk insights applicable to non-LWR PRA, including focus areas for future R&D and conclusions about the PRISM design.
Students’ difficulties in probabilistic problem-solving
NASA Astrophysics Data System (ADS)
Arum, D. P.; Kusmayadi, T. A.; Pramudya, I.
2018-03-01
Many errors can be identified when students solve mathematics problems, particularly probabilistic problems. This study investigates students' difficulties in solving probabilistic problems, focusing on analyzing and describing the errors students make during problem-solving. The research used a qualitative method with a case-study strategy. The subjects were ten 9th-grade students selected by purposive sampling. The data comprise students' probabilistic problem-solving results and recorded interviews regarding their difficulties, analyzed descriptively using Miles and Huberman's steps. The results show that students' difficulties in solving probabilistic problems fall into three categories. The first relates to difficulties in understanding the probabilistic problem. The second concerns difficulties in choosing and using appropriate strategies for solving the problem. The third involves difficulties with the computational process. These findings indicate that students are not yet able to apply their knowledge and abilities to probabilistic problems. It is therefore important for mathematics teachers to plan probabilistic instruction that optimizes students' probabilistic thinking ability.
Method and system for dynamic probabilistic risk assessment
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta (Inventor); Xu, Hong (Inventor)
2013-01-01
The DEFT methodology, system, and computer-readable medium extend the applicability of the PRA (Probabilistic Risk Assessment) methodology to computer-based systems by allowing DFT (Dynamic Fault Tree) nodes as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, and supports all common PRA analysis functions and cutsets. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.
Probabilistic Cloning of Three Real States with Optimal Success Probabilities
NASA Astrophysics Data System (ADS)
Rui, Pin-shu
2017-06-01
We investigate the probabilistic quantum cloning (PQC) of three real states with an average probability distribution. To obtain the analytic forms of the optimal success probabilities, we assume that the three states have only two pairwise inner products. Based on the optimal success probabilities, we derive the explicit form of the 1 → 2 PQC for cloning three real states. The unitary operation needed in the PQC process is worked out as well. The optimal success probabilities are also generalized to the M → N PQC case.
Characterizing the uncertainty in holddown post load measurements
NASA Technical Reports Server (NTRS)
Richardson, J. A.; Townsend, J. S.
1993-01-01
In order to understand unexpectedly erratic load measurements in the launch-pad supports for the space shuttle, the sensitivities of the load cells in the supports were analyzed using simple probabilistic techniques. NASA engineers use the loads in the shuttle's supports to calculate critical stresses in the shuttle vehicle just before lift-off. The support loads are measured with 'load cells' which are actually structural components of the mobile launch platform which have been instrumented with strain gauges. Although these load cells adequately measure vertical loads, the horizontal load measurements have been erratic. The load measurements were simulated in this study using Monte Carlo simulation procedures. The simulation studies showed that the support loads are sensitive to small deviations in strain and calibration. In their current configuration, the load cells will not measure loads with sufficient accuracy to reliably calculate stresses in the shuttle vehicle. A simplified model of the holddown post (HDP) load measurement system was used to study the effect on load measurement accuracy for several factors, including load point deviations, gauge heights, and HDP geometry.
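The sensitivity study the abstract describes can be pictured as a simple Monte Carlo loop. The sketch below perturbs only the strain reading and the calibration factor with Gaussian noise; the study's actual HDP model also includes load-point deviation and gauge-height geometry terms, and all parameter values here are illustrative.

```python
import random

def simulate_load_error(true_load_kN, strain_sigma, calib_sigma,
                        trials=20_000, rng=None):
    """Monte Carlo sketch of load-cell measurement sensitivity.

    Multiplies the true load by noisy strain and calibration factors
    (zero-mean Gaussian relative errors) and returns the mean and
    standard deviation of the recovered load over all trials."""
    rng = rng or random.Random(0)
    samples = []
    for _ in range(trials):
        strain_factor = 1.0 + rng.gauss(0.0, strain_sigma)
        calib_factor = 1.0 + rng.gauss(0.0, calib_sigma)
        samples.append(true_load_kN * strain_factor * calib_factor)
    mean = sum(samples) / trials
    var = sum((s - mean) ** 2 for s in samples) / trials
    return mean, var ** 0.5

# 2% relative noise in each factor on a hypothetical 100 kN load.
mean_load, load_sigma = simulate_load_error(100.0, 0.02, 0.02)
```

The recovered mean stays near the true load, but the spread (~2.8 kN here) shows how small strain and calibration deviations compound, which is the mechanism behind the erratic horizontal readings.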
Lumbar spinal loads and muscle activity during a golf swing.
Lim, Young-Tae; Chow, John W; Chae, Woen-Sik
2012-06-01
This study estimated the lumbar spinal loads at the L4-L5 level and evaluated electromyographic (EMG) activity of right and left rectus abdominis, external and internal obliques, erector spinae, and latissimus dorsi muscles during a golf swing. Four super VHS camcorders and two force plates were used to obtain three-dimensional (3D) kinematics and kinetics of golf swings performed by five male collegiate golfers. Average EMG levels for different phases of golf swing were determined. An EMG-assisted optimization model was applied to compute the contact forces acting on the L4-L5. The results revealed a mean peak compressive load of over six times the body weight (BW) during the downswing and mean peak anterior and medial shear loads approaching 1.6 and 0.6 BW during the follow-through phases. The peak compressive load estimated in this study was high, but less than the corresponding value (over 8 BW) reported by a previous study. Average EMG levels of different muscles were the highest in the acceleration and follow-through phases, suggesting a likely link between co-contractions of paraspinal muscles and lumbar spinal loads.
Probabilistic Analysis of Structural Member from Recycled Aggregate Concrete
NASA Astrophysics Data System (ADS)
Broukalová, I.; Šeps, K.
2017-09-01
The paper addresses sustainable building, specifically the recycling of waste rubble concrete from demolition. Considering the demands of maximizing recycled-aggregate use and minimizing cement consumption, a composite made from recycled concrete aggregate was proposed. The objective of the presented investigations was to verify the feasibility of the recycled-aggregate, cement-based, fibre-reinforced composite in a structural member. The reliability of a wall made from the recycled-aggregate fibre-reinforced composite was assessed in a probabilistic analysis of its load-bearing capacity. The applicability of recycled-aggregate fibre-reinforced concrete in structural applications was demonstrated. The outcomes highlight the issue of the high scatter of material parameters of recycled-aggregate concretes.
NASA Astrophysics Data System (ADS)
Uchiyama, H.; Watanabe, M.; Shaw, D. M.; Bahia, J. E.; Collins, G. J.
1999-10-01
Accurate measurement of plasma source impedance is important for verification of plasma circuit models, as well as for plasma process characterization and endpoint detection. Most impedance measurement techniques depend in some manner on the cosine of the phase angle to determine the impedance of the plasma load. Inductively coupled plasmas are generally highly inductive, with the phase angle between the applied rf voltage and the rf current in the range of 88 to near 90 degrees. A small measurement error in this phase angle range results in a large error in the calculated cosine of the angle, introducing large impedance measurement variations. In this work, we have compared the measured impedance of a planar inductively coupled plasma using three commercial plasma impedance monitors (ENI V/I probe, Advanced Energy RFZ60 and Advanced Energy Z-Scan). The plasma impedance is independently verified using a specially designed match network and a calibrated load, representing the plasma, to provide a measurement standard.
Willoughby, Timothy C.
2000-01-01
The Grand Calumet River, in northwestern Indiana, drains a heavily industrialized area along the southern shore of Lake Michigan. Steel production and petroleum refining are two of the area's predominant industries. High-temperature processes, such as fossil-fuel combustion and steel production, release contaminants to the atmosphere that may result in wet deposition being a major contributor to major-ion and trace-metal loadings in northwestern Indiana and Lake Michigan. A wet-deposition collection site was established at the Gary (Indiana) Regional Airport to monitor the quantity and chemical quality of wet deposition. During a first phase of sampling, 48 wet-deposition samples were collected weekly between June 30, 1992, and August 31, 1993. During a second phase of sampling, 40 wet-deposition samples were collected between October 17, 1995, and November 12, 1996. Forty-two wet-deposition samples were collected during a third phase of sampling, which began April 29, 1997, and was completed April 28, 1998. Wet-deposition samples were analyzed for pH, specific conductance, and selected major ions and trace metals. This report describes the quantity and quality of wet-deposition samples collected during the third sampling phase and compares these findings to the results of the first and second sampling phases. All of the samples collected during the third phase of sampling were of sufficient volumes for at least some of the analyses to be performed. Constituent concentrations from the third sampling phase were not significantly different (at the 5-percent significance level) from those for the second sampling phase. Significant increases, however, were observed in the concentrations of potassium, iron, lead, and zinc when compared to the concentrations observed in the first sampling phase. Weekly loadings were estimated for each constituent measured during the third sampling phase.
If constituent concentrations were reported less than the method reporting limit, a range for the weekly loading was computed. The estimated annual loadings of chloride, silica, bromide, copper, and zinc during the third sampling phase were greater than those estimated for the first two sampling phases. The only estimated annual loading in the third sampling phase that was less than the estimated annual loadings observed during the first two sampling phases was sulfate. The estimated annual loadings of calcium, magnesium, nitrate, potassium, barium, lead, iron, and manganese observed during the third sampling phase were greater than the loadings observed during the first sampling phase but less than those observed during the second sampling phase. No significant differences were observed between the quantity of wet deposition collected during the three sampling phases.
NASA Technical Reports Server (NTRS)
Onwubiko, Chin-Yere; Onyebueke, Landon
1996-01-01
Structural failure is rarely a "sudden death" type of event; such sudden failures may occur only under abnormal loadings like bomb or gas explosions and very strong earthquakes. In most cases, structures fail due to damage accumulated under normal loadings such as wind loads and dead and live loads. The consequence of cumulative damage affects the reliability of surviving components and finally causes collapse of the system. The cumulative damage effects on system reliability under time-invariant loadings are of practical interest in structural design and are therefore investigated in this study. The scope of this study is, however, restricted to the consideration of damage accumulation as the increase in the number of failed components due to the violation of their strength limits.
Time series prediction in the case of nonlinear loads by using ADALINE and NAR neural networks
NASA Astrophysics Data System (ADS)
Ghiormez, L.; Panoiu, M.; Panoiu, C.; Tirian, O.
2018-01-01
This paper presents a study of time series prediction for an electric arc furnace. The considered furnace is a three-phase load used to melt scrap in order to obtain liquid steel. The furnace is powered by a three-phase electrical supply and therefore has three graphite electrodes. The furnace is a nonlinear load that can influence other equipment connected to the same electrical power supply network. The nonlinearity is caused by the electric arc that forms between each graphite electrode and the scrap. Because of the disturbances caused by the electric arc furnace during the steel elaboration process, it is very useful to predict the electric arc current and the voltage at the measuring point on the secondary side of the furnace transformer. ADALINE and NAR neural networks were used to make the predictions, and data acquired from the real technological plant were used to train the networks.
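An ADALINE predictor of the kind referenced above is an adaptive linear combiner trained with the LMS rule. A minimal one-step-ahead sketch follows; the synthetic sinusoid standing in for the measured arc current, and all constants, are illustrative assumptions, not the paper's setup:

```python
import math

def adaline_predict(signal, n_taps=4, lr=0.05, epochs=20):
    """Train an ADALINE (adaptive linear neuron) with the LMS rule to
    predict signal[t] from the n_taps previous samples."""
    w = [0.0] * n_taps           # tap weights
    b = 0.0                      # bias
    for _ in range(epochs):
        for t in range(n_taps, len(signal)):
            x = signal[t - n_taps:t]                      # input window
            y = sum(wi * xi for wi, xi in zip(w, x)) + b  # linear output
            err = signal[t] - y                           # LMS error
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict_next(signal, w, b):
    x = signal[-len(w):]
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Synthetic stand-in for an arc-current waveform (not real furnace data).
data = [math.sin(0.2 * k) for k in range(200)]
w, b = adaline_predict(data[:-1])
pred = predict_next(data[:-1], w, b)   # one-step-ahead prediction of data[-1]
```

Because a sinusoid is exactly linearly predictable from its past samples, the LMS weights converge and the prediction error becomes small after a few epochs.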
Reliability and Creep/Fatigue Analysis of a CMC Component
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Mital, Subodh K.; Gyekenyesi, John Z.; Gyekenyesi, John P.
2007-01-01
High temperature ceramic matrix composites (CMC) are being explored as viable candidate materials for hot section gas turbine components. These advanced composites can potentially lead to reduced weight and enable higher operating temperatures requiring less cooling, thus leading to increased engine efficiencies. There is a need for convenient design tools that can accommodate various loading conditions and material data with their associated uncertainties to estimate the minimum predicted life as well as the failure probabilities of a structural component. This paper presents a review of the life prediction and probabilistic analyses performed for a CMC turbine stator vane. A computer code, NASALife, is used to predict the life of a 2-D woven silicon carbide fiber reinforced silicon carbide matrix (SiC/SiC) turbine stator vane due to a mission cycle that induces low cycle fatigue and creep. The output from this program includes damage from creep loading, damage due to cyclic loading, and the combined damage due to the given loading cycle. Results indicate that the trends predicted by NASALife are as expected for the loading conditions used for this study. In addition, a combination of woven composite micromechanics, finite element structural analysis and Fast Probability Integration (FPI) techniques has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Input variables causing scatter are identified and ranked based upon their sensitivity magnitude. Results indicate that reducing the scatter in proportional limit strength of the vane material has the greatest effect in improving the overall reliability of the CMC vane.
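Combining cyclic and creep damage, as described above, is commonly done with linear damage accumulation: Miner's rule for fatigue plus a time-fraction rule for creep. The bookkeeping can be sketched as below; the cycle counts, rupture lives, and the simple additive combination are illustrative assumptions, not NASALife's actual models:

```python
def fatigue_damage(cycles):
    """Miner's rule: sum n_i / N_i over mission segments.
    cycles: list of (applied_cycles, cycles_to_failure)."""
    return sum(n / N for n, N in cycles)

def creep_damage(holds):
    """Time-fraction rule: sum t_i / t_rupture_i over hold periods."""
    return sum(t / tr for t, tr in holds)

# Hypothetical mission segments (illustrative numbers only).
fatigue_segments = [(500, 1.0e5), (50, 2.0e4)]   # (cycles applied, cycles to failure)
creep_holds = [(20.0, 1.0e4), (5.0, 2.0e3)]      # (hold hours, rupture hours)

Df = fatigue_damage(fatigue_segments)
Dc = creep_damage(creep_holds)
D_total = Df + Dc                  # linear summation of the two damage modes
missions_to_failure = 1.0 / D_total
```

With these invented numbers one mission accumulates a damage fraction of 0.012, so failure (D = 1) is predicted after roughly 83 missions.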
Probabilistic Weather Information Tailored to the Needs of Transmission System Operators
NASA Astrophysics Data System (ADS)
Alberts, I.; Stauch, V.; Lee, D.; Hagedorn, R.
2014-12-01
Reliable and accurate forecasts for wind and photovoltaic (PV) power production are essential for stable transmission systems. A high potential for improving the wind and PV power forecasts lies in optimizing the weather forecasts, since these energy sources are highly weather dependent. For this reason the main objective of the German research project EWeLiNE is to improve the quality of the underlying numerical weather predictions for energy operations. In this project, the German Meteorological Service (DWD), the Fraunhofer Institute for Wind Energy and Energy System Technology, and three of the German transmission system operators (TSOs) are working together to improve the weather and power forecasts. Probabilistic predictions are of particular interest, as the quantification of uncertainties provides an important tool for risk management. Theoretical considerations suggest that it can be advantageous to use probabilistic information to represent and respond to the remaining uncertainties in the forecasts. However, it remains a challenge to integrate this information into the decision making processes related to market participation and power systems operations. The project is planned and carried out in close cooperation with the involved TSOs in order to ensure the usability of the products developed. It will conclude with a demonstration phase, in which the improved models and newly developed products are combined into a process chain and used to provide information to TSOs in a real-time decision support tool. The use of a web-based development platform enables short development cycles and agile adaptation to evolving user needs. This contribution will present the EWeLiNE project and discuss ideas on how to incorporate probabilistic information into the users' current decision making processes.
O'Sullivan, G.A.; O'Sullivan, J.A.
1999-07-27
In one embodiment, a power processor which operates in three modes: an inverter mode wherein power is delivered from a battery to an AC power grid or load; a battery charger mode wherein the battery is charged by a generator; and a parallel mode wherein the generator supplies power to the AC power grid or load in parallel with the battery. In the parallel mode, the system adapts to arbitrary non-linear loads. The power processor may operate on a per-phase basis wherein the load may be synthetically transferred from one phase to another by way of a bumpless transfer which causes no interruption of power to the load when transferring energy sources. Voltage transients and frequency transients delivered to the load when switching between the generator and battery sources are minimized, thereby providing an uninterruptible power supply. The power processor may be used as part of a hybrid electrical power source system which may contain, in one embodiment, a photovoltaic array, diesel engine, and battery power sources. 31 figs.
Numerical investigation and experimental development on VM-PT cryocooler operating below 4 K
NASA Astrophysics Data System (ADS)
Zhang, Tong; Pan, Changzhao; Zhou, Yuan; Wang, Junjie
2016-12-01
The Vuilleumier-coupled pulse tube (VM-PT) cryocooler is a novel kind of cryocooler capable of attaining liquid-helium temperature, as has been experimentally verified. Depending on the coupling mode and phase shifter, a VM-PT cryocooler can be designed in several configurations. This paper presents a numerical investigation of three typical types of VM-PT cryocoolers: the gas-coupling mode with a room-temperature phase shifter (GCRP), the gas-coupling mode with a cold phase shifter (GCCP), and the thermal-coupling mode with a cold phase shifter (TCCP). First, the three configurations are optimized with respect to operating parameters to attain a lower no-load temperature. Then, based on the simulation results, the distributions of acoustic power, enthalpy flow, pressure wave, and volume flow rate are presented and discussed to better understand the energy flow characteristics and coupling mechanism. Analyses of the phase relationship and exergy loss are also performed. Furthermore, a GCCP experimental system, which had the best comprehensive performance among the three configurations, was built and tested. The experimental results showed good consistency with the simulations. Finally, a no-load temperature of 3.39 K and a cooling power of 9.75 mW at 4.2 K were obtained with a pressure ratio of 1.7, an operating frequency of 1.22 Hz, and a mean pressure of 1.5 MPa.
Design Analysis of a Prepackaged Nuclear Power Plant for an Ice Cap Location
1959-01-15
…requirements and heating load; 1.3 Site Conditions; 1.4 Air Transportability; 1.5 Standby Power Availability; 1.6 Building Structures and Foundations; 2.0 … Skid with Reactor and Steam Generator; Generator Weight Distribution; Foundation Load Diagram (Secondary); Turbine Generator Package - Typical … Requirements and Heating Load: The plant shall be capable of producing a minimum of 1500 kW net electrical energy at 4160/2400 volts, three phase
Evaluating bacterial gene-finding HMM structures as probabilistic logic programs.
Mørk, Søren; Holmes, Ian
2012-03-01
Probabilistic logic programming offers a powerful way to describe and evaluate structured statistical models. To investigate the practicality of probabilistic logic programming for structure learning in bioinformatics, we undertook a simplified bacterial gene-finding benchmark in PRISM, a probabilistic dialect of Prolog. We evaluate Hidden Markov Model structures for bacterial protein-coding gene potential, including a simple null model structure, three structures based on existing bacterial gene finders and two novel model structures. We test standard versions as well as ADPH length modeling and three-state versions of the five model structures. The models are all represented as probabilistic logic programs and evaluated using the PRISM machine learning system in terms of statistical information criteria and gene-finding prediction accuracy, in two bacterial genomes. Neither of our implementations of the two currently most used model structures is best performing in terms of statistical information criteria or prediction performance, suggesting that better-fitting models might be achievable. The source code of all PRISM models, data and additional scripts are freely available for download at: http://github.com/somork/codonhmm. Supplementary data are available at Bioinformatics online.
A probabilistic framework for single-station location of seismicity on Earth and Mars
NASA Astrophysics Data System (ADS)
Böse, M.; Clinton, J. F.; Ceylan, S.; Euchner, F.; van Driel, M.; Khan, A.; Giardini, D.; Lognonné, P.; Banerdt, W. B.
2017-01-01
Locating the source of seismic energy from a single three-component seismic station is associated with large uncertainties, originating from challenges in identifying seismic phases, as well as inevitable pick and model uncertainties. The challenge is even higher for planets such as Mars, where interior structure is a priori largely unknown. In this study, we address the single-station location problem by developing a probabilistic framework that combines location estimates from multiple algorithms to estimate the probability density function (PDF) for epicentral distance, back azimuth, and origin time. Each algorithm uses independent and complementary information in the seismic signals. Together, the algorithms allow locating seismicity ranging from local to teleseismic quakes. Distances and origin times of large regional and teleseismic events (M > 5.5) are estimated from observed and theoretical body- and multi-orbit surface-wave travel times. The latter are picked from the maxima in the waveform envelopes in various frequency bands. For smaller events at local and regional distances, only first arrival picks of body waves are used, possibly in combination with fundamental Rayleigh R1 waveform maxima where detectable; depth phases, such as pP or PmP, help constrain source depth and improve distance estimates. Back azimuth is determined from the polarization of the Rayleigh- and/or P-wave phases. When seismic signals are good enough for multiple approaches to be used, estimates from the various methods are combined through the product of their PDFs, resulting in an improved event location and reduced uncertainty range estimate compared to the results obtained from each algorithm independently. To verify our approach, we use both earthquake recordings from existing Earth stations and synthetic Martian seismograms. 
The Mars synthetics are generated with a full-waveform scheme (AxiSEM) using spherically-symmetric seismic velocity, density and attenuation models of Mars that incorporate existing knowledge of Mars internal structure, and include expected ambient and instrumental noise. While our probabilistic framework is developed mainly for application to Mars in the context of the upcoming InSight mission, it is also relevant for locating seismic events on Earth in regions with sparse instrumentation.
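The combination step described in the record above, multiplying the per-algorithm probability density functions for epicentral distance, can be sketched on a discretized distance grid. The Gaussian shapes and numbers below are invented for illustration; they are not the mission algorithms:

```python
import math

def gauss(x, mu, sigma):
    """Normal density, used here as a stand-in for a per-algorithm PDF."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def combine_pdfs(grid, pdfs):
    """Pointwise product of independent PDF estimates, renormalized on the grid."""
    combined = [1.0] * len(grid)
    for pdf in pdfs:
        combined = [c * p for c, p in zip(combined, pdf)]
    step = grid[1] - grid[0]
    total = sum(combined) * step
    return [c / total for c in combined]

# Distance grid in degrees and two hypothetical single-algorithm estimates:
grid = [0.1 * i for i in range(1801)]               # 0 to 180 degrees
body_wave = [gauss(x, 60.0, 10.0) for x in grid]    # broad body-wave estimate
surface_wave = [gauss(x, 55.0, 5.0) for x in grid]  # tighter surface-wave estimate

post = combine_pdfs(grid, [body_wave, surface_wave])
best = grid[max(range(len(grid)), key=lambda i: post[i])]   # MAP distance
```

For two Gaussians the product peaks at the precision-weighted mean (here 56 degrees), with a variance smaller than either input, which is exactly the "reduced uncertainty range" behavior the abstract describes.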
Parallel computing for probabilistic fatigue analysis
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Lua, Yuan J.; Smith, Mark D.
1993-01-01
This paper presents the results of Phase I research to investigate the most effective parallel processing software strategies and hardware configurations for probabilistic structural analysis. We investigate the efficiency of both shared and distributed-memory architectures via a probabilistic fatigue life analysis problem. We also present a parallel programming approach, the virtual shared-memory paradigm, that is applicable across both types of hardware. Using this approach, problems can be solved on a variety of parallel configurations, including networks of single or multiprocessor workstations. We conclude that it is possible to effectively parallelize probabilistic fatigue analysis codes; however, special strategies will be needed to achieve large-scale parallelism, to keep a large number of processors busy, and to treat problems with the large memory requirements encountered in practice. We also conclude that a distributed-memory architecture is preferable to a shared-memory architecture for achieving large-scale parallelism; however, in the future, the currently emerging hybrid-memory architectures will likely be optimal.
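Probabilistic fatigue analysis parallelizes naturally because Monte Carlo samples are independent: each worker draws its own batch and the results are reduced at the end. The map-reduce structure can be sketched as below; the toy stress-life model, lognormal parameters, batch sizes, and the thread pool are all illustrative assumptions:

```python
import concurrent.futures
import random

def fatigue_life_samples(seed, n):
    """One worker's batch: draw n random (stress, coefficient) pairs and
    count lives that fall below a design life (a toy Basquin-type model)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        stress = rng.lognormvariate(0.0, 0.1) * 300.0    # MPa, illustrative
        coeff = rng.lognormvariate(0.0, 0.2) * 1.0e12    # invented coefficient
        life = coeff * stress ** -3.0                    # N = C * S**-b
        if life < 3.0e4:                                 # design life in cycles
            failures += 1
    return failures

batches = [(seed, 5000) for seed in range(8)]            # 8 independent batches
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    counts = list(pool.map(lambda args: fatigue_life_samples(*args), batches))

total = sum(n for _, n in batches)
p_fail = sum(counts) / total       # reduced estimate of failure probability
```

Because each batch carries its own seed, the result is reproducible regardless of how the batches are scheduled across workers, which is the property that makes the virtual shared-memory style of decomposition safe.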
Economic Dispatch for Microgrid Containing Electric Vehicles via Probabilistic Modeling: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Yin; Gao, Wenzhong; Momoh, James
In this paper, an economic dispatch model with probabilistic modeling is developed for a microgrid. The electric power supply in a microgrid consists of conventional power plants and renewable energy power plants, such as wind and solar power plants. Because of the fluctuation in the output of solar and wind power plants, an empirical probabilistic model is developed to predict their hourly output. According to different characteristics of wind and solar power plants, the parameters for probabilistic distribution are further adjusted individually for both. On the other hand, with the growing trend in plug-in electric vehicles (PHEVs), an integrated microgrid system must also consider the impact of PHEVs. The charging loads from PHEVs as well as the discharging output via the vehicle-to-grid (V2G) method can greatly affect the economic dispatch for all of the micro energy sources in a microgrid. This paper presents an optimization method for economic dispatch in a microgrid considering conventional power plants, renewable power plants, and PHEVs. The simulation results reveal that PHEVs with V2G capability can be an indispensable supplement in a modern microgrid.
A Probabilistic Design Method Applied to Smart Composite Structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1995-01-01
A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
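The sensitivity-factor ranking described above can be imitated with a crude Monte Carlo surrogate: sample the random design variables, compute a response, and rank the variables by the magnitude of their correlation with it. The variable names, distributions, and the linear response below are invented for illustration; they are not the paper's smart-wing model:

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(1)
n = 2000
# Hypothetical random design variables as (mean, standard deviation):
variables = {
    "fiber_modulus": (220.0, 22.0),   # GPa, deliberately large scatter
    "ply_thickness": (0.125, 0.002),  # mm
    "actuator_gain": (1.0, 0.01),
}
samples = {k: [rng.gauss(m, s) for _ in range(n)] for k, (m, s) in variables.items()}

# Invented linear response: tip deflection rises with modulus and thickness
# and falls with actuator gain (a negative sensitivity factor).
response = [0.01 * e + 40.0 * t - 5.0 * g
            for e, t, g in zip(samples["fiber_modulus"],
                               samples["ply_thickness"],
                               samples["actuator_gain"])]

sens = {k: pearson(v, response) for k, v in samples.items()}
ranked = sorted(sens, key=lambda k: abs(sens[k]), reverse=True)
```

The variable with the largest |sensitivity| dominates the scatter of the response, so tightening its distribution yields the biggest reliability gain; a negative factor, like the invented actuator gain here, is one whose mean can be raised to reduce the failure probability.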
Measuring cognitive load during procedural skills training with colonoscopy as an exemplar.
Sewell, Justin L; Boscardin, Christy K; Young, John Q; Ten Cate, Olle; O'Sullivan, Patricia S
2016-06-01
Few studies have investigated cognitive factors affecting learning of procedural skills in medical education. Cognitive load theory, which focuses on working memory, is highly relevant, but methods for measuring cognitive load during procedural training are not well understood. Using colonoscopy as an exemplar, we used cognitive load theory to develop a self-report instrument to measure three types of cognitive load (intrinsic, extraneous and germane load) and to provide evidence for instrument validity. We developed the instrument (the Cognitive Load Inventory for Colonoscopy [CLIC]) using a multi-step process. It included 19 items measuring three types of cognitive load, three global rating items and demographics. We then conducted a cross-sectional survey that was administered electronically to 1061 gastroenterology trainees in the USA. Participants completed the CLIC following a colonoscopy. The two study phases (exploratory and confirmatory) each lasted for 10 weeks during the 2014-2015 academic year. Exploratory factor analysis determined the most parsimonious factor structure; confirmatory factor analysis assessed model fit. Composite measures of intrinsic, extraneous and germane load were compared across years of training and with global rating items. A total of 477 (45.0%) invitees participated (116 in the exploratory study and 361 in the confirmatory study) in 154 (95.1%) training programmes. Demographics were similar to national data from the USA. The most parsimonious factor structure included three factors reflecting the three types of cognitive load. Confirmatory factor analysis verified that a three-factor model was the best fit. Intrinsic, extraneous and germane load items had high internal consistency (Cronbach's alpha 0.90, 0.87 and 0.96, respectively) and correlated as expected with year in training and global assessment of cognitive load. The CLIC measures three types of cognitive load during colonoscopy training. 
Evidence of validity is provided. Although CLIC items relate to colonoscopy, the development process we detail can be used to adapt the instrument for use in other learning settings in medical education. © 2016 John Wiley & Sons Ltd.
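The internal-consistency statistic reported above, Cronbach's alpha, is straightforward to compute from item scores: alpha = k/(k-1) * (1 - sum of item variances / variance of totals). A minimal sketch with made-up 1-to-5 ratings (not CLIC data) follows:

```python
def cronbach_alpha(items):
    """items: one inner list per questionnaire item, one entry per respondent.
    Returns Cronbach's alpha using population variances."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1.0 - item_vars / var(totals))

# Three hypothetical items answered by six respondents on a 1-5 scale:
items = [
    [4, 5, 3, 2, 4, 1],
    [4, 4, 3, 2, 5, 1],
    [5, 5, 2, 2, 4, 2],
]
alpha = cronbach_alpha(items)
```

Because the three invented items move together across respondents, alpha comes out high (about 0.95), the same regime as the 0.87 to 0.96 values the study reports.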
A Tool Chain for the V and V of NASA Cryogenic Fuel Loading Health Management
2014-10-02
…Here, K. (2011). Formal testing for separation assurance. Ann. Math. Artif. Intell., 63(1), 5–30. Goodrich, C., Narasimhan, S., Daigle, M. … Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann. … Reed, E., Schumann, J., & Mengshoel, O. (2011)…
Probabilistic Metrology Attains Macroscopic Cloning of Quantum Clocks
NASA Astrophysics Data System (ADS)
Gendra, B.; Calsamiglia, J.; Muñoz-Tapia, R.; Bagan, E.; Chiribella, G.
2014-12-01
It has recently been shown that probabilistic protocols based on postselection boost the performance of the replication of quantum clocks and of phase estimation. Here we demonstrate that the improvements in these two tasks have to match exactly in the macroscopic limit where the number of clones grows to infinity, preserving the equivalence between asymptotic cloning and state estimation for arbitrary values of the success probability. Remarkably, the cloning fidelity depends critically on the number of rationally independent eigenvalues of the clock Hamiltonian. We also prove that probabilistic metrology can simulate cloning in the macroscopic limit for arbitrary sets of states when the performance of the simulation is measured by testing small groups of clones.
Load Designs For MJ Dense Plasma Foci
NASA Astrophysics Data System (ADS)
Link, A.; Povlius, A.; Anaya, R.; Anderson, M. G.; Angus, J. R.; Cooper, C. M.; Falabella, S.; Goerz, D.; Higginson, D.; Holod, I.; McMahon, M.; Mitrani, J.; Koh, E. S.; Pearson, A.; Podpaly, Y. A.; Prasad, R.; van Lue, D.; Watson, J.; Schmidt, A. E.
2017-10-01
Dense plasma focus (DPF) Z-pinches are compact pulse-power-driven devices with coaxial electrodes. The discharge of a DPF consists of three distinct phases: first, generation of a plasma sheath; then a plasma rail-gun phase, in which the sheath is accelerated down the electrodes; and finally an implosion phase, in which the plasma stagnates into a z-pinch geometry. During the z-pinch phase, DPFs can produce MeV ion beams, x-rays, and neutrons. Megaampere-class DPFs with deuterium fills have demonstrated neutron yields in the 10^12 neutrons/shot range with pulse durations of 10-100 ns. Kinetic simulations using the code Chicago are being used to evaluate various load configurations, from initial sheath formation to the final z-pinch phase, for DPFs with up to 5 MA and 1 MJ coupled to the load. Results will be presented from the preliminary design simulations. LLNL-ABS-734785 This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory (LLNL) under Contract DE-AC52-07NA27344 and with support from the Computing Grand Challenge program at LLNL.
NASA Astrophysics Data System (ADS)
Gordeev, Evgeny; Sergeev, Victor; Tsyganenko, Nikolay; Kuznetsova, Maria; Rastaetter, Lutz; Raeder, Joachim; Toth, Gabor; Lyon, John; Merkin, Vyacheslav; Wiltberger, Michael
2017-04-01
In this study we investigate how well three community-available global MHD models, supported by the Community Coordinated Modeling Center (CCMC NASA), reproduce the global magnetospheric dynamics, including the loading-unloading substorm cycle. We found that in terms of global magnetic flux transport the CCMC models display systematically different responses to an idealized 2-hour north then 2-hour south IMF Bz variation. The LFM model shows a depressed return convection in the tail plasma sheet and a high rate of magnetic flux loading into the lobes during the growth phase, as well as enhanced return convection and a high unloading rate during the expansion phase, with the amount of loaded/unloaded magnetotail flux and the growth phase duration being the closest to their observed empirical values during isolated substorms. The BATS-R-US and OpenGGCM models exhibit drastically different behavior. In the BATS-R-US model the plasma sheet convection shows a smooth transition to the steady convection regime after the IMF southward turning. In the OpenGGCM a weak plasma sheet convection has comparable intensities during both the growth phase and the following slow unloading phase. Our study shows that different CCMC models under the same solar wind conditions (north to south IMF variation) produce essentially different solutions in terms of global magnetospheric convection.
Shock Equation of State of Multi-Phase Epoxy-Based Composite (Al-MnO2-Epoxy)
2010-10-01
…single stage light gas gun, two…using three different loading techniques (single stage light gas gun, two stage light gas gun, and explosive loading) with multiple diagnostic…wave speed. B. Single stage gas gun loading experiments. Four gas gun-driven equation of state experiments were conducted at NSWC-Indian Head using…
Simulation and Analysis of Three-Phase Rectifiers for Aerospace Power Applications
NASA Technical Reports Server (NTRS)
Truong, Long V.; Birchenough, Arthur G.
2004-01-01
Due to the nature of planned planetary missions, fairly large advanced power systems are required for the spacecraft. These future high-power spacecraft are expected to use dynamic power conversion systems incorporating high-speed alternators as the three-phase AC electrical power source. One of the early design considerations in such systems is the type of rectification to be used between the AC source and the DC user loads. This paper addresses the issues involved with two different rectification methods, namely conventional six-pulse and twelve-pulse rectification. Two circuit configurations, involving parallel combinations of six- and twelve-pulse rectifiers, were selected for the simulation. The rectifiers' input and output power waveforms are thoroughly examined through simulations. The effects of the parasitic load for power balancing and of filter components for reducing the ripple voltage at the DC loads are also included in the analysis. Details of the simulation circuits, simulation results, and design examples for reducing the risk of damage to spacecraft engines are presented and discussed.
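The qualitative advantage of twelve-pulse over six-pulse rectification can be checked with a textbook idealization: an ideal p-pulse rectifier's output traces cos(theta) over plus or minus pi/p of each pulse, so the unfiltered peak-to-peak ripple is 1 - cos(pi/p) of the peak voltage. A quick sketch (ideal diodes, no commutation overlap or filtering, which are assumptions, not the paper's simulated circuits):

```python
import math

def ripple_pp(pulses):
    """Peak-to-peak output ripple of an ideal p-pulse rectifier as a
    fraction of peak voltage: output follows cos(theta) over
    theta in [-pi/p, pi/p], so ripple = 1 - cos(pi/p)."""
    return 1.0 - math.cos(math.pi / pulses)

def ripple_pct(pulses):
    return 100.0 * ripple_pp(pulses)

six = ripple_pct(6)      # about 13.4 % of peak
twelve = ripple_pct(12)  # about 3.4 % of peak
```

The roughly fourfold ripple reduction (and the doubling of the lowest ripple frequency) is why twelve-pulse schemes need much smaller output filters, one of the trade-offs the paper examines.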
Phase estimation of coherent states with a noiseless linear amplifier
NASA Astrophysics Data System (ADS)
Assad, Syed M.; Bradshaw, Mark; Lam, Ping Koy
Amplification of quantum states is inevitably accompanied by the introduction of noise at the output. For protocols that are probabilistic with heralded success, noiseless linear amplification may still be possible in theory. When the protocol is successful, it can lead to an output that is a noiselessly amplified copy of the input. When the protocol is unsuccessful, the output state is degraded and is usually discarded. Probabilistic protocols may improve the performance of some quantum information protocols, but not for metrology if the whole statistics is taken into consideration. We calculate the precision limits on estimating the phase of coherent states using a noiseless linear amplifier by computing its quantum Fisher information, and we show that on average the noiseless linear amplifier does not improve the phase estimate. We also discuss the case where abstention from measurement can reduce the cost of estimation.
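For context (a standard result, not derived in the abstract): for a phase imprinted as e^{i n theta}, the quantum Fisher information of a pure probe state is four times its photon-number variance, which for a coherent state |alpha> gives

```latex
F_Q(\theta) = 4\,\mathrm{Var}(\hat n) = 4|\alpha|^2 = 4\bar n,
\qquad
\Delta\theta \;\ge\; \frac{1}{\sqrt{F_Q}} \;=\; \frac{1}{2\sqrt{\bar n}}.
```

This is the shot-noise bound the paper's averaged analysis compares against; the conclusion above is that heralded noiseless amplification cannot beat it once the failure branch is included in the statistics.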
Modeling the residual effects and threshold saturation of training: a case study of Olympic swimmers
Hellard, Philippe; Avalos, Marta; Millet, Grégoire; Lacoste, Lucien; Barale, Frédéric; Chatard, Jean-Claude
2005-01-01
The aim of this study was to model the residual effects of training on the swimming performance and to compare a model including threshold saturation (MM) to the Banister model (BM). Seven Olympic swimmers were studied over a period of 4 ± 2 years. For three training loads (low-intensity wLIT, high-intensity wHIT and strength training wST), three residual training effects were determined: short-term (STE) during the taper phase, i.e. three weeks before the performance (weeks 0, −1, −2), intermediate-term (ITE) during the intensity phase (weeks −3, −4 and −5) and long-term (LTE) during the volume phase (weeks −6, −7, −8). ITE and LTE were positive for wHIT and wLIT, respectively (P < 0.05). wLIT during taper was related to performances by a parabolic relationship (P < 0.05). Different quality measures indicated that MM compares favorably with BM. Identifying individual training thresholds may help individualizing the distribution of training loads. PMID:15705048
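The Banister model (BM) referenced above is the classical impulse-response model: performance equals a baseline plus a slowly decaying "fitness" term minus a faster-decaying "fatigue" term, each an exponentially weighted sum of past training loads. A compact sketch follows; the gain and time constants (k1, k2, tau1, tau2) and the load schedule are invented, not the swimmers' fitted values:

```python
import math

def banister(loads, k1=1.0, k2=2.0, tau1=45.0, tau2=15.0, p0=100.0):
    """Banister impulse-response model:
    p(t) = p0 + k1 * sum_s w(s) e^{-(t-s)/tau1}  (fitness, slow decay)
              - k2 * sum_s w(s) e^{-(t-s)/tau2}  (fatigue, fast decay)."""
    perf = []
    for t in range(len(loads)):
        fitness = sum(loads[s] * math.exp(-(t - s) / tau1) for s in range(t))
        fatigue = sum(loads[s] * math.exp(-(t - s) / tau2) for s in range(t))
        perf.append(p0 + k1 * fitness - k2 * fatigue)
    return perf

# 8 weeks of constant daily load followed by a 3-week taper:
loads = [10.0] * 56 + [3.0] * 21
p = banister(loads)
```

Because fatigue decays faster than fitness, reducing the load during the final weeks lets fatigue drain while fitness is largely retained, so modeled performance peaks at the end of the taper. This is the mechanism behind the short-term, intermediate-term, and long-term residual effects the study quantifies.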
Operating health analysis of electric power systems
NASA Astrophysics Data System (ADS)
Fotuhi-Firuzabad, Mahmud
The required level of operating reserve to be maintained by an electric power system can be determined using both deterministic and probabilistic techniques. Despite the obvious disadvantages of deterministic approaches, there is still considerable reluctance to apply probabilistic techniques due to the difficulty of interpreting a single numerical risk index and the lack of sufficient information provided by a single index. A practical way to overcome these difficulties is to embed deterministic considerations in the probabilistic indices in order to monitor the system well-being. The system well-being can be designated as healthy, marginal, or at risk. The concept of system well-being is examined and extended in this thesis to cover the overall area of operating reserve assessment. Operating reserve evaluation involves the two distinctly different aspects of unit commitment and the dispatch of the committed units. Unit commitment health analysis involves the determination of which units should be committed to satisfy the operating criteria. The concepts developed for unit commitment health, margin and risk are extended in this thesis to evaluate the response well-being of a generating system. A procedure is presented to determine the optimum dispatch of the committed units to satisfy the response criteria. The impacts on the response well-being of variations in the margin time, required regulating margin, and load forecast uncertainty are illustrated. The effects on the response well-being of rapid start units, interruptible loads and postponable outages are also illustrated. System well-being is, in general, greatly improved by interconnection with other power systems. The well-being concepts are extended to evaluate the spinning reserve requirements in interconnected systems. The interconnected system unit commitment problem is decomposed into two subproblems in which unit scheduling is performed in each isolated system followed by interconnected system evaluation.
A procedure is illustrated to determine the well-being indices of the overall interconnected system. Under normal operating conditions, the system may also be able to carry a limited amount of interruptible load on top of its firm load without violating the operating criterion. An energy based approach is presented to determine the optimum interruptible load carrying capability in both the isolated and interconnected systems. Composite system spinning reserve assessment and composite system well-being are also examined in this research work. The impacts on the composite well-being of operating reserve considerations such as stand-by units, interruptible loads and the physical locations of these resources are illustrated. It is expected that the well-being framework and the concepts developed in this research work will prove extremely useful in the new competitive utility environment.
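The healthy/marginal/at-risk designation used above embeds a deterministic criterion inside a probabilistic calculation, and can be illustrated with a tiny capacity model: enumerate unit-outage states, call a state healthy if the remaining capacity covers the load plus the largest online unit (an N-1 style criterion), marginal if it only covers the load, and at risk otherwise. The unit sizes and forced outage rates below are invented:

```python
from itertools import product

# Hypothetical committed units: (capacity in MW, forced outage rate)
units = [(200.0, 0.05), (150.0, 0.04), (100.0, 0.03)]
load = 250.0

p_healthy = p_marginal = p_risk = 0.0
for state in product((True, False), repeat=len(units)):   # True = unit available
    prob = 1.0
    cap = 0.0
    online = []
    for (c, fo), up in zip(units, state):
        prob *= (1.0 - fo) if up else fo
        if up:
            cap += c
            online.append(c)
    largest = max(online) if online else 0.0
    if cap >= load + largest:        # survives loss of the largest online unit
        p_healthy += prob
    elif cap >= load:                # meets load but fails the reserve criterion
        p_marginal += prob
    else:                            # load cannot be met
        p_risk += prob
```

The three probabilities partition the state space, so they sum to one; reporting all three conveys far more than a single risk index, which is the thesis's central argument.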
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacvarov, D.C.
1981-01-01
A new method for probabilistic risk assessment of transmission line insulation flashovers caused by lightning strokes is presented. The approach of applying the finite element method to probabilistic risk assessment is demonstrated to be very powerful, for two reasons. First, the finite element method is inherently suitable for analysis of three-dimensional spaces where the parameters, such as trivariate probability densities of the lightning currents, are non-uniformly distributed. Second, the finite element method permits non-uniform discretization of the three-dimensional probability spaces, thus yielding high accuracy in critical regions, such as the area of the low probability events, while at the same time maintaining coarse discretization in the non-critical areas to keep the number of grid points and the size of the problem to a manageable low level. The finite element probabilistic risk assessment method presented here is based on a new multidimensional search algorithm. It utilizes an efficient iterative technique for finite element interpolation of the transmission line insulation flashover criteria computed with an electromagnetic transients program. Compared to other available methods, the new finite element probabilistic risk assessment method is significantly more accurate and approximately two orders of magnitude more computationally efficient. The method is especially suited for accurate assessment of rare, very low probability events.
Dynamic shaping of dopamine signals during probabilistic Pavlovian conditioning.
Hart, Andrew S; Clark, Jeremy J; Phillips, Paul E M
2015-01-01
Cue- and reward-evoked phasic dopamine activity during Pavlovian and operant conditioning paradigms is well correlated with reward-prediction errors from formal reinforcement learning models, which feature teaching signals in the form of discrepancies between actual and expected reward outcomes. Additionally, in learning tasks where conditioned cues probabilistically predict rewards, dopamine neurons show sustained cue-evoked responses that are correlated with the variance of reward and are maximal to cues predicting rewards with a probability of 0.5. Therefore, it has been suggested that sustained dopamine activity after cue presentation encodes the uncertainty of impending reward delivery. In the current study we examined the acquisition and maintenance of these neural correlates using fast-scan cyclic voltammetry in rats implanted with carbon fiber electrodes in the nucleus accumbens core during probabilistic Pavlovian conditioning. The advantage of this technique is that we can sample from the same animal and recording location throughout learning with single trial resolution. We report that dopamine release in the nucleus accumbens core contains correlates of both expected value and variance. A quantitative analysis of these signals throughout learning, and during the ongoing updating process after learning in probabilistic conditions, demonstrates that these correlates are dynamically encoded during these phases. Peak CS-evoked responses are correlated with expected value and predominate during early learning while a variance-correlated sustained CS signal develops during the post-asymptotic updating phase. Copyright © 2014 Elsevier Inc. All rights reserved.
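The uncertainty correlate described here follows from simple Bernoulli statistics: for a cue predicting a reward of magnitude r with probability p, the expected value is p·r and the variance is p(1 − p)r², which peaks at p = 0.5. A minimal sketch (illustrative only, not the paper's voltammetry model):

```python
import numpy as np

def reward_statistics(p, r=1.0):
    """Expected value and variance of a Bernoulli reward of magnitude r
    delivered with probability p (illustrative, not the recording model)."""
    expected_value = p * r
    variance = p * (1.0 - p) * r**2
    return expected_value, variance

# The variance correlate is maximal for the cue predicting reward at p = 0.5
ps = np.linspace(0.0, 1.0, 101)
variances = np.array([reward_statistics(p)[1] for p in ps])
print(ps[np.argmax(variances)])  # → 0.5
```

This is why a sustained, variance-correlated signal is expected to be largest for the 0.5-probability cue and to vanish for fully predictable (p = 0 or p = 1) cues.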
Chebabhi, Ali; Fellah, Mohammed Karim; Kessal, Abdelhalim; Benkhoris, Mohamed F
2016-07-01
This paper proposes a new balancing three-level three-dimensional space vector modulation (B3L-3DSVM) strategy that uses redundant voltage vectors to achieve precise, high-performance control of a three-phase three-level four-leg neutral point clamped (NPC) inverter based Shunt Active Power Filter (SAPF). The SAPF eliminates source current harmonics, reduces the magnitude of the neutral wire current (eliminating the zero-sequence current produced by single-phase nonlinear loads), and compensates reactive power in three-phase four-wire electrical networks. The strategy simultaneously generates the gate switching pulses, balances the dc bus capacitor voltages (keeping the two dc bus capacitors at equal voltage), and reduces and fixes the switching frequency of the inverter switches. Nonlinear Back Stepping Controllers (NBSC) regulate the dc bus capacitor voltages and the SAPF injected currents, stabilizing the system, improving the response, and eliminating the overshoot and undershoot of a traditional PI (proportional-integral) controller. Conventional three-level three-dimensional space vector modulation (C3L-3DSVM) and B3L-3DSVM are evaluated and compared in terms of the error between the two dc bus capacitor voltages, the SAPF output voltages, the THDv and THDi of the source currents, the magnitude of the source neutral wire current, and the reactive power compensation under unbalanced single-phase nonlinear loads. The success, robustness, and effectiveness of the proposed control strategies are demonstrated through simulation using Sim Power Systems and S-Function of MATLAB/SIMULINK. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Border Collision of Three-Phase Voltage-Source Inverter System with Interacting Loads
NASA Astrophysics Data System (ADS)
Li, Zhen; Liu, Bin; Li, Yining; Wong, Siu-Chung; Liu, Xiangdong; Huang, Yuehui
As a commercial interface, three-phase voltage-source inverters (VSI) are commonly employed for energy conversion to export DC power from most distributed generation (DG) to the AC utility. Voltage-source converters not only convert power to the loads but also support the grid voltage at the point of common connection (PCC), which depends on the condition of the grid-connected loads. This paper explores border collision and its interacting mechanism among the VSI, resistive interacting loads and the grid, which manifests as the alternating emergence of inverting and rectifying operations, where the normal operation is terminated and a new one is assumed. Their mutual effect on power quality causes circuit stability issues and further deteriorates the voltage regulation capability of the VSI by dramatically raising the grid voltage harmonics. From a design-oriented view, it is found that border collision operation is induced within an unsuitable parameter space with respect to the transmission lines of the AC grid, the resistive loads and the internal resistance of the VSI. The physical phenomenon is also identified by theoretical analysis. With numerical simulations for various circuit conditions, the corresponding bifurcation boundaries are collected, where the stability of the system is lost via border collision.
Variational approach to probabilistic finite elements
NASA Technical Reports Server (NTRS)
Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.
1991-01-01
Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
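The second-moment idea can be illustrated with a first-order second-moment (FOSM) propagation sketch; the cantilever deflection formula and all input statistics below are stand-ins, not from the paper:

```python
import numpy as np

# Tip deflection of a cantilever beam, delta = P * L^3 / (3 * E * I):
# a stand-in response function, not a model from the paper.
def deflection(P, E, I, L=2.0):
    return P * L**3 / (3.0 * E * I)

def fosm(func, means, stds, h=1e-6):
    """First-order second-moment estimate of the response mean and
    variance for independent random inputs, linearized at the means."""
    means = np.asarray(means, dtype=float)
    g0 = func(*means)
    var = 0.0
    for i, s in enumerate(stds):
        x = means.copy()
        dx = h * means[i]
        x[i] += dx
        grad = (func(*x) - g0) / dx        # forward-difference sensitivity
        var += (grad * s) ** 2             # second-moment accumulation
    return g0, var

# Random load P, modulus E and second moment of area I (statistics invented)
mean_d, var_d = fosm(deflection, [1e3, 2e11, 1e-6], [1e2, 1e10, 5e-8])
```

For a nearly linear response like this one, a Monte Carlo run with the same input statistics should reproduce these moments closely, which is the kind of comparison the abstract reports.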
Variational approach to probabilistic finite elements
NASA Astrophysics Data System (ADS)
Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.
1991-08-01
Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
Variational approach to probabilistic finite elements
NASA Technical Reports Server (NTRS)
Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.
1987-01-01
Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties, and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
Probabilistic analysis for fatigue strength degradation of materials
NASA Technical Reports Server (NTRS)
Royce, Lola
1989-01-01
This report presents the results of the first year of a research program conducted for NASA-LeRC by the University of Texas at San Antonio. The research included development of methodology that provides a probabilistic treatment of lifetime prediction of structural components of aerospace propulsion systems subjected to fatigue. Material strength degradation models, based on primitive variables, include both a fatigue strength reduction model and a fatigue crack growth model. Linear elastic fracture mechanics is utilized in the latter model. Probabilistic analysis is based on simulation, and both maximum entropy and maximum penalized likelihood methods are used for the generation of probability density functions. The resulting constitutive relationships are included in several computer programs, RANDOM2, RANDOM3, and RANDOM4. These programs determine the random lifetime of an engine component, in mechanical load cycles, to reach a critical fatigue strength or crack size. The material considered was a cast nickel base superalloy, one typical of those used in the Space Shuttle Main Engine.
An ac initiation system is described which uses three ac transmission signals interlocked for safety by frequency, phase, and power discrimination... The ac initiation system is pre-armed by the application of two ac signals having the proper phases, and activates a load when an ac power signal of the proper frequency and power level is applied. (Author)
Three different designs of coaxial hybrid junctions having performance analogous to a wave-guide magic-T are discussed. The experimental results...loads, decoupling greater than 70 db can be obtained. An application of the magic-T in phase measurement is described which is independent of the signal amplitude and is similar to the homodyne system of phase measurement.
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A major research and technology program in Probabilistic Structural Analysis Methods (PSAM) is currently being sponsored by the NASA Lewis Research Center with Southwest Research Institute as the prime contractor. This program is motivated by the need to accurately predict structural response in an environment where the loadings, the material properties, and even the structure may be considered random. The heart of PSAM is a software package which combines advanced structural analysis codes with a fast probability integration (FPI) algorithm for the efficient calculation of stochastic structural response. The basic idea of PSAM is simple: make an approximate calculation of system response, including calculation of the associated probabilities, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The resulting deterministic solution should give a reasonable and realistic description of performance-limiting system responses, although some error will be inevitable. If the simple model has correctly captured the basic mechanics of the system, however, including the proper functional dependence of stress, frequency, etc., on design parameters, then the response sensitivities calculated may be of significantly higher accuracy.
Deformations of temporary wooden supports used to reduce building deflections in mining areas
NASA Astrophysics Data System (ADS)
Gromysz, Krzysztof
2018-04-01
Temporary supports, consisting of a stack of wooden elements and a hydraulic jack, are used in the process of removing deflections in buildings with one to three aboveground floors in mining areas. During uneven raising, the supports are loaded monotonically, unloaded and loaded cyclically. Laboratory tests were designed for the supports. For the investigated range of loads of 0 to 400 kN, under a growing load, a linear relationship exists between a load and the change in the stack length, which signifies that the deformations of wooden elements and displacements related to their mutual interactions increase proportionally. A seemingly higher stack stiffness is seen at the beginning of the unloading process and for cyclical loads, meaning that in this phase of loading, the material deformation of the wooden elements and the jack is responsible for changing the jack length in this load phase, with a negligible presence of mutual displacements of wooden elements. The support, after being unloaded, returns to the initial position and its permanent deformations are not observed. The stiffness of a temporary support decreases as the height of the stack of wooden elements increases.
Intelligent voltage control strategy for three-phase UPS inverters with output LC filter
NASA Astrophysics Data System (ADS)
Jung, J. W.; Leu, V. Q.; Dang, D. Q.; Do, T. D.; Mwasilu, F.; Choi, H. H.
2015-08-01
This paper presents a supervisory fuzzy neural network control (SFNNC) method for a three-phase inverter of uninterruptible power supplies (UPSs). The proposed voltage controller is comprised of a fuzzy neural network control (FNNC) term and a supervisory control term. The FNNC term is deliberately employed to estimate the uncertain terms, and the supervisory control term is designed based on the sliding mode technique to stabilise the system dynamic errors. To improve the learning capability, the FNNC term incorporates an online parameter training methodology, using the gradient descent method and Lyapunov stability theory. In addition, a linear load current observer that estimates the load currents is used to eliminate the need for load current sensors. The proposed SFNN controller and the observer are robust to the filter inductance variations, and their stability analyses are described in detail. The experimental results obtained on a prototype UPS test bed with a TMS320F28335 DSP are presented to validate the feasibility of the proposed scheme. Verification results demonstrate that the proposed control strategy can achieve smaller steady-state error and lower total harmonic distortion when subjected to nonlinear or unbalanced loads compared to the conventional sliding mode control method.
He, Zhi Chao; Huang, Shuo; Guo, Qing Hai; Xiao, Li Shan; Yang, De Wei; Wang, Ying; Yang, Yi Fu
2016-08-01
Urban sprawl has increasingly impacted water environment quality in watersheds. Based on the water environmental response, simulating and predicting the expansion threshold of urban building land can provide an alternative reference for urban construction planning. Taking three watersheds (Yundang Lake in the complete urbanization phase, Maluan Bay in the peri-urbanization phase and Xinglin Bay in the early urbanization phase) with 2009-2012 observation data as examples, we calculated the upper limits of TN and TP capacity in the three watersheds and identified the threshold value of urban building land using the regional nutrient management (ReNuMa) model, and also predicted the water environmental effects associated with changes in urban landscape pattern. Results indicated that the upper limit of TN was 12900, 42800 and 43120 kg, while that of TP was 340, 420 and 450 kg for the Yundang, Maluan and Xinglin watersheds, respectively. In reality, the environmental capacity for pollutants in Yundang Lake was not yet saturated, while the annual pollutant loads in Maluan Bay and Xinglin Bay were close to their upper limits. However, an obvious upward trend in annual TN and TP loads was observed in Xinglin Bay. The annual pollutant load was not beyond the annual upper limit in the three watersheds under Scenario 1, but exceeded it under Scenario 3. Under Scenario 2, the annual pollutant load in Yundang Lake was under-saturated, while the TN and TP in Maluan Bay were over their limits. The area thresholds of urban building land were 1320, 5600 and 4750 hm² in Yundang Lake, Maluan Bay and Xinglin Bay, respectively. This study could benefit the regulation of urban landscape planning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com; Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr
In this study we examined and compared three different probabilistic distribution methods to determine the most suitable model for probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the period 1900-2015 and magnitude M ≥ 6.0, and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distribution methods: the Weibull distribution, the Frechet distribution, and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probability and the conditional probabilities of earthquake occurrence for different elapsed times using these three distribution methods. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution was the most suitable method for this region.
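The fit-and-test step can be sketched with SciPy's Weibull tools on synthetic data; the shape, scale and sample size below are placeholders, not the catalogue values:

```python
from scipy import stats

# Synthetic inter-event times stand in for the M >= 6.0 catalogue;
# the shape c and scale are illustrative, not the fitted regional values.
times = stats.weibull_min.rvs(c=1.5, scale=10.0, size=200, random_state=1)

# Fit a two-parameter Weibull (location fixed at zero) and check the
# fit with the Kolmogorov-Smirnov statistic, as in the study.
c, loc, scale = stats.weibull_min.fit(times, floc=0.0)
ks_stat, p_value = stats.kstest(times, 'weibull_min', args=(c, loc, scale))
```

A small K-S statistic (large p-value) indicates the candidate distribution is consistent with the data; repeating this for each candidate family is how the best-fitting model is selected.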
Permanent-magnet linear alternators. I - Fundamental equations. II - Design guidelines
NASA Astrophysics Data System (ADS)
Boldea, I.; Nasar, S. A.
1987-01-01
The general equations of permanent-magnet heteropolar three-phase and single-phase linear alternators, powered by free-piston Stirling engines, are presented, with application to space power stations and domestic applications including solar power plants. The equations are applied to no-load and short-circuit conditions, illustrating the end-effect caused by the speed-reversal process. In the second part, basic design guidelines for a three-phase tubular linear alternator are given, and the procedure is demonstrated with the numerical example of the design of a 25-kVA, 14.4-m/s, 120/220-V, 60-Hz alternator.
User interface and operational issues with thermionic space power systems
NASA Technical Reports Server (NTRS)
Dahlberg, R. C.; Fisher, C. R.
1987-01-01
Thermionic space power systems have unique features which facilitate predeployment operations, provide operational flexibility and simplify the interface with the user. These were studied in some detail during the SP-100 program from 1983 to 1985. Three examples are reviewed in this paper: (1) system readiness verification in the prelaunch phase; (2) startup, shutdown, and dormancy in the operations phase; (3) part-load operation in the operations phase.
Environmental probabilistic quantitative assessment methodologies
Crovelli, R.A.
1995-01-01
In this paper, four petroleum resource assessment methodologies are presented as possible pollution assessment methodologies, even though petroleum as a resource is desirable, whereas pollution is undesirable. A methodology is defined in this paper to consist of a probability model and a probabilistic method, where the method is used to solve the model. The following four basic types of probability models are considered: 1) direct assessment, 2) accumulation size, 3) volumetric yield, and 4) reservoir engineering. Three of the four petroleum resource assessment methodologies were written as microcomputer systems, viz. TRIAGG for direct assessment, APRAS for accumulation size, and FASPU for reservoir engineering. A fourth microcomputer system termed PROBDIST supports the three assessment systems. The three assessment systems have different probability models but the same type of probabilistic method. The advantages of the analytic method are computational speed and flexibility, making it ideal for a microcomputer. -from Author
Development of Probabilistic Life Prediction Methodologies and Testing Strategies for MEMS and CMC's
NASA Technical Reports Server (NTRS)
Jadaan, Osama
2003-01-01
This effort is to investigate probabilistic life prediction methodologies for ceramic matrix composites and MicroElectroMechanical Systems (MEMS) and to analyze designs that determine stochastic properties of MEMS. For CMC's this includes a brief literature survey regarding lifing methodologies. Also of interest for MEMS is the design of a proper test for the Weibull size effect in thin film (bulge test) specimens. The Weibull size effect is a consequence of a stochastic strength response predicted from the Weibull distribution. Confirming that MEMS strength is controlled by the Weibull distribution will enable the development of a probabilistic design methodology for MEMS - similar to the GRC developed CARES/Life program for bulk ceramics. A main objective of this effort is to further develop and verify the ability of the Ceramics Analysis and Reliability Evaluation of Structures/Life (CARES/Life) code to predict the time-dependent reliability of MEMS structures subjected to multiple transient loads. A second set of objectives is to determine the applicability/suitability of the CARES/Life methodology for CMC analysis, what changes would be needed to the methodology and software, and if feasible, run a demonstration problem. Also important is an evaluation of CARES/Life coupled to the ANSYS Probabilistic Design System (PDS) and the potential of coupling transient reliability analysis to the ANSYS PDS.
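The Weibull size effect mentioned above has a simple closed form: scaling a specimen's stressed area (or volume) by a factor k changes its characteristic strength by k^(−1/m), where m is the Weibull modulus. A sketch with invented numbers, not MEMS test data:

```python
def weibull_scaled_strength(sigma1, area1, area2, m):
    """Weibull size effect: characteristic strength of a specimen with
    stressed area area2, predicted from one with stressed area area1:
    sigma2 = sigma1 * (area1 / area2) ** (1 / m). Values are illustrative."""
    return sigma1 * (area1 / area2) ** (1.0 / m)

# A 10x larger bulge-test specimen samples more flaws, so its expected
# strength drops; the drop is steeper for a lower Weibull modulus m.
print(weibull_scaled_strength(2.0, 1.0, 10.0, m=10))   # ~1.59
print(weibull_scaled_strength(2.0, 1.0, 10.0, m=3))    # ~0.93
```

Confirming that measured thin-film strengths follow this scaling is precisely the test of Weibull-controlled strength that the effort describes.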
Energy Storage and Dissipation in Random Copolymers during Biaxial Loading
NASA Astrophysics Data System (ADS)
Cho, Hansohl; Boyce, Mary
2012-02-01
Random copolymers composed of hard and soft segments, in glassy and rubbery states respectively at ambient conditions, exhibit phase-separated morphologies which can be tailored to provide hybrid mechanical behaviors of the constituents. Here, phase-separated copolymers with hard and soft contents which form co-continuous structures are explored through experiments and modeling. The mechanics of the highly dissipative yet resilient behavior of an exemplar polyurea are studied under biaxial loading. The hard phase governs the initially stiff response followed by a highly dissipative viscoplasticity, where dissipation arises from viscous relaxation as well as structural breakdown in the network structure that still provides energy storage resulting in the shape recovery. The soft phase provides additional energy storage that drives the resilience in high strain rate events. Biaxial experiments reveal the anisotropy and loading history dependence of energy storage and dissipation, validating the three-dimensional predictive capabilities of the microstructurally-based constitutive model. The combination of a highly dissipative and resilient behavior provides a versatile material for a myriad of applications ranging from self-healing microcapsules to ballistic protective coatings.
Stress state reassessment of Romanian offshore structures taking into account corrosion influence
NASA Astrophysics Data System (ADS)
Joavină, R.; Zăgan, S.; Zăgan, R.; Popa, M.
2017-08-01
Progressive degradation analysis of extraction or exploration offshore structures, with appraisal of the failure potential and of causes that can be correlated with service age, depends on various sources of uncertainty that require particular attention in the design, construction and exploitation phases. Romanian self-erecting platforms are spatial lattice structures consisting of tubular steel joints, forming a continuous system with an infinite number of dynamic degrees of freedom. Reassessment of a structure at fixed intervals of time, recorrelating the initial design elements with the actual situation encountered on location and with the structural behaviour, represents a major asset in lowering the vulnerabilities of an offshore structure. This paper proposes a comparative reassessment of the stress state of a Gloria-type offshore structure, when leaving the shipyard and at the end of the interval corresponding to capital revision, taking into account sectional changes due to marine environment corrosion. The calculation was done using the Newmark integration method on a 3D model, and the dynamic loads were assessed with a probabilistic spectral method.
A data-driven wavelet-based approach for generating jumping loads
NASA Astrophysics Data System (ADS)
Chen, Jun; Li, Guo; Racic, Vitomir
2018-06-01
This paper suggests an approach to generate human jumping loads using wavelet transform and a database of individual jumping force records. A total of 970 individual jumping force records of various frequencies were first collected by three experiments from 147 test subjects. For each record, every jumping pulse was extracted and decomposed into seven levels by wavelet transform. All the decomposition coefficients were stored in an information database. Probability distributions of jumping cycle period, contact ratio and energy of the jumping pulse were statistically analyzed. Inspired by the theory of DNA recombination, an approach was developed by interchanging the wavelet coefficients between different jumping pulses. To generate a jumping force time history with N pulses, wavelet coefficients were first selected randomly from the database at each level. They were then used to reconstruct N pulses by the inverse wavelet transform. Jumping cycle periods and contact ratios were then generated randomly based on their probabilistic functions. These parameters were assigned to each of the N pulses, which were in turn scaled by the amplitude factors βi to account for the energy relationship between successive pulses. The final jumping force time history was obtained by linking all the N cycles end to end. This simulation approach can preserve the non-stationary features of the jumping load in the time-frequency domain. Application indicates that this approach can be used to generate jumping force time histories due to a single person jumping and can be extended further to stochastic jumping loads due to groups and crowds.
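The decompose-swap-reconstruct step can be sketched with a hand-rolled Haar transform standing in for the seven-level wavelet transform and force database used in the paper (purely illustrative):

```python
import numpy as np

def haar_decompose(signal, levels):
    """Multi-level Haar wavelet decomposition (a simple stand-in for the
    paper's 7-level transform). Signal length must be a multiple of 2**levels."""
    coeffs, approx = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2.0))   # detail coefficients
        approx = (even + odd) / np.sqrt(2.0)         # running approximation
    return approx, coeffs

def haar_reconstruct(approx, coeffs):
    """Invert haar_decompose exactly."""
    for detail in reversed(coeffs):
        out = np.empty(2 * approx.size)
        out[0::2] = (approx + detail) / np.sqrt(2.0)
        out[1::2] = (approx - detail) / np.sqrt(2.0)
        approx = out
    return approx

# "Recombine" two recorded pulses by swapping their level-1 detail
# coefficients, mimicking the DNA-recombination idea of the paper.
rng = np.random.default_rng(2)
pulse_a, pulse_b = rng.random(64), rng.random(64)
aa, ca = haar_decompose(pulse_a, 3)
ab, cb = haar_decompose(pulse_b, 3)
ca[0], cb[0] = cb[0], ca[0]
new_pulse = haar_reconstruct(aa, ca)
```

Scaling each reconstructed pulse and chaining N of them end to end, with cycle periods drawn from the fitted distributions, would then yield a synthetic jumping force time history.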
Improved reliability of wind turbine towers with active tuned mass dampers (ATMDs)
NASA Astrophysics Data System (ADS)
Fitzgerald, Breiffni; Sarkar, Saptarshi; Staino, Andrea
2018-04-01
Modern multi-megawatt wind turbines are composed of slender, flexible, and lightly damped blades and towers. These components exhibit high susceptibility to wind-induced vibrations. As the size, flexibility and cost of the towers have increased in recent years, the need to protect these structures against damage induced by turbulent aerodynamic loading has become apparent. This paper combines structural dynamic models and probabilistic assessment tools to demonstrate improvements in structural reliability when modern wind turbine towers are equipped with active tuned mass dampers (ATMDs). This study proposes a multi-modal wind turbine model for wind turbine control design and analysis. This study incorporates an ATMD into the tower of this model. The model is subjected to stochastically generated wind loads of varying speeds to develop wind-induced probabilistic demand models for towers of modern multi-megawatt wind turbines under structural uncertainty. Numerical simulations have been carried out to ascertain the effectiveness of the active control system to improve the structural performance of the wind turbine and its reliability. The study constructs fragility curves, which illustrate reductions in the vulnerability of towers to wind loading owing to the inclusion of the damper. Results show that the active controller is successful in increasing the reliability of the tower responses. According to the analysis carried out in this paper, a strong reduction of the probability of exceeding a given displacement at the rated wind speed has been observed.
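A fragility curve of the kind constructed in the paper is commonly modeled as a lognormal CDF of the demand parameter; the median capacities and dispersion below are invented for illustration and are not the paper's fitted values:

```python
import math

def fragility(v, theta, beta):
    """Lognormal fragility curve: probability that the tower response
    exceeds a displacement limit at wind speed v, with median capacity
    speed theta and lognormal dispersion beta (illustrative parameters)."""
    return 0.5 * (1.0 + math.erf(math.log(v / theta) / (beta * math.sqrt(2.0))))

# An ATMD that effectively raises the median capacity theta shifts the
# fragility curve right, lowering exceedance probability at rated speed.
p_uncontrolled = fragility(12.0, theta=11.0, beta=0.3)
p_with_atmd = fragility(12.0, theta=14.0, beta=0.3)
```

The reduction from p_uncontrolled to p_with_atmd mirrors the reported "strong reduction of the probability of exceeding a given displacement at the rated wind speed."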
NASA Astrophysics Data System (ADS)
Fraldi, M.; Perrella, G.; Ciervo, M.; Bosia, F.; Pugno, N. M.
2017-09-01
Very recently, a Weibull-based probabilistic strategy has been successfully applied to bundles of wires to determine their overall stress-strain behaviour, also capturing previously unpredicted nonlinear and post-elastic features of hierarchical strands. This approach is based on the so-called "Equal Load Sharing (ELS)" hypothesis by virtue of which, when a wire breaks, the load acting on the strand is homogeneously redistributed among the surviving wires. Despite the overall effectiveness of the method, some discrepancies between theoretical predictions and in silico Finite Element-based simulations or experimental findings might arise when more complex structures are analysed, e.g. helically arranged bundles. To overcome these limitations, an enhanced hybrid approach is proposed in which the probability of rupture is combined with a deterministic mechanical model of a strand constituted by helically-arranged and hierarchically-organized wires. The analytical model is validated comparing its predictions with both Finite Element simulations and experimental tests. The results show that generalized stress-strain responses - incorporating tension/torsion coupling - are naturally found and, once one or more elements break, the competition between geometry and mechanics of the strand microstructure, i.e. the different cross sections and helical angles of the wires in the different hierarchical levels of the strand, determines the no longer homogeneous stress redistribution among the surviving wires whose fate is hence governed by a "Hierarchical Load Sharing" criterion.
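The ELS hypothesis the paper starts from has a well-known simulation form: assign each wire a Weibull strength, strain the bundle, and let the load redistribute uniformly over the survivors. A sketch with illustrative parameters (no helical geometry or hierarchy, which is exactly what the paper's hybrid model adds):

```python
import numpy as np

rng = np.random.default_rng(3)

def els_bundle_curve(n_wires=1000, shape=5.0, scale=1.0, n_steps=200):
    """Stress-strain curve of a bundle of elastic-brittle wires with
    Weibull-distributed strengths under Equal Load Sharing: at strain eps
    the survivors are the wires whose strength exceeds eps (unit modulus),
    so bundle stress = eps * surviving fraction. Parameters illustrative."""
    strengths = scale * rng.weibull(shape, n_wires)
    eps = np.linspace(0.0, 2.0 * scale, n_steps)
    surviving = (strengths[None, :] > eps[:, None]).mean(axis=1)
    return eps, eps * surviving

strain, stress = els_bundle_curve()
# The bundle stress rises, peaks, then softens as wires fail progressively.
```

Under "Hierarchical Load Sharing," the uniform redistribution in the last step would instead be weighted by each survivor's cross section and helix angle within its hierarchical level.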
Probabilistic Simulation of Progressive Fracture in Bolted-Joint Composite Laminates
NASA Technical Reports Server (NTRS)
Minnetyan, L.; Singhal, S. N.; Chamis, C. C.
1996-01-01
This report describes computational methods to probabilistically simulate fracture in bolted composite structures. An innovative approach that is independent of stress intensity factors and fracture toughness was used to simulate progressive fracture. The effect of design variable uncertainties on structural damage was also quantified. A fast probability integrator assessed the scatter in the composite structure response before and after damage. Then the sensitivity of the response to design variables was computed. General-purpose methods, which are applicable to bolted joints in all types of structures and in all fracture processes-from damage initiation to unstable propagation and global structure collapse-were used. These methods were demonstrated for a bolted joint of a polymer matrix composite panel under edge loads. The effects of the fabrication process were included in the simulation of damage in the bolted panel. Results showed that the most effective way to reduce end displacement at fracture is to control both the load and the ply thickness. The cumulative probability for longitudinal stress in all plies was most sensitive to the load; in the 0 deg. plies it was very sensitive to ply thickness. The cumulative probability for transverse stress was most sensitive to the matrix coefficient of thermal expansion. In addition, fiber volume ratio and fiber transverse modulus both contributed significantly to the cumulative probability for the transverse stresses in all the plies.
NASA Astrophysics Data System (ADS)
Huang, Peiyan; Liu, Guangwan; Guo, Xinyan; Huang, Man
2008-11-01
The fatigue crack propagation rate of reinforced concrete (RC) beams strengthened with carbon fiber laminate (CFL) was investigated experimentally on an MTS system. The results show that main crack propagation in a strengthened beam can be divided into three phases: 1) a fast propagation phase; 2) a steady propagation and rest phase; 3) an unsteady propagation phase. Phase 2, the steady propagation and rest phase, makes up about 95% of the fatigue life of the strengthened beam. The propagation rate of the main crack, da/dN, in phase 2 can be described by the Paris formula, and the constants C and m can be determined from fatigue crack propagation experiments on RC beams strengthened with CFL under three-point bending loads.
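The Paris-law description of phase 2 can be sketched by integrating da/dN = C(ΔK)^m over cycle blocks. The constants and geometry factor below are hypothetical placeholders; the paper fits C and m from the three-point bending fatigue tests of CFL-strengthened beams.

```python
import numpy as np

def paris_crack_growth(a0, C, m, delta_sigma, Y=1.12, cycles=200_000, dN=1000):
    """Integrate the Paris law da/dN = C * (dK)^m block by block,
    with stress intensity range dK = Y * delta_sigma * sqrt(pi * a)."""
    a = a0
    history = [(0, a)]
    for n in range(dN, cycles + 1, dN):
        dK = Y * delta_sigma * np.sqrt(np.pi * a)  # MPa*sqrt(m)
        a += C * dK**m * dN                        # crack extension over dN cycles
        history.append((n, a))
    return np.array(history)

# hypothetical constants, not the fitted values from the paper
hist = paris_crack_growth(a0=1e-3, C=1e-11, m=3.0, delta_sigma=100.0)
```

Because dK grows with a, the growth rate accelerates with cycle count, reproducing the slow steady phase followed by unstable propagation.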
NASA Astrophysics Data System (ADS)
Wei, Y.; Thomas, S.; Zhou, H.; Arcas, D.; Titov, V. V.
2017-12-01
The increasing potential tsunami hazards pose great challenges for infrastructure along the coastlines of the U.S. Pacific Northwest. Tsunami impact at a coastal site is usually assessed from deterministic scenarios based on 10,000 years of geological records in the Cascadia Subduction Zone (CSZ). Aside from these deterministic methods, the new ASCE 7-16 tsunami provisions provide engineering design criteria for tsunami loads on buildings based on a probabilistic approach. This work develops a site-specific model near Newport, OR using high-resolution grids, and computes tsunami inundation depths and velocities at the study site resulting from credible probabilistic and deterministic earthquake sources in the Cascadia Subduction Zone. Three Cascadia scenarios, two deterministic scenarios (XXL1 and L1) and a 2,500-yr probabilistic scenario compliant with the new ASCE 7-16 standard, are simulated using a combination of a depth-averaged shallow water model for offshore propagation and a Boussinesq-type model for onshore inundation. We discuss the methods and procedure used to obtain the 2,500-year probabilistic scenario for Newport that is compliant with the ASCE 7-16 tsunami provisions. We provide details of the model results, particularly the inundation depth and flow speed for a new building at Newport, Oregon, which will also be designated as a tsunami vertical evacuation shelter. We show that the ASCE 7-16 consistent hazards lie between those obtained from the deterministic L1 and XXL1 scenarios, and that the greatest impact on the building may come from later waves. As a further step, we utilize the inundation model results to numerically compute tracks of large vessels in the vicinity of the building site and to estimate whether these vessels would impact the building site during the extreme XXL1 and ASCE 7-16 hazard-consistent scenarios.
A two-step study is carried out: tracks of massless particles are studied first, followed by large vessels with assigned mass, accounting for drag force, inertial force, ship grounding, and mooring. The simulation results show that none of the large vessels impacts the building site in any of the tested scenarios.
Salivary Biomarkers and Training Load during Training and Competition in Paralympic Swimmers.
Sinnott-O'Connor, Ciara; Comyns, Tom; Nevill, Alan M; Warrington, Giles
2017-11-28
Stress responses in athletes can be attributed to training and also competition, where increased physiological and psychological stress may negatively impact performance and recovery. The aim of this study was to examine the relationship between training load and the salivary biomarkers IgA, alpha-amylase (AA) and cortisol across a 16-week preparation phase and a 10-day competition phase in Paralympic swimmers. Four Paralympic swimmers provided bi-weekly saliva samples during three training phases: 1) normal training, 2) intensified training and 3) taper, as well as daily saliva samples during the 10-day Paralympic competition (2016 Paralympic Games). Training load (TL) was measured using session-RPE. Multi-level analysis identified a significant increase in sIgA (94.98 (27.69) μg.ml-1), sAA (45.78 (19.07) μg.ml-1) and salivary cortisol (7.92 (2.17) ng.ml-1) during intensified training, concurrent with a 38.3% increase in TL. During the taper phase, a 49.5% decrease in TL from the intensified training phase resulted in decreases in sIgA, sAA and salivary cortisol; however, all three remained higher than baseline levels. A further significant increase was observed during competition in sIgA (168.69 (24.19) μg.ml-1), sAA (35.86 (16.67) μg.ml-1) and salivary cortisol (10.49 (1.89) ng.ml-1), despite a continued decrease (77.8%) in TL from the taper phase. The results demonstrate that performance in a major competition such as the Paralympic Games induces a stress response in athletes despite a noticeable reduction in TL. Given the elevated stress response observed, modifications to individual post-race recovery protocols may be required to enable athletes to maximise performance across all ten days of competition.
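The session-RPE method quantifies training load as perceived exertion (CR-10 scale) multiplied by session duration in minutes; phase-to-phase percentage changes like the 38.3% rise reported above are then computed on summed weekly loads. The session values below are illustrative, not the study's data.

```python
def session_rpe_load(rpe, duration_min):
    """Session training load in arbitrary units: session-RPE (CR-10 scale)
    multiplied by session duration in minutes."""
    return rpe * duration_min

def percent_change(prev, curr):
    """Percentage change of a weekly load relative to the previous phase."""
    return 100.0 * (curr - prev) / prev

# hypothetical weekly sessions (RPE, minutes) for the three training phases
normal = sum(session_rpe_load(r, d) for r, d in [(5, 90), (6, 90), (5, 120), (4, 60)])
intensified = sum(session_rpe_load(r, d) for r, d in [(7, 120), (8, 90), (7, 120), (6, 90)])
taper = sum(session_rpe_load(r, d) for r, d in [(4, 60), (5, 60), (3, 45)])
```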
Characterisation of Asphalt Concrete Using Nanoindentation
Barbhuiya, Salim; Caracciolo, Benjamin
2017-01-01
In this study, nanoindentation was conducted to extract the load-displacement behaviour and the nanomechanical properties of asphalt concrete across the mastic, matrix, and aggregate phases. Further, the performance of hydrated lime as an additive was assessed across the three phases. The samples containing hydrated lime showed greater resistance to deformation in the mastic and matrix phases, in particular the mastic. There is strong evidence that hydrated lime has the most potent effect on the mastic phase, with a significant increase in hardness and stiffness. PMID:28773181
Phase-field modeling of the beta to omega phase transformation in Zr–Nb alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeddu, Hemantha Kumar; Lookman, Turab
A three-dimensional elastoplastic phase-field model is developed, using the Finite Element Method (FEM), for modeling the athermal beta to omega phase transformation in Zr–Nb alloys by including plastic deformation and strain hardening of the material. The microstructure evolution during the athermal transformation as well as under different stress states, e.g. uni-axial tensile and compressive, bi-axial tensile and compressive, shear and tri-axial loadings, is studied. The effects of plasticity, stress states and stress loading direction on the microstructure evolution as well as on the mechanical properties are studied. The input data corresponding to a Zr – 8 at.% Nb alloy are acquired from experimental studies as well as by using the CALPHAD method. Our simulations show that the four different omega variants grow as ellipsoid-shaped particles. Our results show that, due to stress relaxation, the athermal phase transformation occurs slightly more readily in the presence of plasticity than in its absence. The evolution of the omega phase differs under different stress states, which leads to differences in the mechanical properties of the material. The variant selection mechanism, i.e. the formation of different variants under different stress loading directions, is also captured by our model.
Semi-volatile pesticides, such as chlorpyrifos, can move about within a home environment after an application due to physical/chemical processes, resulting in concentration loadings in and on objects and surfaces. Children can be particularly susceptible to the effects of pest...
Cyclic Axial-Torsional Deformation Behavior of a Cobalt-Base Superalloy
NASA Technical Reports Server (NTRS)
Bonacuse, Peter J.; Kalluri, Sreeramesh
1995-01-01
The cyclic, high-temperature deformation behavior of a wrought cobalt-base superalloy, Haynes 188, is investigated under combined axial and torsional loads. This is accomplished through the examination of hysteresis loops generated from a biaxial fatigue test program. A high-temperature axial, torsional, and combined axial-torsional fatigue database has been generated on Haynes 188 at 760 C. Cyclic loading tests have been conducted on uniform gage section tubular specimens in a servohydraulic axial-torsional test rig. Test control and data acquisition were accomplished with a minicomputer. The fatigue behavior of Haynes 188 at 760 C under axial, torsional, and combined axial-torsional loads and the monotonic and cyclic deformation behaviors under axial and torsional loads have been previously reported. In this paper, the cyclic hardening characteristics and typical hysteresis loops in the axial stress versus axial strain, shear stress versus engineering shear strain, axial strain versus engineering shear strain, and axial stress versus shear stress spaces are presented for cyclic in-phase and out-of-phase axial-torsional tests. For the in-phase tests, three different values of the proportionality constant lambda (the ratio of engineering shear strain amplitude to axial strain amplitude) are examined, viz., 0.86, 1.73, and 3.46. In the out-of-phase tests, three different values of the phase angle, phi (between the axial and engineering shear strain waveforms), are studied, viz., 30, 60, and 90 degrees, with lambda equal to 1.73.
The cyclic hardening behaviors of all the tests conducted on Haynes 188 at 760 C are evaluated using the von Mises equivalent stress-strain and the maximum shear stress-maximum engineering shear strain (Tresca) curves. Comparisons are also made between the hardening behaviors of cyclic axial, torsional, and combined in-phase (lambda = 1.73 and phi = 0 deg) and out-of-phase (lambda = 1.73 and phi = 90 deg) axial-torsional fatigue tests. These comparisons are accomplished through simple Ramberg-Osgood type stress-strain functions for the cyclic axial stress-strain and shear stress-engineering shear strain curves.
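The strain paths described above can be sketched directly: lambda scales the shear waveform, phi shifts it, and the von Mises equivalent strain combines the two channels. Amplitudes below are illustrative, not the test program's values.

```python
import numpy as np

def axial_torsional_path(eps_amp=0.005, lam=1.73, phi_deg=0.0, n=361):
    """Axial strain and engineering shear strain waveforms over one cycle:
    lam is the ratio of shear to axial strain amplitude, phi the phase
    angle between the waveforms, as in the tests described above."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    eps = eps_amp * np.sin(t)
    gamma = lam * eps_amp * np.sin(t + np.radians(phi_deg))
    eq = np.sqrt(eps**2 + gamma**2 / 3.0)  # von Mises equivalent strain
    return eps, gamma, eq

_, _, eq_in = axial_torsional_path(phi_deg=0.0)    # in-phase test
_, _, eq_out = axial_torsional_path(phi_deg=90.0)  # 90 deg out-of-phase test
```

Note that with lambda = 1.73 (approximately the square root of 3) and phi = 90 deg, the equivalent strain stays nearly constant over the cycle: the strain path is a circle in the eps versus gamma/sqrt(3) plane, so the material is held continuously near peak equivalent strain.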
Effect of grain boundaries on shock-induced phase transformation in iron bicrystals
NASA Astrophysics Data System (ADS)
Zhang, Xueyang; Wang, Kun; Zhu, Wenjun; Chen, Jun; Cai, Mengqiu; Xiao, Shifang; Deng, Huiqiu; Hu, Wangyu
2018-01-01
Non-equilibrium molecular-dynamic simulations with a modified analytic embedded-atom model potential have been performed to investigate the effect of three kinds of grain boundaries (GBs) on the martensitic transformation in iron bicrystals with three different GBs under shock loadings. Our results show that the phase transition was influenced by the GBs. All three GBs provide a nucleation site for the α → ɛ transformation in samples shock-loaded with up = 0.5 km/s, and in particular, the elastic wave can induce the phase transformation at Σ3 ⟨110⟩ twist GB, which indicates that the phase transformation can occur at Σ3 ⟨110⟩ twist GB with a much lower pressure. The effect of GBs on the stress assisted transformation (SAT) mechanisms is discussed. All variants nucleating at the vicinity of these GBs meet the maximum strain work (MSW) criterion. Moreover, all of the variants with the MSW nucleate at Σ5 ⟨001⟩ twist GB and Σ3 ⟨110⟩ tilt GB, but only part of them nucleate at Σ3 ⟨110⟩ twist GB. This is because the coincident planes between both sides of the GB would affect the slip process, which is the second stage of the martensitic transformation and influences the selection of variant. We also find that the martensitic transformation at the front end of the bicrystals would give rise to stress attenuation in samples shock-loaded with up = 0.6 km/s, which makes the GBs seem to be unfavorable to the martensitic transformation. Our findings have the potential to affect the interface engineering and material design under high pressure conditions.
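The maximum strain work (MSW) criterion invoked above ranks candidate variants by the work the applied stress does through each variant's transformation strain, W = sigma : eps_t, and selects the maximum. A minimal sketch with toy tensors follows; the strain tensors are illustrative, not the actual alpha to epsilon variants of iron.

```python
import numpy as np

def select_variant_by_msw(stress, variant_strains):
    """Rank transformation variants by strain work W = sigma : eps_t
    (full double contraction) and return the index of the maximum."""
    works = [float(np.tensordot(stress, eps_t, axes=2)) for eps_t in variant_strains]
    return int(np.argmax(works)), works

# uniaxial compression along z and two toy transformation strain tensors
sigma = np.diag([0.0, 0.0, -10.0])
v1 = np.diag([0.01, 0.01, -0.02])   # contracts along the loading axis
v2 = np.diag([-0.02, 0.01, 0.01])   # contracts transverse to the load
best, works = select_variant_by_msw(sigma, [v1, v2])
```

Under compression along z, the variant whose transformation strain contracts along the loading axis does positive work and is selected.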
As part of the National Coastal Assessment, the Environmental Monitoring and Assessment Program of EPA is conducting a three year evaluation of benthic habitat condition of California estuaries. In 1999, probabilistic sampling for a variety of biotic and abiotic condition indica...
Three key areas of scientific inquiry in the study of human exposure to environmental contaminants are 1) assessment of aggregate (i.e., multi-pathway, multi-route) exposures, 2) application of probabilistic methods to exposure prediction, and 3) the interpretation of biomarker m...
Efficient entanglement distillation without quantum memory.
Abdelkhalek, Daniela; Syllwasschy, Mareike; Cerf, Nicolas J; Fiurášek, Jaromír; Schnabel, Roman
2016-05-31
Entanglement distribution between distant parties is an essential component to most quantum communication protocols. Unfortunately, decoherence effects such as phase noise in optical fibres are known to demolish entanglement. Iterative (multistep) entanglement distillation protocols have long been proposed to overcome decoherence, but their probabilistic nature makes them inefficient since the success probability decays exponentially with the number of steps. Quantum memories have been contemplated to make entanglement distillation practical, but suitable quantum memories are not realised to date. Here, we present the theory for an efficient iterative entanglement distillation protocol without quantum memories and provide a proof-of-principle experimental demonstration. The scheme is applied to phase-diffused two-mode-squeezed states and proven to distil entanglement for up to three iteration steps. The data are indistinguishable from those that an efficient scheme using quantum memories would produce. Since our protocol includes the final measurement it is particularly promising for enhancing continuous-variable quantum key distribution.
Efficient entanglement distillation without quantum memory
Abdelkhalek, Daniela; Syllwasschy, Mareike; Cerf, Nicolas J.; Fiurášek, Jaromír; Schnabel, Roman
2016-01-01
Entanglement distribution between distant parties is an essential component to most quantum communication protocols. Unfortunately, decoherence effects such as phase noise in optical fibres are known to demolish entanglement. Iterative (multistep) entanglement distillation protocols have long been proposed to overcome decoherence, but their probabilistic nature makes them inefficient since the success probability decays exponentially with the number of steps. Quantum memories have been contemplated to make entanglement distillation practical, but suitable quantum memories are not realised to date. Here, we present the theory for an efficient iterative entanglement distillation protocol without quantum memories and provide a proof-of-principle experimental demonstration. The scheme is applied to phase-diffused two-mode-squeezed states and proven to distil entanglement for up to three iteration steps. The data are indistinguishable from those that an efficient scheme using quantum memories would produce. Since our protocol includes the final measurement it is particularly promising for enhancing continuous-variable quantum key distribution. PMID:27241946
Probabilistic TSUnami Hazard MAPS for the NEAM Region: The TSUMAPS-NEAM Project
NASA Astrophysics Data System (ADS)
Basili, R.; Babeyko, A. Y.; Baptista, M. A.; Ben Abdallah, S.; Canals, M.; El Mouraouah, A.; Harbitz, C. B.; Ibenbrahim, A.; Lastras, G.; Lorito, S.; Løvholt, F.; Matias, L. M.; Omira, R.; Papadopoulos, G. A.; Pekcan, O.; Nmiri, A.; Selva, J.; Yalciner, A. C.
2016-12-01
As global awareness of tsunami hazard and risk grows, the North-East Atlantic, the Mediterranean, and connected Seas (NEAM) region still lacks a thorough probabilistic tsunami hazard assessment. The TSUMAPS-NEAM project aims to fill this gap in the NEAM region by 1) producing the first region-wide long-term homogenous Probabilistic Tsunami Hazard Assessment (PTHA) from earthquake sources, and by 2) triggering a common tsunami risk management strategy. The specific objectives of the project are tackled by the following four consecutive actions: 1) Conduct a state-of-the-art, standardized, and updatable PTHA with full uncertainty treatment; 2) Review the entire process with international experts; 3) Produce the PTHA database, with documentation of the entire hazard assessment process; and 4) Publicize the results through an awareness raising and education phase, and a capacity building phase. This presentation will illustrate the project layout, summarize its current status of advancement and prospective results, and outline its connections with similar initiatives in the international context. The TSUMAPS-NEAM Project (http://www.tsumaps-neam.eu/) is co-financed by the European Union Civil Protection Mechanism, Agreement Number: ECHO/SUB/2015/718568/PREV26.
Probabilistic TSUnami Hazard MAPS for the NEAM Region: The TSUMAPS-NEAM Project
NASA Astrophysics Data System (ADS)
Basili, Roberto; Babeyko, Andrey Y.; Hoechner, Andreas; Baptista, Maria Ana; Ben Abdallah, Samir; Canals, Miquel; El Mouraouah, Azelarab; Bonnevie Harbitz, Carl; Ibenbrahim, Aomar; Lastras, Galderic; Lorito, Stefano; Løvholt, Finn; Matias, Luis Manuel; Omira, Rachid; Papadopoulos, Gerassimos A.; Pekcan, Onur; Nmiri, Abdelwaheb; Selva, Jacopo; Yalciner, Ahmet C.; Thio, Hong K.
2017-04-01
As global awareness of tsunami hazard and risk grows, the North-East Atlantic, the Mediterranean, and connected Seas (NEAM) region still lacks a thorough probabilistic tsunami hazard assessment. The TSUMAPS-NEAM project aims to fill this gap in the NEAM region by 1) producing the first region-wide long-term homogenous Probabilistic Tsunami Hazard Assessment (PTHA) from earthquake sources, and by 2) triggering a common tsunami risk management strategy. The specific objectives of the project are tackled by the following four consecutive actions: 1) Conduct a state-of-the-art, standardized, and updatable PTHA with full uncertainty treatment; 2) Review the entire process with international experts; 3) Produce the PTHA database, with documentation of the entire hazard assessment process; and 4) Publicize the results through an awareness raising and education phase, and a capacity building phase. This presentation will illustrate the project layout, summarize its current status of advancement including the first preliminary release of the assessment, and outline its connections with similar initiatives in the international context. The TSUMAPS-NEAM Project (http://www.tsumaps-neam.eu/) is co-financed by the European Union Civil Protection Mechanism, Agreement Number: ECHO/SUB/2015/718568/PREV26.
Control method of Three-phase Four-leg converter based on repetitive control
NASA Astrophysics Data System (ADS)
Hui, Wang
2018-03-01
This research takes the magnetic levitation wind power generation system as its object. To address the power quality problems caused by unbalanced loads in the power supply system, we combined the characteristics of the magnetic levitation wind power generation system with the repetitive control principle and proposed an independent control strategy for a three-phase four-leg converter. Based on the symmetric component method, a second-order generalized integrator is used to extract the positive- and negative-sequence signals, and decoupling control is carried out in the synchronous rotating reference frame: the positive- and negative-sequence voltages are regulated with PI double closed loops, and a PI regulator with repetitive control is introduced to eliminate the static error in the fundamental-frequency fluctuation of the zero-sequence component. The simulation results based on Matlab/Simulink show that the proposed control scheme effectively suppresses the disturbance caused by unbalanced loads and maintains load voltage balance. The scheme is easy to implement and markedly improves the quality of the independent power supply system.
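The symmetric component method underlying the sequence extraction can be shown with the static Fortescue transform; the paper performs this extraction online with a second-order generalized integrator, but the textbook phasor version illustrates why an unbalanced load produces the negative- and zero-sequence terms the controller must reject.

```python
import numpy as np

def symmetrical_components(va, vb, vc):
    """Fortescue decomposition of three phase phasors into zero-,
    positive-, and negative-sequence components."""
    a = np.exp(2j * np.pi / 3.0)
    T = np.array([[1, 1, 1],
                  [1, a, a**2],
                  [1, a**2, a]], dtype=complex) / 3.0
    v0, v1, v2 = T @ np.array([va, vb, vc], dtype=complex)
    return v0, v1, v2

a = np.exp(2j * np.pi / 3.0)
# a balanced abc set has only a positive-sequence component ...
v0_b, v1_b, v2_b = symmetrical_components(1.0, a**2, a)
# ... while an unbalanced load introduces negative- and zero-sequence terms
v0_u, v1_u, v2_u = symmetrical_components(1.0, 0.7 * a**2, a)
```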
A Hybrid Demand Response Simulator Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-05-02
A hybrid demand response simulator (HDRS) is developed to test different control algorithms for centralized and distributed demand response (DR) programs in a small distribution power grid. The HDRS is designed to model a wide variety of DR services such as peak shaving, load shifting, arbitrage, spinning reserves, load following, regulation, emergency load shedding, etc. The HDRS does not model the dynamic behaviors of the loads; rather, it simulates the load scheduling and dispatch process. The load models include TCAs (water heaters, air conditioners, refrigerators, freezers, etc.) and non-TCAs (lighting, washers, dishwashers, etc.). Ambient temperature changes, thermal resistance, capacitance, and the unit control logic can be modeled for TCA loads. The use patterns of the non-TCA loads can be modeled by probability of use and probabilistic durations. Some communication network characteristics, such as delays and errors, can also be modeled. Most importantly, because the simulator is modular and greatly simplifies the thermal models for TCA loads, it is very easy and fast to use for testing and validating different control algorithms in a simulated environment.
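A simplified TCA model of the kind described above can be sketched as a one-node thermal circuit with a hysteretic thermostat; this is a generic textbook formulation with illustrative parameter values, not the HDRS implementation.

```python
def simulate_water_heater(hours=24.0, steps=1440, t_set=50.0, deadband=2.0,
                          t_amb=20.0, r_th=120.0, c_th=0.4, p_kw=4.5):
    """One-node thermal model of a thermostatically controlled appliance:
    dT/dt = (T_amb - T)/(R*C) + on*P/C, with a hysteretic thermostat that
    switches on below t_set - deadband and off above t_set + deadband."""
    dt_h = hours / steps
    temp, on = t_set, False
    load = []
    for _ in range(steps):
        if temp < t_set - deadband:
            on = True
        elif temp > t_set + deadband:
            on = False
        heat = p_kw / c_th if on else 0.0
        temp += dt_h * ((t_amb - temp) / (r_th * c_th) + heat)
        load.append(p_kw if on else 0.0)
    return load

load_profile = simulate_water_heater()
duty_cycle = sum(1 for p in load_profile if p > 0) / len(load_profile)
```

Aggregating many such units with randomized parameters yields the probabilistic load shapes a DR controller would schedule against.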
NASA Astrophysics Data System (ADS)
Sun, Hu; Zhang, Aijia; Wang, Yishou; Qing, Xinlin P.
2017-04-01
Guided wave-based structural health monitoring (SHM) has been given considerable attention and widely studied for large-scale aircraft structures. Nevertheless, it is difficult to apply SHM systems on board or online, and one of the most serious reasons is environmental influence. Load is one factor that affects not only the host structure, in which the guided wave propagates, but also the PZT, by which the guided wave is transmitted and received. In this paper, numerical analysis using the finite element method is used to study the load effect on guided waves acquired by PZT. Static loads of different magnitudes are considered to analyze their effect on the guided wave signals that the PZT transmits and receives. Based on the variation trend of guided waves versus load, a load compensation method is developed to eliminate the effects of load in the process of damage detection. A probabilistic reconstruction algorithm based on the signal variation of each transmitter-receiver path is employed to identify the damage. Numerical tests are conducted to verify the feasibility and effectiveness of the proposed method.
A Methodology for Multihazards Load Combinations of Earthquake and Heavy Trucks for Bridges
Wang, Xu; Sun, Baitao
2014-01-01
Issues of load combinations of earthquakes and heavy trucks are important in multihazards bridge design. Current load and resistance factor design (LRFD) specifications usually treat extreme hazards alone and have no probabilistic basis for extreme load combinations. Earthquake load and heavy truck load are random processes with their own characteristics, and the maximum combined load is not the simple superposition of their individual maxima. The traditional Ferry Borges-Castanheta model, which considers load duration and occurrence probability, describes well the conversion of random processes to random variables and their combination, but it imposes strict constraints on time-interval selection to obtain precise results. Turkstra's rule combines one load at its lifetime maximum with another load at its instantaneous value (or mean value), which looks more rational, but the results are generally unconservative. Therefore, a modified model is presented here that combines the advantages of the Ferry Borges-Castanheta model and Turkstra's rule. The modified model is based on conditional probability, which can convert random processes to random variables relatively easily and consider the nonmaximum factor in load combinations. Earthquake load and heavy truck load combinations are employed to illustrate the model. Finally, the results of a numerical simulation are used to verify the feasibility and rationality of the model. PMID:24883347
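The unconservatism of Turkstra's rule can be checked numerically: discretize each load process into i.i.d. pulses over equal intervals (a Ferry Borges-Castanheta style discretization) and compare the true maximum of the sum with the rule's estimate. The distributions and parameters below are illustrative, not the paper's calibrated earthquake and truck models.

```python
import numpy as np

rng = np.random.default_rng(0)

def compare_combination_rules(n=20_000, intervals=100):
    """Monte Carlo check of Turkstra's rule against the true maximum of a
    combined pulse-type load process over the reference period."""
    l1 = rng.normal(10.0, 3.0, size=(n, intervals))  # e.g. heavy-truck pulses
    l2 = rng.normal(2.0, 1.0, size=(n, intervals))   # companion load pulses
    true_max = (l1 + l2).max(axis=1).mean()
    # Turkstra: one load at its maximum, the other at its mean
    turkstra = np.maximum(l1.max(axis=1) + l2.mean(),
                          l2.max(axis=1) + l1.mean()).mean()
    return true_max, turkstra

true_mean, turkstra_mean = compare_combination_rules()
```

The gap true_mean > turkstra_mean is the sketch-level version of the abstract's observation that Turkstra's rule is generally unconservative.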
Comment on "Secure quantum private information retrieval using phase-encoded queries"
NASA Astrophysics Data System (ADS)
Shi, Run-hua; Mu, Yi; Zhong, Hong; Zhang, Shun
2016-12-01
In this Comment, we reexamine the security of phase-encoded quantum private query (QPQ). We find that the current phase-encoded QPQ protocols, including their applications, are vulnerable to a probabilistic entangle-and-measure attack performed by the owner of the database. Furthermore, we discuss how to overcome this security loophole and present an improved cheat-sensitive QPQ protocol without losing the good features of the original protocol.
Toward the Probabilistic Forecasting of High-latitude GPS Phase Scintillation
NASA Technical Reports Server (NTRS)
Prikryl, P.; Jayachandran, P.T.; Mushini, S. C.; Richardson, I. G.
2012-01-01
The phase scintillation index was obtained from L1 GPS data collected with the Canadian High Arctic Ionospheric Network (CHAIN) during the years of the extended solar minimum, 2008-2010. Phase scintillation occurs predominantly on the dayside in the cusp and in the nightside auroral oval. We set forth a probabilistic forecast method for phase scintillation in the cusp based on the arrival time of either solar wind corotating interaction regions (CIRs) or interplanetary coronal mass ejections (ICMEs). CIRs on the leading edge of high-speed streams (HSS) from coronal holes are known to cause recurrent geomagnetic and ionospheric disturbances that can be forecast one or several solar rotations in advance. Superposed epoch analysis of phase scintillation occurrence showed a sharp increase in scintillation occurrence just after the arrival of high-speed solar wind and a peak associated with weak to moderate CMEs during the solar minimum. Cumulative probability distribution functions for phase scintillation occurrence in the cusp are obtained from statistical data for days before and after CIR and ICME arrivals. The probability curves are also specified for low and high (below and above median) values of various solar wind plasma parameters. The initial results are used to demonstrate the forecasting technique on two example periods of CIRs and ICMEs.
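Superposed epoch analysis composites a time series around a list of key times (epoch zero, here the CIR/ICME arrival times). A minimal sketch on synthetic data, not CHAIN records, follows.

```python
import numpy as np

def superposed_epoch(series, key_times, window=5):
    """Average a time series over windows centered on each key time,
    returning lags (epoch zero at lag 0) and the composite mean."""
    stacks = [series[t - window:t + window + 1]
              for t in key_times
              if window <= t <= len(series) - window - 1]
    return np.arange(-window, window + 1), np.mean(stacks, axis=0)

rng = np.random.default_rng(1)
occurrence = rng.random(400) * 0.1          # background occurrence rate
arrivals = [50, 150, 250, 350]              # synthetic "arrival" epochs
for t in arrivals:
    occurrence[t:t + 3] += 0.5              # sharp increase after each arrival
lags, composite = superposed_epoch(occurrence, arrivals)
```

The composite shows the post-arrival enhancement clearly even though any single event is noisy, which is the point of the technique.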
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2010-01-01
Structural designs generated by the traditional method, the optimization method, and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design, and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest. Some variation was noticed in the designs calculated by the methods; it may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure. Weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of loads and material properties remained a challenge.
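The weight-versus-reliability tradeoff can be sketched for a single tension member with normal load and strength: weight is proportional to cross-sectional area, and sweeping the area sweeps the failure probability from near one down toward zero. The numbers are illustrative, not from the paper's structures.

```python
import math

def failure_probability(area, mu_load=100.0, sig_load=15.0,
                        mu_strength=250.0, sig_strength=25.0):
    """P(load/area > strength) for a tension member with normal load
    and normal allowable stress (illustrative parameters)."""
    mu_margin = mu_strength - mu_load / area
    sig_margin = math.hypot(sig_strength, sig_load / area)
    beta = mu_margin / sig_margin                   # reliability index
    return 0.5 * math.erfc(beta / math.sqrt(2.0))   # Phi(-beta)

areas = [0.3, 0.4, 0.5, 0.8, 1.5]                   # weight ~ area
pf = [failure_probability(a) for a in areas]
```

Small areas give light but failure-prone designs; driving the failure probability toward zero requires rapidly growing area (weight), consistent with the inverted-S behavior described above.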
Energy Approach-Based Simulation of Structural Materials High-Cycle Fatigue
NASA Astrophysics Data System (ADS)
Balayev, A. F.; Korolev, A. V.; Kochetkov, A. V.; Sklyarova, A. I.; Zakharov, O. V.
2016-02-01
The paper describes the mechanism of micro-crack development in solid structural materials based on the theory of brittle fracture. A probability function of the material's crack energy distribution is obtained using a probabilistic approach. The paper states the energy conditions for crack growth under high-cycle loading of the material. A formula is given for calculating the amount of energy absorbed during crack growth. The paper proposes a high-cycle fatigue evaluation criterion for determining the maximum permissible number of loading cycles of a solid body, beyond which micro-cracks start growing rapidly up to destruction.
Failed rib region prediction in a human body model during crash events with precrash braking.
Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S
2018-02-28
The objective of this study is 2-fold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method for rib fracture predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but 1 case where they were equal. The failed region patterns between models are similar; however, there are differences that arise due to stress reduced from element elimination that cause probabilistic failed regions to continue to rise after no deterministic failed region would be predicted. 
Both the probabilistic and deterministic methods indicate similar trends with regard to the effect of precrash braking; however, there are tradeoffs. The deterministic failed-region method is more spatially sensitive to failure and more sensitive to belt loads. The probabilistic failed-region method allows increased postprocessing capability with respect to age. The probabilistic failed-region method predicted more failed regions than the deterministic failed-region method due to differences in force distribution.
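The contrast between the two failure rules can be sketched at the element level: the deterministic rule is a hard strain threshold, while the probabilistic rule maps strain to a failure probability through an age-dependent function. The logistic form and all coefficients below are hypothetical, not the GHBMC model's functions.

```python
import math

def element_failure_probability(strain, age, strain50_base=0.022,
                                age_slope=-1.5e-4, steepness=600.0):
    """Probabilistic counterpart to the deterministic rule: failure
    probability is a logistic function of effective plastic strain,
    with the 50%-failure strain decreasing with age (hypothetical)."""
    strain50 = strain50_base + age_slope * (age - 25.0)
    return 1.0 / (1.0 + math.exp(-steepness * (strain - strain50)))

def element_failed_deterministic(strain, threshold=0.018):
    """Deterministic rule used in the study: element removed once 1.8%
    effective plastic strain is exceeded."""
    return strain > threshold
```

Because the probabilistic rule assigns nonzero failure probability below the hard threshold, it naturally predicts additional failed regions, in line with the comparison above.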
Role of load history in intervertebral disc mechanics and intradiscal pressure generation.
Hwang, David; Gabai, Adam S; Yu, Miao; Yew, Alvin G; Hsieh, Adam H
2012-01-01
Solid-fluid interactions play an important role in mediating viscoelastic behaviour of biological tissues. In the intervertebral disc, water content is governed by a number of factors, including age, disease and mechanical loads, leading to changes in stiffness characteristics. We hypothesized that zonal stress distributions depend on load history, or the prior stresses experienced by the disc. To investigate these effects, rat caudal motion segments were subjected to compressive creep biomechanical testing in vitro using a protocol that consisted of two phases: a Prestress Phase (varied to represent different histories of load) followed immediately by an Exertion Phase, identical across all Prestress groups. Three analytical models were used to fit the experimental data in order to evaluate load history effects on gross and zonal disc mechanics. Model results indicated that while gross transient response was insensitive to load history, there may be changes in the internal mechanics of the disc. In particular, a fluid transport model suggested that the role of the nucleus pulposus in resisting creep during Exertion depended on Prestress conditions. Separate experiments using similarly defined load history regimens were performed to verify these predictions by measuring intradiscal pressure with a fibre optic sensor. We found that the ability for intradiscal pressure generation was load history-dependent and exhibited even greater sensitivity than predicted by analytical models. A 0.5 MPa Exertion load resulted in 537.2 kPa IDP for low magnitude Prestress compared with 373.7 kPa for high magnitude Prestress. Based on these measurements, we developed a simple model that may describe the pressure-shear environment in the nucleus pulposus. These findings may have important implications on our understanding of how mechanical stress contributes to disc health and disease etiology.
Probabilistic framework for product design optimization and risk management
NASA Astrophysics Data System (ADS)
Keski-Rahkonen, J. K.
2018-05-01
Probabilistic methods have gradually gained ground in engineering practice, but it is still the industry standard to use deterministic safety-margin approaches for dimensioning components and qualitative methods for managing product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and on how to optimize reliability through life-cycle cost analysis. This paper reviews the existing methods in the literature and tries to harness their best features while simplifying the process so that it is applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life-cycle cost optimization. The main focus is on mechanical failure modes because of the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
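The core of the recommended process, Monte Carlo sampling on top of a load-resistance model, can be sketched in a few lines. The normal distributions below are illustrative assumptions, not values from the paper:

```python
import random

def mc_failure_probability(n=100_000, seed=1):
    """Estimate P(load > resistance) by direct Monte Carlo sampling.

    Illustrative distributions (assumed, in arbitrary units):
    resistance ~ Normal(500, 40), load ~ Normal(350, 60).
    """
    rng = random.Random(seed)
    failures = sum(
        rng.gauss(350, 60) > rng.gauss(500, 40)  # load exceeds resistance
        for _ in range(n)
    )
    return failures / n

p_f = mc_failure_probability()
```

For these assumed distributions the margin R - S is normal with mean 150 and standard deviation sqrt(40^2 + 60^2) ≈ 72, so the analytic failure probability is about 1.9%, and the Monte Carlo estimate should land close to that.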
Taborri, Juri; Rossi, Stefano; Palermo, Eduardo; Patanè, Fabrizio; Cappa, Paolo
2014-09-02
In this work, we applied a hierarchical weighted decision, proposed and used in other research fields, to the recognition of gait phases. The developed and validated novel distributed classifier is based on a hierarchical weighted decision over the outputs of scalar Hidden Markov Models (HMMs) applied to the angular velocities of the foot, shank, and thigh. The angular velocities of ten healthy subjects were acquired via three uni-axial gyroscopes embedded in inertial measurement units (IMUs) during a walking task, repeated three times, on a treadmill. After validating the novel distributed classifier, as well as the scalar and vectorial classifiers already proposed in the literature, with a cross-validation, the classifiers were compared for sensitivity, specificity, and computational load for all combinations of the three targeted anatomical segments. Moreover, the performance of the novel distributed classifier in the estimation of gait variability, in terms of mean time and coefficient of variation, was evaluated. The highest values of specificity and sensitivity (>0.98) for the three classifiers examined here were obtained when the angular velocity of the foot was processed. Distributed and vectorial classifiers reached acceptable values (>0.95) when the angular velocities of the shank and thigh were analyzed. Distributed and scalar classifiers showed a computational load about 100 times lower than that of the vectorial classifier. In addition, distributed classifiers showed excellent reliability for the evaluation of mean time and good to excellent reliability for the coefficient of variation. In conclusion, owing to its better performance and small computational load, the proposed novel distributed classifier can be implemented in real-time applications of gait phase recognition, for example to evaluate gait variability in patients or to control active orthoses for the recovery of mobility of lower limb joints.
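The fusion step can be illustrated with a minimal sketch: each per-segment scalar classifier emits a phase label together with a weight (here a hypothetical validation accuracy), and the fused label is the weighted majority. This is a generic weighted-vote rule for illustration, not the authors' exact scheme:

```python
def fused_phase(segment_votes):
    """Combine per-segment gait-phase labels into one label by
    weighted majority vote.

    segment_votes: {segment_name: (phase_label, weight)}, where the
    weight could be, e.g., that classifier's validation accuracy
    (a hypothetical weighting choice).
    """
    scores = {}
    for _segment, (phase, weight) in segment_votes.items():
        scores[phase] = scores.get(phase, 0.0) + weight
    return max(scores, key=scores.get)

# Toy example: two segments vote "stance", one votes "swing".
votes = {"foot": ("stance", 0.99), "shank": ("swing", 0.96), "thigh": ("stance", 0.95)}
```

Here the combined "stance" weight (1.94) outvotes "swing" (0.96), so the fused decision is "stance" even though the shank classifier disagrees.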
Effect of cognitive load on speech prosody in aviation: Evidence from military simulator flights.
Huttunen, Kerttu; Keränen, Heikki; Väyrynen, Eero; Pääkkönen, Rauno; Leino, Tuomo
2011-01-01
Mental overload directly affects safety in aviation and needs to be alleviated. Speech recordings are obtained non-invasively and as such are feasible for monitoring cognitive load. We recorded speech of 13 military pilots while they were performing a simulator task. Three types of cognitive load (load on situation awareness, information processing and decision making) were rated by a flight instructor separately for each flight phase and participant. As a function of increased cognitive load, the mean utterance-level fundamental frequency (F0) increased, on average, by 7 Hz and the mean vocal intensity increased by 1 dB. In the most intensive simulator flight phases, mean F0 increased by 12 Hz and mean intensity, by 1.5 dB. At the same time, the mean F0 range decreased by 5 Hz, on average. Our results showed that prosodic features of speech can be used to monitor speaker state and support pilot training in a simulator environment. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Probabilistic verification of cloud fraction from three different products with CALIPSO
NASA Astrophysics Data System (ADS)
Jung, B. J.; Descombes, G.; Snyder, C.
2017-12-01
In this study, we present how Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) can be used for probabilistic verification of cloud fraction, and apply this probabilistic approach to three cloud fraction products: a) The Air Force Weather (AFW) World Wide Merged Cloud Analysis (WWMCA), b) Satellite Cloud Observations and Radiative Property retrieval Systems (SatCORPS) from NASA Langley Research Center, and c) Multi-sensor Advection Diffusion nowCast (MADCast) from NCAR. Although they differ in their details, both WWMCA and SatCORPS retrieve cloud fraction from satellite observations, mainly of infrared radiances. MADCast utilizes in addition a short-range forecast of cloud fraction (provided by the Model for Prediction Across Scales, assuming cloud fraction is advected as a tracer) and a column-by-column particle filter implemented within the Gridpoint Statistical Interpolation (GSI) data-assimilation system. The probabilistic verification considers the retrieved or analyzed cloud fractions as predicting the probability of cloud at any location within a grid cell and the 5-km vertical feature mask (VFM) from CALIPSO level-2 products as a point observation of cloud.
Kindermans, Pieter-Jan; Verschore, Hannes; Schrauwen, Benjamin
2013-10-01
In recent years, in an attempt to maximize performance, machine learning approaches for event-related potential (ERP) spelling have become more and more complex. In this paper, we have taken a step back as we wanted to improve the performance without building an overly complex model, that cannot be used by the community. Our research resulted in a unified probabilistic model for ERP spelling, which is based on only three assumptions and incorporates language information. On top of that, the probabilistic nature of our classifier yields a natural dynamic stopping strategy. Furthermore, our method uses the same parameters across 25 subjects from three different datasets. We show that our classifier, when enhanced with language models and dynamic stopping, improves the spelling speed and accuracy drastically. Additionally, we would like to point out that as our model is entirely probabilistic, it can easily be used as the foundation for complex systems in future work. All our experiments are executed on publicly available datasets to allow for future comparison with similar techniques.
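The unified probabilistic view described above can be sketched as repeated Bayesian updates of a letter posterior, starting from a language-model prior, with spelling stopping as soon as the posterior is confident. The prior, likelihoods, and threshold below are toy values for illustration, not the paper's model:

```python
def update_posterior(prior, likelihoods):
    """One Bayes step: posterior proportional to prior times likelihood."""
    post = {c: prior[c] * likelihoods.get(c, 1e-9) for c in prior}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

def spell_letter(lm_prior, evidence_stream, threshold=0.9):
    """Accumulate per-flash EEG evidence until one letter's posterior
    exceeds the threshold: the natural dynamic-stopping rule."""
    posterior = dict(lm_prior)
    for n_flashes, likelihoods in enumerate(evidence_stream, start=1):
        posterior = update_posterior(posterior, likelihoods)
        best = max(posterior, key=posterior.get)
        if posterior[best] >= threshold:
            return best, n_flashes
    return max(posterior, key=posterior.get), len(evidence_stream)

prior = {"A": 0.5, "B": 0.3, "C": 0.2}          # toy language-model prior
evidence = [{"A": 0.8, "B": 0.1, "C": 0.1}] * 5  # toy per-flash likelihoods
letter, flashes = spell_letter(prior, evidence)
```

With this toy evidence consistently favouring "A", the posterior crosses the 0.9 threshold after two flashes, so the speller stops early instead of using all five.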
30 CFR 77.905 - Connection of single-phase loads.
Code of Federal Regulations, 2012 CFR
2012-07-01
... COAL MINES Low- and Medium-Voltage Alternating Current Circuits § 77.905 Connection of single-phase loads. Single-phase loads shall be connected phase-to-phase in resistance grounded systems. ...
A simple rule for quadrupedal gait generation determined by leg loading feedback: a modeling study
Fukuoka, Yasuhiro; Habu, Yasushi; Fukui, Takahiro
2015-01-01
We discovered a specific rule for generating typical quadrupedal gaits (the order of movement of the four legs) through simulated quadrupedal locomotion, in which unprogrammed gaits (diagonal/lateral sequence walks, left/right-lead canters, and left/right-lead transverse gallops) spontaneously emerged because of leg loading feedback to the CPGs hard-wired to produce a default trot. Additionally, all gaits transitioned according to speed, as seen in animals. We have therefore hypothesized that various gaits derive from a trot because of posture control through leg loading feedback. The body tilt on the two support legs of each diagonal pair during trotting was classified into three types (level, tilted up, or tilted down) according to speed. The load difference between the two legs led to a phase difference between their CPGs via the loading feedback, resulting in nine gaits (3² = 9: three tilt states to the power of two diagonal pairs), including those mentioned above. PMID:25639661
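The counting of nine gaits (three tilt states over two diagonal support pairs) can be made explicit. The tilt labels come from the abstract; each gait corresponds to one assignment of a tilt state to each diagonal pair:

```python
from itertools import product

TILTS = ("level", "tilted_up", "tilted_down")

# One tilt state per diagonal support pair: 3 ** 2 = 9 combinations,
# each corresponding to one of the nine emergent gaits.
gaits = list(product(TILTS, repeat=2))
```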
Morphology effect of nano-hydroxyapatite as a drug carrier of methotrexate.
Sun, Haina; Liu, Shanshan; Zeng, Xiongfeng; Meng, Xianguang; Zhao, Lina; Wan, Yizao; Zuo, Guifu
2017-09-13
In this study, the morphology effect of nano-hydroxyapatite as a drug carrier was investigated for the first time. Hydroxyapatite/methotrexate (HAp/MTX) hybrids with different morphologies were successfully prepared in situ using polyethylene glycol (PEG) as a template. SEM, TEM, XRD and FTIR results confirmed that hybrids of different morphologies (laminated, rod-like and spherical) with similar phase composition and functional groups were obtained by changing the preparation parameters. UV-Vis spectroscopy was used to determine the drug loading capacity and drug release mechanism of the three hybrids with different morphologies. It is concluded that the laminated hybrid exhibits a higher drug loading capacity than the other two hybrids, and all three hybrids showed sustained slow release that was fitted well by the Bhaskar equation. Additionally, the results of the in vitro bioassay confirm that the inhibition efficacy of the three hybrids was positively correlated with their drug loading capacity.
NASA Astrophysics Data System (ADS)
Mendonça, J. R. G.
2018-04-01
We propose and investigate a one-parameter probabilistic mixture of one-dimensional elementary cellular automata under the guise of a model for the dynamics of a single-species unstructured population with nonoverlapping generations in which individuals have smaller probability of reproducing and surviving in a crowded neighbourhood but also suffer from isolation and dispersal. Remarkably, the first-order mean field approximation to the dynamics of the model yields a cubic map containing terms representing both logistic and weak Allee effects. The model has a single absorbing state devoid of individuals, but depending on the reproduction and survival probabilities can achieve a stable population. We determine the critical probability separating these two phases and find that the phase transition between them is in the directed percolation universality class of critical behaviour.
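A cubic map of the kind the first-order mean-field approximation produces can be sketched generically. The functional form and the coefficients r (growth rate) and a (Allee threshold) below are illustrative assumptions; in the paper the corresponding terms derive from the reproduction and survival probabilities:

```python
def cubic_map(x, r=0.9, a=0.2):
    """Illustrative cubic map combining logistic growth with a weak
    Allee effect: growth is positive only above the threshold a and
    saturates at the carrying capacity 1."""
    return x + r * x * (1.0 - x) * (x - a)

def iterate(x0, n=200):
    """Iterate the map n times from initial density x0."""
    x = x0
    for _ in range(n):
        x = cubic_map(x)
    return x
```

Starting below the Allee threshold, the population falls into the absorbing empty state; starting above it, the population settles at the carrying capacity, mirroring the two phases the abstract describes.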
Probabilistic Multi-Sensor Fusion Based Indoor Positioning System on a Mobile Device
He, Xiang; Aloi, Daniel N.; Li, Jia
2015-01-01
Nowadays, smart mobile devices include more and more sensors on board, such as motion sensors (accelerometer, gyroscope, magnetometer), wireless signal strength indicators (WiFi, Bluetooth, Zigbee), and visual sensors (LiDAR, camera). People have developed various indoor positioning techniques based on these sensors. In this paper, the probabilistic fusion of multiple sensors is investigated in a hidden Markov model (HMM) framework for mobile-device user-positioning. We propose a graph structure to store the model constructed by multiple sensors during the offline training phase, and a multimodal particle filter to seamlessly fuse the information during the online tracking phase. Based on our algorithm, we develop an indoor positioning system on the iOS platform. The experiments carried out in a typical indoor environment have shown promising results for our proposed algorithm and system design. PMID:26694387
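One predict-weight-resample cycle of a bootstrap particle filter, the kind of machinery used in the online tracking phase, can be sketched generically. The 1-D corridor, motion model, and WiFi-style sensor likelihood below are stand-ins for illustration, not the paper's multimodal models:

```python
import math
import random

def pf_step(particles, weights, move, likelihood, rng):
    """One predict-weight-resample cycle of a bootstrap particle filter."""
    particles = [move(p, rng) for p in particles]                  # predict
    weights = [w * likelihood(p) for p, w in zip(particles, weights)]
    z = sum(weights) or 1e-12
    weights = [w / z for w in weights]                             # normalize
    particles = rng.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)      # resample

# Toy 1-D corridor: particles start spread over 0-10 m, and a
# hypothetical WiFi-style sensor places the user near 3.0 m.
rng = random.Random(0)
particles = [rng.uniform(0.0, 10.0) for _ in range(500)]
weights = [1.0 / 500] * 500
move = lambda p, r: p + r.gauss(0.0, 0.2)
likelihood = lambda p: math.exp(-((p - 3.0) ** 2) / (2 * 0.5 ** 2))
for _ in range(5):
    particles, weights = pf_step(particles, weights, move, likelihood, rng)
estimate = sum(particles) / len(particles)
```

After a few cycles the particle cloud collapses around the sensor-supported position, so the posterior mean is a usable position estimate.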
Probabilistic failure analysis of bone using a finite element model of mineral-collagen composites.
Dong, X Neil; Guda, Teja; Millwater, Harry R; Wang, Xiaodu
2009-02-09
Microdamage accumulation is a major pathway for energy dissipation during the post-yield deformation of bone. In this study, a two-dimensional probabilistic finite element model of a mineral-collagen composite was developed to investigate the influence of the tissue and ultrastructural properties of bone on the evolution of microdamage from an initial defect in tension. The probabilistic failure analyses indicated that the microdamage progression would be along the plane of the initial defect when the debonding at mineral-collagen interfaces was either absent or limited in the vicinity of the defect. In this case, the formation of a linear microcrack would be facilitated. However, the microdamage progression would be scattered away from the initial defect plane if interfacial debonding takes place at a large scale. This would suggest the possible formation of diffuse damage. In addition to interfacial debonding, the sensitivity analyses indicated that the microdamage progression was also dependent on the other material and ultrastructural properties of bone. The intensity of stress concentration accompanied with microdamage progression was more sensitive to the elastic modulus of the mineral phase and the nonlinearity of the collagen phase, whereas the scattering of failure location was largely dependent on the mineral to collagen ratio and the nonlinearity of the collagen phase. The findings of this study may help understanding the post-yield behavior of bone at the ultrastructural level and shed light on the underlying mechanism of bone fractures.
Surface Damage Mechanism of Monocrystalline Si Under Mechanical Loading
NASA Astrophysics Data System (ADS)
Zhao, Qingliang; Zhang, Quanli; To, Suet; Guo, Bing
2017-03-01
Single-point diamond scratching and nanoindentation on a monocrystalline silicon wafer were performed to investigate the surface damage mechanism of Si under contact loading. The results showed that three typical stages of material removal appeared during dynamic scratching, and a chemical reaction of Si with the diamond indenter and oxygen occurred at high temperature. In addition, Raman spectra at various points in the scratching groove indicated that the Si-I to β-tin (Si-II) transformation and the subsequent β-tin (Si-II) to amorphous Si transformation appeared under the rapid loading/unloading of the diamond grit, and the volume change induced by the phase transformation resulted in a critical (ductile-brittle transition) depth of cut (~60 nm ± 15 nm) much lower than the theoretically calculated value (~387 nm). Moreover, it also led to abnormal load-displacement curves in the nanoindentation tests, resulting in the appearance of elbow and pop-out effects (~270 nm at 20 s, 50 mN), which were highly dependent on the loading/unloading conditions. In summary, the phase transformation of Si promoted surface deformation and fracture under both static and dynamic mechanical loading.
NASA Astrophysics Data System (ADS)
Robbins, Joshua; Voth, Thomas
2011-06-01
Material response to dynamic loading is often dominated by microstructure such as grain topology, porosity, inclusions, and defects; however, many models rely on assumptions of homogeneity. We use the probabilistic finite element method (WK Liu, IJNME, 1986) to introduce local uncertainty to account for material heterogeneity. The PFEM uses statistical information about the local material response (i.e., its expectation, coefficient of variation, and autocorrelation) drawn from knowledge of the microstructure, single crystal behavior, and direct numerical simulation (DNS) to determine the expectation and covariance of the system response (velocity, strain, stress, etc). This approach is compared to resolved grain-scale simulations of the equivalent system. The microstructures used for the DNS are produced using Monte Carlo simulations of grain growth, and a sufficient number of realizations are computed to ensure a meaningful comparison. Finally, comments are made regarding the suitability of one-dimensional PFEM for modeling material heterogeneity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
NASA Astrophysics Data System (ADS)
Wei, Yaochi; Kim, Seokpum; Horie, Yasuyuki; Zhou, Min
2017-06-01
A computational approach is developed to predict the probabilistic ignition thresholds of polymer-bonded explosives (PBXs). The simulations explicitly account for microstructure, constituent properties, and interfacial responses and capture the processes responsible for the development of hotspots and damage. The specific damage mechanisms considered include viscoelasticity, viscoplasticity, fracture, post-fracture contact, frictional heating, and heat conduction. The probabilistic analysis uses sets of statistically similar microstructure samples to mimic relevant experiments for statistical variations of material behavior due to inherent material heterogeneities. The ignition thresholds and corresponding ignition probability maps are predicted for PBX 9404 and PBX 9501 for the impact loading regime of Up = 200-1200 m/s. The James and Walker-Wasley relations are utilized to establish explicit analytical expressions for the ignition probability as a function of load intensity. The predicted results are in good agreement with available experimental measurements. The capability to computationally predict the macroscopic response from material microstructures and basic constituent properties lends itself to the design of new materials and the analysis of existing materials. The authors gratefully acknowledge the support from the Air Force Office of Scientific Research (AFOSR) and the Defense Threat Reduction Agency (DTRA).
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Soditus, Sherry; Hendricks, Robert C.; Zaretsky, Erwin V.
2002-01-01
Over the past two decades there has been considerable effort by NASA Glenn and others to develop probabilistic codes to predict with reasonable engineering certainty the life and reliability of critical components in rotating machinery and, more specifically, in the rotating sections of airbreathing and rocket engines. These codes have, to a very limited extent, been verified with relatively small bench rig type specimens under uniaxial loading. Because of the small and very narrow database the acceptance of these codes within the aerospace community has been limited. An alternate approach to generating statistically significant data under complex loading and environments simulating aircraft and rocket engine conditions is to obtain, catalog and statistically analyze actual field data. End users of the engines, such as commercial airlines and the military, record and store operational and maintenance information. This presentation describes a cooperative program between the NASA GRC, United Airlines, USAF Wright Laboratory, U.S. Army Research Laboratory and Australian Aeronautical & Maritime Research Laboratory to obtain and analyze these airline data for selected components such as blades, disks and combustors. These airline data will be used to benchmark and compare existing life prediction codes.
30 CFR 77.806 - Connection of single-phase loads.
Code of Federal Regulations, 2010 CFR
2010-07-01
... COAL MINES Surface High-Voltage Distribution § 77.806 Connection of single-phase loads. Single-phase loads, such as transformer primaries, shall be connected phase to phase in resistance grounded systems. ...
NASA Astrophysics Data System (ADS)
Scharfenberg, Franz-Josef; Bogner, Franz X.
2013-02-01
This study classified students into different cognitive load (CL) groups by means of cluster analysis based on their experienced CL in a gene technology outreach lab which has instructionally been designed with regard to CL theory. The relationships of the identified student CL clusters to learner characteristics, laboratory variables, and cognitive achievement were examined using a pre-post-follow-up design. Participants of our day-long module Genetic Fingerprinting were 409 twelfth-graders. During the module instructional phases (pre-lab, theoretical, experimental, and interpretation phases), we measured the students' mental effort (ME) as an index of CL. By clustering the students' module-phase-specific ME pattern, we found three student CL clusters which were independent of the module instructional phases, labeled as low-level, average-level, and high-level loaded clusters. Additionally, we found two student CL clusters that were each particular to a specific module phase. Their members reported especially high ME invested in one phase each: within the pre-lab phase and within the interpretation phase. Differentiating the clusters, we identified uncertainty tolerance, prior experience in experimentation, epistemic interest, and prior knowledge as relevant learner characteristics. We found relationships to cognitive achievement, but no relationships to the examined laboratory variables. Our results underscore the importance of pre-lab and interpretation phases in hands-on teaching in science education and the need for teachers to pay attention to these phases, both inside and outside of outreach laboratory learning settings.
Microleakage Evaluation at Implant-Abutment Interface Using Radiotracer Technique
Siadat, Hakimeh; Arshad, Mahnaz; Mahgoli, Hossein-Ali; Fallahi, Babak
2016-01-01
Objectives: Microbial leakage through the implant-abutment (I-A) interface results in bacterial colonization in two-piece implants. The aim of this study was to compare microleakage rates in three types of Replace abutments, namely Snappy, GoldAdapt, and customized ceramic, using radiotracing. Materials and Methods: Three groups of five implants, one for each abutment type, plus one positive and one negative control were considered (a total of 17 regular body implants). A torque of 35 N·cm was applied to the abutments. The samples were immersed in thallium-201 radioisotope solution for 24 hours to let the radiotracer leak through the I-A interface. Then, gamma photons received from the radiotracers were counted using a gamma counter device. In the next phase, a cyclic fatigue loading process was applied, followed by the same steps of immersion in the radioactive solution and photon counting. Results: The rate of microleakage significantly increased (P≤0.05) in all three types of abutments (i.e., Snappy, GoldAdapt, and ceramic) after cyclic loading. No statistically significant differences were observed between abutment types after cyclic loading. Conclusions: Microleakage significantly increases after cyclic loading in all three Replace abutments (GoldAdapt, Snappy, ceramic). The lowest microleakage before and after cyclic loading was observed in GoldAdapt, followed by Snappy and ceramic. PMID:28392814
Probabilistic fracture finite elements
NASA Technical Reports Server (NTRS)
Liu, W. K.; Belytschko, T.; Lua, Y. J.
1991-01-01
Probabilistic Fracture Mechanics (PFM) is a promising method for estimating the fatigue life and inspection cycles of mechanical and structural components. The Probability Finite Element Method (PFEM), which is based on second moment analysis, has proved to be a promising, practical approach to handling problems with uncertainties. As the PFEM provides a powerful computational tool to determine the first and second moments of random parameters, the second moment reliability method can easily be combined with the PFEM to obtain measures of the reliability of a structural system. The method is also being applied to fatigue crack growth. Uncertainties in the material properties of advanced materials such as polycrystalline alloys, ceramics, and composites are commonly observed in experimental tests. This is mainly attributed to intrinsic microcracks, which are randomly distributed as a result of the applied load and the residual stress.
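For independent normal load and resistance, the second-moment reliability calculation referred to above reduces to the classic first-order second-moment (FOSM) index. The numbers in the example are arbitrary illustrative values:

```python
import math

def fosm_reliability(mu_r, sigma_r, mu_s, sigma_s):
    """First-order second-moment reliability: with independent normal
    resistance R and load effect S, the margin M = R - S gives
    beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2) and
    P_f = Phi(-beta), the standard normal CDF evaluated at -beta."""
    beta = (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)
    p_f = 0.5 * math.erfc(beta / math.sqrt(2.0))  # Phi(-beta) via erfc
    return beta, p_f

beta, p_f = fosm_reliability(500, 40, 350, 60)
```

For resistance N(500, 40) and load effect N(350, 60), this gives beta of about 2.08 and a failure probability of about 1.9%.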
Rocketdyne PSAM: In-house enhancement/application
NASA Technical Reports Server (NTRS)
Newell, J. F.; Rajagopal, K. R.; Ohara, K.
1991-01-01
Development of the Probabilistic Design Analysis (PDA) process for rocket engines was initiated. This will give engineers a quantitative assessment of calculated reliability during the design process. The PDA will help choose better designs, make them more robust, and help identify critical tests that demonstrate key reliability issues, improving confidence in engine capabilities. Rocketdyne's involvement with the Composite Loads Spectra (CLS) and Probabilistic Structural Analysis Methodology (PSAM) contracts started this effort, and these contracts are key elements in the ongoing developments. Internal development efforts and hardware applications complement and extend the CLS and PSAM efforts. The completion of the CLS option work and the follow-on PSAM developments will also be integral parts of this methodology. A brief summary of these efforts is presented.
ZERO: probabilistic routing for deploy and forget Wireless Sensor Networks.
Vilajosana, Xavier; Llosa, Jordi; Pacho, Jose Carlos; Vilajosana, Ignasi; Juan, Angel A; Vicario, Jose Lopez; Morell, Antoni
2010-01-01
As Wireless Sensor Networks are being adopted by industry and agriculture for large-scale and unattended deployments, the need for reliable and energy-conserving protocols becomes critical. Physical and link layer efforts toward energy conservation are mostly not considered by routing protocols, which concentrate instead on maintaining reliability and throughput. Gradient-based routing protocols route data through the most reliable links, aiming to ensure 99% packet delivery. However, they suffer from the so-called "hot spot" problem: the most reliable routes deplete their energy quickly, eventually partitioning the network and reducing the monitored area. To cope with this "hot spot" problem we propose ZERO, a combined approach at the network and link layers that increases network lifespan while preserving reliability levels by means of probabilistic load balancing techniques.
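The probabilistic load-balancing idea can be sketched as randomized next-hop selection whose weights trade off link reliability against residual energy, so traffic drifts away from the single most reliable (and therefore fastest-draining) route. The scoring rule and the alpha parameter are illustrative assumptions, not ZERO's actual mechanism:

```python
import random

def pick_next_hop(neighbors, alpha=0.5, rng=random):
    """Choose the next hop at random with probability proportional to
    reliability**alpha * residual_energy**(1 - alpha).

    neighbors: {node_id: (link_reliability, residual_energy_fraction)}
    """
    nodes = list(neighbors)
    scores = [
        (rel ** alpha) * (energy ** (1.0 - alpha))
        for rel, energy in (neighbors[n] for n in nodes)
    ]
    return rng.choices(nodes, weights=scores, k=1)[0]

# Toy example: "a" is very reliable but nearly drained; "b" is slightly
# less reliable but fresh, so most traffic should shift to "b".
rng = random.Random(7)
neighbors = {"a": (0.99, 0.1), "b": (0.9, 0.9)}
picks = [pick_next_hop(neighbors, rng=rng) for _ in range(2000)]
```

A purely gradient-based rule would send everything through "a"; the randomized rule spreads load, which is exactly the hot-spot mitigation the abstract describes.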
Evaluation of Lithofacies Up-Scaling Methods for Probabilistic Prediction of Carbon Dioxide Behavior
NASA Astrophysics Data System (ADS)
Park, J. Y.; Lee, S.; Lee, Y. I.; Kihm, J. H.; Kim, J. M.
2017-12-01
Behavior of carbon dioxide injected into target reservoir (storage) formations is highly dependent on heterogeneities of geologic lithofacies and properties. These heterogeneous lithofacies and properties basically have probabilistic characteristics. Thus, their probabilistic evaluation has to be incorporated properly into predicting the behavior of injected carbon dioxide in heterogeneous storage formations. In this study, a series of three-dimensional geologic modeling is performed first using SKUA-GOCAD (ASGA and Paradigm) to establish lithofacies models of the Janggi Conglomerate in the Janggi Basin, Korea within a modeling domain. The Janggi Conglomerate is composed of mudstone, sandstone, and conglomerate, and it has been identified as a potential reservoir rock (clastic saline formation) for geologic carbon dioxide storage. Its lithofacies information is obtained from four boreholes and used in lithofacies modeling. Three different up-scaling methods (i.e., nearest to cell center, largest proportion, and random) are applied, and lithofacies modeling is performed 100 times for each up-scaling method. The lithofacies models are then compared and analyzed with the borehole data to evaluate the relative suitability of the three up-scaling methods. Finally, the lithofacies models are converted into coarser lithofacies models within the same modeling domain with larger grid blocks using the three up-scaling methods, and a series of multiphase thermo-hydrological numerical simulation is performed using TOUGH2-MP (Zhang et al., 2008) to probabilistically predict the behavior of injected carbon dioxide. The coarser lithofacies models are also compared and analyzed with the borehole data and finer lithofacies models to evaluate the relative suitability of the three up-scaling methods.
Three-dimensional geologic modeling, up-scaling, and multiphase thermo-hydrological numerical simulation as linked methodologies presented in this study can be utilized as a practical probabilistic evaluation tool to predict behavior of injected carbon dioxide and even to analyze its leakage risk. This work was supported by the Korea CCS 2020 Project of the Korea Carbon Capture and Sequestration R&D Center (KCRC) funded by the National Research Foundation (NRF), Ministry of Science and ICT (MSIT), Korea.
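The three up-scaling rules named in this record (nearest to cell center, largest proportion, and random) can be sketched on a one-dimensional lithofacies column. This is a minimal illustration under invented facies codes, not the SKUA-GOCAD implementation; the function name and data are hypothetical.

```python
import numpy as np

def upscale(fine, factor, method, rng=None):
    """Up-scale a 1D lithofacies column (integer facies codes) by grouping
    `factor` fine cells into one coarse cell.

    method: 'nearest' - take the fine cell nearest the coarse-cell center
            'largest' - take the most frequent facies in the group
            'random'  - take a randomly chosen fine cell from the group
    """
    fine = np.asarray(fine)
    n_coarse = len(fine) // factor
    rng = rng or np.random.default_rng(0)
    coarse = np.empty(n_coarse, dtype=fine.dtype)
    for i in range(n_coarse):
        group = fine[i * factor:(i + 1) * factor]
        if method == 'nearest':
            coarse[i] = group[factor // 2]        # cell nearest the center
        elif method == 'largest':
            codes, counts = np.unique(group, return_counts=True)
            coarse[i] = codes[np.argmax(counts)]  # largest proportion wins
        elif method == 'random':
            coarse[i] = rng.choice(group)
        else:
            raise ValueError(method)
    return coarse

# Facies codes: 0 = mudstone, 1 = sandstone, 2 = conglomerate
column = [0, 0, 1, 1, 1, 2, 2, 0, 0]
print(upscale(column, 3, 'largest'))   # one coarse cell per 3 fine cells
```

Comparing the coarse columns from each rule against the borehole column is then a matter of counting facies mismatches, which is the essence of the suitability evaluation described above.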
Myers, Casey A.; Laz, Peter J.; Shelburne, Kevin B.; Davidson, Bradley S.
2015-01-01
Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim for three stages: inverse kinematics, inverse dynamics, and muscle force prediction. A series of Monte Carlo simulations were performed that considered intrarater variability in marker placement, movement artifacts in each phase of gait, variability in body segment parameters, and variability in muscle parameters calculated from cadaveric investigations. Propagation of uncertainty was performed by using the output distributions from one stage as the input distributions to subsequent stages. Confidence bounds (5–95%) and sensitivity of outputs to model input parameters were calculated throughout the gait cycle. The combined impact of uncertainty resulted in mean bounds that ranged from 2.7° to 6.4° in joint kinematics, 2.7 to 8.1 N m in joint moments, and 35.8 to 130.8 N in muscle forces. The impact of movement artifact was 1.8 times larger than any other propagated source. Sensitivity to specific body segment parameters and muscle parameters was linked to where in the gait cycle they were calculated. We anticipate that through the increased use of probabilistic tools, researchers will better understand the strengths and limitations of their musculoskeletal simulations and more effectively use simulations to evaluate hypotheses and inform clinical decisions. PMID:25404535
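The propagation scheme described above — sampling input uncertainties, running each stage, and feeding the output distribution of one stage into the next — can be sketched with a toy two-stage model. All distributions and magnitudes here are illustrative stand-ins, not the study's OpenSim pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 5000  # Monte Carlo trials

# Stage 1 (toy "inverse dynamics"): joint moment = moment arm x ground
# reaction force, with both inputs drawn from illustrative uncertainty
# distributions (e.g. marker-placement and force-plate error).
moment_arm = rng.normal(0.05, 0.005, N)   # m
grf = rng.normal(800.0, 40.0, N)          # N

moment = moment_arm * grf                 # output DISTRIBUTION of stage 1

# Stage 2 (toy "muscle force prediction"): reuse stage 1's output
# distribution as the input distribution, as in the propagation scheme.
muscle_moment_arm = rng.normal(0.04, 0.004, N)  # m
muscle_force = moment / muscle_moment_arm       # N

lo, hi = np.percentile(muscle_force, [5, 95])   # 5-95% confidence bounds
print(f"5-95% bounds on muscle force: {lo:.0f} .. {hi:.0f} N")
```

The same pattern scales to any number of stages: each stage consumes sampled arrays rather than point values, so the final percentiles reflect every propagated source at once.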
Method of energy load management using PCM for heating and cooling of buildings
Stovall, T.K.; Tomlinson, J.J.
1996-03-26
A method is described for energy load management for the heating and cooling of a building. The method involves utilizing a wallboard as a portion of the building, the wallboard containing about 5 to about 30 wt.% phase change material such that melting of the phase change material occurs during a rise in temperature within the building to remove heat from the air, and a solidification of the phase change material occurs during a lowering of the temperature to dispense heat into the air. At the beginning of either of these cooling or heating cycles, the phase change material is preferably "fully charged". In preferred installations one type of wallboard is used on the interior surfaces of exterior walls, and another type as the surface on interior walls. The particular PCM is chosen for the desired wall and room temperature of these locations. In addition, load management is achieved by using PCM-containing wallboards that form cavities of the building such that the cavities can be used for the air handling duct and plenum system of the building. Enhanced load management is achieved by using a thermostat with a reduced dead band of about the upper half of a normal dead band of over three degrees. In some applications, air circulation at a rate greater than normal convection provides additional comfort. 7 figs.
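The load-management idea — latent storage pinning the room temperature near the PCM fusion point until the material is fully melted — can be sketched with a lumped single-zone model. The function and all parameter values below are hypothetical; a real wallboard model would resolve the melt front and wall-to-air heat transfer.

```python
def pcm_room_step(T_room, melted_frac, q_in, dt,
                  m_pcm=100.0, latent=2.0e5, T_fuse=23.0, c_room=2.0e6):
    """One time step of a lumped room + PCM-wallboard model (illustrative).

    While the PCM is partially melted, the heat load q_in (W) goes into
    latent storage and the room is held near the fusion temperature;
    once the PCM is spent, the load raises the room temperature.
    """
    if T_room >= T_fuse and melted_frac < 1.0:
        # absorb the load as latent heat instead of raising temperature
        melted_frac = min(1.0, melted_frac + q_in * dt / (m_pcm * latent))
        return T_fuse, melted_frac
    T_room += q_in * dt / c_room   # sensible heating once PCM is spent
    return T_room, melted_frac

# 500 W load for one minute, PCM initially solid ("fully charged")
T, f = pcm_room_step(23.0, 0.0, q_in=500.0, dt=60.0)
print(T, f)   # -> 23.0 0.0015
```

Iterating this step over a daily load profile shows the flat temperature plateau during the melt phase that motivates the reduced thermostat dead band.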
A three-limb amorphous magnetic circuit for three-phase 200 kVA distribution transformers
NASA Astrophysics Data System (ADS)
Kolano, R.; Wójcik, N.; Gawior, W.
1996-07-01
This paper describes the construction and method of preparation of a three-limb amorphous magnetic circuit. The circuit consists of three single cores: two smaller cores of the same size, surrounded by a third larger one with appropriate window dimensions. The no-load loss and exciting power of the single cores have been investigated as a function of the magnetic induction and stresses applied to the third core.
Coordinated Optimization of Distributed Energy Resources and Smart Loads in Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Rui; Zhang, Yingchen
2016-11-14
Distributed energy resources (DERs) and smart loads have the potential to provide flexibility to the distribution system operation. A coordinated optimization approach is proposed in this paper to actively manage DERs and smart loads in distribution systems to achieve the optimal operation status. A three-phase unbalanced Optimal Power Flow (OPF) problem is developed to determine the output from DERs and smart loads with respect to the system operator's control objective. This paper focuses on coordinating PV systems and smart loads to improve the overall voltage profile in distribution systems. Simulations have been carried out in a 12-bus distribution feeder and results illustrate the superior control performance of the proposed approach.
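The coordination idea — choosing DER outputs that flatten the feeder voltage profile — can be sketched with a linearized sensitivity model and a least-squares fit. The sensitivity matrix and voltages below are invented, and a real three-phase unbalanced OPF would also enforce network and device constraints.

```python
import numpy as np

# Linearized voltage model: delta_V ~ S @ q, where q holds the reactive-
# power setpoints of two PV systems and S is a (made-up) sensitivity
# matrix in per-unit volts per unit of reactive power.
S = np.array([[0.020, 0.005],
              [0.010, 0.015],
              [0.004, 0.025]])
v_measured = np.array([0.97, 1.03, 1.04])   # per-unit bus voltages
v_target = 1.0

# Least-squares setpoints that best flatten the feeder voltage profile
q, *_ = np.linalg.lstsq(S, v_target - v_measured, rcond=None)
v_corrected = v_measured + S @ q
print("q =", q)
print("corrected voltages:", v_corrected)
```

By construction the corrected profile is never farther from 1.0 p.u. than the measured one, which is the flattening objective in miniature.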
Reliability-Based Design Optimization of a Composite Airframe Component
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2009-01-01
A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. Solution to stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traced out an inverted-S-shaped graph. The center of the inverted-S graph corresponded to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas it can be reduced somewhat for a raked wingtip. The SDO capability is obtained by combining three codes: (1) The MSC/Nastran code was the deterministic analysis tool, (2) The fast probabilistic integrator, or the FPI module of the NESSUS software, was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.
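The inverted-S relationship between weight and reliability p can be reproduced with a toy stress-strength interference model: pick the smallest cross-sectional area (a weight proxy) for which a normally distributed strength exceeds the load-induced stress with probability p. All numbers below are illustrative; the actual SDO couples MSC/Nastran, NESSUS/FPI, and CometBoards.

```python
from math import sqrt
from statistics import NormalDist

def required_area(p, mu_load=1000.0, sd_load=100.0,
                  mu_strength=250.0, sd_frac=0.1):
    """Smallest area A (weight proxy) so that stress = load/A stays below
    a normally distributed strength with reliability p (toy numbers).

    Requires margin - z*sd >= 0, where margin = mu_s - mu_L/A and
    sd = sqrt((sd_frac*mu_s)^2 + (sd_load/A)^2); solved by bisection.
    """
    z = NormalDist().inv_cdf(p)   # reliability quantile
    lo, hi = 1e-6, 1e6
    for _ in range(200):
        A = 0.5 * (lo + hi)
        margin = mu_strength - mu_load / A
        sd = sqrt((sd_frac * mu_strength) ** 2 + (sd_load / A) ** 2)
        if margin - z * sd > 0:
            hi = A          # A is large enough; try smaller
        else:
            lo = A
    return hi

for p in (0.1, 0.5, 0.9, 0.999):
    print(p, round(required_area(p), 3))
```

The printout shows the qualitative S-curve behavior: area (weight) is modest near p = 0.5 and grows rapidly as p approaches 1, mirroring the weight-approaching-infinity limit described in the abstract.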
Joint Seasonal ARMA Approach for Modeling of Load Forecast Errors in Planning Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafen, Ryan P.; Samaan, Nader A.; Makarov, Yuri V.
2014-04-14
To make informed and robust decisions in the probabilistic power system operation and planning process, it is critical to conduct multiple simulations of the generated combinations of wind and load parameters and their forecast errors to handle the variability and uncertainty of these time series. In order for the simulation results to be trustworthy, the simulated series must preserve the salient statistical characteristics of the real series. In this paper, we analyze day-ahead load forecast error data from multiple balancing authority locations and characterize statistical properties such as mean, standard deviation, autocorrelation, correlation between series, time-of-day bias, and time-of-day autocorrelation. We then construct and validate a seasonal autoregressive moving average (ARMA) model to capture these characteristics, and use the model to jointly simulate day-ahead load forecast error series for all BAs.
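A minimal stand-in for the seasonal ARMA idea is an AR(1) residual superposed on a deterministic time-of-day bias, which reproduces two of the properties listed above (time-of-day bias and autocorrelation). The parameters below are invented, not fitted to the balancing-authority data.

```python
import numpy as np

def simulate_forecast_error(days, phi=0.8, sigma=1.0, rng=None):
    """Simulate hourly day-ahead load forecast error as a time-of-day
    bias plus an AR(1) residual (a toy stand-in for a seasonal ARMA
    model). Bias shape and parameter values are illustrative."""
    rng = rng or np.random.default_rng(1)
    hours = np.arange(24 * days)
    # Time-of-day bias: forecasts systematically miss daily ramps
    bias = 0.5 * np.sin(2 * np.pi * (hours % 24) / 24)
    ar = np.zeros(len(hours))
    for t in range(1, len(hours)):
        ar[t] = phi * ar[t - 1] + rng.normal(0.0, sigma)
    return bias + ar

err = simulate_forecast_error(30)
print("mean", err.mean().round(2), "std", err.std().round(2))
```

Validating such a model amounts to checking that the simulated series reproduces the empirical mean, standard deviation, and lag structure, which is the test the paper applies to its fitted ARMA model.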
Investigation of the seismic resistance of interior building partitions, phase 1
NASA Astrophysics Data System (ADS)
Anderson, R. W.; Yee, Y. C.; Savulian, G.; Barclay, B.; Lee, G.
1981-02-01
The effective participation of wood-framed interior shear wall partitions when determining the ultimate resistance capacity of two- and three-story masonry apartment buildings to seismic loading was investigated. Load vs. deflection tests were performed on 8 ft by 8 ft wall panel specimens constructed of four different facing materials, including wood lath and plaster, gypsum lath and plaster, and gypsum wallboard with joints placed either horizontally or vertically. The wood lath and plaster construction is found to be significantly stronger and stiffer than the other three specimens. Analyses of the test panels using finite element methods to predict their static resistance characteristics indicate that the facing material acts as the primary shear-resisting structural element. Resistance of shear wall partitions to lateral loads was assessed.
Analysis of scale effect in compressive ice failure and implications for design
NASA Astrophysics Data System (ADS)
Taylor, Rocky Scott
The main focus of the study was the analysis of scale effect in local ice pressure resulting from probabilistic (spalling) fracture and the relationship between local and global loads due to the averaging of pressures across the width of a structure. A review of fundamental theory, relevant ice mechanics and a critical analysis of data and theory related to the scale dependent pressure behavior of ice were completed. To study high pressure zones (hpzs), data from small-scale indentation tests carried out at the NRC-IOT were analyzed, including small-scale ice block and ice sheet tests. Finite element analysis was used to model a sample ice block indentation event using a damaging, viscoelastic material model and element removal techniques (for spalling). Medium scale tactile sensor data from the Japan Ocean Industries Association (JOIA) program were analyzed to study details of hpz behavior. The averaging of non-simultaneous hpz loads during an ice-structure interaction was examined using local panel pressure data. Probabilistic averaging methodology for extrapolating full-scale pressures from local panel pressures was studied and an improved correlation model was formulated. Panel correlations for high speed events were observed to be lower than panel correlations for low speed events. Global pressure estimates based on probabilistic averaging were found to give substantially lower average errors in estimation of load compared with methods based on linear extrapolation (no averaging). Panel correlations were analyzed for Molikpaq and compared with JOIA results. From this analysis, it was shown that averaging does result in decreasing pressure for increasing structure width. The relationship between local pressure and ice thickness for a panel of unit width was studied in detail using full-scale data from the STRICE, Molikpaq, Cook Inlet and Japan Ocean Industries Association (JOIA) data sets. 
A distinct trend of decreasing pressure with increasing ice thickness was observed. The pressure-thickness behavior was found to be well modeled by the power law relationships Pavg = 0.278 h^(-0.408) MPa and Pstd = 0.172 h^(-0.273) MPa for the mean and standard deviation of pressure, respectively. To study theoretical aspects of spalling fracture and the pressure-thickness scale effect, probabilistic failure models have been developed. A probabilistic model based on Weibull theory (tensile stresses only) was first developed. Estimates of failure pressure obtained with this model were orders of magnitude higher than the pressures observed from benchmark data due to the assumption of only tensile failure. A probabilistic fracture mechanics (PFM) model including both tensile and compressive (shear) cracks was developed. Criteria for unstable fracture in tensile and compressive (shear) zones were given. From these results a clear theoretical scale effect in peak (spalling) pressure was observed. This scale effect followed the relationship Pp,th = 0.15 h^(-0.50) MPa which agreed well with the benchmark data. The PFM model was applied to study the effect of ice edge shape (taper angle) and hpz eccentricity. Results indicated that specimens with flat edges spall at lower pressures while those with more tapered edges spall less readily. The mean peak (failure) pressure was also observed to decrease with increased eccentricity. It was concluded that hpzs centered about the middle of the ice thickness are the zones most likely to create the peak pressures that are of interest in design. Promising results were obtained using the PFM model, which provides strong support for continued research in the development and application of probabilistic fracture mechanics to the study of scale effects in compressive ice failure and to guide the development of methods for the estimation of design ice pressures.
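The fitted power laws quoted in this record can be evaluated directly; this small helper restates Pavg = 0.278 h^(-0.408) MPa and Pstd = 0.172 h^(-0.273) MPa in code to show the decreasing pressure-thickness trend.

```python
def local_pressure_stats(h):
    """Mean and standard deviation of local ice pressure (MPa) versus
    ice thickness h (m), using the power-law fits reported in the study:
    Pavg = 0.278 h^(-0.408), Pstd = 0.172 h^(-0.273)."""
    return 0.278 * h ** -0.408, 0.172 * h ** -0.273

for h in (0.5, 1.0, 2.0):
    pavg, pstd = local_pressure_stats(h)
    print(f"h = {h} m: Pavg = {pavg:.3f} MPa, Pstd = {pstd:.3f} MPa")
```

The negative exponents make the scale effect explicit: doubling the ice thickness lowers the mean local pressure by roughly 25 percent under this fit.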
The effects of load carriage on joint work at different running velocities.
Liew, Bernard X W; Morris, Susan; Netto, Kevin
2016-10-03
Running with load carriage has become increasingly prevalent in sport, as well as many field-based occupations. However, the "sources" of mechanical work during load carriage running are not yet completely understood. The purpose of this study was to determine the influence of load magnitudes on the mechanical joint work during running, across different velocities. Thirty-one participants performed overground running at three load magnitudes (0%, 10%, 20% body weight), and at three velocities (3, 4, 5 m/s). Three-dimensional motion capture was performed, with synchronised force plate data captured. Inverse dynamics was used to quantify joint work in the stance phase of running. Joint work was normalized to a unit proportion of body weight and leg length (one dimensionless work unit = 532.45 J). Load significantly increased total joint work and total positive work and this effect was greater at faster velocities. Load carriage increased ankle positive work (β coefficient = 6.95×10^-4 units of work per 1% BW carried), and knee positive (β = 1.12×10^-3 units) and negative work (β = -2.47×10^-4 units), and hip negative work (β = -7.79×10^-4 units). Load carriage reduced hip positive work and this effect was smaller at faster velocities. Inter-joint redistribution did not contribute significantly to altered mechanical work within the spectrum of load and velocity investigated. Hence, the ankle joint contributed to the greatest extent in work production, whilst that of the knee contributed to the greatest extent to work absorption when running with load.
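The normalization used above (joint work divided by body weight times leg length) can be written out explicitly. The leg length below is an assumed value chosen only for illustration; the paper reports that one dimensionless unit corresponded to 532.45 J for its cohort.

```python
def dimensionless_work(joint_work_J, body_mass_kg, leg_length_m, g=9.81):
    """Normalize joint work by body weight x leg length, giving the
    dimensionless work units used in the stance-phase analysis."""
    return joint_work_J / (body_mass_kg * g * leg_length_m)

# Illustrative subject: 77.4 kg body mass; 0.70 m leg length is ASSUMED.
# With these numbers one dimensionless unit is ~531.5 J, close to the
# paper's reported 532.45 J.
unit_J = 77.4 * 9.81 * 0.70
print(round(unit_J, 1), dimensionless_work(unit_J, 77.4, 0.70))
```

Normalizing this way lets β coefficients like 6.95×10^-4 units per 1% BW be compared across subjects of different size.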
Monitoring Training Load in Indian Male Swimmers
MAJUMDAR, PRALAY; SRIVIDHYA, SRI
2010-01-01
The present study was initiated to monitor the training load and the magnitude of its impact on hormone concentrations such as testosterone, cortisol, and the T/C (testosterone/cortisol) ratio during the three phases of training (i.e. preparatory, pre-competitive, and competitive phases) in Indian male swimmers preparing for the 2010 Commonwealth Games. Blood samples were collected at the end of each training phase and hormone concentration was determined by ELISA. Our results reveal that testosterone concentration and the T/C ratio significantly decreased and the cortisol concentration increased in the subsequent periodized cycle. Change in hormone concentration was associated with the intensity and duration of individual exercise sessions. The greatest performance enhancement was realized with the lowest plasma cortisol, highest testosterone, and a high T/C ratio. Monitoring of these hormones also has implications for identifying and preventing overreaching in swimmers. PMID:27182335
NASA Astrophysics Data System (ADS)
Velazquez, Antonio; Swartz, Raymond A.
2011-04-01
Wind turbine systems are attracting considerable attention due to concerns regarding global energy consumption as well as sustainability. Advances in wind turbine technology promote the tendency to improve efficiency in the structures that support and produce this renewable power source, tending toward more slender and larger towers, larger gear boxes, and larger, lighter blades. The structural design optimization process must account for uncertainties and nonlinear effects (such as wind-induced vibrations, unmeasured disturbances, and material and geometric variabilities). In this study, a probabilistic monitoring approach is developed that measures the response of the turbine tower to stochastic loading, estimates peak demand and structural resistance (in terms of serviceability). The proposed monitoring system can provide a real-time estimate of the probability of exceedance of design serviceability conditions based on data collected in-situ. Special attention is paid to wind and aerodynamic characteristics that are intrinsically present (although sometimes neglected in health monitoring analysis) and derived from observations or experiments. In particular, little attention has been devoted to buffeting, usually non-catastrophic but directly impacting the serviceability of the operating wind turbine. As a result, modal-based analysis methods for the study and derivation of flutter instability and buffeting response have been successfully applied to the assessment of the susceptibility of high-rise slender structures, including wind turbine towers. A detailed finite element model has been developed to generate data (calibrated to published experimental and analytical results). Risk assessment is performed for the effects of along-wind forces in a framework of quantitative risk analysis. Both structural resistance and wind load demands were considered probabilistic, with the latter assessed by dynamic analyses.
The United States of America as represented by the United States Department of Energy
2009-12-15
An apparatus and method for transferring thermal energy from a heat load is disclosed. In particular, use of a phase change material and specific flow designs enables cooling with temperature regulation well above the fusion temperature of the phase change material for medium and high heat loads from devices operated intermittently (in burst mode). Exemplary heat loads include burst mode lasers and laser diodes, flight avionics, and high power space instruments. Thermal energy is transferred from the heat load to liquid phase change material from a phase change material reservoir. The liquid phase change material is split into two flows. Thermal energy is transferred from the first flow via a phase change material heat sink. The second flow bypasses the phase change material heat sink and joins with liquid phase change material exiting from the phase change material heat sink. The combined liquid phase change material is returned to the liquid phase change material reservoir. The ratio of bypass flow to flow into the phase change material heat sink can be varied to adjust the temperature of the liquid phase change material returned to the liquid phase change material reservoir. Varying the flowrate and temperature of the liquid phase change material presented to the heat load determines the magnitude of thermal energy transferred from the heat load.
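The bypass ratio that returns the mixed stream at a chosen temperature follows from a simple mixing energy balance, assuming equal mass-specific heats in both branches; the temperatures below are invented for illustration.

```python
def bypass_fraction(T_sink_out, T_load_out, T_target):
    """Fraction of the liquid PCM flow that should bypass the PCM heat
    sink so the mixed return stream hits T_target (simple energy
    balance, equal specific heats assumed in both branches).

    T_mix = x * T_load_out + (1 - x) * T_sink_out  ->  solve for x.
    """
    return (T_target - T_sink_out) / (T_load_out - T_sink_out)

# Example: flow leaves the load at 40 C, leaves the PCM sink at 10 C,
# and the desired return temperature is 25 C -> half the flow bypasses.
print(bypass_fraction(10.0, 40.0, 25.0))   # -> 0.5
```

Varying this fraction is exactly the regulation knob the patent describes: it sets the return temperature well above the PCM fusion temperature without changing the sink itself.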
Interactive Reliability Model for Whisker-toughened Ceramics
NASA Technical Reports Server (NTRS)
Palko, Joseph L.
1993-01-01
Wider use of ceramic matrix composites (CMC) will require the development of advanced structural analysis technologies. The use of an interactive model to predict the time-independent reliability of a component subjected to multiaxial loads is discussed. The deterministic, three-parameter Willam-Warnke failure criterion serves as the theoretical basis for the reliability model. The strength parameters defining the model are assumed to be random variables, thereby transforming the deterministic failure criterion into a probabilistic criterion. The ability of the model to account for multiaxial stress states with the same unified theory is an improvement over existing models. The new model was coupled with a public-domain finite element program through an integrated design program. This allows a design engineer to predict the probability of failure of a component. A simple structural problem is analyzed using the new model, and the results are compared to existing models.
NASA Astrophysics Data System (ADS)
González, F. I.; Leveque, R. J.; Hatheway, D.; Metzger, N.
2011-12-01
Risk is defined in many ways, but most are consistent with Crichton's [1999] definition based on the "risk triangle" concept and the explicit identification of three risk elements: "Risk is the probability of a loss, and this depends on three elements: hazard, vulnerability, and exposure. If any of these three elements in risk increases or decreases, then the risk increases or decreases respectively." The World Meteorological Organization, for example, cites Crichton [1999] and then defines risk as [WMO, 2008] Risk = function (Hazard x Vulnerability x Exposure) while the Asian Disaster Reduction Center adopts the more general expression [ADRC, 2005] Risk = function (Hazard, Vulnerability, Exposure) In practice, probabilistic concepts are invariably invoked, and at least one of the three factors is specified as probabilistic in nature. The Vulnerability and Exposure factors are defined in multiple ways in the relevant literature; but the Hazard factor, which is the focus of our presentation, is generally understood to deal only with the physical aspects of the phenomena and, in particular, the ability of the phenomena to inflict harm [Thywissen, 2006]. A Hazard factor can be estimated by a methodology known as Probabilistic Tsunami Hazard Assessment (PTHA) [González, et al., 2009]. We will describe the PTHA methodology and provide an example -- the results of a previous application to Seaside, OR. We will also present preliminary results for a PTHA of Crescent City, CA -- a pilot project and coastal modeling/mapping effort funded by the Federal Emergency Management Agency (FEMA) Region IX office as part of the new California Coastal Analysis and Mapping Project (CCAMP). CCAMP and the PTHA in Crescent City are being conducted under the nationwide FEMA Risk Mapping, Assessment, and Planning (Risk MAP) Program which focuses on providing communities with flood information and tools they can use to enhance their mitigation plans and better protect their citizens.
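The multiplicative WMO form of the risk triangle quoted above is trivially computable; the factor values below are arbitrary and serve only to show that scaling any one element scales the risk, per Crichton's definition.

```python
def risk(hazard, vulnerability, exposure):
    """Multiplicative form of Crichton's risk triangle:
    Risk = Hazard x Vulnerability x Exposure (each scaled 0..1 here)."""
    return hazard * vulnerability * exposure

base = risk(0.3, 0.5, 0.8)
# Halving any one element halves the risk:
print(base, risk(0.3, 0.25, 0.8))
```

A PTHA supplies the Hazard factor in this product as an exceedance probability for a given inundation level, which is why the methodology focuses only on the physical side of the phenomenon.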
NASA Astrophysics Data System (ADS)
Constantinescu, Robert; Robertson, Richard; Lindsay, Jan M.; Tonini, Roberto; Sandri, Laura; Rouwet, Dmitri; Smith, Patrick; Stewart, Roderick
2016-11-01
We report on the first "real-time" application of the BET_UNREST (Bayesian Event Tree for Volcanic Unrest) probabilistic model, during a VUELCO Simulation Exercise carried out on the island of Dominica, Lesser Antilles, in May 2015. Dominica has a concentration of nine potentially active volcanic centers and frequent volcanic earthquake swarms at shallow depths, intense geothermal activity, and recent phreatic explosions (1997) indicate the region is still active. The exercise scenario was developed in secret by a team of scientists from The University of the West Indies (Trinidad and Tobago) and University of Auckland (New Zealand). The simulated unrest activity was provided to the exercise's Scientific Team in three "phases" through exercise injects comprising processed monitoring data. We applied the newly created BET_UNREST model through its software implementation PyBetUnrest, to estimate the probabilities of having (i) unrest of (ii) magmatic, hydrothermal or tectonic origin, which may or may not lead to (iii) an eruption. The probabilities obtained for each simulated phase raised controversy and intense deliberations among the members of the Scientific Team. The results were often considered to be "too high" and were not included in any of the reports presented to ODM (Office for Disaster Management) revealing interesting crisis communication challenges. We concluded that the PyBetUnrest application itself was successful and brought the tool one step closer to a full implementation. However, as with any newly proposed method, it needs more testing, and in order to be able to use it in the future, we make a series of recommendations for future applications.
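The event-tree chaining behind BET_UNREST-style estimates can be sketched as a product of conditional node probabilities. The numbers below are invented; the real model derives each node from priors and monitoring data via Bayesian updating.

```python
def eruption_probability(p_unrest, p_magmatic_given_unrest,
                         p_eruption_given_magmatic):
    """Chain the three event-tree nodes used by BET_UNREST-style models:
    unrest -> magmatic origin -> eruption. Toy conditional values; the
    real PyBetUnrest updates each node from monitoring data."""
    return p_unrest * p_magmatic_given_unrest * p_eruption_given_magmatic

# Illustrative exercise phases: stronger anomalies raise each node
print(eruption_probability(0.5, 0.3, 0.2))   # early phase
print(eruption_probability(0.9, 0.6, 0.4))   # escalated phase
```

The chained product makes clear why absolute eruption probabilities stay small even when individual nodes look alarming, which is one source of the "too high / too low" debates the Scientific Team faced.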
Vázquez-Guerrero, Jairo; Moras, Gerard; Baeza, Jennifer; Rodríguez-Jiménez, Sergio
2016-01-01
The purpose of the study was to compare the force outputs achieved during a squat exercise using a rotational inertia device in stable versus unstable conditions with different loads and in concentric and eccentric phases. Thirteen male athletes (mean ± SD: age 23.7 ± 3.0 years, height 1.80 ± 0.08 m, body mass 77.4 ± 7.9 kg) were assessed while squatting, performing one set of three repetitions with four different loads under stable and unstable conditions at maximum concentric effort. Overall, there were no significant differences between the stable and unstable conditions at each of the loads for any of the dependent variables. Mean force showed significant differences between some of the loads in stable and unstable conditions (P < 0.010) and peak force output differed between all loads for each condition (P < 0.045). Mean force outputs were greater in the concentric than in the eccentric phase under both conditions and with all loads (P < 0.001). There were no significant differences in peak force between concentric and eccentric phases at any load in either stable or unstable conditions. In conclusion, squatting with a rotational inertia device allowed the generation of similar force outputs under stable and unstable conditions at each of the four loads. The study also provides empirical evidence of the different force outputs achieved by adjusting load conditions on the rotational inertia device when performing squats, especially in the case of peak force. Concentric force outputs were significantly higher than eccentric outputs, except for peak force under both conditions. These findings support the use of the rotational inertia device to train the squatting exercise under unstable conditions for strength and conditioning trainers. The device could also be included in injury prevention programs for muscle lesions and ankle and knee joint injuries.
3D Magnetic Field Analysis of a Turbine Generator Stator Core-end Region
NASA Astrophysics Data System (ADS)
Wakui, Shinichi; Takahashi, Kazuhiko; Ide, Kazumasa; Takahashi, Miyoshi; Watanabe, Takashi
In this paper, we calculated the magnetic flux density and eddy current distributions in the stator core-end region of a 71 MVA turbine generator using three-dimensional numerical magnetic field analysis. The magnetic flux densities and eddy current densities obtained by the analysis for the no-load and three-phase short-circuit conditions agree well with measurements. Furthermore, the differences in eddy current and eddy current loss in the stator core-end region under various load conditions are shown numerically. As a result, the facing was found to reduce the eddy current loss of the end plate by about 84%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, D. M.; Chen, Yan; Mu, Juan
2018-05-21
Micro-mechanical behaviors of a Cu46.5Zr46.5Al7 bulk metallic glass composite in the plastic regime were investigated by continuous in situ neutron diffraction during compression. Three stages of the plastic deformation were observed according to the work-hardening rate. Here, the underlying natures of the work hardening, correlating with the lattice/microscopic strain evolution, are revealed for the three stages: (1) the initiation of shear bands, (2) the load transfer from the amorphous phase to the B2 phase, and (3) the accelerated martensitic transformation and the work hardening of the polycrystalline phases promoted by the rapid propagation of the shear bands.
Probabilistic graphs as a conceptual and computational tool in hydrology and water management
NASA Astrophysics Data System (ADS)
Schoups, Gerrit
2014-05-01
Originally developed in the fields of machine learning and artificial intelligence, probabilistic graphs constitute a general framework for modeling complex systems in the presence of uncertainty. The framework consists of three components: 1. Representation of the model as a graph (or network), with nodes depicting random variables in the model (e.g. parameters, states, etc), which are joined together by factors. Factors are local probabilistic or deterministic relations between subsets of variables, which, when multiplied together, yield the joint distribution over all variables. 2. Consistent use of probability theory for quantifying uncertainty, relying on basic rules of probability for assimilating data into the model and expressing unknown variables as a function of observations (via the posterior distribution). 3. Efficient, distributed approximation of the posterior distribution using general-purpose algorithms that exploit model structure encoded in the graph. These attributes make probabilistic graphs potentially useful as a conceptual and computational tool in hydrology and water management (and beyond). Conceptually, they can provide a common framework for existing and new probabilistic modeling approaches (e.g. by drawing inspiration from other fields of application), while computationally they can make probabilistic inference feasible in larger hydrological models. The presentation explores, via examples, some of these benefits.
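A minimal sketch of components 1 and 2 above can make the idea concrete: two local factors multiplied into a joint distribution, with a posterior obtained by enumeration. The Rain/Wet variables and all probabilities are illustrative assumptions, not from the presentation.

```python
p_rain = {True: 0.3, False: 0.7}                      # factor 1: P(Rain)
p_wet_given_rain = {(True, True): 0.9, (True, False): 0.2,
                    (False, True): 0.1, (False, False): 0.8}  # factor 2: P(Wet | Rain), keyed (wet, rain)

def joint(wet, rain):
    # product of the local factors = joint distribution over all variables
    return p_rain[rain] * p_wet_given_rain[(wet, rain)]

# Assimilate the observation Wet=True: posterior P(Rain | Wet) by normalisation.
evidence = sum(joint(True, r) for r in (True, False))
posterior = {r: joint(True, r) / evidence for r in (True, False)}
print(posterior[True])  # probability it rained, given wet ground
```

In a larger graph the same normalisation is done efficiently by message-passing algorithms that exploit the graph structure rather than by full enumeration.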
Sun, Chengchun; Shen, Zhenyao; Liu, Ruimin; Xiong, Ming; Ma, Fangbing; Zhang, Ouyang; Li, Yangyang; Chen, Lei
2013-12-01
Excessive inputs of nitrogen and phosphorus (N and P) degrade surface water quality worldwide. Impoundment of reservoirs alters the N and P balance of a basin. In this study, riverine nutrient loads from the upper Yangtze River basin (YRB) at the Yichang station were estimated using Load Estimator (LOADEST). Long-term load trends and monthly variabilities during three sub-periods based on the construction phases of the Three Gorges Dam (TGD) were analyzed statistically. The dissolved inorganic nitrogen (DIN) loads from the upper YRB for the period from 1990 to 2009 ranged from 30.47 × 10^4 to 78.14 × 10^4 t, while the total phosphorus (TP) loads ranged from 2.54 × 10^4 to 7.85 × 10^4 t. DIN increased rapidly from 1995 to 2002, mainly as a result of increased fertilizer use; fertilizer-use statistics for the upper YRB support this point. However, the trend of the TP loads reflected the combined effect of removal by sedimentation in reservoirs and increased anthropogenic inputs. After the TGD impoundment in 2003, decreasing trends in both DIN and TP loads were found. The reduction in DIN was mainly caused by ammonium consumption and transference. From an analysis of monthly loads, DIN was found to be highly correlated with discharge. For TP loads, an average decrease of 4.91% in October was found when the TGD impoundment occurred, but an increase of 4.23% also occurred in July, corresponding to the washout of sediment deposited in the reservoir before July. Results of this study revealed that the TGD had affected nutrient loads in the basin and had played a role in nutrient reduction after its operation.
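As a hedged sketch of what a LOADEST-style rating-curve regression does (the real LOADEST model includes additional terms such as seasonal harmonics), one can fit ln(load) against ln(discharge) in log-log space; the data and coefficients below are synthetic, not the Yangtze/Yichang record.

```python
import numpy as np

# Hypothetical rating curve: ln(load) = a + b*ln(discharge) + noise.
rng = np.random.default_rng(0)
q = rng.uniform(5e3, 5e4, 200)                       # discharge, m^3/s (assumed range)
true_a, true_b = -2.0, 1.3                           # assumed rating-curve coefficients
load = np.exp(true_a + true_b * np.log(q) + rng.normal(0.0, 0.1, q.size))

# Least-squares fit in log-log space recovers the coefficients.
b_hat, a_hat = np.polyfit(np.log(q), np.log(load), 1)
print(b_hat, a_hat)
```

Summing the fitted curve's predictions over daily discharges then yields monthly or annual load estimates of the kind analyzed in the study.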
ERIC Educational Resources Information Center
Tsitsipis, Georgios; Stamovlasis, Dimitrios; Papageorgiou, George
2012-01-01
In this study, the effect of three cognitive variables (logical thinking, field dependence/field independence, and convergent/divergent thinking) on specific students' answers related to the particulate nature of matter was investigated by means of probabilistic models. Besides recording and tabulating the students' responses, a combination…
Malfait, Bart; Dingenen, Bart; Smeets, Annemie; Staes, Filip; Pataky, Todd; Robinson, Mark A; Vanrenterghem, Jos; Verschueren, Sabine
2016-01-01
The purpose was to assess if variation in sagittal plane landing kinematics is associated with variation in neuromuscular activation patterns of the quadriceps-hamstrings muscle groups during drop vertical jumps (DVJ). Fifty female athletes performed three DVJ. The relationship between peak knee and hip flexion angles and the amplitude of four EMG vectors was investigated with trajectory-level canonical correlation analyses over the entire time period of the landing phase. The EMG vectors consisted of {vastus medialis (VM), vastus lateralis (VL)}, {vastus medialis (VM), hamstring medialis (HM)}, {hamstring medialis (HM), hamstring lateralis (HL)} and {vastus lateralis (VL), hamstring lateralis (HL)}. To estimate the contribution of each individual muscle, linear regressions were also conducted using one-dimensional statistical parametric mapping. The peak knee flexion angle was significantly positively associated with the amplitudes of the {VM, HM} and {HM, HL} vectors during the preparatory and initial contact phase and with the {VL, HL} vector during the peak loading phase (p < 0.05). Small peak knee flexion angles were significantly associated with higher HM amplitudes during the preparatory and initial contact phase (p < 0.001). The amplitudes of the {VM, VL} and {VL, HL} vectors were significantly positively associated with the peak hip flexion angle during the peak loading phase (p < 0.05). Small peak hip flexion angles were significantly associated with higher VL amplitudes during the peak loading phase (p = 0.001). Higher external knee abduction and flexion moments were found in participants landing with less flexed knee and hip joints (p < 0.001). This study demonstrated clear associations between neuromuscular activation patterns and landing kinematics in the sagittal plane during specific parts of the landing.
These findings have indicated that an erect landing pattern, characterized by less hip and knee flexion, was significantly associated with an increased medial and posterior neuromuscular activation (dominant hamstrings medialis activity) during the preparatory and initial contact phase and an increased lateral neuromuscular activation (dominant vastus lateralis activity) during the peak loading phase.
NASA Astrophysics Data System (ADS)
Sohn, Soo-Jin; Min, Young-Mi; Lee, June-Yi; Tam, Chi-Yung; Kang, In-Sik; Wang, Bin; Ahn, Joong-Bae; Yamagata, Toshio
2012-02-01
The performance of the probabilistic multimodel prediction (PMMP) system of the APEC Climate Center (APCC) in predicting the Asian summer monsoon (ASM) precipitation at a four-month lead (with February initial condition) was compared with that of a statistical model using hindcast data for 1983-2005 and real-time forecasts for 2006-2011. Particular attention was paid to probabilistic precipitation forecasts for the boreal summer after the mature phase of El Niño and Southern Oscillation (ENSO). Taking into account the fact that coupled models' skill for boreal spring and summer precipitation mainly comes from their ability to capture ENSO teleconnection, we developed the statistical model using linear regression with the preceding winter ENSO condition as the predictor. Our results reveal several advantages and disadvantages in both forecast systems. First, the PMMP appears to have higher skills for both above- and below-normal categories in the six-year real-time forecast period, whereas the cross-validated statistical model has higher skills during the 23-year hindcast period. This implies that the cross-validated statistical skill may be overestimated. Second, the PMMP is the better tool for capturing atypical ENSO (or non-canonical ENSO related) teleconnection, which has affected the ASM precipitation during the early 1990s and in the recent decade. Third, the statistical model is more sensitive to the ENSO phase and has an advantage in predicting the ASM precipitation after the mature phase of La Niña.
A comprehensive probabilistic analysis model of oil pipelines network based on Bayesian network
NASA Astrophysics Data System (ADS)
Zhang, C.; Qin, T. X.; Jiang, B.; Huang, C.
2018-02-01
Oil pipeline networks are among the most important facilities for energy transportation, but accidents in them may result in serious disasters. Analysis models for these accidents have been established mainly based on three methods: event trees, accident simulation, and Bayesian networks. Among these, the Bayesian network is suitable for probabilistic analysis, but existing models do not consider all the important influencing factors, and no deployment rule for the factors has been established. This paper proposes a probabilistic analysis model of oil pipeline networks based on a Bayesian network. Most of the important influencing factors, including key environmental conditions and emergency response, are considered in this model. Moreover, the paper also introduces a deployment rule for these factors. The model can be used in probabilistic analysis and sensitivity analysis of oil pipeline network accidents.
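To illustrate the kind of probabilistic and sensitivity analysis a Bayesian network supports, here is a toy sketch with two root causes feeding a leak node; the structure and every conditional-probability number are assumptions for illustration, not the model from the paper.

```python
# Toy pipeline BN: Corrosion and ThirdPartyDamage are parents of Leak.
def p_leak(p_cor, p_tpd):
    # P(Leak=1 | Corrosion, ThirdPartyDamage): illustrative conditional table
    cpt = {(1, 1): 0.9, (1, 0): 0.3, (0, 1): 0.5, (0, 0): 0.01}
    total = 0.0
    for cor in (0, 1):
        for tpd in (0, 1):
            p = (p_cor if cor else 1 - p_cor) * (p_tpd if tpd else 1 - p_tpd)
            total += p * cpt[(cor, tpd)]
    return total

base = p_leak(0.1, 0.05)                       # marginal leak probability
# Sensitivity: change in leak probability per unit change in the corrosion prior.
sens = (p_leak(0.2, 0.05) - base) / 0.1
print(base, sens)
```

A full model would add nodes for environmental conditions and emergency response downstream of the leak node, but the marginalization and perturbation pattern is the same.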
Advanced simulation study on bunch gap transient effect
NASA Astrophysics Data System (ADS)
Kobayashi, Tetsuya; Akai, Kazunori
2016-06-01
Bunch phase shift along the train due to a bunch gap transient is a concern in high-current colliders. In KEKB operation, the measured phase shift along the train agreed well with a simulation and a simple analytical form in most part of the train. However, a rapid phase change was observed at the leading part of the train, which was not predicted by the simulation or by the analytical form. In order to understand the cause of this observation, we have developed an advanced simulation, which treats the transient loading in each of the cavities of the three-cavity system of the accelerator resonantly coupled with energy storage (ARES) instead of the equivalent single cavities used in the previous simulation, operating in the accelerating mode. In this paper, we show that the new simulation reproduces the observation, and clarify that the rapid phase change at the leading part of the train is caused by a transient loading in the three-cavity system of ARES. KEKB is being upgraded to SuperKEKB, which is aiming at 40 times higher luminosity than KEKB. The gap transient in SuperKEKB is investigated using the new simulation, and the result shows that the rapid phase change at the leading part of the train is much larger due to higher beam currents. We will also present measures to mitigate possible luminosity reduction or beam performance deterioration due to the rapid phase change caused by the gap transient.
Probabilistic reversal learning is impaired in Parkinson's disease
Peterson, David A.; Elliott, Christian; Song, David D.; Makeig, Scott; Sejnowski, Terrence J.; Poizner, Howard
2009-01-01
In many everyday settings, the relationship between our choices and their potentially rewarding outcomes is probabilistic and dynamic. In addition, the difficulty of the choices can vary widely. Although a large body of theoretical and empirical evidence suggests that dopamine mediates rewarded learning, the influence of dopamine in probabilistic and dynamic rewarded learning remains unclear. We adapted a probabilistic rewarded learning task originally used to study firing rates of dopamine cells in primate substantia nigra pars compacta (Morris et al. 2006) for use as a reversal learning task with humans. We sought to investigate how the dopamine depletion in Parkinson's disease (PD) affects probabilistic reward learning and adaptation to a reversal in reward contingencies. Over the course of 256 trials subjects learned to choose the more favorable from among pairs of images with small or large differences in reward probabilities. During a subsequent otherwise identical reversal phase, the reward probability contingencies for the stimuli were reversed. Seventeen Parkinson's disease (PD) patients of mild to moderate severity were studied off of their dopaminergic medications and compared to 15 age-matched controls. Compared to controls, PD patients had distinct pre- and post-reversal deficiencies depending upon the difficulty of the choices they had to learn. The patients also exhibited compromised adaptability to the reversal. A computational model of the subjects’ trial-by-trial choices demonstrated that the adaptability was sensitive to the gain with which patients weighted pre-reversal feedback. Collectively, the results implicate the nigral dopaminergic system in learning to make choices in environments with probabilistic and dynamic reward contingencies. PMID:19628022
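A minimal sketch of the class of trial-by-trial models the authors fit: a delta-rule value update with a softmax choice rule and a mid-session reversal. Parameters, seed, and reward schedule are illustrative, not the fitted model from the paper.

```python
import math, random

random.seed(1)
alpha, beta = 0.2, 3.0            # learning rate, inverse temperature (assumed)
p_reward = [0.8, 0.2]             # reward probability per option (assumed)
v = [0.5, 0.5]                    # learned option values
favorable = 0
for t in range(256):
    if t == 128:
        p_reward.reverse()        # reversal phase: contingencies swap
    p_choose0 = 1.0 / (1.0 + math.exp(-beta * (v[0] - v[1])))
    choice = 0 if random.random() < p_choose0 else 1
    reward = 1.0 if random.random() < p_reward[choice] else 0.0
    v[choice] += alpha * (reward - v[choice])   # prediction-error update
    if p_reward[choice] == max(p_reward):
        favorable += 1
print(favorable / 256)            # fraction of favorable choices
```

The "gain" with which pre-reversal feedback is weighted corresponds roughly to the learning rate here: a lower alpha makes old contingencies persist longer after the reversal.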
Analysis of flood hazard under consideration of dike breaches
NASA Astrophysics Data System (ADS)
Vorogushyn, S.; Apel, H.; Lindenschmidt, K.-E.; Merz, B.
2009-04-01
The study focuses on the development and application of a new modelling system which allows a comprehensive flood hazard assessment along diked river reaches under consideration of dike failures. The proposed Inundation Hazard Assessment Model (IHAM) represents a hybrid probabilistic-deterministic model. It comprises three models interactively coupled at runtime. These are: (1) 1D unsteady hydrodynamic model of river channel and floodplain flow between dikes, (2) probabilistic dike breach model which determines possible dike breach locations, breach widths and breach outflow discharges, and (3) 2D raster-based diffusion wave storage cell model of the hinterland areas behind the dikes. Due to the unsteady nature of the 1D and 2D coupled models, the dependence between hydraulic load at various locations along the reach is explicitly considered. The probabilistic dike breach model describes dike failures due to three failure mechanisms: overtopping, piping and slope instability caused by the seepage flow through the dike core (micro-instability). Dike failures for each mechanism are simulated based on fragility functions. The probability of breach is conditioned by the uncertainty in geometrical and geotechnical dike parameters. The 2D storage cell model driven by the breach outflow boundary conditions computes an extended spectrum of flood intensity indicators such as water depth, flow velocity, impulse, inundation duration and rate of water rise. IHAM is embedded in a Monte Carlo simulation in order to account for the natural variability of the flood generation processes reflected in the form of input hydrographs and for the randomness of dike failures given by breach locations, times and widths. The scenario calculations for the developed synthetic input hydrographs for the main river and tributary were carried out for floods with return periods of T = 100; 200; 500; 1000 a. 
Based on the modelling results, probabilistic dike hazard maps could be generated that indicate the failure probability of each discretised dike section for every scenario magnitude. Besides the binary inundation patterns that indicate the probability of raster cells being inundated, IHAM generates probabilistic flood hazard maps. These maps display spatial patterns of the considered flood intensity indicators and their associated return periods. The probabilistic nature of IHAM allows for the generation of percentile flood hazard maps that indicate the median and uncertainty bounds of the flood intensity indicators. The uncertainty results from the natural variability of the flow hydrographs and randomness of dike breach processes. The same uncertainty sources determine the uncertainty in the flow hydrographs along the study reach. The simulations showed that the dike breach stochasticity has an increasing impact on hydrograph uncertainty in downstream direction. Whereas in the upstream part of the reach the hydrograph uncertainty is mainly stipulated by the variability of the flood wave form, the dike failures strongly shape the uncertainty boundaries in the downstream part of the reach. Finally, scenarios of polder deployment for the extreme floods with T = 200; 500; 1000 a were simulated with IHAM. The results indicate a rather weak reduction of the mean and median flow hydrographs in the river channel. However, the capping of the flow peaks resulted in a considerable reduction of the overtopping failures downstream of the polder with a simultaneous slight increase of the piping and slope micro-instability frequencies explained by a more durable average impoundment. The developed IHAM simulation system represents a new scientific tool for studying fluvial inundation dynamics under extreme conditions incorporating effects of technical flood protection measures. 
With its major outputs in form of novel probabilistic inundation and dike hazard maps, the IHAM system has a high practical value for decision support in flood management.
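The fragility-function idea at the core of the dike breach model can be sketched in a few lines: in each Monte Carlo run, a dike section fails when a random hydraulic load exceeds a random resistance. The distributions below are illustrative placeholders, not IHAM's calibrated fragility curves.

```python
import random

random.seed(42)

def breach_probability(n=100_000):
    """Monte Carlo estimate of overtopping failure for one dike section."""
    failures = 0
    for _ in range(n):
        load = random.gauss(4.0, 0.5)         # peak water level, m (assumed)
        resistance = random.gauss(4.8, 0.3)   # effective crest level, m (assumed)
        if load > resistance:
            failures += 1
    return failures / n

print(breach_probability())
```

IHAM layers two more failure mechanisms (piping, slope micro-instability) and couples each sampled breach back into the unsteady 1D/2D hydraulics, but each mechanism reduces to a load-exceeds-resistance draw of this form.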
Development of a 3D numerical methodology for fast prediction of gun blast induced loading
NASA Astrophysics Data System (ADS)
Costa, E.; Lagasco, F.
2014-05-01
In this paper, the development of a methodology based on semi-empirical models from the literature to carry out 3D prediction of pressure loading on surfaces adjacent to a weapon system during firing is presented. This loading results from the impact of the blast wave generated by the projectile exiting the muzzle bore. When it exceeds a threshold pressure level, the loading can potentially damage nearby hard structures as well as frangible panels or electronic equipment. The implemented model can quickly predict the distribution of the blast wave parameters over three-dimensional complex geometry surfaces when the weapon design and emplacement data as well as propellant and projectile characteristics are available. Given these capabilities, the proposed methodology is envisaged for use in the preliminary design phase of the combat system to predict adverse effects and thereby identify the most appropriate countermeasures. By providing a preliminary but sensitive estimate of the operative environmental loading, this numerical tool represents a good alternative to more powerful but time-consuming advanced computational fluid dynamics tools, whose use can thus be limited to the final phase of the design.
A Novel TRM Calculation Method by Probabilistic Concept
NASA Astrophysics Data System (ADS)
Audomvongseree, Kulyos; Yokoyama, Akihiko; Verma, Suresh Chand; Nakachi, Yoshiki
In a new competitive environment, it becomes possible for third parties to access a transmission facility. In this context, to efficiently manage the utilization of the transmission network, a new definition of Available Transfer Capability (ATC) has been proposed. According to the North American Electric Reliability Council (NERC) definition, ATC depends on several parameters, i.e. Total Transfer Capability (TTC), Transmission Reliability Margin (TRM), and Capacity Benefit Margin (CBM). This paper focuses on the calculation of TRM, a security margin reserved for uncertainty in system conditions. A probabilistic TRM calculation method is proposed. Based on the modeling of load forecast error and error in transmission line limits, various cases of transmission transfer capability and their related probabilistic nature can be calculated. By applying the proposed concept of risk analysis, the appropriate required amount of TRM can be obtained. The objective of this research is to provide realistic information on the actual capability of the network, which may offer system operators an alternative basis for making appropriate decisions in the competitive market. The advantages of the proposed method are illustrated by application to the IEEJ-WEST10 model system.
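The idea of a probabilistic TRM can be sketched as a margin sized to cover combined load-forecast and line-limit errors at a chosen risk level; the distributions, the 5% risk level, and the nominal TTC below are assumptions, not the IEEJ-WEST10 data.

```python
import random

random.seed(0)
ttc = 1000.0                                            # nominal TTC, MW (assumed)
errors = sorted(
    random.gauss(0.0, 30.0) + random.gauss(0.0, 20.0)   # forecast + limit error, MW
    for _ in range(50_000)
)
trm = errors[int(0.95 * len(errors))]     # margin exceeded only 5% of the time
print(trm, ttc - trm)                     # TRM and the remaining capability
```

Raising the accepted risk level shrinks the TRM and frees capability for the market, which is exactly the trade-off the risk analysis in the paper quantifies.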
Three-Phase AC Optimal Power Flow Based Distribution Locational Marginal Price: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Rui; Zhang, Yingchen
2017-05-17
Designing market mechanisms for electricity distribution systems has been a hot topic due to the increased presence of smart loads and distributed energy resources (DERs) in distribution systems. The distribution locational marginal pricing (DLMP) methodology is one of the real-time pricing methods to enable such market mechanisms and provide economic incentives to active market participants. Determining the DLMP is challenging due to high power losses, the voltage volatility, and the phase imbalance in distribution systems. Existing DC Optimal Power Flow (OPF) approaches are unable to model power losses and the reactive power, while single-phase AC OPF methods cannot capture the phase imbalance. To address these challenges, in this paper, a three-phase AC OPF based approach is developed to define and calculate DLMP accurately. The DLMP is modeled as the marginal cost to serve an incremental unit of demand at a specific phase at a certain bus, and is calculated using the Lagrange multipliers in the three-phase AC OPF formulation. Extensive case studies have been conducted to understand the impact of system losses and the phase imbalance on DLMPs as well as the potential benefits of flexible resources.
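The DLMP definition used here (marginal cost of serving one more unit of demand at a bus) can be illustrated with a deliberately tiny dispatch model standing in for the full three-phase AC OPF; the two-block cost curve and quadratic loss term are invented for illustration.

```python
def dispatch_cost(demand_mw):
    # Two-block supply curve: $20/MWh up to 80 MW, then $50/MWh,
    # plus losses growing quadratically with demand (assumed model).
    losses = 0.0005 * demand_mw**2
    supply = demand_mw + losses
    cheap = min(supply, 80.0)
    return 20.0 * cheap + 50.0 * max(supply - 80.0, 0.0)

# DLMP as a numerical marginal cost at demand = 100 MW.
eps = 1e-3
dlmp = (dispatch_cost(100.0 + eps) - dispatch_cost(100.0)) / eps
print(dlmp)   # $/MWh: the $50 marginal unit plus the marginal cost of losses
```

In the paper this derivative is not computed by perturbation but read directly from the Lagrange multipliers of the phase-specific power-balance constraints in the OPF.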
Relative Gains, Losses, and Reference Points in Probabilistic Choice in Rats
Marshall, Andrew T.; Kirkpatrick, Kimberly
2015-01-01
Theoretical reference points have been proposed to differentiate probabilistic gains from probabilistic losses in humans, but such a phenomenon in non-human animals has yet to be thoroughly elucidated. Three experiments evaluated the effect of reward magnitude on probabilistic choice in rats, seeking to determine reference point use by examining the effect of previous outcome magnitude(s) on subsequent choice behavior. Rats were trained to choose between an outcome that always delivered reward (low-uncertainty choice) and one that probabilistically delivered reward (high-uncertainty). The probability of high-uncertainty outcome receipt and the magnitudes of low-uncertainty and high-uncertainty outcomes were manipulated within and between experiments. Both the low- and high-uncertainty outcomes involved variable reward magnitudes, so that either a smaller or larger magnitude was probabilistically delivered, as well as reward omission following high-uncertainty choices. In Experiments 1 and 2, the between groups factor was the magnitude of the high-uncertainty-smaller (H-S) and high-uncertainty-larger (H-L) outcome, respectively. The H-S magnitude manipulation differentiated the groups, while the H-L magnitude manipulation did not. Experiment 3 showed that manipulating the probability of differential losses as well as the expected value of the low-uncertainty choice produced systematic effects on choice behavior. The results suggest that the reference point for probabilistic gains and losses was the expected value of the low-uncertainty choice. Current theories of probabilistic choice behavior have difficulty accounting for the present results, so an integrated theoretical framework is proposed. Overall, the present results have implications for understanding individual differences and corresponding underlying mechanisms of probabilistic choice behavior. PMID:25658448
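The proposed reference point, the expected value of the low-uncertainty choice, is easy to make concrete; the magnitudes and probabilities below are illustrative, not the actual reinforcement schedule used with the rats.

```python
# Outcomes map reward magnitude -> probability.
low_uncertainty = {2: 0.5, 4: 0.5}               # always rewarded, variable size
high_uncertainty = {0: 0.33, 1: 0.33, 8: 0.34}   # includes reward omission (0)

def expected_value(outcomes):
    return sum(m * p for m, p in outcomes.items())

reference = expected_value(low_uncertainty)      # proposed reference point
gains = [m for m in high_uncertainty if m > reference]
losses = [m for m in high_uncertainty if m < reference]
print(reference, gains, losses)
```

Outcomes above the reference would be coded as relative gains and those below as relative losses, which is the classification the choice data were tested against.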
Finite element analysis of provisional structures of implant-supported complete prostheses.
Carneiro, Bruno Albuquerque; de Brito, Rui Barbosa; França, Fabiana Mantovani Gomes
2014-04-01
The use of provisional resin implant-supported complete dentures is a fast and safe procedure to restore mastication and esthetics of patients soon after surgery and during the adaptation phase to the new denture. This study assessed stress distribution of provisional implant-supported fixed dentures and the all-on-4 concept using self-curing acrylic resin (Tempron) and bis-acrylic resin (Luxatemp) to simulate functional loads through the three-dimensional finite element method. Solidworks software was used to build three-dimensional models using acrylic resin (Tempron, model A) and bis-acrylic resin (Luxatemp, model B) for denture captions. Two loading patterns were applied on each model: (1) right unilateral axial loading of 150 N on the occlusal surfaces of posterior teeth and (2) oblique loading vector of 150 N at 45°. The results showed that higher stress was found on the bone crest below oblique load application with a maximum value of 187.57 MPa on model A and 167.45 MPa on model B. It was concluded that model B improved stress distribution on the denture compared with model A.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Small, Ward; Pearson, Mark A.; Metz, Tom R.
Dow Corning SE 1700 (reinforced polydimethylsiloxane) porous structures were made by direct ink writing (DIW) in a face centered tetragonal (FCT) configuration. The filament diameter was 250 μm. Structures consisting of 4, 8, or 12 layers were fabricated with center-to-center filament spacing (“road width” (RW)) of 475, 500, 525, 550, or 575 μm. Three compressive load-unload cycles to 2000 kPa were performed on four separate areas of each sample; three samples of each thickness and filament spacing were tested. At a given strain during the third loading phase, stress varied inversely with porosity. At 10% strain, the stress was nearlymore » independent of the number of layers (i.e., thickness). At higher strains (20- 40%), the stress was highest for the 4-layer structure; the 8- and 12-layer structures were nearly equivalent suggesting that the load deflection is independent of number of layers above 8 layers. Intra-and inter-sample variability of the load deflection response was higher for thinner and less porous structures.« less
Quantum Tasks with Non-maximally Quantum Channels via Positive Operator-Valued Measurement
NASA Astrophysics Data System (ADS)
Peng, Jia-Yin; Luo, Ming-Xing; Mo, Zhi-Wen
2013-01-01
By using a proper positive operator-valued measure (POVM), we present two new schemes for probabilistic transmission with non-maximally entangled four-particle cluster states. In the first scheme, we demonstrate that two non-maximally entangled four-particle cluster states can be used to probabilistically share an unknown three-particle GHZ-type state with either distant agent. In the second protocol, we demonstrate that a non-maximally entangled four-particle cluster state can be used to teleport an arbitrary unknown multi-particle state in a probabilistic manner with appropriate unitary operations and a POVM. Moreover, the total success probabilities of these two schemes are also worked out.
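A quick numerical check of the defining POVM properties the schemes rely on: the elements must be positive semidefinite and sum to the identity. This two-outcome example is generic, not the measurement constructed in the paper.

```python
import numpy as np

p = 0.7                                   # assumed measurement strength
P0 = np.array([[1.0, 0.0], [0.0, 0.0]])   # projector onto |0>
E1 = p * P0                               # POVM element (not a projector)
E2 = np.eye(2) - E1                       # completeness forces the second element

for E in (E1, E2):
    assert np.all(np.linalg.eigvalsh(E) >= -1e-12)   # positive semidefinite
assert np.allclose(E1 + E2, np.eye(2))               # resolves the identity
print("valid POVM")
```

It is this freedom to use non-projective elements that lets a POVM compensate for a non-maximally entangled channel at the cost of a nonzero failure probability.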
NASA Astrophysics Data System (ADS)
Chen, Tzikang J.; Shiao, Michael
2016-04-01
This paper verifies a generic and efficient assessment concept for probabilistic fatigue life management. The concept is developed based on an integration of damage tolerance methodology, simulation methods [1, 2], and a probabilistic algorithm, RPI (recursive probability integration) [3-9], considering maintenance for damage tolerance and risk-based fatigue life management. RPI is an efficient semi-analytical probabilistic method for risk assessment subject to various uncertainties such as the variability in material properties including crack growth rate, initial flaw size, repair quality, random-process modeling of flight loads for failure analysis, and inspection reliability represented by probability of detection (POD). In addition, unlike traditional Monte Carlo simulation (MCS), which requires a rerun whenever the maintenance plan is changed, RPI can repeatedly use a small set of baseline random crack growth histories, excluding maintenance-related parameters, from a single MCS for various maintenance plans. In order to fully assess the RPI method, a verification procedure was performed. In this study, MC simulations on the order of several hundred billion trials were conducted for various flight conditions, material properties, inspection schedules, POD, and repair/replacement strategies. Since MC simulation is time-consuming, the simulations were run in parallel on DoD High Performance Computers (HPC) using a specialized random number generator for parallel computing. The study has shown that the RPI method is several orders of magnitude more efficient than traditional Monte Carlo simulation.
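A hedged sketch of the brute-force Monte Carlo that RPI is benchmarked against: sample an initial flaw, grow it in load blocks, inspect once with a POD curve, and repair on detection. Every parameter here (growth factors, POD shape, critical size) is an illustrative assumption, not the study's crack-growth model.

```python
import math
import random

random.seed(7)

def pod(a_mm):
    # Assumed probability-of-detection curve for a crack of length a (mm).
    return 1.0 / (1.0 + math.exp(-(a_mm - 2.0)))

def one_life(a_crit=10.0, inspect_block=4, n_blocks=10):
    a = max(random.gauss(0.5, 0.2), 0.01)        # initial flaw size, mm
    for block in range(n_blocks):
        a *= random.uniform(1.3, 1.6)            # random growth per load block
        if a > a_crit:
            return True                          # failure
        if block == inspect_block and random.random() < pod(a):
            a = 0.5                              # detected and repaired
    return False

pof = sum(one_life() for _ in range(20_000)) / 20_000
print(pof)   # estimated probability of failure with one mid-life inspection
```

The expensive part is that changing the inspection schedule invalidates every sampled history; RPI's advantage is reusing the baseline growth histories across maintenance plans.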
NASA Astrophysics Data System (ADS)
Liao, Yulong; Zhao, Zhongwei
2018-04-01
Tungsten was recovered from sulfuric-phosphoric acid leach solution of scheelite using 2-octanol and tributyl phosphate (TBP). Approximately 76% of the tungsten and less than 6.2% of the iron were extracted when using 70% 2-octanol, showing good selectivity for tungsten over iron; the tungsten extraction could not be significantly enhanced using a three-stage countercurrent simulation test. Moreover, more than 99.2% of the W and 91.0% of the Fe were extracted when using 70% TBP, showing poor selectivity, but after pretreating the leach solution with iron powder, less than 5.5% of the Fe was extracted. The loaded phases were stripped using deionized water and ammonia solution. The maximum stripping rate of tungsten from loaded 2-octanol was 45.6% when using water, compared with only 13.1% from loaded TBP. Tungsten was efficiently stripped from loaded phases using ammonia solution without formation of Fe(OH)3 precipitate. Finally, a flow sheet for recovery of tungsten with TBP is proposed.
NASA Astrophysics Data System (ADS)
Gao, Siwen; Rajendran, Mohan Kumar; Fivel, Marc; Ma, Anxin; Shchyglo, Oleg; Hartmaier, Alexander; Steinbach, Ingo
2015-10-01
Three-dimensional discrete dislocation dynamics (DDD) simulations in combination with the phase-field method are performed to investigate the influence of different realistic Ni-base single crystal superalloy microstructures with the same volume fraction of γ′ precipitates on plastic deformation at room temperature. The phase-field method is used to generate realistic microstructures as the boundary conditions for DDD simulations in which a constant high uniaxial tensile load is applied along different crystallographic directions. In addition, the lattice mismatch between the γ and γ′ phases is taken into account as a source of internal stresses. Due to the high antiphase boundary energy and the rare formation of superdislocations, precipitate cutting is not observed in the present simulations. Therefore, the plastic deformation is mainly caused by dislocation motion in γ matrix channels. From a comparison of the macroscopic mechanical response and the dislocation evolution for different microstructures in each loading direction, we found that, for a given γ′ phase volume fraction, the optimal microstructure should possess narrow and homogeneous γ matrix channels.
Optimization Testbed Cometboards Extended into Stochastic Domain
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2010-01-01
COMparative Evaluation Testbed of Optimization and Analysis Routines for the Design of Structures (CometBoards) is a multidisciplinary design optimization software. It was originally developed for deterministic calculation and has now been extended into the stochastic domain for structural design problems. For deterministic problems, CometBoards is introduced through its subproblem solution strategy as well as the approximation concept in optimization. In the stochastic domain, a design is formulated as a function of the risk or reliability. The optimum solution, including the weight of a structure, is also obtained as a function of reliability. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to 50 percent probability of success, or one failure in two samples. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, which corresponds to unity for reliability. Weight can be reduced to a small value for the most failure-prone design, with a compromised reliability approaching zero. The stochastic design optimization (SDO) capability for an industrial problem was obtained by combining three codes: the MSC/Nastran code was the deterministic analysis tool; the fast probabilistic integrator (FPI) module of the NESSUS software was the probabilistic calculator; and CometBoards was the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated with an academic example and a real-life airframe component made of metallic and composite materials.
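The inverted-S relationship between weight and reliability described above can be illustrated with a minimal sketch (illustrative numbers only; `weight_for_reliability` and its sizing rule are hypothetical, not the CometBoards formulation): if capacity must cover the load quantile at the target reliability, weight grows without bound as reliability approaches one.

```python
from statistics import NormalDist

def weight_for_reliability(reliability, mean_load=100.0, std_load=10.0, unit_weight=0.05):
    """Hypothetical sizing rule: member capacity must equal the load
    quantile at the target reliability; weight scales linearly with
    that required capacity."""
    required_capacity = NormalDist(mean_load, std_load).inv_cdf(reliability)
    return unit_weight * required_capacity

# At 50 percent reliability (one failure in two samples) the design carries
# only the mean load; pushing reliability toward 1 drives the required
# capacity, and hence the weight, into the far tail of the load distribution.
w50 = weight_for_reliability(0.50)
w99 = weight_for_reliability(0.99)
```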
DACS II - A distributed thermal/mechanical loads data acquisition and control system
NASA Technical Reports Server (NTRS)
Zamanzadeh, Behzad; Trover, William F.; Anderson, Karl F.
1987-01-01
A distributed data acquisition and control system has been developed for the NASA Flight Loads Research Facility. The DACS II system is composed of seven computer systems and four array processors configured as a main computer system, three satellite computer systems, and 13 analog input/output systems interconnected through three independent data networks. Up to three independent heating and loading tests can be run concurrently on different test articles, or the entire system can be used on a single large test such as a full scale hypersonic aircraft. Thermal tests can include up to 512 independent adaptive closed loop control channels. The control system can apply up to 20 MW of heating to a test specimen while simultaneously applying independent mechanical loads. Each thermal control loop is capable of heating a structure at rates of up to 150 F per second over a temperature range of -300 to +2500 F. Up to 64 independent mechanical load profiles can be commanded along with thermal control. Up to 1280 analog inputs monitor temperature, load, displacement and strain on the test specimens, with real time data displayed on up to 15 terminals as color plots and tabular data displays. System setup and operation are accomplished with interactive menu-driven displays, with extensive facilities to assist the users in all phases of system operation.
Kayillo, Sindy; Dennis, Gary R; Shalliker, R Andrew
2006-09-08
In this manuscript the retention and selectivity of a set of linear and non-linear PAHs were evaluated on five different reversed-phase columns. These phases included C18 and C18 Aqua stationary phases, as well as three phenyl phases: Propyl-phenyl, Synergi Polar-RP, and Cosmosil 5PBB. Overall, the results revealed that the phenyl-type columns offered better separation performance for the linear PAHs, while the separation of the structural isomer PAHs was enhanced on the C18 columns. The Propyl-phenyl column was found to have the highest molecular-stationary phase interactions, as evidenced by the greatest rate of change in 'S' (0.71) as a function of the molecular weight in the PAH homologous series, despite having the lowest surface coverage (3% carbon load) (where S is the slope of a plot of log k versus the solvent composition). In contrast, the C18 Aqua column, having the highest surface coverage (15% carbon load), was found to have the second lowest molecular-stationary phase interactions (rate of change in S = 0.61). Interestingly, the Synergi Polar-RP column, which is also a phenyl stationary phase, behaved more 'C18-like' than 'phenyl-like' in many of the tests undertaken. This is probably not unexpected, since all five phases were reversed phases.
Wood, Alexander
2004-01-01
This interim report describes an alternative approach for evaluating the efficacy of using mercury (Hg) offsets to improve water quality. Hg-offset programs may allow dischargers facing higher pollution-control costs to meet their regulatory obligations by making more cost-effective pollutant-reduction decisions. Efficient Hg management requires methods to translate science and economics into a regulatory decision framework. This report documents the work in progress by the U.S. Geological Survey's Western Geographic Science Center in collaboration with Stanford University toward developing this decision framework to help managers, regulators, and other stakeholders decide whether offsets can cost-effectively meet the Hg total maximum daily load (TMDL) requirements in the Sacramento River watershed. Two key approaches being considered are: (1) a probabilistic approach that explicitly incorporates scientific uncertainty, cost information, and value judgments; and (2) a quantitative approach that captures uncertainty in testing the feasibility of Hg offsets. Current fate and transport-process models commonly attempt to predict chemical transformations and transport pathways deterministically. However, the physical, chemical, and biologic processes controlling the fate and transport of Hg in aquatic environments are complex and poorly understood. Deterministic models of Hg environmental behavior contain large uncertainties, reflecting this lack of understanding. The uncertainty in these underlying physical processes may produce similarly large uncertainties in the decisionmaking process. However, decisions about control strategies are still being made despite the large uncertainties in current Hg loadings, the relations between total Hg (HgT) loading and methylmercury (MeHg) formation, and the relations between control efforts and Hg content in fish. 
The research presented here focuses on an alternative analytical approach to the current use of safety factors and deterministic methods for Hg TMDL decision support, one that is fully compatible with an adaptive management approach. This alternative approach uses empirical data and informed judgment to provide a scientific and technical basis for helping National Pollutant Discharge Elimination System (NPDES) permit holders make management decisions. An Hg-offset system would be an option if a wastewater-treatment plant could not achieve NPDES permit requirements for HgT reduction. We develop a probabilistic decision-analytical model consisting of three submodels for HgT loading, MeHg, and cost mitigation within a Bayesian network that integrates information of varying rigor and detail into a simple model of a complex system. Hg processes are identified and quantified by using a combination of historical data, statistical models, and expert judgment. Such an integrated approach to uncertainty analysis allows easy updating of prediction and inference when observations of model variables are made. We demonstrate our approach with data from the Cache Creek watershed (a subbasin of the Sacramento River watershed). The empirical models used to generate the needed probability distributions are based on the same empirical models currently being used by the Central Valley Regional Water Quality Control Cache Creek Hg TMDL working group. The significant difference is that input uncertainty and error are explicitly included in the model and propagated throughout its algorithms. This work demonstrates how to integrate uncertainty into the complex and highly uncertain Hg TMDL decisionmaking process. The various sources of uncertainty are propagated as decision risk that allows decisionmakers to simultaneously consider uncertainties in remediation/implementation costs while attempting to meet environmental/ecologic targets. We must note that this research is on going. 
As more data are collected, the HgT and cost-mitigation submodels are updated and the uncer
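The "easy updating of prediction and inference when observations of model variables are made" can be sketched with a conjugate normal-normal Bayesian update (a minimal illustration with made-up values, not the report's actual HgT submodels):

```python
def normal_update(prior_mean, prior_var, obs, obs_var):
    """Conjugate normal-normal Bayesian update of an uncertain mean load:
    precisions (inverse variances) add, and the posterior mean is a
    precision-weighted average of the prior mean and the observation."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

# Prior belief about an HgT loading rate (illustrative units), refined by
# three monitoring observations; the posterior shifts toward the data and
# the variance (uncertainty) shrinks with each update.
mean, var = 50.0, 25.0
for obs in (62.0, 58.0, 60.0):
    mean, var = normal_update(mean, var, obs, obs_var=16.0)
```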
NASA Astrophysics Data System (ADS)
Doležel, Jiří; Novák, Drahomír; Petrů, Jan
2017-09-01
Transportation routes for oversize and excessive loads are currently planned to ensure that a vehicle can transit the critical points on the road. Critical points include level intersections of roads, bridges, etc. This article presents a comprehensive procedure to determine the reliability and load-bearing capacity level of existing bridges on highways and roads using advanced methods of reliability analysis based on Monte Carlo-type simulation techniques in combination with nonlinear finite element method analysis. The safety index is considered the main criterion of the reliability level of existing structures and is described in current structural design standards, e.g. ISO and Eurocode. As an example, the load-bearing capacity of a 60-year-old single-span slab bridge made of precast prestressed concrete girders is determined for the ultimate limit state and the serviceability limit state. The structure's design load capacity was estimated by a fully probabilistic nonlinear FEM analysis using the Latin Hypercube Sampling (LHS) simulation technique. Load-bearing capacity values based on the fully probabilistic analysis are compared with the load-bearing capacity levels estimated by deterministic methods for a critical section of the most heavily loaded girders.
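Latin Hypercube Sampling, the variance-reduction technique named above, can be sketched in a few lines (a generic pure-Python version, not the authors' implementation):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin Hypercube Sampling on the unit hypercube: each dimension is
    cut into n_samples equal strata, exactly one sample lands in each
    stratum, and the strata are shuffled independently per dimension."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # decouple this dimension from the others
        for i, s in enumerate(strata):
            samples[i][d] = (s + rng.random()) / n_samples
    return samples

# 10 samples in 2 dimensions: every decile of each marginal is hit exactly once.
pts = latin_hypercube(10, 2)
```

Compared with plain Monte Carlo, this stratification covers each marginal distribution evenly, which is why far fewer nonlinear FEM runs are needed for a stable capacity estimate.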
Shock wave-induced phase transition in RDX single crystals.
Patterson, James E; Dreger, Zbigniew A; Gupta, Yogendra M
2007-09-20
The real-time, molecular-level response of oriented single crystals of hexahydro-1,3,5-trinitro-s-triazine (RDX) to shock compression was examined using Raman spectroscopy. Single crystals of [111], [210], or [100] orientation were shocked under stepwise loading to peak stresses from 3.0 to 5.5 GPa. Two types of measurements were performed: (i) high-resolution Raman spectroscopy to probe the material at peak stress and (ii) time-resolved Raman spectroscopy to monitor the evolution of molecular changes as the shock wave reverberated through the material. The frequency shift of the CH stretching modes under shock loading appeared to be similar for all three crystal orientations below 3.5 GPa. Significant spectral changes were observed in crystals shocked above 4.5 GPa. These changes were similar to those observed in static pressure measurements, indicating the occurrence of the alpha-gamma phase transition in shocked RDX crystals. No apparent orientation dependence in the molecular response of RDX to shock compression up to 5.5 GPa was observed. The phase transition had an incubation time of approximately 100 ns when RDX was shocked to 5.5 GPa peak stress. The observation of the alpha-gamma phase transition under shock wave loading is briefly discussed in connection with the onset of chemical decomposition in shocked RDX.
Bayesian networks improve causal environmental ...
Rule-based weight of evidence approaches to ecological risk assessment may not account for uncertainties and generally lack probabilistic integration of lines of evidence. Bayesian networks allow causal inferences to be made from evidence by including causal knowledge about the problem, using this knowledge with probabilistic calculus to combine multiple lines of evidence, and minimizing biases in predicting or diagnosing causal relationships. Too often, sources of uncertainty in conventional weight of evidence approaches are ignored that can be accounted for with Bayesian networks. Specifying and propagating uncertainties improve the ability of models to incorporate strength of the evidence in the risk management phase of an assessment. Probabilistic inference from a Bayesian network allows evaluation of changes in uncertainty for variables from the evidence. The network structure and probabilistic framework of a Bayesian approach provide advantages over qualitative approaches in weight of evidence for capturing the impacts of multiple sources of quantifiable uncertainty on predictions of ecological risk. Bayesian networks can facilitate the development of evidence-based policy under conditions of uncertainty by incorporating analytical inaccuracies or the implications of imperfect information, structuring and communicating causal issues through qualitative directed graph formulations, and quantitatively comparing the causal power of multiple stressors on value
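Combining multiple lines of evidence with probabilistic calculus, as described above, can be sketched in odds form (a minimal illustration assuming conditionally independent lines of evidence; the likelihood-ratio values are hypothetical):

```python
def combine_evidence(prior, likelihood_ratios):
    """Combine independent lines of evidence in odds form:
    posterior odds = prior odds x product of the likelihood ratios."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Prior belief that a stressor causes the observed effect, updated by three
# lines of evidence: LR > 1 supports the causal link, LR < 1 weighs against it.
p = combine_evidence(0.30, [4.0, 2.5, 0.8])
```

A full Bayesian network generalizes this by letting the lines of evidence share structure instead of being assumed independent.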
Sjöberg, C; Ahnesjö, A
2013-06-01
Label fusion multi-atlas approaches for image segmentation can give better segmentation results than single atlas methods. We present a multi-atlas label fusion strategy based on probabilistic weighting of distance maps. Relationships between image similarities and segmentation similarities are estimated in a learning phase and used to derive fusion weights that are proportional to the probability for each atlas to improve the segmentation result. The method was tested using a leave-one-out strategy on a database of 21 pre-segmented prostate patients for different image registrations combined with different image similarity scorings. The probabilistic weighting yields results that are equal or better compared to both fusion with equal weights and results using the STAPLE algorithm. Results from the experiments demonstrate that label fusion by weighted distance maps is feasible, and that probabilistic weighted fusion improves segmentation quality more strongly the more the individual atlas segmentation quality depends on the corresponding registered image similarity. The regions used for evaluation of the image similarity measures were found to be more important than the choice of similarity measure. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
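A minimal sketch of label fusion by weighted distance maps (toy 1-D "images" and hand-picked weights stand in for the learned, probability-derived fusion weights):

```python
def fuse_labels(distance_maps, weights):
    """Fuse atlas segmentations given as signed distance maps (negative
    inside the structure): a voxel is labelled foreground when the
    weight-averaged signed distance is negative."""
    total = sum(weights)
    fused = []
    for voxel in zip(*distance_maps):
        mean_d = sum(w * d for w, d in zip(weights, voxel)) / total
        fused.append(1 if mean_d < 0.0 else 0)
    return fused

# Three atlases voting on four voxels; the third atlas carries the largest
# probabilistic weight (all values are toy numbers).
maps = [[-2.0, -0.5, 1.0, 3.0],
        [-1.5,  0.5, 0.5, 2.0],
        [-2.5, -1.0, -0.2, 2.5]]
labels = fuse_labels(maps, weights=[0.2, 0.3, 0.5])
```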
Impact of Uncertainty from Load-Based Reserves and Renewables on Dispatch Costs and Emissions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Bowen; Maroukis, Spencer D.; Lin, Yashen
2016-11-21
Aggregations of controllable loads are considered to be a fast-responding, cost-efficient, and environmentally friendly candidate for power system ancillary services. Unlike conventional service providers, the potential capacity from the aggregation is highly affected by factors like ambient conditions and load usage patterns. Previous work modeled aggregations of controllable loads (such as air conditioners) as thermal batteries, which are capable of providing reserves but with uncertain capacity. A stochastic optimal power flow problem was formulated to manage this uncertainty, as well as uncertainty in renewable generation. In this paper, we explore how the types and levels of uncertainty, generation reserve costs, and controllable load capacity affect the dispatch solution, operational costs, and CO2 emissions. We also compare the results of two methods for solving the stochastic optimization problem, namely the probabilistically robust method and analytical reformulation assuming Gaussian distributions. Case studies are conducted on a modified IEEE 9-bus system with renewables, controllable loads, and congestion. We find that different types and levels of uncertainty have significant impacts on dispatch and emissions. More controllable loads and less conservative solution methodologies lead to lower costs and emissions.
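The analytical Gaussian reformulation mentioned above can be sketched for a single chance constraint (illustrative numbers; `reserve_requirement` is a hypothetical helper, not the paper's full stochastic OPF):

```python
from statistics import NormalDist

def reserve_requirement(mean_shortfall, std_shortfall, epsilon):
    """Gaussian reformulation of the chance constraint
    P(shortfall <= reserve) >= 1 - epsilon: the reserve must cover the
    mean shortfall plus z_(1-epsilon) standard deviations of uncertainty."""
    z = NormalDist().inv_cdf(1.0 - epsilon)
    return mean_shortfall + z * std_shortfall

# A tighter risk tolerance or more uncertainty both demand more reserve,
# which is one route by which uncertainty raises dispatch costs.
r5 = reserve_requirement(20.0, 5.0, 0.05)   # 5% violation risk
r1 = reserve_requirement(20.0, 5.0, 0.01)   # 1% violation risk
```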
Dash, Aneesh; Selvaraja, S K; Naik, A K
2018-02-15
We present a scheme for on-chip optical transduction of strain and displacement of graphene-based nano-electro-mechanical systems (NEMS). A detailed numerical study on the feasibility of three silicon-photonic integrated circuit configurations is presented: the Mach-Zehnder interferometer (MZI), the micro-ring resonator, and the ring-loaded MZI. An index-sensing based technique using an MZI loaded with a ring resonator with a moderate Q-factor of 2400 can yield a sensitivity of 28 fm/√Hz and 6.5×10⁻⁶ %/√Hz for displacement and strain, respectively. Though any phase-sensitive integrated-photonic device could be used for optical transduction, here we show that optimal sensitivity is achievable by combining resonance with phase sensitivity.
[Forecast of costs of ecodependent cancer treatment for the development of management decisions].
Krasovskiy, V O
2014-01-01
A methodical approach to probabilistic forecasting and differentiation of the treatment costs of ecodependent cancer cases has been elaborated. The approach is useful in organizing medical aid for cancer patients, in developing management decisions to reduce the occupational load on the population, and in solving problems of compensating the population for economic and social losses caused by industrial plants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hilton, Harry H.
Protocols are developed for formulating optimal viscoelastic designer functionally graded materials tailored to best respond to prescribed loading and boundary conditions. In essence, an inverse approach is adopted where material properties instead of structures per se are designed and then distributed throughout structural elements. The final measure of viscoelastic material efficacy is expressed in terms of failure probabilities vs. survival time.
Co-delivery of ibuprofen and gentamicin from nanoporous anodic titanium dioxide layers.
Pawlik, Anna; Jarosz, Magdalena; Syrek, Karolina; Sulka, Grzegorz D
2017-04-01
Although single-drug therapy may prove insufficient in treating bacterial infections or inflammation after orthopaedic surgeries, complex therapy (using both an antibiotic and an anti-inflammatory drug) is thought to address the problem. Among drug delivery systems (DDSs) with prolonged drug release profiles, nanoporous anodic titanium dioxide (ATO) layers on Ti foil are very promising. In the discussed research, ATO samples were synthesized via a three-step anodization process in an ethylene glycol-based electrolyte with fluoride ions. The third step lasted 2, 5, and 10 min in order to obtain different thicknesses of nanoporous layers. Annealing the as-prepared amorphous layers at the temperature of 400°C led to obtaining the anatase phase. In this study, water-insoluble ibuprofen and water-soluble gentamicin were used as model drugs. Three different drug loading procedures were applied. The desorption-desorption-diffusion (DDD) model of the drug release was fitted to the experimental data. The effects of crystalline structure, depth of TiO2 nanopores and loading procedure on the drug release profiles were examined. The duration of the drug release process can be easily altered by changing the drug loading sequence. Water-soluble gentamicin is released for a long period of time if gentamicin is loaded in ATO as the first drug. Additionally, deeper nanopores and the anatase phase suppress the initial burst release of drugs. These results confirm that factors such as the morphological and crystalline structure of ATO layers, and the procedure of drug loading inside nanopores, make it possible to tune the drug release performance of nanoporous ATO layers. Copyright © 2017 Elsevier B.V. All rights reserved.
Behavioral and Temporal Pattern Detection Within Financial Data With Hidden Information
2012-02-01
probabilistic pattern detector to monitor the pattern. Subject terms: runtime verification, hidden data, hidden Markov models, formal specifications. ...sequences in many other fields besides financial systems [L, TV, LC, LZ]. Rather, the technique suggested in this paper is positioned as a hybrid... operation of the pattern detector. Section 7 describes the operation of the probabilistic pattern-matching monitor, and section 8 describes three
NASA Astrophysics Data System (ADS)
Chen, Xiao; Li, Yaan; Yu, Jing; Li, Yuxing
2018-01-01
For fast and more effective implementation of tracking multiple targets in a cluttered environment, we propose a multiple targets tracking (MTT) algorithm called maximum entropy fuzzy c-means clustering joint probabilistic data association that combines fuzzy c-means clustering and the joint probabilistic data association (PDA) algorithm. The algorithm uses the membership value to express the probability of the target originating from measurement. The membership value is obtained through fuzzy c-means clustering objective function optimized by the maximum entropy principle. When considering the effect of the public measurement, we use a correction factor to adjust the association probability matrix to estimate the state of the target. As this algorithm avoids confirmation matrix splitting, it can solve the high computational load problem of the joint PDA algorithm. The results of simulations and analysis conducted for tracking neighbor parallel targets and cross targets in a different density cluttered environment show that the proposed algorithm can realize MTT quickly and efficiently in a cluttered environment. Further, the performance of the proposed algorithm remains constant with increasing process noise variance. The proposed algorithm has the advantages of efficiency and low computational load, which can ensure optimum performance when tracking multiple targets in a dense cluttered environment.
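The maximum-entropy membership idea above can be sketched as a softmax over squared distances (a simplified single-measurement illustration with toy distances, not the full JPDA association step):

```python
import math

def max_entropy_memberships(distances, beta=1.0):
    """Maximum-entropy association: the membership of one measurement in
    each target cluster is a softmax over negative squared distances --
    the closed form obtained when Shannon entropy regularizes the
    c-means objective."""
    scores = [math.exp(-beta * d * d) for d in distances]
    total = sum(scores)
    return [s / total for s in scores]

# One measurement against three targets: the nearest target takes most of
# the association probability, but no target is ever hard-assigned.
u = max_entropy_memberships([0.5, 1.5, 3.0])
```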
Probabilistic Analysis of a SiC/SiC Ceramic Matrix Composite Turbine Vane
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Nemeth, Noel N.; Brewer, David N.; Mital, Subodh
2004-01-01
To demonstrate the advanced composite materials technology under development within the Ultra-Efficient Engine Technology (UEET) Program, it was planned to fabricate, test, and analyze a turbine vane made entirely of silicon carbide-fiber-reinforced silicon carbide matrix composite (SiC/SiC CMC) material. The objective was to utilize a five-harness satin weave melt-infiltrated (MI) SiC/SiC composite material developed under this program to design and fabricate a stator vane that can endure 1000 hours of engine service conditions. The vane was designed such that the expected maximum stresses were kept within the proportional limit strength of the material. Any violation of this design requirement was considered a failure. This report presents results of a probabilistic analysis and reliability assessment of the vane. The probability of failure to meet the design requirements was computed. In the analysis, material properties, strength, and pressure loading were considered as random variables. The pressure loads were considered normally distributed with a nominal variation. A temperature profile on the vane was obtained by performing a computational fluid dynamics (CFD) analysis and was assumed to be deterministic. The results suggest that for the current vane design, the chance of not meeting design requirements is about 1.6 percent.
Probabilistic evaluation of uncertainties and risks in aerospace components
NASA Technical Reports Server (NTRS)
Shah, A. R.; Shiao, M. C.; Nagpal, V. K.; Chamis, C. C.
1992-01-01
This paper summarizes a methodology developed at NASA Lewis Research Center which computationally simulates the structural, material, and load uncertainties associated with Space Shuttle Main Engine (SSME) components. The methodology was applied to evaluate the scatter in the static, buckling, dynamic, fatigue, and damage behavior of the SSME turbopump blade. Also calculated are the probability densities of typical critical blade responses, such as effective stress, natural frequency, damage initiation, and most probable damage path. Risk assessments were performed for different failure modes, and the effect of material degradation on the fatigue and damage behaviors of a blade was calculated using a multi-factor interaction equation. Failure probabilities for different fatigue cycles were computed, and the uncertainties associated with damage initiation and damage propagation due to different load cycles were quantified. Evaluations of the effects of mistuned blades on a rotor were made; uncertainties in the excitation frequency were found to significantly amplify the blade responses of a mistuned rotor. The effects of the number of blades on a rotor were studied. The autocorrelation function of displacements and the probability density function of the first passage time for deterministic and random barriers for structures subjected to random processes were also computed. A brief discussion is included on the future direction of probabilistic structural analysis.
Development of Human Posture Simulation Method for Assessing Posture Angles and Spinal Loads
Lu, Ming-Lun; Waters, Thomas; Werren, Dwight
2015-01-01
Video-based posture analysis employing a biomechanical model is gaining popularity for ergonomic assessments. A human posture simulation method for estimating multiple body postural angles and spinal loads from a video record was developed to expedite ergonomic assessments. The method was evaluated in a repeated-measures study design with three trunk flexion levels, two lift asymmetry levels, three viewing angles, and three trial repetitions as experimental factors. The study comprised two phases evaluating the accuracy of simulating one's own and other people's lifting postures via a proxy of a computer-generated humanoid. The mean accuracies of simulating self and humanoid postures were 12° and 15°, respectively. The repeatability of the method for the same lifting condition was excellent (~2°). The least simulation error was associated with the side viewing angle. The estimated back compressive force and moment, calculated by a three-dimensional biomechanical model, exhibited a range of 5% underestimation. The posture simulation method enables researchers to simultaneously quantify body posture angles and spinal loading variables with accuracy and precision comparable to on-screen posture matching methods. PMID:26361435
NASA Astrophysics Data System (ADS)
Marčič, T.; Štumberger, B.; Štumberger, G.; Hadžiselimović, M.; Zagradišnik, I.
The electromechanical characteristics of induction motors depend on the stator and rotor slot combination used. The correlation between the usage of different stator and rotor slot number combinations, magnetic flux density distributions, no-load iron losses, and rated-load winding over-temperatures is presented for a specific induction motor. The motor's magnetic field was analyzed by traces of the magnetic flux density vector obtained by FEM. Post-processing of the FE magnetic field solution was used for the subsequent calculation of the motor iron loss at no-load. The examined motor stator lamination had 36 semi-closed slots and the rotor laminations had 28, 33, 34, 44 and 46 semi-closed slots.
Sequential Service Restoration for Unbalanced Distribution Systems and Microgrids
Chen, Bo; Chen, Chen; Wang, Jianhui; ...
2017-07-07
The resilience and reliability of modern power systems are threatened by increasingly severe weather events and cyber-physical security events. An effective restoration methodology is desired to optimally integrate emerging smart grid technologies and pave the way for developing self-healing smart grids. In this paper, a sequential service restoration (SSR) framework is proposed to generate restoration solutions for distribution systems and microgrids in the event of large-scale power outages. The restoration solution contains a sequence of control actions that properly coordinate switches, distributed generators, and switchable loads to form multiple isolated microgrids. The SSR can be applied for three-phase unbalanced distribution systems and microgrids and can adapt to various operation conditions. Mathematical models are introduced for three-phase unbalanced power flow, voltage regulators, transformers, and loads. Furthermore, the SSR problem is formulated as a mixed-integer linear programming model, and its effectiveness is evaluated via the modified IEEE 123 node test feeder.
Evaluation of the Demand Response Performance of Electric Water Heaters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony T.; Widder, Sarah H.; Parker, Steven A.
2015-03-17
The purpose of this project is to verify or refute many of the concerns raised by utilities regarding the ability of large-tank HPWHs to perform DR, by measuring the performance of HPWHs compared to ERWHs in providing DR services. This project was divided into three phases. Phase 1 consisted of week-long laboratory experiments designed to demonstrate the technical feasibility of individual large-tank HPWHs in providing DR services compared to large-tank ERWHs. In Phase 2, the individual behaviors of the water heaters were then extrapolated to a population by first calibrating readily available water heater models developed in GridLAB-D simulation software to the experimental results obtained in Phase 1. These models were used to simulate a population of water heaters and generate annual load profiles to assess the impacts on system-level power and residential load curves. Such population modeling allows for the inherent and permanent load reduction accomplished by the more efficient HPWHs to be considered, in addition to the temporal DR services the water heater can provide by switching ON or OFF as needed by utilities. The economic and emissions impacts of using large-tank water heaters in DR programs are then analyzed from the utility and consumer perspectives, based on the National Impacts Analysis in Phase 3. Phase 1 is discussed in this report. Details on Phases 2 and 3 can be found in the companion report (Cooke et al. 2014).
Three-dimensional desirability spaces for quality-by-design-based HPLC development.
Mokhtar, Hatem I; Abdel-Salam, Randa A; Hadad, Ghada M
2015-04-01
In this study, three-dimensional desirability spaces were introduced as a graphical representation method of design space. This was illustrated in the context of application of quality-by-design concepts to the development of a stability-indicating gradient reversed-phase high-performance liquid chromatography method for the determination of vinpocetine and α-tocopheryl acetate in a capsule dosage form. A mechanistic retention model to optimize gradient time, initial organic solvent concentration and ternary solvent ratio was constructed for each compound from six experimental runs. Then, the desirability function of each optimized criterion and subsequently the global desirability function were calculated throughout the knowledge space. The three-dimensional desirability spaces were plotted as zones exceeding a threshold value of desirability index in the space defined by the three optimized method parameters. Probabilistic mapping of the desirability index aided selection of the design space within the potential desirability subspaces. Three-dimensional desirability spaces offered better visualization and potential design spaces for the method as a function of three method parameters, with the ability to assign priorities to each critical quality, as compared with the corresponding resolution spaces.
NASA Astrophysics Data System (ADS)
Wang, Shu-Dong; Zhang, Sheng-Zhong; Liu, Hua; Zhang, You-Zhu
2014-04-01
In this research, the drug loaded polylactide nanofibers are fabricated by electrospinning. Morphology, microstructure and mechanical properties are characterized, and the properties and mechanism of the controlled release of the nanofibers are investigated. The results show that the drug loaded polylactide nanofibers do not show a dispersed phase, and there is good compatibility between polylactide and the drugs. FTIR spectra show that the drugs are encapsulated inside the polylactide nanofibers and do not disrupt the structure of polylactide. The flexibility of the drug loaded polylactide scaffolds is higher than that of the pure polylactide nanofibers. The release rate of the drug loaded nanofibers is significantly slower than that of the drug powder, and it increases with increasing drug concentration. The results suggest a typical diffusion-controlled release mechanism for the three loaded drugs. Antibacterial and cell culture tests show that the drug loaded nanofibers possess effective antibacterial activity and biocompatible properties.
NASA Technical Reports Server (NTRS)
Canfield, R. C.; Ricchiazzi, P. J.
1980-01-01
An approximate probabilistic radiative transfer equation and the statistical equilibrium equations are simultaneously solved for a model hydrogen atom consisting of three bound levels and ionization continuum. The transfer equation for L-alpha, L-beta, H-alpha, and the Lyman continuum is explicitly solved assuming complete redistribution. The accuracy of this approach is tested by comparing source functions and radiative loss rates to values obtained with a method that solves the exact transfer equation. Two recent model solar-flare chromospheres are used for this test. It is shown that for the test atmospheres the probabilistic method gives values of the radiative loss rate that are characteristically good to a factor of 2. The advantage of this probabilistic approach is that it retains a description of the dominant physical processes of radiative transfer in the complete redistribution case, yet it achieves a major reduction in computational requirements.
Application of the Probabilistic Dynamic Synthesis Method to the Analysis of a Realistic Structure
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis method is a new technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. A previous work verified the feasibility of the PDS method on a simple seven degree-of-freedom spring-mass system. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.
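The Monte Carlo branch of such a probabilistic analysis can be sketched for the simplest possible case, a single-degree-of-freedom oscillator with random stiffness and mass. The nominal values and coefficients of variation below are arbitrary illustrations, not data from the paper.

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical nominal parameters (illustrative, not from the paper):
k_mean, k_cov = 1.0e5, 0.05   # stiffness, N/m, 5% coefficient of variation
m_mean, m_cov = 10.0, 0.02    # mass, kg, 2% coefficient of variation

# Monte Carlo sampling of the first natural frequency f = sqrt(k/m)/(2*pi):
freqs = []
for _ in range(20000):
    k = random.gauss(k_mean, k_cov * k_mean)
    m = random.gauss(m_mean, m_cov * m_mean)
    freqs.append(math.sqrt(k / m) / (2.0 * math.pi))   # Hz

mu = statistics.mean(freqs)   # statistics of the desired response quantity
sd = statistics.stdev(freqs)
```

The same sampling loop generalizes to the multi-substructure case by replacing the closed-form frequency with an eigenvalue solve per sample, which is exactly why response surface alternatives become attractive.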
Heck, Daniel W; Hilbig, Benjamin E; Moshagen, Morten
2017-08-01
Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can be directly compared using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. As in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions.
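A probabilistic take-the-best rule of the kind described can be sketched as follows. The cue values and error probabilities are invented for illustration, and the actual model comparison in the paper uses minimum description length or Bayes factors, not simulation.

```python
import random

random.seed(1)

# Hypothetical binary cue profiles for two options, already ordered by
# descending cue validity (1 = positive, 0 = negative). Illustrative only.
cues_a = [1, 0, 1]
cues_b = [1, 1, 0]
# Rank-ordered error probabilities: the probabilistic TTB variant assumes
# errors are non-decreasing down the cue hierarchy (e1 <= e2 <= e3).
errors = [0.05, 0.10, 0.20]

def ttb_choice(cues_a, cues_b, errors):
    """Take-the-best with per-rank error rates: the first discriminating cue
    decides, but the decision is flipped with that cue's error probability."""
    for ca, cb, e in zip(cues_a, cues_b, errors):
        if ca != cb:
            best = 'A' if ca > cb else 'B'
            if random.random() < e:               # response error
                best = 'B' if best == 'A' else 'A'
            return best
    return random.choice(['A', 'B'])              # guess if nothing discriminates
```

Here the first cue ties and the second favors B, so simulated choices favor B about 90% of the time (1 minus that cue's error rate).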
Application of the Probabilistic Dynamic Synthesis Method to Realistic Structures
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis method is a technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. In previous work, the feasibility of the PDS method applied to a simple seven degree-of-freedom spring-mass system was verified. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.
Design, fabrication, testing and delivery of a solar collector
NASA Technical Reports Server (NTRS)
Sims, W. H.; Ballheim, R. W.; Bartley, S. M.; Smith, G. W.
1976-01-01
A two phase program encompassing the redesign and fabrication of a solar collector which is low in cost and aesthetically appealing is described. Phase one work reviewed the current collector design and developed a low-cost design based on specific design/performance/cost requirements. Throughout this phase selected collector component materials were evaluated by testing and by considering cost, installation, maintainability and durability. The resultant collector design was composed of an absorber plate, insulation, frame, cover, desiccant and sealant. In Phase two, three collector prototypes were fabricated and evaluated for both nonthermal and thermal characteristics. Tests included static load tests of covers, burst pressure tests of absorber plates, and tests for optical characteristics of selective absorber plate coatings. The three prototype collectors were shipped to Marshall Space Flight Center for use in their solar heating and cooling test facility.
Probabilistic metrology or how some measurement outcomes render ultra-precise estimates
NASA Astrophysics Data System (ADS)
Calsamiglia, J.; Gendra, B.; Muñoz-Tapia, R.; Bagan, E.
2016-10-01
We show on theoretical grounds that, even in the presence of noise, probabilistic measurement strategies (which have a certain probability of failure or abstention) can provide, upon a heralded successful outcome, estimates with a precision that exceeds the deterministic bounds for the average precision. This establishes a new ultimate bound on the phase estimation precision attainable for particular measurement outcomes (or sequences of outcomes). For probe systems subject to local dephasing, we quantify this precision limit as a function of the probability of failure that can be tolerated. Our results show that the possibility of abstaining can offset the detrimental effects of noise.
GENERAL: A modified weighted probabilistic cellular automaton traffic flow model
NASA Astrophysics Data System (ADS)
Zhuang, Qian; Jia, Bin; Li, Xin-Gang
2009-08-01
This paper modifies the weighted probabilistic cellular automaton model (Li X L, Kuang H, Song T, et al 2008 Chin. Phys. B 17 2366) which considered a diversity of traffic behaviors under real traffic situations induced by various driving characters and habits. In the new model, the effects of the velocity at the last time step and drivers' desire for acceleration are taken into account. The fundamental diagram, spatial-temporal diagram, and the time series of one-minute data are analyzed. The results show that this model reproduces synchronized flow. Finally, it simulates the on-ramp system with the proposed model. Some characteristics including the phase diagram are studied.
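The weighted probabilistic model discussed here extends the classic Nagel-Schreckenberg (NaSch) cellular automaton. A minimal NaSch sketch (all parameters illustrative, not the modified weighted model itself) shows the update cycle such models build on: accelerate, brake to the gap, random slowdown, move.

```python
import random

random.seed(4)

# Illustrative parameters: ring road length, car count, speed limit,
# randomization probability.
L_ROAD, N_CARS, V_MAX, P_SLOW = 100, 20, 5, 0.3

pos = sorted(random.sample(range(L_ROAD), N_CARS))  # distinct start cells
vel = [0] * N_CARS

def step(pos, vel):
    """One parallel NaSch update on a periodic ring."""
    order = sorted(range(N_CARS), key=lambda i: pos[i])  # ring order
    new_vel = list(vel)
    for idx, i in enumerate(order):
        ahead = pos[order[(idx + 1) % N_CARS]]
        gap = (ahead - pos[i] - 1) % L_ROAD      # free cells to the leader
        v = min(vel[i] + 1, V_MAX, gap)          # accelerate, never collide
        if v > 0 and random.random() < P_SLOW:   # random slowdown
            v -= 1
        new_vel[i] = v
    for i in range(N_CARS):
        vel[i] = new_vel[i]
        pos[i] = (pos[i] + vel[i]) % L_ROAD

for _ in range(200):
    step(pos, vel)
```

Because each car moves at most its gap while positions are read before anyone moves, the update is collision-free by construction; fundamental diagrams are obtained by sweeping the density N_CARS/L_ROAD.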
NASA Technical Reports Server (NTRS)
Fragola, Joseph R.; Maggio, Gaspare; Frank, Michael V.; Gerez, Luis; Mcfadden, Richard H.; Collins, Erin P.; Ballesio, Jorge; Appignani, Peter L.; Karns, James J.
1995-01-01
The application of the probabilistic risk assessment methodology to the Space Shuttle environment, particularly to the potential of losing the Shuttle during nominal operation, is addressed. The different related concerns are identified and combined to determine overall program risks. A fault tree model is used to allocate system probabilities to the subsystem level. The loss of the vehicle due to failure to contain energetic gas and debris or to maintain proper propulsion and configuration is analyzed, along with losses due to Orbiter failure, external tank failure, and landing failure or error.
A New Ontological View of the Quantum Measurement Problem
2005-06-13
…broader issues in the foundations of quantum mechanics as well. In this scenario, a quantum measurement is a non-equilibrium phase transition in a "resonant cavity" … ontology, and the probabilistic element is removed from the foundations of quantum mechanics, its apparent presence in the quantum measurement being solely …
Taborri, Juri; Rossi, Stefano; Palermo, Eduardo; Patanè, Fabrizio; Cappa, Paolo
2014-01-01
In this work, we applied a hierarchical weighted decision, proposed and used in other research fields, to the recognition of gait phases. The developed and validated novel distributed classifier is based on hierarchical weighted decision from outputs of scalar Hidden Markov Models (HMM) applied to the angular velocities of foot, shank, and thigh. The angular velocities of ten healthy subjects were acquired via three uni-axial gyroscopes embedded in inertial measurement units (IMUs) during one walking task, repeated three times, on a treadmill. After validating the novel distributed classifier and the scalar and vectorial classifiers already proposed in the literature with a cross-validation, the classifiers were compared for sensitivity, specificity, and computational load for all combinations of the three targeted anatomical segments. Moreover, the performance of the novel distributed classifier in the estimation of gait variability, in terms of mean time and coefficient of variation, was evaluated. The highest values of specificity and sensitivity (>0.98) for the three classifiers examined here were obtained when the angular velocity of the foot was processed. Distributed and vectorial classifiers reached acceptable values (>0.95) when the angular velocities of shank and thigh were analyzed. Distributed and scalar classifiers showed values of computational load about 100 times lower than that obtained with the vectorial classifier. In addition, distributed classifiers showed an excellent reliability for the evaluation of mean time and a good/excellent reliability for the coefficient of variation. In conclusion, due to the better performance and the small computational load, the proposed novel distributed classifier can be implemented in real-time applications of gait phase recognition, for example, to evaluate gait variability in patients or to control active orthoses for the recovery of mobility of lower limb joints.
NASA Astrophysics Data System (ADS)
Movchan, A. A.; Sil'chenko, L. G.
2008-02-01
We solve the axisymmetric buckling problem for a circular plate made of a shape memory alloy undergoing reverse martensite transformation under the action of a compressing load, which occurs after the direct martensite transformation under the action of a generally different (extending or compressing) load. The problem was solved without any simplifying assumptions concerning the transverse dimension of the supplementary phase transition region related to buckling. The mathematical problem was reduced to a nonlinear eigenvalue problem. An algorithm for solving this problem was proposed. It was shown that the critical buckling load under the reverse transition, which is obtained by taking into account the evolution of the phase strains, can be many times lower than the same quantity obtained under the assumption that the material behavior is elastic even for the least (martensite) values of the elastic moduli. The critical buckling force decreases with increasing modulus of the load applied at the preliminary stage of direct transition and weakly depends on whether this load was extending or compressing. In shape memory alloys (SMA), mutually related processes of strain and direct (from the austenitic into the martensite phase) or reverse thermoelastic phase transitions may occur. The direct transition occurs under cooling and (or) an increase in stresses and is accompanied by a significant decrease (nearly by a factor of three in titanium nickelide) of the Young modulus. If the direct transition occurs under the action of stresses with nonzero deviator, then it is accompanied by accumulation of macroscopic phase strains, whose intensity may reach 8%. Under the reverse transition, which occurs under heating and (or) unloading, the moduli increase and the accumulated strain is removed.
For plates compressed in their plane, in the case of uniform temperature distribution over the thickness, one can separate trivial processes under which the strained plate remains plane and the phase ratio has a uniform distribution over the thickness. For sufficiently high compressing loads, the trivial process of uniform compression may become unstable in the sense that, for small perturbations of the plate deflection, temperature, the phase ratio, or the load, the difference between the corresponding perturbed process and the unperturbed process may be significant. The results of several experiments concerning the buckling of SMA elements are given in [1, 2], and the statement and solution of the corresponding boundary value problems can be found in [3-11]. The experimental studies [2] and several analytic solutions obtained for the Shanley column [3, 4], rods [5-7], rectangular plates under direct [8] and reverse [9] transitions showed that the processes of thermoelastic phase transitions can significantly (by several times) decrease the critical buckling loads compared with their elastic values calculated for the less rigid martensite state of the material. Moreover, buckling does not occur in the one-phase martensite state in which the elastic moduli are minimal but in the two-phase state in which the values of the volume fractions of the austenitic and martensite phase are approximately equal to each other. This fact is most astonishing for buckling, studied in the present paper, under the reverse transition in which the Young modulus increases approximately half as much from the beginning of the phase transition to the moment of buckling. In [3-9] and in the present paper, the static buckling criterion is used. Following this criterion, the critical load is defined to be the load such that a nontrivial solution of the corresponding quasistatic problem is possible under the action of this load. 
If, in the problems of stability of rods and SMA plates, small perturbations of the external load are added to small perturbations of the deflection (the critical force is independent of the amplitude of the latter), then the critical forces vary depending on the value of perturbations of the external load [5, 8, 9]. Thus, in the case of small perturbations of the load, the problem of stability of SMA elements becomes indeterminate. The solution of the stability problem for SMA elements also depends on whether the small perturbations of the phase ratio and the phase strain tensor are taken into account. According to this, the problem of stability of SMA elements can be solved in the framework of several statements (concepts, hypotheses) which differ in the set of quantities whose perturbations are admissible (taken into account) in the process of solving the problem. The variety of these statements applied to the problem of buckling of SMA elements under direct martensite transformation is briefly described in [4, 5]. But, in the problem of buckling under the reverse transformation, some of these statements must be changed. The main question which we should answer when solving the problem of stability of SMA elements is whether small perturbations of the phase ratio (the volume fraction of the martensite phase q) are taken into account, because an appropriate choice significantly varies the results of solving the stability problem. If, under the transition to the adjacent form of equilibrium, the phase ratio of all points of the body is assumed to remain the same, then we deal with the "fixed phase ratio" concept. The opposite approach can be classified as the "supplementary phase transition" concept (which occurs under the transition to the adjacent form of equilibrium). It should be noted that, since SMA have temperature hysteresis, the phase ratio in SMA can endure only one-sided small variations.
But if we deal with buckling under the inverse transformation, then the variation in the volume fraction of the martensite phase cannot be positive. The phase ratio is not an independent variable, like loads or temperature, but, due to the constitutive relations, its variations occur together with the temperature variations and, in the framework of connected models for a majority of SMA, together with variations in the actual stresses. Therefore, the presence or absence of variations in q is determined by the presence or absence of variations in the temperature, deflection, and load, as well as by the system of constitutive relations used in this particular problem. In the framework of unconnected models which do not take the influence of actual stresses on the phase ratio into account, the "fixed phase ratio" concept corresponds to the case of absence of temperature variations. The variations in the phase ratio may also be absent in connected models in the case of specially chosen values of variations in the temperature and (or) in the external load, as well as in the case of SMA of CuMn type, for which the influence of the actual stresses on the phase composition is absent or negligible. In the framework of the "fixed phase ratio" hypothesis, the stability problem for SMA elements has a solution coinciding in form with the solution of the corresponding elastic problem, with the elastic moduli replaced by the corresponding functions of the phase ratio. In the framework of the "supplementary phase transition" concept, the result of solving the stability problem essentially depends on whether the small perturbations of the external loads are taken into account in the process of solving the problem.
The point is that, when solving the problem in the connected setting, the supplementary phase transition region occupies, in general, not the entire cross-section of the plate but only part of it, and the location of the boundary of this region depends on the existence and the value of these small perturbations. More precisely, the existence of arbitrarily small perturbations of the actual load can result in finite changes of the configuration of the supplementary phase transition region and hence in finite change of the critical values of the load. Here we must distinguish the "fixed load" hypothesis, where no perturbations of the external loads are admitted, and the "variable load" hypothesis in the opposite case. The condition that there are no variations in the external loads implies additional equations for determining the boundary of the supplementary phase transition region. If the "supplementary phase transition" concept and the "fixed load" concept are used together, then the solution of the stability problem of SMA is uniquely determined in the same sense as the solution of the elastic stability problem under the static approach. In the framework of the "variable load" concept, the result of solving the stability problem for SMA ceases to be unique. But one can find the upper and lower bounds for the critical forces which correspond to the cases of total absence of the supplementary phase transition: the upper bound corresponds to the critical load coinciding with that determined in the framework of the "fixed phase ratio" concept, and the lower bound corresponds to the case where the entire cross-section of the plate experiences the supplementary phase transition. The first version does not need any additional name, and the second version can be called the "all-round supplementary phase transition" hypothesis.
In the present paper, the above concepts are illustrated by examples of solving problems about axisymmetric buckling of a circular freely supported or rigidly fixed plate experiencing reverse martensite transformation under the action of an external force uniformly distributed over the contour. We find analytic solutions in the framework of all the above-listed statements except for the case of free support in the "fixed load" concept, for which we obtain a numerical solution.
Biomechanical studies on the effect of iatrogenic dentin removal on vertical root fractures.
Ossareh, A; Rosentritt, M; Kishen, A
2018-01-01
The aim of this study was to understand the mechanism by which iatrogenic root dentin removal influences radicular stress distribution and subsequently affects the resistance to vertical root fractures (VRF) in endodontically treated teeth. The experiments were conducted in two phases. Phase 1: freshly extracted premolar teeth maintained in phosphate-buffered saline were instrumented to simulate three different degrees of dentin removal, designated as the low, medium, and extreme groups. Micro-CT analyses were performed to quantitatively determine: (a) the amount of dentin removed, (b) the remaining dentin volume, and (c) the moment of inertia of root dentin. The specimens were then subjected to thermomechanical cycling and continuous loading to determine (a) the mechanical load to fracture and (b) dentin microcracking (fractography) using scanning electron microscopy. Phase 2: finite element analysis was used to evaluate the influence of dentin removal on the stress distribution pattern in root dentin. The data obtained were analyzed using one-way ANOVA and Tukey's post hoc test (P < 0.05). Phase 1: a significantly greater volume of dentin was removed from teeth in the extreme group when compared to the low group (P < 0.01). The mechanical analysis showed that the load to fracture was significantly lower in teeth from the extreme group (P < 0.05). A linear relationship was observed between the moment of inertia and the load to fracture in all experimental groups (R² = 0.52). Fractography showed that most microcracks were initiated from the root canal walls in the extreme group. Phase 2: the numerical analysis showed that the radicular stress increased apically and buccolingually with a greater degree of root canal dentin removal. The combined experimental/numerical analyses highlighted the influence of remaining root dentin volume on the radicular bending resistance, stress distribution pattern, and subsequent propensity to VRF.
NASA Astrophysics Data System (ADS)
Doletskaya, L. I.; Solopov, R. V.; Kavchenkov, V. P.; Andreenkov, E. S.
2017-12-01
The physical features of the damage of 10 kV aerial lines under ice and wind loads are examined; mathematical models for estimating the reliability of the mechanical part of aerial lines are described, using analytical theoretical methods and corresponding mathematical models that take into account the probabilistic nature of ice and wind loads; and calculation results on reliability, specific damage, and average restoration time in case of emergency outages of 10 kV aerial transmission lines with uninsulated and protected wires are presented.
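The simplest probabilistic building block for such reliability estimates is load-strength interference: the line fails when the ice-and-wind load exceeds the mechanical resistance. Assuming independent normal load and resistance (a common textbook assumption; the distribution choice and all numbers below are illustrative, not values from the paper):

```python
import math

def failure_probability(mu_r, sd_r, mu_l, sd_l):
    """P(load > resistance) for independent normal resistance R and load L:
    reliability index beta = (mu_r - mu_l) / sqrt(sd_r^2 + sd_l^2),
    P_f = Phi(-beta), with Phi the standard normal CDF via erfc."""
    beta = (mu_r - mu_l) / math.sqrt(sd_r**2 + sd_l**2)
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

# Illustrative numbers: conductor strength 30 +/- 3 kN,
# combined ice-and-wind load 20 +/- 4 kN (hypothetical).
p_f = failure_probability(30.0, 3.0, 20.0, 4.0)
```

Here beta = 10/5 = 2, so the per-event failure probability is about 2.3%; annualizing it then requires the frequency of ice and wind events.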
NASA Astrophysics Data System (ADS)
Liu, Yuan; Wang, Mingqiang; Ning, Xingyao
2018-02-01
Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, the two tradeoffs, a primary tradeoff and a secondary tradeoff between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model, are explored and analyzed in a small system and in the IEEE-RTS System. The analysis of the two tradeoffs can help in establishing new efficient simplified LOLP formulations and new SR optimization models.
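For a small system, the LOLP at the center of these constraints can be computed exactly from the capacity-outage probability table, built by convolving the two-state outage distributions of the committed units. A sketch with hypothetical unit data (capacities and forced outage rates are invented for illustration):

```python
def outage_table(units):
    """Capacity-outage probability table via discrete convolution.
    units: list of (capacity_MW, forced_outage_rate) pairs."""
    probs = {0.0: 1.0}                    # P(total capacity on outage = x)
    for cap, q in units:
        nxt = {}
        for out, p in probs.items():
            nxt[out] = nxt.get(out, 0.0) + p * (1.0 - q)        # unit available
            nxt[out + cap] = nxt.get(out + cap, 0.0) + p * q    # unit on outage
        probs = nxt
    return probs

def lolp(units, load):
    """Loss of load probability: P(available capacity < load)."""
    total = sum(cap for cap, _ in units)
    return sum(p for out, p in outage_table(units).items() if total - out < load)

# Hypothetical committed units: two 100 MW units, each with a 10% outage rate.
units = [(100.0, 0.1), (100.0, 0.1)]
```

The table grows with the number of distinct outage states, which is why large UC models resort to the simplified LOLP formulations the abstract mentions.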
30 CFR 75.905 - Connection of single-phase loads.
Code of Federal Regulations, 2011 CFR
2011-07-01
Title 30, Mineral Resources; Mine Safety and Health Administration, Department of Labor; Coal Mine… Alternating Current Circuits, § 75.905: Connection of single-phase loads. [Statutory Provisions] Single-phase…
30 CFR 75.905 - Connection of single-phase loads.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 30, Mineral Resources; Mine Safety and Health Administration, Department of Labor; Coal Mine… Alternating Current Circuits, § 75.905: Connection of single-phase loads. [Statutory Provisions] Single-phase…
Optimization of Boiling Water Reactor Loading Pattern Using Two-Stage Genetic Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kobayashi, Yoko; Aiyoshi, Eitaro
2002-10-15
A new two-stage optimization method based on genetic algorithms (GAs) using an if-then heuristic rule was developed to generate optimized boiling water reactor (BWR) loading patterns (LPs). In the first stage, the LP is optimized using an improved GA operator. In the second stage, an exposure-dependent control rod pattern (CRP) is sought using a GA with an if-then heuristic rule. The procedure of the improved GA is based on deterministic operators that consist of crossover, mutation, and selection. The handling of the encoding technique and constraint conditions by that GA reflects the peculiar characteristics of the BWR. In addition, strategies such as elitism and self-reproduction are effectively used in order to improve the search speed. The LP evaluations were performed with a three-dimensional diffusion code that coupled neutronic and thermal-hydraulic models. Strong axial heterogeneities and constraints dependent on three dimensions have always necessitated the use of three-dimensional core simulators for BWRs, so optimization of computational efficiency is required. The proposed algorithm is demonstrated by successfully generating LPs for an actual BWR plant in two phases. One phase is LP optimization alone, applying the Haling technique. The other phase is an LP optimization that considers the CRP during reactor operation. In test calculations, candidates that shuffled fresh and burned fuel assemblies within a reasonable computation time were obtained.
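The GA machinery described (selection, one-point crossover, mutation, elitism) can be sketched on a toy fitness function. In a real application the `fitness` stand-in below would be replaced by a call to the three-dimensional core simulator, and all parameters here are illustrative assumptions, not those of the paper.

```python
import random

random.seed(3)

# Toy stand-in for a loading-pattern objective: maximize the number of 1-bits
# ("onemax"). Illustrative only; the BWR objective comes from a core simulator.
N_GENES, POP, GENS = 20, 30, 60

def fitness(ind):
    return sum(ind)

def evolve():
    pop = [[random.randint(0, 1) for _ in range(N_GENES)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:2]                          # elitism: carry the two best over
        while len(nxt) < POP:
            a, b = random.sample(pop[:10], 2)  # selection from the top ranks
            cut = random.randrange(1, N_GENES)
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < 0.2:          # mutation: flip one gene
                i = random.randrange(N_GENES)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

The elitism step guarantees the best candidate never regresses between generations, which is one of the search-speed strategies the abstract highlights.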
Model fitting data from syllogistic reasoning experiments.
Hattori, Masasi
2016-12-01
The data presented in this article are related to the research article entitled "Probabilistic representation in syllogistic reasoning: A theory to integrate mental models and heuristics" (M. Hattori, 2016) [1]. This article presents the data predicted by three signature probabilistic models of syllogistic reasoning, together with model fitting results for each of a total of 12 experiments (N = 404) in the literature. The models are implemented in R, and their source code is also provided.
Probabilistic finite elements for fatigue and fracture analysis
NASA Astrophysics Data System (ADS)
Belytschko, Ted; Liu, Wing Kam
1993-04-01
An overview of the probabilistic finite element method (PFEM) developed by the authors and their colleagues in recent years is presented. The primary focus is placed on the development of PFEM for both structural mechanics problems and fracture mechanics problems. The perturbation techniques are used as major tools for the analytical derivation. The following topics are covered: (1) representation and discretization of random fields; (2) development of PFEM for the general linear transient problem and nonlinear elasticity using Hu-Washizu variational principle; (3) computational aspects; (4) discussions of the application of PFEM to the reliability analysis of both brittle fracture and fatigue; and (5) a stochastic computational tool based on stochastic boundary element (SBEM). Results are obtained for the reliability index and corresponding probability of failure for: (1) fatigue crack growth; (2) defect geometry; (3) fatigue parameters; and (4) applied loads. These results show that initial defect is a critical parameter.
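The perturbation idea underlying PFEM can be reduced to its simplest instance, a one-degree-of-freedom system u = F/k with a single random stiffness: propagate the parameter variance through the response sensitivity, then cross-check against Monte Carlo. All numbers are illustrative assumptions, not values from the paper.

```python
import random
import statistics

# Illustrative data: deterministic load, random stiffness with 5% c.o.v.
F = 1000.0                    # N
k_mean, k_sd = 5.0e4, 2.5e3   # N/m

# First-order perturbation (the core PFEM step):
u_mean = F / k_mean           # zeroth-order (mean-value) response
du_dk = -F / k_mean**2        # response sensitivity evaluated at the mean
u_sd = abs(du_dk) * k_sd      # first-order standard deviation of the response

# Monte Carlo cross-check of the perturbation estimate:
random.seed(2)
mc = [F / random.gauss(k_mean, k_sd) for _ in range(50000)]
mc_sd = statistics.stdev(mc)
```

For this mildly nonlinear response the first-order estimate and the Monte Carlo standard deviation agree to within a fraction of a percent; for strong nonlinearity or large coefficients of variation, higher-order perturbation terms become necessary.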
Probabilistic Fatigue Damage Program (FATIG)
NASA Technical Reports Server (NTRS)
Michalopoulos, Constantine
2012-01-01
FATIG computes fatigue damage/fatigue life using the stress rms (root mean square) value, the total number of cycles, and S-N curve parameters. The damage is computed by the following methods: (a) the traditional method using Miner's rule, with stress cycles determined from a Rayleigh distribution up to 3*sigma; and (b) the classical fatigue damage formula involving the Gamma function, which is derived from the integral version of Miner's rule. The integration is carried out over all stress amplitudes. This software solves the problem of probabilistic fatigue damage using the integral form of the Palmgren-Miner rule. The software computes fatigue life using an approach involving all stress amplitudes, up to N*sigma, as specified by the user. It can be used in the design of structural components subjected to random dynamic loading, or by any stress analyst with minimal training for fatigue life estimates of structural components.
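The closed-form damage formula described, i.e. the integral form of the Palmgren-Miner rule for Rayleigh-distributed stress amplitudes with a Basquin-type S-N curve N(S) = A·S^(-m), can be sketched and cross-checked numerically. This is a sketch of the underlying mathematics, not the FATIG code; all parameter values are illustrative.

```python
import math

def damage_gamma(n_total, sigma, m, A):
    """Closed-form Miner damage for Rayleigh-distributed stress amplitudes
    with rms sigma and S-N curve N(S) = A * S**-m:
    D = (n_total / A) * (sqrt(2)*sigma)**m * Gamma(1 + m/2)."""
    return n_total / A * (math.sqrt(2.0) * sigma) ** m * math.gamma(1.0 + m / 2.0)

def damage_numeric(n_total, sigma, m, A, s_max_sig=8.0, steps=200000):
    """Numerical cross-check: integrate p(s) * s**m over amplitudes up to
    s_max_sig * sigma, with p(s) the Rayleigh density."""
    ds = s_max_sig * sigma / steps
    acc = 0.0
    for i in range(1, steps + 1):
        s = i * ds
        p = (s / sigma**2) * math.exp(-s**2 / (2.0 * sigma**2))  # Rayleigh pdf
        acc += p * s**m * ds
    return n_total / A * acc

# Illustrative inputs: 1e6 cycles, 50 MPa rms, m = 3, A = 1e12 (hypothetical).
d_closed = damage_gamma(1.0e6, 50.0, 3.0, 1.0e12)
d_numeric = damage_numeric(1.0e6, 50.0, 3.0, 1.0e12)
```

Truncating the integral at a finite multiple of sigma (method (a) in the abstract, with 3*sigma) always yields less damage than the full Gamma-function result, since the rare high-amplitude cycles are the most damaging.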
Bridges for Pedestrians with Random Parameters using the Stochastic Finite Elements Analysis
NASA Astrophysics Data System (ADS)
Szafran, J.; Kamiński, M.
2017-02-01
The main aim of this paper is to present a Stochastic Finite Element Method analysis with reference to the principal design parameters of bridges for pedestrians: the eigenfrequency and the deflection of the bridge span. These are considered with respect to the random thickness of plates in the boxed-section bridge platform, the Young's modulus of structural steel, and the static load resulting from a crowd of pedestrians. The influence of the quality of the numerical model in the context of traditional FEM is also shown using the example of a simple steel shield. Steel structures with random parameters are discretized in exactly the same way as for the needs of the traditional Finite Element Method. Its probabilistic version is provided thanks to the Response Function Method, where several numerical tests with random parameter values varying around their mean values enable the determination of the structural response and, thanks to the Least Squares Method, of its final probabilistic moments.
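The Response Function Method step can be sketched as follows, under stated assumptions: a single random parameter, a cheap callable standing in for the FEM solver, and a polynomial response function fitted by least squares, then integrated against a Gaussian density for the output moments. All names are hypothetical.

```python
import numpy as np

def response_moments(solver, mean, cov, order=2, spread=0.15, ntrials=9):
    """Response Function Method sketch: run the 'FEM solver' at a few parameter
    values around the mean, least-squares-fit a polynomial response function,
    then integrate it against a Gaussian density for the response moments."""
    x = mean * (1 + spread * np.linspace(-1, 1, ntrials))   # trial parameter values
    y = np.array([solver(xi) for xi in x])                  # structural responses
    coef = np.polyfit(x, y, order)                          # least-squares fit
    # Gauss-Hermite quadrature for expectations w.r.t. X ~ N(mean, (cov*mean)^2)
    nodes, weights = np.polynomial.hermite_e.hermegauss(10)
    sd = cov * mean
    vals = np.polyval(coef, mean + sd * nodes)
    m1 = (weights @ vals) / np.sqrt(2 * np.pi)              # first moment
    m2 = (weights @ vals ** 2) / np.sqrt(2 * np.pi)         # second moment
    return m1, m2 - m1 ** 2                                 # mean, variance
```

For a linear response the fitted polynomial is exact, so the returned mean and variance match the analytical Gaussian propagation.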
Probabilistic finite elements for fatigue and fracture analysis
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Liu, Wing Kam
1993-01-01
An overview of the probabilistic finite element method (PFEM) developed by the authors and their colleagues in recent years is presented. The primary focus is placed on the development of PFEM for both structural mechanics problems and fracture mechanics problems. The perturbation techniques are used as major tools for the analytical derivation. The following topics are covered: (1) representation and discretization of random fields; (2) development of PFEM for the general linear transient problem and nonlinear elasticity using the Hu-Washizu variational principle; (3) computational aspects; (4) discussions of the application of PFEM to the reliability analysis of both brittle fracture and fatigue; and (5) a stochastic computational tool based on the stochastic boundary element method (SBEM). Results are obtained for the reliability index and corresponding probability of failure for: (1) fatigue crack growth; (2) defect geometry; (3) fatigue parameters; and (4) applied loads. These results show that the initial defect is a critical parameter.
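The first-order perturbation machinery behind PFEM reduces, for a single random variable theta entering the stiffness matrix, to one extra linear solve: u(theta) ~ u0 + (du/dtheta)*dtheta, with du/dtheta = -K^-1 (dK/dtheta) u0. A minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def perturbation_moments(K, dK, f, var_theta):
    """First-order (second-moment) perturbation for K(theta) u = f:
    mean response u0 = K^-1 f, sensitivity du = -K^-1 (dK/dtheta) u0,
    response covariance Cov[u] ~ var_theta * du du^T."""
    u0 = np.linalg.solve(K, f)           # mean (deterministic) solution
    du = -np.linalg.solve(K, dK @ u0)    # sensitivity of u to theta
    return u0, var_theta * np.outer(du, du)
```

For a one-degree-of-freedom spring k*u = f, this recovers the familiar result Var[u] ~ Var[k]*(f/k^2)^2.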
A Step Made Toward Designing Microelectromechanical System (MEMS) Structures With High Reliability
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.
2003-01-01
The mechanical design of microelectromechanical systems, particularly for micropower generation applications, requires the ability to predict the strength capacity of load-carrying components over the service life of the device. These microdevices, which typically are made of brittle materials such as polysilicon, show wide scatter (stochastic behavior) in strength as well as a different average strength for different sized structures (size effect). These behaviors necessitate either costly and time-consuming trial-and-error designs or, more efficiently, the development of a probabilistic design methodology for MEMS. Over the years, the NASA Glenn Research Center's Life Prediction Branch has developed the CARES/Life probabilistic design methodology to predict the reliability of advanced ceramic components. In this study, done in collaboration with Johns Hopkins University, the ability of the CARES/Life code to predict the reliability of polysilicon microsized structures with stress concentrations is successfully demonstrated.
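The stochastic strength and size effect described here are both captured by the two-parameter Weibull weakest-link model on which CARES/Life-type analyses are built. A schematic version for a uniformly stressed component (illustrative parameters, not the code's actual interface):

```python
import math

def failure_probability(stress, volume, m, sigma0, v0=1.0):
    """Two-parameter Weibull weakest-link model: probability of fracture of a
    uniformly stressed brittle component (m = Weibull modulus, sigma0 =
    characteristic strength of the reference volume v0)."""
    return 1.0 - math.exp(-(volume / v0) * (stress / sigma0) ** m)

def characteristic_strength(volume, m, sigma0, v0=1.0):
    """Stress at 63.2% failure probability; shrinks as volume grows,
    which is exactly the size effect."""
    return sigma0 * (v0 / volume) ** (1.0 / m)
```

A larger specimen contains more flaws, so at equal stress its failure probability is higher and its characteristic strength lower.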
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, L.L.; Hooper, M.
This report summarizes the activities and results for the second testing phase (Phase 2) of an Innovative Clean Coal Technology (ICCT) demonstration of advanced tangentially fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. All three levels of Asea Brown Boveri Combustion Engineering Service's (ABB CE's) Low-NO{sub x} Concentric Firing System (LNCFS) are being demonstrated during this project. The primary goal of this project is to demonstrate the NO{sub x} emissions characteristics of these technologies when operated under normal load dispatched conditions. The equipment is being tested at Gulf Power Company's Plant Lansing Smith Unit 2 in Lynn Haven, Florida. The long-term NO{sub x} emission trends were documented while the unit was operating under normal load dispatch conditions with the LNCFS Level II equipment. Fifty-five days of long-term data were collected. The data included the effects of mill patterns, unit load, mill outages, weather, fuel variability, and load swings. Test results indicated full-load (180 MW) NO{sub x} emissions of 0.39 lb/MBtu, which is about equal to the short-term test results. At 110 MW, long-term NO{sub x} emissions increased to 0.42 lb/MBtu, slightly higher than the short-term data. At 75 MW, NO{sub x} emissions were 0.51 lb/MBtu, significantly higher than the short-term data. The annual and 30-day average achievable NO{sub x} emissions were determined to be 0.41 and 0.45 lb/MBtu, respectively, for long-term testing load scenarios. NO{sub x} emissions were reduced by a maximum of 40 percent when compared to the baseline data collected in the previous phase. The long-term NO{sub x} reduction at full load (180 MW) was 37 percent, while NO{sub x} reduction at low load was minimal.
NASA Astrophysics Data System (ADS)
Krishnan, Vinu B.
Shape memory alloys are incorporated as actuator elements due to their inherent ability to sense a change in temperature and actuate against external loads by undergoing a shape change as a result of a temperature-induced phase transformation. The cubic so-called austenite to the trigonal so-called R-phase transformation in NiTiFe shape memory alloys offers a practical temperature range for actuator operation at low temperatures, as it exhibits a narrow temperature-hysteresis with a desirable fatigue response. Overall, this work is an investigation of selected science and engineering aspects of low temperature NiTiFe shape memory alloys. The scientific study was performed using in situ neutron diffraction measurements at the newly developed low temperature loading capability on the Spectrometer for Materials Research at Temperature and Stress (SMARTS) at Los Alamos National Laboratory and encompasses three aspects of the behavior of Ni46.8Ti50Fe3.2 at 92 K (the lowest steady state temperature attainable with the capability). First, in order to study deformation mechanisms in the R-phase in NiTiFe, measurements were performed at a constant temperature of 92 K under external loading. Second, with the objective of examining NiTiFe in one-time, high-stroke, actuator applications (such as in safety valves), a NiTiFe sample was strained to approximately 5% (the R-phase was transformed to B19' phase in the process) at 92 K and subsequently heated to full strain recovery under a load. Third, with the objective of examining NiTiFe in cyclic, low-stroke, actuator applications (such as in cryogenic thermal switches), a NiTiFe sample was strained to 1% at 92 K and subsequently heated to full strain recovery under load. Neutron diffraction spectra were recorded at selected time and stress intervals during these experiments. 
The spectra were subsequently used to obtain quantitative information related to the phase-specific strain, texture and phase fraction evolution using the Rietveld technique. The mechanical characterization of NiTiFe alloys using the cryogenic capability at SMARTS provided considerable insight into the mechanisms of phase transformation and twinning at cryogenic temperatures. Both mechanisms contribute to shape memory and pseudoelasticity phenomena. Three phases (R, B19' and B33 phases) were found to coexist at 92 K in the unloaded condition (nominal holding stress of 8 MPa). For the first time the elastic modulus of the R-phase was reported from neutron diffraction experiments. Furthermore, for the first time a base-centered orthorhombic (B33) martensitic phase was identified experimentally in a NiTi-based shape memory alloy. The orthorhombic B33 phase had been theoretically predicted in NiTi from density functional theory (DFT) calculations but hitherto had never been observed experimentally. It was identified from the shifting of a peak (determined to be {021}B33) between the {111}R and {100}B19' peaks in the diffraction spectra collected during loading. Given the existing ambiguity in the published literature as to whether the trigonal R-phase belongs to the P3 or P3¯ space group, Rietveld analyses were separately carried out incorporating the symmetries associated with both space groups and the impact of this choice evaluated. The constrained recovery of the B19' phase to the R-phase recorded approximately 4% strain recovery between 150 K and 170 K, with half of that recovery occurring between 160 K and 162 K. Additionally, the aforementioned research methodology developed for Ni46.8Ti50Fe3.2 shape memory alloys was applied to experiments performed on a new high temperature Ni29.5Ti50.5Pd20 shape memory alloy.
The engineering aspect focused on the development of (i) a NiTiFe based thermal conduction switch that minimized the heat gradient across the shape memory actuator element, (ii) a NiTiFe based thermal conduction switch that incorporated the actuator element in the form of helical springs, and (iii) a NiTi based release mechanism. Patents are being filed for all three shape memory actuators developed as part of this work. This work was supported by grants from SRI, NASA (NAG3-2751) and NSF (CAREER DMR-0239512) to UCF. Additionally, this work benefited from the use of the Lujan Center at the Los Alamos Neutron Science Center, funded by the United States Department of Energy, Office of Basic Energy Sciences, under Contract No. W-7405-ENG-36.
A screening-level modeling approach to estimate nitrogen ...
This paper presents a screening-level modeling approach that can be used to rapidly estimate nutrient loading and assess numerical nutrient standard exceedance risk of surface waters, leading to potential classification as impaired for designated use. It can also be used to explore best management practice (BMP) implementation to reduce loading. The modeling framework uses a hybrid statistical and process-based approach to estimate the sources of pollutants and their transport and decay in the terrestrial and aquatic parts of watersheds. The framework is developed in the ArcGIS environment and is based on the total maximum daily load (TMDL) balance model. Nitrogen (N) is currently addressed in the framework, referred to as WQM-TMDL-N. Loading for each catchment includes non-point sources (NPS) and point sources (PS). NPS loading is estimated using export coefficient or event mean concentration methods, depending on the temporal scale, i.e., annual or daily. Loading from atmospheric deposition is also included. The probability of a nutrient load exceeding a target load is evaluated using probabilistic risk assessment, by including the uncertainty associated with the export coefficients of various land uses. The computed risk data can be visualized as spatial maps which show the load exceedance probability for all stream segments. In an application of this modeling approach to the Tippecanoe River watershed in Indiana, USA, total nitrogen (TN) loading and risk of standard exce
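The load-exceedance step can be sketched as a small Monte Carlo loop. This is a generic illustration rather than WQM-TMDL-N code: the land-use inputs are hypothetical, and export-coefficient uncertainty is modeled as lognormal, one common way of encoding it.

```python
import math
import random

def exceedance_probability(areas, ec_mean, ec_cv, point_source, target,
                           n=20000, seed=1):
    """Monte Carlo sketch of nutrient-load exceedance risk: annual N load =
    sum(area * export coefficient) + point sources, with lognormally
    uncertain export coefficients (e.g. kg/ha/yr), compared with a target."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        load = point_source
        for area, mu, cv in zip(areas, ec_mean, ec_cv):
            s2 = math.log(1.0 + cv ** 2)          # lognormal parameters matching
            m = math.log(mu) - s2 / 2.0           # the given mean and CV
            load += area * rng.lognormvariate(m, math.sqrt(s2))
        hits += load > target
    return hits / n
```

Mapping this probability per stream segment gives exactly the kind of exceedance-risk map the abstract describes.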
Graminha, Márcia; Cerecetto, Hugo; González, Mercedes
2015-01-01
Cutaneous leishmaniasis (CL) is a resistant form of leishmaniasis caused by a parasite belonging to the genus Leishmania. FLU-loaded microemulsions (MEs) were developed by phase diagram for topical administration of fluconazole (FLU) as a prominent alternative to combat CL. Three MEs called F1, F2, and F3 (F1—60% 50 M phosphate buffer at pH 7.4 (PB) as the aqueous phase, 10% cholesterol (CHO) as the oil phase, and 30% soy phosphatidylcholine/polyoxyl-60 hydrogenated castor oil/sodium oleate (3/8/6) (S) as the surfactant; F2—50% PB, 10% CHO, and 40% S; F3—40% PB, 10% CHO, and 50% S) were characterized by droplet size analysis, zeta potential analysis, X-ray diffraction, continuous flow, texture profile analysis, and in vitro bioadhesion. The MEs presented pseudoplastic flow, and thixotropy was dependent on surfactant concentration. Droplet size was not affected by FLU. FLU-loaded MEs improved the FLU safety profile, which was evaluated using red cell haemolysis and in vitro cytotoxicity assays with J-774 mouse macrophages. FLU-unloaded MEs did not exhibit leishmanicidal activity in MTT colourimetric assays; however, FLU-loaded MEs exhibited activity. Therefore, these MEs have the potential to modulate FLU action, being a promising platform for drug delivery systems to treat CL. PMID:25650054
Analysis of the Influence of Cracked Sleepers under Static Loading on Ballasted Railway Tracks
Montalbán Domingo, Laura; Zamorano Martín, Clara; Palenzuela Avilés, Cristina; Real Herráiz, Julia I.
2014-01-01
The principal causes of cracking in prestressed concrete sleepers are the dynamic loads induced by track irregularities and imperfections in the wheel-rail contact, together with the in-phase and out-of-phase track resonances. The most affected points are the mid-span and rail-seat sections of the sleepers. Central and rail-seat crack detection requires visual inspections, as legislation establishes, and involves the sleepers' renewal, even though European standards consider that crack thicknesses up to 0.5 mm do not imply an inadequate behaviour of the sleepers. For a better understanding of the phenomenon, the finite element method constitutes a useful tool to assess the effects of cracking from the point of view of structural behaviour in railway track structures. This paper intends to study how cracks at the central or rail-seat section in prestressed concrete sleepers influence the track behaviour under static loading. The track model considers three different sleeper models: uncracked, cracked at the central section, and cracked at the rail-seat section. These models were calibrated and validated using the frequencies of vibration of the first three bending modes obtained from an experimental modal analysis. The results show the insignificant influence of the central cracks and the notable effects of the rail-seat cracks regarding deflections and stresses. PMID:25530998
Expectancy Learning from Probabilistic Input by Infants
Romberg, Alexa R.; Saffran, Jenny R.
2013-01-01
Across the first few years of life, infants readily extract many kinds of regularities from their environment, and this ability is thought to be central to development in a number of domains. Numerous studies have documented infants’ ability to recognize deterministic sequential patterns. However, little is known about the processes infants use to build and update representations of structure in time, and how infants represent patterns that are not completely predictable. The present study investigated how infants’ expectations for a simple structure develop over time, and how infants update their representations with new information. We measured 12-month-old infants’ anticipatory eye movements to targets that appeared in one of two possible locations. During the initial phase of the experiment, infants either saw targets that appeared consistently in the same location (Deterministic condition) or probabilistically in either location, with one side more frequent than the other (Probabilistic condition). After this initial divergent experience, both groups saw the same sequence of trials for the rest of the experiment. The results show that infants readily learn from both deterministic and probabilistic input, with infants in both conditions reliably predicting the most likely target location by the end of the experiment. Local context had a large influence on behavior: infants adjusted their predictions to reflect changes in the target location on the previous trial. This flexibility was particularly evident in infants with more variable prior experience (the Probabilistic condition). The results provide some of the first data showing how infants learn in real time. PMID:23439947
Microstructural fingerprints of phase transitions in shock-loaded iron
NASA Astrophysics Data System (ADS)
Wang, S. J.; Sui, M. L.; Chen, Y. T.; Lu, Q. H.; Ma, E.; Pei, X. Y.; Li, Q. Z.; Hu, H. B.
2013-01-01
The complex structural transformation in crystals under static pressure or shock loading has been a subject of long-standing interest to materials scientists and physicists. The polymorphic transformation is of particular importance for iron (Fe), due to its technological and sociological significance in the development of human civilization, as well as its prominent presence in the earth's core. The martensitic transformation α-->ɛ (bcc-->hcp) in iron under shock-loading, due to its reversible and transient nature, requires non-trivial detective work to uncover its occurrence. Here we reveal refined microstructural fingerprints, needle-like colonies and three sets of {112}<111> twins with a threefold symmetry, with tell-tale features that are indicative of two sequential martensitic transformations in the reversible α-->ɛ phase transition, even though no ɛ is retained in the post-shock samples. The signature orientation relationships are consistent with previously-proposed transformation mechanisms, and the unique microstructural fingerprints enable a quantitative assessment of the volume fraction transformed.
NASA Astrophysics Data System (ADS)
Yang, Xiu-Qun; Yang, Dejian; Xie, Qian; Zhang, Yaocun; Ren, Xuejuan; Tang, Youmin
2017-04-01
Based on historical forecasts of three quasi-operational multi-model ensemble (MME) systems, this study assesses the superiority of a coupled MME over the contributing single-model ensembles (SMEs) and over an uncoupled atmospheric MME in predicting the Western North Pacific-East Asian summer monsoon variability. The probabilistic and deterministic forecast skills are measured by the Brier skill score (BSS) and anomaly correlation (AC), respectively. A forecast-format dependent MME superiority over the SMEs is found. The probabilistic forecast skill of the MME is always significantly better than that of each SME, while the deterministic forecast skill of the MME can be lower than that of some SMEs. The MME superiority arises from both the model diversity and the ensemble size increase in the tropics, and primarily from the ensemble size increase in the subtropics. The BSS is composed of reliability and resolution, two attributes characterizing probabilistic forecast skill. The probabilistic skill increase of the MME is dominated by the dramatic improvement in reliability, while resolution is not always improved, similar to AC. A monotonic resolution-AC relationship is further found and qualitatively explained, whereas little relationship can be identified between reliability and AC. It is argued that the MME's success in improving the reliability arises from an effective reduction of the overconfidence in forecast distributions. Moreover, the seasonal predictions with the coupled MME are shown to be more skillful than those with the uncoupled atmospheric MME forced by persisting sea surface temperature (SST) anomalies, since the coupled MME better predicts the SST anomaly evolution in three key regions.
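The reliability and resolution attributes referred to here come from Murphy's decomposition of the Brier score, BS = reliability - resolution + uncertainty, computed by binning forecast probabilities against binary outcomes. A compact generic implementation (not the study's code):

```python
import numpy as np

def brier_decomposition(p, o, bins=10):
    """Murphy decomposition of the Brier score for a binary event.
    p: forecast probabilities in [0, 1]; o: observed outcomes (0/1).
    Returns (reliability, resolution, uncertainty)."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    n, obar = len(p), o.mean()
    idx = np.minimum((p * bins).astype(int), bins - 1)   # forecast bins
    rel = res = 0.0
    for k in range(bins):
        m = idx == k
        nk = m.sum()
        if nk:
            pk, ok = p[m].mean(), o[m].mean()
            rel += nk / n * (pk - ok) ** 2               # calibration error (lower is better)
            res += nk / n * (ok - obar) ** 2             # discrimination (higher is better)
    return rel, res, obar * (1 - obar)
```

When the forecasts are constant within each bin, the three terms reproduce the raw Brier score exactly, which makes the decomposition easy to verify.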
Biomechanical evaluation of various suture configurations in side-to-side tenorrhaphy.
Wagner, Emilio; Ortiz, Cristian; Wagner, Pablo; Guzman, Rodrigo; Ahumada, Ximena; Maffulli, Nicola
2014-02-05
Side-to-side tenorrhaphy is increasingly used, but its mechanical performance has not been studied. Two porcine flexor digitorum tendon segments of equal length (8 cm) and thickness (1 cm) were placed side by side. Eight tenorrhaphies (involving sixteen tendons) were performed with each of four suture techniques (running locked, simple eight, vertical mattress, and pulley suture). The resulting constructs underwent cyclic loading on a tensile testing machine, followed by monotonically increasing tensile load if failure during cyclic loading did not occur. Clamps secured the tendons on each side of the repair, and specimens were mounted vertically. Cyclic loading varied between 15 N and 35 N, with a distension rate of 1 mm/sec. Cyclic loading strength was determined by applying a force of 70 N. The cause of failure and tendon distension during loading were recorded. All failures occurred in the monotonic loading phase and resulted from tendon stripping. No suture or knot failure was observed. The mean loads resisted by the configurations ranged from 138 to 398 N. The mean load to failure, maximum load resisted prior to 1 cm of distension, and load resisted at 1 cm of distension were significantly lower for the vertical mattress suture group than for any of the other three groups (p < 0.031). All four groups sustained loads well above the physiologic loads expected to occur in tendons in the foot and ankle (e.g., in tendon transfer for tibialis posterior tendon insufficiency). None of the four side-to-side configurations distended appreciably during the cyclic loading phase. The vertical mattress suture configuration appeared to be weaker than the other configurations. For surgeons who advocate immediate loading or motion of a side-to-side tendon repair, a pulley, running locked, or simple eight suture technique appears to provide a larger safety margin compared with a vertical mattress suture technique.
A Technique for Developing Probabilistic Properties of Earth Materials
1988-04-01
... Department of Civil Engineering. Responsibility for coordinating this program was assigned to Mr. A. E. Jackson, Jr., GD, under the supervision of Dr. ... Only notation-list fragments survive: E = expected value; F = ratio of the between-sample variance and the within-sample variance; deformation assumed as a right circular cylinder; true radial strain; axial strain; number of increments in the covariance analysis; VL = loading Poisson's ratio; VUN = unloading Poisson's ratio.
Variable speed wind turbine generator with zero-sequence filter
Muljadi, Eduard
1998-01-01
A variable speed wind turbine generator system to convert mechanical power into electrical power or energy and to recover the electrical power or energy in the form of three phase alternating current and return the power or energy to a utility or other load with single phase sinusoidal waveform at sixty (60) hertz and unity power factor includes an excitation controller for generating three phase commanded current, a generator, and a zero sequence filter. Each commanded current signal includes two components: a positive sequence variable frequency current signal to provide the balanced three phase excitation currents required in the stator windings of the generator to generate the rotating magnetic field needed to recover an optimum level of real power from the generator; and a zero frequency sixty (60) hertz current signal to allow the real power generated by the generator to be supplied to the utility. The positive sequence current signals are balanced three phase signals and are prevented from entering the utility by the zero sequence filter. The zero sequence current signals have zero phase displacement from each other and are prevented from entering the generator by the star connected stator windings. The zero sequence filter allows the zero sequence current signals to pass through to deliver power to the utility.
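The filter's operation rests on the standard Fortescue (symmetrical-component) decomposition: a balanced three-phase set carries no zero-sequence component, while three identical in-phase currents are pure zero sequence. A quick numerical check of that property:

```python
import cmath
import math

A = cmath.exp(2j * math.pi / 3)   # 120-degree rotation operator

def sequence_components(ia, ib, ic):
    """Fortescue symmetrical components of three phasors:
    returns (zero, positive, negative) sequence components."""
    i0 = (ia + ib + ic) / 3
    i1 = (ia + A * ib + A ** 2 * ic) / 3
    i2 = (ia + A ** 2 * ib + A * ic) / 3
    return i0, i1, i2
```

A balanced set (1 at 0°, 1 at -120°, 1 at +120°) yields i1 = 1 with i0 = i2 = 0, while three equal currents yield i0 = 1 with i1 = i2 = 0; this is why the star-connected stator blocks the zero-sequence signals and the balanced excitation currents never reach the utility.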
Variable Speed Wind Turbine Generator with Zero-sequence Filter
Muljadi, Eduard
1998-08-25
A variable speed wind turbine generator system to convert mechanical power into electrical power or energy and to recover the electrical power or energy in the form of three phase alternating current and return the power or energy to a utility or other load with single phase sinusoidal waveform at sixty (60) hertz and unity power factor includes an excitation controller for generating three phase commanded current, a generator, and a zero sequence filter. Each commanded current signal includes two components: a positive sequence variable frequency current signal to provide the balanced three phase excitation currents required in the stator windings of the generator to generate the rotating magnetic field needed to recover an optimum level of real power from the generator; and a zero frequency sixty (60) hertz current signal to allow the real power generated by the generator to be supplied to the utility. The positive sequence current signals are balanced three phase signals and are prevented from entering the utility by the zero sequence filter. The zero sequence current signals have zero phase displacement from each other and are prevented from entering the generator by the star connected stator windings. The zero sequence filter allows the zero sequence current signals to pass through to deliver power to the utility.
Variable speed wind turbine generator with zero-sequence filter
Muljadi, E.
1998-08-25
A variable speed wind turbine generator system to convert mechanical power into electrical power or energy and to recover the electrical power or energy in the form of three phase alternating current and return the power or energy to a utility or other load with single phase sinusoidal waveform at sixty (60) hertz and unity power factor includes an excitation controller for generating three phase commanded current, a generator, and a zero sequence filter. Each commanded current signal includes two components: a positive sequence variable frequency current signal to provide the balanced three phase excitation currents required in the stator windings of the generator to generate the rotating magnetic field needed to recover an optimum level of real power from the generator; and a zero frequency sixty (60) hertz current signal to allow the real power generated by the generator to be supplied to the utility. The positive sequence current signals are balanced three phase signals and are prevented from entering the utility by the zero sequence filter. The zero sequence current signals have zero phase displacement from each other and are prevented from entering the generator by the star connected stator windings. The zero sequence filter allows the zero sequence current signals to pass through to deliver power to the utility. 14 figs.
Probabilistic Finite Element Analysis & Design Optimization for Structural Designs
NASA Astrophysics Data System (ADS)
Deivanayagam, Arumugam
This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering designs through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and user defined constitutive model (UMAT) for fabric based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models. The solutions are then compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, even though the structures obtained from a deterministic optimization problem are cost effective, they become highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered.
This part of the research starts with an introduction to reliability analysis, such as first order and second order reliability analysis, followed by the simulation techniques performed to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation with sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and function evaluations. Finally, the implementation of the reliability analysis concepts and RBDO in finite element 2D truss problems and a planar beam problem is presented and discussed.
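For the simplest case, a linear limit state g = R - S with independent normal resistance R and load S, first order reliability analysis has a closed form that a crude Monte Carlo run can cross-check. A sketch with illustrative values (not the study's model):

```python
import math
import random

def form_linear(mu_r, sd_r, mu_s, sd_s):
    """First-order reliability for the linear limit state g = R - S with
    independent normal R and S: reliability index beta and P_f = Phi(-beta)."""
    beta = (mu_r - mu_s) / math.hypot(sd_r, sd_s)
    pf = 0.5 * math.erfc(beta / math.sqrt(2))   # standard normal CDF at -beta
    return beta, pf

def mc_failure_probability(mu_r, sd_r, mu_s, sd_s, n=200000, seed=0):
    """Crude Monte Carlo estimate of the same failure probability."""
    rng = random.Random(seed)
    fails = sum(rng.gauss(mu_r, sd_r) <= rng.gauss(mu_s, sd_s) for _ in range(n))
    return fails / n
```

For this linear-normal case FORM is exact; for nonlinear limit states the index comes from an iterative search for the most probable failure point, and second order methods add curvature corrections.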
Dynamic Failure of Sandwich Beams With Fluid-Structure Interaction Under Impact Loading
2010-12-01
... constructed using vacuum assisted transfer molding, with a 6.35 mm balsa core and symmetrical plain weave 6 oz E-glass skins. The experiment ... consisted of three phases. First, using three-point bending, strain rate characteristics were examined both in air and under water. After establishing ... understanding of sandwich composite characteristics subjected to underwater impact. (57 pages; subject terms: sandwich composite, low ...)
Preparation and Oxidation Stability Evaluation of Tea Polyphenols-Loaded Inverse Micro-Emulsion.
Lan, Xiaohong; Sun, Jingjing; Yang, Ying; Chen, Mengjie; Liu, Jianhua; Wu, Jinhong; Wang, Zhengwu
2017-05-01
Compared to synthetic antioxidants, tea polyphenols (TPs) have their own advantages in the edible oil industry; however, their hydrophilic properties have restricted their applications. In this study, the ternary phase diagram of a TPs-loaded micro-emulsion (ME) system was constructed, in which glyceryl monooleate (GMO), Tween80, and linoleic acid as the surfactants, ethanol as the co-surfactant, and soybean, corn, and sunflower oil as the oil phase were used for the preparation of the ME. The results indicated that a composition of the ME (57.5% oil, 18% Tween80, 18% GMO, 4% linoleic acid, and 2.5% water+ethanol) could dissolve the maximum amount of water and remained stable for 2 mo at room temperature with an average diameter of 6 to 7 nm, as detected by means of dynamic light scattering (DLS). The loading of TPs into the ME led to an increase of particle size to 15 to 16 nm, due to the increased polarity of the water phase. The antioxidant capacity of TPs in the ME was characterized by the peroxide value (POV) method. The addition of 1% water phase with 0.1 g/mL TPs could keep the POV at a low value for 30 d at an accelerated temperature of 50 °C. Meanwhile, comparing the three edible oils, the ME with corn oil had lower conductivity and a higher POV during storage. This work provides an efficient and environmentally friendly approach for the preparation of TPs-loaded MEs, which is beneficial to the application of TPs in edible oil. © 2017 Institute of Food Technologists®.
Denovan, Andrew; Dagnall, Neil; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter
Denovan, Andrew; Dagnall, Neil; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter
2017-01-01
The present study assessed the degree to which probabilistic reasoning performance and thinking style influenced perception of risk and self-reported levels of terrorism-related behavior change. A sample of 263 respondents, recruited via convenience sampling, completed a series of measures comprising probabilistic reasoning tasks (perception of randomness, base rate, probability, and conjunction fallacy), the Reality Testing subscale of the Inventory of Personality Organization (IPO-RT), the Domain-Specific Risk-Taking Scale, and a terrorism-related behavior change scale. Structural equation modeling examined three progressive models. Firstly, the Independence Model assumed that probabilistic reasoning, perception of risk and reality testing independently predicted terrorism-related behavior change. Secondly, the Mediation Model supposed that probabilistic reasoning and reality testing correlated, and indirectly predicted terrorism-related behavior change through perception of risk. Lastly, the Dual-Influence Model proposed that probabilistic reasoning indirectly predicted terrorism-related behavior change via perception of risk, independent of reality testing. Results indicated that performance on probabilistic reasoning tasks most strongly predicted perception of risk, and preference for an intuitive thinking style (measured by the IPO-RT) best explained terrorism-related behavior change. The combination of perception of risk with probabilistic reasoning ability in the Dual-Influence Model enhanced the predictive power of the analytical-rational route, with conjunction fallacy having a significant indirect effect on terrorism-related behavior change via perception of risk. The Dual-Influence Model possessed superior fit and reported similar predictive relations between intuitive-experiential and analytical-rational routes and terrorism-related behavior change. The discussion critically examines these findings in relation to dual-processing frameworks. 
This includes considering the limitations of current operationalisations and recommendations for future research that align outcomes and subsequent work more closely to specific dual-process models. PMID:29062288
Statistical Evaluation and Improvement of Methods for Combining Random and Harmonic Loads
NASA Technical Reports Server (NTRS)
Brown, A. M.; McGhee, D. S.
2003-01-01
Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This Technical Publication examines the cumulative distribution percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica to calculate the combined value corresponding to any desired percentile is then presented along with a curve fit to this value. Another Excel macro that calculates the combination using Monte Carlo simulation is shown. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Additionally, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can substantially lower the design loading without losing any of the identified structural reliability.
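As a rough illustration of the Monte Carlo combination described above, the following Python sketch sums a harmonic component sampled at a uniformly random phase with an independent Gaussian random component, then reads a design value off the empirical CDF. The amplitude and RMS values are assumed for illustration, not taken from the publication.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
a_sine = 2.0        # harmonic amplitude (assumed value)
sigma_rand = 1.0    # RMS of the Gaussian random component (assumed value)

# Sample the harmonic term at a uniformly random phase and add an
# independent zero-mean Gaussian random term.
phase = rng.uniform(0.0, 2.0 * np.pi, n)
combined = a_sine * np.sin(phase) + rng.normal(0.0, sigma_rand, n)

# Design load at a chosen CDF percentile (3-sigma level, ~99.865%).
design_load = np.quantile(combined, 0.99865)
print(design_load)
```

Because the CDF is flat in the upper tail, small changes to the chosen percentile move `design_load` noticeably, which is exactly the sensitivity the abstract highlights.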
Statistical Comparison and Improvement of Methods for Combining Random and Harmonic Loads
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; McGhee, David S.
2004-01-01
Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This paper examines the cumulative distribution function (CDF) percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica is then used to calculate the combined value corresponding to any desired percentile along with a curve fit to this value. Another Excel macro is used to calculate the combination using a Monte Carlo simulation. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Also, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can lower the design loading substantially without losing any of the identified structural reliability.
DOT National Transportation Integrated Search
2013-06-01
This report summarizes a research project aimed at developing degradation models for bridge decks in the state of Michigan based on durability mechanics. A probabilistic framework to implement local-level mechanistic-based models for predicting the c...
Robaina, Nicolle F; Soriano, Silvio; Cassella, Ricardo J
2009-08-15
This paper reports the development of a new procedure for the adsorption of four cationic dyes (Rhodamine B, Methylene Blue, Crystal Violet and Malachite Green) from aqueous medium employing polyurethane foam (PUF) loaded with sodium dodecylsulfate (SDS) as solid phase. The PUF loading process was based on the stirring of 200 mg PUF cylinders with acidic solutions containing SDS. The conditions for loading were optimized by response surface methodology (RSM) using a Doehlert design with three variables: SDS concentration, HCl concentration, and stirring time. Results obtained in the optimization process showed that the stirring time is not a relevant parameter in the PUF loading, evidencing that the transport of SDS from solution to the PUF surface is fast. On the other hand, both SDS and HCl concentrations were important parameters causing significant variation in the efficiency of the resulting solid phase for the removal of dyes from solution. At optimized conditions, SDS and HCl concentrations were 4.0 × 10⁻⁴ and 0.90 mol L⁻¹, respectively. The influence of stirring time was evaluated by univariate methodology. A 20 min stirring time was established in order to make the PUF loading process fast and robust without losing efficiency. The procedure was tested for the removal of the four cationic dyes from aqueous solutions, and removal efficiencies always better than 90% were achieved for the two concentrations tested (2.0 × 10⁻⁵ and 1.0 × 10⁻⁴ mol L⁻¹).
Analytical and experimental study of high phase order induction motors
NASA Technical Reports Server (NTRS)
Klingshirn, Eugene A.
1989-01-01
Induction motors having more than three phases were investigated to determine their suitability for electric vehicle applications. The objective was to have a motor with a current rating lower than that of a three-phase motor. The name chosen for these is high phase order (HPO) motors. Motors having six phases and nine phases were given the most attention. It was found that HPO motors are quite suitable for electric vehicles, and for many other applications as well. They have characteristics which are as good as or better than three-phase motors for practically all applications where polyphase induction motors are appropriate. Some of the analysis methods are presented, and several of the equivalent circuits which facilitate the determination of harmonic currents and losses, or currents with unbalanced sources, are included. The sometimes large stator currents due to harmonics in the source voltages are pointed out. Filters which can limit these currents were developed. An analysis and description of these filters is included. Experimental results which confirm and illustrate much of the theory are also included. These include locked rotor test results and full-load performance with an open phase. Also shown are oscillograms which display the reduction in harmonic currents when a filter is used with the experimental motor supplied by a non-sinusoidal source.
NASA Astrophysics Data System (ADS)
Convertito, Vincenzo; Zollo, Aldo
2011-08-01
In this study, we address the issue of short-term to medium-term probabilistic seismic hazard analysis for two volcanic areas, Campi Flegrei caldera and Mt. Vesuvius in the Campania region of southern Italy. Two different phases of the volcanic activity are considered. The first, which we term the pre-crisis phase, concerns the present quiescent state of the volcanoes that is characterized by low-to-moderate seismicity. The second phase, syn-crisis, concerns the unrest phase that can potentially lead to eruption. For the Campi Flegrei case study, we analyzed the pattern of seismicity during the 1982-1984 ground uplift episode (bradyseism). For Mt. Vesuvius, two different time-evolutionary models for seismicity were adopted, corresponding to different ways in which the volcano might erupt. We performed a site-specific analysis, linked with the hazard map, to investigate the effects of input parameters, in terms of source geometry, mean activity rate, periods of data collection, and return periods, for the syn-crisis phase. The analysis in the present study of the pre-crisis phase allowed a comparison of the results of probabilistic seismic hazard analysis for the two study areas with those provided in the Italian national hazard map. For the Mt. Vesuvius area in particular, the results show that the hazard can be greater than that reported in the national hazard map when information at a local scale is used. For the syn-crisis phase, the main result is that the data recorded during the early months of the unrest phase are substantially representative of the seismic hazard during the whole duration of the crisis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard
In this paper, a short-term load forecasting approach based on network reconfiguration is proposed in a parallel manner. Specifically, a support vector regression (SVR) based short-term load forecasting approach is designed to provide an accurate load prediction and benefit the network reconfiguration. Because of the nonconvexity of the three-phase balanced optimal power flow, a second-order cone program (SOCP) based approach is used to relax the optimal power flow problem. Then, the alternating direction method of multipliers (ADMM) is used to compute the optimal power flow in a distributed manner. Considering the limited number of the switches and the increasing computation capability, the proposed network reconfiguration is solved in a parallel way. The numerical results demonstrate the feasibility and effectiveness of the proposed approach.
Patil, Nitin S; Mendhe, Rakesh B; Sankar, Ajeet A; Iyer, Harish
2008-01-11
In preparative chromatography, often the solubility of the sample in the mobile phase is limited, making the mobile phase unsuitable as a solvent for preparation of load. Generally, solvents that have high solubility for the sample also have higher elution strengths than the mobile phase. Additionally, at high loading volumes, these strong sample solvents are known to adversely affect the band profiles leading to poor chromatographic performance. Here, we show that controlling the mobile phase strength during loading and post-load elution resulted in improved band profiles when the sample solvent was stronger than the mobile phase. Such an approach improves performance in preparative chromatography by allowing either higher sample loading or higher organic content in mobile phase (without loss of yield). Alternately, the approach can be used for improvement in performance by increase in yield or product purity.
Stochastic soil water balance under seasonal climates
Feng, Xue; Porporato, Amilcare; Rodriguez-Iturbe, Ignacio
2015-01-01
The analysis of soil water partitioning in seasonally dry climates necessarily requires careful consideration of the periodic climatic forcing at the intra-annual timescale in addition to daily scale variabilities. Here, we introduce three new extensions to a stochastic soil moisture model which yields seasonal evolution of soil moisture and relevant hydrological fluxes. These approximations allow seasonal climatic forcings (e.g. rainfall and potential evapotranspiration) to be fully resolved, extending the analysis of soil water partitioning to account explicitly for the seasonal amplitude and the phase difference between the climatic forcings. The results provide accurate descriptions of probabilistic soil moisture dynamics under seasonal climates without requiring extensive numerical simulations. We also find that the transfer of soil moisture from the wet to the dry season is responsible for hysteresis in the hydrological response, showing asymmetrical trajectories in the mean soil moisture and in the transient Budyko's curves during the 'dry-down' versus the 'rewetting' phases of the year. Furthermore, in some dry climates where rainfall and potential evapotranspiration are in-phase, annual evapotranspiration can be shown to increase because of inter-seasonal soil moisture transfer, highlighting the importance of soil water storage in the seasonal context. PMID:25663808
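A minimal numerical counterpart to such a stochastic soil moisture model can be sketched by treating rainfall as a marked Poisson process and losses as linear in relative soil moisture; all parameter values below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

lam, alpha = 0.3, 10.0              # rainfall frequency (1/day) and mean depth (mm); assumed
n_days, w0, eta = 3650, 300.0, 4.0  # simulated days, storage capacity (mm), max loss (mm/day); assumed

s = np.empty(n_days)                # relative soil moisture, bounded in [0, 1]
s[0] = 0.5
for t in range(1, n_days):
    # Rain falls on a given day with probability lam, with exponential depth.
    rain = rng.exponential(alpha) if rng.random() < lam else 0.0
    # Losses (evapotranspiration and leakage lumped together) scale with s.
    s[t] = min(max(s[t - 1] + (rain - eta * s[t - 1]) / w0, 0.0), 1.0)

print(s.mean())  # long-run mean near lam*alpha/eta when saturation is rare
```

Resolving a seasonal cycle, as the paper does analytically, would amount to letting `lam`, `alpha`, and `eta` vary periodically with `t`.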
Incorporating uncertainty in watershed management decision-making: A mercury TMDL case study
Labiosa, W.; Leckie, J.; Shachter, R.; Freyberg, D.; Rytuba, J.; ,
2005-01-01
Water quality impairment due to high mercury fish tissue concentrations and high mercury aqueous concentrations is a widespread problem in several sub-watersheds that are major sources of mercury to the San Francisco Bay. Several mercury Total Maximum Daily Load regulations are currently being developed to address this problem. Decisions about control strategies are being made despite very large uncertainties about current mercury loading behavior, relationships between total mercury loading and methyl mercury formation, and relationships between potential controls and mercury fish tissue levels. To deal with the issues of very large uncertainties, data limitations, knowledge gaps, and very limited State agency resources, this work proposes a decision analytical alternative for mercury TMDL decision support. The proposed probabilistic decision model is Bayesian in nature and is fully compatible with a "learning while doing" adaptive management approach. Strategy evaluation, sensitivity analysis, and information collection prioritization are examples of analyses that can be performed using this approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Etingov, Pavel; Makarov, PNNL Yuri; Subbarao, PNNL Kris
RUT software is designed for use by the Balancing Authorities to predict and display additional requirements caused by the variability and uncertainty in load and generation. The prediction is made for the next operating hours as well as for the next day. The tool predicts possible deficiencies in generation capability and ramping capability. This deficiency of balancing resources can cause serious risks to power system stability and also impact real-time market energy prices. The tool dynamically and adaptively correlates changing system conditions with the additional balancing needs triggered by the interplay between forecasted and actual load and output of variable resources. The assessment is performed using a specially developed probabilistic algorithm incorporating multiple sources of uncertainty including wind, solar and load forecast errors. The tool evaluates required generation for a worst case scenario, with a user-specified confidence level.
Relationships between training load, injury, and fitness in sub-elite collision sport athletes.
Gabbett, Tim J; Domrow, Nathan
2007-11-01
The purpose of this study was to develop statistical models that estimate the influence of training load on training injury and physical fitness in collision sport athletes. The incidence of training injuries was studied in 183 rugby league players over two competitive seasons. Participants were assessed for height, body mass, skinfold thickness, vertical jump, 10-m, 20-m and 40-m sprint time, agility, and estimated maximal aerobic power in the off-season, pre-season, mid-season, and end-season. Training load and injury data were summarised into pre-season, early-competition, and late-competition training phases. Individual training load, fitness, and injury data were modelled using a logistic regression model with a binomial distribution and logit link function, while team training load and injury data were modelled using a linear regression model. While physical fitness improved with training, there was no association (P = 0.16-0.99) between training load and changes in physical fitness during any of the training phases. However, increases in training load during the early-competition training phase decreased (P = 0.04) agility performance. A relationship (P = 0.01-0.04) was observed between the log of training load and odds of injury during each training phase, resulting in a 1.50-2.85 increase in the odds of injury for each arbitrary unit increase in training load. Furthermore, during the pre-season training phase there was a relationship (P = 0.01) between training load and injury incidence within the training load range of 155 and 590 arbitrary units. During the early and late-competition training phases, increases in training load of 175-620 arbitrary units and 145-410 arbitrary units, respectively, resulted in no further increase in injury incidence. These findings demonstrate that increases in training load, particularly during the pre-season training phase, increase the odds of injury in collision sport athletes.
However, while increases in training load from 175 to 620 arbitrary units during the early-competition training phase result in no further increase in injury incidence, marked reductions in agility performances can occur. These findings suggest that reductions in training load during the early-competition training phase can reduce the odds of injury without compromising agility performances in collision sport athletes.
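The reported effect of log training load on injury odds can be illustrated with a back-of-the-envelope logistic calculation; the coefficient below is hypothetical, chosen only so the multiplier lands inside the 1.50-2.85 range the study reports.

```python
import math

beta = 0.9  # hypothetical logistic-regression coefficient for log(training load)

def odds_multiplier(delta_log_load: float, beta: float) -> float:
    """Multiplicative change in injury odds for an additive change
    in log training load, under a logistic (logit-link) model."""
    return math.exp(beta * delta_log_load)

print(odds_multiplier(1.0, beta))  # ~2.46 per unit increase in log load
```

In a logit-link model the coefficient acts on the log-odds, so each arbitrary-unit increase in log load multiplies the odds by a constant factor, which is why the study can quote a single per-unit range.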
Probabilistic assessment of landslide tsunami hazard for the northern Gulf of Mexico
NASA Astrophysics Data System (ADS)
Pampell-Manis, A.; Horrillo, J.; Shigihara, Y.; Parambath, L.
2016-01-01
The devastating consequences of recent tsunamis affecting Indonesia and Japan have prompted a scientific response to better assess unexpected tsunami hazards. Although much uncertainty exists regarding the recurrence of large-scale tsunami events in the Gulf of Mexico (GoM), geological evidence indicates that a tsunami is possible and would most likely come from a submarine landslide triggered by an earthquake. This study customizes for the GoM a first-order probabilistic landslide tsunami hazard assessment. Monte Carlo Simulation (MCS) is employed to determine landslide configurations based on distributions obtained from observational submarine mass failure (SMF) data. Our MCS approach incorporates a Cholesky decomposition method for correlated landslide size parameters to capture correlations seen in the data as well as uncertainty inherent in these events. Slope stability analyses are performed using landslide and sediment properties and regional seismic loading to determine landslide configurations which fail and produce a tsunami. The probability of each tsunamigenic failure is calculated based on the joint probability of slope failure and probability of the triggering earthquake. We are thus able to estimate sizes and return periods for probabilistic maximum credible landslide scenarios. We find that the Cholesky decomposition approach generates landslide parameter distributions that retain the trends seen in observational data, improving the statistical validity and relevancy of the MCS technique in the context of landslide tsunami hazard assessment. Estimated return periods suggest that probabilistic maximum credible SMF events in the north and northwest GoM have a recurrence of 5000-8000 years, in agreement with age dates of observed deposits.
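The Cholesky step described above can be sketched in a few lines of Python; the mean vector and covariance matrix are invented stand-ins for two correlated (log-scale) landslide size parameters, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative mean and covariance for two correlated landslide size
# parameters, e.g. log-length and log-thickness (assumed values).
mean = np.array([2.0, 0.5])
cov = np.array([[0.40, 0.30],
                [0.30, 0.36]])

# The Cholesky factor L satisfies L @ L.T == cov; multiplying independent
# standard normals by L.T imposes the target correlation structure.
L = np.linalg.cholesky(cov)
z = rng.standard_normal((100_000, 2))
samples = mean + z @ L.T

rho = np.corrcoef(samples.T)[0, 1]
print(rho)  # approaches 0.30 / sqrt(0.40 * 0.36), about 0.79
```

Each Monte Carlo landslide configuration would then be drawn from `samples` and fed into the slope stability and tsunami generation steps.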
Multiscale/Multifunctional Probabilistic Composite Fatigue
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2010-01-01
A multilevel (multiscale/multifunctional) evaluation is demonstrated by applying it to three different sample problems. These problems include the probabilistic evaluation of a space shuttle main engine blade, an engine rotor and an aircraft wing. The results demonstrate that the blade will fail at the highest probability path, the engine two-stage rotor will fail by fracture at the rim and the aircraft wing will fail at 10⁹ fatigue cycles with a probability of 0.9967.
Constellation design with geometric and probabilistic shaping
NASA Astrophysics Data System (ADS)
Zhang, Shaoliang; Yaman, Fatih
2018-02-01
A systematic study, including theory, simulation and experiments, is carried out to review the generalized pairwise optimization algorithm for designing optimized constellation. In order to verify its effectiveness, the algorithm is applied in three testing cases: 2-dimensional 8 quadrature amplitude modulation (QAM), 4-dimensional set-partitioning QAM, and probabilistic-shaped (PS) 32QAM. The results suggest that geometric shaping can work together with PS to further bridge the gap toward the Shannon limit.
Geothermal probabilistic cost study
NASA Technical Reports Server (NTRS)
Orren, L. H.; Ziman, G. M.; Jones, S. C.; Lee, T. K.; Noll, R.; Wilde, L.; Sadanand, V.
1981-01-01
A tool is presented to quantify the risks of geothermal projects, the Geothermal Probabilistic Cost Model (GPCM). The GPCM was used to evaluate a geothermal reservoir for a binary-cycle electric plant at Heber, California. Three institutional aspects of geothermal risk which can shift the risk among different agents were analyzed: the leasing of geothermal land, contracting between the producer and the user of the geothermal heat, and insurance against faulty performance.
Abdelkader, D H; Osman, M A; El-Gizawy, S A; Faheem, A M; McCarron, P A
2016-03-16
Poly(vinyl alcohol) hydrogels cross-linked with the tetrahydroxyborate anion possess textural and rheological properties that can be used as novel drug-loaded vehicles for application to traumatic wounds. However, addition of soluble drug substances causes concentration-dependent phase separation and rheological changes. The aim of this work was to investigate the effect of adding a local anaesthetic, but keeping the concentration low in an attempt to prevent these changes. Cross-linked hydrogels prepared from three grades of poly(vinyl alcohol) were characterised rheologically. Temperature sweep studies showed an elevated complex viscosity upon moving from 25°C to 80°C, which remained high for 48 h following completion of the cycle. Adhesion to model dermal surfaces achieved a maximum of 2.62 N cm⁻² and was greater than that observed for epidermal substrates, with a strong dependence on the rate of detachment used during testing. An optimised formulation (6% w/w PVA (31-50; 99) and 2% w/w THB) containing lidocaine hydrochloride loaded to an upper maximum concentration of 1.5% w/w was assessed for phase separation and drug crystallisation. After six months, crystallisation was present in formulations containing 0.7% and 1.5% lidocaine HCl. Changes in pH in response to increases in lidocaine loading were low. Drug release was shown to operate via a non-Fickian process for all three concentrations, with 60% occurring after approximately 24 h. It can be concluded that using a low concentration of lidocaine hydrochloride in hydrogels based on poly(vinyl alcohol) will result in crystallisation. Furthermore, these hydrogels are unlikely to induce rapid anaesthesia due to the low loading and slow release kinetics.
NASA Astrophysics Data System (ADS)
Resnyansky, A.; McDonald, S.; Withers, P.; Bourne, N.; Millett, J.; Brown, E.; Rae, P.
2013-06-01
Aerospace, defence and automotive applications of polymers and polymer matrix composites have placed these materials under increasingly more extreme conditions. It is therefore important to understand the mechanical response of these multi-phase materials under high pressures and strain rates. Crucial to this is knowledge of the physical damage response in association with the phase transformations during the loading and the ability to predict this via multi-phase simulation taking the thermodynamical non-equilibrium and strain rate sensitivity into account. The current work presents Taylor impact experiments interrogating the effect of dynamic, high-pressure loading on polytetrafluoroethylene (PTFE). In particular, X-ray microtomography has been used to characterise the damage imparted to cylindrical samples due to impact at different velocities. Distinct regions of deformation are present and controlled by fracture within the polymer, with the extent of the deformed region and increasing propagation of the fractures from the impact face showing a clear trend with increase in impact velocity. The experimental observations are discussed with respect to parallel multi-phase model predictions by CTH hydrocode of the shock response from Taylor impact simulations.
Malfait, Bart; Dingenen, Bart; Smeets, Annemie; Staes, Filip; Pataky, Todd; Robinson, Mark A.; Vanrenterghem, Jos; Verschueren, Sabine
2016-01-01
Purpose The purpose was to assess if variation in sagittal plane landing kinematics is associated with variation in neuromuscular activation patterns of the quadriceps-hamstrings muscle groups during drop vertical jumps (DVJ). Methods Fifty female athletes performed three DVJ. The relationship between peak knee and hip flexion angles and the amplitude of four EMG vectors was investigated with trajectory-level canonical correlation analyses over the entire time period of the landing phase. EMG vectors consisted of the {vastus medialis(VM),vastus lateralis(VL)}, {vastus medialis(VM),hamstring medialis(HM)}, {hamstring medialis(HM),hamstring lateralis(HL)} and the {vastus lateralis(VL),hamstring lateralis(HL)}. To estimate the contribution of each individual muscle, linear regressions were also conducted using one-dimensional statistical parametric mapping. Results The peak knee flexion angle was significantly positively associated with the amplitudes of the {VM,HM} and {HM,HL} during the preparatory and initial contact phase and with the {VL,HL} vector during the peak loading phase (p<0.05). Small peak knee flexion angles were significantly associated with higher HM amplitudes during the preparatory and initial contact phase (p<0.001). The amplitudes of the {VM,VL} and {VL,HL} were significantly positively associated with the peak hip flexion angle during the peak loading phase (p<0.05). Small peak hip flexion angles were significantly associated with higher VL amplitudes during the peak loading phase (p = 0.001). Higher external knee abduction and flexion moments were found in participants landing with less flexed knee and hip joints (p<0.001). Conclusion This study demonstrated clear associations between neuromuscular activation patterns and landing kinematics in the sagittal plane during specific parts of the landing. 
These findings have indicated that an erect landing pattern, characterized by less hip and knee flexion, was significantly associated with an increased medial and posterior neuromuscular activation (dominant hamstrings medialis activity) during the preparatory and initial contact phase and an increased lateral neuromuscular activation (dominant vastus lateralis activity) during the peak loading phase. PMID:27101130
Millimeter-Wave Generation Via Plasma Three-Wave Mixing
1988-06-01
Oppositely directed electron plasma waves (EPWs) with different phase velocities (ω_p/k_z1 and ω_p/k_z2) are coupled to a third space-charge wave with the dispersion relation given in Eq. (16); a plasma-loaded-waveguide mode is excited at the intersection of this coupled space-charge wave.
Seismic Hazard analysis of Adjaria Region in Georgia
NASA Astrophysics Data System (ADS)
Jorjiashvili, Nato; Elashvili, Mikheil
2014-05-01
The most commonly used approach to determining seismic-design loads for engineering projects is probabilistic seismic-hazard analysis (PSHA). The primary output from a PSHA is a hazard curve showing the variation of a selected ground-motion parameter, such as peak ground acceleration (PGA) or spectral acceleration (SA), against the annual frequency of exceedance (or its reciprocal, return period). The design value is the ground-motion level that corresponds to a preselected design return period. For many engineering projects, such as standard buildings and typical bridges, the seismic loading is taken from the appropriate seismic-design code, the basis of which is usually a PSHA. For more important engineering projects, where the consequences of failure are more serious, such as dams and chemical plants, it is more usual to obtain the seismic-design loads from a site-specific PSHA, in general using much longer return periods than those governing code-based design. Calculation of probabilistic seismic hazard was performed using the software CRISIS2007 by Ordaz, M., Aguilar, A., and Arboleda, J., Instituto de Ingeniería, UNAM, Mexico. CRISIS implements a classical probabilistic seismic hazard methodology where seismic sources can be modelled as points, lines and areas. In the case of area sources, the software offers an integration procedure that takes advantage of a triangulation algorithm used for seismic source discretization. This solution improves calculation efficiency while maintaining a reliable description of source geometry and seismicity. Additionally, supplementary filters (e.g. a fixed site-source distance that excludes from the calculation sources at great distance) allow the program to balance precision and efficiency during hazard calculation.
Earthquake temporal occurrence is assumed to follow a Poisson process, and the code facilitates two types of MFDs: a truncated exponential Gutenberg-Richter [1944] magnitude distribution and a characteristic magnitude distribution [Youngs and Coppersmith, 1985]. Notably, the software can deal with uncertainty in the seismicity input parameters such as the maximum magnitude value. CRISIS offers a set of built-in GMPEs, as well as the possibility of defining new ones by providing information in a tabular format. Our study shows that in the case of the Ajaristkali HPP study area, a significant contribution to seismic hazard comes from local sources with quite low Mmax values, and thus the two attenuation laws give us quite different PGA and SA values.
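The Poisson occurrence assumption above is what links annual exceedance rates to the design return periods quoted throughout these abstracts; the standard conversion formulas (not code from CRISIS) can be sketched as:

```python
import math

def prob_exceedance(rate_per_year: float, t_years: float) -> float:
    """Probability of at least one exceedance in t_years under a Poisson model."""
    return 1.0 - math.exp(-rate_per_year * t_years)

def return_period(p_exceed: float, t_years: float) -> float:
    """Return period implied by an exceedance probability over t_years."""
    return -t_years / math.log(1.0 - p_exceed)

# The design levels commonly quoted in hazard maps:
print(round(return_period(0.10, 50)))  # 10% in 50 years -> 475-year return period
print(round(return_period(0.02, 50)))  # 2% in 50 years -> 2475-year return period
```

These are the same 10%-, 5%-, and 2%-in-50-years levels used for the probabilistic urban hazard maps in the following abstract.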
First USGS urban seismic hazard maps predict the effects of soils
Cramer, C.H.; Gomberg, J.S.; Schweig, E.S.; Waldron, B.A.; Tucker, K.
2006-01-01
Probabilistic and scenario urban seismic hazard maps have been produced for Memphis, Shelby County, Tennessee, covering a six-quadrangle area of the city. The nine probabilistic maps are for peak ground acceleration and 0.2 s and 1.0 s spectral acceleration and for 10%, 5%, and 2% probability of being exceeded in 50 years. Six scenario maps for these three ground motions have also been generated for both an M7.7 and an M6.2 on the southwest arm of the New Madrid seismic zone ending at Marked Tree, Arkansas. All maps include the effect of local geology. Relative to the national seismic hazard maps, the effect of the thick sediments beneath Memphis is to decrease 0.2 s probabilistic ground motions by 0-30% and increase 1.0 s probabilistic ground motions by approximately 100%. Probabilistic peak ground accelerations remain at levels similar to the national maps, although the ground motion gradient across Shelby County is reduced and ground motions are more uniform within the county. The M7.7 scenario maps show ground motions similar to the 5%-in-50-year probabilistic maps. As an effect of local geology, both M7.7 and M6.2 scenario maps show a more uniform seismic ground-motion hazard across Shelby County than scenario maps with constant site conditions (i.e., NEHRP B/C boundary).
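The map probabilities above translate directly into return periods under the usual Poisson assumption: a probability p of exceedance in 50 years corresponds to an annual rate of -ln(1 - p)/50 and hence a return period of its reciprocal. A short sketch:

```python
import math

def return_period(prob_exceedance, exposure_years):
    """Return period implied by a probability of exceedance over `exposure_years`,
    assuming Poisson occurrences (annual rate = -ln(1 - p) / T)."""
    annual_rate = -math.log(1.0 - prob_exceedance) / exposure_years
    return 1.0 / annual_rate

for p in (0.10, 0.05, 0.02):
    print(f"{p:.0%} in 50 years corresponds to a {return_period(p, 50):.0f}-year return period")
```

This recovers the familiar 475-, 975-, and 2475-year return periods for the three map probabilities.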
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balkey, K.; Witt, F.J.; Bishop, B.A.
1995-06-01
Significant attention has been focused on the issue of reactor vessel pressurized thermal shock (PTS) for many years. Pressurized thermal shock transient events are characterized by a rapid cooldown at potentially high pressure levels that could lead to a reactor vessel integrity concern for some pressurized water reactors. As a result of regulatory and industry efforts in the early 1980s, a probabilistic risk assessment methodology has been established to address this concern. Probabilistic fracture mechanics analyses are performed as part of this methodology to determine the conditional probability of significant flaw extension for given pressurized thermal shock events. While recent industry efforts are underway to benchmark probabilistic fracture mechanics computer codes that are currently used by the nuclear industry, Part I of this report describes the comparison of two independent computer codes used at the time of the development of the original U.S. Nuclear Regulatory Commission (NRC) pressurized thermal shock rule. The work that was originally performed in 1982 and 1983 to compare the U.S. NRC - VISA and Westinghouse (W) - PFM computer codes has been documented and is provided in Part I of this report. Part II of this report describes the results of more recent industry efforts to benchmark PFM computer codes used by the nuclear industry. This study was conducted as part of the USNRC-EPRI Coordinated Research Program for reviewing the technical basis for pressurized thermal shock (PTS) analyses of the reactor pressure vessel. The work focused on the probabilistic fracture mechanics (PFM) analysis codes and methods used to perform the PTS calculations. An in-depth review of the methodologies was performed to verify the accuracy and adequacy of the various codes. The review was structured around a series of benchmark sample problems to provide a specific context for discussion and examination of the fracture mechanics methodology.
NASA Astrophysics Data System (ADS)
Rutberg, Ph G.; Popov, S. D.; Surov, A. V.; Serba, E. O.; Nakonechny, Gh V.; Spodobin, V. A.; Pavlov, A. V.; Surov, A. V.
2012-12-01
The comparison of conductivity obtained in experiments with calculated values is made in this paper. Powerful stationary plasma torches with a prolonged period of continuous work are popular for modern plasma-chemical applications. The maximum electrode lifetime with minimum erosion can be reached while working at rather low currents. Meanwhile, a high arc voltage drop is required to achieve high power. The electric field strength in the arc column of the high-voltage plasma torch, using air as a plasma-forming gas, does not exceed 15 V/cm. It is possible to obtain a high voltage drop in a long arc stabilized in the channel by an intensive gas flow under the given conditions. Models of high-voltage plasma torches with rod electrodes with power up to 50 kW have been developed and investigated. The plasma torch arcs burn in cylindrical channels. The present investigations are directed at studying the possibility of developing long-arc plasma torches with higher power. The advantage of AC power supplies is the possibility of loss minimization due to reactive power compensation. The theoretical maximum of the arc voltage drop for power supplies with inductive current limitation is about 50% of the no-load voltage for a single-phase circuit and about 30% for a three-phase circuit. Burning of intensively blown arcs in a long cylindrical channel using an AC power supply with 10 kV no-load voltage is experimentally investigated in this work. Voltage drops close to the maximum possible were reached in the examined arcs in single-phase and three-phase modes. Operating parameters for the single-phase mode were: current ~30 A, voltage drop ~5 kV, air flow rate 35 g/s; for the three-phase mode: current 40-85 A, voltage drop 2.5-3.2 kV, air flow rate 60-100 g/s. Arc length in the installations exceeded 2 m.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Rui; Zhang, Yingchen
Designing market mechanisms for electricity distribution systems has been a hot topic due to the increased presence of smart loads and distributed energy resources (DERs) in distribution systems. The distribution locational marginal pricing (DLMP) methodology is one of the real-time pricing methods to enable such market mechanisms and provide economic incentives to active market participants. Determining the DLMP is challenging due to high power losses, the voltage volatility, and the phase imbalance in distribution systems. Existing DC Optimal Power Flow (OPF) approaches are unable to model power losses and the reactive power, while single-phase AC OPF methods cannot capture the phase imbalance. To address these challenges, in this paper, a three-phase AC OPF based approach is developed to define and calculate DLMP accurately. The DLMP is modeled as the marginal cost to serve an incremental unit of demand at a specific phase at a certain bus, and is calculated using the Lagrange multipliers in the three-phase AC OPF formulation. Extensive case studies have been conducted to understand the impact of system losses and the phase imbalance on DLMPs as well as the potential benefits of flexible resources.
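As a loose illustration of the marginal-cost definition of DLMP (not the paper's three-phase AC OPF), the sketch below prices demand at a bus as the sensitivity of total dispatch cost to load, computed by finite differences on a toy two-generator dispatch with an assumed quadratic loss term; all numbers are invented.

```python
def dispatch_cost(demand_mw):
    """Least-cost dispatch: a cheap 80 MW unit at $20/MWh plus an expensive
    unit at $50/MWh, with an assumed quadratic delivery-loss term."""
    losses = 0.0002 * demand_mw ** 2          # invented loss model
    net = demand_mw + losses                  # generation must cover load + losses
    cheap = min(net, 80.0)
    expensive = net - cheap
    return 20.0 * cheap + 50.0 * expensive

def marginal_price(demand_mw, eps=1e-4):
    """Finite-difference analogue of the Lagrange-multiplier price."""
    return (dispatch_cost(demand_mw + eps) - dispatch_cost(demand_mw - eps)) / (2 * eps)
```

At 50 MW only the cheap unit runs, so the price is $20/MWh grossed up by marginal losses (about $20.4/MWh); at 100 MW the expensive unit is marginal and the price rises to about $52/MWh, showing how losses and the binding generator jointly set the locational price.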
Sharma, Meena Kumari; Kazmi, Absar Ahmad
2015-01-01
A laboratory-scale study was carried out to investigate the effects of physical properties of the supporting media and variable hydraulic shock loads on the hydraulic characteristics of an advanced onsite wastewater treatment system. The system consisted of two upflow anaerobic reactors (a septic tank and an anaerobic filter) accommodated within a single unit. The study was divided into three phases on the basis of three different supporting media (Aqwise carriers, corrugated ring and baked clay) used in the anaerobic filter. Hydraulic loadings were based on peak flow factor (PFF), varying from one to six, to simulate actual conditions during onsite wastewater treatment. Hydraulic characteristics of the system were identified on the basis of residence time distribution analyses. The system showed a very good hydraulic efficiency, between 0.86 and 0.93, with the media of highest porosity at hydraulic loadings of PFF≤4. Even at the higher hydraulic loading of PFF 6, an appreciable hydraulic efficiency of 0.74 was observed. The system also showed good chemical oxygen demand and total suspended solids removal efficiencies of 80.5% and 82.3%, respectively, at the higher hydraulic loading of PFF 6. The plug-flow dispersion model was found to be the most appropriate to describe the mixing pattern of the system with different supporting media at variable loadings during the tracer study.
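The residence time distribution analysis mentioned above can be sketched as follows, on synthetic tracer data. The hydraulic-efficiency definition used here (mean residence time divided by nominal HRT) is one common choice; the abstract does not state the exact formula used, so treat this purely as an illustration.

```python
def rtd_moments(times, conc):
    """Mean residence time and variance from discrete tracer data (trapezoidal rule)."""
    def trapz(y):
        return sum((y[i] + y[i + 1]) / 2 * (times[i + 1] - times[i]) for i in range(len(y) - 1))
    area = trapz(conc)
    e = [c / area for c in conc]                              # normalized E(t) curve
    t_mean = trapz([t * ei for t, ei in zip(times, e)])       # first moment
    var = trapz([(t - t_mean) ** 2 * ei for t, ei in zip(times, e)])  # second central moment
    return t_mean, var

times = [0, 1, 2, 3, 4, 5, 6, 7, 8]        # hours (synthetic)
conc  = [0, 2, 8, 12, 9, 5, 2, 1, 0]       # tracer concentration (arbitrary units)
t_mean, var = rtd_moments(times, conc)
nominal_hrt = 4.0                           # hours (assumed)
print(f"mean residence time = {t_mean:.2f} h, efficiency = {t_mean / nominal_hrt:.2f}")
```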
Fast loading ester fluorescent Ca2+ and pH indicators into pollen of Pyrus pyrifolia.
Qu, Haiyong; Jiang, Xueting; Shi, Zebin; Liu, Lianmei; Zhang, Shaoling
2012-01-01
Loading of Ca(2+)-sensitive fluorescent probes into plant cells is an essential step to measure activities of free Ca(2+) ions in cytoplasm with a fluorescent imaging technique. Fluo-3 is one of the most suitable Ca(2+) indicators for CLSM. We loaded pollen with fluo-3/AM at three different temperatures. Fluo-3/AM was successfully loaded into pollen at both low (4°C) and high (37°C) temperatures. However, high loading temperature was best suited for pollen, because germination rate of pollen and growth of pollen tubes were relatively little impaired and loading time was shortened. Moreover, Ca(2+) distribution increased in the three apertures of pollen after hydration and showed a Ca(2+) gradient, similar to the tip of growing pollen tubes. The same protocol can be used with the AM-forms of other fluorescent dyes for effective labeling. When loading BCECF-AM into pollen at high temperature, the pollen did not show a pH gradient after hydration. Ca(2+) activities and fluxes had the same periodicity as pollen germination, but pH did not show the same phase and mostly lagged behind. However, the clear zone was alkaline when pollen tube growth was slowed or stopped and turned acidic when growth recovered. It is likely that apical pH(i) regulated pollen tube growth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Small, Ward; Pearson, Mark A.; Metz, Tom R.
Dow Corning SE 1700 (reinforced polydimethylsiloxane) porous structures were made by direct ink writing (DIW) in a simple cubic (SC) configuration. The filament diameter was 250 μm. Structures consisting of 4, 8, or 12 layers were fabricated with center-to-center filament spacing (“road width” (RW)) of 475, 500, 525, 550, or 575 μm. Three compressive load-unload cycles to 2000 kPa were performed on four separate areas of each sample; three samples of each thickness and filament spacing were tested. Geometry-dependent buckling of the SC structure was evident. At a given strain during the third loading phase, stress varied inversely with porosity. At strains of 25% and higher, the stress varied inversely with the number of layers (i.e., thickness); however, the relationship between stress and number of layers was more complex at lower strains. Intra- and inter-sample variability of the load deflection response was higher for thinner and less porous structures.
Martin, Sébastien; Troccaz, Jocelyne; Daanenc, Vincent
2010-04-01
The authors present a fully automatic algorithm for the segmentation of the prostate in three-dimensional magnetic resonance (MR) images. The approach requires the use of an anatomical atlas which is built by computing transformation fields mapping a set of manually segmented images to a common reference. These transformation fields are then applied to the manually segmented structures of the training set in order to get a probabilistic map on the atlas. The segmentation is then realized through a two stage procedure. In the first stage, the processed image is registered to the probabilistic atlas. Subsequently, a probabilistic segmentation is obtained by mapping the probabilistic map of the atlas to the patient's anatomy. In the second stage, a deformable surface evolves toward the prostate boundaries by merging information coming from the probabilistic segmentation, an image feature model and a statistical shape model. During the evolution of the surface, the probabilistic segmentation allows the introduction of a spatial constraint that prevents the deformable surface from leaking in an unlikely configuration. The proposed method is evaluated on 36 exams that were manually segmented by a single expert. A median Dice similarity coefficient of 0.86 and an average surface error of 2.41 mm are achieved. By merging prior knowledge, the presented method achieves a robust and completely automatic segmentation of the prostate in MR images. Results show that the use of a spatial constraint is useful to increase the robustness of the deformable model comparatively to a deformable surface that is only driven by an image appearance model.
Probabilistic BPRRC: Robust Change Detection against Illumination Changes and Background Movements
NASA Astrophysics Data System (ADS)
Yokoi, Kentaro
This paper presents Probabilistic Bi-polar Radial Reach Correlation (PrBPRRC), a change detection method that is robust against illumination changes and background movements. Most of the traditional change detection methods are robust against either illumination changes or background movements; BPRRC is one of the illumination-robust change detection methods. We introduce a probabilistic background texture model into BPRRC and add the robustness against background movements including foreground invasions such as moving cars, walking people, swaying trees, and falling snow. We show the superiority of PrBPRRC in the environment with illumination changes and background movements by using three public datasets and one private dataset: ATON Highway data, Karlsruhe traffic sequence data, PETS 2007 data, and Walking-in-a-room data.
Yoda, Nobuhiro; Ogawa, Toru; Gunji, Yoshinori; Vanegas, Juan R; Kawata, Tetsuo; Sasaki, Keiichi
2016-08-01
The mechanisms by which the loads exerted on implants that support prostheses are modulated during mastication remain unclear. The purpose of this study was to evaluate the effects of food texture on 3-dimensional loads measured at a single implant using a piezoelectric transducer. Two subjects participated in this study. The transducer and the experimental superstructure, which had been adjusted to the subject's occlusal scheme, were attached to the implant with a titanium screw. The foods tested were chewing gum and peanuts. The mean maximum load on the implant in each chewing cycle was significantly higher during peanut chewing than during gum chewing. The direction of maximum load was significantly more widely dispersed during peanut chewing than during gum chewing. The range of changes in load direction during the force-increasing phase of each chewing cycle was significantly wider during peanut chewing than during gum chewing. The load on the implant was affected by food texture in both subjects. This measurement method can be useful to investigate the mechanisms of load modulation on implants during mastication.
Probabilistic structural analysis by extremum methods
NASA Technical Reports Server (NTRS)
Nafday, Avinash M.
1990-01-01
The objective is to demonstrate discrete extremum methods of structural analysis as a tool for structural system reliability evaluation. Specifically, linear and multiobjective linear programming models for analysis of rigid plastic frames under proportional and multiparametric loadings, respectively, are considered. Kinematic and static approaches for analysis form a primal-dual pair in each of these models and have a polyhedral format. Duality relations link extreme points and hyperplanes of these polyhedra and lead naturally to dual methods for system reliability evaluation.
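The kinematic (upper-bound) side of the primal-dual pair can be illustrated without a linear programming solver by enumerating the classical collapse mechanisms of a fixed-base portal frame; the geometry, loads, and plastic moment below are invented, and a full multiparametric treatment would use the linear and multiobjective linear programming models the abstract describes.

```python
def collapse_load_factor(Mp, V, H, L, h):
    """Upper-bound load factors for the three classical portal-frame mechanisms
    (beam, sway, combined); the kinematic theorem takes the minimum over mechanisms.
    Mp: plastic moment, V: vertical load at midspan, H: horizontal load at eaves,
    L: span, h: column height."""
    mechanisms = {
        "beam":     8 * Mp / (V * L),
        "sway":     4 * Mp / (H * h),
        "combined": 6 * Mp / (V * L / 2 + H * h),
    }
    name = min(mechanisms, key=mechanisms.get)
    return name, mechanisms[name]

# Invented example: plastic moment 100 kN·m, V = 40 kN, H = 30 kN, span 8 m, height 4 m.
name, lam = collapse_load_factor(Mp=100.0, V=40.0, H=30.0, L=8.0, h=4.0)
```

In a reliability setting, each mechanism's work equation defines a failure hyperplane, which is how the extreme points and hyperplanes of the polyhedral formulation enter system reliability evaluation.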
Dawdy, M R; Munter, D W; Gilmore, R A
1997-03-01
This study was designed to examine the relationship between patient entry rates (a measure of physician work load) and documentation errors/omissions in both handwritten and dictated emergency treatment records. The study was carried out in two phases. Phase I examined handwritten records and Phase II examined dictated and transcribed records. A total of 838 charts for three common chief complaints (chest pain, abdominal pain, asthma/chronic obstructive pulmonary disease) were retrospectively reviewed and scored for the presence or absence of 11 predetermined criteria. Patient entry rates were determined by reviewing the emergency department patient registration logs. The data were analyzed using simple correlation and linear regression analysis. A positive correlation was found between patient entry rates and documentation errors in handwritten charts. No such correlation was found in the dictated charts. We conclude that work load may negatively affect documentation accuracy when charts are handwritten. However, the use of dictation services may minimize or eliminate this effect.
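The "simple correlation" step described above amounts to a Pearson coefficient between entry rates and error counts; the sketch below uses made-up numbers purely to show the computation.

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

entry_rates  = [1.2, 1.8, 2.1, 2.6, 3.0, 3.5]   # hypothetical patients per hour
error_counts = [0.8, 1.1, 1.3, 1.9, 2.2, 2.4]   # hypothetical errors per chart
r = pearson_r(entry_rates, error_counts)
```

A positive r, as found for the handwritten charts, indicates documentation errors rising with work load.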
A kinetic comparison of back-loading and head-loading in Xhosa women.
Lloyd, R; Parr, B; Davies, S; Cooke, C
2011-04-01
The purpose of this study was to compare the kinetic responses associated with ground reaction force measurements to both head-loading and back-loading in a group of Xhosa women. Altogether, 16 women were divided into two groups based on their experience of head-loading. They walked over a force plate in three conditions: unloaded or carrying 20 kg in either a backpack or on their head. The most striking finding was that there was no difference in kinetic response to head-loading as a consequence of previous experience. Considering the differences between the load carriage methods, most changes were consistent with increasing load. Head-loading was, however, associated with a shorter contact time, smaller thrust maximum and greater vertical force minimum than back-loading. Both loading conditions differed from unloaded walking for a number of temporal variables associated with the ground contact phase, e.g. vertical impact peak was delayed whilst vertical thrust maximum occurred earlier. STATEMENT OF RELEVANCE: Consideration of the kinetics of head and back load carriage in African women is important from a health and safety perspective, providing an understanding of the mechanical adaptations associated with both forms of load carriage for a group of people for whom such load carriage is a daily necessity.
Walker, Simon; Blazevich, Anthony J.; Haff, G. Gregory; Tufano, James J.; Newton, Robert U.; Häkkinen, Keijo
2016-01-01
As training experience increases it becomes more challenging to induce further neuromuscular adaptation. Consequently, strength trainers seek alternative training methods in order to further increase strength and muscle mass. One method is to utilize accentuated eccentric loading, which applies a greater external load during the eccentric phase of the lift as compared to the concentric phase. Based upon this practice, the purpose of this study was to determine the effects of 10 weeks of accentuated eccentric loading vs. traditional isoinertial resistance training in strength-trained men. Young (22 ± 3 years, 177 ± 6 cm, 76 ± 10 kg, n = 28) strength-trained men (2.6 ± 2.2 years experience) were allocated to concentric-eccentric resistance training in the form of accentuated eccentric load (eccentric load = concentric load + 40%) or traditional resistance training, while the control group continued their normal unsupervised training program. Both intervention groups performed three sets of 6-RM (session 1) and three sets of 10-RM (session 2) bilateral leg press and unilateral knee extension exercises per week. Maximum force production was measured by unilateral isometric (110° knee angle) and isokinetic (concentric and eccentric 30°·s−1) knee extension tests, and work capacity was measured by a knee extension repetition-to-failure test. Muscle mass was assessed using panoramic ultrasonography and dual-energy x-ray absorptiometry. Surface electromyogram amplitude normalized to maximum M-wave and the twitch interpolation technique were used to examine maximal muscle activation. After training, maximum isometric torque increased significantly more in the accentuated eccentric load group than control (18 ± 10 vs. 1 ± 5%, p < 0.01), which was accompanied by an increase in voluntary activation (3.5 ± 5%, p < 0.05).
Isokinetic eccentric torque increased significantly after accentuated eccentric load training only (10 ± 9%, p < 0.05), whereas concentric torque increased equally in both the accentuated eccentric load (10 ± 9%, p < 0.01) and traditional (9 ± 6%, p < 0.01) resistance training groups; however, the increase in the accentuated eccentric load group was significantly greater (p < 0.05) than control (1 ± 7%). Knee extension repetition-to-failure improved in the accentuated eccentric load group only (28%, p < 0.05). Similar increases in muscle mass occurred in both intervention groups. In summary, accentuated eccentric load training led to greater increases in maximum force production, work capacity and muscle activation, but not muscle hypertrophy, in strength-trained individuals. PMID:27199764
Rule-based programming paradigm: a formal basis for biological, chemical and physical computation.
Krishnamurthy, V; Krishnamurthy, E V
1999-03-01
A rule-based programming paradigm is described as a formal basis for biological, chemical and physical computations. In this paradigm, the computations are interpreted as the outcome arising out of the interaction of elements in an object space. The interactions can create new elements (or the same elements with modified attributes) or annihilate old elements according to specific rules. Since the interaction rules are inherently parallel, any number of actions can be performed cooperatively or competitively among the subsets of elements, so that the elements evolve toward an equilibrium or an unstable or chaotic state. Such an evolution may retain certain invariant properties of the attributes of the elements. The object space resembles a Gibbsian ensemble that corresponds to a distribution of points in the space of positions and momenta (called phase space). It permits the introduction of probabilities in rule applications. As each element of the ensemble changes over time, its phase point is carried into a new phase point. The evolution of this probability cloud in phase space corresponds to a distributed probabilistic computation. Thus, this paradigm can handle deterministic exact computation when the initial conditions are exactly specified and the trajectory of evolution is deterministic. It can also handle a probabilistic mode of computation if we want to derive macroscopic or bulk properties of matter. We also explain how to support this rule-based paradigm using relational-database-like query processing and transactions.
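A minimal sketch of the object-space idea, with invented elements and rules: a multiset of elements evolves by pairwise interactions, each rule applied with a stated probability. Note the invariants such a rule system preserves (here, A and B deplete in lockstep).

```python
import random

def step(space, rules, rng):
    """Pick two distinct elements; if a rule's left-hand side matches the pair,
    apply it with the rule's probability, replacing the pair with its products."""
    a, b = rng.sample(range(len(space)), 2)
    x, y = space[a], space[b]
    for lhs, prob, make in rules:
        if {x, y} == lhs and rng.random() < prob:
            out = make(x, y)
            for i in sorted((a, b), reverse=True):   # annihilate the reactants
                del space[i]
            space.extend(out)                        # create the products
            return
    # no rule fired: the elements are left unchanged

rng = random.Random(0)
space = ["A"] * 10 + ["B"] * 10
rules = [({"A", "B"}, 0.8, lambda x, y: ["C"])]      # A + B -> C with probability 0.8
for _ in range(2000):
    if space.count("A") and space.count("B"):
        step(space, rules, rng)
```

Because every firing consumes one A and one B, the counts of A and B stay equal throughout, an invariant of the rule set rather than of any particular trajectory.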
Multi-Scale/Multi-Functional Probabilistic Composite Fatigue
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2008-01-01
A multi-level (multi-scale/multi-functional) evaluation is demonstrated by applying it to three different sample problems. These problems include the probabilistic evaluation of a space shuttle main engine blade, an engine rotor and an aircraft wing. The results demonstrate that the blade will fail at the highest probability path, the engine two-stage rotor will fail by fracture at the rim and the aircraft wing will fail at 109 fatigue cycles with a probability of 0.9967.
Quantum probability and Hilbert's sixth problem
NASA Astrophysics Data System (ADS)
Accardi, Luigi
2018-04-01
With the birth of quantum mechanics, the two disciplines that Hilbert proposed to axiomatize, probability and mechanics, became entangled and a new probabilistic model arose in addition to the classical one. Thus, to meet Hilbert's challenge, an axiomatization should account deductively for the basic features of all three disciplines. This goal was achieved within the framework of quantum probability. The present paper surveys the quantum probabilistic axiomatization. This article is part of the themed issue `Hilbert's sixth problem'.
Probabilistic modeling of the evolution of gene synteny within reconciled phylogenies
2015-01-01
Background: Most models of genome evolution concern either genetic sequences, gene content or gene order. They sometimes integrate two of the three levels, but rarely all three of them. Probabilistic models of gene order evolution usually have to assume constant gene content or adopt a presence/absence coding of gene neighborhoods which is blind to complex events modifying gene content. Results: We propose a probabilistic evolutionary model for gene neighborhoods, allowing genes to be inserted, duplicated or lost. It uses reconciled phylogenies, which integrate sequence and gene content evolution. We are then able to optimize parameters such as phylogeny branch lengths, or probabilistic laws depicting the diversity of susceptibility of syntenic regions to rearrangements. We reconstruct a structure for ancestral genomes by optimizing a likelihood, keeping track of all evolutionary events at the level of gene content and gene synteny. Ancestral syntenies are associated with a probability of presence. We implemented the model with the restriction that at most one gene duplication separates two gene speciations in reconciled gene trees. We reconstruct ancestral syntenies on a set of 12 drosophila genomes, and compare the evolutionary rates along the branches and along the sites. We compare with a parsimony method and find a significant number of results not supported by the posterior probability. The model is implemented in the Bio++ library. It thus benefits from and enriches the classical models and methods for molecular evolution. PMID:26452018
Dynamics of fluidic devices with applications to rotor pitch links
NASA Astrophysics Data System (ADS)
Scarborough, Lloyd H., III
Coupling a Fluidic Flexible Matrix Composite (F2MC) to an air-pressurized fluid port produces a fundamentally new class of tunable vibration isolator. This fluidlastic device provides significant vibration reduction at an isolation frequency that can be tuned over a broad frequency range. The material properties and geometry of the F2MC element, as well as the port inertance, determine the isolation frequency. A unique feature of this device is that the port inertance depends on pressure so the isolation frequency can be adjusted by changing the air pressure. For constant port inertance, the isolation frequency is largely independent of the isolated mass so the device is robust to changes in load. A nonlinear model is developed to predict isolator length and port inertance. The model is linearized and the frequency response calculated. Experiments agree with theory, demonstrating a tunable isolation range from 9 Hz to 36 Hz and transmitted force reductions of up to 60 dB at the isolation frequency. Replacing rigid pitch links on rotorcraft with coupled fluidic devices has the potential to reduce the aerodynamic blade loads transmitted through the pitch links to the swashplate. Analytical models of two fluidic devices coupled with three different fluidic circuits are derived. These passive fluidlastic systems are tuned, by varying the fluid inertances and capacitances of each fluidic circuit, to reduce the transmitted pitch-link loads. The different circuit designs result in transmitted pitch link loads reduction at up to three main rotor harmonics. The simulation results show loads reduction at the targeted out-of-phase and in-phase harmonics of up to 88% and 93%, respectively. Experimental validation of two of the fluidic circuits demonstrates loads reduction of up to 89% at the out-of-phase isolation frequencies and up to 81% at the in-phase isolation frequencies. 
Replacing rigid pitch links on rotorcraft with fluidic pitch links changes the blade torsional impedance. At low frequency, the pitch link must have high impedance to pass through the pilot's collective and cyclic commands to control the aircraft. At higher frequencies, however, the pitch-link impedance can be tuned to change the blade pitching response to higher harmonic loads. Active blade control to produce higher harmonic pitch motions has been shown to reduce hub loads and increase rotor efficiency. This work investigates whether fluidic pitch links can passively provide these benefits. An analytical model of a fluidic pitch link is derived and incorporated into a rotor aeroelastic simulation for a rotor similar to that of the UH-60. Eighty-one simulations with varied fluidic pitch link parameters demonstrate that their impedance can be tailored to reduce rotor power and all six hub forces and moments. While no impedance was found that simultaneously reduced all components, the results include cases with reductions in the lateral 4/rev hub force of up to 91% and 4/rev hub pitching moment of up to 67%, and main rotor power of up to 5%.
Finley, B; Paustenbach, D
1994-02-01
Probabilistic risk assessments are enjoying increasing popularity as a tool to characterize the health hazards associated with exposure to chemicals in the environment. Because probabilistic analyses provide much more information to the risk manager than standard "point" risk estimates, this approach has generally been heralded as one which could significantly improve the conduct of health risk assessments. The primary obstacles to replacing point estimates with probabilistic techniques include a general lack of familiarity with the approach and a lack of regulatory policy and guidance. This paper discusses some of the advantages and disadvantages of the point estimate vs. probabilistic approach. Three case studies are presented which contrast and compare the results of each. The first addresses the risks associated with household exposure to volatile chemicals in tapwater. The second evaluates airborne dioxin emissions which can enter the food-chain. The third illustrates how to derive health-based cleanup levels for dioxin in soil. It is shown that, based on the results of Monte Carlo analyses of probability density functions (PDFs), the point estimate approach required by most regulatory agencies will nearly always overpredict the risk for the 95th percentile person by a factor of up to 5. When the assessment requires consideration of 10 or more exposure variables, the point estimate approach will often predict risks representative of the 99.9th percentile person rather than the 50th or 95th percentile person. This paper recommends a number of data distributions for various exposure variables that we believe are now sufficiently well understood to be used with confidence in most exposure assessments. A list of exposure variables that may require additional research before adequate data distributions can be developed are also discussed.
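The compounding-conservatism effect the paper quantifies can be reproduced in miniature: build a point estimate by multiplying per-variable 95th percentiles, then compare it against the true 95th percentile of the product obtained by Monte Carlo sampling. The two lognormal exposure variables below are invented for illustration.

```python
import math
import random

rng = random.Random(1)

def lognormal_p95(mu, sigma):
    """95th percentile of a lognormal(mu, sigma) distribution."""
    return math.exp(mu + 1.645 * sigma)

# Two invented exposure variables; intake = their product (arbitrary units).
params = [(-1.0, 0.6), (0.5, 0.5)]

# Point estimate: stack the per-variable 95th percentiles.
point_estimate = 1.0
for mu, sigma in params:
    point_estimate *= lognormal_p95(mu, sigma)

# Monte Carlo: sample the product and take its empirical 95th percentile.
samples = []
for _ in range(20000):
    v = 1.0
    for mu, sigma in params:
        v *= rng.lognormvariate(mu, sigma)
    samples.append(v)
samples.sort()
mc_p95 = samples[int(0.95 * len(samples))]
ratio = point_estimate / mc_p95
print(f"point estimate overstates the 95th percentile by a factor of {ratio:.2f}")
```

Even with only two variables the stacked point estimate exceeds the distributional 95th percentile; with ten or more variables the gap widens toward the 99.9th-percentile behavior the paper describes.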
Loads Model Development and Analysis for the F/A-18 Active Aeroelastic Wing Airplane
NASA Technical Reports Server (NTRS)
Allen, Michael J.; Lizotte, Andrew M.; Dibley, Ryan P.; Clarke, Robert
2005-01-01
The Active Aeroelastic Wing airplane was successfully flight-tested in March 2005. During phase 1 of the two-phase program, an onboard excitation system provided independent control surface movements that were used to develop a loads model for the wing structure and wing control surfaces. The resulting loads model, which was used to develop the control laws for phase 2, is described. The loads model was developed from flight data through the use of a multiple linear regression technique. The loads model input consisted of aircraft states and control surface positions, in addition to nonlinear inputs that were calculated from flight-measured parameters. The loads model output for each wing consisted of wing-root bending moment and torque, wing-fold bending moment and torque, inboard and outboard leading-edge flap hinge moment, trailing-edge flap hinge moment, and aileron hinge moment. The development of the Active Aeroelastic Wing loads model is described, and the ability of the model to predict loads during phase 2 research maneuvers is demonstrated. Results show a good match to phase 2 flight data for all loads except inboard and outboard leading-edge flap hinge moments at certain flight conditions. The average load prediction errors for all loads at all flight conditions are 9.1 percent for maximum stick-deflection rolls, 4.4 percent for 5-g windup turns, and 7.7 percent for 4-g rolling pullouts.
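The multiple linear regression step can be sketched in a few lines; the inputs and load output below are made up, and the real model used many more regressors, including nonlinear terms computed from flight-measured parameters.

```python
def fit_linear(X, y):
    """Least-squares coefficients for y ≈ X·beta, where X includes an intercept
    column; solves the normal equations by Gaussian elimination (pure Python)."""
    n, p = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)] for i in range(p)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    for i in range(p):                       # forward elimination with partial pivoting
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, p):
            f = A[r][i] / A[i][i]
            for c in range(i, p):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * p
    for i in reversed(range(p)):             # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, p))) / A[i][i]
    return beta

# Synthetic data generated exactly by load = 2 + 3*x1 - 1*x2,
# so the fit should recover those coefficients.
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 2, 3]]
y = [2, 5, 1, 4, 5]
beta = fit_linear(X, y)
```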
IN-SITU THERMAL TREATMENT SYSTEM PERFORMANCE AND MASS REMOVAL METRICS AT FORT LEWIS
The EGDY is the source of a potentially expanding, three-mile-long TCE plume in a sole-source drinking-water aquifer. Thermal remediation is being employed to reduce source mass loading to the dissolved-phase aquifer plume and reduce the time to reach site cleanup goals. This is...
LESSONS LEARNED FROM IN-SITU RESISTIVE HEATING OF TCE AT FORT LEWIS, WASHINGTON
The EGDY is the source of a potentially expanding, three-mile-long TCE plume in a sole-source drinking-water aquifer. Thermal remediation is being employed to reduce source mass loading to the dissolved-phase aquifer plume and reduce the time to reach site cleanup goals. This i...
1990-08-01
transformer core, such as loose or fractured core laminations. A sound level meter with an A-weighting frequency network was used for the...loaded on flatbed trucks as shown in Figure 2 and permanently installed at various sites throughout the Pearl Harbor complex. Figure 3 shows the final
Global Optimization of Interplanetary Trajectories in the Presence of Realistic Mission Constraints
NASA Technical Reports Server (NTRS)
Hinckley, David; Englander, Jacob; Hitt, Darren
2015-01-01
Single-trial evaluations; trial creation by phase-wise GA-style or DE-inspired recombination; a bin repository structure that requires an initialization period; a non-exclusionary kill distance; and a population-collapse mechanic. Main loop: create a trial (probabilistic switch between GA and DE creation types), locally optimize, submit to the repository, and repeat.
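The hybrid creation step, a probabilistic switch between GA-style and DE-style recombination, can be sketched as below. The objective function, parameters, and greedy replacement are illustrative stand-ins for the trajectory problem and the repository/kill-distance mechanics.

```python
import random

def sphere(x):
    # Toy objective standing in for trajectory cost
    return sum(v * v for v in x)

def optimize(dim=5, pop_size=30, gens=200, p_ga=0.5, seed=2):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            if rng.random() < p_ga:
                # GA-style: uniform crossover of two parents, small mutation
                trial = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
                trial = [v + rng.gauss(0, 0.1) for v in trial]
            else:
                # DE-style: mutant a + F*(b - c), binomial crossover
                F, CR = 0.8, 0.9
                mutant = [av + F * (bv - cv) for av, bv, cv in zip(a, b, c)]
                trial = [m if rng.random() < CR else x
                         for m, x in zip(mutant, pop[i])]
            # "Submit to repository" simplified to greedy replacement
            if sphere(trial) < sphere(pop[i]):
                pop[i] = trial
    return min(pop, key=sphere)

best = optimize()
print(sphere(best))
```

A real implementation would replace greedy replacement with the bin repository and add the local-optimization call after each trial creation.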
Probabilistic Assessment of Cancer Risk from Solar Particle Events
NASA Astrophysics Data System (ADS)
Kim, Myung-Hee Y.; Cucinotta, Francis A.
For long duration missions outside of the protection of the Earth's magnetic field, space radiation presents significant health risks, including cancer mortality. Space radiation consists of solar particle events (SPEs), comprised largely of medium-energy protons (less than several hundred MeV), and galactic cosmic rays (GCR), which include high-energy protons and heavy ions. While the frequency distribution of SPEs depends strongly upon the phase within the solar activity cycle, the individual SPE occurrences themselves are random in nature. We estimated the probability of SPE occurrence using a non-homogeneous Poisson model to fit the historical database of proton measurements. Distributions of particle fluences of SPEs for a specified mission period were simulated, ranging from the 5th to the 95th percentile, to assess the cancer risk distribution. Spectral variability of SPEs was also examined, because the detailed energy spectra of protons are important, especially at high energy levels, for assessing the cancer risk associated with energetic particles for large events. We estimated the overall cumulative probability of the GCR environment for a specified mission period using a solar modulation model for the temporal characterization of the GCR environment, represented by the deceleration potential (φ). Probabilistic assessment of fatal cancer risk was calculated for various periods of lunar and Mars missions. This probabilistic approach to risk assessment from space radiation is in support of mission design and operational planning for future manned space exploration missions. In future work, this probabilistic approach to space radiation will be combined with a probabilistic approach to the radiobiological factors that contribute to the uncertainties in projecting cancer risks.
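Occurrence sampling from a non-homogeneous Poisson process can be illustrated with a generic thinning (acceptance-rejection) sampler. The sinusoidal rate function and its parameters below are invented for the example; they are not the model fitted to the proton database.

```python
import math
import random

def sample_spe_times(t_mission, lam, lam_max, seed=3):
    """Sample event times on [0, t_mission] (years) by thinning.

    lam(t) is any rate bounded above by lam_max; candidates are drawn
    from a homogeneous process at rate lam_max and kept with
    probability lam(t)/lam_max.
    """
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)        # next candidate arrival
        if t > t_mission:
            return times
        if rng.random() < lam(t) / lam_max:  # accept with prob lam(t)/lam_max
            times.append(t)

# Illustrative rate: higher near the maximum of an 11-year solar cycle
lam = lambda t: 5.0 + 4.0 * math.sin(2 * math.pi * t / 11.0)
events = sample_spe_times(t_mission=3.0, lam=lam, lam_max=9.0)
print(len(events), "SPEs in a 3-year mission window")
```

Repeating this sampling, and drawing a fluence and spectrum for each accepted event, would yield the per-mission fluence distributions described above.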
Probabilistic Assessment of Cancer Risk from Solar Particle Events
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Cucinotta, Francis A.
2010-01-01
For long duration missions outside of the protection of the Earth's magnetic field, space radiation presents significant health risks, including cancer mortality. Space radiation consists of solar particle events (SPEs), comprised largely of medium-energy protons (less than several hundred MeV), and galactic cosmic rays (GCR), which include high-energy protons and heavy ions. While the frequency distribution of SPEs depends strongly upon the phase within the solar activity cycle, the individual SPE occurrences themselves are random in nature. We estimated the probability of SPE occurrence using a non-homogeneous Poisson model to fit the historical database of proton measurements. Distributions of particle fluences of SPEs for a specified mission period were simulated, ranging from the 5th to the 95th percentile, to assess the cancer risk distribution. Spectral variability of SPEs was also examined, because the detailed energy spectra of protons are important, especially at high energy levels, for assessing the cancer risk associated with energetic particles for large events. We estimated the overall cumulative probability of the GCR environment for a specified mission period using a solar modulation model for the temporal characterization of the GCR environment, represented by the deceleration potential (φ). Probabilistic assessment of fatal cancer risk was calculated for various periods of lunar and Mars missions. This probabilistic approach to risk assessment from space radiation is in support of mission design and operational planning for future manned space exploration missions. In future work, this probabilistic approach to space radiation will be combined with a probabilistic approach to the radiobiological factors that contribute to the uncertainties in projecting cancer risks.
A framework for the probabilistic analysis of meteotsunamis
Geist, Eric L.; ten Brink, Uri S.; Gove, Matthew D.
2014-01-01
A probabilistic technique is developed to assess the hazard from meteotsunamis. Meteotsunamis are unusual sea-level events, generated when the speed of an atmospheric pressure or wind disturbance is comparable to the phase speed of long waves in the ocean. A general aggregation equation is proposed for the probabilistic analysis, based on previous frameworks established for both tsunamis and storm surges, incorporating different sources and source parameters of meteotsunamis. Parameterization of atmospheric disturbances and numerical modeling is performed for the computation of maximum meteotsunami wave amplitudes near the coast. A historical record of pressure disturbances is used to establish a continuous analytic distribution of each parameter as well as the overall Poisson rate of occurrence. A demonstration study is presented for the northeast U.S. in which only isolated atmospheric pressure disturbances from squall lines and derechos are considered. For this study, Automated Surface Observing System stations are used to determine the historical parameters of squall lines from 2000 to 2013. The probabilistic equations are implemented using a Monte Carlo scheme, where a synthetic catalog of squall lines is compiled by sampling the parameter distributions. For each entry in the catalog, ocean wave amplitudes are computed using a numerical hydrodynamic model. Aggregation of the results from the Monte Carlo scheme results in a meteotsunami hazard curve that plots the annualized rate of exceedance with respect to maximum event amplitude for a particular location along the coast. Results from using multiple synthetic catalogs, resampled from the parent parameter distributions, yield mean and quantile hazard curves. Further refinements and improvements for probabilistic analysis of meteotsunamis are discussed.
Jensen, J; Løkke, H; Holmstrup, M; Krogh, P H; Elsgaard, L
2001-08-01
Linear alkylbenzene sulfonates (LAS) can be found in high concentrations in sewage sludge and, hence, may enter the soil compartment as a result of sludge application. Here, LAS may pose a risk for soil-dwelling organisms. In the present probabilistic risk assessment, statistical extrapolation has been used to assess the risk of LAS to soil ecosystems. By use of a log-normal distribution model, the predicted no-effect concentration (PNEC) was estimated for soil fauna, plants, and a combination of these. Due to the heterogeneous endpoints for microorganisms, including functional as well as structural parameters, the use of sensitivity distributions is not considered to be applicable to this group of organisms, and a direct, expert evaluation of toxicity data was used instead. The soil concentration after sludge application was predicted for a number of scenarios and used as the predicted environmental concentration (PEC) in the risk characterization and calculation of risk quotients (RQ = PEC/PNEC). A LAS concentration of 4.6 mg/kg was used as the current best estimate of PNEC in all RQ calculations. Three levels of LAS contamination (530, 2,600, and 16,100 mg/kg), three half-lives (10, 25, and 40 d), and five different sludge loads (2, 4, 6, 8, and 10 t/ha) were included in the risk scenarios. In Denmark, the initial risk ratio would reach 1.5 in a realistic worst-case consideration. For countries not having similar sludge regulations, the estimated risk ratio may initially be considerably higher. However, even in the most extreme scenarios, the level of LAS is expected to be well below the estimated PNEC one year after application. The present risk assessment, therefore, concludes that LAS does not pose a significant risk to fauna, plants, and essential functions of agricultural soils as a result of normal sewage sludge amendment. However, risks have been identified in worst-case scenarios.
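The RQ = PEC/PNEC calculation with first-order decay can be sketched as below, using the PNEC of 4.6 mg/kg and one of the scenario combinations above. The soil-mixing mass is an assumed plough-layer value, not a number from the assessment.

```python
import math

def rq_after(days, las_sludge_mg_kg, sludge_t_ha, half_life_d,
             pnec=4.6, soil_mass_t_ha=2500.0):
    """Risk quotient RQ = PEC/PNEC after first-order decay.

    soil_mass_t_ha (plough-layer soil mass per hectare) is an assumed
    mixing value for the example.
    """
    pec0 = las_sludge_mg_kg * sludge_t_ha / soil_mass_t_ha  # mg LAS / kg soil
    pec = pec0 * math.exp(-math.log(2) * days / half_life_d)
    return pec / pnec

# Extreme scenario: 16,100 mg/kg sludge, 10 t/ha load, 40-day half-life
rq0 = rq_after(0, 16_100, 10, 40)
rq365 = rq_after(365, 16_100, 10, 40)
print(f"RQ at application: {rq0:.2f}, after one year: {rq365:.4f}")
```

Even with an initial RQ well above 1 in this extreme case, nine-plus half-lives of decay within a year drive the quotient far below 1, consistent with the conclusion above.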
SPIKE-2: a Practical Stirling Engine for Kilowatt Level Solar Power
NASA Technical Reports Server (NTRS)
Beale, W. T.
1984-01-01
Recent advances in the art of free-piston Stirling engine design make possible the production of a 1-10 kW free-piston Stirling linear-alternator engine that is hermetically sealed, efficient, durable, and simple in construction and operation. Power output is in the form of single- or three-phase 60 Hz AC, or DC. The three-phase capability is available from a single machine without the need for external power conditioning. Engine voltage control regains the set voltage within 5 cycles in response to any load change. The existing SPIKE-2 design has an engine-alternator efficiency of 25% at a 650 C heater wall temperature and a service life of over three years in solar service. The same system can be scaled over a range of at least 100 watts to 25 kW.
NASA Astrophysics Data System (ADS)
Marzocchi, W.
2011-12-01
Eruption forecasting estimates the probability of an eruption in a specific time-space-magnitude window. The use of probabilities to track the evolution of a phase of unrest is unavoidable for two main reasons: first, eruptions are intrinsically unpredictable in a deterministic sense, and, second, probabilities are a quantitative tool that decision-makers can use rationally (as is usually done in many other fields). The primary information for the probability assessment during a phase of unrest comes from monitoring data on different quantities, such as seismic activity, ground deformation, geochemical signatures, and so on. Nevertheless, probabilistic forecasting based on monitoring data presents two main difficulties. First, many high-risk volcanoes do not have databases of pre-eruptive and unrest monitoring, making a probabilistic assessment based on the frequency of past observations impossible. The ongoing project WOVOdat (led by Christopher Newhall) is trying to tackle this limitation by creating a sort of worldwide epidemiological database that may cope with the lack of pre-eruptive and unrest monitoring data for a specific volcano by using observations of 'analog' volcanoes. Second, the quantity and quality of monitoring data are rapidly increasing at many volcanoes, creating strongly inhomogeneous datasets. In these cases, classical statistical analysis can be performed on high-quality monitoring observations only for (usually too) short periods of time, or alternatively using only a few specific monitoring data streams that are available for longer times (such as the number of earthquakes), thereby neglecting much of the information carried by the most recent kinds of monitoring. Here, we explore a possible strategy to cope with these limitations. In particular, we present a Bayesian strategy that merges different kinds of information.
In this approach, all relevant monitoring observations are embedded into a probabilistic scheme through expert opinion, conceptual models, and, possibly, real past data. After discussing the scientific and philosophical aspects of such an approach, we present some applications for Campi Flegrei and Vesuvius.
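One minimal way to merge an expert prior with analog-volcano observations, in the Bayesian spirit described above, is a beta-binomial update. The prior parameters and the analog counts below are hypothetical, and a real scheme (as at Campi Flegrei) would combine many monitoring streams rather than a single count.

```python
# Expert elicitation gives a Beta(a, b) prior on the probability that
# this type of unrest leads to eruption; records from "analog" volcanoes
# (k eruptions in n comparable unrest episodes) update it.
def update(a, b, k, n):
    # Beta-binomial conjugate update
    return a + k, b + (n - k)

def mean(a, b):
    return a / (a + b)

a0, b0 = 2.0, 8.0   # hypothetical expert prior: ~20% eruption probability
k, n = 6, 15        # hypothetical analog database: 6 of 15 episodes erupted

a1, b1 = update(a0, b0, k, n)
print(f"prior mean {mean(a0, b0):.2f} -> posterior mean {mean(a1, b1):.2f}")
```

The prior encodes expert opinion and conceptual models; the update shows how sparse analog data shift, without replacing, that judgment.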
Load flow and state estimation algorithms for three-phase unbalanced power distribution systems
NASA Astrophysics Data System (ADS)
Madvesh, Chiranjeevi
Distribution load flow and state estimation are two important functions in distribution energy management systems (DEMS) and advanced distribution automation (ADA) systems. Distribution load flow analysis is a tool for analyzing the status of a power distribution system under steady-state operating conditions. In this research, an effective and comprehensive load flow algorithm is developed that extensively incorporates the distribution system components. Distribution system state estimation is a mathematical procedure that aims to estimate the operating states of a power distribution system by utilizing the information collected from available measurement devices in real time. An efficient and computationally effective state estimation algorithm based on the weighted-least-squares (WLS) method has also been developed in this research. Both algorithms are tested on different IEEE test feeders, and the results obtained are validated.
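The WLS core of such an estimator can be sketched on a toy linear (DC) network. The buses, susceptances, and measurement set below are illustrative; a real three-phase unbalanced estimator uses a nonlinear measurement model and iterates, but the weighted normal-equation solve is the same.

```python
import numpy as np

rng = np.random.default_rng(5)

# 3-bus DC network, states x = [theta2, theta3] (bus 1 is the reference).
# Rows: line flows p12, p13, p23 and the injection at bus 2,
# with susceptances b12=10, b13=5, b23=8 (illustrative values).
H = np.array([
    [-10.0,  0.0],  # p12 = b12*(th1 - th2), th1 = 0
    [  0.0, -5.0],  # p13 = b13*(th1 - th3)
    [  8.0, -8.0],  # p23 = b23*(th2 - th3)
    [ 18.0, -8.0],  # p2  = b12*(th2 - th1) + b23*(th2 - th3)
])
x_true = np.array([-0.05, -0.10])           # true angles (rad)
sigma = np.array([0.01, 0.01, 0.01, 0.02])  # meter standard deviations
z = H @ x_true + rng.normal(0, sigma)       # noisy measurements

# WLS estimate: minimize (z - Hx)^T W (z - Hx) with W = diag(1/sigma^2)
W = np.diag(1.0 / sigma**2)
gain = H.T @ W @ H                          # gain matrix
x_hat = np.linalg.solve(gain, H.T @ W @ z)

print("estimated angles:", x_hat)
```

With one redundant measurement (4 measurements, 2 states), the weighting lets the accurate flow meters dominate the noisier injection measurement.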