Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow
NASA Astrophysics Data System (ADS)
Gupta, Atma Ram; Kumar, Ashwani
2017-12-01
Distribution system networks today face the challenge of meeting increased load demand from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and temporal factors such as the season of the year, the day of the week or the time of day. For deterministic radial distribution load flow studies, load is taken as constant. But load varies continually with a high degree of uncertainty, so there is a need to model probable realistic load. Monte-Carlo simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load and by solving a deterministic radial load flow with these values. The probabilistic solution is reconstructed from the deterministic data obtained for each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and comparing the voltage profile and losses under probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
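As a concrete illustration of the Monte-Carlo loop described above, the following minimal Python sketch draws active and reactive loads from their means and standard deviations and feeds each sample to a deterministic solver. The `deterministic_load_flow` stub and all numerical values are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def deterministic_load_flow(p_load, q_load):
    """Hypothetical stand-in for a deterministic radial load flow solver
    (e.g. a backward/forward sweep); returns a fake per-unit voltage."""
    return 1.0 - 0.02 * (p_load + 0.5 * q_load)

rng = np.random.default_rng(1)
p_mean, p_std = 1.0, 0.1    # illustrative active-power statistics (p.u.)
q_mean, q_std = 0.5, 0.05   # illustrative reactive-power statistics (p.u.)

voltages = []
for _ in range(10_000):
    # Sample a probable realistic load, then run the deterministic flow.
    p = rng.normal(p_mean, p_std)
    q = rng.normal(q_mean, q_std)
    voltages.append(deterministic_load_flow(p, q))

# The probabilistic solution is reconstructed from the deterministic runs.
voltages = np.array(voltages)
print(f"V mean = {voltages.mean():.4f} p.u., V std = {voltages.std():.4f} p.u.")
```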
Probabilistic Meteorological Characterization for Turbine Loads
NASA Astrophysics Data System (ADS)
Kelly, M.; Larsen, G.; Dimitrov, N. K.; Natarajan, A.
2014-06-01
Beyond the existing, limited IEC prescription to describe fatigue loads on wind turbines, we look towards probabilistic characterization of the loads via analogous characterization of the atmospheric flow, particularly for today's "taller" turbines with rotors well above the atmospheric surface layer. Based on both data from multiple sites as well as theoretical bases from boundary-layer meteorology and atmospheric turbulence, we offer probabilistic descriptions of shear and turbulence intensity, elucidating the connection of each to the other as well as to atmospheric stability and terrain. These are used as input to loads calculation, and with a statistical loads output description, they allow for improved design and loads calculations.
Inferential Framework for Autonomous Cryogenic Loading Operations
NASA Technical Reports Server (NTRS)
Luchinsky, Dmitry G.; Khasin, Michael; Timucin, Dogan; Sass, Jared; Perotti, Jose; Brown, Barbara
2017-01-01
We address the problem of autonomous management of cryogenic loading operations on the ground and in space. As a step towards the solution of this problem we develop a probabilistic framework for inferring the correlation parameters of two-fluid cryogenic flow. The simulation of two-phase cryogenic flow is performed using a nearly-implicit scheme. A concise set of cryogenic correlations is introduced. The proposed approach is applied to an analysis of the cryogenic flow in the experimental Propellant Loading System built at NASA KSC. An efficient simultaneous optimization of a large number of model parameters is demonstrated and good agreement with the experimental data is obtained.
NASA Technical Reports Server (NTRS)
Rajagopal, Kadambi R.; DebChaudhury, Amitabha; Orient, George
2000-01-01
This report describes a probabilistic structural analysis performed to determine the probabilistic structural response under fluctuating random pressure loads for the Space Shuttle Main Engine (SSME) turnaround vane. It uses a newly developed frequency- and distance-dependent correlation model that has features to model the decay phenomena along the flow and across the flow, with the capability to introduce a phase delay. The analytical results are compared between two computer codes, SAFER (Spectral Analysis of Finite Element Responses) and NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), and with experimentally observed strain gage data. The computer code NESSUS, with an interface to a subset of the Composite Load Spectra (CLS) code, is used for the probabilistic analysis. A fatigue code was used to calculate fatigue damage due to the random pressure excitation. The random variables modeled include engine system primitive variables that influence the operating conditions, the convection velocity coefficient, the stress concentration factor, structural damping, and the thickness of the inner and outer vanes. The need for an appropriate correlation model in addition to the magnitude of the PSD is emphasized. The study demonstrates that correlation characteristics, even under random pressure loads, are capable of causing resonance-like effects for some modes. The study identifies the important variables that contribute to the structural alternating stress response and drive the fatigue damage for the new design. Since the alternating stress for the new redesign is less than the endurance limit for the material, the damage due to high-cycle fatigue is negligible.
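The abstract does not give the correlation model's functional form; the sketch below shows one common Corcos-type cross-spectral model with exponential decay along the flow and a convective phase delay, purely as an illustration. The decay coefficient and convection velocity are assumed values.

```python
import numpy as np

def cross_spectrum_coherence(f, d, u_c=100.0, alpha=0.1):
    """Corcos-type model between points separated by distance d (m):
    frequency-dependent exponential decay plus a phase delay d / u_c.
    u_c (convection velocity) and alpha (decay) are illustrative."""
    omega = 2.0 * np.pi * f
    decay = np.exp(-alpha * omega * d / u_c)      # loss of correlation
    phase = np.exp(-1j * omega * d / u_c)         # convective phase delay
    return decay * phase

f = np.linspace(1.0, 500.0, 500)                  # frequency (Hz)
gamma = cross_spectrum_coherence(f, d=0.05)
print(f"|coh| at 100 Hz: {abs(gamma[99]):.3f}, phase: {np.angle(gamma[99]):.3f} rad")
```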
Probabilistic load simulation: Code development status
NASA Astrophysics Data System (ADS)
Newell, J. F.; Ho, H.
1991-05-01
The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are induced in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system that also includes probabilistic structural analyses, reliability, and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to implement a knowledge-based system coupled with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs. The load simulation module performs the numerical computation to generate the probabilistic loads with load information supplied from the CLS knowledge base.
NASA Astrophysics Data System (ADS)
Enzenhoefer, R.; Rodriguez-Pretelin, A.; Nowak, W.
2012-12-01
"From an engineering standpoint, the quantification of uncertainty is extremely important not only because it allows estimating risk but mostly because it allows taking optimal decisions in an uncertain framework" (Renard, 2007). The most common way to account for uncertainty in the field of subsurface hydrology and wellhead protection is to randomize spatial parameters, e.g. the log-hydraulic conductivity or porosity. This enables water managers to take robust decisions in delineating wellhead protection zones with rationally chosen safety margins in the spirit of probabilistic risk management. Probabilistic wellhead protection zones are commonly based on steady-state flow fields. However, several past studies showed that transient flow conditions may substantially influence the shape and extent of catchments. Therefore, we believe they should be accounted for in the probabilistic assessment and in the delineation process. The aim of our work is to show the significance of flow transients and to investigate the interplay between spatial uncertainty and flow transients in wellhead protection zone delineation. To this end, we advance our concept of probabilistic capture zone delineation (Enzenhoefer et al., 2012) that works with capture probabilities and other probabilistic criteria for delineation. The extended framework is able to evaluate the time fraction that any point on a map falls within a capture zone. In short, we separate capture probabilities into spatial/statistical and time-related frequencies. This will provide water managers additional information on how to manage a well catchment in the light of possible hazard conditions close to the capture boundary under uncertain and time-variable flow conditions. In order to save computational costs, we take advantage of super-positioned flow components with time-variable coefficients. We assume an instantaneous development of steady-state flow conditions after each temporal change in driving forces, following an idea by Festger and Walter, 2002. These quasi steady-state flow fields are cast into a geostatistical Monte Carlo framework to admit and evaluate the influence of parameter uncertainty on the delineation process. Furthermore, this framework enables conditioning on observed data with any conditioning scheme, such as rejection sampling, Ensemble Kalman Filters, etc. To further reduce the computational load, we use the reverse formulation of advective-dispersive transport. We simulate the reverse transport by particle tracking random walk in order to avoid numerical dispersion to account for well arrival times.
Dynamic Probabilistic Instability of Composite Structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2009-01-01
A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower is the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio, the fiber longitudinal modulus, the dynamic load, and the loading rate are the dominant uncertainties, in that order.
Alternate Methods in Refining the SLS Nozzle Plug Loads
NASA Technical Reports Server (NTRS)
Burbank, Scott; Allen, Andrew
2013-01-01
Numerical analysis has shown that the SLS nozzle environmental barrier (nozzle plug) design is inadequate for the prelaunch condition, which consists of two dominant loads: 1) the main engines' startup pressure and 2) an environmentally induced pressure. Efforts to reduce load conservatisms included a dynamic analysis, which showed a 31% higher safety factor compared to the standard static analysis. The environmental load is typically approached with a deterministic method using the worst possible combinations of pressures and temperatures. An alternate probabilistic approach, utilizing the distributions of pressures and temperatures, resulted in a 54% reduction in the environmental pressure load. A Monte Carlo simulation of the environmental load that used five years of historical pressure and temperature data supported the results of the probabilistic analysis, indicating the probabilistic load is reflective of a 3-sigma condition (1 in 370 probability). Utilizing the probabilistic load analysis eliminated excessive conservatisms and will prevent a future overdesign of the nozzle plug. Employing a similar probabilistic approach to other design and analysis activities can result in realistic yet adequately conservative solutions.
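A minimal sketch of the percentile logic implied by the 3-sigma statement, assuming hypothetical pressure/temperature records and an invented combination rule; the two-sided 3-sigma exceedance probability works out to roughly 1 in 370, matching the quoted figure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Placeholder stand-ins for five years of hourly pressure/temperature data.
pressure = rng.normal(101.3, 0.8, size=5 * 365 * 24)     # kPa, illustrative
temperature = rng.normal(295.0, 8.0, size=5 * 365 * 24)  # K, illustrative

# Hypothetical combination of the drivers into an environmental load.
env_load = 0.01 * pressure * (1.0 + 0.002 * (temperature - 295.0))

# Two-sided 3-sigma corresponds to an exceedance of ~0.0027, i.e. 1 in 370.
p_exceed = 2.0 * (1.0 - stats.norm.cdf(3.0))
design_load = np.quantile(env_load, 1.0 - p_exceed)
print(f"3-sigma design load: {design_load:.4f} (1 in {1 / p_exceed:.0f})")
```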
Life Predicted in a Probabilistic Design Space for Brittle Materials With Transient Loads
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Palfi, Tamas; Reh, Stefan
2005-01-01
Analytical techniques have progressively become more sophisticated, and now we can consider the probabilistic nature of the entire space of random input variables on the lifetime reliability of brittle structures. This was demonstrated with NASA's CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code combined with the commercially available ANSYS/Probabilistic Design System (ANSYS/PDS), a probabilistic analysis tool that is an integral part of the ANSYS finite-element analysis program. ANSYS/PDS allows probabilistic loads, component geometry, and material properties to be considered in the finite-element analysis. CARES/Life predicts the time-dependent probability of failure of brittle material structures under generalized thermomechanical loading--such as that found in a turbine engine hot section. Glenn researchers coupled ANSYS/PDS with CARES/Life to assess the effects of the stochastic variables of component geometry, loading, and material properties on the predicted life of the component for fully transient thermomechanical loading and cyclic loading.
NASA Astrophysics Data System (ADS)
Hoffmann, K.; Srouji, R. G.; Hansen, S. O.
2017-12-01
The technology development within the structural design of long-span bridges in Norwegian fjords has created a need for reformulating the calculation format and the physical quantities used to describe the properties of wind and the associated wind-induced effects on bridge decks. Parts of a new probabilistic format describing the incoming, undisturbed wind are presented. It is expected that a fixed probabilistic format will facilitate a more physically consistent and precise description of the wind conditions, which in turn increases the accuracy of wind load assessments and considerably reduces their uncertainties. Because the format is probabilistic, a quantification of the level of safety and uncertainty in predicted wind loads is readily accessible. A simple buffeting response calculation demonstrates the use of probabilistic wind data in the assessment of wind loads and responses. Furthermore, vortex-induced fatigue damage is discussed in relation to probabilistic wind turbulence data and response measurements from wind tunnel tests.
Non-Deterministic Dynamic Instability of Composite Shells
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2004-01-01
A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics, and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower is the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio and the fiber longitudinal modulus, dynamic load and loading rate are the dominant uncertainties, in that order.
Composite load spectra for select space propulsion structural components
NASA Technical Reports Server (NTRS)
Newell, J. F.; Ho, H. W.; Kurth, R. E.
1991-01-01
This report describes the work performed to develop composite load spectra (CLS) for the Space Shuttle Main Engine (SSME) using probabilistic methods. Three methods were implemented for the engine system influence model. RASCAL was chosen as the principal method, as most component load models were implemented with it. Validation of RASCAL was performed; accuracy comparable to the Monte Carlo method can be obtained if a large enough bin size is used. Generic probabilistic models were developed and implemented for load calculations using the probabilistic methods discussed above. Each engine mission, whether a real flight or a test, has three mission phases: the engine start transient phase, the steady-state phase, and the engine cutoff transient phase. Power level and engine operating inlet conditions change during a mission. The load calculation module provides steady-state and quasi-steady-state calculation procedures with a duty-cycle-data option; the quasi-steady-state procedure is for engine transient phase calculations. In addition, a few generic probabilistic load models were also developed for specific conditions. These include the fixed transient spike model, the Poisson arrival transient spike model, and the rare event model. These generic probabilistic load models provide sufficient latitude for simulating loads with specific conditions. For SSME components, turbine blades, transfer ducts, the LOX post, and the high pressure oxidizer turbopump (HPOTP) discharge duct were selected for application of the CLS program. The loads include static and dynamic pressure loads for all four components, centrifugal force for the turbine blade, temperatures for thermal loads for all four components, and structural vibration loads for the ducts and LOX posts.
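A minimal sketch of what a Poisson arrival transient spike model can look like: spike times from exponential inter-arrival gaps and random spike amplitudes. The rate, duration, and amplitude distribution are assumptions for illustration, not the CLS calibration.

```python
import numpy as np

rng = np.random.default_rng(3)
duration = 8.0   # transient phase duration (s), illustrative
rate = 1.5       # mean spike arrival rate (spikes/s), illustrative

# Poisson arrivals: accumulate exponential inter-arrival times, keep those
# falling inside the phase.
arrivals = np.cumsum(rng.exponential(1.0 / rate, size=100))
arrivals = arrivals[arrivals < duration]

# Each spike carries a random amplitude on top of the nominal load.
amplitudes = rng.lognormal(mean=0.0, sigma=0.3, size=arrivals.size)
print(f"{arrivals.size} spikes at t = {np.round(arrivals, 2)} s")
print(f"amplitudes = {np.round(amplitudes, 2)}")
```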
The composite load spectra project
NASA Technical Reports Server (NTRS)
Newell, J. F.; Ho, H.; Kurth, R. E.
1990-01-01
Probabilistic methods and generic load models capable of simulating the load spectra that are induced in space propulsion system components are being developed. Four engine component types (the transfer ducts, the turbine blades, the liquid oxygen posts and the turbopump oxidizer discharge duct) were selected as representative hardware examples. The composite load spectra that simulate the probabilistic loads for these components are typically used as the input loads for a probabilistic structural analysis. The knowledge-based system approach used for the composite load spectra project provides an ideal environment for incremental development. The intelligent database paradigm employed in developing the expert system provides a smooth coupling between the numerical processing and the symbolic (information) processing. Large volumes of engine load information and engineering data are stored in database format and managed by a database management system. Numerical procedures for probabilistic load simulation and database management functions are controlled by rule modules. Rules were hard-wired as decision trees into rule modules to perform process control tasks. There are modules to retrieve load information and models. There are modules to select loads and models to carry out quick load calculations or make an input file for full duty-cycle time dependent load simulation. The composite load spectra load expert system implemented today is capable of performing intelligent rocket engine load spectra simulation. Further development of the expert system will provide tutorial capability for users to learn from it.
Probabilistic simulation of stress concentration in composite laminates
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Murthy, P. L. N.; Liaw, L.
1993-01-01
A computational methodology is described to probabilistically simulate the stress concentration factors in composite laminates. This new approach consists of coupling probabilistic composite mechanics with probabilistic finite element structural analysis. The probabilistic composite mechanics is used to probabilistically describe all the uncertainties inherent in composite material properties, while probabilistic finite element analysis is used to probabilistically describe the uncertainties associated with methods to experimentally evaluate stress concentration factors, such as loads, geometry, and supports. The effectiveness of the methodology is demonstrated by using it to simulate the stress concentration factors in composite laminates made from three different composite systems. Simulated results match experimental data for probability density and for cumulative distribution functions. The sensitivity factors indicate that the stress concentration factors are influenced by local stiffness variables, by load eccentricities, and by initial stress fields.
Probabilistic Evaluation of Advanced Ceramic Matrix Composite Structures
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2003-01-01
The objective of this report is to summarize the deterministic and probabilistic structural evaluation results of two structures made with advanced ceramic matrix composites (CMC): an internally pressurized tube and a uniformly loaded flange. The deterministic structural evaluation includes stress, displacement, and buckling analyses. It is carried out using the finite element code MHOST, developed for the 3-D inelastic analysis of structures that are made with advanced materials. The probabilistic evaluation is performed using the integrated probabilistic assessment of composite structures computer code IPACS. The effects of uncertainties in primitive variables related to the material, fabrication process, and loadings on the material property and structural response behavior are quantified. The primitive variables considered are: thermo-mechanical properties of fiber and matrix, fiber and void volume ratios, use temperature, and pressure. The probabilistic structural analysis and probabilistic strength results are used by IPACS to perform reliability and risk evaluation of the two structures. The results show that the sensitivity information obtained for the two composite structures from the computational simulation can be used to alter the design process to meet desired service requirements. In addition to detailed probabilistic analysis of the two structures, the following were performed specifically on the CMC tube: (1) predicted the failure load and the buckling load, (2) performed coupled non-deterministic multi-disciplinary structural analysis, and (3) demonstrated that probabilistic sensitivities can be used to select a reduced set of design variables for optimization.
Probabilistic structural analysis using a general purpose finite element program
NASA Astrophysics Data System (ADS)
Riha, D. S.; Millwater, H. R.; Thacker, B. H.
1992-07-01
This paper presents an accurate and efficient method to predict the probabilistic response of structural quantities, such as stress, displacement, natural frequencies, and buckling loads, by combining the capabilities of MSC/NASTRAN, including design sensitivity analysis, with fast probability integration. Two probabilistic structural analysis examples have been performed and verified by comparison with Monte Carlo simulation of the analytical solution. The first example consists of a cantilevered plate with several point loads. The second example is a probabilistic buckling analysis of a simply supported composite plate under in-plane loading. The coupling of MSC/NASTRAN and fast probability integration is shown to be orders of magnitude more efficient than Monte Carlo simulation, with excellent accuracy.
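A minimal illustration of why fast probability integration beats sampling for smooth problems: for a linear limit state with independent normal variables, the first-order reliability result is exact and costs two moments, while Monte Carlo needs many runs. The numbers are invented; this is not the MSC/NASTRAN coupling itself.

```python
import numpy as np
from scipy import stats

# Illustrative limit state g = R - S (resistance minus load effect),
# with independent normal R and S; FORM is exact in this special case.
mu_r, sd_r = 10.0, 1.0
mu_s, sd_s = 7.0, 1.5

beta = (mu_r - mu_s) / np.hypot(sd_r, sd_s)   # reliability index
pf_fast = stats.norm.cdf(-beta)               # fast probability integration

# Monte Carlo check: the expensive route the coupling avoids.
rng = np.random.default_rng(0)
n = 1_000_000
pf_mc = np.mean(rng.normal(mu_r, sd_r, n) < rng.normal(mu_s, sd_s, n))
print(f"FPI Pf = {pf_fast:.5f}, Monte Carlo Pf = {pf_mc:.5f}")
```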
Probabilistic Aeroelastic Analysis of Turbomachinery Components
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Mital, S. K.; Stefko, G. L.
2004-01-01
A probabilistic approach is described for the aeroelastic analysis of turbomachinery blade rows. Blade rows with subsonic flow and blade rows with supersonic flow with a subsonic leading edge are considered. To demonstrate the probabilistic approach, the flutter frequency, damping, and forced response of a blade row representing a compressor geometry are considered. The analysis accounts for uncertainties in structural and aerodynamic design variables. The results are presented in the form of probability density functions (PDFs) and sensitivity factors. For the subsonic flow cascade, comparisons are also made with different probabilistic distributions, probabilistic methods, and Monte-Carlo simulation. The results show that the probabilistic approach provides a more realistic and systematic way to assess the effect of uncertainties in design variables on aeroelastic instabilities and response.
Probabilistic Simulation of Stress Concentration in Composite Laminates
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Murthy, P. L. N.; Liaw, D. G.
1994-01-01
A computational methodology is described to probabilistically simulate the stress concentration factors (SCF's) in composite laminates. This new approach consists of coupling probabilistic composite mechanics with probabilistic finite element structural analysis. The composite mechanics is used to probabilistically describe all the uncertainties inherent in composite material properties, whereas the finite element analysis is used to probabilistically describe the uncertainties associated with methods to experimentally evaluate SCF's, such as loads, geometry, and supports. The effectiveness of the methodology is demonstrated by using it to simulate the SCF's in three different composite laminates. Simulated results match experimental data for probability density and for cumulative distribution functions. The sensitivity factors indicate that the SCF's are influenced by local stiffness variables, by load eccentricities, and by initial stress fields.
Composite Load Spectra for Select Space Propulsion Structural Components
NASA Technical Reports Server (NTRS)
Ho, Hing W.; Newell, James F.
1994-01-01
Generic load models are described with multiple levels of progressive sophistication to simulate the composite (combined) load spectra (CLS) that are induced in space propulsion system components, representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades and liquid oxygen (LOX) posts. These generic (coupled) models combine the deterministic models for composite load dynamic, acoustic, high-pressure and high rotational speed, etc., load simulation using statistically varying coefficients. These coefficients are then determined using advanced probabilistic simulation methods with and without strategically selected experimental data. The entire simulation process is included in a CLS computer code. Applications of the computer code to various components in conjunction with the PSAM (Probabilistic Structural Analysis Method) to perform probabilistic load evaluation and life prediction evaluations are also described to illustrate the effectiveness of the coupled model approach.
Probabilistic Structural Analysis of SSME Turbopump Blades: Probabilistic Geometry Effects
NASA Technical Reports Server (NTRS)
Nagpal, V. K.
1985-01-01
A probabilistic study was initiated to evaluate the effects of geometric and material property tolerances on the structural response of turbopump blades. To complete this study, a number of important probabilistic variables were identified which are considered to affect the structural response of the blade. In addition, a methodology was developed to statistically quantify the influence of these probabilistic variables in an optimized way. The identified variables include random geometric and material property perturbations, different loadings, and a probabilistic combination of these loadings. Influences of these probabilistic variables are quantified by evaluating the blade structural response. Studies of the geometric perturbations were conducted for a flat plate geometry as well as for a Space Shuttle Main Engine blade geometry using a special-purpose code based on the finite element approach. Analyses indicate that the variances of the perturbations about given mean values have a significant influence on the response.
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. S. Schroeder; R. W. Youngblood
The Risk-Informed Safety Margin Characterization (RISMC) pathway of the Light Water Reactor Sustainability Program is developing simulation-based methods and tools for analyzing safety margin from a modern perspective. [1] There are multiple definitions of 'margin.' One class of definitions defines margin in terms of the distance between a point estimate of a given performance parameter (such as peak clad temperature), and a point-value acceptance criterion defined for that parameter (such as 2200 F). The present perspective on margin is that it relates to the probability of failure, and not just the distance between a nominal operating point and a criterion. In this work, margin is characterized through a probabilistic analysis of the 'loads' imposed on systems, structures, and components, and their 'capacity' to resist those loads without failing. Given the probabilistic load and capacity spectra, one can assess the probability that load exceeds capacity, leading to component failure. Within the project, we refer to a plot of these probabilistic spectra as 'the logo.' Refer to Figure 1 for a notional illustration. The implications of referring to 'the logo' are (1) RISMC is focused on being able to analyze loads and spectra probabilistically, and (2) calling it 'the logo' tacitly acknowledges that it is a highly simplified picture: meaningful analysis of a given component failure mode may require development of probabilistic spectra for multiple physical parameters, and in many practical cases, 'load' and 'capacity' will not vary independently.
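The load-versus-capacity picture ('the logo') reduces to a one-dimensional exceedance integral when a single parameter governs failure: P(load > capacity) = integral of F_capacity(x) * f_load(x) dx. The sketch below evaluates it numerically for invented normal spectra around the peak-clad-temperature example.

```python
import numpy as np
from scipy import stats

# Illustrative probabilistic "load" and "capacity" spectra for a single
# performance parameter, e.g. peak clad temperature (deg F).
load = stats.norm(loc=1800.0, scale=120.0)
capacity = stats.norm(loc=2200.0, scale=80.0)

# P(load > capacity) = integral of F_capacity(x) * f_load(x) dx,
# evaluated here by a simple Riemann sum.
x = np.linspace(1200.0, 2800.0, 4001)
dx = x[1] - x[0]
p_fail = np.sum(capacity.cdf(x) * load.pdf(x)) * dx
print(f"P(load exceeds capacity) = {p_fail:.2e}")
```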
Simulation of probabilistic wind loads and building analysis
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Chamis, Christos C.
1991-01-01
Probabilistic wind loads likely to occur on a structure during its design life are predicted. Described here is a suitable multifactor interactive equation (MFIE) model and its use in the Composite Load Spectra (CLS) computer program to simulate the wind pressure cumulative distribution functions on four sides of a building. The simulated probabilistic wind pressure load was applied to a building frame, and cumulative distribution functions of sway displacements and reliability against overturning were obtained using NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), a stochastic finite element computer code. The geometry of the building and the properties of building members were also treated as random in the NESSUS analysis. The uncertainties of wind pressure, building geometry, and member section properties were quantified in terms of their respective sensitivities on the structural response.
NASA Technical Reports Server (NTRS)
Merchant, D. H.
1976-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.
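A worked sketch of the extreme-value idea: if a mission contains n independent load events with per-event CDF F, the mission maximum has CDF F(x)^n, and the design limit load is a chosen percentile of that maximum. The distribution and numbers below are assumptions.

```python
from scipy import stats

# Illustrative per-event load distribution and events per mission.
per_event = stats.norm(loc=100.0, scale=10.0)   # e.g. kN
n_events = 1000

# F_max(x) = F(x)**n, so the p-th percentile of the mission maximum
# solves F(x) = p**(1/n).
p = 0.99
design_limit_load = per_event.ppf(p ** (1.0 / n_events))
print(f"Design limit load (99th percentile of mission max): "
      f"{design_limit_load:.1f} kN")
```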
NASA Astrophysics Data System (ADS)
van der Velde, Y.; Rozemeijer, J. C.; de Rooij, G. H.; van Geer, F. C.; Torfs, P. J. J. F.; de Louw, P. G. B.
2010-10-01
Identifying effective measures to reduce nutrient loads of headwaters in lowland catchments requires a thorough understanding of the flow routes of water and nutrients. In this paper we assess the value of nested-scale discharge and groundwater level measurements for predictions of catchment-scale discharge and nitrate loads. In order to relate field-site measurements to the catchment scale, an upscaling approach is introduced that assumes that scale differences in flow route fluxes originate from differences in the relationship between groundwater storage and the spatial structure of the groundwater table. This relationship is characterized by the Groundwater Depth Distribution (GDD) curve, which relates spatial variation in groundwater depths to the average groundwater depth. The GDD-curve was measured for a single field site (0.009 km2) and simple process descriptions were applied to relate the groundwater levels to flow route discharges. This parsimonious model could accurately describe observed storage, tube drain discharge, overland flow and groundwater flow simultaneously, with Nash-Sutcliffe coefficients exceeding 0.8. A probabilistic Monte Carlo approach was applied to upscale field-site measurements to catchment scales by inferring scale-specific GDD-curves from hydrographs of two nested catchments (0.4 and 6.5 km2). The estimated contribution of tube drain effluent (a dominant source for nitrates) decreased with increasing scale from 76-79% at the field site to 34-61% and 25-50% for the two catchment scales. These results were validated by demonstrating that a model conditioned on nested-scale measurements yields better simulations of nitrate loads and better predictions of extreme discharges during validation periods than a model conditioned on catchment discharge only.
Probabilistic structural analysis methods for space propulsion system components
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1986-01-01
The development of a three-dimensional inelastic analysis methodology for the Space Shuttle main engine (SSME) structural components is described. The methodology is composed of: (1) composite load spectra, (2) probabilistic structural analysis methods, (3) the probabilistic finite element theory, and (4) probabilistic structural analysis. The methodology has led to significant technical progress in several important aspects of probabilistic structural analysis. The program and accomplishments to date are summarized.
Composite load spectra for select space propulsion structural components
NASA Technical Reports Server (NTRS)
Newell, J. F.; Kurth, R. E.; Ho, H.
1986-01-01
A multiyear program is performed with the objective to develop generic load models with multiple levels of progressive sophistication to simulate the composite (combined) load spectra that are induced in space propulsion system components, representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, and liquid oxygen (LOX) posts. Progress of the first year's effort includes completion of a sufficient portion of each task -- probabilistic models, code development, validation, and an initial operational code. This code has had from its inception an expert-system philosophy that can be added to throughout the program and in the future. The initial operational code is applicable only to turbine-blade-type loadings. The probabilistic model included in the operational code has fitting routines for loads that utilize a modified discrete probabilistic distribution method termed RASCAL, a barrier-crossing method, and a Monte Carlo method. An initial load model was developed by Battelle that is currently used for the slowly varying duty-cycle-type loading. The intent is to use the model and related codes essentially in their current form for all loads that are based on measured or calculated data that follow a slowly varying profile.
A look-ahead probabilistic contingency analysis framework incorporating smart sampling techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Etingov, Pavel V.; Ren, Huiying
2016-07-18
This paper describes a framework of incorporating smart sampling techniques in a probabilistic look-ahead contingency analysis application. The predictive probabilistic contingency analysis helps to reflect the impact of uncertainties caused by variable generation and load on potential violations of transmission limits.
Probabilistic sizing of laminates with uncertainties
NASA Technical Reports Server (NTRS)
Shah, A. R.; Liaw, D. G.; Chamis, C. C.
1993-01-01
A reliability-based design methodology for laminate sizing and configuration for a special case of composite structures is described. The methodology combines probabilistic composite mechanics with probabilistic structural analysis. Uncertainties in the constituent materials (fiber and matrix) are propagated to the predicted macroscopic behavior using probabilistic theory. Uncertainties in the degradation of composite material properties are included in this design methodology. A multi-factor interaction equation is used to evaluate load- and environment-dependent degradation of the composite material properties at the micromechanics level. The methodology is integrated into the computer code IPACS (Integrated Probabilistic Assessment of Composite Structures). The versatility of this design approach is demonstrated by performing a multi-level probabilistic analysis to size the laminates for the design structural reliability of random-type structures. The results show that laminate configurations can be selected to improve the structural reliability from three failures in 1000 to none in one million. Results also show that the laminates with the highest reliability are the least sensitive to the loading conditions.
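A sketch of a multi-factor interaction equation in one commonly cited form, P = P_ref * prod(((A_F - A) / (A_F - A_ref))^n) over the participating factors; the factor list, exponents, and values here are illustrative assumptions, not the paper's calibration.

```python
def mfie_property(p_ref, factors):
    """Multi-factor interaction equation (one commonly cited form).
    factors: list of (current, reference, final/ultimate, exponent)."""
    p = p_ref
    for a, a_ref, a_final, n in factors:
        p *= ((a_final - a) / (a_final - a_ref)) ** n
    return p

# Example: a matrix strength degraded by temperature and applied stress.
strength = mfie_property(
    p_ref=100.0,  # MPa at reference conditions (illustrative)
    factors=[
        (150.0, 20.0, 300.0, 0.5),   # temperature: current, ref, final, exp
        (40.0, 0.0, 100.0, 0.25),    # stress: current, ref, ultimate, exp
    ],
)
print(f"Degraded strength: {strength:.1f} MPa")
```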
Probabilistic liquefaction triggering based on the cone penetration test
Moss, R.E.S.; Seed, R.B.; Kayen, R.E.; Stewart, J.P.; Tokimatsu, K.
2005-01-01
Performance-based earthquake engineering requires a probabilistic treatment of potential failure modes in order to accurately quantify the overall stability of the system. This paper is a summary of the application portions of the probabilistic liquefaction triggering correlations recently proposed by Moss and co-workers. To enable probabilistic treatment of liquefaction triggering, the variables comprising the seismic load and the liquefaction resistance were treated as inherently uncertain. Supporting data from an extensive Cone Penetration Test (CPT)-based liquefaction case history database were used to develop a probabilistic correlation. The methods used to measure the uncertainty of the load and resistance variables, how the interactions of these variables were treated using Bayesian updating, and how reliability analysis was applied to produce curves of equal probability of liquefaction are presented. The normalization for effective overburden stress, the magnitude-correlated duration weighting factor, and the non-linear shear mass participation factor used are also discussed.
Composite load spectra for select space propulsion structural components
NASA Technical Reports Server (NTRS)
Newell, J. F.; Kurth, R. E.; Ho, H.
1991-01-01
The objective of this program is to develop generic load models with multiple levels of progressive sophistication to simulate the composite (combined) load spectra that are induced in space propulsion system components, representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, and liquid oxygen posts and system ducting. The first approach will consist of using state of the art probabilistic methods to describe the individual loading conditions and combinations of these loading conditions to synthesize the composite load spectra simulation. The second approach will consist of developing coupled models for composite load spectra simulation which combine the deterministic models for composite load dynamic, acoustic, high pressure, and high rotational speed, etc., load simulation using statistically varying coefficients. These coefficients will then be determined using advanced probabilistic simulation methods with and without strategically selected experimental data.
Fracture mechanics analysis of cracked structures using weight function and neural network method
NASA Astrophysics Data System (ADS)
Chen, J. G.; Zang, F. G.; Yang, Y.; Shi, K. K.; Fu, X. L.
2018-06-01
Stress intensity factors (SIFs) due to thermal-mechanical loads have been established using the weight function method. Two reference stress states were used to determine the coefficients in the weight function. Results were evaluated using data from the literature and show good agreement. The SIFs can therefore be determined quickly, using the weight function obtained, when cracks are subjected to arbitrary loads, and the presented method can be used for probabilistic fracture mechanics analysis. A probabilistic methodology combining Monte Carlo simulation with a neural network (MCNN) has been developed. The results indicate that an accurate probabilistic characterization of K_I can be obtained by using the developed method. The probability of failure increases with increasing loads, and the relationship between them is nonlinear.
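A numerical sketch of the weight function method, K_I = integral from 0 to a of sigma(x) * m(x, a) dx; the leading-order edge-crack weight function and the linear stress profile below are illustrative stand-ins for the thermal-mechanical case in the paper.

```python
import numpy as np

def sif_weight_function(sigma, m, a, n=4000):
    """K_I = integral_0^a sigma(x) * m(x, a) dx, by midpoint quadrature
    (the weight function is integrably singular at x = a)."""
    x = (np.arange(n) + 0.5) * a / n      # midpoints avoid the endpoint
    return np.sum(sigma(x) * m(x, a)) * a / n

# Leading-order edge-crack weight function and a linear stress profile;
# both are illustrative, not the paper's thermal-mechanical case.
m = lambda x, a: np.sqrt(2.0 / (np.pi * (a - x)))
sigma = lambda x: 200.0 * (1.0 - 50.0 * x)   # MPa over a 0.01 m crack

k1 = sif_weight_function(sigma, m, a=0.01)
print(f"K_I ~ {k1:.2f} MPa*sqrt(m)")
```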
Probabilistic finite elements for fracture mechanics
NASA Technical Reports Server (NTRS)
Besterfield, Glen
1988-01-01
The probabilistic finite element method (PFEM) is developed for probabilistic fracture mechanics (PFM). A finite element which has the near crack-tip singular strain embedded in the element is used. Probabilistic distributions, such as the expectation, covariance, and correlation of stress intensity factors, are calculated for random load, random material properties, and random crack length. The method is computationally quite efficient and can be expected to determine the probability of fracture or reliability.
Probabilistic Prediction of Lifetimes of Ceramic Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Gyekenyesi, John P.; Jadaan, Osama M.; Palfi, Tamas; Powers, Lynn; Reh, Stefan; Baker, Eric H.
2006-01-01
ANSYS/CARES/PDS is a software system that combines the ANSYS Probabilistic Design System (PDS) software with a modified version of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) Version 6.0 software. [A prior version of CARES/Life was reported in Program for Evaluation of Reliability of Ceramic Parts (LEW-16018), NASA Tech Briefs, Vol. 20, No. 3 (March 1996), page 28.] CARES/Life models effects of stochastic strength, slow crack growth, and stress distribution on the overall reliability of a ceramic component. The essence of the enhancement in CARES/Life 6.0 is the capability to predict the probability of failure using results from transient finite-element analysis. ANSYS PDS models the effects of uncertainty in material properties, dimensions, and loading on the stress distribution and deformation. ANSYS/CARES/PDS accounts for the effects of probabilistic strength, probabilistic loads, probabilistic material properties, and probabilistic tolerances on the lifetime and reliability of the component. Even failure probability becomes a stochastic quantity that can be tracked as a response variable. ANSYS/CARES/PDS enables tracking of all stochastic quantities in the design space, thereby enabling more precise probabilistic prediction of lifetimes of ceramic components.
Probabilistic structural analysis methods of hot engine structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Hopkins, D. A.
1989-01-01
Development of probabilistic structural analysis methods for hot engine structures is a major activity at Lewis Research Center. Recent activities have focused on extending the methods to include the combined uncertainties in several factors on structural response. This paper briefly describes recent progress on composite load spectra models, probabilistic finite element structural analysis, and probabilistic strength degradation modeling. Progress is described in terms of fundamental concepts, computer code development, and representative numerical results.
Probabilistic structural analysis of space propulsion system LOX post
NASA Technical Reports Server (NTRS)
Newell, J. F.; Rajagopal, K. R.; Ho, H. W.; Cunniff, J. M.
1990-01-01
The probabilistic structural analysis program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress; Cruse et al., 1988) is applied to characterize the dynamic loading and response of the Space Shuttle main engine (SSME) LOX post. The design and operation of the SSME are reviewed; the LOX post structure is described; and particular attention is given to the generation of composite load spectra, the finite-element model of the LOX post, and the steps in the NESSUS structural analysis. The results are presented in extensive tables and graphs, and it is shown that NESSUS correctly predicts the structural effects of changes in the temperature loading. The probabilistic approach also facilitates (1) damage assessments for a given failure model (based on gas temperature, heat-shield gap, and material properties) and (2) correlation of the gas temperature with operational parameters such as engine thrust.
Krejsa, Martin; Janas, Petr; Yilmaz, Işık; Marschalko, Marian; Bouchal, Tomas
2013-01-01
The load-carrying system of each construction should fulfill several conditions that represent reliability criteria in the assessment procedure. The theory of structural reliability determines the probability that a structure retains its required properties. Using this theory, it is possible to apply probabilistic computations based on probability theory and mathematical statistics. Development of these methods has become more and more popular; they are used, in particular, in designs of load-carrying structures with a required level of reliability when at least some input variables in the design are random. The objective of this paper is to indicate the current scope that might be covered by the new method, Direct Optimized Probabilistic Calculation (DOProC), in assessments of the reliability of load-carrying structures. DOProC uses a purely numerical approach without any simulation techniques. This provides more accurate solutions to probabilistic tasks, and, in some cases, such an approach results in considerably faster completion of computations. DOProC can be used to solve a number of probabilistic computations efficiently. A very good sphere of application for DOProC is the assessment of bolt reinforcement in underground and mining workings. For this purpose, a special software application, "Anchor", has been developed. PMID:23935412
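A minimal sketch of DOProC's simulation-free idea: discretize load and resistance into histograms and combine every bin pair directly, summing the joint probability where resistance falls below load. The distributions are illustrative.

```python
import numpy as np

def histogram(center, spread, lo, hi, n=201):
    """Discretize a bell-shaped distribution into (values, probabilities)."""
    v = np.linspace(lo, hi, n)
    p = np.exp(-0.5 * ((v - center) / spread) ** 2)
    return v, p / p.sum()

s_vals, s_prob = histogram(100.0, 12.0, 50.0, 150.0)   # load effect S
r_vals, r_prob = histogram(150.0, 10.0, 100.0, 200.0)  # resistance R

# Direct combination of all bin pairs: no random sampling involved.
fail = (r_vals[:, None] - s_vals[None, :]) < 0.0
p_f = (r_prob[:, None] * s_prob[None, :])[fail].sum()
print(f"Simulation-free failure probability: {p_f:.2e}")
```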
Impact of Uncertainty from Load-Based Reserves and Renewables on Dispatch Costs and Emissions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Bowen; Maroukis, Spencer D.; Lin, Yashen
2016-11-21
Aggregations of controllable loads are considered to be a fast-responding, cost-efficient, and environmentally friendly candidate for power system ancillary services. Unlike conventional service providers, the potential capacity from the aggregation is highly affected by factors like ambient conditions and load usage patterns. Previous work modeled aggregations of controllable loads (such as air conditioners) as thermal batteries, which are capable of providing reserves but with uncertain capacity. A stochastic optimal power flow problem was formulated to manage this uncertainty, as well as uncertainty in renewable generation. In this paper, we explore how the types and levels of uncertainty, generation reserve costs, and controllable load capacity affect the dispatch solution, operational costs, and CO2 emissions. We also compare the results of two methods for solving the stochastic optimization problem, namely the probabilistically robust method and analytical reformulation assuming Gaussian distributions. Case studies are conducted on a modified IEEE 9-bus system with renewables, controllable loads, and congestion. We find that different types and levels of uncertainty have significant impacts on dispatch and emissions. More controllable loads and less conservative solution methodologies lead to lower costs and emissions.
Reliability, Risk and Cost Trade-Offs for Composite Designs
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.
1996-01-01
Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties, including those in constituent material properties, fabrication variables, structure geometry and loading conditions. The probability density function of the first buckling load for a set of uncertain variables is computed. The probabilistic sensitivity factors of the uncertain variables with respect to the first buckling load are calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to the requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of the probabilistic sensitivity factors. With optimum design parameters such as the mean and coefficient of variation (representing the range of scatter) of the uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of the fiber modulus (a design parameter) in the longitudinal direction.
Analysis of flood hazard under consideration of dike breaches
NASA Astrophysics Data System (ADS)
Vorogushyn, S.; Apel, H.; Lindenschmidt, K.-E.; Merz, B.
2009-04-01
The study focuses on the development and application of a new modelling system which allows a comprehensive flood hazard assessment along diked river reaches under consideration of dike failures. The proposed Inundation Hazard Assessment Model (IHAM) represents a hybrid probabilistic-deterministic model. It comprises three models interactively coupled at runtime: (1) a 1D unsteady hydrodynamic model of river channel and floodplain flow between dikes, (2) a probabilistic dike breach model which determines possible dike breach locations, breach widths and breach outflow discharges, and (3) a 2D raster-based diffusion wave storage cell model of the hinterland areas behind the dikes. Due to the unsteady nature of the coupled 1D and 2D models, the dependence between hydraulic load at various locations along the reach is explicitly considered. The probabilistic dike breach model describes dike failures due to three failure mechanisms: overtopping, piping and slope instability caused by the seepage flow through the dike core (micro-instability). Dike failures for each mechanism are simulated based on fragility functions. The probability of breach is conditioned by the uncertainty in geometrical and geotechnical dike parameters. The 2D storage cell model, driven by the breach outflow boundary conditions, computes an extended spectrum of flood intensity indicators such as water depth, flow velocity, impulse, inundation duration and rate of water rise. IHAM is embedded in a Monte Carlo simulation in order to account for the natural variability of the flood generation processes reflected in the form of input hydrographs and for the randomness of dike failures given by breach locations, times and widths. The scenario calculations for the developed synthetic input hydrographs for the main river and tributary were carried out for floods with return periods of T = 100, 200, 500, and 1000 a. Based on the modelling results, probabilistic dike hazard maps could be generated that indicate the failure probability of each discretised dike section for every scenario magnitude. Besides the binary inundation patterns that indicate the probability of raster cells being inundated, IHAM generates probabilistic flood hazard maps. These maps display spatial patterns of the considered flood intensity indicators and their associated return periods. The probabilistic nature of IHAM allows for the generation of percentile flood hazard maps that indicate the median and uncertainty bounds of the flood intensity indicators. The uncertainty results from the natural variability of the flow hydrographs and the randomness of dike breach processes. The same uncertainty sources determine the uncertainty in the flow hydrographs along the study reach. The simulations showed that the dike breach stochasticity has an increasing impact on hydrograph uncertainty in the downstream direction. Whereas in the upstream part of the reach the hydrograph uncertainty is mainly determined by the variability of the flood wave form, the dike failures strongly shape the uncertainty boundaries in the downstream part of the reach. Finally, scenarios of polder deployment for the extreme floods with T = 200, 500, and 1000 a were simulated with IHAM. The results indicate a rather weak reduction of the mean and median flow hydrographs in the river channel.
However, the capping of the flow peaks resulted in a considerable reduction of the overtopping failures downstream of the polder, with a simultaneous slight increase in the piping and slope micro-instability frequencies explained by a more prolonged average impoundment. The developed IHAM simulation system represents a new scientific tool for studying fluvial inundation dynamics under extreme conditions, incorporating the effects of technical flood protection measures. With its major outputs in the form of novel probabilistic inundation and dike hazard maps, the IHAM system has a high practical value for decision support in flood management.
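A toy sketch of the fragility-function mechanism at the heart of the breach model: sample a hydraulic load, evaluate a conditional failure probability, and draw the breach outcome. The Gumbel peaks and fragility parameters are invented placeholders, not IHAM's calibrated curves.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

def overtopping_fragility(water_level, crest=8.0, spread=0.3):
    """Illustrative fragility curve: P(dike failure | hydraulic load),
    a smooth CDF centred near the crest level (placeholder parameters)."""
    return stats.norm.cdf((water_level - crest) / spread)

# Monte Carlo over flood events: sample peak water levels, then decide
# breach / no breach by comparing the fragility with a uniform draw.
levels = rng.gumbel(loc=7.2, scale=0.4, size=100_000)  # synthetic peaks (m)
breached = rng.random(levels.size) < overtopping_fragility(levels)
print(f"Simulated breach frequency: {breached.mean():.3f}")
```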
Probabilistic models for reactive behaviour in heterogeneous condensed phase media
NASA Astrophysics Data System (ADS)
Baer, M. R.; Gartling, D. K.; DesJardin, P. E.
2012-02-01
This work presents statistically-based models to describe reactive behaviour in heterogeneous energetic materials. Mesoscale effects are incorporated in continuum-level reactive flow descriptions using probability density functions (pdfs) that are associated with thermodynamic and mechanical states. A generalised approach is presented that includes multimaterial behaviour by treating the volume fraction as a random kinematic variable. Model simplifications are then sought to reduce the complexity of the description without compromising the statistical approach. Reactive behaviour is first considered for non-deformable media having a random temperature field as an initial state. A pdf transport relationship is derived and an approximate moment approach is incorporated in finite element analysis to model an example application whereby a heated fragment impacts a reactive heterogeneous material which leads to a delayed cook-off event. Modelling is then extended to include deformation effects associated with shock loading of a heterogeneous medium whereby random variables of strain, strain-rate and temperature are considered. A demonstrative mesoscale simulation of a non-ideal explosive is discussed that illustrates the joint statistical nature of the strain and temperature fields during shock loading to motivate the probabilistic approach. This modelling is derived in a Lagrangian framework that can be incorporated in continuum-level shock physics analysis. Future work will consider particle-based methods for a numerical implementation of this modelling approach.
Chance-Constrained AC Optimal Power Flow for Distribution Systems With Renewables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler
This paper focuses on distribution systems featuring renewable energy sources (RESs) and energy storage systems, and presents an AC optimal power flow (OPF) approach to optimize system-level performance objectives while coping with uncertainty in both RES generation and loads. The proposed method hinges on a chance-constrained AC OPF formulation where probabilistic constraints are utilized to enforce voltage regulation with prescribed probability. A computationally more affordable convex reformulation is developed by resorting to suitable linear approximations of the AC power-flow equations as well as convex approximations of the chance constraints. The approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive strategy is then obtained by embedding the proposed AC OPF task into a model predictive control framework. Finally, a distributed solver is developed to strategically distribute the solution of the optimization problems across utility and customers.
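A worked sketch of the two ways a scalar voltage chance constraint Pr(V <= V_max) >= 1 - eps can be tightened when V is a nominal value plus a zero-mean error of known standard deviation: a Gaussian quantile margin versus a distribution-free (Chebyshev-type) margin of the kind that yields bounds valid for arbitrary error distributions. Values are illustrative.

```python
import numpy as np
from scipy import stats

# Voltage magnitude ~ nominal + zero-mean forecast error with std sigma.
v_nominal, sigma, v_max, eps = 1.03, 0.01, 1.05, 0.05

# Gaussian reformulation: tighten the limit by a normal quantile.
margin_gauss = stats.norm.ppf(1.0 - eps) * sigma

# Distribution-free margin (one-sided Chebyshev): valid for any error
# distribution with this mean and variance, hence more conservative.
margin_robust = np.sqrt((1.0 - eps) / eps) * sigma

print(f"Gaussian margin: {margin_gauss:.4f} p.u., "
      f"robust margin: {margin_robust:.4f} p.u.")
print("nominal + Gaussian margin within limit:",
      v_nominal + margin_gauss <= v_max)
```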
MrLavaLoba: A new probabilistic model for the simulation of lava flows as a settling process
NASA Astrophysics Data System (ADS)
de'Michieli Vitturi, Mattia; Tarquini, Simone
2018-01-01
A new code to simulate lava flow spread, MrLavaLoba, is presented. In the code, erupted lava is itemized in parcels having an elliptical shape and a prescribed volume. New parcels bud from existing ones according to a probabilistic law influenced by the local steepest-slope direction and by tunable input settings. MrLavaLoba belongs among the probabilistic codes for the simulation of lava flows, because it is not intended to mimic the actual process of flowing or to provide directly the progression of the flow field with time, but rather to estimate the most probable inundated area and the final thickness of the lava deposit. The code's flexibility allows it to produce variable lava flow spread and emplacement according to different dynamics (e.g. pahoehoe or channelized-'a'ā). For a given scenario, it is shown that model outputs converge, in probabilistic terms, towards a single solution. The code is applied to real cases in Hawaii and at Mt. Etna, and the resulting maps are shown. The model is written in Python and the source code is available at http://demichie.github.io/MrLavaLoba/.
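A toy re-creation of the budding law's flavour, not the actual MrLavaLoba algorithm (which is available at the URL above): each new parcel direction is drawn from a circular distribution biased toward the local steepest-descent azimuth, with a tunable concentration playing the role of the code's input settings.

```python
import numpy as np

rng = np.random.default_rng(5)

def bud_direction(steepest_azimuth, concentration=4.0):
    """Sample a budding azimuth (radians) biased toward steepest descent;
    larger concentration = stronger slope control (toy stand-in)."""
    return rng.vonmises(mu=steepest_azimuth, kappa=concentration)

# Grow a chain of parcel centres downslope from a vent at the origin,
# on an idealized slope dipping toward the -y ("south") direction.
position, azimuth, step = np.zeros(2), -np.pi / 2.0, 1.0
centres = [position.copy()]
for _ in range(20):
    theta = bud_direction(azimuth)
    position = position + step * np.array([np.cos(theta), np.sin(theta)])
    centres.append(position.copy())

print(np.round(np.array(centres[:5]), 2))
```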
Use of model calibration to achieve high accuracy in analysis of computer networks
Frogner, Bjorn; Guarro, Sergio; Scharf, Guy
2004-05-11
A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.
A probabilistic damage model of stress-induced permeability anisotropy during cataclastic flow
NASA Astrophysics Data System (ADS)
Zhu, Wenlu; Montési, Laurent G. J.; Wong, Teng-Fong
2007-10-01
A fundamental understanding of the effect of stress on permeability evolution is important for many fault mechanics and reservoir engineering problems. Recent laboratory measurements demonstrate that in the cataclastic flow regime, the stress-induced anisotropic reduction of permeability in porous rocks can be separated into three stages. In the elastic regime (stage I), permeability and porosity reduction are solely controlled by the effective mean stress, with negligible permeability anisotropy. Stage II starts at the onset of shear-enhanced compaction, when a critical yield stress is attained. In stage II, the deviatoric stress exerts primary control over permeability and porosity evolution. The increase in deviatoric stress results in drastic permeability and porosity reduction and considerable permeability anisotropy. The transition from stage II to stage III takes place progressively during the development of pervasive cataclastic flow. In stage III, permeability and porosity reduction becomes gradual again, and permeability anisotropy diminishes. Microstructural observations on deformed samples using laser confocal microscopy reveal that stress-induced microcracking and pore collapse are the primary forms of damage during cataclastic flow. A probabilistic damage model is formulated to characterize the effects of stress on permeability and its anisotropy. In our model, the effects of both effective mean stress and differential stress on permeability evolution are calculated. By introducing stress sensitivity coefficients, we propose a first-order description of the dependence of permeability evolution on different loading paths. Built upon the micromechanisms of deformation in porous rocks, this unified model provides new insight into the coupling of stress and permeability.
A probabilistic model for the persistence of early planar fabrics in polydeformed pelitic schists
Ferguson, C.C.
1984-01-01
Although early planar fabrics are commonly preserved within microlithons in low-grade pelites, in higher-grade (amphibolite facies) pelitic schists fabric regeneration often appears complete. Evidence for early fabrics may be preserved within porphyroblasts but, within the matrix, later deformation often appears to totally obliterate or reorient earlier fabrics. However, examination of several hundred Dalradian pelites from Connemara, western Ireland, reveals that preservation of early fabrics is by no means uncommon; relict matrix domains, although volumetrically insignificant, are remarkably persistent even when inferred later strains are very large and fabric regeneration appears, at first sight, complete. Deterministic plasticity theories are ill-suited to the analysis of such an inhomogeneous material response, and a probabilistic model is proposed instead. It assumes that ductile polycrystal deformation is controlled by elementary flow units which can be activated once their associated stress barrier is overcome. Bulk flow propensity is related to the proportion of simultaneous activations, and a measure of this is derived from the probabilistic interaction between a stress-barrier spectrum and an internal stress spectrum (the latter determined by the external loading and the details of internal stress transfer). The spectra are modelled as Gaussian distributions although the treatment is very general and could be adapted for other distributions. Using the time rate of change of activation probability it is predicted that, initially, fabric development will be rapid but will then slow down dramatically even though stress increases at a constant rate. This highly non-linear response suggests that early fabrics persist because they comprise unfavourable distributions of stress-barriers which remain unregenerated at the time bulk stress is stabilized by steady-state flow. Relict domains will, however, bear the highest stress and are potential upper-bound palaeostress estimators. Some factors relevant to the micromechanical explanation of relict matrix domains are discussed. © 1984.
Probabilistic Design and Analysis Framework
NASA Technical Reports Server (NTRS)
Strack, William C.; Nagpal, Vinod K.
2010-01-01
PRODAF is a software package designed to aid analysts and designers in conducting probabilistic analysis of components and systems. PRODAF can integrate multiple analysis programs to ease the tedious process of conducting a complex analysis that requires the use of multiple software packages. The work uses a commercial finite element analysis (FEA) program with modules from NESSUS to conduct a probabilistic analysis of a hypothetical turbine blade, disk, and shaft model. PRODAF applies the response surface method at the component level and extrapolates the component-level responses to the system level. Hypothetical components of a gas turbine engine are first deterministically modeled using FEA. Variations in selected geometric dimensions and loading conditions are analyzed to determine their effects on the stress state within each component. Geometric variations include the chord length and height for the blade, and the inner radius, outer radius, and thickness for the disk. Probabilistic analysis is carried out using developing software packages such as System Uncertainty Analysis (SUA) and PRODAF. PRODAF was used with a commercial deterministic FEA program in conjunction with modules from the probabilistic analysis program NESTEM to perturb loads and geometries to provide a reliability and sensitivity analysis. PRODAF simplified the handling of data among the various programs involved, and will work with many commercial and open-source deterministic programs, probabilistic programs, or modules.
This paper presents a probabilistic framework for the assessment of groundwater pollution potential by pesticides in two adjacent agricultural watersheds in the Mid-Atlantic Coastal Plain. Indices for estimating stream vulnerability to pollutant loads from the surficial aquifer...
Probabilistically modeling lava flows with MOLASSES
NASA Astrophysics Data System (ADS)
Richardson, J. A.; Connor, L.; Connor, C.; Gallant, E.
2017-12-01
Modeling lava flows through Cellular Automata methods provides a computationally inexpensive means of quickly forecasting lava flow paths and ultimate areal extents. We have developed a lava flow simulator, MOLASSES, that forecasts lava flow inundation over an elevation model from a point-source eruption. This modular code can be implemented in a deterministic fashion with given user inputs that will produce a single lava flow simulation. MOLASSES can also be implemented in a probabilistic fashion where given user inputs define parameter distributions that are randomly sampled to create many lava flow simulations. This probabilistic approach enables uncertainty in input data to be expressed in the model results, and MOLASSES outputs a probability map of inundation instead of a single determined lava flow extent. Since the code is comparatively fast, we use it probabilistically to investigate where potential vents are located that may impact specific sites and areas, as well as the unconditional probability of lava flow inundation of sites or areas from any vent. We have validated the MOLASSES code against community-defined benchmark tests and against the real-world lava flows at Tolbachik (2012-2013) and Pico do Fogo (2014-2015). To determine the efficacy of the MOLASSES simulator at accurately and precisely mimicking the inundation area of real flows, we report goodness of fit using both model sensitivity and the positive predictive value, the latter of which is a Bayesian posterior statistic. Model sensitivity is often used in evaluating lava flow simulators, as it describes how much of the lava flow was successfully modeled by the simulation. We argue that the positive predictive value is equally important in determining how good a simulator is, as it describes the percentage of the simulation space that was actually inundated by lava.
Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components
NASA Technical Reports Server (NTRS)
1991-01-01
The fourth year of technical developments on the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) system for Probabilistic Structural Analysis Methods is summarized. The effort focused on the continued expansion of the Probabilistic Finite Element Method (PFEM) code, the implementation of the Probabilistic Boundary Element Method (PBEM), and the implementation of the Probabilistic Approximate Methods (PAppM) code. The principal focus for the PFEM code is the addition of a multilevel structural dynamics capability. The strategy includes probabilistic loads, treatment of material and geometry uncertainty, and full probabilistic variables. Enhancements are included for the Fast Probability Integration (FPI) algorithms, and Monte Carlo simulation is added as an alternative. Work on the expert system and boundary element developments continues. The enhanced capability in the computer codes is validated by applications to a turbine blade and to an oxidizer duct.
Pérez, M A
2012-12-01
Probabilistic analyses allow the effect of uncertainty in system parameters to be determined. In the literature, many researchers have investigated static loading effects on dental implants. However, the intrinsic variability and uncertainty of most of the main problem parameters are not accounted for. The objective of this research was to apply a probabilistic computational approach to predict the fatigue life of three different commercial dental implants, considering the variability and uncertainty in their fatigue material properties and loading conditions. For one of the commercial dental implants, the influence of its diameter on fatigue life performance was also studied. This stochastic technique was based on the combination of a probabilistic finite element method (PFEM) and a cumulative damage approach known as the B-model. After 6 million loading cycles, local failure probabilities of 0.3, 0.4 and 0.91 were predicted for the Lifecore, Avinent and GMI implants, respectively (diameter of 3.75 mm). The influence of the diameter for the GMI implant was studied and the results predicted local failure probabilities of 0.91 and 0.1 for the 3.75 mm and 5 mm diameters, respectively. In all cases the highest failure probability was located at the upper screw-threads. Therefore, the probabilistic methodology proposed herein may be a useful tool for performing a qualitative comparison between different commercial dental implants. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Coupled Multi-Disciplinary Optimization for Structural Reliability and Affordability
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2003-01-01
A computational simulation method is presented for Non-Deterministic Multidisciplinary Optimization of engine composite materials and structures. A hypothetical engine duct made with ceramic matrix composites (CMC) is evaluated probabilistically in the presence of combined thermo-mechanical loading. The structure is tailored by quantifying the uncertainties in all relevant design variables such as fabrication, material, and loading parameters. The probabilistic sensitivities are used to select critical design variables for optimization. In this paper, two approaches for non-deterministic optimization are presented. The non-deterministic minimization of a combined failure stress criterion is carried out by: (1) performing the probabilistic evaluation first and then the optimization, and (2) performing the optimization first and then the probabilistic evaluation. The first approach shows that the optimization feasible region can be bounded by a set of prescribed probability limits and that the optimization follows the cumulative distribution function between those limits. The second approach shows that the optimization feasible region is bounded by the 0.50 and 0.999 probabilities.
Probabilistic Assessment of National Wind Tunnel
NASA Technical Reports Server (NTRS)
Shah, A. R.; Shiao, M.; Chamis, C. C.
1996-01-01
A preliminary probabilistic structural assessment of the critical section of the National Wind Tunnel (NWT) is performed using the NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) computer code, thereby demonstrating the code's capability to address reliability issues of the NWT. Uncertainties in the geometry, material properties, loads and stiffener location of the NWT are considered in the reliability assessment. Probabilistic stress, frequency, buckling, fatigue and proof-load analyses are performed. These analyses cover the major global and some local design requirements. Based on the assumed uncertainties, the results indicate a minimum reliability of 0.999 for the NWT. Preliminary life-prediction results show that the life of the NWT is governed by the fatigue of welds. A reliability-based proof-test assessment is also performed.
Probabilistic Dynamic Buckling of Smart Composite Shells
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2003-01-01
A computational simulation method is presented to evaluate the deterministic and nondeterministic dynamic buckling of smart composite shells. The combined use of composite mechanics, finite element computer codes, and probabilistic analysis enables the effective assessment of the dynamic buckling load of smart composite shells. A universal plot is generated to estimate the dynamic buckling load of composite shells at various load rates and probabilities. The shell structure is also evaluated with smart fibers embedded in the plies right below the outer plies. The results show that, on average, the use of smart fibers improved the shell buckling resistance by about 10 percent at different probabilities and delayed the buckling occurrence time. The probabilistic sensitivity results indicate that uncertainties in the fiber volume ratio and ply thickness have major effects on the buckling load, while uncertainties in the electric field strength and smart material volume fraction have moderate effects. For the specific shell considered in this evaluation, the use of smart composite material is not recommended because the shell buckling resistance can be improved by simply rearranging the orientation of the outer plies, as shown in the dynamic buckling analysis results presented in this report.
Probabilistic Dynamic Buckling of Smart Composite Shells
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2007-01-01
A computational simulation method is presented to evaluate the deterministic and nondeterministic dynamic buckling of smart composite shells. The combined use of intraply hybrid composite mechanics, finite element computer codes, and probabilistic analysis enables the effective assessment of the dynamic buckling load of smart composite shells. A universal plot is generated to estimate the dynamic buckling load of composite shells at various load rates and probabilities. The shell structure is also evaluated with smart fibers embedded in the plies right next to the outer plies. The results show that, on average, the use of smart fibers improved the shell buckling resistance by about 10% at different probabilities and delayed the buckling occurrence time. The probabilistic sensitivity results indicate that uncertainties in the fiber volume ratio and ply thickness have major effects on the buckling load, while uncertainties in the electric field strength and smart material volume fraction have moderate effects. For the specific shell considered in this evaluation, the use of smart composite material is not recommended because the shell buckling resistance can be improved by simply rearranging the orientation of the outer plies, as shown in the dynamic buckling analysis results presented in this report.
Recent developments of the NESSUS probabilistic structural analysis computer program
NASA Technical Reports Server (NTRS)
Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.
1992-01-01
The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available, such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability, and risk analysis of structures considering the cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.
Probabilistic Structural Evaluation of Uncertainties in Radiator Sandwich Panel Design
NASA Technical Reports Server (NTRS)
Kuguoglu, Latife; Ludwiczak, Damian
2006-01-01
The Jupiter Icy Moons Orbiter (JIMO) Space System is part of NASA's Prometheus Program. As part of the JIMO engineering team at NASA Glenn Research Center, the structural design of the JIMO Heat Rejection Subsystem (HRS) is evaluated. An initial goal of this study was to perform sensitivity analyses to determine the relative importance of the input variables on the structural responses of the radiator panel. The desire was to let the sensitivity analysis information identify the important parameters. The probabilistic analysis methods illustrated here support this objective. The probabilistic structural performance evaluation of an HRS radiator sandwich panel was performed. The radiator panel structural performance was assessed in the presence of uncertainties in the loading, fabrication process variables, and material properties. A deterministic structural analysis at the mean values of the random variables was performed and the resulting stress and displacement contours are presented. It is followed by a probabilistic evaluation to determine the effect of the primitive variables on the radiator panel structural performance. Based on uncertainties in material properties, structural geometry and loading, the results of the displacement and stress analysis are used as an input file for the probabilistic analysis of the panel. The sensitivity of the structural responses, such as maximum displacement, maximum tensile and compressive stresses of the facesheet in the x and y directions, and maximum von Mises stresses of the tube, to the loading and design variables is determined under the boundary condition where all edges of the radiator panel are pinned. Based on this study, design-critical material and geometric parameters of the considered sandwich panel are identified.
NASA Astrophysics Data System (ADS)
van der Velde, Y.; Rozemeijer, J. C.; de Rooij, G. H.; van Geer, F. C.; Torfs, P. J. J. F.; de Louw, P. G. B.
2011-03-01
Identifying effective measures to reduce nutrient loads of headwaters in lowland catchments requires a thorough understanding of flow routes of water and nutrients. In this paper we assess the value of nested-scale discharge and groundwater level measurements for the estimation of flow route volumes and for predictions of catchment discharge. In order to relate field-site measurements to the catchment scale, an upscaling approach is introduced that assumes that scale differences in flow route fluxes originate from differences in the relationship between groundwater storage and the spatial structure of the groundwater table. This relationship is characterized by the Groundwater Depth Distribution (GDD) curve that relates spatial variation in groundwater depths to the average groundwater depth. The GDD curve was measured for a single field site (0.009 km2) and simple process descriptions were applied to relate groundwater levels to flow route discharges. This parsimonious model could accurately describe observed storage, tube drain discharge, overland flow and groundwater flow simultaneously, with Nash-Sutcliffe coefficients exceeding 0.8. A probabilistic Monte Carlo approach was applied to upscale field-site measurements to catchment scales by inferring scale-specific GDD curves from the hydrographs of two nested catchments (0.4 and 6.5 km2). The estimated contribution of tube drain effluent (a dominant source of nitrates) decreased with increasing scale, from 76-79% at the field site to 34-61% and 25-50% for the two catchment scales. These results were validated by demonstrating that a model conditioned on nested-scale measurements improves simulations of nitrate loads and predictions of extreme discharges during validation periods, compared to a model conditioned on catchment discharge only.
Elasto-limited plastic analysis of structures for probabilistic conditions
NASA Astrophysics Data System (ADS)
Movahedi Rad, M.
2018-06-01
By applying plastic analysis and design methods, significant savings in material can be obtained. As a result of this benefit, however, excessive plastic deformations and large residual displacements might develop, which in turn might lead to unserviceability and collapse of the structure. In this study, for the deterministic problem, the residual deformation of structures is limited by a constraint on the complementary strain energy of the residual forces. For the probabilistic problem, the bound on the complementary strain energy of the residual forces is treated as random and the critical stresses are updated during the iterations. Limit curves are presented for the plastic limit load factors. The results show that these constraints have significant effects on the load factors. The formulations of the deterministic and probabilistic problems lead to mathematical programming problems, which are solved using a nonlinear algorithm.
Probabilistic Simulation of Combined Thermo-Mechanical Cyclic Fatigue in Composites
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2011-01-01
A methodology to compute the probabilistic combined thermo-mechanical fatigue life of polymer matrix laminated composites has been developed and is demonstrated. Matrix degradation effects caused by long-term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress-dependent multifactor-interaction relationship developed at NASA Glenn Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability-integration method is used to compute the probabilistic distribution of the response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) are computed and their significance in reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/-45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes, has been studied. The results show that, at low mechanical-cyclic loads and low thermal-cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness, whereas at high mechanical-cyclic loads and high thermal-cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of the matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.
Probabilistic Simulation for Combined Cycle Fatigue in Composites
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2010-01-01
A methodology to compute the probabilistic fatigue life of polymer matrix laminated composites has been developed and demonstrated. Matrix degradation effects caused by long-term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress-dependent multifactor-interaction relationship developed at NASA Glenn Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability-integration method is used to compute the probabilistic distribution of the response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) are computed and their significance in reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/- 45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes, has been studied. The results show that, at low mechanical cyclic loads and low thermal cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness, whereas at high mechanical cyclic loads and high thermal cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of the matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.
Probabilistic structural analysis methods of hot engine structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Hopkins, D. A.
1989-01-01
Development of probabilistic structural analysis methods for hot engine structures at Lewis Research Center is presented. Three elements of the research program are: (1) composite load spectra methodology; (2) probabilistic structural analysis methodology; and (3) probabilistic structural analysis application. Recent progress includes: (1) quantification of the effects of uncertainties for several variables on high pressure fuel turbopump (HPFT) turbine blade temperature, pressure, and torque of the space shuttle main engine (SSME); (2) the evaluation of the cumulative distribution function for various structural response variables based on assumed uncertainties in primitive structural variables; and (3) evaluation of the failure probability. Collectively, the results demonstrate that the structural durability of hot engine structural components can be effectively evaluated in a formal probabilistic/reliability framework.
Probabilistic Structural Analysis Methods (PSAM) for Select Space Propulsion System Components
NASA Technical Reports Server (NTRS)
1999-01-01
Probabilistic Structural Analysis Methods (PSAM) are described for the probabilistic structural analysis of engine components for current and future space propulsion systems. Components for these systems are subjected to stochastic thermomechanical launch loads. Uncertainties or randomness also occur in material properties, structural geometry, and boundary conditions. Material property stochasticity, such as in the modulus of elasticity or yield strength, exists in every structure and is a consequence of variations in material composition and manufacturing processes. Procedures are outlined for computing the probabilistic structural response or reliability of the structural components. The response variables include static or dynamic deflections, strains, and stresses at one or several locations, natural frequencies, fatigue or creep life, etc. Sample cases illustrate how the PSAM methods and codes simulate input uncertainties and compute the probabilistic response or reliability using a finite element model with probabilistic methods.
NASA Technical Reports Server (NTRS)
Baskaran, Subbiah; Ramachandran, Narayanan; Noever, David
1998-01-01
The use of probabilistic (PNN) and multilayer feed-forward (MLFNN) neural networks is investigated for the calibration of multi-hole pressure probes and the prediction of associated flow angularity patterns in test flow fields. Both types of networks are studied in detail for their calibration and prediction characteristics. The formalism can be applied to any multi-hole probe; however, test results are reported here only for the most commonly used five-hole Cone and Prism probe types.
NASA Technical Reports Server (NTRS)
McGhee, David S.; Peck, Jeff A.; McDonald, Emmett J.
2012-01-01
This paper examines Probabilistic Sensitivity Analysis (PSA) methods and tools in an effort to understand their utility in vehicle loads and dynamic analysis. Specifically, this study addresses how these methods may be used to establish limits on payload mass and center-of-gravity (cg) location, and requirements on adaptor stiffnesses, while maintaining vehicle loads and frequencies within established bounds. To this end, PSA methods and tools are applied to a realistic, but manageable, integrated launch vehicle analysis where payload and payload adaptor parameters are modeled as random variables. This analysis is used to study both Regional Response PSA (RRPSA) and Global Response PSA (GRPSA) methods, with a primary focus on sampling-based techniques. For contrast, some most-probable-point (MPP) based approaches are also examined.
Long-term strength and damage accumulation in laminates
NASA Astrophysics Data System (ADS)
Dzenis, Yuris A.; Joshi, Shiv P.
1993-04-01
A modified version of the probabilistic model developed by the authors for damage evolution analysis of laminates subjected to random loading is utilized to predict the long-term strength of laminates. The model assumes that each ply in a laminate consists of a large number of mesovolumes. Probabilistic variation functions for mesovolume stiffnesses as well as strengths are used in the analysis. Stochastic strains are calculated using lamination theory and random function theory. Deterioration of ply stiffnesses is calculated on the basis of the probabilities of mesovolume failure, using the theory of excursions of random processes beyond limits. Long-term strength and damage accumulation in a Kevlar/epoxy laminate under tension and complex in-plane loading are investigated. Effects of the mean level and stochastic deviation of loading on damage evolution and time-to-failure of the laminate are discussed. Long-term cumulative damage at the time of final failure is greater at low loading levels than at high loading levels. The effect of the deviation in loading is more pronounced at lower mean loading levels.
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Wing, Kam Liu
1987-01-01
In the Probabilistic Finite Element Method (PFEM), finite element methods have been efficiently combined with second-order perturbation techniques to provide an effective method for informing the designer of the range of response that is likely in a given problem. The designer must provide as input the statistical character of the input variables, such as yield strength, load magnitude, and Young's modulus, by specifying their mean values and their variances. The output then consists of the mean response and the variance in the response. Thus the designer is given a much broader picture of the predicted performance than with a single response curve. These methods are applicable to a wide class of problems, provided that the scale of randomness is not too large and the probability density functions possess decaying tails. By incorporating the computational techniques developed over the past three years for efficiency, the probabilistic finite element method is capable of handling large systems with many sources of uncertainty. Sample results are given for an elastic-plastic ten-bar structure and an elastic-plastic plane continuum with a circular hole subjected to cyclic loading, with the yield stress modeled as a random field.
NASA Astrophysics Data System (ADS)
Wei, Y.; Thomas, S.; Zhou, H.; Arcas, D.; Titov, V. V.
2017-12-01
Increasing tsunami hazards pose great challenges for infrastructure along the coastlines of the U.S. Pacific Northwest. Tsunami impact at a coastal site is usually assessed from deterministic scenarios based on 10,000 years of geological records in the Cascadia Subduction Zone (CSZ). Aside from these deterministic methods, the new ASCE 7-16 tsunami provisions provide engineering design criteria for tsunami loads on buildings based on a probabilistic approach. This work develops a site-specific model near Newport, OR using high-resolution grids, and computes tsunami inundation depths and velocities at the study site resulting from credible probabilistic and deterministic earthquake sources in the Cascadia Subduction Zone. Three Cascadia scenarios, two deterministic (XXL1 and L1) and a 2,500-yr probabilistic scenario compliant with the new ASCE 7-16 standard, are simulated using a combination of a depth-averaged shallow water model for offshore propagation and a Boussinesq-type model for onshore inundation. We discuss the methods and procedure to obtain the 2,500-year probabilistic scenario for Newport that is compliant with the ASCE 7-16 tsunami provisions. We provide details of the model results, particularly the inundation depth and flow speed for a new building at Newport, Oregon, which will also be designated as a tsunami vertical evacuation shelter. We show that the ASCE 7-16 consistent hazards lie between those obtained from the deterministic L1 and XXL1 scenarios, and that the greatest impact on the building may come from later waves. As a further step, we use the inundation model results to numerically compute tracks of large vessels in the vicinity of the building site and estimate whether these vessels would impact the building site during the extreme XXL1 and ASCE 7-16 hazard-consistent scenarios. A two-step study is carried out, first tracking massless particles and then large vessels with assigned mass, considering drag force, inertial force, ship grounding and mooring. The simulation results show that none of the large vessels impact the building site in any of the tested scenarios.
Probabilistic Analysis of a Composite Crew Module
NASA Technical Reports Server (NTRS)
Mason, Brian H.; Krishnamurthy, Thiagarajan
2011-01-01
An approach for conducting reliability-based analysis (RBA) of a Composite Crew Module (CCM) is presented. The goal is to identify and quantify the benefits of probabilistic design methods for the CCM and future space vehicles. The coarse finite element model from a previous NASA Engineering and Safety Center (NESC) project is used as the baseline deterministic analysis model to evaluate the performance of the CCM using a strength-based failure index. The first step in the probabilistic analysis process is the determination of the uncertainty distributions for key parameters in the model. Analytical data from water landing simulations are used to develop an uncertainty distribution, but such data were unavailable for other load cases. The uncertainty distributions for the other load scale factors and the strength allowables are generated based on assumed coefficients of variation. Probability of first-ply failure is estimated using three methods: the first-order reliability method (FORM), Monte Carlo simulation, and conditional sampling. Results for the three methods were consistent. The reliability is shown to be driven by first-ply failure in one region of the CCM at the high-altitude abort load set. The final predicted probability of failure is on the order of 10^-11 due to the conservative nature of the factors of safety on the deterministic loads.
Design for cyclic loading endurance of composites
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Murthy, Pappu L. N.; Chamis, Christos C.; Liaw, Leslie D. G.
1993-01-01
The application of the computer code IPACS (Integrated Probabilistic Assessment of Composite Structures) to aircraft wing-type structures is described. The code performs a complete probabilistic analysis for composites, taking into account the uncertainties in geometry, boundary conditions, material properties, laminate lay-ups, and loads. Results of the analysis are presented in terms of the cumulative distribution function (CDF) and probability density function (PDF) of the fatigue life of a wing-type composite structure under different hygrothermal environments subjected to random pressure loading. The sensitivity of the fatigue life to a number of critical structural/material variables is also computed from the analysis.
Assuring Life in Composite Systems
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2008-01-01
A computational simulation method is presented to assure life in composite systems, using the dynamic buckling of smart composite shells as an example. The combined use of composite mechanics, finite element computer codes, and probabilistic analysis enables the effective assessment of the dynamic buckling load of smart composite shells. A universal plot is generated to estimate the dynamic buckling load of composite shells at various load rates and probabilities. The shell structure is also evaluated with smart fibers embedded in the plies right below the outer plies. The results show that, on average, the use of smart fibers improved the shell buckling resistance by about 9% at different probabilities and delayed the buckling occurrence time. The probabilistic sensitivity results indicate that uncertainties in the fiber volume ratio and ply thickness have major effects on the buckling load, while uncertainties in the electric field strength and smart material volume fraction have moderate effects on the buckling load and thereby on the assured life of the shell.
Reliability and risk assessment of structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1991-01-01
Development of reliability and risk assessment of structural components and structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) the evaluation of the various uncertainties in terms of cumulative distribution functions for various structural response variables based on known or assumed uncertainties in primitive structural variables; (2) evaluation of the failure probability; (3) reliability and risk-cost assessment; and (4) an outline of an emerging approach for eventual certification of man-rated structures by computational methods. Collectively, the results demonstrate that the structural durability/reliability of man-rated structural components and structures can be effectively evaluated by using formal probabilistic methods.
Probabilistic simulation of uncertainties in thermal structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Shiao, Michael
1990-01-01
Development of probabilistic structural analysis methods for hot structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) quantification of the effects of uncertainties for several variables on high pressure fuel turbopump (HPFT) blade temperature, pressure, and torque of the Space Shuttle Main Engine (SSME); (2) the evaluation of the cumulative distribution function for various structural response variables based on assumed uncertainties in primitive structural variables; (3) evaluation of the failure probability; (4) reliability and risk-cost assessment, and (5) an outline of an emerging approach for eventual hot structures certification. Collectively, the results demonstrate that the structural durability/reliability of hot structural components can be effectively evaluated in a formal probabilistic framework. In addition, the approach can be readily extended to computationally simulate certification of hot structures for aerospace environments.
Probabilistic Fatigue Life Updating for Railway Bridges Based on Local Inspection and Repair.
Lee, Young-Joo; Kim, Robin E; Suh, Wonho; Park, Kiwon
2017-04-24
Railway bridges are exposed to repeated train loads, which may cause fatigue failure. As critical links in a transportation network, railway bridges are expected to survive for a target period of time, but sometimes they fail earlier than expected. To guarantee the target bridge life, bridge maintenance activities such as local inspection and repair should be undertaken properly. However, this is a challenging task because there are various sources of uncertainty associated with aging bridges, train loads, environmental conditions, and maintenance work. Therefore, to perform optimal risk-based maintenance of railway bridges, it is essential to estimate the probabilistic fatigue life of a railway bridge and update the life information based on the results of local inspections and repair. Recently, a system reliability approach was proposed to evaluate the fatigue failure risk of structural systems and update the prior risk information in various inspection scenarios. However, this approach can handle only a constant-amplitude load and has limitations in considering a cyclic load with varying amplitude levels, which is the major loading pattern generated by train traffic. In addition, it is not feasible to update the prior risk information after bridges are repaired. In this research, the system reliability approach is further developed so that it can handle a varying-amplitude load and update the system-level risk of fatigue failure for railway bridges after inspection and repair. The proposed method is applied to a numerical example of an in-service railway bridge, and the effects of inspection and repair on the probabilistic fatigue life are discussed.
NASA Astrophysics Data System (ADS)
Velazquez, Antonio; Swartz, R. Andrew
2012-04-01
Wind energy is an increasingly important component of this nation's renewable energy portfolio; however, safe and economical wind turbine operation is critical to ensure continued adoption. Safe operation of wind turbine structures requires not only information regarding their condition, but also their operational environment. Given the difficulty inherent in structural health monitoring (SHM) processes for wind turbines (damage detection, location, and characterization), some uncertainty in condition assessment is expected. Furthermore, given the stochastic nature of the loading on turbine structures, a probabilistic framework is appropriate to characterize their risk of failure at a given time. Such information will be invaluable to turbine controllers, allowing them to operate the structures within acceptable risk profiles. This study explores the characterization of the turbine loading and response envelopes for critical failure modes of the turbine blade structures. A framework is presented to develop an analytical estimation of the loading environment (including loading effects) based on the dynamic behavior of the blades. This behavior is influenced by along-wind and across-wind aero-elastic effects, wind shear gradient, tower shadow effects, and centrifugal stiffening effects. The proposed solution includes methods that are based on modal decomposition of the blades and requires frequent updates to the estimated modal properties to account for the time-varying nature of the turbine and its environment. The estimated demand statistics are compared to a code-based resistance curve to determine a probabilistic estimate of the risk of blade failure given the loading environment.
Probabilistic evaluation of fuselage-type composite structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1992-01-01
A methodology is developed to computationally simulate the uncertain behavior of composite structures. The uncertain behavior includes buckling loads, natural frequencies, displacements, stress/strain, etc., which are the consequences of the random variation (scatter) of the primitive (independent random) variables at the constituent, ply, laminate and structural levels. This methodology is implemented in the IPACS (Integrated Probabilistic Assessment of Composite Structures) computer code. A fuselage-type composite structure is analyzed to demonstrate the code's capability. The probability distribution functions of the buckling loads, natural frequencies, displacements, strains and stresses are computed. The sensitivity of each primitive (independent random) variable to a given structural response is also identified from the analyses.
NASA Technical Reports Server (NTRS)
Onwubiko, Chin-Yere; Onyebueke, Landon
1996-01-01
The structural design, or the design of machine elements, has traditionally been based on deterministic design methodology. The deterministic method considers all design parameters to be known with certainty. This methodology is, therefore, inadequate for designing complex structures that are subjected to a variety of complex, severe loading conditions. A nonlinear behavior that is dependent on stress, stress rate, temperature, number of load cycles, and time is observed in all components subjected to complex conditions. These complex conditions introduce uncertainties; hence, the actual factor-of-safety margin remains unknown. In the deterministic methodology, the contingency of failure is discounted; hence, a high factor of safety is used. It may be most useful in situations where the design structures are simple. The probabilistic method is concerned with the probability of non-failure performance of structures or machine elements. It is much more useful in situations where the design is characterized by complex geometry, possibility of catastrophic failure, or sensitive loads and material properties. Also included: a comparative study of the use of AGMA geometry factors and probabilistic design methodology in the design of a compact spur gear set.
NASA Technical Reports Server (NTRS)
Cruse, T. A.
1987-01-01
The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Burnside, O. H.; Wu, Y.-T.; Polch, E. Z.; Dias, J. B.
1988-01-01
The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.
Probabilistic assessment of uncertain adaptive hybrid composites
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.
1994-01-01
Adaptive composite structures using actuation materials, such as piezoelectric fibers, were assessed probabilistically utilizing intraply hybrid composite mechanics in conjunction with probabilistic composite structural analysis. Uncertainties associated with the actuation material as well as the uncertainties in the regular (traditional) composite material properties were quantified and considered in the assessment. Static and buckling analyses were performed for rectangular panels with various boundary conditions and different control arrangements. The probability density functions of the structural behavior, such as maximum displacement and critical buckling load, were computationally simulated. The results of the assessment indicate that improved design and reliability can be achieved with actuation material.
Probabilistic safety assessment of the design of a tall building under extreme load
DOE Office of Scientific and Technical Information (OSTI.GOV)
Králik, Juraj, E-mail: juraj.kralik@stuba.sk
2016-06-08
The paper describes experiences from the deterministic and probabilistic analysis of the safety of a tall building structure. The methods and requirements of Eurocode EN 1990, standard ISO 2394 and the JCSS are presented. The uncertainties of the model and of the resistance of the structure are considered using simulation methods. The Monte Carlo, LHS and RSM probabilistic methods are compared with the deterministic results. The example of the probabilistic safety analysis of a tall building demonstrates the effectiveness of probability-based design of structures using finite element methods.
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A probabilistic structural analysis method (PSAM) is described which makes an approximate calculation of the structural response of a system, including the associated probabilistic distributions, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The method employs the fast probability integration (FPI) algorithm of Wu and Wirsching. Typical solution strategies are illustrated by formulations for a representative critical component chosen from the Space Shuttle Main Engine (SSME) as part of a major NASA-sponsored program on PSAM. Typical results are presented to demonstrate the role of the methodology in engineering design and analysis.
Probabilistic analysis of structures involving random stress-strain behavior
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Thacker, B. H.; Harren, S. V.
1991-01-01
The present methodology for analysis of structures with random stress-strain behavior characterizes the uniaxial stress-strain curve in terms of (1) elastic modulus, (2) engineering stress at initial yield, (3) initial plastic-hardening slope, (4) engineering stress at the point of ultimate load, and (5) engineering strain at the point of ultimate load. The methodology is incorporated into the Numerical Evaluation of Stochastic Structures Under Stress code for probabilistic structural analysis. The illustrative problem of a thick cylinder under internal pressure, where both the internal pressure and the stress-strain curve are random, is addressed by means of the code. The response value is the cumulative distribution function of the equivalent plastic strain at the inner radius.
Probabilistic Sizing and Verification of Space Ceramic Structures
NASA Astrophysics Data System (ADS)
Denaux, David; Ballhause, Dirk; Logut, Daniel; Lucarelli, Stefano; Coe, Graham; Laine, Benoit
2012-07-01
Sizing of ceramic parts is best optimised using a probabilistic approach which takes into account the preexisting flaw distribution in the ceramic part to compute a probability of failure of the part depending on the applied load, instead of a maximum allowable load as for a metallic part. This requires extensive knowledge of the material itself but also an accurate control of the manufacturing process. In the end, risk reduction approaches such as proof testing may be used to lower the final probability of failure of the part. Sizing and verification of ceramic space structures have been performed by Astrium for more than 15 years, both with Zerodur and SiC: Silex telescope structure, Seviri primary mirror, Herschel telescope, Formosat-2 instrument, and other ceramic structures flying today. Throughout this period of time, Astrium has investigated and developed experimental ceramic analysis tools based on the Weibull probabilistic approach. In the scope of the ESA/ESTEC study: “Mechanical Design and Verification Methodologies for Ceramic Structures”, which is to be concluded in the beginning of 2012, existing theories, technical state-of-the-art from international experts, and Astrium experience with probabilistic analysis tools have been synthesized into a comprehensive sizing and verification method for ceramics. Both classical deterministic and more optimised probabilistic methods are available, depending on the criticality of the item and on optimisation needs. The methodology, based on proven theory, has been successfully applied to demonstration cases and has shown its practical feasibility.
CARES/Life Software for Designing More Reliable Ceramic Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Powers, Lynn M.; Baker, Eric H.
1997-01-01
Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software eases this task by providing a tool to optimize the design and manufacture of brittle material components using probabilistic reliability analysis techniques. Probabilistic component design involves predicting the probability of failure for a thermomechanically loaded component from specimen rupture data. Typically, these experiments are performed using many simple-geometry flexural or tensile test specimens. A static, dynamic, or cyclic load is applied to each specimen until fracture. Statistical strength and SCG (fatigue) parameters are then determined from these data. Using these parameters and the results obtained from a finite element analysis, the time-dependent reliability for a complex component geometry and loading is then predicted. Appropriate design changes are made until an acceptable probability of failure has been reached.
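CARES/Life-style volume-flaw reliability rests on Weibull weakest-link statistics applied to finite element stress results. The sketch below shows only that core calculation on invented element data; the function name, parameter values, and units are assumptions, not the software's actual interface.

```python
import numpy as np

def weakest_link_pof(stresses, volumes, m, sigma_0):
    """Weakest-link failure probability for a brittle component.

    stresses: element principal stresses (MPa); volumes: element volumes (mm^3);
    m: Weibull modulus; sigma_0: characteristic strength (MPa * mm^(3/m)).
    """
    stresses = np.maximum(stresses, 0.0)      # compressive stresses ignored here
    risk = np.sum(volumes * (stresses / sigma_0) ** m)
    return 1.0 - np.exp(-risk)

# Hypothetical element results from a finite element analysis.
sig = np.array([180.0, 220.0, 150.0, 240.0])
vol = np.array([12.0, 8.0, 20.0, 5.0])
print(f"P_f = {weakest_link_pof(sig, vol, m=10.0, sigma_0=300.0):.3e}")
```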
Structural reliability methods: Code development status
NASA Astrophysics Data System (ADS)
Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.
1991-05-01
The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
Risch, Eva; Gasperi, Johnny; Gromaire, Marie-Christine; Chebbo, Ghassan; Azimi, Sam; Rocher, Vincent; Roux, Philippe; Rosenbaum, Ralph K; Sinfort, Carole
2018-01-01
Sewage systems are a vital part of the urban infrastructure in most cities. They provide drainage, which protects public health, prevents the flooding of property and protects the water environment around urban areas. On some occasions sewers will overflow into the water environment during heavy rain, potentially causing unacceptable impacts from releases of untreated sewage into the environment. In typical Life Cycle Assessment (LCA) studies of urban wastewater systems (UWS), average dry-weather conditions are modelled, while wet-weather flows from UWS, presenting a high temporal variability, are not currently accounted for. In this context, the loads from several storm events could be important contributors to the impact categories freshwater eutrophication and ecotoxicity. In this study we investigated the contributions of these wet-weather-induced discharges relative to average dry-weather conditions in the life cycle inventory for UWS. In collaboration with the Paris public sanitation service (SIAAP) and Observatory of Urban Pollutants (OPUR) program researchers, this work aimed at identifying and comparing contributing flows from the UWS in the Paris area by a selection of routine wastewater parameters and priority pollutants. The collected data are organized according to archetypal weather days during a reference year. Then, for each archetypal weather day and its associated flows to the receiving river waters (Seine), the parameters of pollutant loads (statistical distribution of concentrations and volumes) were determined. The resulting inventory flows (i.e. the potential loads from the UWS) were used as LCA input data to assess the associated impacts. This allowed investigating the relative importance of episodic wet-weather versus "continuous" dry-weather loads with a probabilistic approach to account for pollutant variability within the urban flows. The analysis at the scale of one year showed that storm events are significant contributors to the impacts of freshwater eutrophication and ecotoxicity compared to those arising from treated effluents. At the rain event scale the wet-weather contributions to these impacts are even more significant, accounting for example for up to 62% of the total impact on freshwater ecotoxicity. This also allowed investigating and discussing the ecotoxicity contribution of each class of pollutants among the broad range of inventoried substances. Finally, with such significant contributions of pollutant loads and associated impacts from wet-weather events, further research is required to better include temporally-differentiated emissions when evaluating eutrophication and ecotoxicity. This will provide a better understanding of how the performance of a UWS affects the receiving environment for given local weather conditions.
Probabilistic Structural Analysis Program
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.
2010-01-01
NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and life-prediction methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available, such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.
Probabilistic evaluation of SSME structural components
NASA Astrophysics Data System (ADS)
Rajagopal, K. R.; Newell, J. F.; Ho, H.
1991-05-01
The application is described of Composite Load Spectra (CLS) and Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) family of computer codes to the probabilistic structural analysis of four Space Shuttle Main Engine (SSME) space propulsion system components. These components are subjected to environments that are influenced by many random variables. The applications consider a wide breadth of uncertainties encountered in practice, while simultaneously covering a wide area of structural mechanics. This has been done consistent with the primary design requirement for each component. The probabilistic application studies are discussed using finite element models that have been typically used in the past in deterministic analysis studies.
Moving Aerospace Structural Design Practice to a Load and Resistance Factor Approach
NASA Technical Reports Server (NTRS)
Larsen, Curtis E.; Raju, Ivatury S.
2016-01-01
Aerospace structures are traditionally designed using the factor of safety (FOS) approach. The limit load on the structure is determined and the structure is then designed for FOS times the limit load - the ultimate load. Probabilistic approaches utilize distributions for loads and strengths. Failures are predicted to occur in the region of intersection of the two distributions. The load and resistance factor design (LRFD) approach judiciously combines these two approaches by intensive calibration studies on loads and strength to result in structures that are efficient and reliable. This paper discusses these three approaches.
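When both load and strength are idealized as normal random variables, the overlap-of-distributions failure probability mentioned above has a closed form through the reliability index. A sketch under that assumption, with invented distribution parameters:

```python
from scipy.stats import norm

# Hypothetical normal resistance (R) and load effect (S) distributions.
mu_R, sd_R = 500.0, 40.0    # strength, MPa
mu_S, sd_S = 320.0, 50.0    # load effect, MPa

# Failure occurs where the distributions overlap: P_f = P(R - S < 0).
beta = (mu_R - mu_S) / (sd_R**2 + sd_S**2) ** 0.5   # reliability index
p_f = norm.cdf(-beta)
print(f"beta = {beta:.2f}, P_f = {p_f:.2e}")
# An LRFD calibration would choose load and resistance factors so that
# factored design checks imply roughly a target beta of this kind.
```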
Methods for Combining Payload Parameter Variations with Input Environment
NASA Technical Reports Server (NTRS)
Merchant, D. H.; Straayer, J. W.
1975-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the methods are also presented.
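The extreme-value idea can be illustrated by taking per-mission maxima over simulated load histories and reading a design limit load off the resulting distribution. The load model, event counts, and percentile below are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(42)
n_missions, n_events = 10_000, 500

# Hypothetical per-event loads from a time-domain simulation (kN).
events = rng.normal(50.0, 8.0, size=(n_missions, n_events))
mission_max = events.max(axis=1)     # random "limit load", one per mission

# Design limit load chosen as a high percentile of the limit-load distribution.
design_limit = np.percentile(mission_max, 99.0)
print(f"mean limit load = {mission_max.mean():.1f} kN, "
      f"99th-percentile design limit = {design_limit:.1f} kN")
```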
Wind/tornado design criteria, development to achieve required probabilistic performance goals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, D.S.
1991-06-01
This paper describes the strategy for developing new design criteria for a critical facility to withstand loading induced by the wind/tornado hazard. The proposed design requirements for resisting wind/tornado loads are based on probabilistic performance goals. The proposed design criteria were prepared by a Working Group consisting of six experts in wind/tornado engineering and meteorology. Utilizing their best technical knowledge and judgment in the wind/tornado field, they met and discussed the methodologies and reviewed available data. A review of the available wind/tornado hazard model for the site, structural response evaluation methods, and conservative acceptance criteria led to proposed design criteria that have a high probability of achieving the required performance goals.
Probabilistic SSME blades structural response under random pulse loading
NASA Technical Reports Server (NTRS)
Shiao, Michael; Rubinstein, Robert; Nagpal, Vinod K.
1987-01-01
The purpose is to develop models of random impacts on a Space Shuttle Main Engine (SSME) turbopump blade and to predict the probabilistic structural response of the blade to these impacts. The random loading is caused by the impact of debris. The probabilistic structural response is characterized by distribution functions for stress and displacements as functions of the loading parameters which determine the random pulse model. These parameters include pulse arrival, amplitude, and location. The analysis can be extended to predict level crossing rates. This requires knowledge of the joint distribution of the response and its derivative. The model of random impacts chosen allows the pulse arrivals, pulse amplitudes, and pulse locations to be random. Specifically, the pulse arrivals are assumed to be governed by a Poisson process, which is characterized by a mean arrival rate. The pulse intensity is modelled as a normally distributed random variable with a zero mean chosen independently at each arrival. The standard deviation of the distribution is a measure of pulse intensity. Several different models were used for the pulse locations. For example, three points near the blade tip were chosen at which pulses were allowed to arrive with equal probability. Again, the locations were chosen independently at each arrival. The structural response was analyzed both by direct Monte Carlo simulation and by a semi-analytical method.
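The random pulse model (Poisson arrivals, zero-mean normal amplitudes, random locations) can be sampled directly and pushed through a modal impulse response in the spirit of the paper's Monte Carlo route. The single-mode blade idealization and every numerical value below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
wn, zeta = 2 * np.pi * 300.0, 0.01     # assumed blade mode: 300 Hz, 1% damping
wd = wn * np.sqrt(1 - zeta**2)
T, rate, sigma_p = 0.1, 200.0, 1.0     # window (s), mean arrival rate (1/s), pulse std

def peak_response(rng):
    """One Monte Carlo sample: superposed impulse responses of a single mode."""
    n = rng.poisson(rate * T)                 # Poisson number of impacts
    t_k = rng.uniform(0.0, T, n)              # random arrival times
    p_k = rng.normal(0.0, sigma_p, n)         # zero-mean normal amplitudes
    t = np.linspace(0.0, T, 2000)
    x = np.zeros_like(t)
    for tk, pk in zip(t_k, p_k):
        tau = t - tk
        h = np.where(tau >= 0, np.exp(-zeta * wn * tau) * np.sin(wd * tau) / wd, 0.0)
        x += pk * h                           # superpose each impulse response
    return np.abs(x).max()

peaks = np.array([peak_response(rng) for _ in range(500)])
print(f"median peak = {np.median(peaks):.3e}, 99th pct = {np.percentile(peaks, 99):.3e}")
```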
The pdf approach to turbulent polydispersed two-phase flows
NASA Astrophysics Data System (ADS)
Minier, Jean-Pierre; Peirano, Eric
2001-10-01
The purpose of this paper is to develop a probabilistic approach to turbulent polydispersed two-phase flows. The two-phase flows considered are composed of a continuous phase, which is a turbulent fluid, and a dispersed phase, which represents an ensemble of discrete particles (solid particles, droplets or bubbles). Gathering the difficulties of turbulent flows and of particle motion, the challenge is to work out a general modelling approach that meets three requirements: to treat accurately the physically relevant phenomena, to provide enough information to address issues of complex physics (combustion, polydispersed particle flows, …) and to remain tractable for general non-homogeneous flows. The present probabilistic approach models the statistical dynamics of the system and consists in simulating the joint probability density function (pdf) of a number of fluid and discrete particle properties. A new point is that both the fluid and the particles are included in the pdf description. The derivation of the joint pdf model for the fluid and for the discrete particles is worked out in several steps. The mathematical properties of stochastic processes are first recalled. The various hierarchies of pdf descriptions are detailed and the physical principles that are used in the construction of the models are explained. The Lagrangian one-particle probabilistic description is developed first for the fluid alone, then for the discrete particles and finally for the joint fluid and particle turbulent systems. In the case of the probabilistic description for the fluid alone or for the discrete particles alone, numerical computations are presented and discussed to illustrate how the method works in practice and the kind of information that can be extracted from it. Comments on the current modelling state and propositions for future investigations which try to link the present work with other ideas in physics are made at the end of the paper.
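In one-particle pdf methods of this kind, the velocity along a trajectory is often modeled by a Langevin (Ornstein-Uhlenbeck type) stochastic equation whose stationary pdf is Gaussian. The sketch below integrates such a model for an ensemble of notional particles; the time scale and velocity variance are assumed values, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(7)
dt, n_steps, n_particles = 1e-3, 5000, 2000
T_L, sigma_u = 0.05, 0.8   # assumed Lagrangian time scale (s), velocity std (m/s)

# Simplified Langevin model: du = -u/T_L dt + sigma_u*sqrt(2/T_L) dW,
# whose stationary one-point pdf is Gaussian with std sigma_u.
u = np.zeros(n_particles)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_particles)
    u += -u / T_L * dt + sigma_u * np.sqrt(2.0 / T_L) * dW

print(f"sample std = {u.std():.3f} m/s (target {sigma_u})")
```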
Stochastic fundamental diagram for probabilistic traffic flow modeling.
DOT National Transportation Integrated Search
2011-09-01
Flowing water in river, transported gas or oil in pipe, electric current in wire, moving : goods on conveyor, molecular motors in living cell, and driving vehicles on a highway are : various kinds of flow from physical or non-physical systems, yet ea...
Probabilistic structural analysis methods and applications
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Wu, Y.-T.; Dias, B.; Rajagopal, K. R.
1988-01-01
An advanced algorithm for simulating the probabilistic distribution of structural responses due to statistical uncertainties in loads, geometry, material properties, and boundary conditions is reported. The method effectively combines an advanced algorithm for calculating probability levels for multivariate problems (fast probability integration) together with a general-purpose finite-element code for stress, vibration, and buckling analysis. Application is made to a space propulsion system turbine blade for which the geometry and material properties are treated as random variables.
NASA Astrophysics Data System (ADS)
Králik, Juraj
2017-07-01
The paper presents the probabilistic and sensitivity analysis of the efficiency of the damping devices protecting the nuclear power plant cover against the drop of a TK C30 nuclear-fuel container. A spatial finite element idealization of the nuclear power plant structure is used. A steel-pipe damper system is proposed for dissipating the kinetic energy of the container's free fall. Experimental results on the behavior of the basic shock-damper element under impact loads are presented. The Newmark integration method is used to solve the dynamic equations. The sensitivity and probabilistic analyses of the damping devices were carried out with the AntHILL and ANSYS software.
Probabilistic simulation of the human factor in structural reliability
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1993-01-01
A formal approach is described in an attempt to computationally simulate the probable ranges of uncertainties of the human factor in structural probabilistic assessments. A multi-factor interaction equation (MFIE) model has been adopted for this purpose. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are considered to demonstrate the concept. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Probabilistic sensitivity studies are subsequently performed to assess the suitability of the MFIE and the validity of the whole approach. Results obtained show that the uncertainties range from five to thirty percent for the most optimistic case, assuming 100 percent for no error.
Probabilistic simulation of the human factor in structural reliability
NASA Astrophysics Data System (ADS)
Chamis, Christos C.; Singhal, Surendra N.
1994-09-01
The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).
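The multifactor interaction equation multiplies power-law terms of the form [(A_F - A)/(A_F - A_0)]^a, one per factor. A minimal sketch follows; the factor scaling, levels, and exponents are invented for illustration and are not the calibrated values of the study.

```python
import numpy as np

def mfie(values, v0, vf, exponents):
    """Multi-factor interaction equation: S/S0 = prod[(vf - v)/(vf - v0)]^a.

    values: current factor levels; v0: reference levels; vf: extreme levels
    at which the property vanishes; exponents: empirical exponents.
    """
    terms = (vf - values) / (vf - v0)
    return np.prod(terms ** exponents)

# Hypothetical human-factor levels scored on a 0..1 scale (1 = worst case).
v  = np.array([0.2, 0.1, 0.3])   # e.g. work load, health, job satisfaction
v0 = np.zeros(3)                 # nominal levels (no degradation)
vf = np.ones(3)                  # extreme levels
a  = np.array([0.5, 0.5, 0.5])
print(f"relative performance = {mfie(v, v0, vf, a):.3f}")
```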
DISCOUNTING OF DELAYED AND PROBABILISTIC LOSSES OVER A WIDE RANGE OF AMOUNTS
Green, Leonard; Myerson, Joel; Oliveira, Luís; Chang, Seo Eun
2014-01-01
The present study examined delay and probability discounting of hypothetical monetary losses over a wide range of amounts (from $20 to $500,000) in order to determine how amount affects the parameters of the hyperboloid discounting function. In separate conditions, college students chose between immediate payments and larger, delayed payments and between certain payments and larger, probabilistic payments. The hyperboloid function accurately described both types of discounting, and amount of loss had little or no systematic effect on the degree of discounting. Importantly, the amount of loss also had little systematic effect on either the rate parameter or the exponent of the delay and probability discounting functions. The finding that the parameters of the hyperboloid function remain relatively constant across a wide range of amounts of delayed and probabilistic loss stands in contrast to the robust amount effects observed with delayed and probabilistic rewards. At the individual level, the degree to which delayed losses were discounted was uncorrelated with the degree to which probabilistic losses were discounted, and delay and probability loaded on two separate factors, similar to what is observed with delayed and probabilistic rewards. Taken together, these findings argue that although delay and probability discounting involve fundamentally different decision-making mechanisms, nevertheless the discounting of delayed and probabilistic losses share an insensitivity to amount that distinguishes it from the discounting of delayed and probabilistic gains. PMID:24745086
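The hyperboloid discounting function referred to above is V = A/(1 + kx)^s, where x is the delay or, for probabilistic outcomes, the odds against receipt (1 - p)/p. A small evaluation sketch with arbitrary parameter values, consistent with the finding that those parameters vary little with amount:

```python
import numpy as np

def hyperboloid(amount, x, k, s):
    """Subjective value of a delayed/probabilistic outcome: V = A / (1 + k*x)^s.

    x is the delay, or the odds against receipt (1 - p) / p for probabilistic
    outcomes; k is the rate parameter and s the exponent.
    """
    return amount / (1.0 + k * x) ** s

# Hypothetical parameters for a $1000 delayed loss.
delays = np.array([1, 6, 12, 60])            # months
print(hyperboloid(1000.0, delays, k=0.05, s=0.8))
```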
Prados-Privado, María; Gehrke, Sérgio A; Rojo, Rosa; Prados-Frutos, Juan Carlos
2018-06-11
The aim of this study was to fully characterize the mechanical behavior of an external hexagonal implant connection (ø3.5 mm, 10-mm length) with an in vitro study, a three-dimensional finite element analysis, and a probabilistic fatigue study. Ten implant-abutment assemblies were randomly divided into two groups, five were subjected to a fracture test to obtain the maximum fracture load, and the remaining were exposed to a fatigue test with 360,000 cycles of 150 ± 10 N. After mechanical cycling, all samples were attached to the torque-testing machine and the removal torque was measured in Newton centimeters. A finite element analysis (FEA) was then executed in ANSYS® to verify all results obtained in the mechanical tests. Finally, due to the randomness of the fatigue phenomenon, a probabilistic fatigue model was computed to obtain the probability of failure associated with each cycle load. FEA demonstrated that the fracture corresponded with a maximum stress of 2454 MPa obtained in the in vitro fracture test. Mean life was verified by the three methods. Results obtained by the FEA, the in vitro test, and the probabilistic approaches were in accordance. Under these conditions, no mechanical etiology failure is expected to occur up to 100,000 cycles.
Processing of probabilistic information in weight perception and motor prediction.
Trampenau, Leif; van Eimeren, Thilo; Kuhtz-Buschbeck, Johann
2017-02-01
We studied the effects of probabilistic cues, i.e., of information of limited certainty, in the context of an action task (GL: grip-lift) and of a perceptual task (WP: weight perception). Normal subjects (n = 22) saw four different probabilistic visual cues, each of which announced the likely weight of an object. In the GL task, the object was grasped and lifted with a pinch grip, and the peak force rates indicated that the grip and load forces were scaled predictively according to the probabilistic information. The WP task provided the expected heaviness related to each probabilistic cue; the participants gradually adjusted the object's weight until its heaviness matched the expected weight for a given cue. Subjects were randomly assigned to two groups: one started with the GL task and the other with the WP task. The four different probabilistic cues influenced weight adjustments in the WP task and peak force rates in the GL task in a similar manner. The interpretation and utilization of the probabilistic information was critically influenced by the initial task. Participants who started with the WP task classified the four probabilistic cues into four distinct categories and applied these categories to the subsequent GL task. On the other hand, participants who started with the GL task applied three distinct categories to the four cues and retained this classification in the following WP task. The initial strategy, once established, determined how the probabilistic information was interpreted and implemented.
NASA Astrophysics Data System (ADS)
Kala, J.; Bajer, M.; Barnat, J.; Smutný, J.
2010-12-01
Pedestrian-induced vibration is a serviceability criterion. This loading is most significant for lightweight footbridge structures, but it has also been established as a basic loading for the ceilings of various ordinary buildings. Wide variations of this action exist. To verify the differing conclusions of various authors, measurements of the vertical pressure invoked during walking were performed. The article also compares the approaches of different design codes.
Reliability analysis of composite structures
NASA Technical Reports Server (NTRS)
Kan, Han-Pin
1992-01-01
A probabilistic static stress analysis methodology has been developed to estimate the reliability of a composite structure. Closed form stress analysis methods are the primary analytical tools used in this methodology. These structural mechanics methods are used to identify independent variables whose variations significantly affect the performance of the structure. Once these variables are identified, scatter in their values is evaluated and statistically characterized. The scatter in applied loads and the structural parameters are then fitted to appropriate probabilistic distribution functions. Numerical integration techniques are applied to compute the structural reliability. The predicted reliability accounts for scatter due to variability in material strength, applied load, fabrication and assembly processes. The influence of structural geometry and mode of failure are also considerations in the evaluation. Example problems are given to illustrate various levels of analytical complexity.
Commercialization of NESSUS: Status
NASA Technical Reports Server (NTRS)
Thacker, Ben H.; Millwater, Harry R.
1991-01-01
A plan was initiated in 1988 to commercialize the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) probabilistic structural analysis software. The goal of the on-going commercialization effort is to begin the transfer of Probabilistic Structural Analysis Method (PSAM) developed technology into industry and to develop additional funding resources in the general area of structural reliability. The commercialization effort is summarized. The SwRI NESSUS Software System is a general purpose probabilistic finite element computer program using state of the art methods for predicting stochastic structural response due to random loads, material properties, part geometry, and boundary conditions. NESSUS can be used to assess structural reliability, to compute probability of failure, to rank the input random variables by importance, and to provide a more cost effective design than traditional methods. The goal is to develop a general probabilistic structural analysis methodology to assist in the certification of critical components in the next generation Space Shuttle Main Engine.
Develop Probabilistic Tsunami Design Maps for ASCE 7
NASA Astrophysics Data System (ADS)
Wei, Y.; Thio, H. K.; Chock, G.; Titov, V. V.
2014-12-01
A national standard for engineering design for tsunami effects has not existed before, and this significant risk is mostly ignored in engineering design. The American Society of Civil Engineers (ASCE) 7 Tsunami Loads and Effects Subcommittee is completing a chapter for the 2016 edition of the ASCE/SEI 7 Standard. Chapter 6, Tsunami Loads and Effects, would become the first national tsunami design provisions. These provisions will apply to essential facilities and critical infrastructure. This standard for tsunami loads and effects will apply to designs as part of tsunami preparedness. The provisions will also have significance as a post-tsunami recovery tool, for planning and evaluating reconstruction. Maps of 2,500-year probabilistic tsunami inundation for Alaska, Washington, Oregon, California, and Hawaii need to be developed for use with the ASCE design provisions. These new tsunami design zone maps will define the coastal zones where structures of greater importance would be designed for tsunami resistance and community resilience. The NOAA Center for Tsunami Research (NCTR) has developed 75 tsunami inundation models as part of the operational tsunami model forecast capability for the U.S. coastline. NCTR, UW, and URS are collaborating with ASCE to develop the 2,500-year tsunami design maps for the Pacific states using these tsunami models. This ensures the probabilistic criteria are established in ASCE's tsunami design maps. URS established a Probabilistic Tsunami Hazard Assessment approach consisting of a large number of tsunami scenarios that include both epistemic uncertainty and aleatory variability (Thio et al., 2010). Their study provides 2,500-year offshore tsunami heights at the 100-m water depth, along with the disaggregated earthquake sources. NOAA's tsunami models are used to identify a group of sources that produce these 2,500-year tsunami heights. The tsunami inundation limits and runup heights derived from these sources establish the tsunami design map for the study site. ASCE's Energy Grade Line Analysis then uses these modeling constraints to derive hydrodynamic forces for structures within the tsunami design zone. The probabilistic tsunami design maps will be validated by comparison to state inundation maps under the coordination of the National Tsunami Hazard Mitigation Program.
PROBABILISTIC SAFETY ASSESSMENT OF OPERATIONAL ACCIDENTS AT THE WASTE ISOLATION PILOT PLANT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rucker, D.F.
2000-09-01
This report presents a probabilistic safety assessment of radioactive doses as consequences from accident scenarios to complement the deterministic assessment presented in the Waste Isolation Pilot Plant (WIPP) Safety Analysis Report (SAR). The International Commission on Radiological Protection (ICRP) recommends that both assessments be conducted to ensure that ''an adequate level of safety has been achieved and that no major contributors to risk are overlooked'' (ICRP 1993). To that end, the probabilistic assessment for the WIPP accident scenarios addresses the wide range of assumptions, e.g. the range of values representing the radioactive source of an accident, that could possibly have been overlooked by the SAR. Routine releases of radionuclides from the WIPP repository to the environment during the waste emplacement operations are expected to be essentially zero. In contrast, potential accidental releases from postulated accident scenarios during waste handling and emplacement could be substantial, which necessitates radiological air monitoring and confinement barriers (DOE 1999). The WIPP Safety Analysis Report (SAR) calculated doses from accidental releases to the on-site (at 100 m from the source) and off-site (at the Exclusive Use Boundary and Site Boundary) public by a deterministic approach. This approach, as demonstrated in the SAR, uses single-point values of key parameters to assess the 50-year, whole-body committed effective dose equivalent (CEDE). The basic assumptions used in the SAR to formulate the CEDE are retained for this report's probabilistic assessment. However, for the probabilistic assessment, single-point parameter values were replaced with probability density functions (PDF) and were sampled over an expected range. Monte Carlo simulations were run, in which 10,000 iterations were performed by randomly selecting one value for each parameter and calculating the dose. Statistical information was then derived from the 10,000-iteration batch, including the 5%, 50%, and 95% dose likelihoods and the sensitivity of each assumption to the calculated doses. As one would intuitively expect, the doses from the probabilistic assessment for most scenarios were found to be much less than the deterministic assessment. The lower dose of the probabilistic assessment can be attributed to a ''smearing'' of values from the high and low end of the PDF spectrum of the various input parameters. The analysis also found a potential weakness in the deterministic analysis used in the SAR: a detail on drum loading was not taken into consideration. Waste emplacement operations thus far have handled drums from each shipment as a single unit, i.e. drums from each shipment are kept together. Shipments typically come from a single waste stream, and therefore the curie loading of each drum can be considered nearly identical to that of its neighbor. Calculations show that if there are large numbers of drums used in the accident scenario assessment, e.g. 28 drums in the waste hoist failure scenario (CH5), then the probabilistic dose assessment calculations will diverge from the deterministically determined doses. As it is currently calculated, the deterministic dose assessment assumes one drum loaded to the maximum allowable (80 PE-Ci), and the remaining are 10% of the maximum. The effective average drum curie content is therefore less in the deterministic assessment than in the probabilistic assessment for a large number of drums.
EEG recommends that the WIPP SAR calculations be revisited and updated to include a probabilistic safety assessment.
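The report's Monte Carlo scheme (sample each input from its PDF, compute the dose, repeat 10,000 times, then read off the 5%, 50%, and 95% levels) can be sketched as below; the distributions, dose model, and constants are invented placeholders, not the WIPP SAR inputs.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Hypothetical input PDFs for one accident scenario (illustrative only).
source_ci  = rng.triangular(8.0, 20.0, 80.0, n)    # drum curie content, PE-Ci
release_fr = rng.lognormal(np.log(1e-3), 0.5, n)   # airborne release fraction
chi_q      = rng.lognormal(np.log(1e-4), 0.4, n)   # dispersion factor, s/m^3
dcf        = 3.0e2                                 # dose conversion constant (assumed)

dose = source_ci * release_fr * chi_q * dcf        # toy 50-yr CEDE, rem
p5, p50, p95 = np.percentile(dose, [5, 50, 95])
print(f"dose percentiles: 5% = {p5:.2e}, 50% = {p50:.2e}, 95% = {p95:.2e} rem")
```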
Kapo, Katherine E; McDonough, Kathleen; Federle, Thomas; Dyer, Scott; Vamshi, Raghu
2015-06-15
Environmental exposure and associated ecological risk related to down-the-drain chemicals discharged by municipal wastewater treatment plants (WWTPs) are strongly influenced by the in-stream dilution of receiving waters, which varies by geography, flow conditions and upstream wastewater inputs. The iSTREEM® model (American Cleaning Institute, Washington D.C.) was utilized to determine probabilistic distributions for no-decay and decay-based dilution factors in mean annual and low (7Q10) flow conditions. The dilution factors derived in this study are "combined" dilution factors which account for both hydrologic dilution and cumulative upstream effluent contributions that will differ depending on the rate of in-stream decay due to biodegradation, volatilization, sorption, etc. for the chemical being evaluated. The median dilution factors estimated in this study (based on various in-stream decay rates from zero decay to a 1 h half-life) for WWTP mixing zones dominated by domestic wastewater flow ranged from 132 to 609 at mean flow and 5 to 25 at low flow, while median dilution factors at drinking water intakes (mean flow) ranged from 146 to 2×10^7 depending on the in-stream decay rate. WWTPs within the iSTREEM® model were used to generate a distribution of per capita wastewater generated in the U.S. The dilution factor and per capita wastewater generation distributions developed by this work can be used to conduct probabilistic exposure assessments for down-the-drain chemicals in influent wastewater, wastewater treatment plant mixing zones and at drinking water intakes in the conterminous U.S. In addition, evaluation of the types and abundance of U.S. wastewater treatment processes provided insight into treatment trends and the flow volume treated by each type of process. Moreover, removal efficiencies of chemicals can differ by treatment type. Hence, the availability of distributions for per capita wastewater production, treatment type, and dilution factors at a national level provides a series of practical and powerful tools for building probabilistic exposure models.
SRB attrition rate study of the aft skirt due to water impact cavity collapse loading
NASA Technical Reports Server (NTRS)
Crockett, C. D.
1976-01-01
A methodology was presented so that realistic attrition prediction could aid in selecting an optimum design option for minimizing the effects of updated loads on the Space Shuttle Solid Rocket Booster (SRB) aft skirt. The updated loads resulted in water impact attrition rates greater than 10 percent for the aft skirt structure. Adding weight to reinforce the aft skirt was undesirable. The refined method treats the occurrences of the load distribution probabilistically, radially and longitudinally, with respect to the critical structural response.
Effect of Cyclic Thermo-Mechanical Loads on Fatigue Reliability in Polymer Matrix Composites
NASA Technical Reports Server (NTRS)
Shah, A. R.; Murthy, P. L. N.; Chamis, C. C.
1996-01-01
A methodology to compute the probabilistic fatigue life of polymer matrix laminated composites has been developed and demonstrated. Matrix degradation effects caused by long-term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress dependent multi-factor interaction relationship developed at NASA Lewis Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability integration method is used to compute the probabilistic distribution of the response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) are computed and their significance in reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/±45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes, has been studied. The results show that, at low mechanical cyclic loads and low thermal cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness. Whereas at high mechanical cyclic loads and high thermal cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of the matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.
Probability-Based Design Criteria of the ASCE 7 Tsunami Loads and Effects Provisions (Invited)
NASA Astrophysics Data System (ADS)
Chock, G.
2013-12-01
Mitigation of tsunami risk requires a combination of emergency preparedness for evacuation in addition to providing structural resilience of critical facilities, infrastructure, and key resources necessary for immediate response and economic and social recovery. Critical facilities would include emergency response, medical, tsunami refuges and shelters, ports and harbors, lifelines, transportation, telecommunications, power, financial institutions, and major industrial/commercial facilities. The Tsunami Loads and Effects Subcommittee of the ASCE/SEI 7 Standards Committee is developing a proposed new Chapter 6 - Tsunami Loads and Effects for the 2016 edition of the ASCE 7 Standard. ASCE 7 provides the minimum design loads and requirements for structures subject to building codes such as the International Building Code utilized in the USA. In this paper we provide a review emphasizing the intent of these new code provisions and explain the design methodology. The ASCE 7 provisions for Tsunami Loads and Effects enable a set of analysis and design methodologies that are consistent with performance-based engineering based on probabilistic criteria. The ASCE 7 Tsunami Loads and Effects chapter will be initially applicable only to the states of Alaska, Washington, Oregon, California, and Hawaii. Ground shaking effects and subsidence from a preceding local offshore Maximum Considered Earthquake will also be considered prior to tsunami arrival for Alaska and states in the Pacific Northwest regions governed by nearby offshore subduction earthquakes. For national tsunami design provisions to achieve a consistent reliability standard of structural performance for community resilience, a new generation of tsunami inundation hazard maps for design is required. The lesson of recent tsunamis is that historical records alone do not provide a sufficient measure of the potential heights of future tsunamis. Engineering design must consider the occurrence of events greater than the scenarios in the historical record, and should properly be based on the underlying seismicity of subduction zones. Therefore, Probabilistic Tsunami Hazard Analysis (PTHA) consistent with source seismicity must be performed in addition to consideration of historical event scenarios. A method of Probabilistic Tsunami Hazard Analysis has been established that is generally consistent with Probabilistic Seismic Hazard Analysis in the treatment of uncertainty. These new tsunami design zone maps will define the coastal zones where structures of greater importance would be designed for tsunami resistance and community resilience. Structural member acceptability criteria will be based on performance objectives for a 2,500-year Maximum Considered Tsunami. The approach developed by the ASCE Tsunami Loads and Effects Subcommittee of the ASCE 7 Standard would result in the first national unification of tsunami hazard criteria for design codes, reflecting the modern approach of performance-based engineering.
Combined loading criterial influence on structural performance
NASA Technical Reports Server (NTRS)
Kuchta, B. J.; Sealey, D. M.; Howell, L. J.
1972-01-01
An investigation was conducted to determine the influence of combined loading criteria on the space shuttle structural performance. The study consisted of four primary phases: Phase (1) The determination of the sensitivity of structural weight to various loading parameters associated with the space shuttle. Phase (2) The determination of the sensitivity of structural weight to various levels of loading parameter variability and probability. Phase (3) The determination of shuttle mission loading parameters variability and probability as a function of design evolution and the identification of those loading parameters where inadequate data exists. Phase (4) The determination of rational methods of combining both deterministic time varying and probabilistic loading parameters to provide realistic design criteria. The study results are presented.
Probabilistic lifetime strength of aerospace materials via computational simulation
NASA Technical Reports Server (NTRS)
Boyce, Lola; Keating, Jerome P.; Lovelace, Thomas B.; Bast, Callie C.
1991-01-01
The results of a second year effort of a research program are presented. The research included development of methodology that provides probabilistic lifetime strength of aerospace materials via computational simulation. A probabilistic phenomenological constitutive relationship, in the form of a randomized multifactor interaction equation, is postulated for strength degradation of structural components of aerospace propulsion systems subjected to a number of effects of primitive variables. These primitive variables often originate in the environment and may include stress from loading, temperature, chemical, or radiation attack. This multifactor interaction constitutive equation is included in the computer program, PROMISS. Also included in the research is the development of methodology to calibrate the constitutive equation using actual experimental materials data together with the multiple linear regression of that data.
Quantification of uncertainties in the performance of smart composite structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1993-01-01
A composite wing with spars, bulkheads, and built-in control devices is evaluated using a method for the probabilistic assessment of smart composite structures. Structural responses (such as change in angle of attack, vertical displacements, and stresses in regular plies with traditional materials and in control plies with mixed traditional and actuation materials) are probabilistically assessed to quantify their respective scatter. Probabilistic sensitivity factors are computed to identify those parameters that have a significant influence on a specific structural response. Results show that the uncertainties in the responses of smart composite structures can be quantified. Responses such as structural deformation, ply stresses, frequencies, and buckling loads in the presence of defects can be reliably controlled to satisfy specified design requirements.
Steven C. McKelvey; William D. Smith; Frank Koch
2012-01-01
This project summary describes a probabilistic model developed with funding support from the Forest Health Monitoring Program of the Forest Service, U.S. Department of Agriculture (BaseEM Project SO-R-08-01). The model has been implemented in SODBuster, a standalone software package developed using the Java software development kit from Sun Microsystems.
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Hoge, Peter A.; Patel, B. M.; Nagpal, Vinod K.
2009-01-01
The primary structure of the Ares I-X Upper Stage Simulator (USS) launch vehicle is constructed of welded mild steel plates. There is some concern over the possibility of structural failure due to welding flaws. It was considered critical to quantify the impact of uncertainties in residual stress, material porosity, applied loads, and material and crack growth properties on the reliability of the welds during pre-flight and flight. A criterion--an existing maximum size crack at the weld toe must be smaller than the maximum allowable flaw size--was established to estimate the reliability of the welds. A spectrum of maximum allowable flaw sizes was developed for different possible combinations of all of the above listed variables by performing probabilistic crack growth analyses using the ANSYS finite element analysis code in conjunction with the NASGRO crack growth code. Two alternative methods were used to account for residual stresses: (1) The mean residual stress was assumed to be 41 ksi and a limit was set on the net section flow stress during crack propagation. The critical flaw size was determined by parametrically increasing the initial flaw size and detecting if this limit was exceeded during four complete flight cycles, and (2) The mean residual stress was assumed to be 49.6 ksi (the parent material's yield strength) and the net section flow stress limit was ignored. The critical flaw size was determined by parametrically increasing the initial flaw size and detecting if catastrophic crack growth occurred during four complete flight cycles. Both surface-crack models and through-crack models were utilized to characterize cracks in the weld toe.
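The probabilistic crack growth idea can be illustrated with a simplified Paris-law model standing in for the full NASGRO equation: sample scattered growth constants and stress ranges, march the crack size forward cycle by cycle, and count the fraction exceeding an allowable size. Every numerical value below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 2000

# Simplified Paris-law growth of a weld-toe crack (NASGRO adds threshold
# and closure terms): da/dN = C * (Y * dS * sqrt(pi * a))^m.
C = rng.lognormal(np.log(2.0e-11), 0.3, n)   # growth coefficient, with scatter
m = 3.0                                      # growth exponent
dS = rng.normal(200.0, 20.0, n)              # stress range per cycle, MPa
Y, a0 = 1.12, 1.0e-3                         # geometry factor, initial flaw (m)
cycles, a_allow = 20_000, 2.0e-3             # cycle count, allowable size (m)

a = np.full(n, a0)
for _ in range(cycles):
    dK = Y * dS * np.sqrt(np.pi * a)         # stress intensity range, MPa*sqrt(m)
    a = np.minimum(a + C * dK ** m, 1e-2)    # cap at 10 mm, treated as failed

print(f"P(final crack > allowable) = {(a > a_allow).mean():.3f}")
```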
NASA Astrophysics Data System (ADS)
Halder, A.; Miller, F. J.
1982-03-01
A probabilistic model to evaluate the risk of liquefaction at a site and to limit or eliminate damage during earthquake-induced liquefaction is proposed. The model is extended to consider three-dimensional nonhomogeneous soil properties. The parameters relevant to the liquefaction phenomenon are identified, including: (1) soil parameters; (2) parameters required to consider laboratory test and sampling effects; and (3) loading parameters. The fundamentals of risk-based design concepts pertinent to liquefaction are reviewed. A detailed statistical evaluation of the soil parameters in the proposed liquefaction model is provided, and the uncertainty associated with the estimation of in situ relative density is evaluated for both direct and indirect methods. It is found that, for the liquefaction potential, the uncertainties in the load parameters could be higher than those in the resistance parameters.
Dynamic Stability of Uncertain Laminated Beams Under Subtangential Loads
NASA Technical Reports Server (NTRS)
Goyal, Vijay K.; Kapania, Rakesh K.; Adelman, Howard (Technical Monitor); Horta, Lucas (Technical Monitor)
2002-01-01
Because of the inherent complexity of fiber-reinforced laminated composites, it can be challenging to manufacture composite structures according to their exact design specifications, resulting in unwanted material and geometric uncertainties. In this research, we focus on the deterministic and probabilistic stability analysis of laminated structures subject to subtangential loading, a combination of conservative and nonconservative tangential loads, using the dynamic criterion. Thus a shear-deformable laminated beam element, including warping effects, is derived to study the deterministic and probabilistic response of laminated beams. This twenty-one-degree-of-freedom element can be used for solving both static and dynamic problems. In the first-order shear deformable model used here, we have employed a more accurate method to obtain the transverse shear correction factor. The dynamic version of the principle of virtual work for laminated composites is expressed in its nondimensional form, and the element tangent stiffness and mass matrices are obtained using analytical integration. The stability is studied by giving the structure a small disturbance about an equilibrium configuration and observing whether the resulting response remains small. In order to study the dynamic behavior by including uncertainties into the problem, three models were developed: exact Monte Carlo simulation, sensitivity-based Monte Carlo simulation, and probabilistic FEA. These methods were integrated into the developed finite element analysis. Perturbation and sensitivity analysis have also been used to study nonconservative problems, as well as the stability analysis, using the dynamic criterion.
1980-09-01
A probabilistic reliability model for the XM 753 projectile rocket motor-to-bulkhead joint under extreme loading conditions is constructed.
NASA Astrophysics Data System (ADS)
Ishibashi, Yoshihiro; Fukui, Minoru
2018-03-01
The effect of the probabilistic delayed start on the one-dimensional traffic flow is investigated on the basis of several models. Analogy with the degeneracy of the states and its resolution, as well as that with the mathematical procedures adopted for them, is utilized. The perturbation is assumed to be proportional to the probability of the delayed start, and the perturbation function is determined so that imposed conditions are fulfilled. The obtained formulas coincide with those previously derived on the basis of the mean-field analyses of the Nagel-Schreckenberg and Fukui-Ishibashi models, and reproduce the cellular automaton simulation results.
GENERAL: A modified weighted probabilistic cellular automaton traffic flow model
NASA Astrophysics Data System (ADS)
Zhuang, Qian; Jia, Bin; Li, Xin-Gang
2009-08-01
This paper modifies the weighted probabilistic cellular automaton model (Li X L, Kuang H, Song T, et al 2008 Chin. Phys. B 17 2366) which considered a diversity of traffic behaviors under real traffic situations induced by various driving characters and habits. In the new model, the effects of the velocity at the last time step and drivers' desire for acceleration are taken into account. The fundamental diagram, spatial-temporal diagram, and the time series of one-minute data are analyzed. The results show that this model reproduces synchronized flow. Finally, it simulates the on-ramp system with the proposed model. Some characteristics including the phase diagram are studied.
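Probabilistic CA traffic models of this family descend from the Nagel-Schreckenberg update rules: accelerate, avoid collision, brake at random with probability p, then move. The sketch below implements the plain Nagel-Schreckenberg model, not the weighted variant of the paper; road length, density, and p are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
L_road, n_cars, v_max, p_slow, steps = 200, 60, 5, 0.3, 500

pos = np.sort(rng.choice(L_road, n_cars, replace=False))  # circular road
vel = np.zeros(n_cars, dtype=int)

flow = 0
for _ in range(steps):
    gaps = (np.roll(pos, -1) - pos - 1) % L_road     # empty cells to leader
    vel = np.minimum(vel + 1, v_max)                 # 1. acceleration
    vel = np.minimum(vel, gaps)                      # 2. slowing (no collision)
    brake = rng.random(n_cars) < p_slow              # 3. probabilistic delay
    vel = np.where(brake, np.maximum(vel - 1, 0), vel)
    pos = (pos + vel) % L_road                       # 4. movement
    flow += vel.sum()

print(f"mean flow = {flow / (steps * L_road):.3f} cars per cell per step")
```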
Stochastic methods for analysis of power flow in electric networks
NASA Astrophysics Data System (ADS)
1982-09-01
The modeling and effects of probabilistic behavior on steady-state power system operation were analyzed. A solution to the steady-state network flow equations that adheres both to Kirchhoff's laws and to probabilistic laws was obtained, using either combinatorial or functional approximation techniques. The development of sound techniques for producing meaningful data to serve as input is examined. Electric demand modeling, equipment failure analysis, and algorithm development are investigated. Two major development areas are described: a decomposition of stochastic processes which gives stationarity, ergodicity, and even normality; and a powerful surrogate probability approach using proportions of time which allows the calculation of joint events from one-dimensional probability spaces.
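A minimal probabilistic power flow in the spirit described: sample random injections, solve linearized (DC) network equations that enforce Kirchhoff's laws, and read off the distribution of a line flow. The three-bus network and distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# 3-bus DC power flow: bus 0 is the slack; B is the reduced susceptance matrix.
B = np.array([[20.0, -10.0],
              [-10.0, 25.0]])          # p.u., buses 1 and 2

n = 10_000
# Random net injections (generation minus probabilistic demand), p.u.
P = np.column_stack([rng.normal(0.8, 0.1, n), rng.normal(-1.0, 0.15, n)])

theta = np.linalg.solve(B, P.T).T                 # bus angles for every sample
flow_12 = 10.0 * (theta[:, 0] - theta[:, 1])      # line 1-2 flow, p.u.
print(f"P(line 1-2 flow > 0.65 p.u.) = {(flow_12 > 0.65).mean():.4f}")
```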
Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem
NASA Astrophysics Data System (ADS)
Man, J.; Li, W.; Zeng, L.; Wu, L.
2015-12-01
The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos expansion (PCE) to approximate the original system, so that the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF can be even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and the active PCE basis functions are adaptively selected. The "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on numerical cases of unsaturated flow. It is shown that RAPCKF is more accurate than EnKF at the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.
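The RAPCKF itself is not specified beyond this abstract, but the EnKF baseline it is compared against is standard; the sketch below shows the stochastic (perturbed-observation) EnKF analysis step on a toy linear problem, with all dimensions and values invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(X, y, H, R):
    """X: (n_state, n_ens) forecast ensemble; y: observations; H: obs operator; R: obs cov."""
    n_ens = X.shape[1]
    Xm = X - X.mean(axis=1, keepdims=True)
    HX = H @ X
    HXm = HX - HX.mean(axis=1, keepdims=True)
    P_yy = HXm @ HXm.T / (n_ens - 1) + R           # innovation covariance
    P_xy = Xm @ HXm.T / (n_ens - 1)                # state-observation covariance
    K = P_xy @ np.linalg.inv(P_yy)                 # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T  # perturbed obs
    return X + K @ (Y - HX)

# Toy usage: 3 unknown parameters, 2 noisy observations, 100 ensemble members.
X = rng.normal(0.0, 1.0, size=(3, 100))
H = np.array([[1.0, 0.5, 0.0], [0.0, 1.0, 1.0]])
truth = np.array([0.4, -0.3, 0.8])
R = 0.05 * np.eye(2)
y = H @ truth + rng.multivariate_normal(np.zeros(2), R)
Xa = enkf_update(X, y, H, R)
print("posterior mean:", Xa.mean(axis=1))
```

PCKF-type methods replace the sampled forecast ensemble with a polynomial chaos surrogate, which is where the savings (and the dimensionality problem) come from.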
A Max-Flow Based Algorithm for Connected Target Coverage with Probabilistic Sensors
Shan, Anxing; Xu, Xianghua; Cheng, Zongmao; Wang, Wensheng
2017-01-01
Coverage is a fundamental issue in the research field of wireless sensor networks (WSNs). Connected target coverage addresses sensor placement that guarantees both coverage and connectivity. Existing works largely rely on the Boolean disk model, which is only a coarse approximation of practical sensing. In this paper, we focus on the connected target coverage issue based on the probabilistic sensing model, which characterizes the quality of coverage more accurately. In the probabilistic sensing model, sensors are only able to detect a target with a certain probability. We study the collaborative detection probability of a target under multiple sensors. Armed with this analysis, we formulate the minimum ϵ-connected target coverage problem, which aims to minimize the number of sensors satisfying the requirements of both coverage and connectivity. We map it into a flow graph and present an approximation algorithm called the minimum vertices maximum flow algorithm (MVMFA) with provable time complexity and approximation ratios. To evaluate our design, we analyze the performance of MVMFA theoretically and also conduct extensive simulation studies to demonstrate the effectiveness of our proposed algorithm. PMID:28587084
A Max-Flow Based Algorithm for Connected Target Coverage with Probabilistic Sensors.
Shan, Anxing; Xu, Xianghua; Cheng, Zongmao; Wang, Wensheng
2017-05-25
Coverage is a fundamental issue in the research field of wireless sensor networks (WSNs). Connected target coverage addresses sensor placement that guarantees both coverage and connectivity. Existing works largely rely on the Boolean disk model, which is only a coarse approximation of practical sensing. In this paper, we focus on the connected target coverage issue based on the probabilistic sensing model, which characterizes the quality of coverage more accurately. In the probabilistic sensing model, sensors are only able to detect a target with a certain probability. We study the collaborative detection probability of a target under multiple sensors. Armed with this analysis, we formulate the minimum ϵ-connected target coverage problem, which aims to minimize the number of sensors satisfying the requirements of both coverage and connectivity. We map it into a flow graph and present an approximation algorithm called the minimum vertices maximum flow algorithm (MVMFA) with provable time complexity and approximation ratios. To evaluate our design, we analyze the performance of MVMFA theoretically and also conduct extensive simulation studies to demonstrate the effectiveness of our proposed algorithm.
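A minimal sketch of the collaborative detection idea: under independence, the probability that at least one sensor detects the target is one minus the product of the individual miss probabilities. The exponential-decay sensing law and its parameters below are common illustrative choices, not necessarily the paper's exact model.

```python
import math

def detection_prob(distance, r_ref=10.0, lam=0.2):
    """Single-sensor detection probability decaying with distance (assumed model)."""
    return math.exp(-lam * max(distance - r_ref, 0.0))

def collaborative_prob(distances):
    """Probability that at least one of several independent sensors detects the target."""
    miss = 1.0
    for d in distances:
        miss *= 1.0 - detection_prob(d)
    return 1.0 - miss

# A target covered by three sensors at different ranges; an epsilon-coverage
# requirement would then demand, e.g., collaborative_prob(...) >= 0.9.
print(collaborative_prob([8.0, 14.0, 20.0]))
```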
Terminal Model Of Newtonian Dynamics
NASA Technical Reports Server (NTRS)
Zak, Michail
1994-01-01
Paper presents study of theory of Newtonian dynamics of terminal attractors and repellers, focusing on issues of reversibility vs. irreversibility and deterministic evolution vs. probabilistic or chaotic evolution of dynamic systems. Theory developed called "terminal dynamics" emphasizes difference between it and classical Newtonian dynamics. Also holds promise for explaining irreversibility, unpredictability, probabilistic behavior, and chaos in turbulent flows, in thermodynamic phenomena, and in other dynamic phenomena and systems.
Probabilistic and Possibilistic Analyses of the Strength of a Bonded Joint
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson; Krishnamurthy, T.; Smith, Steven A.
2001-01-01
The effects of uncertainties on the strength of a single lap shear joint are examined. Probabilistic and possibilistic methods are used to account for the uncertainties. Linear and geometrically nonlinear finite element analyses are used in the studies. To evaluate the strength of the joint, fracture in the adhesive and material strength failure in the strap are considered. The study shows that linear analyses yield conservative predictions for failure loads. The possibilistic approach for treating uncertainties appears to be viable for preliminary design, but with several qualifications.
Improving urban wind flow predictions through data assimilation
NASA Astrophysics Data System (ADS)
Sousa, Jorge; Gorle, Catherine
2017-11-01
Computational fluid dynamics is fundamentally important to several aspects of the design of sustainable and resilient urban environments. The prediction of the flow pattern, for example, can help determine pedestrian wind comfort, air quality, optimal building ventilation strategies, and wind loading on buildings. However, the significant variability and uncertainty in the boundary conditions pose a challenge when interpreting results as a basis for design decisions. To improve our understanding of the uncertainties in the models and develop better predictive tools, we started a pilot field measurement campaign on Stanford University's campus combined with a detailed numerical prediction of the wind flow. The experimental data are being used to investigate the potential use of data assimilation and inverse techniques to better characterize the uncertainty in the results and improve the confidence in current wind flow predictions. We consider the incoming wind direction and magnitude as unknown parameters and perform a set of Reynolds-averaged Navier-Stokes simulations to build a polynomial chaos expansion response surface at each sensor location. We subsequently use an inverse ensemble Kalman filter to retrieve an estimate of the probability density function of the inflow parameters. Once these distributions are obtained, the forward analysis is repeated to obtain predictions for the flow field in the entire urban canopy, and the results are compared with the experimental data. We would like to acknowledge high-performance computing support from Yellowstone (ark:/85065/d7wd3xhc) provided by NCAR.
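A minimal sketch of the response-surface step: expensive solver runs are replaced at each sensor by a cheap polynomial fit over the uncertain inflow parameters, which can then be embedded in the Kalman update. The "truth" function, parameter ranges, and second-order basis below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def rans_stand_in(u, theta):
    """Stand-in for a RANS-predicted wind speed at one sensor (illustrative)."""
    return 0.6 * u * (1.0 + 0.3 * np.cos(np.radians(theta - 220.0)))

# Design of experiments over the uncertain inflow parameters.
u = rng.uniform(2.0, 12.0, 50)            # inflow magnitude, m/s
theta = rng.uniform(180.0, 270.0, 50)     # inflow direction, degrees
y = rans_stand_in(u, theta)

# Second-order polynomial basis in the two parameters, fit by least squares.
A = np.column_stack([np.ones_like(u), u, theta, u**2, theta**2, u * theta])
coeff, *_ = np.linalg.lstsq(A, y, rcond=None)

def surface(u, theta):
    """Cheap surrogate that can replace the solver inside an ensemble Kalman update."""
    return coeff @ np.array([1.0, u, theta, u**2, theta**2, u * theta])

print(surface(8.0, 225.0), rans_stand_in(8.0, 225.0))
```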
An investigation into the probabilistic combination of quasi-static and random accelerations
NASA Technical Reports Server (NTRS)
Schock, R. W.; Tuell, L. P.
1984-01-01
The development of design load factors for aerospace and aircraft components and experiment support structures, which are subject to simultaneous vehicle dynamic vibration (quasi-static) and acoustically generated random vibration, requires the selection of a combination methodology. Typically, the procedure is to define the quasi-static and the randomly generated responses separately and to arithmetically add or root-sum-square them to get combined accelerations. Since the combination of a probabilistic and a deterministic function yields a probabilistic function, a viable alternative approach is to determine the characteristics of the combined acceleration probability density function and select an appropriate percentile level for the combined acceleration. The following paper develops this mechanism and provides graphical data to select combined accelerations for the most popular percentile levels.
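A minimal numerical illustration of the three combination rules, assuming the random component is zero-mean Gaussian; all values are illustrative.

```python
import numpy as np
from scipy import stats

a_qs = 5.0            # quasi-static acceleration, g
sigma = 2.0           # RMS of the random vibration component, g
peak = 3.0 * sigma    # conventional 3-sigma random peak

print("arithmetic sum :", a_qs + peak)
print("root sum square:", np.hypot(a_qs, peak))

# Combined acceleration X = a_qs + N(0, sigma^2); pick a design percentile instead.
for p in (0.95, 0.99, 0.9987):
    print(f"{p:.2%} percentile:", stats.norm.ppf(p, loc=a_qs, scale=sigma))
```

The percentile approach makes explicit the conservatism that the arithmetic-sum rule hides: the sum corresponds to a very high percentile, while root-sum-square can fall below common design percentiles.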
Inference in the brain: Statistics flowing in redundant population codes
Pitkow, Xaq; Angelaki, Dora E
2017-01-01
It is widely believed that the brain performs approximate probabilistic inference to estimate causal variables in the world from ambiguous sensory data. To understand these computations, we need to analyze how information is represented and transformed by the actions of nonlinear recurrent neural networks. We propose that these probabilistic computations function by a message-passing algorithm operating at the level of redundant neural populations. To explain this framework, we review its underlying concepts, including graphical models, sufficient statistics, and message-passing, and then describe how these concepts could be implemented by recurrently connected probabilistic population codes. The relevant information flow in these networks will be most interpretable at the population level, particularly for redundant neural codes. We therefore outline a general approach to identify the essential features of a neural message-passing algorithm. Finally, we argue that to reveal the most important aspects of these neural computations, we must study large-scale activity patterns during moderately complex, naturalistic behaviors. PMID:28595050
Assessment of Optimal Flexibility in Ensemble of Frequency Responsive Loads
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kundu, Soumya; Hansen, Jacob; Lian, Jianming
2018-04-19
The potential of electrical loads for providing grid ancillary services is often limited by the uncertainties associated with load behavior. Knowledge of the uncertainties expected within a load control program would invariably yield better-informed control policies, opening up the possibility of extracting the maximal load control potential without affecting grid operations. In the context of frequency responsive load control, a probabilistic uncertainty analysis framework is presented to quantify the expected error between the target and actual load response under uncertainties in the load dynamics. A closed-form expression for the optimal demand flexibility, minimizing the expected error between actual and committed flexibility, is provided. Analytical results are validated through Monte Carlo simulations of ensembles of electric water heaters.
Probabilistic structural mechanics research for parallel processing computers
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.
1991-01-01
Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods was hampered by their computationally intense nature. Solution of PSM problems requires repeated analyses of structures that are often large, and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make solution of large scale PSM problems practical.
The Epistemic Representation of Information Flow Security in Probabilistic Systems
1995-06-01
The new characterization also means that our security criterion is expressible in a simpler logic and model. Multilevel security is considered for systems that make probabilistic choices (e.g., via a random number generator) during execution; such probabilistic choices are useful in a multilevel security context.
Ensemble reconstruction of spatio-temporal extreme low-flow events in France since 1871
NASA Astrophysics Data System (ADS)
Caillouet, Laurie; Vidal, Jean-Philippe; Sauquet, Eric; Devers, Alexandre; Graff, Benjamin
2017-06-01
The length of streamflow observations is generally limited to the last 50 years even in data-rich countries like France. It therefore offers too small a sample of extreme low-flow events to properly explore the long-term evolution of their characteristics and associated impacts. To overcome this limit, this work first presents a daily 140-year ensemble reconstructed streamflow dataset for a reference network of near-natural catchments in France. This dataset, called SCOPE Hydro (Spatially COherent Probabilistic Extended Hydrological dataset), is based on (1) a probabilistic precipitation, temperature, and reference evapotranspiration downscaling of the Twentieth Century Reanalysis over France, called SCOPE Climate, and (2) continuous hydrological modelling using SCOPE Climate as forcings over the whole period. This work then introduces tools for defining spatio-temporal extreme low-flow events. Extreme low-flow events are first locally defined through the sequent peak algorithm using a novel combination of a fixed threshold and a daily variable threshold. A dedicated spatial matching procedure is then established to identify spatio-temporal events across France. This procedure is furthermore adapted to the SCOPE Hydro 25-member ensemble to characterize in a probabilistic way unrecorded historical events at the national scale. Extreme low-flow events are described and compared in a spatially and temporally homogeneous way over 140 years on a large set of catchments. Results highlight well-known recent events like 1976 or 1989-1990, but also older and relatively forgotten ones like the 1878 and 1893 events. These results contribute to improving our knowledge of historical events and provide a selection of benchmark events for climate change adaptation purposes. Moreover, this study allows for further detailed analyses of the effect of climate variability and anthropogenic climate change on low-flow hydrology at the scale of France.
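A minimal sketch of the local event definition used above: the sequent peak algorithm accumulates a deficit whenever flow drops below a threshold, here built from a fixed and a daily variable threshold. The synthetic record and the particular way of combining the two thresholds are assumptions; the paper's exact combination may differ.

```python
import numpy as np

rng = np.random.default_rng(4)
# Two years of synthetic daily flow with a seasonal cycle and noise.
q = 10.0 + 3.0 * np.sin(np.linspace(0, 4 * np.pi, 730)) + rng.normal(0, 1.0, 730)

q_fixed = np.quantile(q, 0.05)                       # fixed low-flow threshold
doy = np.arange(730) % 365
q_daily = np.array([np.quantile(q[doy == d], 0.20) for d in range(365)])[doy]
threshold = np.minimum(q_fixed, q_daily)             # assumed combination of the two

# Sequent peak: the deficit grows when flow is below the threshold, drains otherwise.
deficit = np.zeros_like(q)
for t in range(1, len(q)):
    deficit[t] = max(0.0, deficit[t - 1] + threshold[t] - q[t])

events = deficit > 0
print("days inside low-flow events:", int(events.sum()))
print("maximum deficit volume:", deficit.max())
```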
Stochastic Controls on Nitrate Transport and Cycling
NASA Astrophysics Data System (ADS)
Botter, G.; Settin, T.; Alessi Celegon, E.; Marani, M.; Rinaldo, A.
2005-12-01
In this paper, the impact of nutrient inputs on basin-scale nitrate losses is investigated in a probabilistic framework by means of a continuous, geomorphologically based, Monte Carlo approach, which explicitly tackles the random character of the processes controlling nitrate generation, transformation and transport in river basins. This is obtained by coupling the stochastic generation of climatic and rainfall series with simplified hydrologic and biogeochemical models operating at the hillslope scale. Special attention is devoted to the spatial and temporal variability of nitrogen sources of agricultural origin and to the effect of temporally distributed rainfall fields on the ensuing nitrate leaching. The influence of random climatic variables on biogeochemical processes affecting the nitrogen cycle in the soil-water system (e.g. plant uptake, nitrification and denitrification, mineralization) is also considered. The approach developed has been applied to a catchment located in North-Eastern Italy and is used to provide probabilistic estimates of the NO_3 load transferred downstream, which is received and accumulated in the Venice lagoon. We found that the nitrogen load introduced by fertilization significantly affects the pdf of the nitrate content in the soil moisture, leading to prolonged risks of increased nitrate leaching from soil. The model allowed the estimation of the impact of different practices on the probabilistic structure of the basin-scale hydrologic and chemical response. As a result, the return period of the water volumes and of the nitrate loads released into the Venice lagoon has been linked directly to the ongoing climatic, pluviometric and agricultural regimes, with relevant implications for environmental planning activities aimed at achieving sustainable management practices.
Probabilistic fatigue life prediction of metallic and composite materials
NASA Astrophysics Data System (ADS)
Xiang, Yibing
Fatigue is one of the most common failure modes for engineering structures, such as aircraft, rotorcraft and aviation transports. Both metallic and composite materials are widely used and affected by fatigue damage. Huge uncertainties arise from material properties, measurement noise, imperfect models, future anticipated loads and environmental conditions. These uncertainties are critical issues for accurate remaining useful life (RUL) prediction of engineering structures in service. Probabilistic fatigue prognosis considering various uncertainties is of great importance for structural safety. The objective of this study is to develop probabilistic fatigue life prediction models for metallic and composite materials. A fatigue model based on crack growth analysis and the equivalent initial flaw size concept is proposed for metallic materials. Following this, the developed model is extended to include structural geometry effects (notch effect), environmental effects (corroded specimens) and manufacturing effects (shot peening effects). Due to their inhomogeneity and anisotropy, the fatigue model suitable for metallic materials cannot be directly applied to composite materials. A composite fatigue life prediction model is proposed based on a mixed-mode delamination growth model and a stiffness degradation law. After the development of deterministic fatigue models for metallic and composite materials, a general probabilistic life prediction methodology is developed. The proposed methodology employs an efficient inverse first-order reliability method (IFORM) for the uncertainty propagation in fatigue life prediction. An equivalent stress transformation has been developed to enhance the computational efficiency under realistic random amplitude loading. A systematic reliability-based maintenance optimization framework is proposed for fatigue risk management and mitigation of engineering structures.
NASA Astrophysics Data System (ADS)
Gao, Yi
The development and utilization of wind energy for satisfying electrical demand has received considerable attention in recent years due to its tremendous environmental, social and economic benefits, together with public support and government incentives. Electric power generation from wind energy behaves quite differently from that of conventional sources. The fundamentally different operating characteristics of wind energy facilities therefore affect power system reliability in a different manner than those of conventional systems. The reliability impact of such a highly variable energy source is an important aspect that must be assessed when the wind power penetration is significant. The focus of the research described in this thesis is on the utilization of state sampling Monte Carlo simulation in wind integrated bulk electric system reliability analysis and the application of these concepts in system planning and decision making. Load forecast uncertainty is an important factor in long range planning and system development. This thesis describes two approximate approaches developed to reduce the number of steps in a load duration curve which includes load forecast uncertainty, and to provide reasonably accurate generating and bulk system reliability index predictions. The developed approaches are illustrated by application to two composite test systems. A method of generating correlated random numbers with uniform distributions and a specified correlation coefficient in the state sampling method is proposed and used to conduct adequacy assessment in generating systems and in bulk electric systems containing correlated wind farms in this thesis. The studies described show that it is possible to use the state sampling Monte Carlo simulation technique to quantitatively assess the reliability implications associated with adding wind power to a composite generation and transmission system including the effects of multiple correlated wind sites. This is an important development as it permits correlated wind farms to be incorporated in large practical system studies without requiring excessive increases in computer solution time. The procedures described in this thesis for creating monthly and seasonal wind farm models should prove useful in situations where time period models are required to incorporate scheduled maintenance of generation and transmission facilities. There is growing interest in combining deterministic considerations with probabilistic assessment in order to evaluate the quantitative system risk and conduct bulk power system planning. A relatively new approach that incorporates deterministic and probabilistic considerations in a single risk assessment framework has been designated as the joint deterministic-probabilistic approach. The research work described in this thesis illustrates that the joint deterministic-probabilistic approach can be effectively used to integrate wind power in bulk electric system planning. The studies described in this thesis show that the application of the joint deterministic-probabilistic method provides more stringent results for a system with wind power than the traditional deterministic N-1 method because the joint deterministic-probabilistic technique is driven by the deterministic N-1 criterion with an added probabilistic perspective which recognizes the power output characteristics of a wind turbine generator.
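A minimal sketch of one ingredient named above, generating correlated uniform random numbers with a specified correlation coefficient, here via a Gaussian copula; the thesis' own adjustment procedure may differ.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

def correlated_uniforms(rho, n):
    # For a Gaussian copula the Pearson correlation of the resulting uniforms is
    # (6/pi)*arcsin(rho_z/2); invert that so the uniforms hit the target rho.
    rho_z = 2.0 * np.sin(np.pi * rho / 6.0)
    cov = np.array([[1.0, rho_z], [rho_z, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    return norm.cdf(z)                     # probability integral transform -> uniforms

u = correlated_uniforms(0.8, 100_000)
print("achieved correlation:", np.corrcoef(u[:, 0], u[:, 1])[0, 1])
```

In a state-sampling reliability study the two uniform streams would then drive the states of two correlated wind farms.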
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Li, Weixuan; Zeng, Lingzao
2016-06-01
The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos to approximate the original system, so that the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF can be even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and the active PCE basis functions are adaptively selected. The "restart" technique is used to eliminate the inconsistency between model parameters and states. The performance of RAPCKF is tested with numerical cases of unsaturated flow models. It is shown that RAPCKF is more accurate than EnKF at the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.
Comparison of Peak-Flow Estimation Methods for Small Drainage Basins in Maine
Hodgkins, Glenn A.; Hebson, Charles; Lombard, Pamela J.; Mann, Alexander
2007-01-01
Understanding the accuracy of commonly used methods for estimating peak streamflows is important because the designs of bridges, culverts, and other river structures are based on these flows. Different methods for estimating peak streamflows were analyzed for small drainage basins in Maine. For the smallest basins, with drainage areas of 0.2 to 1.0 square mile, nine peak streamflows from actual rainfall events at four crest-stage gaging stations were modeled by the Rational Method and the Natural Resources Conservation Service TR-20 method and compared to observed peak flows. The Rational Method had a root mean square error (RMSE) of -69.7 to 230 percent (which means that approximately two thirds of the modeled flows were within -69.7 to 230 percent of the observed flows). The TR-20 method had an RMSE of -98.0 to 5,010 percent. Both the Rational Method and TR-20 underestimated the observed flows in most cases. For small basins, with drainage areas of 1.0 to 10 square miles, modeled peak flows were compared to observed statistical peak flows with return periods of 2, 50, and 100 years for 17 streams in Maine and adjoining parts of New Hampshire. Peak flows were modeled by the Rational Method, the Natural Resources Conservation Service TR-20 method, U.S. Geological Survey regression equations, and the Probabilistic Rational Method. The regression equations were the most accurate method of computing peak flows in Maine for streams with drainage areas of 1.0 to 10 square miles, with an RMSE of -34.3 to 52.2 percent for 50-year peak flows. The Probabilistic Rational Method was the next most accurate method (-38.5 to 62.6 percent). The Rational Method (-56.1 to 128 percent) and particularly the TR-20 method (-76.4 to 323 percent) had much larger errors. Both the TR-20 and regression methods had similar numbers of underpredictions and overpredictions. The Rational Method overpredicted most peak flows, and the Probabilistic Rational Method tended to overpredict peak flows from the smaller (less than 5 square miles) drainage basins and underpredict peak flows from larger drainage basins. The results of this study are consistent with the most comprehensive analysis of observed and modeled peak streamflows in the United States, which analyzed statistical peak flows from 70 drainage basins in the Midwest and the Northwest.
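For reference, the Rational Method compared throughout is the one-line estimate Q = C·i·A, which in US customary units gives cubic feet per second directly when i is in inches per hour and A in acres; the runoff coefficient and storm intensity below are illustrative, not values from the study.

```python
def rational_method_peak(C, i_in_per_hr, area_acres):
    """Peak flow in cubic feet per second (the 1.008 unit-conversion factor is
    conventionally dropped because it is so close to 1)."""
    return C * i_in_per_hr * area_acres

# A 0.5 sq-mi (320-acre) rural basin, C = 0.35, design storm intensity 1.8 in/hr:
print(rational_method_peak(0.35, 1.8, 320.0), "cfs")
```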
Force Limited Vibration Testing: Computation C2 for Real Load and Probabilistic Source
NASA Astrophysics Data System (ADS)
Wijker, J. J.; de Boer, A.; Ellenbroek, M. H. M.
2014-06-01
To prevent over-testing of the test item during random vibration testing, Scharton proposed and discussed force-limited random vibration testing (FLVT) in a number of publications, in which the factor C2 is, besides the random vibration specification, the total mass and the turnover frequency of the load (test item), a very important parameter. A number of computational methods to estimate C2 are described in the literature, i.e. the simple and the complex two-degrees-of-freedom systems, STDFS and CTDFS, respectively. Both the STDFS and the CTDFS describe in a very reduced (simplified) manner the load and the source (the adjacent structure transferring the excitation forces to the test item, e.g. a spacecraft supporting an instrument). The motivation of this work is to establish a method for the computation of a realistic value of C2 to perform a representative force-limited random vibration test when the description of the adjacent structure (source) is more or less unknown. Marchand formulated a conservative estimate of C2 based on the maximum modal effective mass and damping of the test item (load), when no description of the supporting structure (source) is available [13]. Marchand discussed the formal description of obtaining C2, using the maximum PSD of the acceleration and the maximum PSD of the force, both at the interface between load and source, in combination with the apparent mass and total mass of the load. This method is very convenient for computing the factor C2; however, finite element models are needed to compute the spectra of the PSDs of both the acceleration and the force at the interface between load and source. Stevens presented the coupled systems modal approach (CSMA), where simplified asparagus patch models (parallel-oscillator representations) of load and source are connected, consisting of modal effective masses and the spring stiffnesses associated with the natural frequencies. When the random acceleration vibration specification is given, the CSMA method is suitable for computing the value of the parameter C2. When no mathematical model of the source can be made available, estimates of the value of C2 can be found in the literature. In this paper a probabilistic mathematical representation of the unknown source is proposed, such that the asparagus patch model of the source can be approximated. The computation of the value of C2 can then be done in conjunction with the CSMA method, knowing the apparent mass of the load and the random acceleration specification at the interface between load and source. Strength and stiffness design rules for spacecraft, instrumentation, units, etc., as mentioned in ECSS standards and handbooks, launch vehicle user's manuals, papers, books, etc., are followed. A probabilistic description of the design parameters is foreseen. As an example, a simple experiment has been worked out.
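A minimal sketch of the Marchand-style evaluation described above: C2 follows from the maximum PSDs of interface force and acceleration together with the total mass of the load. The spectra below are synthetic placeholders for finite element or test predictions.

```python
import numpy as np

f = np.linspace(20.0, 2000.0, 2000)        # frequency axis, Hz
M0 = 12.0                                  # total mass of the load, kg (assumed)

# Synthetic interface spectra in consistent SI units:
# S_aa in (m/s^2)^2/Hz, S_ff in N^2/Hz (both made-up shapes).
S_aa = 0.04 * (1.0 + 4.0 / (1.0 + ((f - 120.0) / 15.0) ** 2))
S_ff = (2.5 * M0) ** 2 * S_aa * np.exp(-f / 800.0)

# C2 = max S_FF(f) / (M0^2 * max S_AA(f))
C2 = S_ff.max() / (M0 ** 2 * S_aa.max())
print(f"C2 = {C2:.2f}")

# The force-limited test specification then notches the input so that
# S_FF(f) <= C2 * M0^2 * S_AA,spec(f).
```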
Daniel J. Miller; Kelly M. Burnett
2008-01-01
Debris flows are important geomorphic agents in mountainous terrains that shape channel environments and add a dynamic element to sediment supply and channel disturbance. Identification of channels susceptible to debris-flow inputs of sediment and organic debris, and quantification of the likelihood and magnitude of those inputs, are key tasks for characterizing...
The Extravehicular Suit Impact Load Attenuation Study for Use in Astronaut Bone Fracture Prediction
NASA Technical Reports Server (NTRS)
Lewandowski, Beth E.; Gilkey, Kelly M.; Sulkowski, Christina M.; Samorezov, Sergey; Myers, Jerry G.
2011-01-01
The NASA Integrated Medical Model (IMM) assesses the risk, including the likelihood and impact of occurrence, of all credible in-flight medical conditions. Fracture of the proximal femur is a traumatic injury that would likely result in loss of mission if it were to happen during spaceflight. Low-gravity exposure causes decreases in bone mineral density, which heightens the concern. Researchers at the NASA Glenn Research Center have quantified bone fracture probability during spaceflight with a probabilistic model. It was assumed that a pressurized extravehicular activity (EVA) suit would attenuate load during a fall, but no supporting data were available. The suit impact load attenuation study was performed to collect analogous data. METHODS: A pressurized EVA suit analog test bed was used to study how the offset, defined as the gap between the suit and the astronaut's body, the impact load magnitude, and the suit operating pressure affect the attenuation of impact load. The attenuation data were incorporated into the probabilistic model of bone fracture as a function of these factors, replacing a load attenuation value based on commercial hip protectors. RESULTS: Load attenuation was more dependent on offset than on pressurization or load magnitude, especially at small offsets. Load attenuation factors for offsets between 0.1 - 1.5 cm were 0.69 +/- 0.15, 0.49 +/- 0.22 and 0.35 +/- 0.18 for mean impact forces of 4827, 6400 and 8467 N, respectively. Load attenuation factors for offsets of 2.8 - 5.3 cm were 0.93 +/- 0.2, 0.94 +/- 0.1 and 0.84 +/- 0.5, for the same mean impact forces. Reductions were observed in the 95th percentile confidence interval of the bone fracture probability predictions. CONCLUSIONS: The reduction in uncertainty and improved confidence in bone fracture predictions increased the fidelity and credibility of the fracture risk model and its benefit to mission design and operational decisions.
Valente, Giordano; Taddei, Fulvia; Jonkers, Ilse
2013-09-03
The weakness of the hip abductor muscles is related to lower-limb joint osteoarthritis, and joint overloading may increase the risk of disease progression. The relationship between muscle strength, structural joint deterioration and joint loading makes the latter an important parameter in the study of the onset and follow-up of the disease. Since the relationship between hip abductor weakness and joint loading still remains an open question, the purpose of this study was to adopt a probabilistic modeling approach to give insight into how weakness of the hip abductor muscles, to the extent that normal gait could remain unaltered, affects ipsilateral joint contact forces. A generic musculoskeletal model was scaled to each healthy subject included in the study, and the maximum force-generating capacity of each hip abductor muscle in the model was perturbed to evaluate how all physiologically possible configurations of hip abductor weakness affected the joint contact forces during walking. In general, the muscular system was able to compensate for abductor weakness. The reduced force-generating capacity of the abductor muscles affected joint contact forces to a mild extent, with 50th percentile mean differences up to 0.5 BW (maximum 1.7 BW). There were greater increases in the peak knee joint loads than in loads at the hip or ankle. The gluteus medius, particularly the anterior compartment, was the abductor muscle with the most influence on hip and knee loads. Further studies should assess whether these increases in joint loading may affect the initiation and progression of osteoarthritis.
Bredbenner, Todd L.; Eliason, Travis D.; Francis, W. Loren; McFarland, John M.; Merkle, Andrew C.; Nicolella, Daniel P.
2014-01-01
Cervical spinal injuries are a significant concern in all trauma injuries. Recent military conflicts have demonstrated the substantial risk of spinal injury for the modern warfighter. Finite element models used to investigate injury mechanisms often fail to examine the effects of variation in geometry or material properties on mechanical behavior. The goals of this study were to model geometric variation for a set of cervical spines, to extend this model to a parametric finite element model, and, as a first step, to validate the parametric model against experimental data for low-loading conditions. Individual finite element models were created using cervical spine (C3–T1) computed tomography data for five male cadavers. Statistical shape modeling (SSM) was used to generate a parametric finite element model incorporating variability of spine geometry, and soft-tissue material property variation was also included. The probabilistic loading response of the parametric model was determined under flexion-extension, axial rotation, and lateral bending and validated by comparison to experimental data. Based on qualitative and quantitative comparison of the experimental loading response and model simulations, we suggest that the model performs adequately under relatively low-level loading conditions in multiple loading directions. In conclusion, SSM methods coupled with finite element analyses within a probabilistic framework, along with the ability to statistically validate the overall model performance, provide innovative and important steps toward describing the differences in vertebral morphology, spinal curvature, and variation in material properties. We suggest that these methods, with additional investigation and validation under injurious loading conditions, will lead to understanding and mitigating the risks of injury in the spine and other musculoskeletal structures. PMID:25506051
NASA Astrophysics Data System (ADS)
Lee, D. B.; Jerolmack, D. J.
2017-12-01
Bed-load transport is notoriously unpredictable, in part due to stochastic fluctuations in grain entrainment and deposition. A general statistical mechanical framework has been proposed by Furbish and colleagues to formally derive average bed-load flux from grain-scale motion, and its application requires an intimate understanding of the probabilistic motion of individual grains. Recent work by Ancey et al. suggests that, near threshold, particles are entrained collectively. If so, understanding the scales of correlation is a necessary step to complete the probabilistic framework describing bed-load flux. We perform a series of experiments in a steep-sloped channel that directly quantifies fluctuations in grain motion as a function of the feed rate of particles (marbles). As the feed rate is increased, the necessary averaging time is decreased (i.e. transport grows less variable in time). Collective grain motion is defined as spatially clustered movement of several grains at once. We find that entrainment of particles is generally collective, but that these entrained particles deposit independently of each other. The size distribution of collective motion events follows an exponential decay that is consistent across sediment feed rates. To first order, changing feed rate does not change the kinematics of mobile grains, just the frequency of motion. For transport within a given region of the bed, we show that the total displacement of all entrained grains is proportional to the kinetic energy deposited into the bed by impacting grains. Individual grain-bed impacts are the likely cause of both collective and individual grain entrainment. The picture that emerges is similar to generic avalanching dynamics in sandpiles: "avalanches" (collective entrainment events) of a characteristic size relax with a characteristic timescale regardless of feed rate, but the frequency of avalanches increases in proportion to the feed rate. At high enough feed rates the avalanches merge, leading to progressively smoother and continuous transport. As most bed-load transport occurs in the intermittent regime, the length scale of collective entrainment should be considered a fundamental addition to a probabilistic framework that hopes to infer flux from grain motion.
Probabilistic model of bridge vehicle loads in port area based on in-situ load testing
NASA Astrophysics Data System (ADS)
Deng, Ming; Wang, Lei; Zhang, Jianren; Wang, Rei; Yan, Yanhong
2017-11-01
Vehicle load is an important factor affecting the safety and usability of bridges. A statistical analysis is carried out in this paper to investigate the vehicle load data of the Tianjin Haibin highway in Tianjin port, China, which were collected by a Weigh-in-Motion (WIM) system. Following this, the effect of the vehicle load on a test bridge is calculated and then compared with the result calculated according to HL-93 (AASHTO LRFD). Results show that the overall vehicle load follows a weighted sum of four normal distributions. The maximum vehicle load during the design reference period follows a Type I extreme value distribution. The vehicle load effect also follows a weighted sum of four normal distributions, and the standard value of the vehicle load is recommended as 1.8 times the value calculated according to HL-93.
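A minimal sketch of the distribution-fitting step, using a four-component Gaussian mixture on synthetic gross vehicle weights; sklearn's expectation-maximization fit stands in for whatever estimation procedure the paper used.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
# Synthetic gross vehicle weights (kN): cars, vans, rigid trucks, loaded semis.
w = np.concatenate([rng.normal(20, 4, 4000), rng.normal(60, 10, 2500),
                    rng.normal(150, 25, 2000), rng.normal(300, 40, 1500)])

gmm = GaussianMixture(n_components=4, random_state=0).fit(w.reshape(-1, 1))
for pi, mu, var in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"weight={pi:.2f}  mean={mu:6.1f} kN  std={np.sqrt(var):5.1f} kN")
```

Each recovered component corresponds to one vehicle class, which is why the overall load histogram appears as a weighted sum of normals.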
Saturation Length of Erodible Sediment Beds Subject to Shear Flow
NASA Astrophysics Data System (ADS)
Casler, D. M.; Kahn, B. P.; Furbish, D. J.; Schmeeckle, M. W.
2016-12-01
We examine the initial growth and wavelength selection of sand ripples based on probabilistic formulations of the flux and the Exner equation. Current formulations of this problem as a linear stability analysis appeal to the idea of a saturation length (the lag between the bed stress and the flux) as a key stabilizing influence leading to selection of a finite wavelength. We present two contrasting formulations. The first is based on the Fokker-Planck approximation of the divergence form of the Exner equation, and thus involves particle diffusion associated with variations in particle activity, in addition to the conventionally assumed advective term. The role of a saturation length associated with the particle activity is similar to previous analyses. However, particle diffusion provides an attenuating influence on the growth of small wavelengths, independent of a saturation length, and is thus a sufficient, if not necessary, condition contributing to selection of a finite wavelength. The second formulation is based on the entrainment form of the Exner equation. As a precise, probabilistic formulation of conservation, this form of the Exner equation does not distinguish between advection and diffusion, and, because it directly accounts for all particle motions via a convolution of the distribution of particle hop distances, it pays no attention to the idea of a saturation length. The formulation and resulting description of initial ripple growth and wavelength selection thus inherently subsume the effects embodied in the ideas of advection, diffusion, and a saturation length as used in other formulations. Moreover, the formulation does not distinguish between bed load and suspended load, and is thus broader in application. The analysis reveals that the length scales defined by the distribution of hop distances are more fundamental than the saturation length in determining the initial growth or decay of bedforms. Formulations involving the saturation length coincide with the special case of an exponential distribution of hop distance, where the saturation length is equal to the mean hop distance.
Probabilistic Finite Elements (PFEM) structural dynamics and fracture mechanics
NASA Technical Reports Server (NTRS)
Liu, Wing-Kam; Belytschko, Ted; Mani, A.; Besterfield, G.
1989-01-01
The purpose of this work is to develop computationally efficient methodologies for assessing the effects of randomness in loads, material properties, and other aspects of a problem by a finite element analysis. The resulting group of methods is called probabilistic finite elements (PFEM). The overall objective of this work is to develop methodologies whereby the lifetime of a component can be predicted, accounting for the variability in the material and geometry of the component, the loads, and other aspects of the environment; and the range of response expected in a particular scenario can be presented to the analyst in addition to the response itself. Emphasis has been placed on methods which are not statistical in character; that is, they do not involve Monte Carlo simulations. The reason for this choice of direction is that Monte Carlo simulations of complex nonlinear response require a tremendous amount of computation. The focus of efforts so far has been on nonlinear structural dynamics. However, in the continuation of this project, emphasis will be shifted to probabilistic fracture mechanics so that the effect of randomness in crack geometry and material properties can be studied interactively with the effect of random load and environment.
Research on a Method of Geographical Information Service Load Balancing
NASA Astrophysics Data System (ADS)
Li, Heyuan; Li, Yongxing; Xue, Zhiyong; Feng, Tao
2018-05-01
With the development of geographical information service technologies, how to achieve intelligent scheduling and high-concurrency access to geographical information service resources through load balancing is a focal point of current study. This paper presents a dynamic load-balancing algorithm. In the algorithm, each type of geographical information service is matched with a corresponding server group; the RED algorithm is then combined with a double-threshold method to judge the load state of each server node; finally, services are scheduled with weighted probabilities over a given period. Finally, an experimental system is built on a server cluster, which demonstrates the effectiveness of the method presented in this paper.
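A minimal sketch of such a scheme: nodes are screened with two load thresholds, and a request is dispatched with probability proportional to spare capacity. The threshold values and the weight formula are assumptions, not the paper's exact algorithm.

```python
import random

LOW, HIGH = 0.6, 0.9   # double thresholds on node load (fractions of capacity)

def pick_node(loads):
    """loads: dict node -> current load fraction. Returns the chosen node."""
    light = {n: l for n, l in loads.items() if l < LOW}
    candidates = light or {n: l for n, l in loads.items() if l < HIGH}
    if not candidates:                       # every node overloaded: least-loaded fallback
        return min(loads, key=loads.get)
    nodes = list(candidates)
    weights = [1.0 - candidates[n] for n in nodes]   # weight by spare capacity
    return random.choices(nodes, weights=weights, k=1)[0]

# Dispatch a request among three (hypothetical) map-service nodes:
print(pick_node({"gis-a": 0.35, "gis-b": 0.72, "gis-c": 0.55}))
```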
NASA Astrophysics Data System (ADS)
Moncoulon, D.; Labat, D.; Ardon, J.; Onfroy, T.; Leblois, E.; Poulard, C.; Aji, S.; Rémy, A.; Quantin, A.
2013-07-01
The analysis of flood exposure at the national scale for the French insurance market must combine the generation of a probabilistic event set of all possible, but not yet observed, flood situations with hazard and damage modeling. In this study, the hazard and damage models are calibrated on a 1995-2012 historical event set, both for hazard results (river flow, flooded areas) and for loss estimates. Thus, the uncertainties in the deterministic estimation of a single event loss are known before a probabilistic event set is simulated. To account for at least 90% of the insured flood losses, the probabilistic event set must combine river overflow (small and large catchments) with surface runoff due to heavy rainfall on the slopes of the watershed. Indeed, internal studies of the CCR claims database have shown that approximately 45% of the insured flood losses are located inside floodplains and 45% outside; the remaining 10% are due to sea-surge floods and groundwater rise. In this approach, two independent probabilistic methods are combined to create a single flood loss distribution: the generation of fictive river flows based on the historical records of the river gauge network, and the generation of fictive rain fields on small catchments, calibrated on the 1958-2010 Météo-France rain database SAFRAN. All the events in the probabilistic event sets are simulated with the deterministic model. This hazard and damage distribution is used to simulate flood losses at the national scale for an insurance company (MACIF) and to generate flood areas associated with hazard return periods. The flood maps cover river overflow and surface water runoff. These maps are validated by comparison with address-located claim data on a small catchment (the downstream Argens).
Probabilistic Flexural Fatigue in Plain and Fiber-Reinforced Concrete
Ríos, José D.
2017-01-01
The objective of this work is two-fold. First, we attempt to fit the experimental data on the flexural fatigue of plain and fiber-reinforced concrete with a probabilistic model (Saucedo, Yu, Medeiros, Zhang and Ruiz, Int. J. Fatigue, 2013, 48, 308–318). This model was validated for compressive fatigue at various loading frequencies, but not for flexural fatigue. Since the model is probabilistic, it is not necessarily related to the specific mechanism of fatigue damage, but rather generically explains the fatigue distribution in concrete (plain or reinforced with fibers) for damage under compression, tension or flexion. In this work, more than 100 series of flexural fatigue tests in the literature are fit with excellent results. Since the distribution of monotonic tests was not available in the majority of cases, a two-step procedure is established to estimate the model parameters based solely on fatigue tests. The coefficient of regression was more than 0.90 except for particular cases where not all tests were strictly performed under the same loading conditions, which confirms the applicability of the model to flexural fatigue data analysis. Moreover, the model parameters are closely related to fatigue performance, which demonstrates the predictive capacity of the model. For instance, the scale parameter is related to flexural strength, which improves with the addition of fibers. Similarly, fiber increases the scattering of fatigue life, which is reflected by the decreasing shape parameter. PMID:28773123
Probabilistic Flexural Fatigue in Plain and Fiber-Reinforced Concrete.
Ríos, José D; Cifuentes, Héctor; Yu, Rena C; Ruiz, Gonzalo
2017-07-07
The objective of this work is two-fold. First, we attempt to fit the experimental data on the flexural fatigue of plain and fiber-reinforced concrete with a probabilistic model (Saucedo, Yu, Medeiros, Zhang and Ruiz, Int. J. Fatigue, 2013, 48, 308-318). This model was validated for compressive fatigue at various loading frequencies, but not for flexural fatigue. Since the model is probabilistic, it is not necessarily related to the specific mechanism of fatigue damage, but rather generically explains the fatigue distribution in concrete (plain or reinforced with fibers) for damage under compression, tension or flexion. In this work, more than 100 series of flexural fatigue tests in the literature are fit with excellent results. Since the distribution of monotonic tests was not available in the majority of cases, a two-step procedure is established to estimate the model parameters based solely on fatigue tests. The coefficient of regression was more than 0.90 except for particular cases where not all tests were strictly performed under the same loading conditions, which confirms the applicability of the model to flexural fatigue data analysis. Moreover, the model parameters are closely related to fatigue performance, which demonstrates the predictive capacity of the model. For instance, the scale parameter is related to flexural strength, which improves with the addition of fibers. Similarly, fiber increases the scattering of fatigue life, which is reflected by the decreasing shape parameter.
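As a small illustration of the role of the scale and shape parameters discussed in both records above, the sketch below fits a two-parameter Weibull distribution to a synthetic sample of fatigue lives at a single load level; the Saucedo et al. model itself is richer than this.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic fatigue lives (cycles) drawn from a Weibull with known parameters.
lives = stats.weibull_min.rvs(c=1.4, scale=2.0e5, size=60, random_state=rng)

# Two-parameter maximum-likelihood fit (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(lives, floc=0.0)
print(f"shape = {shape:.2f}, scale (characteristic life) = {scale:.3g} cycles")
# A smaller shape parameter means wider scatter of fatigue life, matching the
# reported effect of fiber additions; a larger scale means longer typical life.
```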
Classic articles and workbook: EPRI monographs on simulation of electric power production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stremel, J.P.
1991-12-01
This monograph republishes several articles, including a seminal one on probabilistic production costing for electric power generation. That article is given in the original French along with an English translation. Another article, written by R. Booth, gives a popular explanation of the theory, and a workbook by B. Manhire is included that carries through a simple example step by step. The classical analysis of non-probabilistic generator dispatch by L.K. Kirchmayer is republished along with an introductory essay by J.P. Stremel that puts the monograph material in perspective. The article in French was written by H. Baleriaux, E. Jamoulle, and Fr. Linard de Guertechin and first published in Brussels in 1967. It derived a method for calculating the expected value of production costs by modifying a load duration curve through the use of probability factors that account for unplanned random generator outages. Although the paper showed how pumped storage plants could be included and how linear programming could be applied, the convolution technique used in the probabilistic calculations is the part most widely applied. The tutorial paper by Booth was written in a light style, and its lucidity helped popularize the method. The workbook by Manhire also shows how the calculation can be shortened significantly by using cumulants to approximate the load duration curve.
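A minimal sketch of the convolution technique: each dispatched unit's forced outage rate (FOR) modifies the equivalent load duration curve seen by the units loaded after it. The load profile and unit data are invented, and real implementations add many refinements.

```python
import numpy as np

load_mw = np.array([400, 550, 700, 820, 900, 750, 600, 480] * 3)   # 24 hourly loads
x = np.arange(0, 1201)                                             # MW axis, 1 MW steps
eldc = np.array([(load_mw > v).mean() for v in x])                 # P(load > x)

units = [(300, 0.05, 12.0), (250, 0.08, 18.0), (400, 0.10, 25.0)]  # (MW, FOR, $/MWh)
hours, lp = len(load_mw), 0
for cap, q, cost in units:
    # Expected energy from this unit: availability times the area under the
    # equivalent curve over its loading band.
    energy = (1 - q) * np.trapz(eldc[lp:lp + cap + 1]) * hours
    print(f"{cap} MW unit: {energy:8.0f} MWh expected, {energy * cost:10.0f} $")
    # Convolve in the unit's outages: with probability q its band of load
    # "falls through" to the units loaded after it.
    shifted = np.ones_like(eldc)           # ELDC(x - cap), taken as 1 below cap
    shifted[cap:] = eldc[:-cap]
    eldc = (1 - q) * eldc + q * shifted
    lp += cap
```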
NASA Astrophysics Data System (ADS)
Rutkowska, Agnieszka; Kohnová, Silvia; Banasik, Kazimierz
2018-04-01
Probabilistic properties of the dates of winter, summer and annual maximum flows were studied using circular statistics in three catchments differing in topographic conditions: a lowland, a highland and a mountainous catchment. Circular measures of location and dispersion were applied to the long-term samples of dates of maxima. A mixture of von Mises distributions was assumed as the theoretical distribution function of the date of the winter, summer and annual maximum flow. The number of components was selected on the basis of the corrected Akaike Information Criterion, and the parameters were estimated by the maximum likelihood method. The goodness of fit was assessed using both the correlation between quantiles and versions of Kuiper's and Watson's tests. Results show that the number of components varied between catchments and differed for seasonal and annual maxima. Differences between catchments in circular characteristics were explained by climatic factors such as precipitation and temperature. Further studies may include grouping catchments based on the similarity between circular distribution functions, and the linkage between dates of maximum precipitation and maximum flow.
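A minimal sketch of the circular location and dispersion measures: dates are mapped to angles on the yearly circle, and the circular mean date and mean resultant length are computed. The sample dates are invented.

```python
import numpy as np

doy = np.array([12, 35, 47, 350, 360, 28, 75, 40, 355, 20])   # days of year of maxima
theta = 2 * np.pi * doy / 365.25                              # dates -> angles

C, S = np.cos(theta).mean(), np.sin(theta).mean()
mean_dir = np.arctan2(S, C) % (2 * np.pi)                     # circular mean direction
R = np.hypot(C, S)                                            # mean resultant length

print(f"circular mean date: day {mean_dir * 365.25 / (2 * np.pi):.0f}")
print(f"concentration R = {R:.2f} (1 = all maxima on the same date)")
```

Note that an arithmetic mean of these dates would land absurdly in mid-year, while the circular mean correctly sits near the new year, which is exactly why circular statistics are needed for dates of maxima.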
NASA Technical Reports Server (NTRS)
Bast, Callie C.; Jurena, Mark T.; Godines, Cody R.; Chamis, Christos C. (Technical Monitor)
2001-01-01
This project included both research and education objectives. The goal of this project was to advance innovative research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction for improved reliability and safety of structural components of aerospace and aircraft propulsion systems. Research and education partners included Glenn Research Center (GRC) and Southwest Research Institute (SwRI) along with the University of Texas at San Antonio (UTSA). SwRI enhanced the NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) code and provided consulting support for NESSUS-related activities at UTSA. NASA funding supported three undergraduate students, two graduate students, a summer course instructor, and the Principal Investigator. Matching funds from UTSA provided for the purchase of additional equipment for the enhancement of the Advanced Interactive Computational SGI Lab established during the first year of this Partnership Award to conduct the probabilistic finite element summer courses. The research portion of this report presents the culmination of work performed through the use of the probabilistic finite element program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) and an embedded Material Strength Degradation (MSD) model. Probabilistic structural analysis provided for the quantification of uncertainties associated with the design, thus enabling increased system performance and reliability. The structure examined was a Space Shuttle Main Engine (SSME) fuel turbopump blade. The blade material analyzed was Inconel 718, since the MSD model was previously calibrated for this material. Reliability analysis encompassing the effects of high temperature and high cycle fatigue yielded a reliability value of 0.99978 using a fully correlated random field for the blade thickness. The reliability did not change significantly for a change in distribution type, except for a change from Gaussian to Weibull for the centrifugal load. The sensitivity factors determined to be most dominant were the centrifugal loading and the initial strength of the material. These two sensitivity factors were influenced most by a change in distribution type from Gaussian to Weibull. The education portion of this report describes short-term and long-term educational objectives. Such objectives serve to integrate the research and education components of this project, resulting in opportunities for ethnic minority students, principally Hispanic. The primary vehicle to facilitate such integration was the teaching of two probabilistic finite element method courses to undergraduate engineering students in the summers of 1998 and 1999.
Characterizing the uncertainty in holddown post load measurements
NASA Technical Reports Server (NTRS)
Richardson, J. A.; Townsend, J. S.
1993-01-01
In order to understand unexpectedly erratic load measurements in the launch-pad supports for the space shuttle, the sensitivities of the load cells in the supports were analyzed using simple probabilistic techniques. NASA engineers use the loads in the shuttle's supports to calculate critical stresses in the shuttle vehicle just before lift-off. The support loads are measured with 'load cells' which are actually structural components of the mobile launch platform which have been instrumented with strain gauges. Although these load cells adequately measure vertical loads, the horizontal load measurements have been erratic. The load measurements were simulated in this study using Monte Carlo simulation procedures. The simulation studies showed that the support loads are sensitive to small deviations in strain and calibration. In their current configuration, the load cells will not measure loads with sufficient accuracy to reliably calculate stresses in the shuttle vehicle. A simplified model of the holddown post (HDP) load measurement system was used to study the effect on load measurement accuracy for several factors, including load point deviations, gauge heights, and HDP geometry.
Probabilistic structural analysis of a truss typical for space station
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.
1990-01-01
A three-bay space cantilever truss is probabilistically evaluated using the computer code NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) to identify and quantify the uncertainties, and respective sensitivities, associated with corresponding uncertainties in the primitive variables (structural, material, and loads parameters) that define the truss. The distribution of each of these primitive variables is described in terms of one of several available distributions, such as the Weibull, exponential, normal, and log-normal. The cumulative distribution functions (CDFs) for the response functions considered, and the sensitivities associated with the primitive variables for a given response, are investigated. These sensitivities help in determining the dominant primitive variables for that response.
Probabilistic Analysis of Structural Member from Recycled Aggregate Concrete
NASA Astrophysics Data System (ADS)
Broukalová, I.; Šeps, K.
2017-09-01
The paper addresses the topic of sustainable building, specifically the recycling of waste rubble concrete from demolition. Considering the demands of maximising recycled aggregate use and minimising cement consumption, a composite made from recycled concrete aggregate was proposed. The objective of the presented investigations was to verify the feasibility of the recycled-aggregate, cement-based fibre-reinforced composite in a structural member. The reliability of a wall made from recycled aggregate fibre-reinforced composite was assessed in a probabilistic analysis of the load-bearing capacity of the wall. The applicability of recycled aggregate fibre-reinforced concrete in structural applications was demonstrated. The outcomes highlight the issue of the high scatter of material parameters of recycled aggregate concretes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernstein, Andrey; Wang, Cong; Dall'Anese, Emiliano; ...
2018-01-01
This paper considers unbalanced multiphase distribution systems with generic topology and different load models, and extends the Z-bus iterative load-flow algorithm based on a fixed-point interpretation of the AC load-flow equations. Explicit conditions for existence and uniqueness of load-flow solutions are presented. These conditions also guarantee convergence of the load-flow algorithm to the unique solution. The proposed methodology is applicable to generic systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. Further, a sufficient condition for the non-singularity of the load-flow Jacobian is proposed. Finally, linear load-flow models are derived, and their approximation accuracy is analyzed. Theoretical results are corroborated through experiments on IEEE test feeders.
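The fixed-point view underlying the Z-bus algorithm can be illustrated in the single-phase case. The sketch below is illustrative only (the paper treats unbalanced multiphase networks with wye and delta connections); the feeder data and function names are invented for the example, with loads entered as negative power injections.

```python
import numpy as np

def zbus_fixed_point(Y_red, Y_slack, v0, s_inj, tol=1e-10, max_iter=100):
    """Single-phase Z-bus iteration V = w + Z*conj(s_inj/V); loads are negative injections."""
    Z = np.linalg.inv(Y_red)        # impedance matrix of the non-slack network
    w = -Z @ (Y_slack * v0)         # zero-load voltage profile
    v = w.copy()
    for _ in range(max_iter):
        v_new = w + Z @ np.conj(s_inj / v)   # fixed-point map
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    raise RuntimeError("fixed point did not converge")

# Two-bus toy feeder: slack bus at 1.0 p.u., one line, one constant-power load.
y_line = 1.0 / (0.01 + 0.05j)
v = zbus_fixed_point(Y_red=np.array([[y_line]]),
                     Y_slack=np.array([-y_line]),
                     v0=1.0 + 0.0j,
                     s_inj=np.array([-(0.5 + 0.2j)]))  # 0.5 + j0.2 p.u. load
print(f"|V_load| = {abs(v[0]):.4f} p.u.")
```

When the load is small relative to the network stiffness, this map is a contraction, which is the intuition behind the existence and uniqueness conditions the paper formalizes.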
Probabilistic Fracture Mechanics Analysis of the Orbiter's LH2 Feedline Flowliner
NASA Technical Reports Server (NTRS)
Bonacuse, Peter J. (Technical Monitor); Hudak, Stephen J., Jr.; Huyse, Luc; Chell, Graham; Lee, Yi-Der; Riha, David S.; Thacker, Ben; McClung, Craig; Gardner, Brian; Leverant, Gerald R.;
2005-01-01
Work performed by Southwest Research Institute (SwRI) as part of an Independent Technical Assessment (ITA) for the NASA Engineering and Safety Center (NESC) is summarized. The ITA goal was to establish a flight rationale in light of a history of fatigue cracking due to flow-induced vibrations in the feedline flowliners that supply liquid hydrogen to the space shuttle main engines. Prior deterministic analyses using worst-case assumptions predicted failure in a single flight. The current work formulated statistical models for dynamic loading and cryogenic fatigue crack growth properties, instead of using worst-case assumptions. Weight function solutions for bivariant stressing were developed to determine accurate crack driving forces. Monte Carlo simulations showed that low flowliner probabilities of failure (POF = 0.001 to 0.0001) are achievable, provided pre-flight inspections for cracks are performed with adequate probability of detection (POD), specifically 20/75 mils with 50%/99% POD. Measurements to confirm the assumed POD curves are recommended. Since the computed POFs are very sensitive to the cyclic loads/stresses, and the analysis of strain gage data revealed inconsistencies with the previous assumption of a single dominant vibration mode, further work to reconcile this difference is recommended. It is possible that the unaccounted-for vibrational modes in the flight spectra could increase the computed POFs.
A probabilistic model of a porous heat exchanger
NASA Technical Reports Server (NTRS)
Agrawal, O. P.; Lin, X. A.
1995-01-01
This paper presents a probabilistic one-dimensional finite element model for heat transfer processes in porous heat exchangers. The Galerkin approach is used to develop the finite element matrices. Some of the submatrices are asymmetric due to the presence of the flow term. The Neumann expansion is used to write the temperature distribution as a series of random variables, and the expectation operator is applied to obtain the mean and deviation statistics. To demonstrate the feasibility of the formulation, a one-dimensional model of the heat transfer phenomenon in superfluid flow through porous media is considered. Results of this formulation agree well with Monte-Carlo simulations and analytical solutions. Although the numerical experiments are confined to parametric random variables, a formulation is presented to account for random spatial variations.
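The Neumann-expansion idea can be demonstrated on a problem much simpler than the porous-exchanger model above. In this sketch (all parameters invented), a 1-D conduction stiffness matrix K0 is perturbed by a random dK, and the solution is approximated by the truncated series (I - A + A^2) K0^{-1} f with A = K0^{-1} dK; the paper applies the expectation operator analytically, whereas sampling is used here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_el, n_nd = 19, 20                      # 1-D rod, 2-node linear elements
f = np.full(n_nd, 0.05); f[0] = 0.0      # distributed source, T(0) = 0 fixed

def assemble(k):
    K = np.zeros((n_nd, n_nd))
    for e in range(n_el):
        K[e:e+2, e:e+2] += k[e] * np.array([[1.0, -1.0], [-1.0, 1.0]])
    K[0, :] = 0.0; K[0, 0] = 1.0         # Dirichlet condition at node 0
    return K

k_mean = 1.0
K0 = assemble(np.full(n_el, k_mean))
K0_inv = np.linalg.inv(K0)
T0 = K0_inv @ f

samples = []
for _ in range(2000):
    k = rng.normal(k_mean, 0.1, n_el)    # random elementwise conductivity
    A = K0_inv @ (assemble(k) - K0)
    samples.append(T0 - A @ T0 + A @ (A @ T0))   # truncated Neumann series

samples = np.array(samples)
print("tip temperature mean/std:", samples[:, -1].mean(), samples[:, -1].std())
```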
NASA Technical Reports Server (NTRS)
Onwubiko, Chin-Yere; Onyebueke, Landon
1996-01-01
Structural failure is rarely a "sudden death" type of event; such sudden failures occur only under abnormal loadings like bomb or gas explosions and very strong earthquakes. In most cases, structures fail due to damage accumulated under normal loadings such as wind loads and dead and live loads. The consequences of cumulative damage affect the reliability of the surviving components and finally cause collapse of the system. The cumulative damage effects on system reliability under time-invariant loadings are of practical interest in structural design and are therefore investigated in this study. The scope of this study is, however, restricted to the consideration of damage accumulation as the increase in the number of failed components due to the violation of their strength limits.
Reliability and Creep/Fatigue Analysis of a CMC Component
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Mital, Subodh K.; Gyekenyesi, John Z.; Gyekenyesi, John P.
2007-01-01
High temperature ceramic matrix composites (CMC) are being explored as viable candidate materials for hot section gas turbine components. These advanced composites can potentially lead to reduced weight and enable higher operating temperatures requiring less cooling; thus leading to increased engine efficiencies. There is a need for convenient design tools that can accommodate various loading conditions and material data with their associated uncertainties to estimate the minimum predicted life as well as the failure probabilities of a structural component. This paper presents a review of the life prediction and probabilistic analyses performed for a CMC turbine stator vane. A computer code, NASALife, is used to predict the life of a 2-D woven silicon carbide fiber reinforced silicon carbide matrix (SiC/SiC) turbine stator vane due to a mission cycle which induces low cycle fatigue and creep. The output from this program includes damage from creep loading, damage due to cyclic loading and the combined damage due to the given loading cycle. Results indicate that the trends predicted by NASALife are as expected for the loading conditions used for this study. In addition, a combination of woven composite micromechanics, finite element structural analysis and Fast Probability Integration (FPI) techniques has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Input variables causing scatter are identified and ranked based upon their sensitivity magnitude. Results indicate that reducing the scatter in proportional limit strength of the vane material has the greatest effect in improving the overall reliability of the CMC vane.
Economic Dispatch for Microgrid Containing Electric Vehicles via Probabilistic Modeling: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Yin; Gao, Wenzhong; Momoh, James
In this paper, an economic dispatch model with probabilistic modeling is developed for a microgrid. The electric power supply in a microgrid consists of conventional power plants and renewable energy power plants, such as wind and solar power plants. Because of the fluctuation in the output of solar and wind power plants, an empirical probabilistic model is developed to predict their hourly output. According to the different characteristics of wind and solar power plants, the parameters of the probabilistic distribution are further adjusted individually for both. On the other hand, with the growing trend in plug-in hybrid electric vehicles (PHEVs), an integrated microgrid system must also consider the impact of PHEVs. The charging loads from PHEVs as well as the discharging output via the vehicle-to-grid (V2G) method can greatly affect the economic dispatch for all of the micro energy sources in a microgrid. This paper presents an optimization method for economic dispatch in a microgrid considering conventional power plants, renewable power plants, and PHEVs. The simulation results reveal that PHEVs with V2G capability can be an indispensable supplement in a modern microgrid.
A Probabilistic Design Method Applied to Smart Composite Structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1995-01-01
A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
A Tool Chain for the V and V of NASA Cryogenic Fuel Loading Health Management
2014-10-02
NASA Technical Reports Server (NTRS)
Momoh, James; Chattopadhyay, Deb; Basheer, Omar Ali AL
1996-01-01
The space power system has two sources of energy: photovoltaic blankets and batteries. The on-board optimal power management problem has two broad operations. The first is off-line power scheduling, which determines the load allocation schedule for the next several hours based on the forecast of load and solar power availability; the nature of this study puts less emphasis on the speed requirement for computation and more importance on the optimality of the solution. The second, on-line power rescheduling, is needed in the event of a contingency to optimally reschedule the loads so as to minimize the 'unused' or 'wasted' energy while keeping the priority on certain types of load and minimizing disturbance of the original optimal schedule determined in the first-stage off-line study. The computational performance of the on-line 'rescheduler' is an important criterion and plays a critical role in the selection of the appropriate tool. The Howard University Center for Energy Systems and Control has developed a hybrid optimization-expert systems based power management program. The pre-scheduler has been developed using a non-linear multi-objective optimization technique called the Outer Approximation method and implemented using the General Algebraic Modeling System (GAMS). The optimization model has the capability of dealing with multiple conflicting objectives, viz. maximizing energy utilization, minimizing the variation of load over a day, etc., and incorporates several complex interactions between the loads in a space system. The rescheduling is performed using an expert system developed in PROLOG, which utilizes a rule base for reallocation of the loads in an emergency condition, viz. shortage of power due to solar array failure, increase of base load, addition of a new activity, repetition of an old activity, etc. Both modules handle decision making on battery charging and discharging and allocation of loads over a time horizon of a day divided into intervals of 10 minutes. The models have been extensively tested using a case study for the Space Station Freedom, and the results for the case study will be presented. Several future enhancements of the pre-scheduler and the 'rescheduler' have been outlined, which include a graphic analyzer for the on-line module, incorporating probabilistic considerations, and including the spatial location of the loads and their connectivity using a direct current (DC) load flow model.
NASA Astrophysics Data System (ADS)
Singh, Shailesh Kumar
2014-05-01
Streamflow forecasts are essential for making critical decisions for the optimal allocation of water supplies for various demands, including irrigation for agriculture, habitat for fisheries, hydropower production and flood warning. The major objective of this study is to explore Ensemble Streamflow Prediction (ESP) based forecasting in New Zealand catchments and to highlight the present seasonal flow forecasting capability of the National Institute of Water and Atmospheric Research (NIWA). In this study a probabilistic forecast framework for ESP is presented. The basic assumption in ESP is that future weather patterns have been experienced historically. Hence, past forcing data can be used with the current initial condition to generate an ensemble of predictions. Small differences in initial conditions can result in large differences in the forecast. The initial state of the catchment can be obtained by continuously running the model up to the current time; this initial state is then used with past forcing data to generate an ensemble of future flows. The approach taken here is to run TopNet hydrological models with a range of past forcing data (precipitation, temperature, etc.) from the current initial conditions. The collection of runs is called the ensemble. ESP gives probabilistic forecasts for flow. From the ensemble members, probability distributions can be derived. The probability distributions capture part of the intrinsic uncertainty in weather or climate. An ensemble streamflow prediction system that provides probabilistic hydrological forecasts with lead times of up to 3 months is presented for the Rangitata, Ahuriri, Hooker and Jollie rivers in the South Island of New Zealand. ESP-based seasonal forecasts have better skill than climatology. This system can provide better overall information for holistic water resource management.
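A minimal ESP loop can be sketched as follows. The runoff_model below is a hypothetical one-bucket stand-in for a real hydrological model such as TopNet; the forcing statistics and parameters are invented. The essential structure matches the abstract: one model state from "today", one re-run per historical forcing year, and quantiles over the resulting ensemble.

```python
import numpy as np

rng = np.random.default_rng(1)

def runoff_model(state, precip, temp, k=0.1):
    """Hypothetical linear-store rainfall-runoff model (stand-in for TopNet)."""
    flows = []
    for p, t in zip(precip, temp):
        state += p * max(0.0, 1.0 - 0.05 * max(t, 0.0))  # crude effective input
        q = k * state                                    # linear reservoir
        state -= q
        flows.append(q)
    return np.array(flows)

# One (precip, temp) series per historical year; 90-day season ahead.
history = [(rng.gamma(2.0, 2.0, 90), rng.normal(10, 3, 90)) for _ in range(30)]
state_now = 50.0     # initial condition from running the model up to today

ensemble = np.array([runoff_model(state_now, p, t) for p, t in history])
q10, q50, q90 = np.percentile(ensemble.sum(axis=1), [10, 50, 90])
print(f"seasonal flow volume, 10/50/90% quantiles: {q10:.0f} / {q50:.0f} / {q90:.0f}")
```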
Operating health analysis of electric power systems
NASA Astrophysics Data System (ADS)
Fotuhi-Firuzabad, Mahmud
The required level of operating reserve to be maintained by an electric power system can be determined using both deterministic and probabilistic techniques. Despite the obvious disadvantages of deterministic approaches, there is still considerable reluctance to apply probabilistic techniques due to the difficulty of interpreting a single numerical risk index and the lack of sufficient information provided by a single index. A practical way to overcome these difficulties is to embed deterministic considerations in the probabilistic indices in order to monitor the system well-being. The system well-being can be designated as healthy, marginal and at risk. The concept of system well-being is examined and extended in this thesis to cover the overall area of operating reserve assessment. Operating reserve evaluation involves the two distinctly different aspects of unit commitment and the dispatch of the committed units. Unit commitment health analysis involves the determination of which units should be committed to satisfy the operating criteria. The concepts developed for unit commitment health, margin and risk are extended in this thesis to evaluate the response well-being of a generating system. A procedure is presented to determine the optimum dispatch of the committed units to satisfy the response criteria. The impacts on the response well-being of variations in the margin time, required regulating margin and load forecast uncertainty are illustrated. The effects on the response well-being of rapid-start units, interruptible loads and postponable outages are also illustrated. System well-being is, in general, greatly improved by interconnection with other power systems. The well-being concepts are extended to evaluate the spinning reserve requirements in interconnected systems. The interconnected system unit commitment problem is decomposed into two subproblems in which unit scheduling is performed in each isolated system, followed by interconnected system evaluation. A procedure is illustrated to determine the well-being indices of the overall interconnected system. Under normal operating conditions, the system may also be able to carry a limited amount of interruptible load on top of its firm load without violating the operating criterion. An energy-based approach is presented to determine the optimum interruptible load carrying capability in both isolated and interconnected systems. Composite system spinning reserve assessment and composite system well-being are also examined in this research work. The impacts on composite well-being of operating reserve considerations such as stand-by units, interruptible loads and the physical locations of these resources are illustrated. It is expected that the well-being framework and the concepts developed in this research work will prove extremely useful in the new competitive utility environment.
Variational approach to probabilistic finite elements
NASA Technical Reports Server (NTRS)
Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.
1991-01-01
Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
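The second-moment machinery reduces, in its simplest first-order form, to propagating parameter covariance through response sensitivities. A toy sketch of that reduction (invented two-spring model, independent variables):

```python
import numpy as np

def response(b):
    """Tip displacement of a two-spring chain: stiffnesses b[0], b[1], load b[2]."""
    k1, k2, P = b
    return P / k1 + P / k2

mu = np.array([100.0, 80.0, 10.0])            # parameter means
cov = np.diag([5.0**2, 4.0**2, 1.0**2])       # parameter covariance

eps = 1e-6
grad = np.array([(response(mu + eps * np.eye(3)[i]) - response(mu)) / eps
                 for i in range(3)])          # finite-difference sensitivities

u_mean = response(mu)
u_std = np.sqrt(grad @ cov @ grad)            # first-order variance propagation
print(f"tip displacement: mean {u_mean:.4f}, std {u_std:.4f}")
```

PFEM additionally discretizes the random fields and retains the finite element structure, which this scalar example deliberately omits.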
Probabilistic analysis for fatigue strength degradation of materials
NASA Technical Reports Server (NTRS)
Royce, Lola
1989-01-01
This report presents the results of the first year of a research program conducted for NASA-LeRC by the University of Texas at San Antonio. The research included development of methodology that provides a probabilistic treatment of lifetime prediction of structural components of aerospace propulsion systems subjected to fatigue. Material strength degradation models, based on primitive variables, include both a fatigue strength reduction model and a fatigue crack growth model. Linear elastic fracture mechanics is utilized in the latter model. Probabilistic analysis is based on simulation, and both maximum entropy and maximum penalized likelihood methods are used for the generation of probability density functions. The resulting constitutive relationships are included in several computer programs, RANDOM2, RANDOM3, and RANDOM4. These programs determine the random lifetime of an engine component, in mechanical load cycles, to reach a critical fatigue strength or crack size. The material considered was a cast nickel base superalloy, one typical of those used in the Space Shuttle Main Engine.
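The simulation strategy of the RANDOM-family codes can be caricatured with a Paris-law crack-growth loop: sample the primitive variables, grow the crack, and record the cycles to a critical size. All parameter values below are invented; the actual codes use calibrated material strength degradation models for the superalloy.

```python
import numpy as np

rng = np.random.default_rng(2)

def cycles_to_failure(a0, a_crit, ds, C, m, Y=1.12, dN=1000):
    """Grow a crack with da/dN = C*(dK)**m, dK = ds*sqrt(pi*a)*Y (ds in MPa, a in m)."""
    a, n = a0, 0
    while a < a_crit and n < 10_000_000:
        dK = ds * np.sqrt(np.pi * a) * Y       # MPa*sqrt(m)
        a += C * dK**m * dN
        n += dN
    return n

lives = []
for _ in range(500):
    a0 = rng.lognormal(np.log(0.5e-3), 0.2)    # initial crack size [m]
    C = rng.lognormal(np.log(5e-12), 0.3)      # Paris coefficient
    ds = rng.normal(300.0, 20.0)               # stress range [MPa]
    lives.append(cycles_to_failure(a0, 5e-3, ds, C, m=3.0))

print("B10 life (cycles):", np.percentile(np.array(lives), 10))
```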
Impact of uncertainties in free stream conditions on the aerodynamics of a rectangular cylinder
NASA Astrophysics Data System (ADS)
Mariotti, Alessandro; Shoeibi Omrani, Pejman; Witteveen, Jeroen; Salvetti, Maria Vittoria
2015-11-01
The BARC benchmark deals with the flow around a rectangular cylinder with a chord-to-depth ratio equal to 5. This flow configuration is of practical interest for civil and industrial structures, and it is characterized by massively separated flow and unsteadiness. In a recent review of BARC results, significant dispersion was observed in both experimental and numerical predictions of some flow quantities, which are extremely sensitive to various uncertainties that may be present in experiments and simulations. Besides modeling and numerical errors, in simulations it is difficult to exactly reproduce the experimental conditions due to uncertainties in the set-up parameters, which sometimes cannot be exactly controlled or characterized. Probabilistic methods and URANS simulations are used to investigate the impact of uncertainties in the following set-up parameters: the angle of incidence, and the free-stream longitudinal turbulence intensity and length scale. Stochastic collocation is employed to perform the probabilistic propagation of the uncertainty. The discretization and modeling errors are estimated by repeating the same analysis for different grids and turbulence models. The results obtained for different assumed PDFs of the set-up parameters are also compared.
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A major research and technology program in Probabilistic Structural Analysis Methods (PSAM) is currently being sponsored by the NASA Lewis Research Center with Southwest Research Institute as the prime contractor. This program is motivated by the need to accurately predict structural response in an environment where the loadings, the material properties, and even the structure may be considered random. The heart of PSAM is a software package which combines advanced structural analysis codes with a fast probability integration (FPI) algorithm for the efficient calculation of stochastic structural response. The basic idea of PSAM is simple: make an approximate calculation of system response, including calculation of the associated probabilities, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The resulting deterministic solution should give a reasonable and realistic description of performance-limiting system responses, although some error will be inevitable. If the simple model has correctly captured the basic mechanics of the system, however, including the proper functional dependence of stress, frequency, etc. on design parameters, then the response sensitivities calculated may be of significantly higher accuracy.
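The "fast probability integration" idea is easiest to see in its degenerate first-order case: a linear limit state with normal variables, where the failure probability follows directly from the reliability index. FPI in NESSUS handles general nonlinear limit states and non-normal variables; the numbers here are invented.

```python
from math import sqrt
from statistics import NormalDist

# Linear limit state g = R - S, independent normal strength R and stress S.
mu_R, sd_R = 500.0, 40.0    # strength [MPa]
mu_S, sd_S = 350.0, 50.0    # applied stress [MPa]

beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2)   # reliability (safety) index
pf = NormalDist().cdf(-beta)
print(f"beta = {beta:.2f}, probability of failure = {pf:.2e}")
```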
Patil, Narendra G; Rebrov, Evgeny V; Eränen, Kari; Benaskar, Faysal; Meuldijk, Jan; Mikkola, Jyri-Pekka; Hessel, Volker; Hulshof, Lumbertus A; Murzin, Dmitry Yu; Schouten, Jaap C
2012-01-01
A novel heating efficiency analysis of microwave-heated stop-flow (i.e. stagnant liquid) and continuous-flow reactors is presented. The thermal losses to the surrounding air by natural convection have been taken into account in the heating efficiency calculation of the microwave heating process. The effect of the load diameter in the range of 4-29 mm on the heating efficiency of ethylene glycol was studied in a single-mode microwave cavity under continuous-flow and stop-flow conditions. The variation of the microwave absorbing properties of the load with temperature was estimated. Under stop-flow conditions, the heating efficiency depends on the load diameter. The highest heating efficiency was observed at a load diameter close to half the wavelength of the electromagnetic field in the corresponding medium. Under continuous-flow conditions, the heating efficiency increased linearly with the load diameter. However, microwave leakage above the propagation diameter prevented further experimentation at larger load diameters. Contrary to the stop-flow conditions, the load temperature did not rise monotonically from inlet to outlet under continuous-flow conditions. This was due to the combined effect of convective heat fluxes lagging behind the volumetric heating. This severely disturbs the uniformity of the electromagnetic field in the axial direction and creates areas of high and low field intensity along the load length, decreasing the heating efficiency compared to stop-flow conditions.
Bridge condition assessment based on long-term strain monitoring
NASA Astrophysics Data System (ADS)
Sun, LiMin; Sun, Shouwang
2011-04-01
In consideration of the important role that bridges play as transportation infrastructure, their safety, durability and serviceability have always been of deep concern. Structural Health Monitoring Systems (SHMS) have been installed on many long-span bridges to provide bridge engineers with the information needed to make rational maintenance decisions. However, SHMS also confront bridge engineers with the challenge of the efficient use of monitoring data. Thus, methodologies which are robust to random disturbance and sensitive to damage have become a subject on which much research in structural condition assessment concentrates. In this study, an innovative probabilistic approach for the condition assessment of bridge structures was proposed on the basis of long-term strain monitoring of the steel girder of a cable-stayed bridge. First, the methodology of damage detection in the vicinity of a monitoring point using strain-based indices was investigated. Then, the composition of the strain response of the bridge under operational loads was analyzed. Thirdly, the influence of temperature and wind on strains was eliminated, and thus the strain fluctuation under vehicle loads was obtained. Finally, damage evolution assessment was carried out based on the statistical characteristics of rain-flow cycles derived from the strain fluctuation under vehicle loads. The research conducted indicates that the proposed methodology is qualified for structural condition assessment as far as the following respects are concerned: (a) capability of revealing structural deterioration; (b) immunity to the influence of environmental variation; (c) adaptability to the random characteristics exhibited by long-term monitoring data. Further examination of the applicability of the proposed methodology to aging bridges may provide a more convincing validation.
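The damage-evolution step can be sketched with Miner's rule applied to rain-flow cycle ranges. The ranges below are synthetic stand-ins for cycles counted from the vehicle-induced strain fluctuation, and the S-N constants are invented detail-category values.

```python
import numpy as np

rng = np.random.default_rng(3)

# S-N curve N(S) = A * S**(-m); Miner damage D = sum over cycles of 1/N(S_i).
A, m = 1e12, 3.0
stress_ranges = rng.weibull(1.5, 100_000) * 20.0   # MPa, one monitoring period

damage = np.sum(stress_ranges**m / A)
print(f"damage this period: {damage:.3e}  (~{1.0/damage:.0f} periods to D = 1)")
```

Tracking this damage increment period by period, after removing temperature and wind effects as the abstract describes, is what makes the index usable for condition assessment.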
NASA Astrophysics Data System (ADS)
Moncoulon, D.; Labat, D.; Ardon, J.; Leblois, E.; Onfroy, T.; Poulard, C.; Aji, S.; Rémy, A.; Quantin, A.
2014-09-01
The analysis of flood exposure at a national scale for the French insurance market must combine the generation of a probabilistic event set of all possible (but not yet occurred) flood situations with hazard and damage modeling. In this study, hazard and damage models are calibrated on a 1995-2010 historical event set, both for hazard results (river flow, flooded areas) and loss estimations. Thus, the uncertainties in the deterministic estimation of a single event loss are known before simulating a probabilistic event set. To take into account at least 90 % of the insured flood losses, the probabilistic event set must combine river overflow (small and large catchments) with surface runoff on the slopes of the watershed due to heavy rainfall. Indeed, internal studies of the CCR (Caisse Centrale de Réassurance) claim database have shown that approximately 45 % of the insured flood losses are located inside the floodplains and 45 % outside. Another 10 % is due to sea surge floods and groundwater rise. In this approach, two independent probabilistic methods are combined to create a single flood loss distribution: a generation of fictive river flows based on the historical records of the river gauge network, and a generation of fictive rain fields on small catchments, calibrated on the 1958-2010 Météo-France rain database SAFRAN. All the events in the probabilistic event sets are simulated with the deterministic model. This hazard and damage distribution is used to simulate flood losses at the national scale for an insurance company (Macif) and to generate flood areas associated with hazard return periods. The flood maps concern river overflow and surface water runoff. Validation of these maps is conducted by comparison with address-located claim data on a small catchment (the downstream Argens).
Ganni, Venkatarao
2008-08-12
A unique process cycle and apparatus design separates the consumer (cryogenic) load return flow from most of the recycle return flow of a refrigerator and/or liquefier process cycle. The refrigerator and/or liquefier process recycle return flow is recompressed by a multi-stage compressor set, and the consumer load return flow is recompressed by an independent consumer load compressor set that maintains a desirably constant suction pressure using a consumer load bypass control valve and the consumer load return pressure control valve that controls the consumer load compressor's suction pressure. The discharge pressure of this consumer load compressor is thereby allowed to float at the intermediate pressure between the first and second stage recycle compressor sets. Utilizing the unique gas management valve regulation and the unique process cycle and apparatus design in which the consumer load return flow is separate from the recycle return flow, the pressure ratios of each recycle compressor stage and all main pressures associated with the recycle return flow are allowed to vary naturally, thus providing a naturally regulated and balanced floating-pressure process cycle that maintains optimal efficiency at design and off-design process cycle capacity and conditions automatically.
Probabilistic power flow using improved Monte Carlo simulation method with correlated wind sources
NASA Astrophysics Data System (ADS)
Bie, Pei; Zhang, Buhan; Li, Hang; Deng, Weisi; Wu, Jiasi
2017-01-01
Probabilistic Power Flow (PPF) is a very useful tool for power system steady-state analysis. However, the correlation among different random power injections (like wind power) brings great difficulties to PPF calculation. Monte Carlo simulation (MCS) and analytical methods are the two commonly used approaches to solve PPF. MCS has high accuracy but is very time consuming. Analytical methods like the cumulants method (CM) have high computing efficiency, but calculating the cumulants is not convenient when the wind power output does not obey any typical distribution, especially when correlated wind sources are considered. In this paper, an Improved Monte Carlo simulation method (IMCS) is proposed. A joint empirical distribution is applied to model the different wind power outputs. This method combines the advantages of both MCS and analytical methods. It not only has high computing efficiency, but also provides solutions with sufficient accuracy, which makes it very suitable for on-line analysis.
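One concrete reading of the "joint empirical distribution" device (hedged; the paper may implement it differently) is to bootstrap whole rows of simultaneous historical records, which preserves inter-site correlation without any parametric assumption. The records below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

hours = 8760
wind_a = np.clip(rng.normal(0.4, 0.2, hours), 0.0, 1.0)          # farm A output
wind_b = np.clip(0.7 * wind_a + 0.3 * rng.normal(0.4, 0.2, hours), 0.0, 1.0)
history = np.column_stack([wind_a, wind_b])   # stand-in for measured joint data

idx = rng.integers(0, hours, size=5000)       # joint bootstrap: sample whole rows
samples = history[idx]                        # one row per deterministic run
print("preserved correlation:", np.corrcoef(samples.T)[0, 1].round(2))
```

Each sampled row is then fed into one deterministic load-flow solution, and the collection of solutions forms the probabilistic result.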
Analysis of the stochastic excitability in the flow chemical reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bashkirtseva, Irina
2015-11-30
A dynamic model of the thermochemical process in a flow reactor is considered. We study the influence of random disturbances on the stationary regime of this model. A phenomenon of noise-induced excitability is demonstrated. For the analysis of this phenomenon, a constructive technique based on stochastic sensitivity functions and confidence domains is applied. It is shown how the elaborated technique can be used for the probabilistic analysis of the generation of mixed-mode stochastic oscillations in the flow chemical reactor.
Development of Probabilistic Life Prediction Methodologies and Testing Strategies for MEMS and CMC's
NASA Technical Reports Server (NTRS)
Jadaan, Osama
2003-01-01
This effort is to investigate probabilistic life prediction methodologies for ceramic matrix composites (CMCs) and MicroElectroMechanical Systems (MEMS) and to analyze designs that determine the stochastic properties of MEMS. For CMCs this includes a brief literature survey regarding lifing methodologies. Also of interest for MEMS is the design of a proper test for the Weibull size effect in thin-film (bulge test) specimens. The Weibull size effect is a consequence of a stochastic strength response predicted from the Weibull distribution. Confirming that MEMS strength is controlled by the Weibull distribution will enable the development of a probabilistic design methodology for MEMS, similar to the GRC-developed CARES/Life program for bulk ceramics. A main objective of this effort is to further develop and verify the ability of the Ceramics Analysis and Reliability Evaluation of Structures/Life (CARES/Life) code to predict the time-dependent reliability of MEMS structures subjected to multiple transient loads. A second set of objectives is to determine the applicability/suitability of the CARES/Life methodology for CMC analysis, what changes would be needed to the methodology and software, and, if feasible, to run a demonstration problem. Also important is an evaluation of CARES/Life coupled to the ANSYS Probabilistic Design System (PDS) and the potential of coupling transient reliability analysis to the ANSYS PDS.
NASA Astrophysics Data System (ADS)
Telang, Aparna S.; Bedekar, P. P.
2017-09-01
Load flow analysis is the initial and essential step for any power system computation. It is required for choosing better options for power system expansion to meet ever-increasing load demand. Implementation of a Flexible AC Transmission System (FACTS) device like the STATCOM, which offers fast and very flexible control, in the load flow is one of the important tasks for power system researchers. This paper presents a simple and systematic approach for steady-state power flow calculations with the FACTS controller static synchronous compensator (STATCOM), using the command-line mode of the MATLAB-based Power System Analysis Toolbox (PSAT). The complexity of MATLAB programming increases due to the incorporation of the STATCOM in an existing Newton-Raphson load flow algorithm. Thus, the main contribution of this paper is to show how command-line usage of the user-friendly MATLAB tool PSAT can be used extensively for quicker and wider interpretation of load flow results with a STATCOM. The novelty of this paper lies in the method of applying the load increase pattern, where the active and reactive loads are changed simultaneously at all the load buses under consideration to create stressed conditions for load flow analysis with the STATCOM. The performance has been evaluated on many standard IEEE test systems, and the results for the standard IEEE 30-bus, 57-bus, and 118-bus systems are presented.
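What the STATCOM contributes to a load flow can be seen on a two-bus toy system: the device injects whatever reactive power holds its bus voltage at the set-point. The integral feedback below merely stands in for the proper Newton-Raphson treatment (and for PSAT's own MATLAB implementation); all values are invented.

```python
import numpy as np

v1 = 1.0 + 0.0j                  # slack bus voltage (p.u.)
y = 1.0 / (0.02 + 0.10j)         # line admittance (p.u.)
p_load, q_load = 0.8, 0.4        # consumed power at bus 2 (p.u.)
v_set, qc, v2 = 1.0, 0.0, 1.0 + 0.0j

for _ in range(2000):
    s_inj = -p_load + 1j * (qc - q_load)     # net injection at bus 2
    i2 = np.conj(s_inj / v2)
    v2 = v1 + i2 / y                         # Gauss-style voltage update
    qc += 2.0 * (v_set - abs(v2))            # raise Qc while the voltage sags
print(f"|V2| = {abs(v2):.4f} p.u., STATCOM output Q = {qc:.3f} p.u.")
```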
Improved reliability of wind turbine towers with active tuned mass dampers (ATMDs)
NASA Astrophysics Data System (ADS)
Fitzgerald, Breiffni; Sarkar, Saptarshi; Staino, Andrea
2018-04-01
Modern multi-megawatt wind turbines are composed of slender, flexible, and lightly damped blades and towers. These components exhibit high susceptibility to wind-induced vibrations. As the size, flexibility and cost of the towers have increased in recent years, the need to protect these structures against damage induced by turbulent aerodynamic loading has become apparent. This paper combines structural dynamic models and probabilistic assessment tools to demonstrate improvements in structural reliability when modern wind turbine towers are equipped with active tuned mass dampers (ATMDs). The study proposes a multi-modal wind turbine model for control design and analysis, and incorporates an ATMD into the tower of this model. The model is subjected to stochastically generated wind loads of varying speeds to develop wind-induced probabilistic demand models for towers of modern multi-megawatt wind turbines under structural uncertainty. Numerical simulations have been carried out to ascertain the effectiveness of the active control system in improving the structural performance of the wind turbine and its reliability. The study constructs fragility curves, which illustrate reductions in the vulnerability of towers to wind loading owing to the inclusion of the damper. Results show that the active controller is successful in increasing the reliability of the tower responses. According to the analysis carried out in this paper, a strong reduction of the probability of exceeding a given displacement at the rated wind speed has been observed.
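Fragility curves of the kind described can be estimated empirically: simulate many stochastic responses per wind speed and record the fraction exceeding a displacement threshold. The response model below is an invented placeholder for the multi-modal turbine model.

```python
import numpy as np

rng = np.random.default_rng(5)

def peak_displacement(v, controlled):
    """Invented peak tower-top displacement [m] at mean wind speed v [m/s]."""
    gust = rng.lognormal(0.0, 0.25)          # turbulence-driven variability
    reduction = 0.6 if controlled else 1.0   # assumed ATMD response reduction
    return 0.002 * v**2 * gust * reduction

threshold = 0.45
for controlled in (False, True):
    fragility = [sum(peak_displacement(v, controlled) > threshold
                     for _ in range(2000)) / 2000 for v in range(4, 26, 2)]
    print("with ATMD" if controlled else "baseline ",
          [round(p, 2) for p in fragility])
```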
NASA Astrophysics Data System (ADS)
Fraldi, M.; Perrella, G.; Ciervo, M.; Bosia, F.; Pugno, N. M.
2017-09-01
Very recently, a Weibull-based probabilistic strategy has been successfully applied to bundles of wires to determine their overall stress-strain behaviour, also capturing previously unpredicted nonlinear and post-elastic features of hierarchical strands. This approach is based on the so-called "Equal Load Sharing (ELS)" hypothesis by virtue of which, when a wire breaks, the load acting on the strand is homogeneously redistributed among the surviving wires. Despite the overall effectiveness of the method, some discrepancies between theoretical predictions and in silico Finite Element-based simulations or experimental findings might arise when more complex structures are analysed, e.g. helically arranged bundles. To overcome these limitations, an enhanced hybrid approach is proposed in which the probability of rupture is combined with a deterministic mechanical model of a strand constituted by helically-arranged and hierarchically-organized wires. The analytical model is validated comparing its predictions with both Finite Element simulations and experimental tests. The results show that generalized stress-strain responses - incorporating tension/torsion coupling - are naturally found and, once one or more elements break, the competition between geometry and mechanics of the strand microstructure, i.e. the different cross sections and helical angles of the wires in the different hierarchical levels of the strand, determines the no longer homogeneous stress redistribution among the surviving wires whose fate is hence governed by a "Hierarchical Load Sharing" criterion.
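The ELS baseline that the paper generalizes has a compact Monte Carlo form: at strain eps every surviving wire carries E*eps, so the bundle stress is E*eps times the surviving fraction. Parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

N, E = 10_000, 100.0                   # number of wires, wire modulus
shape, scale = 5.0, 2.0                # Weibull strength parameters
strengths = scale * rng.weibull(shape, N)

eps = np.linspace(0.0, 0.05, 200)
sigma = np.array([E * e * np.mean(strengths > E * e) for e in eps])
i_peak = sigma.argmax()
print(f"peak bundle stress {sigma[i_peak]:.3f} at strain {eps[i_peak]:.4f}")
```

The hierarchical load-sharing criterion replaces the uniform surviving-fraction factor with a redistribution that depends on the helical geometry and on which hierarchical level the broken wire belongs to.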
Probabilistic Simulation of Progressive Fracture in Bolted-Joint Composite Laminates
NASA Technical Reports Server (NTRS)
Minnetyan, L.; Singhal, S. N.; Chamis, C. C.
1996-01-01
This report describes computational methods to probabilistically simulate fracture in bolted composite structures. An innovative approach that is independent of stress intensity factors and fracture toughness was used to simulate progressive fracture. The effect of design variable uncertainties on structural damage was also quantified. A fast probability integrator assessed the scatter in the composite structure response before and after damage. Then the sensitivity of the response to design variables was computed. General-purpose methods, which are applicable to bolted joints in all types of structures and in all fracture processes-from damage initiation to unstable propagation and global structure collapse-were used. These methods were demonstrated for a bolted joint of a polymer matrix composite panel under edge loads. The effects of the fabrication process were included in the simulation of damage in the bolted panel. Results showed that the most effective way to reduce end displacement at fracture is to control both the load and the ply thickness. The cumulative probability for longitudinal stress in all plies was most sensitive to the load; in the 0 deg. plies it was very sensitive to ply thickness. The cumulative probability for transverse stress was most sensitive to the matrix coefficient of thermal expansion. In addition, fiber volume ratio and fiber transverse modulus both contributed significantly to the cumulative probability for the transverse stresses in all the plies.
Zhang, Kejiang; Achari, Gopal; Li, Hua
2009-11-03
Traditionally, uncertainties in parameters are represented as probabilistic distributions and incorporated into groundwater flow and contaminant transport models. With the advent of newer uncertainty theories, it is now understood that stochastic methods cannot properly represent non-random uncertainties. In the groundwater flow and contaminant transport equations, uncertainty in some parameters may be random, whereas that of others may be non-random. The objective of this paper is to develop a fuzzy-stochastic partial differential equation (FSPDE) model to simulate conditions where both random and non-random uncertainties are involved in groundwater flow and solute transport. Three potential solution techniques, namely (a) transforming a probability distribution to a possibility distribution (Method I), whereby the FSPDE becomes a fuzzy partial differential equation (FPDE); (b) transforming a possibility distribution to a probability distribution (Method II), whereby the FSPDE becomes a stochastic partial differential equation (SPDE); and (c) the combination of Monte Carlo methods and FPDE solution techniques (Method III), are proposed and compared. The effects of these three methods on the predictive results are investigated using two case studies. The results show that the predictions obtained from Method II are a specific case of those obtained from Method I. When an exact probabilistic result is needed, Method II is suggested. As the loss or gain of information during a probability-possibility (or vice versa) transformation cannot be quantified, its influence on the predictive results is not known. Thus, Method III should probably be preferred for risk assessments.
NASA Astrophysics Data System (ADS)
Neri, Augusto; Bevilacqua, Andrea; Esposti Ongaro, Tomaso; Isaia, Roberto; Aspinall, Willy P.; Bisson, Marina; Flandoli, Franco; Baxter, Peter J.; Bertagnini, Antonella; Iannuzzi, Enrico; Orsucci, Simone; Pistolesi, Marco; Rosi, Mauro; Vitale, Stefano
2015-04-01
Campi Flegrei (CF) is an example of an active caldera containing densely populated settlements at very high risk of pyroclastic density currents (PDCs). We present here an innovative method for assessing background spatial PDC hazard in a caldera setting with probabilistic invasion maps conditional on the occurrence of an explosive event. The method encompasses the probabilistic assessment of potential vent opening positions, derived in the companion paper, combined with inferences about the spatial density distribution of PDC invasion areas from a simplified flow model, informed by reconstruction of deposits from eruptions in the last 15 ka. The flow model describes the PDC kinematics and accounts for main effects of topography on flow propagation. Structured expert elicitation is used to incorporate certain sources of epistemic uncertainty, and a Monte Carlo approach is adopted to produce a set of probabilistic hazard maps for the whole CF area. Our findings show that, in case of eruption, almost the entire caldera is exposed to invasion with a mean probability of at least 5%, with peaks greater than 50% in some central areas. Some areas outside the caldera are also exposed to this danger, with mean probabilities of invasion of the order of 5-10%. Our analysis suggests that these probability estimates have location-specific uncertainties which can be substantial. The results prove to be robust with respect to alternative elicitation models and allow the influence on hazard mapping of different sources of uncertainty, and of theoretical and numerical assumptions, to be quantified.
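Stripped of the vent-opening statistics, the expert elicitation and the calibrated flow model, the Monte Carlo structure of such hazard maps reduces to: sample a vent, sample an invasion footprint, accumulate cell-wise frequencies. Everything below is an invented toy, not Campi Flegrei data.

```python
import numpy as np

rng = np.random.default_rng(7)

grid = np.zeros((100, 100))
xs, ys = np.meshgrid(np.arange(100), np.arange(100), indexing="ij")

n_runs = 5000
for _ in range(n_runs):
    vx, vy = rng.normal(50, 12, 2)              # vent-opening spatial density
    runout = rng.lognormal(np.log(15), 0.5)     # stand-in for the PDC flow model
    grid += np.hypot(xs - vx, ys - vy) <= runout

prob = grid / n_runs                            # conditional invasion probability
print("max / mean invasion probability:", prob.max().round(2), prob.mean().round(3))
```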
Research on virtual network load balancing based on OpenFlow
NASA Astrophysics Data System (ADS)
Peng, Rong; Ding, Lei
2017-08-01
Networks based on OpenFlow technology separate the control module from the data-forwarding module. Global deployment of a load-balancing strategy through the control plane's network-wide view is fast and highly efficient. This paper proposes a weighted round-robin scheduling algorithm for virtual networks and an OpenFlow-based load-balancing plan for server load, taking the load of service nodes into account in the load-balancing task-distribution algorithm.
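For reference, the classic smooth weighted round-robin variant (popularized by nginx) can be stated in a few lines; the paper's virtual-network algorithm will differ in detail.

```python
def smooth_wrr(servers, n_picks):
    """servers: {name: weight}. Each round every server gains its weight;
    the current leader is picked and pays back the total weight."""
    current = {s: 0 for s in servers}
    total = sum(servers.values())
    order = []
    for _ in range(n_picks):
        for s, w in servers.items():
            current[s] += w
        best = max(current, key=current.get)
        current[best] -= total
        order.append(best)
    return order

print(smooth_wrr({"A": 5, "B": 1, "C": 1}, 7))
# e.g. ['A', 'A', 'B', 'A', 'C', 'A', 'A'] -- A receives 5 of every 7 picks,
# spread out rather than in a burst.
```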
Semi-volatile pesticides, such as chlorpyrifos, can move about within a home environment after an application due to physical/chemical processes, resulting in concentration loadings in and on objects and surfaces. Children can be particularly susceptible to the effects of pest...
DOT National Transportation Integrated Search
2009-08-01
Federal Aviation Administration (FAA) air traffic flow management (TFM) decision-making is based primarily on a comparison of deterministic predictions of demand and capacity at National Airspace System (NAS) elements such as airports, fixes and ...
On the Accuracy of Probabilistic Bucking Load Prediction
NASA Technical Reports Server (NTRS)
Arbocz, Johann; Starnes, James H.; Nemeth, Michael P.
2001-01-01
The buckling strength of thin-walled stiffened or unstiffened, metallic or composite shells is of major concern in aeronautical and space applications. The difficulty of predicting the behavior of axially compressed thin-walled cylindrical shells continues to worry design engineers as we enter the third millennium. Thanks to extensive research programs in the late sixties and early seventies and the contributions of many eminent scientists, it is known that buckling strength calculations are affected by uncertainties in the definition of the parameters of the problem, such as the definition of loads, material properties, geometric variables and edge support conditions, and by the accuracy of the engineering models and analysis tools used in the design phase. The NASA design criteria monographs from the late sixties account for these design uncertainties by the use of a lump-sum safety factor. This so-called 'empirical knockdown factor gamma' usually results in overly conservative designs. Recently, new reliability-based probabilistic design procedures for buckling-critical imperfect shells have been proposed. They essentially consist of a stochastic approach which introduces an improved 'scientific knockdown factor lambda(sub a)' that is not as conservative as the traditional empirical one. In order to incorporate probabilistic methods into a high-fidelity analysis approach, one must be able to assess the accuracy of the various steps that must be executed to complete a reliability calculation. In the present paper the effect of the size of the experimental input sample on the predicted value of the scientific knockdown factor lambda(sub a) calculated by the First-Order, Second-Moment Method is investigated.
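The flavor of a sample-based "scientific knockdown factor" can be conveyed with a mean-minus-k-sigma estimate from a finite sample of normalized buckling loads; the sample values and the factor k below are invented, and the paper's First-Order, Second-Moment treatment is more careful about exactly this sample-size dependence.

```python
import numpy as np

rng = np.random.default_rng(8)

P_classical = 1.0
sample = rng.normal(0.72, 0.06, 12)   # stand-in for 12 imperfect-shell tests

k = 1.645                             # ~95 % one-sided level, illustrative
lam = (sample.mean() - k * sample.std(ddof=1)) / P_classical
print(f"scientific knockdown factor lambda_a ~ {lam:.3f}")
```

Re-running this with different sample sizes shows directly how the scatter of lambda_a shrinks as the experimental input sample grows, which is the question the paper investigates.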
Sun, Tian Yin; Mitrano, Denise M; Bornhöft, Nikolaus A; Scheringer, Martin; Hungerbühler, Konrad; Nowack, Bernd
2017-03-07
The need for an environmental risk assessment for engineered nanomaterials (ENM) necessitates knowledge about their environmental emissions. Material flow analysis (MFA) models have been used to provide predicted environmental emissions, but most current nano-MFA models consider neither the rapid development of ENM production nor the fact that a large proportion of ENM enter an in-use stock and are released from products over time (i.e., have a lag phase). Here we use dynamic probabilistic material flow modeling to predict scenarios of the future flows of four ENM (nano-TiO2, nano-ZnO, nano-Ag and CNT) to environmental compartments and to quantify their amounts in (temporary) sinks such as the in-use stock and ("final") environmental sinks such as soil and sediment. In these scenarios, we estimate likely future amounts if the use and distribution of ENM in products continue along current trends (i.e., a business-as-usual approach) and predict the effect of hypothetical trends in the market development of nanomaterials, such as the emergence of a new widely used product or a ban on certain substances, on the flows of nanomaterials to the environment in years to come. We show that, depending on the scenario and the product type affected, significant changes in the flows occur over time, driven by the growth of stocks and delayed release dynamics.
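The lag phase is the key mechanical ingredient: releases are the convolution of the inflow to the in-use stock with a release-time distribution. A deterministic toy version follows (invented numbers; the paper's model is probabilistic and ENM-specific).

```python
import numpy as np

years = np.arange(2000, 2031)
production = 10.0 * 1.15 ** (years - 2000)   # tonnes/yr, growing market

to_stock = 0.8 * production                              # share entering use
release_pdf = np.array([0.1, 0.2, 0.3, 0.2, 0.1, 0.1])   # release in years 0..5

release = np.convolve(to_stock, release_pdf)[: len(years)]
stock = np.cumsum(to_stock) - np.cumsum(release)
print(f"2030: inflow {to_stock[-1]:.0f} t, release {release[-1]:.0f} t, "
      f"in-use stock {stock[-1]:.0f} t")
```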
A regional-scale ecological risk framework for environmental flow evaluations
NASA Astrophysics Data System (ADS)
O'Brien, Gordon C.; Dickens, Chris; Hines, Eleanor; Wepener, Victor; Stassen, Retha; Quayle, Leo; Fouchy, Kelly; MacKenzie, James; Graham, P. Mark; Landis, Wayne G.
2018-02-01
Environmental flow (E-flow) frameworks advocate holistic, regional-scale, probabilistic E-flow assessments that consider flow and non-flow drivers of change in a socio-ecological context as best practice. Regional-scale ecological risk assessments of multiple stressors to social and ecological endpoints, which address ecosystem dynamism, have been undertaken internationally at different spatial scales using the relative-risk model since the mid-1990s. With the recent incorporation of Bayesian belief networks into the relative-risk model, a robust regional-scale ecological risk assessment approach is available that can contribute to achieving the best-practice recommendations of E-flow frameworks. PROBFLO is a holistic E-flow assessment method that incorporates the relative-risk model and Bayesian belief networks (BN-RRM) into a transparent probabilistic modelling tool that addresses uncertainty explicitly. PROBFLO has been developed to evaluate the socio-ecological consequences of historical, current and future water resource use scenarios and to generate E-flow requirements on regional spatial scales. The approach has been implemented in two regional-scale case studies in Africa where its flexibility and functionality have been demonstrated. In both case studies the evidence-based outcomes facilitated informed environmental management decision making, with trade-off considerations in the context of social and ecological aspirations. This paper presents the PROBFLO approach as applied to the Senqu River catchment in Lesotho, along with further developments and application in the Mara River catchment in Kenya and Tanzania. The 10 BN-RRM procedural steps incorporated in PROBFLO are demonstrated with examples from both case studies. PROBFLO can contribute to the adaptive management of water resources, support the allocation of resources for sustainable use, and address protection requirements.
Cluster-based control of a separating flow over a smoothly contoured ramp
NASA Astrophysics Data System (ADS)
Kaiser, Eurika; Noack, Bernd R.; Spohn, Andreas; Cattafesta, Louis N.; Morzyński, Marek
2017-12-01
The ability to manipulate and control fluid flows is of great importance in many scientific and engineering applications. The proposed closed-loop control framework addresses a key issue of model-based control: the actuation effect often results from slow dynamics of strongly nonlinear interactions which the flow reveals at timescales much longer than the prediction horizon of any model. Hence, we employ a probabilistic approach based on a cluster-based discretization of the Liouville equation for the evolution of the probability distribution. The proposed methodology frames high-dimensional, nonlinear dynamics into low-dimensional, probabilistic, linear dynamics, which considerably simplifies the optimal control problem while preserving nonlinear actuation mechanisms. The data-driven approach builds upon a state-space discretization using a clustering algorithm which groups kinematically similar flow states into a low number of clusters. The temporal evolution of the probability distribution on this set of clusters is then described by a control-dependent Markov model. This Markov model can be used as a predictor of the ergodic probability distribution for a particular control law. This probability distribution approximates the long-term behavior of the original system, on the basis of which the optimal control law is determined. We examine how the approach can be used to improve the open-loop actuation in a separating flow dominated by Kelvin-Helmholtz shedding. For this purpose, the feature space, in which the model is learned, and the admissible control inputs are tailored to strongly oscillatory flows.
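A rough sketch of the cluster-based modelling pipeline described above: snapshots are grouped into clusters, a Markov transition matrix is estimated from consecutive labels, and its ergodic distribution is extracted. The random "snapshots" stand in for real flow data (e.g., POD coefficients); k-means and the cluster count are plausible choices, not necessarily those of the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Stand-in flow snapshots (rows = time, cols = features, e.g. POD coefficients)
snapshots = rng.standard_normal((2000, 10))

k = 8   # number of clusters of kinematically similar states
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(snapshots)

# Markov transition matrix estimated from consecutive cluster labels
P = np.full((k, k), 1e-12)          # tiny floor avoids empty-row division
for a, b in zip(labels[:-1], labels[1:]):
    P[a, b] += 1.0
P /= P.sum(axis=1, keepdims=True)

# Ergodic distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
print("ergodic cluster probabilities:", np.round(pi, 3))
```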
A Hybrid Demand Response Simulator Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-05-02
A hybrid demand response simulator (HDRS) is developed to test different control algorithms for centralized and distributed demand response (DR) programs in a small distribution power grid. The HDRS is designed to model a wide variety of DR services such as peak shaving, load shifting, arbitrage, spinning reserves, load following, regulation, emergency load shedding, etc. The HDRS does not model the dynamic behaviors of the loads; rather, it simulates the load scheduling and dispatch process. The load models include TCAs (water heaters, air conditioners, refrigerators, freezers, etc.) and non-TCAs (lighting, washers, dishwashers, etc.). Ambient temperature changes, thermal resistance, capacitance, and the unit control logic can be modeled for TCA loads. The use patterns of non-TCA loads can be modeled by probability of use and probabilistic durations. Some communication network characteristics, such as delays and errors, can also be modeled. Most importantly, because the simulator is modular and greatly simplifies the thermal models for TCA loads, it can be used quickly and easily to test and validate different control algorithms in a simulated environment.
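As an illustration of the simplified TCA treatment the abstract describes, the sketch below models a single thermostatically controlled water heater as a first-order thermal circuit with hysteresis control; all parameter values are invented for illustration and are not taken from the HDRS.

```python
import numpy as np

# Minimal sketch of a simplified TCA (water heater) model: first-order
# thermal dynamics with hysteresis (deadband) control.
dt = 60.0                    # time step (s)
R, C = 0.05, 4e6             # thermal resistance (K/W), capacitance (J/K)
P_on = 4500.0                # heating element power (W)
T_amb, T_set, db = 20.0, 55.0, 2.0   # ambient, setpoint, deadband (deg C)

T, on = 50.0, False
for step in range(600):      # 600 one-minute steps = 10 h
    # hysteresis control logic: switch on below band, off above band
    if T < T_set - db:
        on = True
    elif T > T_set + db:
        on = False
    q = P_on if on else 0.0
    T += dt * ((T_amb - T) / R + q) / C   # first-order thermal update

print(f"tank temperature after 10 h: {T:.1f} C")
```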
NASA Astrophysics Data System (ADS)
Sun, Hu; Zhang, Aijia; Wang, Yishou; Qing, Xinlin P.
2017-04-01
Guided wave-based structural health monitoring (SHM) has received considerable attention and has been widely studied for large-scale aircraft structures. Nevertheless, it remains difficult to apply SHM systems on board or online, one of the most serious obstacles being environmental influence. Load is one factor that affects not only the host structure, in which the guided wave propagates, but also the PZT, by which the guided wave is transmitted and received. In this paper, numerical analysis using the finite element method is used to study the effect of load on guided waves acquired by PZT. Static loads of different grades are considered to analyze their effect on the guided wave signals that the PZT transmits and receives. Based on the variation trend of guided waves versus load, a load compensation method is developed to eliminate the effects of load in the process of damage detection. A probabilistic reconstruction algorithm based on the signal variation of transmitter-receiver paths is employed to identify the damage. Numerical tests are conducted to verify the feasibility and effectiveness of the proposed method.
A Methodology for Multihazards Load Combinations of Earthquake and Heavy Trucks for Bridges
Wang, Xu; Sun, Baitao
2014-01-01
Load combinations of earthquakes and heavy trucks are an important element of multihazard bridge design. Current load and resistance factor design (LRFD) specifications usually treat extreme hazards alone and have no probabilistic basis for extreme load combinations. Earthquake load and heavy truck load are random processes with distinct characteristics, and the maximum combined load is not the simple superposition of their individual maxima. The traditional Ferry Borges-Castaneda model, which accounts for load duration and occurrence probability, describes well the conversion of random processes to random variables and their combination, but it imposes strict constraints on the selection of time intervals to obtain precise results. Turkstra's rule combines one load at its maximum value over the bridge's service life with another load at its instantaneous (or mean) value, which looks more rational, but the results are generally unconservative. Therefore, a modified model is presented here that retains the advantages of both the Ferry Borges-Castaneda model and Turkstra's rule. The modified model is based on conditional probability, which allows random processes to be converted to random variables relatively easily while accounting for the nonmaximum factor in load combinations. Earthquake load and heavy truck load combinations are employed to illustrate the model. Finally, the results of a numerical simulation are used to verify the feasibility and rationality of the model.
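The contrast between a full process combination and Turkstra's rule can be illustrated with a small Monte Carlo experiment; the Gumbel load models and parameters below are hypothetical stand-ins, not the paper's calibrated loads, and the yearly-interval combination is a crude Ferry Borges-style approximation.

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 20_000, 100   # Monte Carlo trials, service life in years

# Hypothetical yearly-maximum load effects (illustrative Gumbel parameters)
eq = rng.gumbel(1.0, 0.5, (n, T))   # earthquake load effect per year
tr = rng.gumbel(2.0, 0.3, (n, T))   # heavy-truck load effect per year

# Process combination: lifetime maximum of the yearly combined load
combined = (eq + tr).max(axis=1)

# Turkstra's rule: one load at its lifetime maximum, the other at its mean
turkstra = np.maximum(eq.max(axis=1) + tr.mean(),
                      tr.max(axis=1) + eq.mean())

print(f"mean lifetime max, process model:   {combined.mean():.2f}")
print(f"mean lifetime max, Turkstra's rule: {turkstra.mean():.2f}")
print(f"Turkstra underestimates in {np.mean(turkstra < combined):.0%} of trials")
```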
A method to approximate a closest loadability limit using multiple load flow solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yorino, Naoto; Harada, Shigemi; Cheng, Haozhong
A new method is proposed to approximate a closest loadability limit (CLL), or closest saddle node bifurcation point, using a pair of multiple load flow solutions. More precisely, the points obtainable by the method are stationary points, including not only the CLL but also farthest and saddle points. An operating solution and a low voltage load flow solution are used to efficiently estimate the node injections at a CLL as well as the left and right eigenvectors corresponding to the zero eigenvalue of the load flow Jacobian. These can be used in monitoring the loadability margin, in identifying weak spots in a power system, and in examining optimal controls against voltage collapse. Most of the computation time of the proposed method is spent calculating the load flow solution pair; the remaining computation time is less than that of an ordinary load flow.
NASA Astrophysics Data System (ADS)
Murphy, K. W.; Ellis, A. W.
2012-12-01
The Salt and Verde River watersheds in the Lower Colorado River Basin are a very important surface water resource in the southwestern United States. Their runoff is captured by a downstream reservoir system serving approximately 40% of the water demand and providing hydroelectric power to the Phoenix, Arizona area. Concerns have been expressed over the risks associated with their highly variable climate dependencies, under the realization that the short historical stream flow record is but one of many possible temporal and volumetric outcome sequences. A characterization of the possible range of flow deficits arising from natural variability beyond those evident in the instrumental record can facilitate sustainability planning as well as adaptation to future climate change scenarios. Methods were developed for this study to generate very long seasonal time series of net reservoir inflows by Monte Carlo simulation of the Salt and Verde watersheds, which can be analyzed for detailed probabilistic insights. Other efforts to generate stochastic flow representations for impact assessments have been limited by normality assumptions, inability to represent the covariance of flow contributions from multiple watersheds, complexities of different seasonal origins of precipitation and runoff dependencies, and constraints from spectral properties of the observational record. These difficulties were overcome in this study through stationarity assessments and the development of joint probability distributions with highly skewed discrete density functions characteristic of the different watershed-season behaviors, derived from a 123-year record. Methods of introducing season-to-season correlations, owing to runoff efficiency enhancements from antecedent precipitation, have also been incorporated. Representative 10,000-year time series have been stochastically generated which reflect a full range of temporal variability in flow volume distributions. Extreme value statistical analysis methods have been employed to characterize periods of flow deficit under various definitions of a drought period. Of concern for water resources are periods of net flows lower than those necessary to maintain reservoirs without sequential depletions. Probabilities of droughts lasting from only a few years up to 25 years duration have been identified, along with their distributions of time to occurrence and cumulative flow deficits, which can reach 50%. The analysis has yielded representations of the full range of drought severity in both depth and duration, providing useful quantitative guidance for risk management. Similarly, the risks of extremely high flows can be quantified. This study demonstrates that the instrumented historical record, once fully characterized and probabilistically represented, can yield many more insights into threatening periods of both hydrologic deficit and excess than is often assumed.
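A compact sketch of one way to realize such generation, using a normal copula to impose season-to-season correlation on skewed gamma marginals and then scanning the synthetic record for drought runs; the distributions, correlation and threshold are illustrative, not the fitted Salt-Verde values.

```python
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(3)
n_years, rho = 10_000, 0.3   # record length; assumed season-to-season correlation

# Normal copula: correlated standard normals mapped through skewed gamma
# marginals (illustrative shapes/scales, not the fitted watershed values).
z = rng.standard_normal((n_years, 2))
z[:, 1] = rho * z[:, 0] + np.sqrt(1.0 - rho**2) * z[:, 1]
u = norm.cdf(z)
winter = gamma.ppf(u[:, 0], a=2.0, scale=150.0)   # skewed winter inflow (hm^3)
summer = gamma.ppf(u[:, 1], a=1.2, scale=40.0)    # skewed summer inflow (hm^3)
annual = winter + summer

# Drought runs: consecutive years below an assumed demand threshold
deficit = annual < np.percentile(annual, 25)
runs, run = [], 0
for d in deficit:
    if d:
        run += 1
    elif run:
        runs.append(run)
        run = 0
if run:
    runs.append(run)
print(f"longest simulated drought: {max(runs)} years")
```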
Energy Approach-Based Simulation of Structural Materials High-Cycle Fatigue
NASA Astrophysics Data System (ADS)
Balayev, A. F.; Korolev, A. V.; Kochetkov, A. V.; Sklyarova, A. I.; Zakharov, O. V.
2016-02-01
The paper describes the mechanism of micro-crack development in solid structural materials based on the theory of brittle fracture. A probability function of the energy distribution of material cracks is obtained using a probabilistic approach. The paper states energy conditions for crack growth under high-cycle loading of the material. A formula for calculating the amount of energy absorbed during crack growth is given. The paper proposes a high-cycle fatigue evaluation criterion for determining the maximum permissible number of loading cycles of a solid body, beyond which micro-cracks grow rapidly up to destruction.
Failed rib region prediction in a human body model during crash events with precrash braking.
Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S
2018-02-28
The objective of this study is twofold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions depending on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak maximum deviation: ΔX_head = 17 mm; ΔZ_head = 4 mm; ΔX_thorax = 5 mm; ΔZ_thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but one case, where they were equal. The failed region patterns between the methods are similar; however, differences arise because element elimination redistributes stress, causing probabilistic failed regions to continue to accumulate after the point at which no further deterministic failed regions would be predicted. Both the probabilistic and deterministic methods indicate similar trends with regard to the effect of precrash braking; however, there are tradeoffs. The deterministic failed region method is more spatially sensitive to failure and more sensitive to belt loads. The probabilistic failed region method allows for increased capability in postprocessing with respect to age, and it predicted more failed regions than the deterministic failed region method due to differences in force distribution.
Manpower Planning Models. 5. Optimization Models
1975-10-01
Keywords: Manpower Planning; Modelling; Optimization. [Abstract fragment] ...notation resulting from the previous maximum M. We exploit the probabilistic interpretation of the flow process whenever it eases the exposition.
Probabilistic framework for product design optimization and risk management
NASA Astrophysics Data System (ADS)
Keski-Rahkonen, J. K.
2018-05-01
Probabilistic methods have gradually gained ground within engineering practice, but it is still the industry standard to use deterministic safety-margin approaches to dimension components and qualitative methods to manage product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and seeks to harness their best features and simplify the process so that it is applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes because of the well-developed methods used to predict these types of failures; however, the same framework can be applied to any failure mode for which predictive models can be developed.
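A minimal sketch of the recommended Monte Carlo load-resistance computation, extended with the kind of life-cycle-cost trade-off the framework targets; the distributions, parameters and cost figures are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

# Hypothetical load-resistance model: failure when load effect S exceeds
# resistance R. Distributions and parameters are illustrative assumptions.
R = rng.lognormal(mean=np.log(500.0), sigma=0.08, size=n)   # strength (MPa)
S = rng.gumbel(loc=300.0, scale=30.0, size=n)               # load effect (MPa)

pf = np.mean(S > R)                     # failure probability estimate
se = np.sqrt(pf * (1 - pf) / n)         # Monte Carlo standard error
print(f"P_f ~ {pf:.2e} +/- {se:.1e}")

# Life cycle cost trade-off: expected cost = design cost + P_f * failure cost
for margin in (1.0, 1.1, 1.2):
    pf_m = np.mean(S > margin * R)
    cost = 1000.0 * margin + pf_m * 1e6   # illustrative cost figures
    print(f"margin {margin:.1f}: P_f={pf_m:.2e}, expected cost={cost:.0f}")
```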
Internet traffic load balancing using dynamic hashing with flow volume
NASA Astrophysics Data System (ADS)
Jo, Ju-Yeon; Kim, Yoohwan; Chao, H. Jonathan; Merat, Francis L.
2002-07-01
Sending IP packets over multiple parallel links is in extensive use in today's Internet and its use is growing due to its scalability, reliability and cost-effectiveness. To maximize the efficiency of parallel links, load balancing is necessary among the links, but it may cause the problem of packet reordering. Since packet reordering impairs TCP performance, it is important to reduce the amount of reordering. Hashing offers a simple solution to keep the packet order by sending a flow over a unique link, but static hashing does not guarantee an even distribution of the traffic amount among the links, which could lead to packet loss under heavy load. Dynamic hashing offers some degree of load balancing but suffers from load fluctuations and excessive packet reordering. To overcome these shortcomings, we have enhanced the dynamic hashing algorithm to utilize the flow volume information in order to reassign only the appropriate flows. This new method, called dynamic hashing with flow volume (DHFV), eliminates unnecessary flow reassignments of small flows and achieves load balancing very quickly without load fluctuation by accurately predicting the amount of transferred load between the links. In this paper we provide the general framework of DHFV and address the challenges in implementing DHFV. We then introduce two algorithms of DHFV with different flow selection strategies and show their performances through simulation.
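A toy sketch of the volume-aware rehashing idea: static hashing pins each flow to a link (preserving packet order), and only the largest flows are moved off overloaded links; the data structures, threshold and hash choice here are assumptions, not the DHFV specification.

```python
import hashlib

N_LINKS = 4
link_load = [0.0] * N_LINKS
flow_table = {}   # flow id -> (assigned link, measured volume)

def assign(flow_id: str) -> int:
    """Static hash keeps a flow on one link, preserving packet order."""
    h = int(hashlib.md5(flow_id.encode()).hexdigest(), 16)
    return h % N_LINKS

def offer(flow_id: str, volume: float) -> int:
    """Record a flow's volume and return the link it is currently on."""
    link = flow_table.get(flow_id, (assign(flow_id), 0.0))[0]
    flow_table[flow_id] = (link, volume)
    link_load[link] += volume
    return link

def rebalance(threshold: float = 1.25) -> None:
    """Move only the largest flows off overloaded links, limiting the
    number of reassigned flows and hence packet-reordering churn."""
    mean = sum(link_load) / N_LINKS
    for fid, (link, vol) in sorted(flow_table.items(), key=lambda kv: -kv[1][1]):
        if link_load[link] > threshold * mean:
            target = min(range(N_LINKS), key=lambda i: link_load[i])
            link_load[link] -= vol
            link_load[target] += vol
            flow_table[fid] = (target, vol)
```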
NASA Astrophysics Data System (ADS)
Wang, Yaping; Lin, Shunjiang; Yang, Zhibin
2017-05-01
In traditional three-phase power flow calculations for the low-voltage distribution network, the load model is described as constant power. Since this model cannot reflect the characteristics of actual loads, the result of the traditional calculation always differs from the actual situation. In this paper, a load model in which dynamic load, represented by air conditioners, is paralleled with static load, represented by lighting, is used to describe the characteristics of residential loads, and a three-phase power flow calculation model is proposed. The power flow calculation model includes the power balance equations of the three phases (A, B, C), the current balance equations of phase 0, and the torque balance equations of the induction motors in the air conditioners. An alternating iterative algorithm that solves the induction motor torque balance equations together with the node balance equations is then proposed to solve the three-phase power flow model. The method is applied to an actual low-voltage distribution network with residential loads, and calculations for three different operating states of the air conditioners demonstrate the effectiveness of the proposed model and algorithm.
Multi-model ensemble hydrologic prediction using Bayesian model averaging
NASA Astrophysics Data System (ADS)
Duan, Qingyun; Ajami, Newsha K.; Gao, Xiaogang; Sorooshian, Soroosh
2007-05-01
A multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of a Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models. BMA is a statistical procedure that infers consensus predictions by weighing individual predictions based on their probabilistic likelihood measures, with the better performing predictions receiving higher weights than the worse performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows). Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using the Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split-sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme has the advantage of generating more skillful and equally reliable probabilistic predictions than the original ensemble. The performance of the expected BMA predictions in terms of daily root mean square error (DRMS) and daily absolute mean error (DABS) is generally superior to that of the best individual predictions. Furthermore, the BMA predictions employing multiple sets of weights are generally better than those using a single set of weights.
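For a given set of weights and member error variances (normally estimated by EM on a training period, after the Box-Cox transformation mentioned above), the BMA predictive mean and variance follow directly from the mixture form; the sketch below uses random stand-ins for the ensemble, weights and variances.

```python
import numpy as np

rng = np.random.default_rng(5)
K, T = 9, 365                                  # ensemble members, days
member = rng.lognormal(3.0, 0.3, (K, T))       # stand-in member predictions
w = rng.dirichlet(np.ones(K))                  # stand-in BMA weights (sum to 1)
sigma2 = np.full(K, 0.2 ** 2)                  # stand-in per-member variances

# BMA predictive mean: weighted average of the member predictions
mean = w @ member

# Total predictive variance = between-member spread + within-member variance
var = w @ (member - mean) ** 2 + w @ sigma2

print("first 3 days:", np.round(mean[:3], 1), "+/-", np.round(np.sqrt(var[:3]), 1))
```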
2016-08-23
Different percentages of clay (10 to 30%) and sand (35 to 55%) have been used to represent various flow concentrations (Table 1). Dynamic viscosity of the... viscosity, was adopted as the wall boundary treatment method. 2.2 Physical Domain: The domain consists of a 7.0 m long flume, which has an inclination of... the shear stress, μ_app is the apparent viscosity, K is the flow consistency index, n is the flow behavior index, and γ is the shear rate, which is
NASA Astrophysics Data System (ADS)
Garbin, Silvia; Alessi Celegon, Elisa; Fanton, Pietro; Botter, Gianluca
2017-04-01
The temporal variability of the river flow regime is a key feature structuring and controlling fluvial ecological communities and ecosystem processes. In particular, streamflow variability induced by climate/landscape heterogeneities or other anthropogenic factors significantly affects the connectivity between streams, with notable implications for river fragmentation. Hydrologic connectivity is a fundamental property that guarantees species persistence and ecosystem integrity in riverine systems. In riverine landscapes, most ecological transitions are flow-dependent, and the structure of flow regimes may affect ecological functions of endemic biota (e.g., fish spawning or grazing by invertebrate species). Therefore, minimum flow thresholds must be guaranteed to support specific ecosystem services, like fish migration, aquatic biodiversity and habitat suitability. In this contribution, we present a probabilistic approach aiming at a spatially explicit, quantitative assessment of hydrologic connectivity at the network scale as derived from river flow variability. Dynamics of daily streamflows are estimated based on catchment-scale climatic and morphological features, integrating a stochastic, physically based approach that accounts for the stochasticity of rainfall with a water balance model and a geomorphic recession flow model. The non-exceedance probability of ecologically meaningful flow thresholds is used to evaluate the fragmentation of individual stream reaches and the ensuing network-scale connectivity metrics. A multi-dimensional Poisson process for the stochastic generation of rainfall is used to evaluate the impact of the climate signature on reach-scale and catchment-scale connectivity. The analysis shows that streamflow patterns and network-scale connectivity are influenced by the topology of the river network and the spatial variability of climatic properties (rainfall, evapotranspiration). The framework offers a robust basis for predicting the impact of land-use/land-cover changes and river regulation on network-scale connectivity.
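A schematic version of the threshold-based connectivity computation: given daily flow series per reach, compute each reach's non-exceedance (fragmentation) probability and a simple path-connectivity metric. The flow series, threshold and reach-independence assumption below are illustrative simplifications of the stochastic model the paper describes.

```python
import numpy as np

rng = np.random.default_rng(6)
# Stand-in daily flow series per reach (m^3/s); in the paper these come
# from the stochastic water-balance / recession model, not random draws.
q = rng.lognormal(np.log(5.0), 0.8, size=(4, 3650))   # 4 reaches, 10 years
q_min = 2.0   # ecologically meaningful minimum-flow threshold (assumed)

# Reach-scale fragmentation: non-exceedance probability of the threshold
p_frag = (q < q_min).mean(axis=1)

# Crude network-scale metric for reaches in series: probability that every
# reach along the path is above threshold, assuming independence.
p_connected = np.prod(1.0 - p_frag)
print("reach fragmentation probabilities:", np.round(p_frag, 3))
print(f"path connectivity: {p_connected:.3f}")
```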
NASA Astrophysics Data System (ADS)
Robbins, Joshua; Voth, Thomas
2011-06-01
Material response to dynamic loading is often dominated by microstructure such as grain topology, porosity, inclusions, and defects; however, many models rely on assumptions of homogeneity. We use the probabilistic finite element method (PFEM; WK Liu, IJNME, 1986) to introduce local uncertainty to account for material heterogeneity. The PFEM uses statistical information about the local material response (i.e., its expectation, coefficient of variation, and autocorrelation) drawn from knowledge of the microstructure, single crystal behavior, and direct numerical simulation (DNS) to determine the expectation and covariance of the system response (velocity, strain, stress, etc.). This approach is compared to resolved grain-scale simulations of the equivalent system. The microstructures used for the DNS are produced using Monte Carlo simulations of grain growth, and a sufficient number of realizations are computed to ensure a meaningful comparison. Finally, comments are made regarding the suitability of one-dimensional PFEM for modeling material heterogeneity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
NASA Astrophysics Data System (ADS)
Wei, Yaochi; Kim, Seokpum; Horie, Yasuyuki; Zhou, Min
2017-06-01
A computational approach is developed to predict the probabilistic ignition thresholds of polymer-bonded explosives (PBXs). The simulations explicitly account for microstructure, constituent properties, and interfacial responses and capture the processes responsible for the development of hotspots and damage. The specific damage mechanisms considered include viscoelasticity, viscoplasticity, fracture, post-fracture contact, frictional heating, and heat conduction. The probabilistic analysis uses sets of statistically similar microstructure samples to mimic relevant experiments for statistical variations of material behavior due to inherent material heterogeneities. The ignition thresholds and corresponding ignition probability maps are predicted for PBX 9404 and PBX 9501 for the impact loading regime of U_p = 200-1200 m/s. The James and Walker-Wasley relations are utilized to establish explicit analytical expressions for the ignition probability as a function of load intensity. The predicted results are in good agreement with available experimental measurements. The capability to computationally predict the macroscopic response from material microstructures and basic constituent properties lends itself to the design of new materials and the analysis of existing materials. The authors gratefully acknowledge the support from the Air Force Office of Scientific Research (AFOSR) and the Defense Threat Reduction Agency (DTRA).
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Soditus, Sherry; Hendricks, Robert C.; Zaretsky, Erwin V.
2002-01-01
Over the past two decades there has been considerable effort by NASA Glenn and others to develop probabilistic codes to predict, with reasonable engineering certainty, the life and reliability of critical components in rotating machinery and, more specifically, in the rotating sections of airbreathing and rocket engines. These codes have, to a very limited extent, been verified with relatively small bench-rig-type specimens under uniaxial loading. Because of the small and very narrow database, the acceptance of these codes within the aerospace community has been limited. An alternative approach to generating statistically significant data under complex loading and environments simulating aircraft and rocket engine conditions is to obtain, catalog and statistically analyze actual field data. End users of the engines, such as commercial airlines and the military, record and store operational and maintenance information. This presentation describes a cooperative program between NASA GRC, United Airlines, USAF Wright Laboratory, the U.S. Army Research Laboratory and the Australian Aeronautical & Maritime Research Laboratory to obtain and analyze these airline data for selected components such as blades, disks and combustors. These airline data will be used to benchmark and compare existing life prediction codes.
Probabilistic fracture finite elements
NASA Technical Reports Server (NTRS)
Liu, W. K.; Belytschko, T.; Lua, Y. J.
1991-01-01
Probabilistic Fracture Mechanics (PFM) is a promising method for estimating the fatigue life and inspection cycles of mechanical and structural components. The Probabilistic Finite Element Method (PFEM), which is based on second moment analysis, has proved to be a promising, practical approach to handling problems with uncertainties. As the PFEM provides a powerful computational tool to determine the first and second moments of random parameters, the second moment reliability method can easily be combined with the PFEM to obtain measures of the reliability of the structural system. The method is also being applied to fatigue crack growth. Uncertainties in the material properties of advanced materials such as polycrystalline alloys, ceramics, and composites are commonly observed in experimental tests. This is mainly attributed to intrinsic microcracks, which are randomly distributed as a result of the applied load and the residual stress.
Rocketdyne PSAM: In-house enhancement/application
NASA Technical Reports Server (NTRS)
Newell, J. F.; Rajagopal, K. R.; Ohara, K.
1991-01-01
Development of the Probabilistic Design Analysis (PDA) process for rocket engines was initiated. This will give engineers a quantitative assessment of calculated reliability during the design process. The PDA will help in choosing better designs, making them more robust, and deciding on critical tests to demonstrate key reliability issues and improve confidence in engine capabilities. Rocketdyne's involvement with the Composite Loads Spectra (CLS) and Probabilistic Structural Analysis Methodology (PSAM) contracts started this effort, and these are key elements in the ongoing developments. Internal development efforts and hardware applications complement and extend the CLS and PSAM efforts. The completion of the CLS option work and the follow-on PSAM developments will also be integral parts of this methodology. A brief summary of these efforts is presented.
ZERO: probabilistic routing for deploy and forget Wireless Sensor Networks.
Vilajosana, Xavier; Llosa, Jordi; Pacho, Jose Carlos; Vilajosana, Ignasi; Juan, Angel A; Vicario, Jose Lopez; Morell, Antoni
2010-01-01
As Wireless Sensor Networks are adopted by industry and agriculture for large-scale and unattended deployments, the need for reliable and energy-conserving protocols becomes critical. Physical- and link-layer efforts at energy conservation are mostly not considered by routing protocols, which concentrate on maintaining reliability and throughput. Gradient-based routing protocols route data through the most reliable links, aiming to ensure 99% packet delivery. However, they suffer from the so-called "hot spot" problem: the most reliable routes exhaust their energy quickly, partitioning the network and reducing the monitored area. To cope with this "hot spot" problem we propose ZERO, a combined approach at the network and link layers that increases network lifespan while conserving reliability levels by means of probabilistic load balancing techniques.
NASA Astrophysics Data System (ADS)
Degtyar, V. G.; Kalashnikov, S. T.; Mokin, Yu. A.
2017-10-01
The paper considers problems of analyzing the aerodynamic properties (ADP) of reentry vehicles (RV) treated as blunted rotary bodies with small random surface distortions. The interrelated problems of mathematical simulation of surface distortions, selection of tools for predicting the ADPs of shaped bodies, evaluation of different types of ADP variations, and their adaptation for dynamic problems are analyzed. The possibilities of deterministic and probabilistic approaches to the evaluation of ADP variations are considered, and the practical value of the probabilistic approach is demonstrated. Examples of extremal deterministic evaluations of ADP variations for a sphere and a sharp cone are given.
Concurrent Probabilistic Simulation of High Temperature Composite Structural Response
NASA Technical Reports Server (NTRS)
Abdi, Frank
1996-01-01
A computational structural/material analysis and design tool which would meet industry's future demand for expedience and reduced cost is presented. This unique software, GENOA, is dedicated to parallel and high-speed analysis to perform probabilistic evaluation of the high-temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives achieved during the development were: (1) utilization of the power of parallel processing and static/dynamic load-balancing optimization to make the complex simulation of structure, material and processing of high-temperature composites affordable; (2) computational integration and synchronization of probabilistic mathematics, structural/material mechanics and parallel computing; (3) implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism, increasing convergence rates through high- and low-level processor assignment; (4) creation of the framework for a portable parallel architecture for machine-independent Multiple Instruction Multiple Data (MIMD), Single Instruction Multiple Data (SIMD), hybrid and distributed-workstation types of computers; and (5) market evaluation. The results of the Phase 2 effort provide a good basis for continuation and warrant a Phase 3 government and industry partnership.
Ganju, N.K.; Knowles, N.; Schoellhamer, D.H.
2008-01-01
In this study we used hydrologic proxies to develop a daily sediment load time-series which, when integrated, agrees with decadal sediment load estimates. Hindcast simulations of bathymetric change in estuaries require daily sediment loads from major tributary rivers to capture the episodic delivery of sediment during multi-day freshwater flow pulses. Two independent decadal sediment load estimates are available for the Sacramento/San Joaquin River Delta, California prior to 1959, but they must be downscaled to a daily interval for use in hindcast models. Daily flow and sediment load data to the Delta are available after 1930 and 1959, respectively, but bathymetric change simulations for San Francisco Bay prior to this require a method to generate daily sediment load estimates into the Delta. We used two historical proxies, monthly rainfall and unimpaired flow magnitudes, to generate monthly unimpaired flows to the Sacramento/San Joaquin Delta for the 1851-1929 period. This step generated the shape of the monthly hydrograph. These historical monthly flows were compared to unimpaired monthly flows from the modern era (1967-1987), and a least-squares metric selected a modern water-year analogue for each historical water year. The daily hydrograph for the modern analogue was then assigned to the historical year and scaled to match the flow volume estimated by dendrochronology methods, providing the correct total flow for the year. We applied a sediment rating curve to this time-series of daily flows to generate daily sediment loads for 1851-1958. The rating curve was calibrated with the two independent decadal sediment load estimates over two distinct periods. This novel technique retained the timing and magnitude of freshwater flows and sediment loads, without damping variability or net sediment loads to San Francisco Bay. The time-series represents the hydraulic mining period with sustained periods of increased sediment loads, and a dramatic decrease after 1910, corresponding to a reduction in available mining debris. The analogue selection procedure also permits exploration of the morphological hydrograph concept, where a limited set of hydrographs is used to simulate the same bathymetric change as the actual set of hydrographs. The final daily sediment load time-series and morphological hydrograph concept will be applied as landward boundary conditions for hindcast simulations of bathymetric change in San Francisco Bay.
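A condensed sketch of the analogue-selection and rating-curve chain described above, with random stand-ins for the monthly flow records and invented rating coefficients; the real procedure uses dendrochronology-scaled volumes and decadal calibration targets.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-ins for monthly unimpaired flows: one historical year vs. a pool
# of modern candidate years (e.g., 1967-1987).
hist_year = rng.gamma(2.0, 50.0, 12)        # 12 monthly flows (proxy-derived)
modern = rng.gamma(2.0, 50.0, (21, 12))     # modern analogue pool

# Least-squares selection of the modern analogue year
errors = ((modern - hist_year) ** 2).sum(axis=1)
analogue = modern[np.argmin(errors)]

# Scale a (crudely disaggregated) daily hydrograph to the reconstructed
# annual volume, preserving the analogue's temporal shape.
daily = np.repeat(analogue / 30.0, 30)      # naive 30-day months
daily *= hist_year.sum() / daily.sum()

# Sediment rating curve Qs = a * Q**b, with coefficients that would be
# calibrated against the decadal sediment load estimates.
a, b = 0.01, 1.8                            # illustrative coefficients
sediment = a * daily ** b
print(f"annual sediment load: {sediment.sum():.0f} (arbitrary units)")
```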
Analysis of Ares Crew Launch Vehicle Transonic Alternating Flow Phenomenon
NASA Technical Reports Server (NTRS)
Sekula, Martin K.; Piatak, David J.; Rausch, Russ D.
2012-01-01
A transonic wind tunnel test of the Ares I-X Rigid Buffet Model (RBM) identified a Mach number regime where unusually large buffet loads are present. A subsequent investigation identified the cause of these loads to be an alternating flow phenomenon at the Crew Module-Service Module junction. The conical design of the Ares I-X Crew Module and the cylindrical design of the Service Module expose the vehicle to unsteady pressure loads due to the sudden transition between a subsonic separated and a supersonic attached flow about the cone-cylinder junction as the local flow randomly fluctuates back and forth between the two flow states. These fluctuations produce a square-wave-like pattern in the pressure time histories, resulting in large amplitude, impulsive buffet loads. Subsequent testing of the Ares I RBM found much lower buffet loads, since the evolved Ares I design includes an ogive fairing that covers the Crew Module-Service Module junction, thereby making the vehicle less susceptible to the onset of alternating flow. An analysis of the alternating flow separation and attachment phenomenon indicates that the phenomenon is most severe at low angles of attack and exacerbated by the presence of vehicle protuberances. A launch vehicle may experience either a single or, at most, a few impulsive loads, since it is constantly accelerating during ascent rather than dwelling at constant flow conditions as in a wind tunnel. A comparison of a wind-tunnel-test-data-derived impulsive load to a flight-test-data-derived load indicates a significant over-prediction in the magnitude and duration of the buffet load.
In Situ Distribution Guided Analysis and Visualization of Transonic Jet Engine Simulations.
Dutta, Soumya; Chen, Chun-Ming; Heinlein, Gregory; Shen, Han-Wei; Chen, Jen-Ping
2017-01-01
Study of flow instability in turbine engine compressors is crucial to understanding the inception and evolution of engine stall. Aerodynamics experts have been working on detecting the early signs of stall in order to devise novel stall suppression technologies. A state-of-the-art Navier-Stokes based, time-accurate computational fluid dynamics simulator, TURBO, has been developed at NASA to enhance the understanding of flow phenomena undergoing rotating stall. Despite the proven high modeling accuracy of TURBO, the excessive volume of simulation data prohibits post hoc analysis in terms of both storage and I/O time. To address these issues and allow the expert to perform scalable stall analysis, we have designed an in situ distribution-guided stall analysis technique. Our method summarizes statistics of important properties of the simulation data in situ using a probabilistic data modeling scheme. This data summarization enables statistical anomaly detection for flow instability in post analysis, revealing the spatiotemporal trends of rotating stall from which the expert can conceive new hypotheses. Furthermore, the verification of the hypotheses and exploratory visualization using the summarized data are realized using probabilistic visualization techniques such as uncertain isocontouring. Positive feedback from the domain scientist has indicated the efficacy of our system in exploratory stall analysis.
Joint Seasonal ARMA Approach for Modeling of Load Forecast Errors in Planning Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafen, Ryan P.; Samaan, Nader A.; Makarov, Yuri V.
2014-04-14
To make informed and robust decisions in the probabilistic power system operation and planning process, it is critical to conduct multiple simulations of the generated combinations of wind and load parameters and their forecast errors to handle the variability and uncertainty of these time series. In order for the simulation results to be trustworthy, the simulated series must preserve the salient statistical characteristics of the real series. In this paper, we analyze day-ahead load forecast error data from multiple balancing authority (BA) locations and characterize statistical properties such as mean, standard deviation, autocorrelation, correlation between series, time-of-day bias, and time-of-day autocorrelation. We then construct and validate a seasonal autoregressive moving average (ARMA) model to capture these characteristics, and use the model to jointly simulate day-ahead load forecast error series for all BAs.
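A stripped-down stand-in for such a simulator, using a time-of-day bias and standard deviation modulating a lag-1 autoregressive driver; the hourly statistics and the AR(1) form are illustrative simplifications of the paper's seasonal ARMA model, not its fitted specification.

```python
import numpy as np

rng = np.random.default_rng(8)
days, H = 365, 24
hours = np.arange(H)

# Illustrative hourly statistics; in practice these are estimated from the
# balancing authorities' day-ahead forecast-error archives.
bias = 50.0 * np.sin(2 * np.pi * hours / H)         # time-of-day bias (MW)
sd = 120.0 + 40.0 * np.cos(2 * np.pi * hours / H)   # time-of-day std dev (MW)
phi = 0.85                                          # lag-1 autocorrelation

# Standardized AR(1) driver, then rescale by the hourly statistics
e = np.zeros(days * H)
for t in range(1, e.size):
    e[t] = phi * e[t - 1] + np.sqrt(1 - phi ** 2) * rng.standard_normal()
idx = np.arange(e.size) % H
err = bias[idx] + sd[idx] * e

print(f"simulated error: mean={err.mean():.1f} MW, sd={err.std():.1f} MW")
print(f"lag-1 autocorrelation: {np.corrcoef(err[:-1], err[1:])[0, 1]:.2f}")
```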
Analysis of scale effect in compressive ice failure and implications for design
NASA Astrophysics Data System (ADS)
Taylor, Rocky Scott
The main focus of the study was the analysis of scale effect in local ice pressure resulting from probabilistic (spalling) fracture and the relationship between local and global loads due to the averaging of pressures across the width of a structure. A review of fundamental theory, relevant ice mechanics and a critical analysis of data and theory related to the scale-dependent pressure behavior of ice were completed. To study high pressure zones (hpzs), data from small-scale indentation tests carried out at the NRC-IOT were analyzed, including small-scale ice block and ice sheet tests. Finite element analysis was used to model a sample ice block indentation event using a damaging, viscoelastic material model and element removal techniques (for spalling). Medium scale tactile sensor data from the Japan Ocean Industries Association (JOIA) program were analyzed to study details of hpz behavior. The averaging of non-simultaneous hpz loads during an ice-structure interaction was examined using local panel pressure data. Probabilistic averaging methodology for extrapolating full-scale pressures from local panel pressures was studied and an improved correlation model was formulated. Panel correlations for high speed events were observed to be lower than panel correlations for low speed events. Global pressure estimates based on probabilistic averaging were found to give substantially lower average errors in estimation of load compared with methods based on linear extrapolation (no averaging). Panel correlations were analyzed for Molikpaq and compared with JOIA results. From this analysis, it was shown that averaging does result in decreasing pressure for increasing structure width. The relationship between local pressure and ice thickness for a panel of unit width was studied in detail using full-scale data from the STRICE, Molikpaq, Cook Inlet and Japan Ocean Industries Association (JOIA) data sets. A distinct trend of decreasing pressure with increasing ice thickness was observed. The pressure-thickness behavior was found to be well modeled by the power law relationships P_avg = 0.278 h^(-0.408) MPa and P_std = 0.172 h^(-0.273) MPa for the mean and standard deviation of pressure, respectively. To study theoretical aspects of spalling fracture and the pressure-thickness scale effect, probabilistic failure models have been developed. A probabilistic model based on Weibull theory (tensile stresses only) was first developed. Estimates of failure pressure obtained with this model were orders of magnitude higher than the pressures observed from benchmark data due to the assumption of only tensile failure. A probabilistic fracture mechanics (PFM) model including both tensile and compressive (shear) cracks was developed. Criteria for unstable fracture in tensile and compressive (shear) zones were given. From these results a clear theoretical scale effect in peak (spalling) pressure was observed. This scale effect followed the relationship P_p,th = 0.15 h^(-0.50) MPa, which agreed well with the benchmark data. The PFM model was applied to study the effect of ice edge shape (taper angle) and hpz eccentricity. Results indicated that specimens with flat edges spall at lower pressures while those with more tapered edges spall less readily. The mean peak (failure) pressure was also observed to decrease with increased eccentricity. It was concluded that hpzs centered about the middle of the ice thickness are the zones most likely to create the peak pressures that are of interest in design.
Promising results were obtained using the PFM model, which provides strong support for continued research in the development and application of probabilistic fracture mechanics to the study of scale effects in compressive ice failure and to guide the development of methods for the estimation of design ice pressures.
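The pressure-thickness relationships quoted above are power laws of the form P = a h^b, which can be recovered by least squares in log-log space; the sketch below demonstrates the fit on synthetic data generated around the study's mean-pressure law, with invented scatter.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic stand-in for panel pressure vs. ice thickness observations;
# the study's fitted exponents were -0.408 (mean) and -0.273 (std dev).
h = rng.uniform(0.2, 3.0, 500)                        # ice thickness (m)
p = 0.278 * h ** -0.408 * rng.lognormal(0.0, 0.3, 500)   # pressure (MPa)

# Fit P = a * h**b by linear least squares on log P vs. log h
b, log_a = np.polyfit(np.log(h), np.log(p), 1)
print(f"fitted power law: P = {np.exp(log_a):.3f} h^{b:.3f} MPa")
```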
Probabilistic Predictions of Traffic Demand for En Route Sectors Based on Individual Flight Data
DOT National Transportation Integrated Search
2010-01-01
The Traffic Flow Management System (TFMS) predicts the demand for each sector, and traffic managers use these predictions to spot possible congestion and to take measures to prevent it. These predictions of sector demand, however, are currently made ...
Thermal conductivity of heterogeneous mixtures and lunar soils
NASA Technical Reports Server (NTRS)
Vachon, R. I.; Prakouras, A. G.; Crane, R.; Khader, M. S.
1973-01-01
The theoretical evaluation of the effective thermal conductivity of granular materials is discussed with emphasis upon the heat transport properties of lunar soil. The following types of models are compared: probabilistic, parallel isotherm, stochastic, lunar, and a model based on nonlinear heat flow system synthesis.
PIV measurements in a compact return diffuser under multi-conditions
NASA Astrophysics Data System (ADS)
Zhou, L.; Lu, W. G.; Shi, W. D.
2013-12-01
Due to the complex three-dimensional geometries of impellers and diffusers, their design is a delicate and difficult task. Slight changes can lead to significant changes in hydraulic performance and internal flow structure. Conversely, a grasp of the pump's internal flow pattern can benefit pump design improvement. The internal flow fields in a compact return diffuser have been investigated experimentally under multiple operating conditions. A special Particle Image Velocimetry (PIV) test rig was designed, and two-dimensional PIV measurements were successfully conducted in the diffuser mid-plane to capture the complex flow patterns. The analysis of the results has focused on the flow structure in the diffuser, especially under part-load conditions. The vortex and recirculation flow patterns in the diffuser are captured and analysed accordingly. Under the design and over-load conditions, the flow fields in the diffuser are uniform; strong flow separation and back flow appear at part-load flow rates, with strong back flow captured in one diffuser passage at 0.2Q_des.
NASA Astrophysics Data System (ADS)
Shao, Zhongshi; Pi, Dechang; Shao, Weishi
2018-05-01
This article presents an effective estimation of distribution algorithm, named P-EDA, to solve the blocking flow-shop scheduling problem (BFSP) with the makespan criterion. In the P-EDA, a Nawaz-Enscore-Ham (NEH)-based heuristic and a random method are combined to generate the initial population. Based on several superior individuals provided by a modified linear rank selection, a probabilistic model is constructed to describe the probability distribution of the promising solution space. A path relinking technique is incorporated into the EDA to avoid blind search and improve the convergence property. A modified referenced local search is designed to enhance local exploitation. Moreover, a diversity-maintaining scheme is introduced into the EDA to avoid deterioration of the population. Finally, the parameters of the proposed P-EDA are calibrated using a design-of-experiments approach. Simulation results and comparisons with some well-performing algorithms demonstrate the effectiveness of the P-EDA for solving the BFSP.
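The core EDA loop described above (build a position-probability model from elite permutations, then sample new permutations without replacement) can be sketched as follows; the placeholder objective stands in for a true blocking flow-shop makespan evaluation, and all parameter choices are illustrative rather than the calibrated P-EDA settings.

```python
import numpy as np

rng = np.random.default_rng(10)
n_jobs, pop, elite = 20, 100, 20

def makespan(perm):
    # Placeholder objective; a real BFSP evaluation would compute blocking
    # departure times from the processing-time matrix.
    return float(np.sum(perm * np.arange(len(perm))))

population = [rng.permutation(n_jobs) for _ in range(pop)]
for gen in range(50):
    ranked = sorted(population, key=makespan)[:elite]
    # Probabilistic model: P[job j occupies position i], from the elite set
    M = np.full((n_jobs, n_jobs), 1e-6)   # small floor keeps all jobs samplable
    for perm in ranked:
        for i, j in enumerate(perm):
            M[i, j] += 1.0
    # Sample new permutations position by position, without replacement
    population = []
    for _ in range(pop):
        avail, perm = list(range(n_jobs)), []
        for i in range(n_jobs):
            pr = M[i, avail] / M[i, avail].sum()
            j = rng.choice(avail, p=pr)
            perm.append(j)
            avail.remove(j)
        population.append(np.array(perm))

print("best makespan found:", makespan(min(population, key=makespan)))
```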
Comprehensive probabilistic modelling of environmental emissions of engineered nanomaterials.
Sun, Tian Yin; Gottschalk, Fadri; Hungerbühler, Konrad; Nowack, Bernd
2014-02-01
Concerns about the environmental risks of engineered nanomaterials (ENM) are growing; however, currently very little is known about their concentrations in the environment. Here, we calculate the concentrations of five ENM (nano-TiO2, nano-ZnO, nano-Ag, CNT and fullerenes) in environmental and technical compartments using probabilistic material-flow modelling. We apply the newest data on ENM production volumes, their allocation to and subsequent release from different product categories, and their flows into and within those compartments. Further, we compare newly predicted ENM concentrations to estimates from 2009 and to corresponding measured concentrations of their conventional materials, e.g. TiO2, Zn and Ag. We show that the production volume and the compounds' inertness are crucial factors determining final concentrations. ENM production estimates are generally higher than a few years ago. In most cases, the environmental concentrations of corresponding conventional materials are between one and seven orders of magnitude higher than those for ENM.
Mahdavi, Alireza; Haghighat, Fariborz; Bahloul, Ali; Brochot, Clothilde; Ostiguy, Claude
2015-06-01
It is necessary to investigate the efficiencies of filtering facepiece respirators (FFRs) exposed to ultrafine particles (UFPs) for long periods of time, since the particle loading time may potentially affect the efficiency of FFRs. This article aims to investigate the filtration efficiency of a model of electrostatic N95 FFRs with constant and 'inhalation-only' cyclic flows, in terms of the particle loading time effect, under different humidity conditions. Filters were exposed to generated polydisperse NaCl particles. Experiments were performed mimicking an 'inhalation-only' scenario with a cyclic flow of 85 l min^-1 as the minute volume [or 170 l min^-1 as mean inhalation flow (MIF)] and for two constant flows of 85 and 170 l min^-1, under three relative humidity (RH) levels of 10, 50, and 80%. Each test was performed for loading time periods of 6 h, and the particle penetration (10-205.4 nm in electrical mobility diameter) was measured once every 2 h. For a 10% RH, the penetration of smaller size particles (<80 nm), including the most penetrating particle size (MPPS), decreased over time for both constant and cyclic flows. For 50 and 80% RH levels, the changes in penetration were typically observed in the opposite direction with less magnitude. The penetrations at the MPPS increased with respect to loading time under constant flow conditions (85 and 170 l min^-1); they did not substantially increase under cyclic flows. The comparison of the cyclic flow (85 l min^-1 as minute volume) and the constant flow equal to the cyclic flow minute volume indicated that, for all conditions, the penetration was significantly less for the constant flow than for the cyclic flow. The comparison between the cyclic flow (170 l min^-1 as MIF) and the constant flow equal to the cyclic flow MIF indicated that, for the initial stage of loading, the penetrations were almost equal, but they differed for the final stages of the loading time. For a 10% RH, the penetration of a wide range of sizes was observed to be higher with the cyclic flow (170 l min^-1 as MIF) than with the equivalent constant flow (170 l min^-1). For 50 and 80% RH levels, the penetrations were usually greater with a constant flow (170 l min^-1) than with a cyclic flow (170 l min^-1 as MIF). It is concluded that, for the tested electrostatic N95 filters, the change in penetration as a function of the loading time does not necessarily take place at the same rate under constant (MIF) and cyclic flow. Moreover, for all tested flow rates, the penetration is affected not only by the loading time but also by the RH level. Lower RH levels (10%) show decreasing penetration rates with loading time, while higher RH levels (50 and 80%) show increasing penetration rates. Also, the loading of the filter is normally accompanied by a shift of the MPPS towards larger sizes.
Uncertainty analysis of a groundwater flow model in east-central Florida
Sepúlveda, Nicasio; Doherty, John E.
2014-01-01
A groundwater flow model for east-central Florida has been developed to help water-resource managers assess the impact of increased groundwater withdrawals from the Floridan aquifer system on heads and spring flows originating from the Upper Floridan aquifer. The model provides a probabilistic description of predictions of interest to water-resource managers, given the uncertainty associated with system heterogeneity, the large number of input parameters, and a nonunique groundwater flow solution. The uncertainty associated with these predictions can then be considered in decisions with which the model has been designed to assist. The “Null Space Monte Carlo” method is a stochastic probabilistic approach used to generate a suite of several hundred parameter field realizations, each maintaining the model in a calibrated state, and each considered to be hydrogeologically plausible. The results presented herein indicate that the model’s capacity to predict changes in heads or spring flows that originate from increased groundwater withdrawals is considerably greater than its capacity to predict the absolute magnitudes of heads or spring flows. Furthermore, the capacity of the model to make predictions that are similar in location and in type to those in the calibration dataset exceeds its capacity to make predictions of different types at different locations. The quantification of these outcomes allows defensible use of the modeling process in support of future water-resources decisions. The model allows the decision-making process to recognize the uncertainties, and the spatial/temporal variability of uncertainties that are associated with predictions of future system behavior in a complex hydrogeological context.
Uncertainty analysis of a groundwater flow model in East-central Florida.
Sepúlveda, Nicasio; Doherty, John
2015-01-01
A groundwater flow model for east-central Florida has been developed to help water-resource managers assess the impact of increased groundwater withdrawals from the Floridan aquifer system on heads and spring flows originating from the Upper Floridan Aquifer. The model provides a probabilistic description of predictions of interest to water-resource managers, given the uncertainty associated with system heterogeneity, the large number of input parameters, and a nonunique groundwater flow solution. The uncertainty associated with these predictions can then be considered in decisions with which the model has been designed to assist. The "Null Space Monte Carlo" method is a stochastic probabilistic approach used to generate a suite of several hundred parameter field realizations, each maintaining the model in a calibrated state, and each considered to be hydrogeologically plausible. The results presented herein indicate that the model's capacity to predict changes in heads or spring flows that originate from increased groundwater withdrawals is considerably greater than its capacity to predict the absolute magnitudes of heads or spring flows. Furthermore, the capacity of the model to make predictions that are similar in location and in type to those in the calibration dataset exceeds its capacity to make predictions of different types at different locations. The quantification of these outcomes allows defensible use of the modeling process in support of future water-resources decisions. The model allows the decision-making process to recognize the uncertainties, and the spatial or temporal variability of uncertainties, that are associated with predictions of future system behavior in a complex hydrogeological context.
Gurdak, Jason J.; Walvoord, Michelle Ann; McMahon, Peter B.
2008-01-01
Aquifer susceptibility to contamination is controlled in part by the inherent hydrogeologic properties of the vadose zone, which includes preferential-flow pathways. The purpose of this study was to investigate the importance of seasonal ponding near leaky irrigation wells as a mechanism for depression-focused preferential flow and enhanced chemical migration through the vadose zone of the High Plains aquifer. Such a mechanism may help explain the widespread presence of agrichemicals in recently recharged groundwater despite estimates of advective chemical transit times through the vadose zone from diffuse recharge that exceed the historical period of agriculture. Using a combination of field observations, vadose zone flow and transport simulations, and probabilistic neural network modeling, we demonstrated that vadose zone transit times near irrigation wells range from 7 to 50 yr, which are one to two orders of magnitude faster than previous estimates based on diffuse recharge. These findings support the concept of fast and slow transport zones and help to explain the previous discordant findings of long vadose zone transit times and the presence of agrichemicals at the water table. Using predictions of aquifer susceptibility from probabilistic neural network models, we delineated approximately 20% of the areal extent of the aquifer to have conditions that may promote advective chemical transit times to the water table of <50 yr if seasonal ponding and depression-focused flow exist. This aquifer-susceptibility map may help managers prioritize areas for groundwater monitoring or implementation of best management practices.
The effect of mass loading on the temperature of a flowing plasma. [in vicinity of Io
NASA Technical Reports Server (NTRS)
Linker, Jon A.; Kivelson, Margaret G.; Walker, Raymond J.
1989-01-01
How the addition of ions at rest (mass loading) affects the temperature of a flowing plasma in an MHD approximation is investigated, using analytic theory and time-dependent, three-dimensional MHD simulations of plasma flow past Io. The MHD equations show that the temperature can increase or decrease relative to the background, depending on the local sonic Mach number M_S of the flow. For flows with M_S greater than √(9/5) (when γ = 5/3), mass loading increases the plasma temperature. However, the simulations show a nonlinear response to the addition of mass. If the mass loading rate is large enough, the temperature increase may be smaller than expected, or the temperature may actually decrease, because a large mass loading rate slows the flow and decreases the thermal energy of the newly created plasma.
NASA Astrophysics Data System (ADS)
Velazquez, Antonio; Swartz, Raymond A.
2011-04-01
Wind turbine systems are attracting considerable attention due to concerns regarding global energy consumption as well as sustainability. Advances in wind turbine technology promote the tendency to improve efficiency in the structures that support and produce this renewable power source, tending toward more slender and larger towers, larger gear boxes, and larger, lighter blades. The structural design optimization process must account for uncertainties and nonlinear effects (such as wind-induced vibrations, unmeasured disturbances, and material and geometric variabilities). In this study, a probabilistic monitoring approach is developed that measures the response of the turbine tower to stochastic loading and estimates peak demand and structural resistance (in terms of serviceability). The proposed monitoring system can provide a real-time estimate of the probability of exceedance of design serviceability conditions based on data collected in situ. Special attention is paid to wind and aerodynamic characteristics that are intrinsically present (although sometimes neglected in health monitoring analysis) and derived from observations or experiments. In particular, little attention has been devoted to buffeting, which is usually non-catastrophic but directly impacts the serviceability of the operating wind turbine. Accordingly, modal-based analysis methods for the study and derivation of flutter instability and buffeting response have been successfully applied to assess the susceptibility of high-rise slender structures, including wind turbine towers. A detailed finite element model has been developed to generate data (calibrated to published experimental and analytical results). Risk assessment is performed for the effects of along-wind forces in a framework of quantitative risk analysis. Both structural resistance and wind load demands were considered probabilistic, with the latter assessed by dynamic analyses.
A Tsunami Model for Chile for (Re) Insurance Purposes
NASA Astrophysics Data System (ADS)
Arango, Cristina; Rara, Vaclav; Puncochar, Petr; Trendafiloski, Goran; Ewing, Chris; Podlaha, Adam; Vatvani, Deepak; van Ormondt, Maarten; Chandler, Adrian
2014-05-01
Catastrophe models help (re)insurers to understand the financial implications of catastrophic events such as earthquakes and tsunamis. In earthquake-prone regions such as Chile, (re)insurers need more sophisticated tools to quantify the risks facing their businesses, including models with the ability to estimate secondary losses. The 2010 (M8.8) Maule (Chile) earthquake highlighted the need for quantifying losses from secondary perils such as tsunamis, which can contribute to the overall event losses but are not often modelled. This paper presents some key modelling aspects of a new earthquake catastrophe model for Chile developed by Impact Forecasting in collaboration with Aon Benfield Research partners, focusing on the tsunami component. The model has the capability to model tsunami as a secondary peril - losses due to earthquake (ground shaking) and induced tsunamis along the Chilean coast are quantified in a probabilistic manner, and also for historical scenarios. The model is implemented in the IF catastrophe modelling platform, ELEMENTS. The probabilistic modelling of earthquake-induced tsunamis uses a stochastic event set that is consistent with the seismic (ground shaking) hazard developed for Chile, representing simulations of earthquake occurrence patterns for the region. Criteria for selecting tsunamigenic events from the stochastic event set are proposed which take into consideration earthquake location, depth, and the resulting seabed vertical displacement and tsunami inundation depths at the coast. The source modelling software RuptGen by Babeyko (2007) was used to calculate static seabed vertical displacement resulting from earthquake slip. More than 3,600 events were selected for tsunami simulations. Deep and shallow water wave propagation is modelled using the Delft3D modelling suite, state-of-the-art software developed by Deltares. The Delft3D-FLOW module is used in two-dimensional hydrodynamic simulation settings with non-steady flow. Earthquake-induced static seabed vertical displacement is used as an input boundary condition to the model. The model is hierarchically set up with three nested domain levels, with 250 domains in total covering the entire Chilean coast. Spatial grid-cell resolution is equal to the native SRTM resolution of approximately 90 m. In addition to the stochastic events, the 1960 (M9.5) Valdivia and 2010 (M8.8) Maule earthquakes are modelled. The modelled tsunami inundation map for the 2010 Maule event is validated through comparison with real observations. The vulnerability component consists of an extensive damage-curve database, including curves for buildings, contents and business interruption for 21 occupancies, 24 structural types, and two secondary modifiers: building height and period of construction. The building damage curves are developed using a load-based method in which the building's capacity to resist tsunami loads is treated as equivalent to the design earthquake load capacity. The contents damage and business interruption curves are developed using a deductive approach, i.e., the HAZUS flood vulnerability and business function restoration models are adapted for detailed occupancies and then assigned to the dominant structural types in Chile. The vulnerability component is validated through overall model back-testing using observed aggregated earthquake and tsunami losses for client portfolios for the 2010 Maule earthquake.
Dynamic Statistical Models for Pyroclastic Density Current Generation at Soufrière Hills Volcano
NASA Astrophysics Data System (ADS)
Wolpert, Robert L.; Spiller, Elaine T.; Calder, Eliza S.
2018-05-01
To mitigate volcanic hazards from pyroclastic density currents, volcanologists generate hazard maps that provide long-term forecasts of areas of potential impact. Several recent efforts in the field develop new statistical methods for application of flow models to generate fully probabilistic hazard maps that both account for, and quantify, uncertainty. However, a limitation to the use of most statistical hazard models, and a key source of uncertainty within them, is the time-averaged nature of the datasets by which the volcanic activity is statistically characterized. Where the level, or directionality, of volcanic activity frequently changes, e.g. during protracted eruptive episodes, or at volcanoes that are classified as persistently active, it is not appropriate to make short-term forecasts based on longer time-averaged metrics of the activity. Thus, here we build, fit and explore dynamic statistical models for the generation of pyroclastic density currents from Soufrière Hills Volcano (SHV) on Montserrat, including their respective collapse directions and flow volumes, based on 1996-2008 flow datasets. The development of this approach allows short-term behavioral changes to be taken into account in probabilistic volcanic hazard assessments. We show that collapses from the SHV lava dome follow a clear pattern, and that a series of smaller flows in a given direction often culminates in a larger collapse, after which the directionality of the flows changes. Such models enable short-term forecasting (weeks to months) that can reflect evolving conditions such as dome and crater morphology changes and non-stationary eruptive behavior such as extrusion rate variations. For example, the probability of inundation of the Belham Valley in the first 180 days of a forecast period is about twice as high for lava domes facing Northwest toward that valley as it is for domes pointing East toward the Tar River Valley. As rich multi-parametric volcano monitoring datasets become increasingly available, eruption forecasting is becoming a viable and important research field. We demonstrate an approach to utilize such data in order to appropriately 'tune' probabilistic hazard assessments for pyroclastic flows. Our broader objective with development of this method is to help advance time-dependent volcanic hazard assessment, by bridging the
Ensemble reconstruction of severe low flow events in France since 1871
NASA Astrophysics Data System (ADS)
Caillouet, Laurie; Vidal, Jean-Philippe; Sauquet, Eric; Devers, Alexandre; Graff, Benjamin
2016-04-01
This work presents a study of severe low flow events that occurred from 1871 onwards for a large number of near-natural catchments in France. It aims at assessing and comparing their characteristics to improve our knowledge of historical events and to provide a selection of benchmark events for climate change adaptation purposes. The historical depth of streamflow observations is generally limited to the last 50 years and therefore offers too small a sample of severe low flow events to properly explore the long-term evolution of their characteristics and associated impacts. In order to overcome this limit, this work takes advantage of a 140-year ensemble hydrometeorological dataset over France based on: (1) a probabilistic precipitation and temperature downscaling of the Twentieth Century Reanalysis over France (Caillouet et al., 2015), and (2) continuous hydrological modelling that uses the high-resolution meteorological reconstructions as forcings over the whole period. This dataset provides an ensemble of 25 equally plausible daily streamflow time series for a reference network of stations in France over the whole 1871-2012 period. Severe low flow events are identified based on a combination of a fixed threshold and a daily variable threshold. Each event is characterized by its deficit, duration and timing by applying the Sequent Peak Algorithm. The procedure is applied to the 25 simulated time series as well as to the observed time series in order to compare observed and simulated events over the recent period, and to characterize unrecorded historical events in a probabilistic way. The ensemble aspect of the reconstruction raises specific issues, both for properly defining events across ensemble simulations and for adequately comparing the simulated characteristics to the observed ones. This study highlights the outstanding 1921 and 1940s events, but also older and less known ones that occurred during the last decade of the 19th century. For the first time, severe low flow events are qualified in a homogeneous way over 140 years on a large set of near-natural French catchments, allowing for detailed analyses of the effect of climate variability and anthropogenic climate change on low flow hydrology. Caillouet, L., Vidal, J.-P., Sauquet, E., and Graff, B. (2015) Probabilistic precipitation and temperature downscaling of the Twentieth Century Reanalysis over France, Clim. Past Discuss., 11, 4425-4482, doi:10.5194/cpd-11-4425-2015
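For a single simulated series and a fixed threshold, the Sequent Peak Algorithm event characterization mentioned above reduces to a short routine; the toy series and threshold here are invented, and real applications also combine a daily-varying threshold:

    import numpy as np

    def low_flow_events(flow, threshold):
        """Sketch of the Sequent Peak Algorithm for drought deficits: the
        running deficit grows while flow is below the threshold, and the
        event ends once the deficit is fully recovered."""
        deficit, peak, start, events = 0.0, 0.0, None, []
        for t, q in enumerate(flow):
            deficit = max(0.0, deficit + (threshold - q))
            if deficit > 0.0:
                if start is None:
                    start = t
                peak = max(peak, deficit)
            elif start is not None:
                events.append({"start": start, "duration": t - start,
                               "deficit": peak})
                start, peak = None, 0.0
        return events

    # Toy daily series: a 20-day dry spell between two wet periods.
    q = np.concatenate([np.full(30, 5.0), np.full(20, 1.0), np.full(30, 6.0)])
    print(low_flow_events(q, threshold=2.0))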
Probabilistic forecasts of debris-flow hazard at the regional scale with a combination of models.
NASA Astrophysics Data System (ADS)
Malet, Jean-Philippe; Remaître, Alexandre
2015-04-01
Debris flows are one of the many active slope-forming processes in the French Alps, where rugged and steep slopes mantled by various slope deposits offer a great potential for triggering hazardous events. A quantitative assessment of debris-flow hazard requires the estimation, in a probabilistic framework, of the spatial probability of occurrence of source areas, the spatial probability of runout areas, the temporal frequency of events, and their intensity. The main objective of this research is to propose a pipeline for the estimation of these quantities at the regional scale using a chain of debris-flow models. The work uses the experimental site of the Barcelonnette Basin (South French Alps), where 26 active torrents have produced more than 150 debris-flow events since 1850, to develop and validate the methodology. First, a susceptibility assessment is performed to identify the debris-flow prone source areas. The most frequently used approach is the combination of environmental factors with GIS procedures and statistical techniques, integrating (or not) detailed event inventories. Based on a 5 m DEM and its derivatives, and information on slope lithology, engineering soils and landcover, the possible source areas are identified with a statistical logistic regression model. The performance of the statistical model is evaluated against the observed distribution of debris-flow events recorded after 1850 in the study area. The source areas in the three most active torrents (Riou-Bourdoux, Faucon, Sanières) are well identified by the model. Results are less convincing for three other active torrents (Bourget, La Valette and Riou-Chanal); this could be related to the type of debris-flow triggering mechanism, as the model seems to better spot the open-slope debris-flow source areas (e.g. scree slopes) but appears to be less efficient at identifying landslide-induced debris flows. Second, a susceptibility assessment is performed to estimate the possible runout distance with a process-based model. The MassMov-2D code is a two-dimensional model of mud and debris flow dynamics over complex topography, based on a numerical integration of the depth-averaged motion equations using the shallow water approximation. The runout simulations are performed for the most active torrents. The performance of the model has been evaluated by comparing modelling results with the observed spreading areas of several recent debris flows. Existing data on debris flow volume, input discharge and deposits were used to back-analyze those events and estimate the values of the model parameters. Third, hazard is estimated on the basis of scenarios computed in a probabilistic way, for volumes in the range 20,000 to 350,000 m3, and for several combinations of rheological parameters. In most cases, the simulations indicate that the debris flows cause significant overflowing on the alluvial fans for volumes exceeding 100,000 m3 (height of deposits > 2 m, velocities > 5 m/s). Probabilities of debris flow runout and debris flow intensities are then computed for each terrain unit.
Probabilistic constraints from existing and future radar imaging on volcanic activity on Venus
NASA Astrophysics Data System (ADS)
Lorenz, Ralph D.
2015-11-01
We explore the quantitative limits that may be placed on Venus' present-day volcanic activity by radar imaging of surface landforms. The apparent nondetection of new lava flows in the areas observed twice by Magellan suggests that there is a ~60% chance that the eruption rate is ~1 km3/yr or less, using the eruption history and area/volume flow geometry of terrestrial volcanoes (Etna, Mauna Loa and Merapi) as a guide. However, if the detection probability of an individual flow is low (e.g. ~10%) due to poor resolution or quality and unmodeled viewing geometry effects, the constraint (<10 km3/yr) is not useful. Imaging at Magellan resolution or better of only ~10% of the surface area of Venus on a new mission (30 years after Magellan) would yield better than 99% chance of detecting a new lava flow, even if the volcanic activity is at the low end of predictions (~0.01 km3/yr) and is expressed through a single volcano with a stochastic eruption history. Closer re-examination of Magellan data may be worthwhile, both to search for new features, and to establish formal (location-dependent) limits on activity against which data from future missions can be tested. While Magellan-future and future-future comparisons should offer much lower detection thresholds for erupted volumes, a probabilistic approach will be required to properly understand the implications.
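The order-of-magnitude logic of such a constraint can be reproduced in miniature with a Poisson/Bernoulli calculation; the eruption rate, per-flow volume, coverage fraction, and per-flow detection probability below are invented placeholders, not the paper's fitted statistics (which rest on empirical area-volume scaling from Etna, Mauna Loa and Merapi):

    import numpy as np

    rng = np.random.default_rng(1)

    def detection_probability(rate_km3_yr, years=30, flow_volume_km3=0.01,
                              coverage=0.10, p_detect=1.0, n_sim=100_000):
        """Sketch: probability that at least one new lava flow is seen when
        a fraction `coverage` of the surface is re-imaged after `years`
        years, with eruptions modeled as a Poisson count of discrete
        flows of fixed volume. All parameter values are illustrative."""
        n_flows = rng.poisson(rate_km3_yr * years / flow_volume_km3, n_sim)
        p_miss_one = 1.0 - coverage * p_detect
        return 1.0 - np.mean(p_miss_one ** n_flows)

    print(detection_probability(0.01))   # low-end eruption rate
    print(detection_probability(1.0))    # ~1 km3/yr

The numbers the paper quotes additionally depend on the distribution of flow areas and on clustered (single-volcano) eruption histories, which this toy calculation deliberately ignores.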
An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory
Yen, Chung-Cheng; Guymon, Gary L.
1990-01-01
An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the means and coefficients of variation of the uncertain variables. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.
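For independent, symmetrically distributed inputs, Rosenblueth's two-point estimate evaluates the model at every mean-plus/minus-one-standard-deviation corner, so a sketch of the comparison made in the paper needs only a few lines; the head-response function below is a stand-in, not the aquifer model:

    import itertools
    import numpy as np

    rng = np.random.default_rng(2)

    def two_point_estimate(model, means, cvs):
        """Rosenblueth's two-point estimate for independent, symmetric
        inputs: evaluate the model at every +/- one-sigma corner (2**n
        runs) with equal weights and form the output's first two moments."""
        means = np.asarray(means)
        sigmas = means * np.asarray(cvs)
        outs = [model(means + np.array(signs) * sigmas)
                for signs in itertools.product((-1.0, 1.0), repeat=len(means))]
        return np.mean(outs), np.std(outs)

    # Illustrative head response to storage coefficient S and hydraulic
    # conductivity K (a stand-in function): mildly nonlinear in K.
    model = lambda x: 10.0 / x[1] + 0.5 * x[0]

    mu, cv = [1.0, 2.0], [0.1, 0.1]
    print(two_point_estimate(model, mu, cv))

    # Monte Carlo reference with the same input statistics:
    mc = model(np.array([rng.normal(m, m * c, 200_000) for m, c in zip(mu, cv)]))
    print(mc.mean(), mc.std())

With n uncertain variables the method costs 2^n model runs, which is why the abstract finds it attractive only when the number of uncertain variables is below about eight.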
Probabilistic forecasts based on radar rainfall uncertainty
NASA Astrophysics Data System (ADS)
Liguori, S.; Rico-Ramirez, M. A.
2012-04-01
The potential advantages resulting from integrating weather radar rainfall estimates in hydro-meteorological forecasting systems are limited by the inherent uncertainty affecting radar rainfall measurements, which is due to various sources of error [1-3]. The improvement of quality control and correction techniques is recognized to play a role in the future improvement of radar-based flow predictions. However, knowledge of the uncertainty affecting radar rainfall data can also be effectively used to build a hydro-meteorological forecasting system in a probabilistic framework. This work discusses the results of the implementation of a novel probabilistic forecasting system developed to improve ensemble predictions over a small urban area located in the North of England. An ensemble of radar rainfall fields can be determined as the sum of a deterministic component and a perturbation field, the latter being informed by the knowledge of the spatial-temporal characteristics of the radar error assessed with reference to rain-gauge measurements. This approach is similar to the REAL system [4] developed for use in the Southern Alps. The radar uncertainty estimate can then be propagated with a nowcasting model, used to extrapolate an ensemble of radar rainfall forecasts, which can ultimately drive hydrological ensemble predictions. A radar ensemble generator has been calibrated using radar rainfall data made available by the UK Met Office after applying post-processing and correction algorithms [5-6]. One-hour rainfall accumulations from 235 rain gauges recorded during the year 2007 provided the reference to determine the radar error. Statistics describing the spatial characteristics of the error (i.e. mean and covariance) have been computed off-line at gauge locations, along with the parameters describing the error temporal correlation. A system has then been set up to impose the space-time error properties on stochastic perturbations, generated in real time at gauge locations and then interpolated back onto the radar domain, in order to obtain probabilistic radar rainfall fields in real time. The deterministic nowcasting model integrated in the STEPS system [7-8] has been used for the purpose of propagating the uncertainty and assessing the benefit of implementing the radar ensemble generator for probabilistic rainfall forecasts and ultimately sewer flow predictions. For this purpose, events representative of different types of precipitation (i.e. stratiform/convective) and significant at the urban catchment scale (i.e. in terms of sewer overflow within the urban drainage system) have been selected. As high spatial/temporal resolution is required of the forecasts for their use in urban areas [9-11], the probabilistic nowcasts have been set up to be produced at 1 km resolution and 5 min intervals. The forecasting chain is completed by a hydrodynamic model of the urban drainage network. The aim of this work is to discuss the implementation of this probabilistic system, which takes into account the radar error to characterize the forecast uncertainty, with consequent potential benefits in the management of urban systems. It will also allow a comparison with previous findings related to the analysis of different approaches to uncertainty estimation and quantification in terms of rainfall [12] and flows at the urban scale [13]. Acknowledgements: The authors would like to acknowledge the BADC, the UK Met Office and Dr. Alan Seed from the Australian Bureau of Meteorology for providing the radar data and the nowcasting model. The authors acknowledge the support from the Engineering and Physical Sciences Research Council (EPSRC) via grant EP/I012222/1.
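The core of such a radar ensemble generator, adding error-informed, spatially correlated perturbations to a deterministic field, can be sketched as follows; the grid, error statistics, and correlation length are invented, and a real generator would also enforce non-negative rainfall (e.g. by perturbing in log space):

    import numpy as np

    rng = np.random.default_rng(3)

    def radar_ensemble(det_field, xy, err_mean, err_std, corr_len,
                       n_members=20):
        """Sketch of a radar ensemble generator: perturbations are drawn
        from a Gaussian field whose mean/std come from radar-gauge
        comparisons and whose spatial correlation (exponential decay here)
        is imposed via a Cholesky factor of the covariance matrix."""
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        cov = (err_std ** 2) * np.exp(-d / corr_len)
        L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(xy)))
        eps = err_mean + L @ rng.standard_normal((len(xy), n_members))
        return det_field[:, None] + eps       # one column per member

    # Toy 1 km grid of 100 pixels with a uniform 2 mm/h deterministic field.
    xy = np.stack(np.meshgrid(np.arange(10.0), np.arange(10.0)), -1)
    xy = xy.reshape(-1, 2)
    members = radar_ensemble(np.full(100, 2.0), xy, err_mean=0.0,
                             err_std=0.5, corr_len=3.0)
    print(members.shape, members.mean(), members.std())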
Experimental Study on Impact Load on a Dam Due to Debris Flow
Iwao Miyoshi
1991-01-01
When a dam is struck by mud or debris flow, it is put under a great impact load and sometimes is destroyed. To prevent such destruction, it is important to perform basic research on the impact load on a dam due to debris flow. Thus, we have made an experimental study and tried to establish a method to estimate such an impact load on the dam. The experiment was...
Kimball, B.A.; Runkel, R.L.; Walton-Day, K.
2010-01-01
Historical mining has left complex problems in catchments throughout the world. Land managers are faced with making cost-effective plans to remediate mine influences. Remediation plans are facilitated by spatial mass-loading profiles that indicate the locations of metal mass-loading, seasonal changes, and the extent of biogeochemical processes. Field-scale experiments during both low- and high-flow conditions and time-series data over diel cycles illustrate how this can be accomplished. A low-flow experiment provided spatially detailed loading profiles to indicate where loading occurred. For example, SO₄²⁻ was principally derived from sources upstream from the study reach, but three principal locations also were important for SO₄²⁻ loading within the reach. During high-flow conditions, Lagrangian sampling provided data to interpret seasonal changes and indicated locations where snowmelt runoff flushed metals to the stream. Comparison of metal concentrations between the low- and high-flow experiments indicated substantial increases in metal loading at high flow, but little change in metal concentrations, showing that toxicity at the most downstream sampling site was not substantially greater during snowmelt runoff. During high-flow conditions, a detailed temporal sampling at fixed sites indicated that Zn concentration more than doubled during the diel cycle. Monitoring programs must account for diel variation to provide meaningful results. Mass-loading studies during different flow conditions and detailed time-series over diel cycles provide useful scientific support for stream management decisions.
Probabilistic Aeroelastic Analysis Developed for Turbomachinery Components
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Mital, Subodh K.; Stefko, George L.; Pai, Shantaram S.
2003-01-01
Aeroelastic analyses for advanced turbomachines are being developed for use at the NASA Glenn Research Center and industry. However, these analyses at present are used for turbomachinery design with uncertainties accounted for by using safety factors. This approach may lead to overly conservative designs, thereby reducing the potential of designing higher efficiency engines. An integration of the deterministic aeroelastic analysis methods with probabilistic analysis methods offers the potential to design efficient engines with fewer aeroelastic problems and to make a quantum leap toward designing safe reliable engines. In this research, probabilistic analysis is integrated with aeroelastic analysis: (1) to determine the parameters that most affect the aeroelastic characteristics (forced response and stability) of a turbomachine component such as a fan, compressor, or turbine and (2) to give the acceptable standard deviation on the design parameters for an aeroelastically stable system. The approach taken is to combine the aeroelastic analysis of the MISER (MIStuned Engine Response) code with the FPI (fast probability integration) code. The role of MISER is to provide the functional relationships that tie the structural and aerodynamic parameters (the primitive variables) to the forced response amplitudes and stability eigenvalues (the response properties). The role of FPI is to perform probabilistic analyses by utilizing the response properties generated by MISER. The results are a probability density function for the response properties. The probabilistic sensitivities of the response variables to uncertainty in primitive variables are obtained as a byproduct of the FPI technique. The combined aeroelastic and probabilistic analysis is applied to a 12-bladed cascade vibrating in bending and torsion. Of the 11 design parameters, six are considered to have probabilistic variation: space-to-chord ratio (SBYC), stagger angle (GAMA), elastic axis (ELAXS), Mach number (MACH), mass ratio (MASSR), and frequency ratio (WHWB). The cascade is considered to be in subsonic flow at Mach 0.7. The results of the probabilistic aeroelastic analysis are the probability density function of predicted aerodynamic damping and frequency for flutter and the response amplitudes for forced response.
Unthank, Michael D.; Newson, Jeremy K.; Williamson, Tanja N.; Nelson, Hugh L.
2012-01-01
Flow- and load-duration curves were constructed from the model outputs of the U.S. Geological Survey's Water Availability Tool for Environmental Resources (WATER) application for streams in Kentucky. The WATER application was designed to access multiple geospatial datasets to generate more than 60 years of statistically based streamflow data for Kentucky. The WATER application enables a user to graphically select a site on a stream and generate an estimated hydrograph and flow-duration curve for the watershed upstream of that point. The flow-duration curves are constructed by calculating the exceedance probability of the modeled daily streamflows. User-defined water-quality criteria and (or) sampling results can be loaded into the WATER application to construct load-duration curves that are based on the modeled streamflow results. Estimates of flow and streamflow statistics were derived from TOPographically Based Hydrological MODEL (TOPMODEL) simulations in the WATER application. A modified TOPMODEL code, SDP-TOPMODEL (Sinkhole Drainage Process-TOPMODEL), was used to simulate daily mean discharges over the period of record for 5 karst and 5 non-karst watersheds in Kentucky in order to verify the calibrated model. A statistical evaluation of the model's verification simulations shows that calibration criteria, established by previous WATER application reports, were met, thus ensuring the model's ability to provide acceptably accurate estimates of discharge at gaged and ungaged sites throughout Kentucky. Flow-duration curves are constructed in the WATER application by calculating the exceedance probability of the modeled daily flow values. The flow-duration intervals are expressed as a percentage, with zero corresponding to the highest stream discharge in the streamflow record. Load-duration curves are constructed by applying the loading equation (Load = Flow × Water-quality criterion) at each flow interval.
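Both curve constructions reduce to sorting plus one multiplication; a minimal sketch follows, with an invented synthetic flow record and the standard 5.39 factor for converting mg/L times ft3/s to lb/day:

    import numpy as np

    def duration_curve(daily_flows):
        """Exceedance percentage of each daily flow, with 0% assigned to
        the highest discharge in the record (the WATER convention)."""
        q = np.sort(np.asarray(daily_flows))[::-1]
        exceed = 100.0 * np.arange(len(q)) / len(q)
        return exceed, q

    def load_duration_curve(daily_flows, criterion_mg_per_l):
        """Load = Flow * criterion at every flow interval; 5.39 is the
        usual mg/L * ft3/s -> lb/day unit-conversion factor."""
        exceed, q = duration_curve(daily_flows)
        return exceed, 5.39 * q * criterion_mg_per_l

    # Invented ~60-year synthetic daily flow record (ft3/s):
    flows = np.random.default_rng(4).lognormal(3.0, 1.0, 366 * 60)
    ex, load = load_duration_curve(flows, criterion_mg_per_l=0.5)
    print(ex[:3], load[:3])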
A Novel TRM Calculation Method by Probabilistic Concept
NASA Astrophysics Data System (ADS)
Audomvongseree, Kulyos; Yokoyama, Akihiko; Verma, Suresh Chand; Nakachi, Yoshiki
In a new competitive environment, it becomes possible for third parties to access a transmission facility. In this structure, to efficiently manage the utilization of the transmission network, a new definition of Available Transfer Capability (ATC) has been proposed. According to the North American Electric Reliability Council (NERC) definition, ATC depends on several parameters, i.e. Total Transfer Capability (TTC), Transmission Reliability Margin (TRM), and Capacity Benefit Margin (CBM). This paper is focused on the calculation of TRM, which is one of the security margins reserved for uncertainty in system conditions. A probabilistic TRM calculation method is proposed in this paper. Based on the modeling of load forecast error and error in transmission line limitation, various cases of transmission transfer capability and its related probabilistic nature can be calculated. By considering the proposed concept of risk analysis, the appropriate required amount of TRM can be obtained. The objective of this research is to provide realistic information on the actual ability of the network, which may be an alternative choice for system operators to make appropriate decisions in the competitive market. The advantages of the proposed method are illustrated by application to the IEEJ-WEST10 model system.
SHM-Based Probabilistic Fatigue Life Prediction for Bridges Based on FE Model Updating
Lee, Young-Joo; Cho, Soojin
2016-01-01
Fatigue life prediction for a bridge should be based on the current condition of the bridge, and various sources of uncertainty, such as material properties, anticipated vehicle loads and environmental conditions, make the prediction very challenging. This paper presents a new approach for probabilistic fatigue life prediction for bridges using finite element (FE) model updating based on structural health monitoring (SHM) data. Recently, various types of SHM systems have been used to monitor and evaluate the long-term structural performance of bridges. For example, SHM data can be used to estimate the degradation of an in-service bridge, which makes it possible to update the initial FE model. The proposed method consists of three steps: (1) identifying the modal properties of a bridge, such as mode shapes and natural frequencies, based on the ambient vibration under passing vehicles; (2) updating the structural parameters of an initial FE model using the identified modal properties; and (3) predicting the probabilistic fatigue life using the updated FE model. The proposed method is demonstrated by application to a numerical model of a bridge, and the impact of FE model updating on the bridge fatigue life is discussed. PMID:26950125
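A minimal sketch of step (3), propagating the remaining uncertainties through an S-N/Miner's-rule fatigue model once the FE model has been updated; every distribution below is an invented placeholder rather than a value identified from SHM data:

    import numpy as np

    rng = np.random.default_rng(5)

    def fatigue_life_samples(n=100_000):
        """Monte Carlo fatigue life: cycles to failure N = A * S**-m from
        an S-N curve, divided by annual cycle counts (Miner's rule with a
        single equivalent stress range). All distributions are invented."""
        A = rng.lognormal(np.log(1.0e13), 0.3, n)     # S-N coefficient
        m = 3.0                                        # S-N slope (fixed)
        S = rng.lognormal(np.log(40.0), 0.15, n)       # stress range, MPa
        cycles_per_year = rng.normal(2.0e6, 2.0e5, n)  # traffic cycles
        return (A * S ** -m) / cycles_per_year         # life in years

    life = fatigue_life_samples()
    print(np.median(life), np.mean(life < 75.0))  # median life, P(life<75 yr)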
A probabilistic approach to modeling postfire erosion after the 2009 australian brushfires
USDA-ARS?s Scientific Manuscript database
Major concerns after bushfires and wildfires include increased flooding, erosion and debris flows due to loss of the protective forest floor layer, loss of water storage, and creation of water repellent soil conditions. To assist postfire assessment teams in their efforts to evaluate fire effects an...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coleman, Justin Leigh; Veeraraghavan, Swetha; Bolisetti, Chandrakanth
MASTODON has the capability to model stochastic nonlinear soil-structure interaction (NLSSI) in a dynamic probabilistic risk assessment framework. The NLSSI simulations include structural dynamics, time integration, dynamic porous media flow, nonlinear hysteretic soil constitutive models, and geometric nonlinearities (gapping, sliding, and uplift). MASTODON is also the MOOSE-based master application for dynamic PRA of external hazards.
An evaluation of flow-stratified sampling for estimating suspended sediment loads
Robert B. Thomas; Jack Lewis
1995-01-01
Flow-stratified sampling is a new method for sampling water quality constituents such as suspended sediment to estimate loads. As with selection-at-list-time (SALT) and time-stratified sampling, flow-stratified sampling is a statistical method requiring random sampling and yielding unbiased estimates of load and variance. It can be used to estimate event...
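The expand-by-stratum-size logic that makes such an estimator unbiased can be sketched as follows (invented flow-load record; the finite-population correction and the design variants discussed in the paper are omitted for brevity):

    import numpy as np

    rng = np.random.default_rng(6)

    def stratified_load(flows, loads, bounds, n_per_stratum=10):
        """Sketch of a flow-stratified estimator: the record is split into
        flow classes, units are sampled at random within each class, and
        stratum sample means are expanded by stratum size."""
        total, var = 0.0, 0.0
        strata = np.digitize(flows, bounds)
        for h in np.unique(strata):
            idx = np.flatnonzero(strata == h)
            pick = rng.choice(idx, size=min(n_per_stratum, len(idx)),
                              replace=False)
            total += len(idx) * loads[pick].mean()
            if len(pick) > 1:   # omits the finite-population correction
                var += len(idx) ** 2 * loads[pick].var(ddof=1) / len(pick)
        return total, np.sqrt(var)

    # Toy record: load roughly a power function of flow, plus noise.
    q = rng.lognormal(2.0, 0.8, 5000)
    ld = 0.05 * q ** 1.6 * rng.lognormal(0.0, 0.2, 5000)
    print(stratified_load(q, ld, bounds=np.quantile(q, [0.5, 0.8, 0.95])))
    print(ld.sum())   # "true" load for comparison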
Analytical study of pressure balancing in gas film seals
NASA Technical Reports Server (NTRS)
Zuk, J.
1973-01-01
The load factor is investigated for subsonic and choked flow conditions, laminar and turbulent flows, and various seal entrance conditions. Both parallel sealing surfaces and surfaces with small linear deformation were investigated. The load factor for subsonic flow depends strongly on pressure ratio; under choked flow conditions, however, the load factor is found to depend more strongly on film thickness and flow entrance conditions than on pressure ratio. The importance of generating hydrodynamic forces to keep the seal balanced under severe and multipoint operation is also discussed.
NASA Astrophysics Data System (ADS)
Chen, Tzikang J.; Shiao, Michael
2016-04-01
This paper verifies a generic and efficient assessment concept for probabilistic fatigue life management. The concept is developed based on an integration of damage tolerance methodology, simulation methods [1, 2], and a probabilistic algorithm, RPI (recursive probability integration) [3-9], considering maintenance for damage tolerance and risk-based fatigue life management. RPI is an efficient semi-analytical probabilistic method for risk assessment subjected to various uncertainties, such as the variability in material properties including crack growth rate, initial flaw size, repair quality, random-process modeling of flight loads for failure analysis, and inspection reliability represented by probability of detection (POD). In addition, unlike traditional Monte Carlo simulations (MCS), which require a rerun of the MCS when the maintenance plan is changed, RPI can repeatedly use a small set of baseline random crack growth histories, excluding maintenance-related parameters, from a single MCS for various maintenance plans. In order to fully appreciate the RPI method, a verification procedure was performed. In this study, MC simulations on the order of several hundred billion runs were conducted for various flight conditions, material properties, inspection scheduling, POD, and repair/replacement strategies. Since MC simulations are time-consuming, the simulations were conducted in parallel on DoD High Performance Computing (HPC) resources using a specialized random number generator for parallel computing. The study has shown that the RPI method is several orders of magnitude more efficient than traditional Monte Carlo simulations.
Probabilistic modeling of the flows and environmental risks of nano-silica.
Wang, Yan; Kalinina, Anna; Sun, Tianyin; Nowack, Bernd
2016-03-01
Nano-silica, the engineered nanomaterial with one of the largest production volumes, has a wide range of applications in consumer products and industry. This study aimed to quantify the exposure of nano-silica to the environment and to assess its risk to surface waters. Concentrations were calculated for four environmental (air, soil, surface water, sediments) and two technical compartments (wastewater, solid waste) for the EU and Switzerland using probabilistic material flow modeling. The corresponding median concentration in surface water is predicted to be 0.12 μg/l in the EU (0.053-3.3 μg/l, 15/85% quantiles). The concentrations in sediments in the complete sedimentation scenario were found to be the largest among all environmental compartments, with a median annual increase of 0.43 mg/kg · y in the EU (0.19-12 mg/kg · y, 15/85% quantiles). Moreover, probabilistic species sensitivity distributions (PSSD) were computed and the risk of nano-silica in surface waters was quantified by comparing the predicted environmental concentration (PEC) with the predicted no-effect concentration (PNEC) distribution, which was derived from the cumulative PSSD. This assessment suggests that nano-silica currently poses no risk to aquatic organisms in surface waters. Further investigations are needed to assess the risk of nano-silica in other environmental compartments, which is currently not possible due to a lack of ecotoxicological data. Copyright © 2015 Elsevier B.V. All rights reserved.
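The final risk characterization, comparing the PEC distribution against the PNEC distribution derived from the cumulative PSSD, can be sketched with lognormal stand-ins (the paper's distributions come from the material flow model and ecotoxicological data, not from these parameters):

    import numpy as np

    rng = np.random.default_rng(7)
    n = 100_000

    # Stand-in distributions; only the PEC median (0.12 µg/L) is taken
    # from the abstract, the spreads and the PNEC level are invented.
    pec = rng.lognormal(mean=np.log(0.12), sigma=1.0, size=n)     # µg/L
    pnec = rng.lognormal(mean=np.log(1000.0), sigma=1.2, size=n)  # µg/L

    risk_quotient = pec / pnec
    print("P(PEC > PNEC):", np.mean(risk_quotient > 1.0))
    print("median RQ:", np.median(risk_quotient))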
NASA Astrophysics Data System (ADS)
Doležel, Jiří; Novák, Drahomír; Petrů, Jan
2017-09-01
Transportation routes for oversize and excessive loads are currently planned so as to ensure the transit of the vehicle through critical points on the road, such as level intersections and bridges. This article presents a comprehensive procedure to determine the reliability and load-bearing capacity of existing bridges on highways and roads using advanced reliability analysis methods based on Monte Carlo-type simulation techniques in combination with nonlinear finite element analysis. The safety index, described in current structural design standards such as ISO and the Eurocodes, is considered the main criterion of the reliability level of existing structures. As an example, the load-bearing capacity of a 60-year-old single-span slab bridge made of precast prestressed concrete girders is determined for the ultimate limit state and the serviceability limit state. The structure's design load capacity was estimated by fully probabilistic nonlinear finite element analysis using the Latin Hypercube Sampling (LHS) simulation technique. Load-bearing capacity values based on the fully probabilistic analysis are compared with levels estimated by deterministic methods for the critical section of the most heavily loaded girders.
Flow Past a Descending Balloon
NASA Technical Reports Server (NTRS)
Baginski, Frank
2001-01-01
In this report, we present our findings related to aerodynamic loading of partially inflated balloon shapes. This report will consider aerodynamic loading of partially inflated inextensible natural shape balloons and some relevant problems in potential flow. For the axisymmetric modeling, we modified our Balloon Design Shape Program (BDSP) to handle axisymmetric inextensible ascent shapes with aerodynamic loading. For a few simple examples of two dimensional potential flows, we used the Matlab PDE Toolbox. In addition, we propose a model for aerodynamic loading of strained energy minimizing balloon shapes with lobes. Numerical solutions are presented for partially inflated strained balloon shapes with lobes and no aerodynamic loading.
[Forecast of costs of ecodependent cancer treatment for the development of management decisions].
Krasovskiy, V O
2014-01-01
A methodical approach to probabilistic forecasting and differentiation of the treatment costs of ecodependent cancer cases has been elaborated. The approach is useful in organizing medical aid to cancer patients, in developing management decisions to reduce the occupational load on the population, and in solving problems of compensating the population for economic and social losses from industrial plants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hilton, Harry H.
Protocols are developed for formulating optimal viscoelastic designer functionally graded materials tailored to best respond to prescribed loading and boundary conditions. In essence, an inverse approach is adopted where material properties instead of structures per se are designed and then distributed throughout structural elements. The final measure of viscoelastic material efficacy is expressed in terms of failure probabilities vs. survival time.
NASA Astrophysics Data System (ADS)
Chen, Xiao; Li, Yaan; Yu, Jing; Li, Yuxing
2018-01-01
For fast and effective tracking of multiple targets in a cluttered environment, we propose a multiple target tracking (MTT) algorithm, called maximum entropy fuzzy c-means clustering joint probabilistic data association, that combines fuzzy c-means clustering and the joint probabilistic data association (JPDA) algorithm. The algorithm uses the membership value to express the probability of the target originating from a measurement. The membership value is obtained by optimizing the fuzzy c-means clustering objective function under the maximum entropy principle. When considering the effect of measurements shared between targets, we use a correction factor to adjust the association probability matrix to estimate the state of the target. As this algorithm avoids confirmation matrix splitting, it can solve the high computational load problem of the JPDA algorithm. The results of simulations and analysis conducted for tracking neighboring parallel targets and crossing targets in cluttered environments of different densities show that the proposed algorithm can realize MTT quickly and efficiently in a cluttered environment. Further, the performance of the proposed algorithm remains constant with increasing process noise variance. The proposed algorithm has the advantages of efficiency and low computational load, which can ensure optimum performance when tracking multiple targets in a dense cluttered environment.
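Maximizing entropy subject to a c-means distortion constraint yields Gibbs-form memberships, which is the piece of the algorithm that replaces combinatorial confirmation-matrix splitting; a sketch with invented measurement and prediction coordinates (the temperature-like parameter lam is a free choice here):

    import numpy as np

    def me_memberships(measurements, predictions, lam=2.0):
        """Sketch: maximizing entropy subject to the c-means distortion
        constraint gives Gibbs-form memberships u_ij ~ exp(-d_ij**2 / lam),
        normalized over targets; these act as association probabilities."""
        d2 = ((measurements[:, None, :] - predictions[None, :, :]) ** 2).sum(-1)
        u = np.exp(-d2 / lam)
        return u / u.sum(axis=1, keepdims=True)

    z = np.array([[0.1, 0.2], [4.9, 5.2], [2.4, 2.6]])   # measurements
    x = np.array([[0.0, 0.0], [5.0, 5.0]])               # predicted targets
    print(me_memberships(z, x).round(3))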
Probabilistic Analysis of a SiC/SiC Ceramic Matrix Composite Turbine Vane
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Nemeth, Noel N.; Brewer, David N.; Mital, Subodh
2004-01-01
To demonstrate the advanced composite materials technology under development within the Ultra-Efficient Engine Technology (UEET) Program, it was planned to fabricate, test, and analyze a turbine vane made entirely of silicon carbide-fiber-reinforced silicon carbide matrix composite (SiC/SiC CMC) material. The objective was to utilize a five-harness satin weave melt-infiltrated (MI) SiC/SiC composite material developed under this program to design and fabricate a stator vane that can endure 1000 hours of engine service conditions. The vane was designed such that the expected maximum stresses were kept within the proportional limit strength of the material; any violation of this design requirement was considered a failure. This report presents results of a probabilistic analysis and reliability assessment of the vane. The probability of failing to meet the design requirements was computed. In the analysis, material properties, strength, and pressure loading were considered random variables. The pressure loads were considered normally distributed with a nominal variation. A temperature profile on the vane was obtained by performing a computational fluid dynamics (CFD) analysis and was assumed to be deterministic. The results suggest that for the current vane design, the chance of not meeting design requirements is about 1.6 percent.
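With failure defined as the maximum stress exceeding the proportional limit strength, the reliability computation is an exceedance probability; a Monte Carlo sketch with invented distributions, chosen only to land in the same order of magnitude as the reported 1.6 percent:

    import numpy as np

    rng = np.random.default_rng(8)
    n = 1_000_000

    # Invented stand-ins: stress scales linearly with the random pressure
    # load; strength is the material's proportional limit.
    nominal_stress = 120.0                                # MPa at nominal load
    stress = nominal_stress * rng.normal(1.0, 0.08, n)    # load variability
    strength = rng.normal(153.0, 12.0, n)                 # proportional limit

    print("P(failure) =", np.mean(stress > strength))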
Probabilistic evaluation of uncertainties and risks in aerospace components
NASA Technical Reports Server (NTRS)
Shah, A. R.; Shiao, M. C.; Nagpal, V. K.; Chamis, C. C.
1992-01-01
This paper summarizes a methodology developed at NASA Lewis Research Center which computationally simulates the structural, material, and load uncertainties associated with Space Shuttle Main Engine (SSME) components. The methodology was applied to evaluate the scatter in static, buckling, dynamic, fatigue, and damage behavior of the SSME turbopump blade. Also calculated are the probability densities of typical critical blade responses, such as effective stress, natural frequency, damage initiation, most probable damage path, etc. Risk assessments were performed for different failure modes, and the effect of material degradation on the fatigue and damage behaviors of a blade was calculated using a multi-factor interaction equation. Failure probabilities for different fatigue cycles were computed, and the uncertainties associated with damage initiation and damage propagation due to different load cycles were quantified. Evaluations of the effects of mistuned blades on a rotor were made; uncertainties in the excitation frequency were found to significantly amplify the blade responses of a mistuned rotor. The effects of the number of blades on a rotor were studied. The autocorrelation function of displacements and the probability density function of the first passage time for deterministic and random barriers for structures subjected to random processes also were computed. A brief discussion was included on the future direction of probabilistic structural analysis.
Deng, J.; Hudnut, K.; Gurnis, M.; Hauksson, E.
1999-01-01
Following the Mw 6.7 Northridge earthquake, significant postseismic displacements were resolved with GPS. Using a three-dimensional viscoelastic model, we suggest that this deformation is mainly driven by viscous flow in the lower crust. Such flow can transfer stress to the upper crust and load the rupture zone of the main shock at a decaying rate. Most aftershocks within the rupture zone, especially those that occurred after the first several weeks of the main shock, may have been triggered by continuous stress loading from viscous flow. The long-term decay time of aftershocks (about 2 years) approximately matches the decay of viscoelastic loading, and thus is controlled by the viscosity of the lower crust. Our model provides a physical interpretation of the observed correlation between aftershock decay rate and surface heat flow.
Thermally determining flow and/or heat load distribution in parallel paths
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chainer, Timothy J.; Iyengar, Madhusudan K.; Parida, Pritish R.
A method including obtaining calibration data for at least one sub-component in a heat transfer assembly, wherein the calibration data comprises at least one indication of coolant flow rate through the sub-component for a given surface temperature delta of the sub-component and a given heat load into said sub-component, determining a measured heat load into the sub-component, determining a measured surface temperature delta of the sub-component, and determining a coolant flow distribution in a first flow path comprising the sub-component from the calibration data according to the measured heat load and the measured surface temperature delta of the sub-component.
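The inference step, recovering the coolant flow in a path from a per-sub-component calibration table given the measured heat load and surface temperature delta, can be sketched as a table lookup with interpolation; all table values below are invented:

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Invented calibration table for one cold-plate sub-component:
    # surface temperature delta (K) x heat load (W) -> coolant flow (L/min).
    dT = np.array([2.0, 4.0, 6.0, 8.0])
    q = np.array([50.0, 100.0, 150.0])
    flow_lpm = np.array([[3.2, 6.1, 9.0],
                         [1.7, 3.3, 4.9],
                         [1.2, 2.3, 3.4],
                         [0.9, 1.8, 2.7]])

    flow_from_cal = RegularGridInterpolator((dT, q), flow_lpm)

    # Measured state of the sub-component -> inferred flow in its path.
    measured = np.array([[5.1, 120.0]])      # (delta-T, heat load)
    print(flow_from_cal(measured))           # estimated coolant flow

Note the physically expected monotonicity in the invented table: at a fixed heat load, a larger surface temperature delta corresponds to a smaller coolant flow.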
ESTIMATING URBAN WET-WEATHER POLLUTANT LOADING
This paper presents procedures for estimating pollutant loads in urban watersheds emanating from wet-weather flow discharge. Equations for pollutant loading estimates will focus on the effects of wastewater characteristics, sewer flow carrying velocity, and sewer-solids depositi...
Su, Kuo-Chih; Chang, Chih-Han; Chuang, Shu-Fen; Ng, Eddie Yin-Kwee
2013-06-01
This study uses a fluid-structure interaction (FSI) simulation to evaluate the fluid flow in a dental intrapulpal chamber induced by the deformation of the tooth structure during loading in various directions. The FSI simulation of dental intrapulpal responses applies a force that increases gradually from 0 to 100 N over 1 s at 0°, 30°, 45°, 60°, and 90° to the tooth surface. The stress and deformation of the tooth and the changes in fluid flow in the pulp chamber are evaluated. A horizontal loading force on a tooth may induce tooth structure deformation, which increases fluid flow velocity in the coronal pulp; thus, horizontal loading on a tooth may easily induce tooth pain. This study suggests that experiments investigating the relationship between loading in various directions and dental pain should avoid measuring the bulk pulpal fluid flow from the radicular pulp, but rather should measure the dentinal fluid flow in the dentinal tubules or coronal pulp. The FSI analysis used here could provide a powerful tool for investigating problems with coupled solid and fluid structures in dental biomechanics. Copyright © 2012 Elsevier Ltd. All rights reserved.
Impact of Groundwater Flow and Energy Load on Multiple Borehole Heat Exchangers.
Dehkordi, S Emad; Schincariol, Robert A; Olofsson, Bo
2015-01-01
The effect of array configuration, that is, number, layout, and spacing, on the performance of multiple borehole heat exchangers (BHEs) is generally known under the assumption of fully conductive transport. The effect of groundwater flow on BHE performance is also well established, but most commonly for single BHEs. In multiple-BHE systems the effect of groundwater advection can be more complicated due to the induced thermal interference between the boreholes. To ascertain the influence of groundwater flow and borehole arrangement, this study investigates single- and multi-BHE systems of various configurations. Moreover, the influence of energy load balance is also examined. The results from corresponding cases with and without groundwater flow, as well as with balanced and unbalanced energy loads, are cross-compared. The groundwater flux value, 10⁻⁷ m/s, is chosen based on the findings of previous studies on groundwater flow interaction with BHEs and thermal response tests. It is observed that multi-BHE systems with balanced loads are less sensitive to array configuration attributes and groundwater flow in the long term. Conversely, multi-BHE systems with unbalanced loads are influenced by borehole array configuration as well as groundwater flow; these effects become more pronounced with time, unlike when the load is balanced. Groundwater flow has more influence on stabilizing loop temperatures than array characteristics. Although borehole thermal energy storage (BTES) systems have a balanced energy load function, preliminary investigation of their efficiency shows a negative impact from groundwater flow, due to their dependency on high temperature gradients between the boreholes and the surroundings. © 2014, National Ground Water Association.
Gutiérrez, Simón; Fernandez, Carlos; Barata, Carlos; Tarazona, José Vicente
2009-12-20
This work presents a computer model for Risk Assessment of Basins by Ecotoxicological Evaluation (RABETOX). The model is based on whole-effluent toxicity testing and water flows along a specific river basin. It is capable of estimating the risk along a river segment using deterministic and probabilistic approaches. The Henares River Basin was selected as a case study to demonstrate the importance of seasonal hydrological variations in Mediterranean regions. As model inputs, two different ecotoxicity tests (the miniaturized Daphnia magna acute test and the D. magna feeding test) were performed on grab samples from 5 wastewater treatment plant effluents. Also used as model inputs were flow data from the past 25 years, water velocity measurements, and precise distance measurements using Geographical Information Systems (GIS). The model was implemented in a spreadsheet, and the results were interpreted and represented using GIS in order to facilitate risk communication. To better understand the bioassay results, the effluents were screened through SPME-GC/MS analysis. The deterministic model, run for each month of one calendar year, showed a significant seasonal variation of risk while revealing that September represents the worst-case scenario, with values up to 950 Risk Units. This classifies the entire area of study for the month of September as "sublethal significant risk for standard species". The probabilistic approach using Monte Carlo analysis was performed on 7 different forecast points distributed along the Henares River. A 0% probability of finding "low risk" was found at all forecast points, with a more than 50% probability of finding "potential risk for sensitive species". The values obtained through both the deterministic and probabilistic approximations reveal the presence of certain substances which might be causing sublethal effects in the aquatic species present in the Henares River.
NASA Technical Reports Server (NTRS)
McComb, Harvey G., Jr.
1954-01-01
Equations are derived for the stress distributions caused by three types of loading on infinitely long circular, semimonocoque cylinders with flexible rings. The results are given as formulas for the stringer loads and shear flows in the shell due to each type of loading. For each loading case these formulas can be used to construct tables of influence coefficients giving the stringer loads and shear flows in the neighborhood of the load due to a unit magnitude of the load.
NASA Astrophysics Data System (ADS)
Doletskaya, L. I.; Solopov, R. V.; Kavchenkov, V. P.; Andreenkov, E. S.
2017-12-01
The physical features of damage to 10 kV aerial lines under ice and wind loads are examined. Mathematical models for estimating the reliability of the mechanical part of aerial lines are described, based on analytical methods and on corresponding mathematical models that take into account the probabilistic nature of ice and wind loads. Calculation results are presented for the reliability, specific damage, and average restoration time after emergency outages of 10 kV high-voltage aerial transmission lines using uninsulated and protected wires.
NASA Astrophysics Data System (ADS)
Liu, Yuan; Wang, Mingqiang; Ning, Xingyao
2018-02-01
Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, the primary and secondary tradeoffs between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model are explored and analyzed in a small system and in the IEEE-RTS system. The analysis of the two tradeoffs can help in establishing new efficient simplified LOLP formulations and new SR optimization models.
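As context for the quantity being simplified, the sketch below computes an exact LOLP by building the capacity-outage probability table for two-state generating units via convolution and reading off the probability that available capacity falls short of load. It is a minimal illustration, not the paper's formulation; the unit data are invented.

```python
import numpy as np

def capacity_outage_table(capacities, fors, step):
    """Exact distribution of available capacity for two-state units.
    capacities: unit capacities in MW (integer multiples of `step`)
    fors: forced outage rate of each unit (probability the unit is down)
    """
    n = sum(capacities) // step + 1
    prob = np.zeros(n)
    prob[0] = 1.0                            # before adding units: 0 MW available
    for cap, q in zip(capacities, fors):
        k = cap // step
        new = q * prob                       # unit down: distribution unchanged
        new[k:] += (1.0 - q) * prob[:n - k]  # unit up: shift up by its capacity
        prob = new
    return np.arange(n) * step, prob

def lolp(capacities, fors, load, step):
    cap, p = capacity_outage_table(capacities, fors, step)
    return p[cap < load].sum()               # P(available capacity < load)

# Three 100-MW units, 2% forced outage rate each, serving a 250-MW load:
print(lolp([100, 100, 100], [0.02, 0.02, 0.02], load=250, step=100))  # ~0.0588
```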
Gulati, Shelly; Stubblefield, Ashley A; Hanlon, Jeremy S; Spier, Chelsea L; Stringfellow, William T
2014-03-01
Measuring the discharge of diffuse pollution from agricultural watersheds presents unique challenges. Flows in agricultural watersheds, particularly in Mediterranean climates, can be predominately irrigation runoff and exhibit large diurnal fluctuations in both volume and concentration. Flow and pollutant concentrations in these smaller watersheds dominated by human activity do not conform to a normal distribution, and it is not clear whether parametric methods are appropriate or accurate for load calculations. The objective of this study was to compare the accuracy of five load estimation methods for calculating pollutant loads from agricultural watersheds. Calculation of loads using results from discrete (grab) samples was compared with the true load computed from in situ continuous monitoring measurements. A new method is introduced that uses a non-parametric measure of central tendency (the median) to calculate loads (median-load). The median-load method was compared to more commonly used parametric estimation methods which rely on the mean as a measure of central tendency (mean-load and daily-load), a method that utilizes the total flow volume (volume-load), and a method that uses a measure of flow at the time of sampling (instantaneous-load). Using measurements from ten watersheds in the San Joaquin Valley of California, the average percent error compared to the true load for total dissolved solids (TDS) was 7.3% for the median-load, 6.9% for the mean-load, 6.9% for the volume-load, 16.9% for the instantaneous-load, and 18.7% for the daily-load methods of calculation. The results of this study show that parametric methods are surprisingly accurate, even for data that have starkly non-normal distributions and are highly skewed. Copyright © 2013 Elsevier Ltd. All rights reserved.
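Three of the five estimators can be pictured with a short sketch. The versions below are plausible reconstructions from the method names, with invented variable names and unit conventions rather than the paper's exact formulas: mean-load and median-load apply a measure of central tendency of the grab-sample concentrations to the total flow volume, while instantaneous-load uses flow at the sampling instants.

```python
import numpy as np

def grab_sample_loads(conc, flow_at_grab, flow_record, dt=900.0):
    """Illustrative load estimators from n grab samples over one period.
    conc: grab-sample concentrations (mg/L); flow_at_grab: flow at each
    sampling instant (L/s); flow_record: continuous flow record (L/s)
    sampled every dt seconds. Loads are returned in kg.
    """
    conc = np.asarray(conc, float)
    flow_at_grab = np.asarray(flow_at_grab, float)
    flow_record = np.asarray(flow_record, float)
    volume = flow_record.sum() * dt                  # total discharge volume (L)
    period = flow_record.size * dt                   # length of the period (s)
    mean_load = conc.mean() * volume * 1e-6          # mg -> kg
    median_load = np.median(conc) * volume * 1e-6    # non-parametric version
    inst_load = (conc * flow_at_grab).mean() * period * 1e-6
    return mean_load, median_load, inst_load
```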
NASA Astrophysics Data System (ADS)
Keener, V. W.; Feyereisen, G. W.; Lall, U.; Jones, J. W.; Bosch, D. D.; Lowrance, R.
2010-02-01
As climate variability increases, it is becoming increasingly critical to find predictable patterns that can still be identified despite overall uncertainty. The El Niño/Southern Oscillation (ENSO) is the best-known such pattern; its global effects on weather, hydrology, ecology and human health have been well documented. Climate variability manifested through ENSO has strong effects in the southeast United States, seen in precipitation and stream flow data. However, climate variability may also affect water quality in nutrient concentrations and loads, and have impacts on ecosystems, health, and food availability in the southeast. In this research, we establish a teleconnection between ENSO and the Little River Watershed (LRW), GA, as seen in a shared 3-7 year mode of variability for precipitation, stream flow, and nutrient load time series. Univariate wavelet analysis of the NINO 3.4 index of sea surface temperature (SST) and of precipitation, stream flow, NO₃ concentration and load time series from the watershed was used to identify common signals. Shared 3-7 year modes of variability were seen in all variables, most strongly in precipitation, stream flow and nutrient load in strong El Niño years. The significance of shared 3-7 year periodicity over red noise with 95% confidence in SST and precipitation, stream flow, and NO₃ load time series was confirmed through cross-wavelet and wavelet-coherence transforms, in which common high power and co-variance were computed for each set of data. The strongest 3-7 year shared power was seen in SST and stream flow data, while the strongest co-variance was seen in SST and NO₃ load data. The strongest cross-correlation was seen as a positive value between the NINO 3.4 index and NO₃ load with a three-month lag. The teleconnection seen in the LRW between the NINO 3.4 index and precipitation, stream flow, and NO₃ load can be utilized in a model to predict monthly nutrient loads based on short-term climate variability, facilitating management in high-risk seasons.
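The lagged cross-correlation reported between NINO 3.4 and NO₃ load is straightforward to reproduce once both series are monthly; the sketch below shows one way, with hypothetical array names (the cross-wavelet and coherence analyses require more machinery, such as a continuous wavelet transform library).

```python
import numpy as np

def lagged_correlation(x, y, max_lag=12):
    """Pearson correlation of y against x, for y lagging x by 0..max_lag months."""
    x = (np.asarray(x, float) - np.mean(x)) / np.std(x)
    y = (np.asarray(y, float) - np.mean(y)) / np.std(y)
    corr = {0: float(np.mean(x * y))}
    for lag in range(1, max_lag + 1):
        corr[lag] = float(np.mean(x[:-lag] * y[lag:]))
    return corr

# nino34 and no3_load are equal-length monthly series (hypothetical inputs):
# corr = lagged_correlation(nino34, no3_load)
# best_lag = max(corr, key=corr.get)   # expected near 3 months for this watershed
```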
The United States of America as represented by the United States Department of Energy
2009-12-15
An apparatus and method for transferring thermal energy from a heat load is disclosed. In particular, use of a phase change material and specific flow designs enables cooling with temperature regulation well above the fusion temperature of the phase change material for medium and high heat loads from devices operated intermittently (in burst mode). Exemplary heat loads include burst mode lasers and laser diodes, flight avionics, and high power space instruments. Thermal energy is transferred from the heat load to liquid phase change material from a phase change material reservoir. The liquid phase change material is split into two flows. Thermal energy is transferred from the first flow via a phase change material heat sink. The second flow bypasses the phase change material heat sink and joins with liquid phase change material exiting from the phase change material heat sink. The combined liquid phase change material is returned to the liquid phase change material reservoir. The ratio of bypass flow to flow into the phase change material heat sink can be varied to adjust the temperature of the liquid phase change material returned to the liquid phase change material reservoir. Varying the flowrate and temperature of the liquid phase change material presented to the heat load determines the magnitude of thermal energy transferred from the heat load.
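The temperature regulation described rests on a simple mixing balance: for a constant specific heat, the mixed return temperature is the flow-weighted average of the bypass and heat-sink exit streams. The sketch below inverts that balance to get the bypass fraction for a target return temperature; the temperatures used are illustrative, not from the patent.

```python
def bypass_fraction(t_bypass, t_sink_exit, t_target):
    """Fraction of the flow that must bypass the PCM heat sink so that
    f * t_bypass + (1 - f) * t_sink_exit = t_target  (constant specific heat).
    t_bypass: temperature of the bypass stream (off the heat load)
    t_sink_exit: temperature of the stream leaving the PCM heat sink
    """
    return (t_target - t_sink_exit) / (t_bypass - t_sink_exit)

# 40 C bypass stream, 18 C off the PCM sink, 25 C desired return temperature:
print(bypass_fraction(40.0, 18.0, 25.0))   # ~0.32 of the flow bypasses the sink
```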
Quantification and Formalization of Security
2010-02-01
Contents fragments indicate coverage of quantification of information flow and language semantics. The security policy concerned restricts the system behavior observed by users holding low clearances; this policy, or a variant of it, is enforced by many programming-language-based mechanisms, which the report illustrates with a particular programming language (while-programs plus probabilistic choice). The model is extended further in §2.5.
Probabilistic finite elements for fatigue and fracture analysis
NASA Astrophysics Data System (ADS)
Belytschko, Ted; Liu, Wing Kam
1993-04-01
An overview of the probabilistic finite element method (PFEM) developed by the authors and their colleagues in recent years is presented. The primary focus is placed on the development of PFEM for both structural mechanics problems and fracture mechanics problems. The perturbation techniques are used as major tools for the analytical derivation. The following topics are covered: (1) representation and discretization of random fields; (2) development of PFEM for the general linear transient problem and nonlinear elasticity using Hu-Washizu variational principle; (3) computational aspects; (4) discussions of the application of PFEM to the reliability analysis of both brittle fracture and fatigue; and (5) a stochastic computational tool based on stochastic boundary element (SBEM). Results are obtained for the reliability index and corresponding probability of failure for: (1) fatigue crack growth; (2) defect geometry; (3) fatigue parameters; and (4) applied loads. These results show that initial defect is a critical parameter.
Probabilistic Fatigue Damage Program (FATIG)
NASA Technical Reports Server (NTRS)
Michalopoulos, Constantine
2012-01-01
FATIG computes fatigue damage/fatigue life using the stress rms (root mean square) value, the total number of cycles, and S-N curve parameters. The damage is computed by the following methods: (a) the traditional method using Miner's rule with stress cycles determined from a Rayleigh distribution up to 3*sigma; and (b) the classical fatigue damage formula involving the Gamma function, which is derived from the integral version of Miner's rule, with the integration carried out over all stress amplitudes. This software solves the problem of probabilistic fatigue damage using the integral form of the Palmgren-Miner rule. The software computes fatigue life using an approach involving all stress amplitudes, up to N*sigma, as specified by the user. It can be used in the design of structural components subjected to random dynamic loading, or by any stress analyst with minimal training for fatigue life estimates of structural components.
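A minimal sketch of method (b), the integral form of the Palmgren-Miner rule for a narrow-band Gaussian stress with Rayleigh-distributed amplitudes and an S-N curve N(S) = A·S⁻ᵇ: the untruncated integral has a closed form involving the Gamma function, and a truncated numerical version mirrors the user-specified N·sigma cutoff. The S-N constants in the example are invented.

```python
import math

def miner_damage(sigma_rms, n_cycles, A, b, clip=None):
    """Fatigue damage D = n * E[S**b] / A for Rayleigh-distributed amplitudes S.
    Untruncated closed form: D = (n/A) * (sqrt(2)*sigma)**b * Gamma(1 + b/2).
    With `clip` = N, the integral is cut off at N * sigma_rms instead.
    """
    if clip is None:
        return n_cycles / A * (math.sqrt(2) * sigma_rms)**b * math.gamma(1 + b / 2)
    steps = 4000
    ds = clip * sigma_rms / steps
    damage = 0.0
    for i in range(1, steps + 1):
        s = i * ds                                    # stress amplitude
        pdf = s / sigma_rms**2 * math.exp(-s * s / (2 * sigma_rms**2))
        damage += s**b / A * pdf * ds                 # d(damage) = pdf/N(s) ds
    return n_cycles * damage

# 1e7 cycles at 50 MPa rms against an illustrative S-N curve (A=1e15, b=4):
print(miner_damage(50.0, 1e7, 1e15, 4))              # all amplitudes, ~0.5
print(miner_damage(50.0, 1e7, 1e15, 4, clip=3))      # truncated at 3*sigma
```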
Bridges for Pedestrians with Random Parameters using the Stochastic Finite Elements Analysis
NASA Astrophysics Data System (ADS)
Szafran, J.; Kamiński, M.
2017-02-01
The main aim of this paper is to present a Stochastic Finite Element Method analysis with reference to the principal design parameters of bridges for pedestrians: the eigenfrequency and the deflection of the bridge span. These are considered with respect to the random thickness of plates in the boxed-section bridge platform, the Young's modulus of the structural steel, and the static load resulting from a crowd of pedestrians. The influence of the quality of the numerical model in the context of traditional FEM is also shown using the example of a simple steel shield. Steel structures with random parameters are discretized in exactly the same way as for the needs of the traditional Finite Element Method. Its probabilistic version is provided thanks to the Response Function Method, where several numerical tests with random parameter values varying around the mean enable the determination of the structural response and, thanks to the Least Squares Method, its final probabilistic moments.
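The Response Function Method step can be pictured as follows: run the deterministic FE model at a few values of the random parameter around its mean, fit a least-squares polynomial response surface, and take moments of the surface under the assumed input distribution. The sketch below uses a closed-form stand-in for the FE solver and invented parameter values.

```python
import numpy as np

def response_moments(solver, mean, cv, n_design=9, order=2, n_mc=200_000):
    """Response Function Method sketch: polynomial response surface fitted by
    least squares at design points around the mean, then probabilistic moments
    of the response for a Gaussian input with coefficient of variation `cv`.
    """
    sd = cv * mean
    x = np.linspace(mean - 2 * sd, mean + 2 * sd, n_design)  # design points
    y = np.array([solver(v) for v in x])                     # stand-in "FE" runs
    surface = np.polynomial.Polynomial.fit(x, y, order)      # least-squares fit
    rng = np.random.default_rng(0)
    out = surface(rng.normal(mean, sd, n_mc))
    return out.mean(), out.std()

# Midspan deflection of a simply supported beam vs. random Young's modulus E,
# w(E) = 5 q L^4 / (384 E I); all quantities are illustrative:
q, L, I = 30e3, 20.0, 8e-3
print(response_moments(lambda E: 5 * q * L**4 / (384 * E * I), 210e9, 0.05))
```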
Remembrance of inferences past: Amortization in human hypothesis generation.
Dasgupta, Ishita; Schulz, Eric; Goodman, Noah D; Gershman, Samuel J
2018-05-21
Bayesian models of cognition assume that people compute probability distributions over hypotheses. However, the required computations are frequently intractable or prohibitively expensive. Since people often encounter many closely related distributions, selective reuse of computations (amortized inference) is a computationally efficient use of the brain's limited resources. We present three experiments that provide evidence for amortization in human probabilistic reasoning. When sequentially answering two related queries about natural scenes, participants' responses to the second query systematically depend on the structure of the first query. This influence is sensitive to the content of the queries, only appearing when the queries are related. Using a cognitive load manipulation, we find evidence that people amortize summary statistics of previous inferences, rather than storing the entire distribution. These findings support the view that the brain trades off accuracy and computational cost, to make efficient use of its limited cognitive resources to approximate probabilistic inference. Copyright © 2018 Elsevier B.V. All rights reserved.
A Step Made Toward Designing Microelectromechanical System (MEMS) Structures With High Reliability
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.
2003-01-01
The mechanical design of microelectromechanical systems (MEMS), particularly for micropower generation applications, requires the ability to predict the strength capacity of load-carrying components over the service life of the device. These microdevices, which typically are made of brittle materials such as polysilicon, show wide scatter (stochastic behavior) in strength as well as a different average strength for different-sized structures (size effect). These behaviors necessitate either costly and time-consuming trial-and-error designs or, more efficiently, the development of a probabilistic design methodology for MEMS. Over the years, the NASA Glenn Research Center's Life Prediction Branch has developed the CARES/Life probabilistic design methodology to predict the reliability of advanced ceramic components. In this study, done in collaboration with Johns Hopkins University, the ability of the CARES/Life code to predict the reliability of polysilicon microsized structures with stress concentrations is successfully demonstrated.
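The stochastic strength scatter and size effect mentioned are commonly captured with a two-parameter Weibull model, in which the probability of failure grows with both stress and stressed size. The sketch below is a generic illustration of that behavior, not the CARES/Life formulation, which integrates the stress field over the surface or volume of the FE model; all numbers are invented.

```python
import math

def weibull_pof(stress, scale, modulus, size_ratio=1.0):
    """Two-parameter Weibull probability of failure with a simple size effect:
    P_f = 1 - exp(-(size/size0) * (stress/scale)**modulus)."""
    return 1.0 - math.exp(-size_ratio * (stress / scale) ** modulus)

# At the same stress, a structure with twice the stressed size fails about
# twice as often in the low-probability regime (illustrative numbers):
print(weibull_pof(1500.0, 3000.0, 10))                  # ~9.8e-4
print(weibull_pof(1500.0, 3000.0, 10, size_ratio=2.0))  # ~2.0e-3
```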
A multipopulation PSO based memetic algorithm for permutation flow shop scheduling.
Liu, Ruochen; Ma, Chenlin; Ma, Wenping; Li, Yangyang
2013-01-01
The permutation flow shop scheduling problem (PFSSP) is part of production scheduling and is among the hardest combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations in which each particle evolves by standard PSO; each subpopulation is then updated using different local search schemes, namely variable neighborhood search (VNS) and an individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model using an estimation of distribution algorithm (EDA), and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire particle swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely a PSO-based memetic algorithm (PSOMA) and a hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSPs taken from the OR-Library, and the experimental results show that it is an effective approach for the PFSSP.
A flexible open-source toolkit for lava flow simulations
NASA Astrophysics Data System (ADS)
Mossoux, Sophie; Feltz, Adelin; Poppe, Sam; Canters, Frank; Kervyn, Matthieu
2014-05-01
Lava flow hazard modeling is a useful tool for scientists and stakeholders confronted with imminent or long-term hazard from basaltic volcanoes. It can improve their understanding of the spatial distribution of volcanic hazard, inform their land use decisions and improve city evacuation during a volcanic crisis. Although a range of empirical, stochastic and physically-based lava flow models exists, these models are rarely available or require a large number of physical constraints. We present a GIS toolkit which models lava flow propagation from one or multiple eruptive vents, defined interactively on a Digital Elevation Model (DEM). It combines existing probabilistic (VORIS) and deterministic (FLOWGO) models in order to improve the simulation of lava flow spatial spread and terminal length. Not only is this toolkit open-source, running in Python, which allows users to adapt the code to their needs, but it also allows users to combine the included models in different ways. The lava flow paths are determined based on the probabilistic steepest slope (VORIS model - Felpeto et al., 2001), which can be constrained in order to favour concentrated or dispersed flow fields. Moreover, the toolkit allows including a corrective factor in order for the lava to overcome small topographical obstacles or pits. The lava flow terminal length can be constrained using a fixed length value, a Gaussian probability density function, or can be calculated based on the thermo-rheological properties of the open-channel lava flow (FLOWGO model - Harris and Rowland, 2001). These slope-constrained properties allow estimating the velocity of the flow and its heat losses. The lava flow stops when its velocity is zero or the lava temperature reaches the solidus. Recent lava flows of Karthala volcano (Comoros islands) are used to demonstrate the quality of lava flow simulations with the toolkit, using a quantitative assessment of the match of the simulation with the real lava flows. The influence of the different input parameters on the quality of the simulations is discussed. REFERENCES: Felpeto et al. (2001), Assessment and modelling of lava flow hazard on Lanzarote (Canary Islands), Nat. Hazards, 23, 247-257. Harris and Rowland (2001), FLOWGO: a kinematic thermo-rheological model for lava flowing in a channel, Bull. Volcanol., 63, 20-44.
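A single Monte Carlo path under a probabilistic steepest-slope rule can be sketched in a few lines: from the current cell, the flow steps to a downhill neighbour with probability proportional to the height drop, and a corrective height hc lets it overtop small obstacles or pits. This is a schematic reading of the VORIS-style rule described above, not the toolkit's code; all parameters are illustrative.

```python
import numpy as np

def lava_path(dem, start, hc=0.5, max_steps=10_000, seed=None):
    """One stochastic lava path over a 2-D elevation array `dem`."""
    rng = np.random.default_rng(seed)
    r, c = start
    path = [(r, c)]
    nbrs = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]
    for _ in range(max_steps):
        cells, drops = [], []
        for dr, dc in nbrs:
            rr, cc = r + dr, c + dc
            if 0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]:
                drop = dem[r, c] + hc - dem[rr, cc]  # hc helps overtop pits
                if drop > 0.0:
                    cells.append((rr, cc))
                    drops.append(drop)
        if not cells:                # closed depression: this path stops
            break
        p = np.array(drops) / sum(drops)
        r, c = cells[rng.choice(len(cells), p=p)]
        path.append((r, c))
    return path

# Repeating this many times from one vent and overlaying the paths
# approximates an inundation probability map; terminal length is capped
# separately (fixed value, Gaussian PDF, or a FLOWGO-type thermal stop).
```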
A screening-level modeling approach to estimate nitrogen ...
This paper presents a screening-level modeling approach that can be used to rapidly estimate nutrient loading and assess the risk of exceeding numerical nutrient standards, leading to potential classification of surface waters as impaired for designated use. It can also be used to explore best management practice (BMP) implementation to reduce loading. The modeling framework uses a hybrid statistical and process-based approach to estimate sources of pollutants and their transport and decay in the terrestrial and aquatic parts of watersheds. The framework is developed in the ArcGIS environment and is based on the total maximum daily load (TMDL) balance model. Nitrogen (N) is currently addressed in the framework, referred to as WQM-TMDL-N. Loading for each catchment includes non-point sources (NPS) and point sources (PS). NPS loading is estimated using export coefficient or event mean concentration methods depending on the temporal scale, i.e., annual or daily. Loading from atmospheric deposition is also included. The probability of a nutrient load exceeding a target load is evaluated using probabilistic risk assessment, by including the uncertainty associated with the export coefficients of various land uses. The computed risk data can be visualized as spatial maps which show the load exceedance probability for all stream segments. In an application of this modeling approach to the Tippecanoe River watershed in Indiana, USA, total nitrogen (TN) loading and the risk of standard exceedance were estimated.
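The exceedance-risk idea can be illustrated with a tiny Monte Carlo: treat each land use's export coefficient as a lognormal random variable, draw annual loads, and report the fraction of draws above the target. This is a sketch of the concept with invented land uses and coefficients, not WQM-TMDL-N itself.

```python
import numpy as np

def load_exceedance_prob(areas_ha, ec_mean, ec_cv, target_kg, n=50_000, seed=0):
    """P(annual load > target) with lognormal export coefficients (kg/ha/yr).
    ec_mean / ec_cv: mean and coefficient of variation per land use."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(ec_mean, float)
    sd = np.asarray(ec_cv, float) * mu
    s2 = np.log(1.0 + (sd / mu) ** 2)          # lognormal parameters that
    m = np.log(mu) - s2 / 2.0                  # match the given mean and sd
    ec = rng.lognormal(m, np.sqrt(s2), size=(n, mu.size))
    loads = ec @ np.asarray(areas_ha, float)   # kg/yr for each Monte Carlo draw
    return float((loads > target_kg).mean())

# Row crop and forest draining to one reach (all numbers illustrative):
print(load_exceedance_prob([400, 600], [15.0, 2.0], [0.4, 0.3], 8000.0))
```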
Medalie, Laura
2007-01-01
The effectiveness of best-management practices (BMPs) in improving water quality in Lake Champlain tributaries was evaluated from 2000 through 2005 on the basis of analysis of data collected on concentrations of total phosphorus and suspended sediment in Englesby Brook, an urban stream in Burlington, and Little Otter Creek, an agricultural stream in Ferrisburg. Data also were collected on concentrations of total nitrogen in the Englesby Brook watershed. In the winter of 2001-2002, one of three planned structural BMPs was installed in the urban watershed. At approximately the same time, a set of barnyard BMPs was installed in the agricultural watershed; however, the other planned BMPs, which included streambank fencing and nutrient management, were not implemented within the study period. At Englesby Brook, concentrations of phosphorus ranged from 0.024 to 0.3 milligrams per liter (mg/L) during base-flow and from 0.032 to 11.8 mg/L during high-flow conditions. Concentrations of suspended sediment ranged from 3 to 189 mg/L during base-flow and from 5 to 6,880 mg/L during high-flow conditions. An assessment of the effectiveness of an urban BMP was made by comparing concentrations and loads of phosphorus and suspended sediment before and after a golf-course irrigation pond in the Englesby Brook watershed was retrofitted with the objective of reducing sediment transport. Results from a modified paired watershed study design showed that the BMP reduced concentrations of phosphorus and suspended sediment during high-flow events - when average streamflow was greater than 3 cubic feet per second. While construction of the BMP did not reduce storm loads of phosphorus or suspended sediment, an evaluation of changes in slope of double-mass curves showing cumulative monthly streamflow plotted against cumulative monthly loads indicated a possible reduction in cumulative loads of phosphorus and suspended sediment after BMP construction. Results from the Little Otter Creek assessment of agricultural BMPs showed that concentrations of phosphorus ranged from 0.016 to 0.141 mg/L during base-flow and from 0.019 to 0.565 mg/L during high-flow conditions at the upstream monitoring station. Concentrations of suspended sediment ranged from 2 to 13 mg/L during base-flow and from 1 to 473 mg/L during high-flow conditions at the upstream monitoring station. Concentrations of phosphorus ranged from 0.018 to 0.233 mg/L during base-flow and from 0.019 to 1.95 mg/L during high-flow conditions at the downstream monitoring station. Concentrations of suspended sediment ranged from 10 to 132 mg/L during base-flow and from 8 to 1,190 mg/L during high-flow conditions at the downstream monitoring station. Annual loads of phosphorus at the downstream monitoring station were significantly larger than loads at the upstream monitoring station, and annual loads of suspended sediment at the downstream monitoring station were larger than loads at the upstream monitoring station for 4 out of 6 years. On a monthly basis, loads of phosphorus and suspended sediment at the downstream monitoring station were significantly larger than loads at the upstream monitoring station. Pairs of concentrations of phosphorus and monthly loads of phosphorus and suspended sediment from the upstream and downstream monitoring stations were evaluated using the paired watershed study design. The only significant reduction between the calibration and treatment periods was for monthly loads of phosphorus; all other evaluations showed no change between periods.
A Technique for Developing Probabilistic Properties of Earth Materials
1988-04-01
Only fragments of this report are indexed: front matter from the Department of Civil Engineering (responsibility for coordinating the program was assigned to Mr. A. E. Jackson, Jr.), and a notation list including E = expected value; F = ratio of the between-sample variance to the within-sample variance; true radial and axial strains, with the specimen assumed to deform as a right circular cylinder; the number of increments in the covariance analysis; νL = loading Poisson's ratio; and νUN = unloading Poisson's ratio.
DOT National Transportation Integrated Search
1999-05-01
The Federal Aviation Administration (FAA) has a continuing program to collect data and develop predictive methods for aircraft flight loads. Some of the most severe and potentially catastrophic flight loads are produced by separated flows. Structural...
Probabilistic Finite Element Analysis & Design Optimization for Structural Designs
NASA Astrophysics Data System (ADS)
Deivanayagam, Arumugam
This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure patterns and exit velocities, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, even though the structures produced by deterministic optimization are cost-effective, they become highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. Reliable and optimal solutions can be obtained by performing reliability optimization along with deterministic optimization, which is RBDO. In the RBDO problem formulation, reliability constraints are considered in addition to structural performance constraints. This part of the research starts with an introduction to reliability analysis, such as first-order and second-order reliability analyses, followed by simulation techniques performed to obtain the probability of failure and the reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation with sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and function evaluations. Finally, implementations of the reliability analysis concepts and RBDO in finite element 2D truss problems and a planar beam problem are presented and discussed.
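Among the reliability analyses mentioned, the first-order method has a closed form for a linear limit state with independent normal variables; the sketch below shows that textbook base case (resistance minus load effect), with invented numbers. Nonlinear limit states require the iterative most-probable-point search that the full analysis builds on.

```python
import math

def form_linear(mu_r, sd_r, mu_s, sd_s):
    """First-order reliability for g = R - S, with R and S independent normals:
    beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2),  P_f = Phi(-beta)."""
    beta = (mu_r - mu_s) / math.sqrt(sd_r**2 + sd_s**2)
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))   # standard normal CDF at -beta
    return beta, pf

# Strength 500 +/- 50 vs. demand 300 +/- 60 (units arbitrary):
print(form_linear(500.0, 50.0, 300.0, 60.0))      # beta ~ 2.56, P_f ~ 5.2e-3
```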
Hydro and morphodynamic simulations for probabilistic estimates of munitions mobility
NASA Astrophysics Data System (ADS)
Palmsten, M.; Penko, A.
2017-12-01
Probabilistic estimates of waves, currents, and sediment transport at underwater munitions remediation sites are necessary to constrain probabilistic predictions of munitions exposure, burial, and migration. To address this need, we produced ensemble simulations of hydrodynamic flow and morphologic change with Delft3D, a coupled system of wave, circulation, and sediment transport models. We set up the Delft3D model simulations at the Army Corps of Engineers Field Research Facility (FRF) in Duck, NC, USA. The FRF is the prototype site for the near-field munitions mobility model, which integrates far-field and near-field munitions mobility simulations. An extensive array of in-situ and remotely sensed oceanographic, bathymetric, and meteorological data is available at the FRF, as well as existing observations of munitions mobility for model testing. Here, we present results of ensemble Delft3D hydro- and morphodynamic simulations at Duck. A nested Delft3D simulation runs an outer grid that extends 12 km in the alongshore and 3.7 km in the cross-shore with 50-m resolution and a maximum depth of approximately 17 m. The inner nested grid extends 3.2 km in the alongshore and 1.2 km in the cross-shore with 5-m resolution and a maximum depth of approximately 11 m. The inner nested grid's initial model bathymetry is defined as the most recent survey or remotely sensed estimate of water depth. Delft3D-WAVE and -FLOW are driven with spectral wave measurements from a Waverider buoy in 17-m depth located on the offshore boundary of the outer grid. The spectral wave output and the water levels from the outer grid are used to define the boundary conditions for the inner nested high-resolution grid, in which the coupled Delft3D WAVE-FLOW-MORPHOLOGY model is run. The ensemble results are compared to the wave, current, and bathymetry observations collected at the FRF.
Statistical Evaluation and Improvement of Methods for Combining Random and Harmonic Loads
NASA Technical Reports Server (NTRS)
Brown, A. M.; McGhee, D. S.
2003-01-01
Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This Technical Publication examines the cumulative distribution percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica to calculate the combined value corresponding to any desired percentile is then presented, along with a curve fit to this value. Another Excel macro that calculates the combination using Monte Carlo simulation is shown. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Additionally, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can substantially lower the design loading without losing any of the identified structural reliability.
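The Monte Carlo variant is compact enough to sketch: sample the harmonic load at a uniformly random phase, add an independent zero-mean Gaussian random load, and read the desired percentile of the combined value. The inputs below are invented; the publication's macros also provide the direct joint-PDF integration.

```python
import numpy as np

def combined_percentile(sine_amp, random_rms, pct=99.865, n=1_000_000, seed=0):
    """Percentile of (harmonic load at random phase + Gaussian random load)."""
    rng = np.random.default_rng(seed)
    sine = sine_amp * np.sin(rng.uniform(0.0, 2.0 * np.pi, n))
    rand = rng.normal(0.0, random_rms, n)
    return float(np.percentile(sine + rand, pct))

# Consistent-percentile combination vs. the traditional peak-plus-3-sigma sum:
print(combined_percentile(100.0, 30.0))   # statistically defined design value
print(100.0 + 3.0 * 30.0)                 # deterministic sum, percentile unknown
```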
Statistical Comparison and Improvement of Methods for Combining Random and Harmonic Loads
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; McGhee, David S.
2004-01-01
Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This paper examines the cumulative distribution function (CDF) percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica is then used to calculate the combined value corresponding to any desired percentile, along with a curve fit to this value. Another Excel macro is used to calculate the combination using a Monte Carlo simulation. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Also, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can lower the design loading substantially without losing any of the identified structural reliability.
NASA Technical Reports Server (NTRS)
Schmucker, R. H.
1983-01-01
Methods aimed at reduction of overexpansion and side load resulting from asymmetric flow separation for rocket nozzles with a high opening ratio are described. The methods employ additional measures for nozzles with a fixed opening ratio. The flow separation can be controlled by several types of nozzle inserts, the properties of which are discussed. Side loads and overexpansion can be reduced by adapting the shape of the nozzle and taking other additional measures for controlled separation of the boundary layer, such as trip wires.
Wang, Chih-Wei; Bains, Aman; Sinton, David; Moffitt, Matthew G
2013-07-02
We investigate the loading efficiencies of two chemically distinct hydrophobic fluorescent probes, pyrene and naphthalene, for self-assembly and loading of polystyrene-block-poly(acrylic acid) (PS-b-PAA) micelles in gas-liquid segmented microfluidic reactors under different chemical and flow conditions. On-chip loading efficiencies are compared to values obtained via off-chip dropwise water addition to a solution of copolymer and probe. On-chip, probe loading efficiencies depend strongly on the chemical probe, initial solvent, water content, and flow rate. For pyrene and naphthalene probes, maximum on-chip loading efficiencies of 73 ± 6% and 11 ± 3%, respectively, are obtained, in both cases using the more polar solvent (DMF), an intermediate water content (2 wt % above critical), and a low flow rate (∼5 μL/min); these values are compared to 81 ± 6% and 48 ± 2%, respectively, for off-chip loading. On-chip loading shows a significant improvement over the off-chip process where shear-induced formation of smaller micelles enables increased encapsulation of probe. As well, we show that on-chip loading allows off-chip release kinetics to be controlled via flow rate: compared to vehicles produced at ∼5 μL/min, pyrene release kinetics from vehicles produced at ∼50 μL/min showed a longer initial period of burst release, followed by slow release over a longer total period. These results demonstrate the necessity to match probes, solvents, and running conditions to achieve effective loading, which is essential information for further developing these on-chip platforms for manufacturing drug delivery formulations.
Exploring the role of flood transience in coarse bed load sediment transport
NASA Astrophysics Data System (ADS)
Phillips, C. B.; Singer, M. B.; Hill, K. M.; Paola, C.
2015-12-01
The rate of bed load transport under steady flow is known to vary both spatially and temporally due to various hydrologic and granular phenomena. Grain size distributions and riverbed properties (packing, imbrication, etc.) are known to affect flux for a particular value of applied flow stress, while hydrology is mainly assumed to control the magnitude of the applied bed stress above the threshold for bed material entrainment. The prediction of bed load sediment transport in field settings is further complicated by the inherent transience in flood hydrology, but little is known about how such flood transience influences bed load flux over a range of applied bed stress. Here we investigate the role of flood transience for gravel bed load transport through controlled laboratory experiments in a 28 m long, 0.5 m wide flume. We explore transient flow as the combination of unsteady and intermittent flow, where unsteady flow varies in magnitude over a given duration, and intermittent flow is characterized by turning the flow on and off. We systematically vary these details of flood hydrographs from one experiment to the next, and monitor the bed load as it varies with water discharge in real time by measuring sediment flux and tracking particles. We find that even with a narrow unimodal grain size distribution and constant sediment supply we observe hysteresis in bed load flux, different thresholds for entrainment and distrainment for the rising and falling limbs of a flood, and a threshold of entrainment that can vary from one flood hydrograph to the next. Despite these complex phenomena, we find that the total bed load transported for each flood plots along a linear trend with the integrated excess stress, consistent with prior field results. These results suggest that while the effects of transient flow and the shape of the hydrograph are measurable, they are second-order compared to the integrated excess stress.
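The collapse variable is easy to state: the time integral of bed stress above the entrainment threshold over the flood. A minimal sketch, assuming a single constant threshold; the stress series and numbers are invented.

```python
import numpy as np

def integrated_excess_stress(tau, tau_c, dt):
    """Integral over the flood of max(tau - tau_c, 0) dt (units Pa*s).
    tau: bed shear stress series (Pa); tau_c: entrainment threshold (Pa);
    dt: sampling interval (s)."""
    return float(np.clip(np.asarray(tau, float) - tau_c, 0.0, None).sum() * dt)

# A triangular flood-like stress history sampled every 10 s:
tau = np.concatenate([np.linspace(0.0, 8.0, 60), np.linspace(8.0, 0.0, 60)])
print(integrated_excess_stress(tau, tau_c=3.0, dt=10.0))
```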
NASA Technical Reports Server (NTRS)
Schmucker, R. H.
1984-01-01
Methods for measuring the lateral forces, occurring as a result of asymmetric nozzle flow separation, are discussed. The effect of some parameters on the side load is explained. A new method was developed for calculation of the side load. The values calculated are compared with side load data of the J-2 engine. Results are used for predicting side loads of the space shuttle main engine.
NASA Technical Reports Server (NTRS)
Clothiaux, John D.; Dowling, Norman E.
1992-01-01
The suitability of using rain-flow reconstructions as an alternative to an original loading spectrum for component fatigue life testing is investigated. A modified helicopter maneuver history is used for the rain-flow cycle counting and history regenerations. Experimental testing on a notched test specimen over a wide range of loads produces similar lives for the original history and the reconstructions. The test lives also agree with a simplified local strain analysis performed on the specimen utilizing the rain-flow cycle count. The rain-flow reconstruction technique is shown to be a viable test spectrum alternative to storing the complete original load history, especially in saving computer storage space and processing time. A description of the regeneration method, the simplified life prediction analysis, and the experimental methods are included in the investigation.
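The counting step underlying such reconstructions can be sketched with the classic three-point stack rule. The version below counts every closed range as a full cycle and reports the leftover residue as half-cycle ranges, which is simpler than the ASTM E1049 procedure used in practice.

```python
def rainflow_ranges(series):
    """Simplified three-point rainflow count: returns (full-cycle ranges,
    residual half-cycle ranges). A sketch, not the ASTM E1049 algorithm."""
    # keep turning points only
    tp = [series[0]]
    for x in series[1:]:
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x                       # same direction: extend excursion
        elif x != tp[-1]:
            tp.append(x)
    cycles, stack = [], []
    for point in tp:
        stack.append(point)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])   # newest range
            y = abs(stack[-2] - stack[-3])   # enclosed range
            if x < y:
                break
            cycles.append(y)                 # closed cycle of range y
            del stack[-3:-1]                 # remove its two turning points
    half = [abs(b - a) for a, b in zip(stack, stack[1:])]  # residue
    return cycles, half

print(rainflow_ranges([0, 5, 1, 4, -2, 3, 0]))
```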
Manorama, Abinand; Meyer, Ronald; Wiseman, Robert; Bush, Tamara Reid
2013-06-01
Forces applied to the skin cause a decrease in regional blood flow. This decrease in blood flow can cause tissue necrosis and lead to the formation of deep, penetrating wounds called pressure ulcers. These wounds are detrimental to individuals with compromised health, such as the elderly and spinal-cord injured. Although surface pressure is known to be a primary risk factor for developing a pressure ulcer, a seated individual rarely experiences pressure alone but rather combined loading which includes pressure as well as shear force on the skin. However, little research has been conducted to quantify the effects of shear forces on blood flow. Fifteen men were tested in a magnetic resonance imaging scanner under no load, a normal load, and a combination of normal and shear loads. Changes in arterial and venous blood flow in the forearm were measured using magnetic resonance angiography phase-contrast imaging. The blood flow in the anterior interosseous artery and basilic vein of the forearm decreased with the application of normal loads, and decreased further with the addition of shear loads. Marginal to significant differences at a 90% confidence level (P=0.08, 0.10) were observed, and medium to high effect sizes (0.3 to 0.5) were obtained. Based on these results, shear force is an important factor to consider in relation to pressure ulcer propagation and prevention, and hence, future prevention approaches should also focus on mitigating shear loads. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Mastrolorenzo, G.; Pappalardo, L.; Troise, C.; Panizza, A.; de Natale, G.
2005-05-01
Integrated volcanological-probabilistic approaches have been used to simulate pyroclastic density currents and fallout and to produce hazard maps for the Campi Flegrei and Somma-Vesuvius areas. On the basis of analyses of all types of pyroclastic flows, surges, secondary pyroclastic density currents and fallout events that occurred in the volcanological history of the two volcanic areas, and of the probability evaluated for each type of event, matrices of input parameters for numerical simulation have been prepared. The multi-dimensional input matrices include the main parameters controlling pyroclast transport, deposition and dispersion, as well as the set of possible eruptive vents used in the simulation program. Probabilistic hazard maps provide, for each point of the Campanian area, the yearly probability of being affected by a given event of a given intensity and the resulting damage. Probabilities of a few events per thousand years are typical of most areas within a range of about 10 km around the volcanoes, including Naples. The results provide constraints for the emergency plans in the Neapolitan area.
An alternative method for centrifugal compressor loading factor modelling
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.
2017-08-01
The loading factor at the design point is calculated by one or another empirical formula in classical design methods; performance modelling as a whole is out of consideration. Test data from compressor stages demonstrate that the loading factor versus flow coefficient at the impeller exit has a linear character independent of compressibility. The well-known Universal Modelling Method exploits this fact. Two points define the function: the loading factor at the design point and at zero flow rate. The corresponding formulae include empirical coefficients. A good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova proposed defining the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. Equations with universal empirical coefficients are proposed. The calculation error lies in the range of ±1.5%. The alternative model of loading factor performance modelling is included in new versions of the Universal Modelling Method.
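Since the measured loading factor versus exit flow coefficient is linear, the two parameters that define the performance (the zero-flow loading factor and the inclination) drop out of a least-squares line. The stage data below are invented stand-ins for test measurements.

```python
import numpy as np

def loading_factor_line(phi, psi):
    """Fit the linear loading factor performance psi = psi0 - k * phi and
    return (psi0, k): the loading factor at zero flow rate and the
    inclination of the line."""
    k, psi0 = np.polyfit(phi, psi, 1)   # slope, intercept
    return psi0, -k

phi = np.array([0.20, 0.25, 0.30, 0.35, 0.40])   # exit flow coefficient
psi = np.array([0.62, 0.58, 0.54, 0.50, 0.46])   # measured loading factor
print(loading_factor_line(phi, psi))             # -> (0.78, 0.80)
```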
Masaki, Nami; Sugama, Junko; Okuwa, Mayumi; Inagaki, Misako; Matsuo, Junko; Nakatani, Tosio; Sanada, Hiromi
2013-07-01
The purpose of this study was to evaluate the differences in heel blood flow during loading and off-loading in bedridden adults older than 65 years. The patients were divided into three groups based on ankle-brachial pressure index (ABI) and transcutaneous oxygen tension (tcPO₂): (1) patients with an ABI ≥ 0.8 (Group A); (2) patients with an ABI < 0.8 and heel tcPO₂ ≥ 10 mmHg (Group B); and (3) patients with an ABI < 0.8 and heel tcPO₂ < 10 mmHg (Group C). Heel blood flow was monitored using tcPO₂ sensors. Data were collected with the heel (1) suspended above the bed surface (preload), (2) on the bed surface for 30 min (loading), and (3) again suspended above the bed surface for 60 min (off-loading). Heel blood flow during off-loading was assessed using three parameters: oxygen recovery index (ORI), total tcPO₂ for the first 10 min, and change in tcPO₂ after 60 min of off-loading. ORI in Group C (n = 8) was significantly shorter than in Groups A (n = 22) and B (n = 15). Total tcPO₂ for the first 10 min of off-loading in Group C was significantly less than that in Groups A and B. Change in tcPO₂ after 60 min of off-loading in Group C was less than in Group A. Based on these findings, additional preventive care against heel blood flow decrease in older adults with an ABI < 0.8 and heel tcPO₂ < 10 mmHg might be necessary after loading.
NASA Astrophysics Data System (ADS)
Grunthal, Gottfried; Stromeyer, Dietrich; Bosse, Christian; Cotton, Fabrice; Bindi, Dino
2017-04-01
The seismic load parameters for the upcoming National Annex to Eurocode 8 result from the reassessment of seismic hazard supported by the German Institution for Civil Engineering. This 2016 hazard assessment with Germany as the target area was based on a comprehensive incorporation of all accessible uncertainties in models and parameters, and on the provision of a rational framework for treating those uncertainties in a transparent way. The developed seismic hazard model represents significant improvements; i.e., it is based on updated and extended databases, comprehensive ranges of models, robust methods and a selection of a set of ground motion prediction equations of the latest generation. The output specifications were designed according to user-oriented needs as suggested by the two review teams supervising the entire project. In particular, seismic load parameters were calculated for rock conditions with a vS30 of 800 m/s for three hazard levels (10%, 5% and 2% probability of occurrence or exceedance within 50 years), in the form of, e.g., uniform hazard spectra (UHS) based on 19 spectral periods in the range 0.01-3 s, and seismic hazard maps for spectral response accelerations at different spectral periods or for macroseismic intensities. The developed hazard model consists of a logic tree with 4040 end branches and essential innovations employed to capture epistemic uncertainties and aleatory variabilities. The computation scheme enables the sound calculation of the mean and any quantile of the required seismic load parameters. The mean, median and 84th percentiles of load parameters were provided together with the full calculation model to clearly illustrate the uncertainties of such a probabilistic assessment for a region of low-to-moderate seismicity. The regional variations of these uncertainties (e.g., ratios between the mean and median hazard estimates) were analyzed and discussed.
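A sketch of the aggregation step over logic-tree end branches: given each branch's load parameter value and weight, the mean is the weighted sum, and any quantile comes from the weighted empirical distribution. The branch values and weights below are placeholders, not the model's 4040 branches.

```python
import numpy as np

def logic_tree_stats(values, weights, quantiles=(0.5, 0.84)):
    """Weighted mean and quantiles of a load parameter (e.g., spectral
    acceleration at a fixed exceedance probability) across end branches."""
    v = np.asarray(values, float)
    w = np.asarray(weights, float)
    w = w / w.sum()
    order = np.argsort(v)
    cdf = np.cumsum(w[order])                 # weighted empirical CDF
    qs = [float(v[order][np.searchsorted(cdf, q)]) for q in quantiles]
    return float(np.sum(w * v)), qs

# Four hypothetical branches: mean, then [median, 84th percentile]
print(logic_tree_stats([0.8, 1.0, 1.1, 1.4], [0.2, 0.4, 0.3, 0.1]))
```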
Incorporating uncertainty in watershed management decision-making: A mercury TMDL case study
Labiosa, W.; Leckie, J.; Shachter, R.; Freyberg, D.; Rytuba, J.; ,
2005-01-01
Water quality impairment due to high mercury fish tissue concentrations and high mercury aqueous concentrations is a widespread problem in several sub-watersheds that are major sources of mercury to the San Francisco Bay. Several mercury Total Maximum Daily Load regulations are currently being developed to address this problem. Decisions about control strategies are being made despite very large uncertainties about current mercury loading behavior, relationships between total mercury loading and methyl mercury formation, and relationships between potential controls and mercury fish tissue levels. To deal with the issues of very large uncertainties, data limitations, knowledge gaps, and very limited State agency resources, this work proposes a decision analytical alternative for mercury TMDL decision support. The proposed probabilistic decision model is Bayesian in nature and is fully compatible with a "learning while doing" adaptive management approach. Strategy evaluation, sensitivity analysis, and information collection prioritization are examples of analyses that can be performed using this approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Etingov, Pavel; Makarov, PNNL Yuri; Subbarao, PNNL Kris
RUT software is designed for use by Balancing Authorities to predict and display additional requirements caused by the variability and uncertainty in load and generation. The prediction is made for the next operating hours as well as for the next day. The tool predicts possible deficiencies in generation capability and ramping capability. This deficiency of balancing resources can cause serious risks to power system stability and also impact real-time market energy prices. The tool dynamically and adaptively correlates changing system conditions with the additional balancing needs triggered by the interplay between forecasted and actual load and output of variable resources. The assessment is performed using a specially developed probabilistic algorithm incorporating multiple sources of uncertainty including wind, solar and load forecast errors. The tool evaluates required generation for a worst-case scenario, with a user-specified confidence level.
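One simple way to see how multiple sources of uncertainty combine into a balancing requirement: if load, wind, and solar forecast errors are independent and roughly Gaussian, their standard deviations root-sum-square, and the requirement at a given confidence level is the corresponding quantile. This is a conceptual sketch with invented numbers and a Gaussian assumption, not RUT's probabilistic algorithm.

```python
from statistics import NormalDist

def balancing_requirement(sd_load, sd_wind, sd_solar, confidence=0.95):
    """Capacity (MW) covering the combined forecast error at `confidence`,
    assuming independent zero-mean Gaussian errors per source."""
    sd_total = (sd_load**2 + sd_wind**2 + sd_solar**2) ** 0.5
    return NormalDist(0.0, sd_total).inv_cdf(confidence)

# Forecast-error sigmas of 100 MW (load), 80 MW (wind), 30 MW (solar):
print(balancing_requirement(100.0, 80.0, 30.0))   # ~216 MW at 95% confidence
```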
Probabilistic assessment of landslide tsunami hazard for the northern Gulf of Mexico
NASA Astrophysics Data System (ADS)
Pampell-Manis, A.; Horrillo, J.; Shigihara, Y.; Parambath, L.
2016-01-01
The devastating consequences of recent tsunamis affecting Indonesia and Japan have prompted a scientific response to better assess unexpected tsunami hazards. Although much uncertainty exists regarding the recurrence of large-scale tsunami events in the Gulf of Mexico (GoM), geological evidence indicates that a tsunami is possible and would most likely come from a submarine landslide triggered by an earthquake. This study customizes for the GoM a first-order probabilistic landslide tsunami hazard assessment. Monte Carlo Simulation (MCS) is employed to determine landslide configurations based on distributions obtained from observational submarine mass failure (SMF) data. Our MCS approach incorporates a Cholesky decomposition method for correlated landslide size parameters to capture correlations seen in the data as well as uncertainty inherent in these events. Slope stability analyses are performed using landslide and sediment properties and regional seismic loading to determine landslide configurations which fail and produce a tsunami. The probability of each tsunamigenic failure is calculated based on the joint probability of slope failure and probability of the triggering earthquake. We are thus able to estimate sizes and return periods for probabilistic maximum credible landslide scenarios. We find that the Cholesky decomposition approach generates landslide parameter distributions that retain the trends seen in observational data, improving the statistical validity and relevancy of the MCS technique in the context of landslide tsunami hazard assessment. Estimated return periods suggest that probabilistic maximum credible SMF events in the north and northwest GoM have a recurrence of 5000-8000 years, in agreement with age dates of observed deposits.
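The Cholesky step reads as follows in code: factor the log-space correlation matrix, color independent standard normals with the factor, and map through lognormal marginals, so that the sampled landslide dimensions keep the observed correlations. The parameter values below are invented, not the GoM dataset's.

```python
import numpy as np

def correlated_landslide_samples(mean_log, sd_log, corr, n=10_000, seed=0):
    """Draw correlated lognormal landslide size parameters (e.g., length,
    width, thickness) via a Cholesky factor of the log-space correlation."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(np.asarray(corr, float))
    z = rng.standard_normal((n, len(mean_log))) @ L.T   # correlated N(0,1)
    return np.exp(np.asarray(mean_log) + z * np.asarray(sd_log))

corr = [[1.0, 0.8, 0.6],
        [0.8, 1.0, 0.7],
        [0.6, 0.7, 1.0]]
samples = correlated_landslide_samples([2.0, 1.5, 0.0], [0.5, 0.4, 0.3], corr)
print(np.corrcoef(np.log(samples), rowvar=False).round(2))  # recovers corr
```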
Spahr, Norman E.; Dubrovsky, Neil M.; Gronberg, JoAnn M.; Franke, O. Lehn; Wolock, David M.
2010-01-01
Hydrograph separation was used to determine the base-flow component of streamflow for 148 sites sampled as part of the National Water-Quality Assessment program. Sites in the Southwest and the Northwest tend to have base-flow index values greater than 0.5. Sites in the Midwest and the eastern portion of the Southern Plains generally have values less than 0.5. Base-flow index values for sites in the Southeast and Northeast are mixed with values less than and greater than 0.5. Hypothesized flow paths based on relative scaling of soil and bedrock permeability explain some of the differences found in base-flow index. Sites in areas with impermeable soils and bedrock (areas where overland flow may be the primary hydrologic flow path) tend to have lower base-flow index values than sites in areas with either permeable bedrock or permeable soils (areas where deep groundwater flow paths or shallow groundwater flow paths may occur). The percentage of nitrate load contributed by base flow was determined using total flow and base flow nitrate load models. These regression-based models were calibrated using available nitrate samples and total streamflow or base-flow nitrate samples and the base-flow component of total streamflow. Many streams in the country have a large proportion of nitrate load contributed by base flow: 40 percent of sites have more than 50 percent of the total nitrate load contributed by base flow. Sites in the Midwest and eastern portion of the Southern Plains generally have less than 50 percent of the total nitrate load contributed by base flow. Sites in the Northern Plains and Northwest have nitrate load ratios that generally are greater than 50 percent. Nitrate load ratios for sites in the Southeast and Northeast are mixed with values less than and greater than 50 percent. Significantly lower contributions of nitrate from base flow were found at sites in areas with impermeable soils and impermeable bedrock. These areas could be most responsive to nutrient management practices designed to reduce nutrient transport to streams by runoff. Conversely, sites with potential for shallow or deep groundwater contribution (some combination of permeable soils or permeable bedrock) had significantly greater contributions of nitrate from base flow. Effective nutrient management strategies would consider groundwater nitrate contributions in these areas. Mean annual base-flow nitrate concentrations were compared to shallow-groundwater nitrate concentrations for 27 sites. Concentrations in groundwater tended to be greater than base-flow concentrations for this group of sites. Sites where groundwater concentrations were much greater than base-flow concentrations were found in areas of high infiltration and oxic groundwater conditions. The lack of correspondingly high concentrations in the base flow of the paired surface-water sites may have multiple causes. In some settings, there has not been sufficient time for enough high-nitrate shallow groundwater to migrate to the nearby stream. In these cases, the stream nitrate concentrations lag behind those in the shallow groundwater, and concentrations may increase in the future as more high-nitrate groundwater reaches the stream. Alternatively, some of these sites may have processes that rapidly remove nitrate as water moves from the aquifer into the stream channel. 
Partitioning streamflow and nitrate load between the quick-flow and base-flow portions of the hydrograph, coupled with relative scales of soil permeability, can indicate the importance of surface-water compared to groundwater nitrate sources. Study of the relation of nitrate concentrations to the base-flow index, and comparison of groundwater nitrate concentrations to stream nitrate concentrations during times when the base-flow index is high, can provide evidence of potential nitrate transport mechanisms. Accounting for the surface-water and groundwater contributions of nitrate is crucial to effective management and remediation.
Merritt, D.M.; Scott, M.L.; Leroy, Poff N.; Auble, G.T.; Lytle, D.A.
2010-01-01
1. Riparian vegetation composition, structure and abundance are governed to a large degree by river flow regime and flow-mediated fluvial processes. Streamflow regime exerts selective pressures on riparian vegetation, resulting in adaptations (trait syndromes) to specific flow attributes. Widespread modification of flow regimes by humans has resulted in extensive alteration of riparian vegetation communities. Some of the negative effects of altered flow regimes on vegetation may be reversed by restoring components of the natural flow regime.
2. Models have been developed that quantitatively relate components of the flow regime to attributes of riparian vegetation at the individual, population and community levels. Predictive models range from simple statistical relationships, to more complex stochastic matrix population models and dynamic simulation models. Of the dozens of predictive models reviewed here, most treat one or a few species, have many simplifying assumptions such as stable channel form, and do not specify the time-scale of response. In many cases, these models are very effective in developing alternative streamflow management plans for specific river reaches or segments but are not directly transferable to other rivers or other regions.
3. A primary goal in riparian ecology is to develop general frameworks for prediction of vegetation response to changing environmental conditions. The development of riparian vegetation-flow response guilds offers a framework for transferring information from rivers where flow standards have been developed to maintain desirable vegetation attributes, to rivers with little or no existing information.
4. We propose to organise riparian plants into non-phylogenetic groupings of species with shared traits that are related to components of the hydrologic regime: life history, reproductive strategy, morphology, adaptations to fluvial disturbance and adaptations to water availability. Plants from any river or region may be grouped into these guilds and related to hydrologic attributes of a specific class of river using probabilistic response curves.
5. Probabilistic models based on riparian response guilds enable prediction of the likelihood of change in each of the response guilds given projected changes in flow, and facilitate examination of trade-offs and risks associated with various flow management strategies. Riparian response guilds can be decomposed to the species level for individual projects or used to develop flow management guidelines for regional water management plans. © 2009.
The flow field investigations of no load conditions in axial flow fixed-blade turbine
NASA Astrophysics Data System (ADS)
Yang, J.; Gao, L.; Wang, Z. W.; Zhou, X. Z.; Xu, H. X.
2014-03-01
During the start-up process, strong instabilities occurred at no-load operation in a low-head axial flow fixed-blade turbine, with strong pressure pulsation and vibration. The rated speed could not be reached until the guide vanes were opened to a certain extent, and stable operation could not be maintained at the rated speed under some heads, which had a negative impact on the grid-connected operation of the unit. To find the cause of this phenomenon, unsteady simulations of the flow field in the whole flow passage at no-load conditions were carried out to analyze the detailed flow characteristics, including the pressure pulsation and the force imposed on the runner, under three typical heads. The main hydraulic cause of the no-load instability is described. It is recommended that the power station reduce the no-load running time and move into high-load operation as soon as possible when connecting to the grid at the rated head. Following these recommendations, plant operation practice showed that the instability of the unit during start-up and grid connection was greatly reduced.
The effect of circumferential distortion on fan performance at two levels of blade loading
NASA Technical Reports Server (NTRS)
Hartmann, M. J.; Sanger, N. L.
1975-01-01
Single-stage fans designed for two levels of pressure ratio or blade loading were subjected to screen-induced circumferential distortions of 90-degree extent. Both fan rotors were designed for a blade tip speed of 425 m/sec, a blade solidity of 1.3 and a hub-to-tip radius ratio of 0.5. Circumferential measurements of total pressure, temperature, static pressure, and flow angle were obtained at the hub, mean and tip radii at five axial stations. Rotor loading level did not appear to have a significant influence on rotor response to distorted flow. Losses in overall pressure ratio due to distortion were most severe in the stator hub region of the more highly loaded stage. At the near-stall operating condition, the tip and hub regions of either rotor demonstrated different response characteristics to the distorted flow. No effect of loading was apparent on interactions between the rotor and the upstream distorted flow field.
Effect of load transients on SOFC operation—current reversal on loss of load
NASA Astrophysics Data System (ADS)
Gemmen, Randall S.; Johnson, Christopher D.
The dynamics of solid oxide fuel cell (SOFC) operation have been considered previously, but mainly through the use of one-dimensional codes applied to co-flow fuel cell systems. In this paper several geometries are considered, including cross-flow, co-flow, and counter-flow. The details of the model are provided, and the model is compared with some initial experimental data. For parameters typical of SOFC operation, a variety of transient cases are investigated, including representative load increase and decrease and system shutdown. Of particular note for large load decrease conditions (e.g., shutdown) is the occurrence of reverse current over significant portions of the cell, starting from the moment of load loss up to the point where equilibrated conditions again provide positive current. Consideration is given as to when such reverse current conditions might most significantly impact the reliability of the cell.
Methods of computing steady-state voltage stability margins of power systems
Chow, Joe Hong; Ghiocel, Scott Gordon
2018-03-20
In steady-state voltage stability analysis, as load increases toward a maximum, the conventional Newton-Raphson power flow Jacobian matrix becomes increasingly ill-conditioned, so the power flow fails to converge before reaching the maximum loading. A method to directly eliminate this singularity reformulates the power flow problem by introducing an AQ bus, for which the bus angle and the reactive power consumption of a load bus are specified. For steady-state voltage stability analysis, the angle separation between the swing bus and the AQ bus can be varied to control the power transfer to the load, rather than specifying the load power itself. For an AQ bus, the power flow formulation consists of only a reactive power equation, thus reducing the size of the Jacobian matrix by one. This reduced Jacobian matrix is nonsingular at the critical voltage point, eliminating a major difficulty in voltage stability analysis for power system operations.
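[Editor's note] The core idea, sweeping the angle separation instead of the load power, can be illustrated on a hypothetical two-bus system with a purely reactive line. In this toy case the single reactive-power equation of the AQ bus reduces to a quadratic solvable in closed form (the patented method solves it by Newton iteration on general networks); all per-unit values below are assumptions, not the paper's data.

```python
import numpy as np

# Swing bus V1 at angle 0; AQ (load) bus with fixed reactive demand Q;
# line reactance X.  Sweep the angle separation delta and recover the
# load voltage V2 and transferred power P, tracing the PV "nose" curve.
V1, X, Q = 1.0, 0.1, 0.2

for delta in np.radians(np.arange(5, 90, 5)):
    # Reactive balance at the AQ bus: Q = (V1*V2*cos(delta) - V2**2)/X
    # is quadratic in V2; the upper root is the stable solution branch.
    b = -V1 * np.cos(delta)
    disc = b * b - 4.0 * Q * X
    if disc < 0:
        break                                # past the nose of the PV curve
    V2 = (-b + np.sqrt(disc)) / 2.0
    P = V1 * V2 * np.sin(delta) / X
    print(f"delta = {np.degrees(delta):5.1f} deg  V2 = {V2:.3f}  P = {P:.3f}")
```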
A probabilistic bridge safety evaluation against floods.
Liao, Kuo-Wei; Muto, Yasunori; Chen, Wei-Lun; Wu, Bang-Ho
2016-01-01
To further capture the influences of uncertain factors on river bridge safety evaluation, a probabilistic approach is adopted. Because this is a system-level, nonlinear problem, most-probable-point (MPP) based reliability analyses are not suitable. A sampling approach such as Monte Carlo simulation (MCS) or importance sampling is often adopted instead. To enhance the efficiency of the sampling approach, this study utilizes Bayesian least squares support vector machines to construct a response surface, followed by an MCS, providing a more precise safety index. Although there are several factors impacting the flood-resistant reliability of a bridge, previous experience and studies show that the reliability of the bridge itself plays a key role. Thus, the goal of this study is to analyze the system reliability of a selected bridge that includes five limit states. The random variables considered here include the water surface elevation, water velocity, local scour depth, soil properties and wind load. Because the first three variables are deeply affected by river hydraulics, a probabilistic HEC-RAS-based simulation is performed to capture the uncertainties in those random variables. The accuracy and variation of our solutions are confirmed by a direct MCS to ensure the applicability of the proposed approach. The results of a numerical example indicate that the proposed approach can efficiently provide an accurate bridge safety evaluation and maintain satisfactory variation.
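[Editor's note] A generic sketch of the direct MCS step that the response surface is meant to accelerate: sample the random variables, evaluate a limit state, and estimate the failure probability. The single scour limit state and the distributions below are hypothetical stand-ins for the paper's five-limit-state bridge system.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

scour = rng.lognormal(mean=1.0, sigma=0.35, size=n)   # local scour depth, m
embed = rng.normal(loc=5.0, scale=0.5, size=n)        # foundation embedment, m

pf = np.mean(scour >= embed)                          # P(limit state violated)
cov = np.sqrt((1 - pf) / (n * pf))                    # sampling uncertainty of pf
print(f"Pf = {pf:.4f}  (c.o.v. of estimate = {cov:.2f})")
```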
Application of the mobility power flow approach to structural response from distributed loading
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1988-01-01
The problem of the vibration power flow through coupled substructures when one of the substructures is subjected to a distributed load is addressed. In all the work performed thus far, point force excitation was considered. However, in the case of the excitation of an aircraft fuselage, distributed loading on the whole surface of a panel can be as important as the excitation from directly applied forces at defined locations on the structures. Thus using a mobility power flow approach, expressions are developed for the transmission of vibrational power between two coupled plate substructures in an L configuration, with one of the surfaces of one of the plate substructures being subjected to a distributed load. The types of distributed loads that are considered are a force load with an arbitrary function in space and a distributed load similar to that from acoustic excitation.
Seismic Hazard analysis of Adjaria Region in Georgia
NASA Astrophysics Data System (ADS)
Jorjiashvili, Nato; Elashvili, Mikheil
2014-05-01
The most commonly used approach to determining seismic-design loads for engineering projects is probabilistic seismic-hazard analysis (PSHA). The primary output from a PSHA is a hazard curve showing the variation of a selected ground-motion parameter, such as peak ground acceleration (PGA) or spectral acceleration (SA), against the annual frequency of exceedance (or its reciprocal, the return period). The design value is the ground-motion level that corresponds to a preselected design return period. For many engineering projects, such as standard buildings and typical bridges, the seismic loading is taken from the appropriate seismic-design code, the basis of which is usually a PSHA. For more important engineering projects where the consequences of failure are more serious, such as dams and chemical plants, it is more usual to obtain the seismic-design loads from a site-specific PSHA, in general using much longer return periods than those governing code-based design. Calculation of probabilistic seismic hazard was performed using the software CRISIS2007 by Ordaz, M., Aguilar, A., and Arboleda, J., Instituto de Ingeniería, UNAM, Mexico. CRISIS implements a classical probabilistic seismic hazard methodology where seismic sources can be modelled as points, lines and areas. In the case of area sources, the software offers an integration procedure that takes advantage of a triangulation algorithm used for seismic source discretization. This solution improves calculation efficiency while maintaining a reliable description of source geometry and seismicity. Additionally, supplementary filters (e.g., a fixed site-source distance that excludes from the calculation sources at great distance) allow the program to balance precision and efficiency during hazard calculation. Earthquake temporal occurrence is assumed to follow a Poisson process, and the code supports two types of magnitude-frequency distributions (MFDs): a truncated exponential Gutenberg-Richter [1944] magnitude distribution and a characteristic magnitude distribution [Youngs and Coppersmith, 1985]. Notably, the software can deal with uncertainty in the seismicity input parameters, such as the maximum magnitude value. CRISIS offers a set of built-in GMPEs, as well as the possibility of defining new ones by providing information in a tabular format. Our study shows that in the case of the Ajaristkali HPP study area, a significant contribution to seismic hazard comes from local sources with quite low Mmax values; thus these two attenuation laws give quite different PGA and SA values.
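[Editor's note] The Poisson occurrence assumption mentioned above links the annual exceedance rate on the hazard curve to the probability of exceedance over a design life: P = 1 - exp(-lambda * t). A small worked example with the familiar "10% in 50 years" design level:

```python
import numpy as np

def prob_exceedance(return_period_yr, exposure_yr):
    """Probability of at least one exceedance in an exposure window,
    under the Poisson assumption used in classical PSHA."""
    lam = 1.0 / return_period_yr            # annual rate of exceedance
    return 1.0 - np.exp(-lam * exposure_yr)

print(f"{prob_exceedance(475, 50):.3f}")    # ~0.100, i.e. 10% in 50 years
```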
Estimation of particulate nutrient load using turbidity meter.
Yamamoto, K; Suetsugi, T
2006-01-01
The "Nutrient Load Hysteresis Coefficient" was proposed to evaluate the hysteresis of the nutrient loads to flow rate quantitatively. This could classify the runoff patterns of nutrient load into 15 patterns. Linear relationships between the turbidity and the concentrations of particulate nutrients were observed. It was clarified that the linearity was caused by the influence of the particle size on turbidity output and accumulation of nutrients on smaller particles (diameter < 23 microm). The L-Q-Turb method, which is a new method for the estimation of runoff loads of nutrients using a regression curve between the turbidity and the concentrations of particulate nutrients, was developed. This method could raise the precision of the estimation of nutrient loads even if they had strong hysteresis to flow rate. For example, as for the runoff load of total phosphorus load on flood events in a total of eight cases, the averaged error of estimation of total phosphorus load by the L-Q-Turb method was 11%, whereas the averaged estimation error by the regression curve between flow rate and nutrient load was 28%.
Caudek, Corrado; Fantoni, Carlo; Domini, Fulvio
2011-01-01
We measured perceived depth from the optic flow (a) when showing a stationary physical or virtual object to observers who moved their head at a normal or slower speed, and (b) when simulating the same optic flow on a computer and presenting it to stationary observers. Our results show that perceived surface slant is systematically distorted, for both the active and the passive viewing of physical or virtual surfaces. These distortions are modulated by head translation speed, with perceived slant increasing directly with the local velocity gradient of the optic flow. This empirical result allows us to determine the relative merits of two alternative approaches aimed at explaining perceived surface slant in active vision: an “inverse optics” model that takes head motion information into account, and a probabilistic model that ignores extra-retinal signals. We compare these two approaches within the framework of the Bayesian theory. The “inverse optics” Bayesian model produces veridical slant estimates if the optic flow and the head translation velocity are measured with no error; because of the influence of a “prior” for flatness, the slant estimates become systematically biased as the measurement errors increase. The Bayesian model, which ignores the observer's motion, always produces distorted estimates of surface slant. Interestingly, the predictions of this second model, not those of the first one, are consistent with our empirical findings. The present results suggest that (a) in active vision perceived surface slant may be the product of probabilistic processes which do not guarantee the correct solution, and (b) extra-retinal signals may be mainly used for a better measurement of retinal information. PMID:21533197
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balkey, K.; Witt, F.J.; Bishop, B.A.
1995-06-01
Significant attention has been focused on the issue of reactor vessel pressurized thermal shock (PTS) for many years. Pressurized thermal shock transient events are characterized by a rapid cooldown at potentially high pressure levels that could lead to a reactor vessel integrity concern for some pressurized water reactors. As a result of regulatory and industry efforts in the early 1980s, a probabilistic risk assessment methodology has been established to address this concern. Probabilistic fracture mechanics analyses are performed as part of this methodology to determine the conditional probability of significant flaw extension for given pressurized thermal shock events. While recent industry efforts are underway to benchmark probabilistic fracture mechanics computer codes that are currently used by the nuclear industry, Part I of this report describes the comparison of two independent computer codes used at the time of the development of the original U.S. Nuclear Regulatory Commission (NRC) pressurized thermal shock rule. The work that was originally performed in 1982 and 1983 to compare the U.S. NRC VISA and Westinghouse (W) PFM computer codes has been documented and is provided in Part I of this report. Part II of this report describes the results of more recent industry efforts to benchmark PFM computer codes used by the nuclear industry. This study was conducted as part of the USNRC-EPRI Coordinated Research Program for reviewing the technical basis for pressurized thermal shock (PTS) analyses of the reactor pressure vessel. The work focused on the probabilistic fracture mechanics (PFM) analysis codes and methods used to perform the PTS calculations. An in-depth review of the methodologies was performed to verify the accuracy and adequacy of the various codes. The review was structured around a series of benchmark sample problems to provide a specific context for discussion and examination of the fracture mechanics methodology.
Software defined network architecture based research on load balancing strategy
NASA Astrophysics Data System (ADS)
You, Xiaoqian; Wu, Yang
2018-05-01
As a new type of network architecture, a software-defined network (SDN) has the key idea of separating the network's control plane from the transmission (data) plane, so that the network can be managed and controlled centrally; in addition, programmable interfaces are opened on the control layer and the data layer, enabling programmable control of the network. Considering that traditional network data flow transmission takes only the single shortest route into account, ignoring the congestion and resource consumption caused by excessive link load, a link-load-based QoS guarantee system for streaming-media traffic is proposed in this article, dividing network traffic into ordinary data flows and QoS flows. The controller monitors link loads so as to rapidly calculate a reasonable route and issue the corresponding flow table entries to the switches, achieving fast data transmission. In addition, a simulation platform is established, and simulation experiments confirm the optimized result.
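[Editor's note] The controller-side routing step described above amounts to a shortest-path search in which link weights are current loads rather than hop counts. A minimal sketch with a hypothetical topology and load values (not the article's implementation):

```python
import heapq

# node -> list of (neighbor, current link load reported to the controller)
graph = {
    "s": [("a", 0.2), ("b", 0.7)],
    "a": [("t", 0.3)],
    "b": [("t", 0.1)],
    "t": [],
}

def least_loaded_path(src, dst):
    """Dijkstra over cumulative link load."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, load in graph[node]:
            if nxt not in seen:
                heapq.heappush(pq, (cost + load, nxt, path + [nxt]))
    return None

print(least_loaded_path("s", "t"))   # (0.5, ['s', 'a', 't'])
```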
NASA Astrophysics Data System (ADS)
Jawitz, J. W.
2011-12-01
What are the relative contributions of climatic variability, land management, and local geomorphology in determining the temporal dynamics of streamflow and the export of solutes from watersheds to receiving water bodies? A simple analytical framework is introduced for characterizing the temporal inequality of stream discharge and solute export from catchments using Lorenz diagrams and the associated Gini coefficient. These descriptors are used to illustrate a broad range of observed flow variability with a synthesis of multi-decadal flow data from 22 rivers in Florida. The analytical framework is extended to comprehensively link variability in flows and loads to climatically-driven inputs in terms of these inequality-based metrics. Further, based on a synthesis of data from the basins of the Baltic Sea, the Mississippi River, the Kissimmee River and other tributaries to Lake Okeechobee, FL, it is shown that inter-annual variations in exported loads for geogenic constituents, and for total N and total P, are dominantly controlled by discharge. Emergence of this consistent pattern across diverse managed catchments is attributed to the anthropogenic legacy of accumulated nutrient sources generating memory, similar to ubiquitously present sources for geogenic constituents. Multi-decadal phosphorus load data from 4 of the primary tributaries to Lake Okeechobee and sodium and nitrate load data from 9 of the Hubbard Brook, NH long-term study site catchments are used to examine the relation between inequality of climatic inputs, river flows and catchment loads. The intra-annual loads to Lake Okeechobee are shown to be highly unequal, such that 90% of annual load is delivered in as little as 15% of the time. Analytic expressions are developed for measures of inequality in terms of parameters of the lognormal distribution under general conditions that include intermittency. In cases where climatic variability is high compared to that of concentrations (chemostatic conditions), such as for P in the Lake Okeechobee basin and Na in Hubbard Brook, the temporal inequality of rainfall and flow are strong surrogates for load inequality. However, in cases where variability of concentrations is high compared to that of flows (chemodynamic conditions), such as for nitrate in the Hubbard Brook catchments, load inequality is greater than rainfall or flow inequality. The measured degree of correspondence between climatic, flow, and load inequality for these data sets is shown to be well described using the general inequality framework introduced here. Important implications are that (1) variations in hydro-climatic or anthropogenic forcing can be used to robustly predict inter-annual variations in flows and loads, (2) water quality problems in receiving inland and coastal waters may persist until the accumulated storages of nutrients have been substantially depleted, and (3) remedial measures designed to intercept or capture exported flows and loads must be designed with consideration of the intra-annual inequality.
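[Editor's note] The Gini coefficient of temporal inequality used above is easily computed from a sorted flow (or load) record via the Lorenz curve. A minimal sketch with a synthetic lognormal flow series (hypothetical, not the Florida data):

```python
import numpy as np

def gini(x):
    """Gini coefficient: 0 = perfectly uniform in time,
    1 = all discharge delivered in a single instant."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)                      # Lorenz-curve ordinates (unscaled)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

flows = np.random.default_rng(1).lognormal(0.0, 1.2, size=365)
print(f"flow Gini = {gini(flows):.2f}")
```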
Carbon dioxide fluid-flow modeling and injectivity calculations
Burke, Lauri
2011-01-01
These results were used to classify subsurface formations into three permeability classifications for the probabilistic calculations of storage efficiency and containment risk of the U.S. Geological Survey geologic carbon sequestration assessment methodology. This methodology is currently in use to determine the total carbon dioxide containment capacity of the onshore and State waters areas of the United States.
A probabilistic approach to modeling postfire erosion after the 2009 Australian bushfires
P. R. Robichaud; W. J. Elliot; F. B. Pierson; D. E. Hall; C. A. Moffet
2009-01-01
Major concerns after bushfires and wildfires include increased flooding, erosion and debris flows due to loss of the protective forest floor layer, loss of water storage, and creation of water repellent soil conditions. To assist postfire assessment teams in their efforts to evaluate fire effects and make postfire treatment decisions, a web-based Erosion Risk...
Effects of residence time on summer nitrate uptake in Mississippi River flow-regulated backwaters
James, W.F.; Richardson, W.B.; Soballe, D.M.
2008-01-01
Nitrate uptake may be improved in regulated floodplain rivers by increasing hydrological connectivity to backwaters. We examined summer nitrate uptake in a series of morphologically similar backwaters on the Upper Mississippi River receiving flow-regulated nitrate loads via gated culverts. Flows into individual backwaters were held constant over a summer period but varied in the summers of 2003 and 2004 to provide a range of hydraulic loads and residence times (τ). The objectives were to determine the optimum loading and τ for maximum summer uptake. Higher flow adjustment led to increased loading but lower τ and contact time for uptake. For the highest flows, τ was less than 1 day, resulting in lower uptake rates (Unet) and longer uptake lengths (Snet > 4000 m). For low flows, τ was greater than 5 days and U% approached 100%, but Unet was 200 mg m-2 day-1. Snet was < half the length of the backwaters under these conditions, indicating that most of the load was assimilated in the upper reaches, leading to limited delivery to lower portions. Unet was maximal (384-629 mg m-2 day-1) for intermediate flows and τ ranging between 1 and 1.5 days. Longer Snet (2000-4000 m) and lower U% (20-40%) reflected limitation of uptake in upper reaches by contact time, leading to transport to lower reaches for additional uptake. Uptake by ≈10 000 ha of reconnected backwaters along the Upper Mississippi River (13% of the total backwater surface area) at a Unet of ≈630 mg m-2 day-1 would be the equivalent of ≈40% of the summer nitrate load (155 Mg day-1) discharged from Lock and Dam 4. These results indicate that backwater nitrate uptake can play an important role in reducing nitrate loading to the Gulf of Mexico. Copyright © 2008 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Komar, P. D.
1980-01-01
The paper discusses the application to Martian water flows of the criteria that determine which grain-size ranges are transported as bed load, suspended load, and wash load. The results show that nearly all sand-sized and finer material would have been transported as wash load, and that basalt pebbles and even cobbles could have been transported in suspension at rapid rates. An analysis of the threshold of sediment motion on Mars further indicates that the flows would have been highly competent, the larger flows having been able to transport boulder-sized material. Comparisons with terrestrial rivers that transport hyperconcentrated levels of sediment suggest that the Martian water flows could have achieved sediment concentrations up to 70% by weight. Although it is possible that the flows could have picked up enough sediment to convert to pseudolaminar mud flows, they probably remained at hyperconcentration levels and fully turbulent in flow character.
Probabilistic structural analysis by extremum methods
NASA Technical Reports Server (NTRS)
Nafday, Avinash M.
1990-01-01
The objective is to demonstrate discrete extremum methods of structural analysis as a tool for structural system reliability evaluation. Specifically, linear and multiobjective linear programming models for analysis of rigid plastic frames under proportional and multiparametric loadings, respectively, are considered. Kinematic and static approaches for analysis form a primal-dual pair in each of these models and have a polyhedral format. Duality relations link extreme points and hyperplanes of these polyhedra and lead naturally to dual methods for system reliability evaluation.
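[Editor's note] The kinematic/static LP pair described above can be illustrated with a textbook static (lower-bound) limit analysis: maximize the load factor subject to equilibrium and plastic-moment bounds. The fixed-end beam below is a hypothetical example, not one of the paper's frame models; the exact collapse factor is 8*Mp/(P*L).

```python
from scipy.optimize import linprog

# Beam fixed at both ends, central point load lam*P, plastic capacity Mp.
# Variables x = [lam, M1, M2, M3]: load factor, end and mid-span moments.
P, L, Mp = 10.0, 4.0, 20.0

# Equilibrium: M2 - (M1 + M3)/2 = lam * P * L / 4
A_eq = [[-P * L / 4.0, -0.5, 1.0, -0.5]]
b_eq = [0.0]
bounds = [(0, None), (-Mp, Mp), (-Mp, Mp), (-Mp, Mp)]   # yield conditions

res = linprog(c=[-1.0, 0.0, 0.0, 0.0],                  # maximize lam
              A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(f"collapse load factor = {res.x[0]:.3f}")         # 8*Mp/(P*L) = 4.000
```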
A network flow model for load balancing in circuit-switched multicomputers
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1990-01-01
In multicomputers that utilize circuit switching or wormhole routing, communication overhead depends largely on link contention - the variation due to distance between nodes is negligible. This has a major impact on the load balancing problem. In this case, there are some nodes with excess load (sources) and others with deficit load (sinks) and it is required to find a matching of sources to sinks that avoids contention. The problem is made complex by the hardwired routing on currently available machines: the user can control only which nodes communicate but not how the messages are routed. Network flow models of message flow in the mesh and the hypercube were developed to solve this problem. The crucial property of these models is the correspondence between minimum cost flows and correctly routed messages. To solve a given load balancing problem, a minimum cost flow algorithm is applied to the network. This permits one to determine efficiently a maximum contention free matching of sources to sinks which, in turn, tells one how much of the given imbalance can be eliminated without contention.
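[Editor's note] The source-sink matching can be posed directly as a min-cost flow with unit link capacities, which encodes the contention-free requirement that no link carries two migrations. A toy sketch using networkx on a hypothetical four-node fragment (not the paper's mesh or hypercube construction):

```python
import networkx as nx

G = nx.DiGraph()
# s1, s2 each hold one excess task (supply); t1, t2 each lack one (deficit).
links = [("s1", "a"), ("s2", "a"), ("a", "t1"), ("a", "t2"),
         ("s1", "t1"), ("s2", "t2")]
for u, v in links:
    G.add_edge(u, v, capacity=1, weight=1)   # capacity 1 = no link sharing

for node, d in {"s1": -1, "s2": -1, "t1": 1, "t2": 1}.items():
    G.nodes[node]["demand"] = d              # negative demand = source

flow = nx.min_cost_flow(G)                   # raises NetworkXUnfeasible if no
print(flow)                                  # contention-free matching exists
```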
Parallel processing methods for space based power systems
NASA Technical Reports Server (NTRS)
Berry, F. C.
1993-01-01
This report presents a method for performing load-flow analysis of a power system by using a decomposition approach. The power system of the Space Shuttle is used as a basis to build a model for the load-flow analysis. To test the decomposition method, simulations were performed on power systems of 16, 25, 34, 43, 52, 61, 70, and 79 nodes. Each of the power systems was divided into subsystems and simulated under steady-state conditions. The results from these tests have been found to be as accurate as tests performed using a standard serial simulator. The division of the power systems into different subsystems was done by assigning a processor to each area. There were 13 transputers available; therefore, up to 13 different subsystems could be simulated at the same time. This report gives preliminary results for a load-flow analysis using a decomposition principle. The report shows that the decomposition algorithm for load-flow analysis is well suited for parallel processing and provides increases in the speed of execution.
Flow resistance under conditions of intense gravel transport
Pitlick, John
1992-01-01
A study of flow resistance was undertaken in a channelized reach of the North Fork Toutle River, downstream of Mount St. Helens, Washington. Hydraulic and sediment transport data were collected in flows with velocities up to 3 m/s and shear stresses up to 7 times the critical value needed for bed load transport. Details of the flow structure as revealed in vertical velocity profiles indicate that weak bed load transport over a plane gravel bed has little effect on flow resistance. The plane gravel bed persists up to stresses ∼3 times critical, at which point, irregular bed forms appear. Bed forms greatly increase flow resistance and cause velocity profiles to become distorted. The latter arises as an effect of flows becoming depth-limited as bed form amplitude increases. At very high rates of bed load transport, an upper stage plane bed appeared. Velocity profiles measured in these flows match the law of the wall closely, with the equivalent roughness being well represented by ks = 3D84 of the bed load. The effects noted here will be important in very large floods or in rivers that are not free to widen, such as those cut into bedrock.
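[Editor's note] The law-of-the-wall fit reported for the upper-stage plane bed, with equivalent roughness ks = 3*D84 of the bed load, can be checked with a short worked example; the shear velocity and grain size below are hypothetical values in the range the study describes.

```python
import numpy as np

# u(z) = (u*/kappa) * ln(z / z0), with z0 = ks / 30 and ks = 3 * D84.
kappa = 0.41
u_star = 0.20                 # shear velocity, m/s
D84 = 0.04                    # 84th-percentile grain size of bed load, m
z0 = 3.0 * D84 / 30.0         # hydraulic roughness length, m

for z in (0.05, 0.1, 0.2, 0.4):                  # heights above the bed, m
    u = (u_star / kappa) * np.log(z / z0)
    print(f"z = {z:4.2f} m  u = {u:4.2f} m/s")   # ~2.2 m/s at z = 0.4 m
```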
Novel Method for Loading Microporous Ceramics Bone Grafts by Using a Directional Flow
Seidenstuecker, Michael; Kissling, Steffen; Ruehe, Juergen; Suedkamp, Norbert P.; Mayr, Hermann O.; Bernstein, Anke
2015-01-01
The aim of this study was the development of a process for filling the pores of a β-tricalcium phosphate ceramic with interconnected porosity with an alginate hydrogel. For filling of the ceramics, solutions of alginate hydrogel precursors with suitable viscosity were chosen as determined by rheometry. For loading of the porous ceramics with the gel, the samples were placed in the flow chamber and sealed with silicone seals. By using a vacuum-induced directional flow, the samples were loaded with alginate solutions. The loading success was controlled by ESEM and fluorescence imaging using a fluorescent dye (FITC) for staining of the gel. After loading of the pores, the alginate was transformed into a hydrogel through crosslinking with CaCl2 solution. The biocompatibility of the obtained composite material was tested with live-dead cell staining using MG-63 cells. The loading procedure via vacuum-assisted directional flow allowed complete filling of the pores of the ceramics within a few minutes (10 ± 3 min), while loading through simple immersion into the polymer solution or through a conventional vacuum method gave only incomplete filling. PMID:26703749
NASA Astrophysics Data System (ADS)
Harrington, Seán T.; Harrington, Joseph R.
2013-03-01
This paper presents an assessment of the suspended sediment rating curve approach for load estimation on the Rivers Bandon and Owenabue in Ireland. The rivers, located in the South of Ireland, are underlain by sandstones, limestones and mudstones, and the catchments are primarily agricultural. A comprehensive database of suspended sediment data is not available for rivers in Ireland. For such situations, it is common to estimate suspended sediment concentrations from the flow rate using the suspended sediment rating curve approach. These rating curves are most commonly constructed by applying linear regression to the logarithms of flow and suspended sediment concentration or by applying a power curve to normal data. Both methods are assessed in this paper for the Rivers Bandon and Owenabue. Turbidity-based suspended sediment loads are presented for each river based on continuous (15 min) flow data, and the use of turbidity as a surrogate for suspended sediment concentration is investigated. A database of paired flow rate and suspended sediment concentration values, collected between the years 2004 and 2011, is used to generate rating curves for each river. From these, suspended sediment load estimates using the rating curve approach are estimated and compared to the turbidity-based loads for each river. Loads are also estimated using stage- and seasonally separated rating curves and daily flow data, for comparison purposes. The most accurate load estimate on the River Bandon is found using a stage-separated power curve, while the most accurate load estimate on the River Owenabue is found using a general power curve. Maximum full monthly errors of -76% to +63% are found on the River Bandon, with errors of -65% to +359% found on the River Owenabue. The average monthly error on the River Bandon is -12%, with an average error of +87% on the River Owenabue. The use of daily flow data in the load estimation process does not result in a significant loss of accuracy on either river. Historic load estimates (with a 95% confidence interval) were hindcast from the flow record, and average annual loads of 7253 ± 673 tonnes on the River Bandon and 1935 ± 325 tonnes on the River Owenabue were estimated to be passing the gauging stations.
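[Editor's note] A sketch of the log-log rating-curve variant assessed above, including the standard back-transformation bias correction (without it, log-space regression systematically underestimates loads). The paired samples and the continuous flow record below are hypothetical.

```python
import numpy as np

Q = np.array([1.2, 2.5, 4.8, 9.1, 15.3, 22.7, 40.2])    # sampled flows, m3/s
C = np.array([8.0, 15.0, 31.0, 52.0, 95.0, 140.0, 260.0])  # paired SSC, mg/L

b, a = np.polyfit(np.log(Q), np.log(C), 1)              # ln C = a + b ln Q
resid = np.log(C) - (a + b * np.log(Q))
cf = np.exp(resid.var(ddof=2) / 2.0)                    # bias correction factor

def ssc(q):
    return cf * np.exp(a) * q ** b                      # mg/L = g/m3

q15min = np.array([3.0, 7.5, 18.0, 12.0, 5.0])          # continuous flow record
load_t = np.sum(ssc(q15min) * q15min * 900.0) / 1e6     # g -> tonnes, 900 s steps
print(f"load over record ~ {load_t:.2f} tonnes")
```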
Fatigue loading history reconstruction based on the rain-flow technique
NASA Technical Reports Server (NTRS)
Khosrovaneh, A. K.; Dowling, N. E.
1989-01-01
Methods are considered for reducing a non-random fatigue loading history to a concise description and then for reconstructing a time history similar to the original. In particular, three methods of reconstruction based on a rain-flow cycle counting matrix are presented. A rain-flow matrix consists of the numbers of cycles at various peak and valley combinations. Two methods are based on a two dimensional rain-flow matrix, and the third on a three dimensional rain-flow matrix. Histories reconstructed by any of these methods produce a rain-flow matrix identical to that of the original history, and as a result the resulting time history is expected to produce a fatigue life similar to that for the original. The procedures described allow lengthy loading histories to be stored in compact form.
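[Editor's note] A sketch of building the kind of two-dimensional rain-flow matrix these reconstruction methods start from. It assumes the third-party `rainflow` package (pip install rainflow) and its extract_cycles() generator, which yields (range, mean, count, i_start, i_end) tuples; the short load history and the coarse integer binning are hypothetical choices for the demo.

```python
import numpy as np
import rainflow   # third-party ASTM rain-flow counting package (assumed)

history = [0, 5, -2, 4, -1, 6, -4, 2, 0]        # hypothetical peak/valley series

matrix = np.zeros((8, 8))                        # rows: range bin, cols: mean bin
for rng, mean, count, _, _ in rainflow.extract_cycles(history):
    i = min(int(rng), 7)                         # crude integer binning
    j = min(int(mean) + 4, 7)                    # shift means to non-negative bins
    matrix[i, j] += count                        # half cycles contribute 0.5
print(matrix)
```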
40 CFR 92.107 - Fuel flow measurement.
Code of Federal Regulations, 2010-2014 editions
2014-07-01
Excerpt: "... (iii) If the mass of fuel consumed is measured electronically (load cell, load beam, etc.), the error ..." (Title 40, Protection of Environment; Control of Air Pollution from Locomotives and Locomotive Engines; Test Procedures, § 92.107 Fuel flow measurement.)
Dynamic Probabilistic Modeling of Environmental Emissions of Engineered Nanomaterials.
Sun, Tian Yin; Bornhöft, Nikolaus A; Hungerbühler, Konrad; Nowack, Bernd
2016-05-03
The need for an environmental risk assessment for engineered nanomaterials (ENM) necessitates the knowledge about their environmental concentrations. Despite significant advances in analytical methods, it is still not possible to measure the concentrations of ENM in natural systems. Material flow and environmental fate models have been used to provide predicted environmental concentrations. However, almost all current models are static and consider neither the rapid development of ENM production nor the fact that many ENM are entering an in-use stock and are released with a lag phase. Here we use dynamic probabilistic material flow modeling to predict the flows of four ENM (nano-TiO2, nano-ZnO, nano-Ag and CNT) to the environment and to quantify their amounts in (temporary) sinks such as the in-use stock and ("final") environmental sinks such as soil and sediment. Caused by the increase in production, the concentrations of all ENM in all compartments are increasing. Nano-TiO2 had far higher concentrations than the other three ENM. Sediment showed in our worst-case scenario concentrations ranging from 6.7 μg/kg (CNT) to about 40 000 μg/kg (nano-TiO2). In most cases the concentrations in waste incineration residues are at the "mg/kg" level. The flows to the environment that we provide will constitute the most accurate and reliable input of masses for environmental fate models which are using process-based descriptions of the fate and behavior of ENM in natural systems and rely on accurate mass input parameters.
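[Editor's note] The dynamic element that distinguishes this model from static material flow analyses, growing production feeding an in-use stock whose releases lag, can be sketched in a few lines of Monte Carlo. All parameter ranges below are hypothetical, not the paper's calibrated inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2005, 2021)
n_runs = 10_000

release_to_env = np.zeros((n_runs, years.size))
for r in range(n_runs):
    prod0 = rng.uniform(50, 150)           # production in first year, t/yr
    growth = rng.uniform(1.05, 1.25)       # annual production growth factor
    life = rng.uniform(3, 12)              # mean in-use lifetime, years
    stock = 0.0
    for i in range(years.size):
        prod = prod0 * growth ** i
        stock += 0.8 * prod                # 80% enters the in-use stock
        out = stock / life                 # lagged release from the stock
        stock -= out
        release_to_env[r, i] = out + 0.2 * prod   # direct + delayed releases

lo, med, hi = np.percentile(release_to_env[:, -1], [5, 50, 95])
print(f"final-year release: {lo:.0f} / {med:.0f} / {hi:.0f} t/yr (5/50/95%)")
```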
NASA Astrophysics Data System (ADS)
Lazzaro, G.; Soulsby, C.; Tetzlaff, D.; Botter, G.
2017-03-01
Atlantic salmon is an economically and ecologically important fish species, whose survival is dependent on successful spawning in headwater rivers. Streamflow dynamics often have a strong control on spawning because fish require sufficiently high discharges to move upriver and enter spawning streams. However, these streamflow effects are modulated by biological factors such as the number and the timing of returning fish in relation to the annual spawning window in the fall/winter. In this paper, we develop and apply a novel probabilistic approach to quantify these interactions using a parsimonious outflux-influx model linking the number of female salmon emigrating (i.e., outflux) and returning (i.e., influx) to a spawning stream in Scotland. The model explicitly accounts for the interannual variability of the hydrologic regime and the hydrological connectivity of spawning streams to main rivers. Model results are evaluated against a detailed long-term (40 years) hydroecological data set that includes annual fluxes of salmon, allowing us to explicitly assess the role of discharge variability. The satisfactory model results show quantitatively that hydrologic variability contributes to the observed dynamics of salmon returns, with a good correlation between the positive (negative) peaks in the immigration data set and the exceedance (nonexceedance) probability of a threshold flow (0.3 m3/s). Importantly, model performance deteriorates when the interannual variability of flow regime is disregarded. The analysis suggests that flow thresholds and hydrological connectivity for spawning return represent a quantifiable and predictable feature of salmon rivers, which may be helpful in decision making where flow regimes are altered by water abstractions.
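[Editor's note] The threshold statistic highlighted above, how often discharge exceeds the 0.3 m3/s connectivity threshold during the spawning window, is straightforward to compute from a daily flow series. A sketch with synthetic flows (the real analysis uses the 40-year record):

```python
import numpy as np

rng = np.random.default_rng(7)
threshold = 0.3                                        # m3/s, from the paper

for year in range(2010, 2014):
    q = rng.lognormal(mean=-1.5, sigma=0.8, size=90)   # daily flows, fall window
    p_exceed = np.mean(q > threshold)                  # exceedance probability
    print(f"{year}: P(Q > {threshold}) = {p_exceed:.2f}")
```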
NASA Astrophysics Data System (ADS)
Miller, Matthew P.; Tesoriero, Anthony J.; Hood, Krista; Terziotti, Silvia; Wolock, David M.
2017-12-01
The myriad hydrologic and biogeochemical processes taking place in watersheds occurring across space and time are integrated and reflected in the quantity and quality of water in streams and rivers. Collection of high-frequency water quality data with sensors in surface waters provides new opportunities to disentangle these processes and quantify sources and transport of water and solutes in the coupled groundwater-surface water system. A new approach for separating the streamflow hydrograph into three components was developed and coupled with high-frequency nitrate data to estimate time-variable nitrate loads from chemically dilute quick flow, chemically concentrated quick flow, and slowflow groundwater end-member pathways for periods of up to 2 years in a groundwater-dominated and a quick-flow-dominated stream in central Wisconsin, using only streamflow and in-stream water quality data. The dilute and concentrated quick flow end-members were distinguished using high-frequency specific conductance data. Results indicate that dilute quick flow contributed less than 5% of the nitrate load at both sites, whereas 89 ± 8% of the nitrate load at the groundwater-dominated stream was from slowflow groundwater, and 84 ± 25% of the nitrate load at the quick-flow-dominated stream was from concentrated quick flow. Concentrated quick flow nitrate concentrations varied seasonally at both sites, with peak concentrations in the winter that were 2-3 times greater than minimum concentrations during the growing season. Application of this approach provides an opportunity to assess stream vulnerability to nonpoint source nitrate loading and expected stream responses to current or changing conditions and practices in watersheds.
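[Editor's note] The step that distinguishes dilute from concentrated quick flow rests on two-end-member mixing of high-frequency specific conductance (SC). A minimal sketch of that mixing calculation; the end-member SC values are hypothetical calibration choices, not the paper's.

```python
import numpy as np

sc_dilute, sc_conc = 80.0, 450.0            # end-member SC, uS/cm

def dilute_fraction(sc_quickflow):
    """Fraction of quick flow attributable to the dilute end member."""
    f = (sc_conc - sc_quickflow) / (sc_conc - sc_dilute)
    return np.clip(f, 0.0, 1.0)

sc_obs = np.array([120.0, 300.0, 430.0])    # sensor readings during events
print(dilute_fraction(sc_obs))              # -> [0.89 0.41 0.05] approx.
```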
Environmental impact of irrigation in la violada district (Spain): I. Salt export patterns.
Isidoro, D; Quílez, D; Aragüés, R
2006-01-01
Salt loading in irrigation return flows contributes to the salinization of the receiving water bodies, particularly when it originates in salt-affected areas such as those frequently found in the middle Ebro River basin (Spain). We determined the salt loading in La Violada Gully from the total dissolved solids (TDS) and flows (Q) during the 1995 to 1998 hydrological years. Since this gully collects flows from various sources, an end-member mixing analysis (EMMA) was performed to quantify the drainage flow from La Violada Irrigation District (VID). Three flow components were identified in La Violada Gully: drainage waters from VID (Qd); tail-waters from irrigation ditches, spill-over, and seepage from the Monegros Canal (Qo); and groundwater inflows (Qg) originating in the dryland watershed. Gypsum in the soils of VID was the main source of salts in La Violada Gully (flow-weighted mean TDS = 1720 mg L-1, dominated by sulfate and calcium). The contribution of Qg to the total gully flow during the 1996 irrigation season was low (6.5% of the total flow). The 1995 to 1998 average annual salt load in La Violada Gully was 78 628 Mg, 71% of which was exported during the irrigation season. The 1995 to 1998 average irrigation-season salt load in Qd was 43 015 Mg (77% of the total load). Thus, irrigated agriculture in VID was the main source of salt loading in this gully, with a yield of 11.1 Mg of salts per hectare of irrigated land for the irrigation season. Efficient irrigation systems and irrigation management practices that reduce Qd are key factors for controlling off-site salt pollution from these gypsum-rich irrigated areas.
Alameddine, Ibrahim; Qian, Song S; Reckhow, Kenneth H
2011-01-01
In-stream nutrient concentrations are well known to exhibit a strong relationship with river flow. The use of flow measurements to predict nutrient concentrations and subsequently nutrient loads is common in water quality modeling. Nevertheless, most adopted models assume that the relationship between flow and concentration is fixed across time as well as across different flow regimes. In this study, we developed a Bayesian changepoint-threshold model that relaxes these constraints and allows for the identification and quantification of any changes in the underlying flow-concentration relationship across time. The results from our study support the occurrence of a changepoint in time around the year 1999, which coincided with the period of implementing nitrogen control measures as part of the TMDL program developed for the Neuse Estuary in North Carolina. The occurrence of the changepoint challenges the underlying assumption of temporal invariance in the flow-concentrations relationship. The model results also point towards a transition in the river nitrogen delivery system from a point source dominated loading system towards a more complicated nonlinear system, where non-point source nutrient delivery plays a major role. Moreover, we use the developed model to assess the effectiveness of the nitrogen reduction measures in achieving a 30% drop in loading. The results indicate that while there is a strong evidence of a load reduction, there still remains a high level of uncertainty associated with the mean nitrogen load reduction. We show that the level of uncertainty around the estimated load reduction is not random but is flow related. Copyright © 2010 Elsevier Ltd. All rights reserved.
Impact of Probabilistic Weather on Flight Routing Decisions
NASA Technical Reports Server (NTRS)
Sheth, Kapil; Sridhar, Banavar; Mulfinger, Daniel
2006-01-01
Flight delays in the United States have been found to increase year after year, along with the increase in air traffic. During the four-month period from May through August of 2005, weather-related delays accounted for roughly 70% of all reported delays. Current weather prediction in the tactical (within 2 hours) timeframe is at manageable levels; however, the state of forecasting weather for the strategic (2-6 hours) timeframe is still not dependable for long-term planning. In the absence of reliable severe weather forecasts, decision-making for flights longer than two hours is challenging. This paper deals with an approach of using probabilistic weather prediction for Traffic Flow Management, and a general method of using this prediction for estimating expected values of flight length and delays in the National Airspace System (NAS). The current state-of-the-art convective weather forecasting is employed to aid the decision makers in arriving at decisions for traffic flow and flight planning. The six-agency effort working on the Next Generation Air Transportation System (NGATS) has considered weather-assimilated decision-making as one of the principal foci out of a list of eight. The Weather Integrated Product Team has considered integrated weather information and improved aviation weather forecasts as two of the main efforts (Ref. 1, 2). Recently, research has focused on the concept of operations for strategic traffic flow management (Ref. 3) and how weather data can be integrated for improved decision-making for efficient traffic management initiatives (Ref. 4, 5). An overview of the weather data needs and benefits of various participants in the air traffic system, along with available products, can be found in Ref. 6. Previous work related to the use of weather data in identifying and categorizing pilot intrusions into severe weather regions (Ref. 7, 8) has demonstrated a need for better forecasting in the strategic planning timeframes and for moving towards a probabilistic description of weather (Ref. 9). This paper focuses on a specified probability in a local region for flight intrusion/deviation decision-making. The process uses a probabilistic weather description and implements it in an air traffic assessment system to study trajectories of aircraft crossing a cut-off probability contour. This value would be useful for meteorologists in creating optimum distribution profiles for severe weather. Once available, the expected values of flight path and aggregate delays are calculated for efficient operations. The current research, however, does not deal with the issue of multiple cell encounters, as well as echo tops, and these will be a topic of future work.
Buck, Stephanie D.
2014-01-01
The Poteau Valley Improvement Authority uses Wister Lake in southeastern Oklahoma as a public water supply. Total phosphorus, total nitrogen, and suspended sediments from agricultural runoff and discharges from wastewater treatment plants and other sources have degraded water quality in the lake. As lake-water quality has degraded, water-treatment cost, chemical usage, and sludge production have increased for the Poteau Valley Improvement Authority. The U.S. Geological Survey (USGS), in cooperation with the Poteau Valley Improvement Authority, investigated and summarized concentrations of total phosphorus, total nitrogen, suspended sediment, and bacteria (Escherichia coli and Enterococcus sp.) in surface water flowing to Wister Lake. Estimates of total phosphorus, total nitrogen, and suspended sediment loads, yields, and flow-weighted mean concentrations of total phosphorus and total nitrogen concentrations were made for the Wister Lake Basin for a 3-year period from October 2010 through September 2013. Data from water samples collected at fixed time increments during base-flow conditions and during runoff conditions at the Poteau River at Loving, Okla. (USGS station 07247015), the Poteau River near Heavener, Okla. (USGS station 07247350), and the Fourche Maline near Leflore, Okla. (USGS station 07247650), water-quality stations were used to evaluate water quality over the range of streamflows in the basin. These data also were collected to estimate annual constituent loads and yields by using regression models. At the Poteau River stations, total phosphorus, total nitrogen, and suspended sediment concentrations in surface-water samples were significantly larger in samples collected during runoff conditions than in samples collected during base-flow conditions. At the Fourche Maline station, in contrast, concentrations of these constituents in water samples collected during runoff conditions were not significantly larger than concentrations during base-flow conditions. Flow-weighted mean total phosphorus concentrations at all three stations from 2011 to 2013 were several times larger than the Oklahoma State Standard for Scenic Rivers (0.037 milligrams per liter [mg/L]), with the largest flow-weighted phosphorus concentrations typically being measured at the Poteau River at Loving, Okla., station. Flow-weighted mean total nitrogen concentrations did not vary substantially between the Poteau River stations and the Fourche Maline near Leflore, Okla., station. At all of the sampled water-quality stations, bacteria (Escherichia coli and Enterococcus sp.) concentrations were substantially larger in water samples collected during runoff conditions than in water samples collected during base-flow conditions from 2011 to 2013. Estimated annual loads of total phosphorus, total nitrogen, and suspended sediment in the Poteau River stations during runoff conditions ranged from 82 to 98 percent of the total annual loads of those constituents. Estimated annual loads of total phosphorus, total nitrogen, and suspended sediment in the Fourche Maline during runoff conditions ranged from 86 to nearly 100 percent of the total annual loads. Estimated seasonal total phosphorus loads generally were smallest during base-flow and runoff conditions in autumn. Estimated seasonal total phosphorus loads during base-flow conditions tended to be largest in winter and during runoff conditions tended to be largest in the spring. 
Estimated seasonal total nitrogen loads tended to be smallest in autumn during base-flow and runoff conditions and largest in winter during runoff conditions. Estimated seasonal suspended sediment loads tended to be smallest during base-flow conditions in the summer and smallest during runoff conditions in the autumn. The largest estimated seasonal suspended sediment loads during runoff conditions typically were in the spring. The estimated mean annual total phosphorus yield was largest at the Poteau River at Loving, Okla., water-quality station. The estimated mean annual total phosphorus yield was largest during base flow at the Poteau River at Loving, Okla., water-quality station and at both of the Poteau River water-quality stations during runoff conditions. The estimated mean annual total nitrogen yields were largest at the Poteau River water-quality stations. Estimated mean annual total nitrogen yields were largest during base-flow and runoff conditions at the Poteau River at Loving, Okla., water-quality station. The estimated mean annual suspended sediment yield was largest at the Poteau River near Heavener, Okla., water-quality station during base-flow and runoff conditions. Flow-weighted mean concentrations indicated that total phosphorus inputs from the Poteau River Basin in the Wister Lake Basin were larger than from the Fourche Maline Basin. Flow-weighted mean concentrations of total nitrogen did not vary spatially in a consistent manner. The Poteau River and the Fourche Maline contributed estimated annual total phosphorus loads of 137 to 278 tons per year (tons/yr) to Wister Lake. Between 89 and 95 percent of the annual total phosphorus loads were transported to Wister Lake during runoff conditions. The Poteau River and the Fourche Maline contributed estimated annual total nitrogen loads of 657 to 1,294 tons/yr, with 86 to 94 percent of the annual total nitrogen loads being transported to Wister Lake during runoff conditions. The Poteau River and the Fourche Maline contributed estimated annual total suspended sediment loads of 110,919 to 234,637 tons/yr, with 94 to 99 percent of the annual suspended sediment loads being transported to Wister Lake during runoff conditions. Most of the total phosphorus and suspended sediment were delivered to Wister Lake during runoff conditions in the spring. The majority of the total nitrogen was delivered to Wister Lake during runoff conditions in winter.
Aerodynamic Characteristics of Controls.
1979-09-01
CONTENT: 1. Introduction; 2. Subsonic attached flow; 3. Transonic attached flow; 4. Supersonic attached flow; 5. Leading edge vortex flow; 6. (remainder of the record garbled in extraction; surviving fragments note that introducing the loading functions reduces the integral equation to a system of linear equations in the scale factors of the loading, and that different regions of influence are introduced for the subsonic and supersonic cases)
Predicted reliability of aerospace electronics: Application of two advanced probabilistic concepts
NASA Astrophysics Data System (ADS)
Suhir, E.
Two advanced probabilistic design-for-reliability (PDfR) concepts are addressed and discussed in application to the prediction, quantification and assurance of the aerospace electronics reliability: 1) the Boltzmann-Arrhenius-Zhurkov (BAZ) model, which is an extension of the currently widely used Arrhenius model and, in combination with the exponential law of reliability, enables one to obtain a simple, easy-to-use and physically meaningful formula for the evaluation of the probability of failure (PoF) of a material or a device after the given time in operation at the given temperature and under the given stress (not necessarily mechanical), and 2) the Extreme Value Distribution (EVD) technique, which can be used to assess the number of repetitive loadings that result in the material/device degradation and eventually lead to its failure by closing, in a step-wise fashion, the gap between the bearing capacity (stress-free activation energy) of the material or the device and the demand (loading). It is shown that the material degradation (aging, damage accumulation, flaw propagation, etc.) can be viewed, when the BAZ model is considered, as a Markovian process, and that the BAZ model can be obtained as the ultimate steady-state solution to the well-known Fokker-Planck equation in the theory of Markovian processes. It is shown also that the BAZ model addresses the worst, but a reasonably conservative, situation. It is suggested therefore that the transient period preceding the condition addressed by the steady-state BAZ model need not be accounted for in engineering evaluations. However, when there is an interest in understanding the transient degradation process, the obtained solution to the Fokker-Planck equation can be used for this purpose. As to the EVD concept, it attributes the degradation process to the accumulation of damages caused by a train of repetitive high-level loadings, while loadings of levels that are considerably lower than their extreme values do not contribute appreciably to the finite lifetime of a material or a device. In our probabilistic risk management (PRM) based analysis we treat the stress-free activation energy (capacity) as a normally distributed random variable, and choose, for the sake of simplicity, the (single-parametric) Rayleigh law as the basic distribution underlying the EVD. The general concepts addressed and discussed are illustrated by numerical examples. It is concluded that the application of the PDfR approach and particularly the above two advanced models should be considered as a natural, physically meaningful, informative, comprehensive, and insightful technique that reflects well the physics underlying the degradation processes in materials, devices and systems. It is the author's belief that they will be widely used in engineering practice, when high reliability is imperative, and the ability to quantify it is highly desirable.
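[Editor's note] A worked sketch of the BAZ formula described above, combining the mean time to failure tau = tau0 * exp((U0 - gamma*sigma)/(kT)) with the exponential law of reliability PoF(t) = 1 - exp(-t/tau). All parameter values are hypothetical, chosen only to show the temperature sensitivity.

```python
import numpy as np

k = 8.617e-5                  # Boltzmann constant, eV/K
tau0 = 1e-4                   # attempt-time prefactor, hr (hypothetical)
U0 = 0.8                      # stress-free activation energy, eV (hypothetical)
gamma_sigma = 0.25            # work of the applied stress, eV (hypothetical)
t_service = 1e4               # service time of interest, hr

for T in (300.0, 330.0, 360.0):                    # operating temperature, K
    tau = tau0 * np.exp((U0 - gamma_sigma) / (k * T))
    pof = 1.0 - np.exp(-t_service / tau)           # exponential law of reliability
    print(f"T = {T:.0f} K  tau = {tau:.2e} hr  PoF = {pof:.3f}")
```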
Multiscale Sediment-Laden Flow Theory and Its Application in Flood Risk Management
NASA Astrophysics Data System (ADS)
Cao, Z. X.; Pender, G.; Hu, P.
2011-09-01
Sediment-laden flows over erodible beds normally feature multiple time scales. The time scales of sediment transport and bed deformation relative to the flow essentially measure how fast sediment transport adapts to the capacity regime in line with the local flow scenario and how fast the bed deforms compared to the flow, which dictate whether a capacity-based and/or decoupled model is justified. This paper synthesizes the recently developed multiscale theory for sediment-laden flows over erodible beds, for bed load and suspended load transport respectively. It is shown that bed load transport can adapt to capacity sufficiently rapidly even under highly unsteady flows, and thus a capacity model is mostly applicable, whereas a non-capacity model is critical for suspended sediment because of its lower rate of adaptation to capacity. Physically coupled modeling is critical for cases characterized by rapid bed variation. Applications to flash floods and landslide dam-break floods are outlined.
Dynamic Load Predictions for Launchers Using Extra-Large Eddy Simulations X-Les
NASA Astrophysics Data System (ADS)
Maseland, J. E. J.; Soemarwoto, B. I.; Kok, J. C.
2005-02-01
Flow-induced unsteady loads can have a strong impact on performance and flight characteristics of aerospace vehicles and therefore play a crucial role in their design and operation. Complementary to costly flight tests and delicate wind-tunnel experiments, unsteady loads can be calculated using time-accurate Computational Fluid Dynamics. A capability to accurately predict the dynamic loads on aerospace structures at flight Reynolds numbers can be of great value for the design and analysis of aerospace vehicles. Advanced space launchers are subject to dynamic loads in the base region during the ascent to space. In particular the engine and nozzle experience aerodynamic pressure fluctuations resulting from massive flow separations. Understanding these phenomena is essential for performance enhancements for future launchers which operate larger nozzles. A new hybrid RANS-LES turbulence modelling approach termed eXtra-Large Eddy Simulations (X-LES) holds the promise to capture the flow structures associated with massive separations and enables the prediction of the broad-band spectrum of dynamic loads. This type of method, which reduces the cost of full LES, has become a focal point, driven by the demand for applicability in an industrial environment. The industrial feasibility of X-LES simulations is demonstrated by computing the unsteady aerodynamic loads on the main-engine nozzle of a generic space launcher configuration. The potential to calculate the dynamic loads is qualitatively assessed for transonic flow conditions in a comparison to wind-tunnel experiments. In terms of turn-around times, X-LES computations are already feasible within the time frames of the development process to support the structural design. Key words: massively separated flows; buffet loads; nozzle vibrations; space launchers; time-accurate CFD; composite RANS-LES formulation.
Tomlinson, Ryan E.; Silva, Matthew J.; Shoghi, Kooresh I.
2013-01-01
Purpose Blood flow is an important factor in bone production and repair, but its role in osteogenesis induced by mechanical loading is unknown. Here, we present techniques for evaluating blood flow and fluoride metabolism in a pre-clinical stress fracture model of osteogenesis in rats. Procedures Bone formation was induced by forelimb compression in adult rats. 15O water and 18F fluoride PET imaging were used to evaluate blood flow and fluoride kinetics 7 days after loading. 15O water was modeled using a one-compartment, two-parameter model, while a two-compartment, three-parameter model was used to model 18F fluoride. Input functions were created from the heart, and a stochastic search algorithm was implemented to provide initial parameter values in conjunction with a Levenberg–Marquardt optimization algorithm. Results Loaded limbs are shown to have a 26% increase in blood flow rate, 113% increase in fluoride flow rate, 133% increase in fluoride flux, and 13% increase in fluoride incorporation into bone as compared to non-loaded limbs (p < 0.05 for all results). Conclusions The results shown here are consistent with previous studies, confirming this technique is suitable for evaluating the vascular response and mineral kinetics of osteogenic mechanical loading.
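As a rough illustration of the kinetic modeling described above, the sketch below fits a one-compartment, two-parameter (K1, k2) model to a synthetic tissue curve with a Levenberg–Marquardt solver; the arterial input function, noise level, and fixed initial guess are invented for the example, and the paper's stochastic search for initial values is not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

# One-compartment, two-parameter (K1, k2) model for a freely diffusible
# tracer: dCt/dt = K1*Ca(t) - k2*Ct(t), i.e. Ct = K1 * (Ca conv exp(-k2 t)).
t = np.linspace(0.0, 120.0, 121)           # s
dt = t[1] - t[0]
ca = np.exp(-t / 30.0) * (t / 10.0)         # made-up arterial input curve

def model(params):
    k1, k2 = params
    return k1 * np.convolve(ca, np.exp(-k2 * t))[: t.size] * dt

rng = np.random.default_rng(0)
observed = model([0.5, 0.1]) + rng.normal(0.0, 0.01, t.size)

# Levenberg-Marquardt refinement from a (here fixed) initial guess.
fit = least_squares(lambda p: model(p) - observed, x0=[0.2, 0.05], method="lm")
print(fit.x)  # recovered (K1, k2)
```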
NASA Technical Reports Server (NTRS)
Yunis, Isam S.; Carney, Kelly S.
1993-01-01
A new aerospace application of structural reliability techniques is presented, where the applied forces depend on many probabilistic variables. This application is the plume impingement loading of the Space Station Freedom Photovoltaic Arrays. When the space shuttle berths with Space Station Freedom it must brake and maneuver towards the berthing point using its primary jets. The jet exhaust, or plume, may cause high loads on the photovoltaic arrays. The many parameters governing this problem are highly uncertain and random. An approach, using techniques from structural reliability, as opposed to the accepted deterministic methods, is presented which assesses the probability of failure of the array mast due to plume impingement loading. A Monte Carlo simulation of the berthing approach is used to determine the probability distribution of the loading. A probability distribution is also determined for the strength of the array. Structural reliability techniques are then used to assess the array mast design. These techniques are found to be superior to the standard deterministic dynamic transient analysis, for this class of problem. The results show that the probability of failure of the current array mast design, during its 15 year life, is minute.
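A stripped-down version of the load-versus-strength comparison underlying such an assessment might look as follows; the distributions and parameter values are hypothetical stand-ins, not those derived from the berthing simulation.

```python
import numpy as np

# Minimal Monte Carlo sketch: sample the uncertain plume-impingement load
# and the uncertain mast strength, then count exceedances.
rng = np.random.default_rng(42)
n = 1_000_000

load = rng.lognormal(mean=np.log(2.0e3), sigma=0.4, size=n)   # bending load, N*m
strength = rng.normal(loc=6.0e3, scale=5.0e2, size=n)          # mast strength, N*m

p_fail = np.mean(load > strength)
print(f"estimated probability of failure: {p_fail:.2e}")
```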
NASA Technical Reports Server (NTRS)
Mulder, Andrew; Skelley, Stephen
2011-01-01
Fluctuating pressure data from water flow testing of an unshrouded two-blade inducer revealed a cavitation-induced oscillation with the potential to induce a radial load on the turbopump shaft in addition to other, more traditionally analyzed radial loads. Subsequent water flow testing of the inducer with a rotating force measurement system confirmed that the cavitation-induced oscillation did impart a radial load to the inducer. After quantifying the load in a baseline configuration, two inducer shroud treatments were selected and tested to reduce the cavitation-induced load. The first treatment was to increase the tip clearance, and the second was to introduce a circumferential groove near the inducer leading edge. Increasing the clearance resulted in a small decrease in radial load along with some steady performance degradation. The groove greatly reduced the hydrodynamic load with little to no steady performance loss. The groove did, however, introduce some new, relatively high-frequency, spatially complex oscillations into the flow environment.
2016-11-09
the model does not become a full probabilistic attack graph analysis of the network, whose data requirements are currently unrealistic. The second...flow. – Untrustworthy persons may intentionally try to exfiltrate known sensitive data to external networks. People may also unintentionally leak...section will provide details on the components, procedures, data requirements, and parameters required to instantiate the network porosity model. These
A Multipopulation PSO Based Memetic Algorithm for Permutation Flow Shop Scheduling
Liu, Ruochen; Ma, Chenlin; Ma, Wenping; Li, Yangyang
2013-01-01
The permutation flow shop scheduling problem (PFSSP) is a production scheduling problem that is among the hardest combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations in which each particle evolves by the standard PSO, and each subpopulation is then updated by using different local search schemes such as variable neighborhood search (VNS) and the individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model by using an estimation of distribution algorithm (EDA), and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire particle swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely, a PSO based memetic algorithm (PSOMA) and hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSPs taken from the OR-library, and the experimental results show that it is an effective approach for the PFSSP.
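For reference, the standard PSO velocity/position update used inside each subpopulation is sketched below for a continuous search space; MPSOMA applies PSO to permutations through an encoding that is omitted here, and the coefficient values are common textbook choices rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients

def pso_step(x, v, pbest, gbest):
    """One standard PSO update: v' = w*v + c1*r1*(pbest-x) + c2*r2*(gbest-x)."""
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new

# One step for 5 particles in 4 dimensions.
x = rng.random((5, 4)); v = np.zeros((5, 4))
x, v = pso_step(x, v, pbest=x.copy(), gbest=x[0])
print(x)
```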
Load flow and state estimation algorithms for three-phase unbalanced power distribution systems
NASA Astrophysics Data System (ADS)
Madvesh, Chiranjeevi
Distribution load flow and state estimation are two important functions in distribution energy management systems (DEMS) and advanced distribution automation (ADA) systems. Distribution load flow analysis is a tool which helps to analyze the status of a power distribution system under steady-state operating conditions. In this research, an effective and comprehensive load flow algorithm is developed to extensively incorporate the distribution system components. Distribution system state estimation is a mathematical procedure which aims to estimate the operating states of a power distribution system by utilizing the information collected from available measurement devices in real-time. An efficient and computationally effective state estimation algorithm adapting the weighted-least-squares (WLS) method has been developed in this research. Both the developed algorithms are tested on different IEEE test-feeders and the results obtained are justified.
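The core WLS step such an estimator iterates can be sketched as follows for a linearized measurement model; the matrices and measurement values below are hypothetical placeholders, and a real distribution-system estimator would repeat this update on the nonlinear measurement functions.

```python
import numpy as np
import numpy.linalg as la

# Weighted-least-squares estimate for z = H x + e, e ~ N(0, R):
# x_hat = (H^T W H)^{-1} H^T W z, with W = R^{-1}.
H = np.array([[1.0, 0.0],
              [1.0, -1.0],
              [0.0, 1.0]])              # measurement Jacobian (illustrative)
R = np.diag([0.01, 0.02, 0.01])          # measurement error covariance
W = la.inv(R)
z = np.array([1.02, 0.05, 0.97])         # measurements (illustrative)

x_hat = la.inv(H.T @ W @ H) @ H.T @ W @ z
print(x_hat)                             # estimated states
```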
Jet Flap Stator Blade Test in the High Reaction Turbine Blade Cascade Tunnel
1970-03-21
A researcher examines the setup of a jet flap blade in the High Reaction Turbine Blade Cascade Tunnel at the National Aeronautics and Space Administration (NASA) Lewis Research Center. Lewis researchers were seeking ways to increase turbine blade loading on aircraft engines in an effort to reduce the overall size and weight of engines. The ability of each blade to handle higher loads meant that fewer stages and fewer blades were required. This study analyzed the performance of a turbine blade using a jet flap and high loading. A jet of air was injected into the main stream from the pressure surface near the trailing edge. The jet formed an aerodynamic flap which deflected the flow and changed the circulation around the blade and thus increased the blade loading. The air jet also reduced boundary layer thickness. The jet-flap blade design was appealing because the cooling air may also be used for the jet. The performance was studied in a two-dimensional cascade including six blades. The researcher is checking the jet flap cascade with an exit survey probe. The probe measured the differential pressure that was proportional to the flow angle. The blades were tested over a range of velocity ratios and three jet flow conditions. Increased jet flow improved the turning and decreased both the weight flow and the blade loading. However, high blade loadings were obtained at all jet flow conditions.
Towards a Probabilistic Preliminary Design Criterion for Buckling Critical Composite Shells
NASA Technical Reports Server (NTRS)
Arbocz, Johann; Hilburger, Mark W.
2003-01-01
A probability-based analysis method for predicting buckling loads of compression-loaded laminated-composite shells is presented, and its potential as a basis for a new shell-stability design criterion is demonstrated and discussed. In particular, a database containing information about specimen geometry, material properties, and measured initial geometric imperfections for a selected group of laminated-composite cylindrical shells is used to calculate new buckling-load "knockdown factors". These knockdown factors are shown to be substantially improved, and hence much less conservative, than the corresponding deterministic knockdown factors that are presently used by industry. The probability integral associated with the analysis is evaluated by using two methods; that is, by using the exact Monte Carlo method and by using an approximate First-Order Second-Moment method. A comparison of the results from these two methods indicates that the First-Order Second-Moment method yields results that are conservative for the shells considered. Furthermore, the results show that the improved, reliability-based knockdown factor presented always yields a safe estimate of the buckling load for the shells examined.
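The two evaluation methods compared above can be illustrated on a toy limit state g = R - S; the means and standard deviations below are invented. Note that for jointly normal variables and a linear limit state the First-Order Second-Moment estimate is exact, so the conservatism reported in the paper must come from the non-normal statistics entering the actual probability integral.

```python
import numpy as np
from scipy.stats import norm

# FOSM reliability index for g = R - S, cross-checked against Monte Carlo.
mu_r, sd_r = 1.00, 0.12    # normalized buckling resistance (illustrative)
mu_s, sd_s = 0.65, 0.05    # normalized applied load (illustrative)

beta = (mu_r - mu_s) / np.hypot(sd_r, sd_s)   # reliability index
print("FOSM P_f:", norm.cdf(-beta))

rng = np.random.default_rng(3)
n = 2_000_000
r = rng.normal(mu_r, sd_r, n)
s = rng.normal(mu_s, sd_s, n)
print("Monte Carlo P_f:", np.mean(r < s))
```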
NASA Technical Reports Server (NTRS)
Runyan, Harry L; Woolston, Donald S
1957-01-01
A method is presented for calculating the loading on a finite wing oscillating in subsonic or sonic flow. The method is applicable to any plan form and may be used for determining the loading on deformed wings. The procedure is approximate and requires numerical integration over the wing surface.
Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements
NASA Technical Reports Server (NTRS)
Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.
1988-01-01
The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show the PFEM is a very powerful tool in determining the second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties and crack length, orientation, and location.
2007-06-01
of subsurface mechanism occurring with decreasing stress. Szczepanski, et al. [21] show that this trend continues into the 10^7–10^9 cycles regime...close to maximum shear, i.e., slip deformation. [figure residue: plot of angle of facet normal (10–60 degrees) versus cycles (1x10^5–1x10^6) for Microstructure A and Microstructure B] Szczepanski, et al. [22] have also identified this as the predominant subsurface crack initiation mechanism at ultrasonic loading frequencies. The
Probabilistic Description of Fatigue Crack Growth Under Constant-and Variable-Amplitude Loading
1989-03-01
plane, see figure 14. The length of the deflected crack component and its angle, b and q, respectively, in Figure 15 were found to depend on the crack...length at which the deflection occurs; as the crack length increases, b increases while q decreases. Due to the orientation of the deflected component...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madankan, R.; Pouget, S.; Singla, P., E-mail: psingla@buffalo.edu
Volcanic ash advisory centers are charged with forecasting the movement of volcanic ash plumes, for aviation, health and safety preparation. Deterministic mathematical equations model the advection and dispersion of these plumes. However initial plume conditions – height, profile of particle location, volcanic vent parameters – are known only approximately at best, and other features of the governing system such as the windfield are stochastic. These uncertainties make forecasting plume motion difficult. As a result of these uncertainties, ash advisories based on a deterministic approach tend to be conservative, and many times over- or underestimate the extent of a plume. This paper presents an end-to-end framework for generating a probabilistic approach to ash plume forecasting. This framework uses an ensemble of solutions, guided by the Conjugate Unscented Transform (CUT) method for evaluating expectation integrals. This ensemble is used to construct a polynomial chaos expansion that can be sampled cheaply, to provide a probabilistic model forecast. The CUT method is then combined with a minimum variance condition, to provide a full posterior pdf of the uncertain source parameters, based on observed satellite imagery. The April 2010 eruption of the Eyjafjallajökull volcano in Iceland is employed as a test example. The puff advection/dispersion model is used to hindcast the motion of the ash plume through time, concentrating on the period 14–16 April 2010. Variability in the height and particle loading of that eruption is introduced through a volcano column model called bent. Output uncertainty due to the assumed uncertain input parameter probability distributions, and a probabilistic spatial-temporal estimate of ash presence are computed.
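The "cheap sampling" of a polynomial chaos surrogate can be illustrated in one dimension as below, using probabilists' Hermite polynomials of a standard normal germ; the coefficients are invented, whereas in the paper they would come from the CUT-based evaluation of the expectation integrals.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# 1-D polynomial chaos surrogate: QoI(xi) = c0 + c1*He1(xi) + c2*He2(xi),
# with xi a standard normal germ. Coefficients are illustrative only.
coeffs = [1.0, 0.3, 0.05]

rng = np.random.default_rng(7)
xi = rng.standard_normal(100_000)
samples = hermeval(xi, coeffs)

print("surrogate mean:", samples.mean())   # ~c0 by Hermite orthogonality
print("surrogate std :", samples.std())
```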
Acoustic emission based damage localization in composites structures using Bayesian identification
NASA Astrophysics Data System (ADS)
Kundu, A.; Eaton, M. J.; Al-Jumali, S.; Sikdar, S.; Pullin, R.
2017-05-01
Acoustic emission based damage detection in composite structures is based on detection of ultra high frequency packets of acoustic waves emitted from damage sources (such as fibre breakage, fatigue fracture, amongst others) with a network of distributed sensors. This non-destructive monitoring scheme requires solving an inverse problem where the measured signals are linked back to the location of the source. This in turn enables rapid deployment of mitigative measures. The presence of significant amount of uncertainty associated with the operating conditions and measurements makes the problem of damage identification quite challenging. The uncertainties stem from the fact that the measured signals are affected by the irregular geometries, manufacturing imprecision, imperfect boundary conditions, existing damages/structural degradation, amongst others. This work aims to tackle these uncertainties within a framework of automated probabilistic damage detection. The method trains a probabilistic model of the parametrized input and output model of the acoustic emission system with experimental data to give probabilistic descriptors of damage locations. A response surface modelling the acoustic emission as a function of parametrized damage signals collected from sensors would be calibrated with a training dataset using Bayesian inference. This is used to deduce damage locations in the online monitoring phase. During online monitoring, the spatially correlated time data is utilized in conjunction with the calibrated acoustic emissions model to infer the probabilistic description of the acoustic emission source within a hierarchical Bayesian inference framework. The methodology is tested on a composite structure consisting of carbon fibre panel with stiffeners and damage source behaviour has been experimentally simulated using standard H-N sources. The methodology presented in this study would be applicable in the current form to structural damage detection under varying operational loads and would be investigated in future studies.
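A heavily simplified, grid-based Bayesian localization in the same spirit is sketched below using arrival-time differences and a Gaussian likelihood; the sensor layout, wave speed, and noise level are hypothetical, and the paper's calibrated response surface and hierarchical inference are not reproduced.

```python
import numpy as np

# Toy Bayesian localization of an acoustic emission source from time
# differences of arrival (TDOA) at four sensors on a 1 m x 1 m panel.
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # m
c = 5000.0        # assumed wave speed, m/s
sigma_t = 2e-6    # arrival-time noise, s

true_src = np.array([0.3, 0.7])
d = np.linalg.norm(sensors - true_src, axis=1)
rng = np.random.default_rng(5)
dt_obs = (d - d[0]) / c + rng.normal(0, sigma_t, d.size)  # TDOA vs sensor 0

# Uniform prior over a grid; Gaussian likelihood on the TDOA residuals.
xs = ys = np.linspace(0, 1, 201)
X, Y = np.meshgrid(xs, ys)
D = np.sqrt((X[..., None] - sensors[:, 0])**2 + (Y[..., None] - sensors[:, 1])**2)
resid = (D - D[..., :1]) / c - dt_obs
log_post = -0.5 * np.sum((resid / sigma_t) ** 2, axis=-1)
best = np.unravel_index(np.argmax(log_post), log_post.shape)
print("MAP source estimate:", X[best], Y[best])
```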
Accounting for Uncertainties in Strengths of SiC MEMS Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel; Evans, Laura; Beheim, Glen; Trapp, Mark; Jadaan, Osama; Sharpe, William N., Jr.
2007-01-01
A methodology has been devised for accounting for uncertainties in the strengths of silicon carbide structural components of microelectromechanical systems (MEMS). The methodology enables prediction of the probabilistic strengths of complexly shaped MEMS parts using data from tests of simple specimens. This methodology is intended to serve as a part of a rational basis for designing SiC MEMS, supplementing methodologies that have been borrowed from the art of designing macroscopic brittle material structures. The need for this or a similar methodology arises as a consequence of the fundamental nature of MEMS and the brittle silicon-based materials of which they are typically fabricated. When tested to fracture, MEMS and structural components thereof show wide part-to-part scatter in strength. The methodology involves the use of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) software in conjunction with the ANSYS Probabilistic Design System (PDS) software to simulate or predict the strength responses of brittle material components while simultaneously accounting for the effects of variability of geometrical features on the strength responses. As such, the methodology involves the use of an extended version of the ANSYS/CARES/PDS software system described in Probabilistic Prediction of Lifetimes of Ceramic Parts (LEW-17682-1/4-1), Software Tech Briefs supplement to NASA Tech Briefs, Vol. 30, No. 9 (September 2006), page 10. The ANSYS PDS software enables the ANSYS finite-element-analysis program to account for uncertainty in the design and analysis process. The ANSYS PDS software accounts for uncertainty in material properties, dimensions, and loading by assigning probabilistic distributions to user-specified model parameters and performing simulations using various sampling techniques.
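The weakest-link transfer from simple specimens to a larger part can be illustrated with the area-scaled Weibull expression below; the modulus and characteristic strength are hypothetical fit values, not SiC MEMS data, and CARES/Life evaluates the equivalent integral over the computed stress field rather than a single uniformly stressed area.

```python
import numpy as np

# Weakest-link (Weibull) scaling: P_f = 1 - exp(-(A/A0) * (sigma/sigma0)^m).
m = 10.0           # Weibull modulus from simple-specimen tests (illustrative)
sigma0 = 1.2e9     # characteristic strength, Pa, for reference area a0
a0 = 1.0e-8        # reference stressed area, m^2

def p_failure(stress_pa, area_m2):
    """Failure probability of a uniformly stressed area under Weibull scaling."""
    return 1.0 - np.exp(-(area_m2 / a0) * (stress_pa / sigma0) ** m)

# Same stress, 10x the stressed area -> markedly higher failure probability.
print(p_failure(8.0e8, 1.0e-8))
print(p_failure(8.0e8, 1.0e-7))
```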
Probabilistic design of fibre concrete structures
NASA Astrophysics Data System (ADS)
Pukl, R.; Novák, D.; Sajdlová, T.; Lehký, D.; Červenka, J.; Červenka, V.
2017-09-01
Advanced computer simulation has recently become a well-established methodology for evaluating the resistance of concrete engineering structures. Nonlinear finite element analysis enables realistic prediction of structural damage, peak load, failure, post-peak response, development of cracks in concrete, yielding of reinforcement, concrete crushing or shear failure. The nonlinear material models can cover various types of concrete and reinforced concrete: ordinary concrete, plain or reinforced, without or with prestressing, fibre concrete, (ultra) high performance concrete, lightweight concrete, etc. Advanced material models taking into account fibre concrete properties such as the shape of the tensile softening branch, high toughness and ductility are described in the paper. Since the variability of the fibre concrete material properties is rather high, probabilistic analysis seems to be the most appropriate format for structural design and evaluation of structural performance, reliability and safety. The presented combination of nonlinear analysis with advanced probabilistic methods allows evaluation of structural safety characterized by failure probability or by reliability index, respectively. The authors offer a methodology and computer tools for realistic safety assessment of concrete structures; the utilized approach is based on randomization of the nonlinear finite element analysis of the structural model. Uncertainty or randomness of the material properties obtained from material tests is accounted for in the random distributions. Furthermore, degradation of the reinforced concrete materials, such as carbonation of concrete, corrosion of reinforcement, etc., can be accounted for in order to analyze life-cycle structural performance and to enable prediction of structural reliability and safety in time development. The results can serve as a rational basis for design of fibre concrete engineering structures based on advanced nonlinear computer analysis. The presented methodology is illustrated on results from two probabilistic studies with different types of concrete structures related to practical applications and made from various materials (with the parameters obtained from real material tests).
Accelerated fatigue testing of dentin-composite bond with continuously increasing load.
Li, Kai; Guo, Jiawen; Li, Yuping; Heo, Young Cheul; Chen, Jihua; Xin, Haitao; Fok, Alex
2017-06-01
The aim of this study was to evaluate an accelerated fatigue test method that used a continuously increasing load for testing the dentin-composite bond strength. Dentin-composite disks (ϕ5 mm × 2 mm) made from bovine incisor roots were subjected to cyclic diametral compression with a continuously increasing load amplitude. Two different load profiles, linear and nonlinear with respect to the number of cycles, were considered. The data were then analyzed by using a probabilistic failure model based on the Weakest-Link Theory and the classical stress-life function, before being transformed to simulate clinical data of direct restorations. All the experimental data could be well fitted with a 2-parameter Weibull function. However, a calibration was required for the effective stress amplitude to account for the difference between static and cyclic loading. Good agreement was then obtained between theory and experiments for both load profiles. The in vitro model also successfully simulated the clinical data. The method presented will allow tooth-composite interfacial fatigue parameters to be determined more efficiently. With suitable calibration, the in vitro model can also be used to assess composite systems in a more clinically relevant manner.
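Fitting a 2-parameter Weibull function to a set of failure loads, as done here, can be sketched as follows; the data are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import weibull_min

# Synthetic failure loads standing in for the dentin-composite disk data.
rng = np.random.default_rng(11)
failure_loads = weibull_min.rvs(c=6.0, scale=120.0, size=30, random_state=rng)

# 2-parameter fit: location fixed at zero leaves shape (Weibull modulus)
# and scale (characteristic load).
shape, loc, scale = weibull_min.fit(failure_loads, floc=0.0)
print(f"Weibull modulus m = {shape:.2f}, characteristic load = {scale:.1f} N")

# Survival probability at a given load under the fitted model.
print("P(survive 100 N) =", weibull_min.sf(100.0, shape, 0.0, scale))
```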
NASA Astrophysics Data System (ADS)
Rodriguez Pretelin, Abelardo; Nowak, Wolfgang
2017-04-01
Well head protection areas (WHPAs) are frequently used as safety measures for drinking water wells, preventing them from being polluted by restricting land use activities in their proximity. Two sources of uncertainty are involved during delineation: 1) uncertainty in aquifer parameters and 2) time-varying groundwater flow scenarios and their own inherent uncertainties. The former has been studied by Enzenhoefer et al. (2012 [1] and 2014 [2]) as a probabilistic risk version of WHPA delineation. The latter is frequently neglected and replaced by steady-state assumptions, thereby ignoring time-variant flow conditions triggered either by anthropogenic causes or climatic conditions. In this study we analyze the influence of transient flow considerations in WHPA delineation, following annual seasonal behavior, with transiency represented by four transient conditions: (I) regional groundwater flow direction, (II) strength of the regional hydraulic gradient, (III) natural recharge to the groundwater and (IV) pumping rate. Addressing WHPA delineation in transient flow scenarios is computationally expensive. Thus, we develop an efficient method using a dynamic superposition of steady-state flow solutions coupled with a reversed formulation of advective-dispersive transport based on Lagrangian particle tracking with continuous injection. This analysis results in a time-frequency map of pixel-wise membership to the well catchment. In addition to transient flow conditions, we recognize two sources of uncertainty: inexact knowledge of transient drivers and of parameters. The uncertainties are accommodated through Monte Carlo simulation. With the help of a global sensitivity analysis, we investigate the impact of transiency on WHPA solutions. In particular, we evaluate: (1) among all considered transients, which ones are the most influential; and (2) how influential in WHPA delineation the transience-related uncertainty is compared to aquifer parameter uncertainty. Literature: [1] R. Enzenhoefer, W. Nowak, and R. Helmig. Probabilistic exposure risk assessment with advective-dispersive well vulnerability criteria. Advances in Water Resources, 36:121-132, 2012. [2] R. Enzenhoefer, T. Bunk, and W. Nowak. Nine steps to risk-informed wellhead protection and management: a case study. Ground Water, 52:161-174, 2014.
The effects of particle loading on turbulence structure and modelling
NASA Technical Reports Server (NTRS)
Squires, Kyle D.; Eaton, J. K.
1989-01-01
The objective of the present research was to extend the Direct Numerical Simulation (DNS) approach to particle-laden turbulent flows using a simple model of particle/flow interaction. The program addressed the simplest type of flow, homogeneous, isotropic turbulence, and examined interactions between the particles and gas phase turbulence. The specific range of problems examined include those in which the particle is much smaller than the smallest length scales of the turbulence yet heavy enough to slip relative to the flow. The particle mass loading is large enough to have a significant impact on the turbulence, while the volume loading was small enough such that particle-particle interactions could be neglected. Therefore, these simulations are relevant to practical problems involving small, dense particles conveyed by turbulent gas flows at moderate loadings. A sample of the results illustrating modifications of the particle concentration field caused by the turbulence structure is presented and attenuation of turbulence by the particle cloud is also illustrated.
NASA Astrophysics Data System (ADS)
Xian, Benzhong; Wang, Junhui; Gong, Chenglin; Yin, Yu; Chao, Chuzhi; Liu, Jianping; Zhang, Guodong; Yan, Qi
2018-06-01
Subaquatic channels are known as active conduits for the delivery of terrigenous sediments into related marine and lacustrine basins, as well as important targets for hydrocarbon exploration. Compared to submarine channels, lacustrine subaqueous channels created by hyperpycnal flows are understudied. Using well-exposed outcrops collected from three different locations in the southern Ordos Basin, central China, morphologies and architecture of a channelized hyperpycnal system were studied and classified. Six facies associations represent sedimentary processes from strong erosion by bedload dominated hyperpycnal flows, to transitional deposition jointly controlled by bedload and suspended-load dominated hyperpycnal flows, finally to deposition from suspended-load dominated hyperpycnal flows. On the basis of channel morphologies, infilling sediments and sedimentary processes, the documented channels can be classified into four main categories, which are erosional, bedload dominated, suspended-load dominated, and depositional channels. In very proximal and very distal locations, erosional channels and depositional channels serve as two end-members, while in middle areas, bedload-dominated channels and suspended-load dominated channels are transitional types. Erosional channels, as a response to strong erosion from bedload dominated hyperpycnal flows on the upper slope, were mainly filled by mud interbedded with thin sand beds. As flow energy decreases, bedload dominated channels develop on middle slopes, which are characterized mainly by under- to balanced sediment infillings with cross-bedded sandstones and/or minor massive sandstones. Compared to bedload dominated channels, suspended-load dominated channels mainly develop in deeper water, and were filled mainly by massive or planar-laminated sandstones. Depositional channels, as a response to suspended-load dominated hyperpycnal flows in deep-water areas, are characterized by thin- to medium-bedded classical turbidites with Bouma sequences and thin- to thick massive sandstones. Such evolution patterns of hyperpycnal channel systems are ascribed to the progressive decrease in flow capacity of hyperpycnal flows, and provide an adequate explanation for the basinward channelization behavior of hyperpycnal systems.
Reliability of a Parallel Pipe Network
NASA Technical Reports Server (NTRS)
Herrera, Edgar; Chamis, Christopher (Technical Monitor)
2001-01-01
The goal of this NASA-funded research is to advance research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction methods for improved aerospace and aircraft propulsion system components. Reliability methods are used to quantify response uncertainties due to inherent uncertainties in design variables. In this report, several reliability methods are applied to a parallel pipe network. The observed responses are the head delivered by a main pump and the head values of two parallel lines at certain flow rates. The probability that the flow rates in the lines will be less than their specified minimums will be discussed.
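A minimal Monte Carlo treatment of the stated question might look as follows, assuming each parallel line dissipates a common available head H with a quadratic loss law H = k_i * Q_i^2; all distributions and the minimum-flow limit are invented for illustration.

```python
import numpy as np

# Two parallel lines share the head drop H; with H = k_i * Q_i^2 the line
# flows are Q_i = sqrt(H / k_i). Sample the uncertain head and resistances
# and estimate the probability that a line flow falls below its minimum.
rng = np.random.default_rng(9)
n = 500_000

H = rng.normal(30.0, 2.0, n)                    # available head, m
k1 = rng.lognormal(np.log(0.05), 0.1, n)        # line resistances, m/(m^3/s)^2
k2 = rng.lognormal(np.log(0.06), 0.1, n)

q1 = np.sqrt(np.clip(H, 0, None) / k1)          # line flow rates, m^3/s
q2 = np.sqrt(np.clip(H, 0, None) / k2)

q_min = 21.0                                     # specified minimum flow
print("P(q1 < q_min):", np.mean(q1 < q_min))
print("P(q2 < q_min):", np.mean(q2 < q_min))
```

Under these invented numbers the higher-resistance line dominates the failure probability, which is the kind of ranking a reliability analysis of the network would surface.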
Predictability of short-range forecasting: a multimodel approach
NASA Astrophysics Data System (ADS)
García-Moya, Jose-Antonio; Callado, Alfons; Escribà, Pau; Santos, Carlos; Santos-Muñoz, Daniel; Simarro, Juan
2011-05-01
Numerical weather prediction (NWP) models (including mesoscale) have limitations when it comes to dealing with severe weather events because extreme weather is highly unpredictable, even in the short range. A probabilistic forecast based on an ensemble of slightly different model runs may help to address this issue. Among other ensemble techniques, Multimodel ensemble prediction systems (EPSs) are proving to be useful for adding probabilistic value to mesoscale deterministic models. A Multimodel Short Range Ensemble Prediction System (SREPS) focused on forecasting the weather up to 72 h has been developed at the Spanish Meteorological Service (AEMET). The system uses five different limited area models (LAMs), namely HIRLAM (HIRLAM Consortium), HRM (DWD), the UM (UKMO), MM5 (PSU/NCAR) and COSMO (COSMO Consortium). These models run with initial and boundary conditions provided by five different global deterministic models, namely IFS (ECMWF), UM (UKMO), GME (DWD), GFS (NCEP) and CMC (MSC). AEMET-SREPS (AE) validation on the large-scale flow, using ECMWF analysis, shows a consistent and slightly underdispersive system. For surface parameters, the system shows high skill forecasting binary events. 24-h precipitation probabilistic forecasts are verified using an up-scaling grid of observations from European high-resolution precipitation networks, and compared with ECMWF-EPS (EC).
Reliability assessment of slender concrete columns at the stability failure
NASA Astrophysics Data System (ADS)
Valašík, Adrián; Benko, Vladimír; Strauss, Alfred; Täubling, Benjamin
2018-01-01
The European Standard for the design of concrete columns by non-linear methods shows deficiencies in terms of global reliability in cases where the columns fail by loss of stability. Buckling failure is a brittle failure which occurs without warning, and the probability of its occurrence depends on the column's slenderness. Experiments with slender concrete columns were carried out in cooperation with STRABAG Bratislava LTD in the Central Laboratory of the Faculty of Civil Engineering SUT in Bratislava. The following article aims to compare the global reliability of slender concrete columns with slenderness of 90 and higher. The columns were designed according to methods offered by EN 1992-1-1 [1]. The mentioned experiments were used as the basis for deterministic nonlinear modelling of the columns and the subsequent probabilistic evaluation of the variability of the structural response. The final results may be utilized as thresholds for the loading of produced structural elements, and they aim to present probabilistic design as less conservative compared to the classic partial-safety-factor based design and the alternative ECOV method.
Probabilistic Seismic Hazard Assessment for Iraq
DOE Office of Scientific and Technical Information (OSTI.GOV)
Onur, Tuna; Gok, Rengin; Abdulnaby, Wathiq
Probabilistic Seismic Hazard Assessments (PSHA) form the basis for most contemporary seismic provisions in building codes around the world. The current building code of Iraq was published in 1997. An update to this edition is in the process of being released. However, there are no national PSHA studies in Iraq for the new building code to refer to for seismic loading in terms of spectral accelerations. As an interim solution, the new draft building code was considering referring to PSHA results produced in the late 1990s as part of the Global Seismic Hazard Assessment Program (GSHAP; Giardini et al., 1999). However these results are: a) more than 15 years outdated, b) PGA-based only, necessitating rough conversion factors to calculate spectral accelerations at 0.3 s and 1.0 s for seismic design, and c) at a probability level of 10% chance of exceedance in 50 years, not the 2% that the building code requires. Hence there is a pressing need for a new, updated PSHA for Iraq.
Tortorelli, Robert L.
2006-01-01
The City of Tulsa, Oklahoma, uses Lake Eucha and Spavinaw Lake in the Eucha-Spavinaw basin in northwestern Arkansas and northeastern Oklahoma for public water supply. Taste and odor problems in the water attributable to blue-green algae have increased in frequency over time. Changes in the algae community in the lakes may be attributable to increases in nutrient levels in the lakes, and in the waters feeding the lakes. The U.S. Geological Survey, in cooperation with the City of Tulsa, conducted an investigation to summarize nitrogen and phosphorus concentrations and provide estimates of nitrogen and phosphorus loads, yields, and flow-weighted concentrations in the Eucha-Spavinaw basin for a 3-year period from January 2002 through December 2004. This report provides information needed to advance knowledge of the regional hydrologic system and understanding of hydrologic processes, and provides hydrologic data and results useful to multiple parties for interstate compacts. Nitrogen and phosphorus concentrations were significantly greater in runoff samples than in base-flow samples at Spavinaw Creek near Maysville, Arkansas; Spavinaw Creek near Colcord, Oklahoma, and Beaty Creek near Jay, Oklahoma. Runoff concentrations were not significantly greater than in base-flow samples at Spavinaw Creek near Cherokee, Arkansas; and Spavinaw Creek near Sycamore, Oklahoma. Nitrogen concentrations in base-flow samples significantly increased in the downstream direction in Spavinaw Creek from the Maysville to Sycamore stations then significantly decreased from the Sycamore to the Colcord stations. Nitrogen in base-flow samples from Beaty Creek was significantly less than in those from Spavinaw Creek. Phosphorus concentrations in base-flow samples significantly increased from the Maysville to Cherokee stations in Spavinaw Creek, probably due to a point source between those stations, then significantly decreased downstream from the Cherokee to Colcord stations. Phosphorus in base-flow samples from Beaty Creek was significantly less than phosphorus in base-flow samples from Spavinaw Creek downstream from the Maysville station. Nitrogen concentrations in runoff samples were not significantly different among the stations on Spavinaw Creek; however, the concentrations at Beaty Creek were significantly less than at all other stations. Phosphorus concentrations in runoff samples were not significantly different among the three downstream stations on Spavinaw Creek, and not significantly different at the Maysville station on Spavinaw Creek and the Beaty Creek station. Phosphorus and nitrogen concentrations in runoff samples from all stations generally increased with increasing streamflow. Estimated mean annual nitrogen total loads from 2002-2004 were substantially greater at the Spavinaw Creek stations than at Beaty Creek and increased in a downstream direction from Maysville to Colcord in Spavinaw Creek, with the load at the Colcord station about 2 times that of Maysville station. Estimated mean annual nitrogen base-flow loads at the Spavinaw Creek stations were about 5 to 11 times greater than base-flow loads at Beaty Creek. The runoff component of the annual nitrogen total load for Beaty Creek was 85 percent, whereas, at the Spavinaw Creek stations, the range in the runoff component was 60 to 66 percent. 
Estimated mean annual phosphorus total loads from 2002-2004 were greater at the Spavinaw Creek stations from Cherokee to Colcord than at Beaty Creek and increased in a downstream direction from Maysville to Colcord in Spavinaw Creek, with the load at the Colcord station about 2.5 times that of Maysville station. Estimated mean annual phosphorus base-flow loads at the Spavinaw Creek stations were about 2.5 to 19 times greater than at Beaty Creek. Phosphorus base-flow loads increased about 8 times from Maysville to Cherokee in Spavinaw Creek; the base-flow loads were about the same at the three downstream stations. The runoff component
Numerical analysis of rotating stall instabilities of a pump- turbine in pump mode
NASA Astrophysics Data System (ADS)
Xia, L. S.; Cheng, Y. G.; Zhang, X. X.; Yang, J. D.
2014-03-01
Rotating stall may occur at part-load flow of a pump-turbine in pump mode. Unstable flow structures developing under stall conditions can lead to a sudden drop of efficiency, high dynamic loads and even cavitation. CFD simulations of a pump-turbine model in pump mode were carried out to reveal the onset and development mechanisms of these unstable flow phenomena at part load. The simulation results for the energy-discharge and efficiency characteristics are in good agreement with those obtained by experiments. The further the operating point deviates from the design condition as the flow rate decreases, the more flow separation occurs within the vanes. Under specific conditions, four separation zones begin to progress along the circumference, rotating at a fraction of the impeller rotation rate. Rotating stall causes the flow in the vane diffuser channels to alternate between outward jet flow and blockage. Strong jets impact the spiral casing wall, causing high pressure pulsations. Severe separations of the stall cells disturb the flow, inducing periodic large-amplitude pressure fluctuations whose intensity differs at different spanwise positions of the guide vanes. The resulting rotating non-uniform pressure distributions on the circumference lead to dynamic forces on the impeller and guide vanes. The results show that the CFD simulations are capable of capturing the complicated flow-structure information needed for analysing the unstable characteristics of the pump mode at part load.
Hatzell, Kelsey B; Hatzell, Marta C; Cook, Kevin M; Boota, Muhammad; Housel, Gabrielle M; McBride, Alexander; Kumbur, E Caglan; Gogotsi, Yury
2015-03-03
Flow electrode capacitive deionization (FCDI) is an emerging area for continuous and scalable deionization, but the electrochemical and flow properties of the flow electrode need to be improved to minimize energy consumption. Chemical oxidation of granular activated carbon (AC) was examined here to study the role of surface heteroatoms on the rheology and electrochemical performance of a flow electrode (carbon slurry) for deionization processes. Moreover, it was demonstrated that higher mass densities could be used without increasing the energy required for pumping when using oxidized active material. High mass-loaded flow electrodes (28% carbon content) based on oxidized AC displayed similar viscosities (∼21 Pa s) to lower mass-loaded flow electrodes (20% carbon content) based on nonoxidized AC. The 40% increased mass loading (from 20% to 28%) resulted in a 25% increase in flow electrode gravimetric capacitance (from 65 to 83 F g^(-1)) without sacrificing flowability (viscosity). The electrical energy required to remove ∼18% of the ions (desalt) from the feed solution was observed to be significantly dependent on the mass loading and decreased (∼60%) from 92 ± 7 to 28 ± 2.7 J with increased mass densities from 5 to 23 wt %. It is shown that the surface chemistry of the active material in a flow electrode affects the electrical and pumping energy requirements of an FCDI system.
Marine Propulsion Load Emulation.
1985-06-01
single-entry centrifugal compressor mechanically coupled to a single-stage axial-flow turbine, two cross-connected can-type combustion chambers, and...an accessory-drive section. The power output section incorporates a second axial-flow turbine, reduction gears and output shaft, and is driven by the... [table-of-contents fragment: 4.7 Load Valve Characteristics, p. 38; 4.8 Photograph of Turbine Test Cell, p. 39]
A Mechanism for Stratifying Lava Flows
NASA Astrophysics Data System (ADS)
Rice, A.
2005-12-01
Relict lava flows (e.g., komatiites) are often reported to be zoned in the vertical, each zone separated by a sharp contact. Such stratifications in igneous flows, both intrusive and extrusive, can be treated as analogues of suspended loads of sediments in rivers and streams, and hence amenable to quantitative treatment derived for the hydraulic environment as long as dynamic similitude is assured. Situations typically encountered in the hydraulic environment are streams carrying a bed load at the bottom of the stream, the bed load separated by a sharp horizon from a sediment load carried above it. This sediment load may be topped by others of decreasing density as one moves to the surface of the flow, with perhaps the uppermost layer clear of any suspended matter. Rules exist for estimating the thickness D of these loads: one of them is given by D ~ 4.4 V^3/(r g c v_s), where V is the shear velocity or average velocity of the flow, r = (ρs - ρl)/ρl where ρs is the density of the suspended solid matter, ρl the density of the fluid, g the acceleration of gravity, c the concentration of the particulate content and v_s the settling velocity. The settling velocity is secured through Stokes' Law, and the velocity of the flow is given by Manning's equation, V = R^(2/3) S^(1/2)/n, where R is the hydraulic radius, S the gradient along which the fluid flows and n is the Manning Coefficient. In the igneous case, the bed load would be composed of primocrysts, i.e., of the first crystals to come out of solution as the flow cools along its run. This would leave the upper portions of the flow more evolved except perhaps for a quenched crust riding atop the flow. As the viscosity of the flow is dependent not only on temperature but on composition and crystal content, the mean velocity of each layer will be different from the layer above and below it. This requires shear at the interface of adjoining stratifications, which brings into play another mechanism: dispersive pressure (the Bagnold effect). Dispersive pressure will drive primocrysts into boundary layers such as that attending the bottom of the flow and at those separating stratifications. For instance, if the primocrysts were spinels, then a Cr high might be expected at the interfaces separating stratifications. Since the melt throughout is evolving as it moves downstream, compositional variations along strike (as well as in the vertical) might be expected. Application of the above notions falls within the confines of field observation.
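A worked example of the two quoted formulas, with Stokes settling for the primocrysts, is given below; every input value is a hypothetical lava-flow figure chosen only to exercise the arithmetic.

```python
import math

# Suspended-load layer thickness D ~ 4.4 V^3 / (r g c v_s), with V from
# Manning's equation and v_s from Stokes' law. All inputs are illustrative.
rho_s, rho_l = 3300.0, 2700.0       # crystal and melt densities, kg/m^3
mu = 100.0                           # melt viscosity, Pa*s
a = 0.02                             # crystal radius, m
g = 9.81

r = (rho_s - rho_l) / rho_l                          # relative density excess
v_s = 2.0 * (rho_s - rho_l) * g * a**2 / (9.0 * mu)  # Stokes settling, m/s

R, S, n = 0.5, 0.001, 0.1            # hydraulic radius, slope, Manning coeff.
V = R ** (2.0 / 3.0) * math.sqrt(S) / n              # mean velocity, m/s

c = 0.3                              # particulate concentration (fractional)
D = 4.4 * V**3 / (r * g * c * v_s)   # layer thickness, m
print(f"v_s = {v_s:.2e} m/s, V = {V:.2f} m/s, D = {D:.1f} m")
```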
Nitrate and phosphorus transport through subsurface drains under free and controlled drainage.
Saadat, Samaneh; Bowling, Laura; Frankenberger, Jane; Kladivko, Eileen
2018-05-28
Controlled drainage (CD) is a structural conservation practice in which the drainage outlet is managed in order to reduce drain flow volume and nutrient loads to water bodies. The goal of this study was to evaluate the potential of CD to improve water quality for two different seasons and levels of outlet control, using ten years of data collected from an agricultural drained field in eastern Indiana with two sets of paired plots. The Rank Sum test was used to quantify the impact of CD on cumulative annual drain flow and nitrate-N and phosphorus loads. CD plots had a statistically significant (at the 5% level) lower annual drain flow (eastern pair: 39%; western pair: 25%) and nitrate load (eastern pair: 43%; western pair: 26%) compared to free-draining (FD) plots, while annual soluble reactive phosphorus (SRP) and total phosphorus (TP) loads were not significantly different. An ANCOVA model was used to evaluate the impact of CD on daily drain flow, nitrate-N, SRP and TP concentrations and loads during the two different periods of control. The average percent reduction of daily drain flow was 68% in the eastern pair and 58% in the western pair during controlled drainage at the higher outlet level (winter), and 64% and 58% at the lower outlet level (summer) in the eastern and western pairs, respectively. Nitrate load reduction was similar to drain flow reduction, while the effect of CD on SRP and TP loads was not significant except for an increase in SRP in one pair. These results from a decade of field monitoring and two different statistical methods enhance our knowledge about the water quality impacts of CD systems and support this management practice as a reliable way of reducing nitrate loss through subsurface drains, mainly through flow reduction.
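The annual-scale comparison reduces to a two-sample rank-sum test on paired-plot totals, e.g. as below; the ten annual values per treatment are invented for illustration.

```python
import numpy as np
from scipy.stats import ranksums

# Rank Sum (Wilcoxon) comparison of annual drain-flow totals between the
# controlled-drainage (CD) and free-draining (FD) plots. Synthetic values.
cd_flow = np.array([180, 210, 150, 195, 170, 160, 200, 175, 185, 165])  # mm
fd_flow = np.array([290, 330, 260, 310, 280, 270, 320, 300, 295, 275])  # mm

stat, p = ranksums(cd_flow, fd_flow)
reduction = 1.0 - cd_flow.mean() / fd_flow.mean()
print(f"rank-sum p-value = {p:.4f}, mean annual flow reduction = {reduction:.0%}")
```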
Commercial absorption chiller models for evaluation of control strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koeppel, E.A.; Klein, S.A.; Mitchell, J.W.
1995-08-01
A steady-state computer simulation model of a direct-fired double-effect water-lithium bromide absorption chiller in the parallel-flow configuration was developed from first principles. Unknown model parameters such as heat transfer coefficients were determined by matching the model's calculated state points and coefficient of performance (COP) against nominal full-load operating data and COPs obtained from a manufacturer's catalog. The model compares favorably with the manufacturer's performance ratings for varying water circuit (chilled and cooling) temperatures at full-load conditions and for chiller part-load performance. The model was used (1) to investigate the effect of varying the water circuit flow rates with the chiller load and (2) to optimize chiller part-load performance with respect to the distribution and flow of the weak solution.
Probabilistic Multi-Hazard Assessment of Dry Cask Structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bencturk, Bora; Padgett, Jamie; Uddin, Rizwan
systems the concrete shall not only provide shielding but also ensures stability of the upright canister, facilitates anchoring, allows ventilation, and provides physical protection against theft, severe weather and natural (seismic) as well as man-made events (blast incidents). Given the need to remain functional for 40 years or even longer in the case of interim storage, the concrete overpack and the internal canister components need to be evaluated with regard to their long-term ability to perform their intended design functions. Just as evidenced by deteriorating concrete bridges, there are reported visible degradation mechanisms of dry storage systems, especially when highly corrosive environments are considered in maritime locations. The degradation of reinforced concrete is caused by multiple physical and chemical mechanisms, which may be summarized under the heading of environmental aging. The underlying hygro-thermal transport processes are accelerated by irradiation effects; hence creep and shrinkage need to include the effect of chloride penetration, alkali-aggregate reaction as well as corrosion of the reinforcing steel. In light of the above, the two main objectives of this project are to (1) develop a probabilistic multi-hazard assessment framework, and (2) through experimental and numerical research perform a comprehensive assessment under combined earthquake loads and aging-induced deterioration, which will also provide data for the development and validation of the probabilistic framework.
Clogging and jamming transitions in periodic obstacle arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Hong; Reichhardt, Charles; Olson Reichhardt, Cynthia Jane
2017-03-29
We numerically examine clogging transitions for bidisperse disks flowing through a two-dimensional periodic obstacle array. Here, we show that clogging is a probabilistic event that occurs through a transition from a homogeneous flowing state to a heterogeneous or phase-separated jammed state where the disks form dense connected clusters. The probability for clogging to occur during a fixed time increases with increasing particle packing and obstacle number. For driving at different angles with respect to the symmetry direction of the obstacle array, we show that certain directions have a higher clogging susceptibility. It is also possible to have a size-specific clogging transition in which one disk size becomes completely immobile while the other disk size continues to flow.
Jack Lewis; Sylvia R. Mori; Elizabeth T. Keppeler; Robert R. Ziemer
2001-01-01
Models are fit to 11 years of storm peak flows, flow volumes, and suspended sediment loads on a network of 14 stream gaging stations in the North Fork Caspar Creek, a 473-ha coastal watershed bearing a second-growth forest of redwood and Douglas-fir. For the first 4 years of monitoring, the watershed was in a relatively undisturbed state, having last been...
Steady internal flow and aerodynamic loads analysis of shuttle thermal protection system
NASA Technical Reports Server (NTRS)
Petley, D. H.; Alexander, W., Jr.; Ivey, G. W., Jr.; Kerr, P. A.
1984-01-01
An analytical model for calculation of ascent steady state tile loading was developed and validated with wind tunnel data. The analytical model is described and results are given. Results are given for loading due to shocks and skin friction. The analysis included calculation of internal flow (porous media flow and channel flow) to obtain pressures and integration of the pressures to obtain forces and moments on an insulation tile. A heat transfer program was modified by using analogies between heat transfer and fluid flow so that it could be used for internal flow calculation. The type of insulation tile considered was undensified reusable surface insulation (RSI) without gap fillers, and the location studied was the lower surface of the orbiter. Force and moment results are reported for parameter variations on surface pressure distribution, gap sizes, insulation permeability, and tile thickness.
Baker, Ronald J.; Wieben, Christine M.; Lathrop, Richard G.; Nicholson, Robert S.
2014-01-01
Concentrations, loads, and yields of nutrients (total nitrogen and total phosphorus) were calculated for the Barnegat Bay-Little Egg Harbor (BB-LEH) watershed for 1989–2011 at annual and seasonal (growing and nongrowing) time scales. Concentrations, loads, and yields were calculated at three spatial scales: for each of the 81 subbasins specified by 14-digit hydrologic unit codes (HUC-14s); for each of the three BB-LEH watershed segments, which coincide with segmentation of the BB-LEH estuary; and for the entire BB-LEH watershed. Base-flow and runoff values were calculated separately and were combined to provide total values. Available surface-water-quality data for all streams in the BB-LEH watershed for 1980–2011 were compiled from existing datasets and quality assured. Precipitation and streamflow data were used to distinguish between water-quality samples that were collected during base-flow conditions and those that were collected during runoff conditions. Base-flow separation of hydrographs of six streams in the BB-LEH watershed indicated that base flow accounts for about 72 to 94 percent of total flow in streams in the watershed. Base-flow mean concentrations (BMCs) of total nitrogen (TN) and total phosphorus (TP) for each HUC-14 subbasin were calculated from relations between land use and measured base-flow concentrations. These relations were developed from multiple linear regression models determined from water-quality data collected at sampling stations in the BB-LEH watershed under base-flow conditions and land-use percentages in the contributing drainage basins. The total watershed base-flow volume was estimated for each year and season from continuous streamflow records for 1989–2011 and relations between precipitation and streamflow during base-flow conditions. For each year and season, the base-flow load and yield were then calculated for each HUC-14 subbasin from the BMCs, total base-flow volume, and drainage area. The watershed-loading application PLOAD was used to calculate runoff concentrations, loads, and yields of TN and TP at the HUC-14 scale. Flow-weighted event-mean concentrations (EMCs) for runoff were developed for each major land-use type in the watershed using storm sampling data from four streams in the BB-LEH watershed and three streams outside the watershed. The EMCs were developed separately for the growing and nongrowing seasons, and were typically greater during the growing season. The EMCs, along with annual and seasonal precipitation amounts and percent imperviousness associated with land-use types, were used as inputs to PLOAD to calculate annual and seasonal runoff concentrations, loads, and yields at the HUC-14 scale. Over the period of study (1989–2011), total surface-water loads (base flow plus runoff) for the entire BB-LEH watershed for TN ranged from about 455,000 kilograms (kg) as N (1995) to 857,000 kg as N (2010). For TP, total loads for the watershed ranged from about 17,000 (1995) to 32,000 kg as P (2010). On average, the north segment accounted for about 66 percent of the annual TN load and 63 percent of the annual TP load, and the central and south segments each accounted for less than 20 percent of the nutrient loads. Loads and yields were strongly associated with precipitation patterns, ensuing hydrologic conditions, and land use. HUC-14 subbasins with the highest yields of nutrients are concentrated in the northern part of the watershed, and have the highest percentages of urban or agricultural land use. 
Subbasins with the lowest TN and TP yields are dominated by forest cover. Percentages of turf (lawn) cover and nonturf cover were estimated for the watershed. Of the developed land in the watershed, nearly one quarter (24.9 percent) was mapped as turf cover. Because percent turf is strongly related to percent developed land, the amount of development can be considered a reasonable predictor of the amount of turf cover in the watershed. In the BB-LEH watershed, calculated concentrations of TN and TP were greater for developed–turf areas than for developed–nonturf areas, which, in turn, were greater than those for undeveloped areas.
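The runoff side of the calculation described above follows PLOAD's EMC-based export method. A minimal sketch of that style of calculation is shown below; the land-use EMCs and imperviousness values are illustrative assumptions, and the Rv relation is the imperviousness-based runoff coefficient commonly used with PLOAD's simple method, not a number from the study.

```python
# Hedged sketch of an EMC-based "simple method" runoff-load calculation of the
# kind PLOAD implements; all coefficients below are illustrative assumptions.

LANDUSE = {
    # land use: (EMC for TN in mg/L, percent impervious) -- hypothetical values
    "urban":       (2.2, 35.0),
    "agriculture": (3.1,  2.0),
    "forest":      (0.9,  1.0),
}

def runoff_load_kg(precip_m, area_m2, landuse):
    """Seasonal runoff load of TN (kg) for one land-use patch."""
    emc_mg_l, pct_imperv = LANDUSE[landuse]
    rv = 0.05 + 0.009 * pct_imperv                       # runoff coefficient from imperviousness
    runoff_volume_l = precip_m * rv * area_m2 * 1000.0   # runoff volume, m3 -> L
    return emc_mg_l * runoff_volume_l / 1e6              # mg -> kg

# Example: growing-season load for a 2 km2 urban subbasin with 0.6 m of rain
print(round(runoff_load_kg(0.6, 2e6, "urban"), 1), "kg TN")
```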
NASA Astrophysics Data System (ADS)
Rozemeijer, J.; Jansen, S.; de Jonge, H.; Lindblad Vendelboe, A.
2014-12-01
Considering their crucial role in water and solute transport, enhanced monitoring and modeling of agricultural subsurface tube drain systems is important for adequate water quality management. For example, previous work in lowland agricultural catchments has shown that subsurface tube drain effluent contributed up to 80% of the annual discharge and 90-92% of the annual NO3 loads from agricultural fields towards the surface water. However, existing monitoring techniques for flow and contaminant loads from tube drains are expensive and labor-intensive. Therefore, despite the unambiguous relevance of this transport route, tube drain monitoring data are scarce. The presented study aimed to develop a cheap, simple, and robust method to monitor loads from tube drains. We introduce the Flowcap, which can be attached to the outlet of tube drains and registers total flow, contaminant loads, and flow-averaged concentrations. The Flowcap builds on existing SorbiCells, a modern passive sampling technique that measures average concentrations over longer periods of time (days to months) for various substances. Mounting SorbiCells in the Flowcap allows a flow-proportional fraction of the drain effluent to be sampled from the main stream. Laboratory testing yielded good linear relations (R-squared of 0.98) between drainage flow rates and sampling rates. The Flowcap was tested in practice for measuring NO3 loads from two agricultural fields and one glasshouse in the Netherlands. The Flowcap registers contaminant loads from tube drains without any need for housing, electricity, or maintenance. This enables large-scale monitoring of non-point contaminant loads via tube drains, which would facilitate the improvement of contaminant transport models and would yield valuable information for the selection and evaluation of mitigation options to improve water quality.
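The load arithmetic behind a flow-proportional sampler is simple: if the sampled volume is a fixed fraction of total flow, the sampler's mean concentration is flow-weighted, and load is that concentration times the registered volume. A minimal sketch, with illustrative numbers rather than values from the study:

```python
# Minimal sketch of recovering a drain-outlet NO3 load from a flow-proportional
# passive sampler: the sampler's average concentration is flow-weighted, so
# load = concentration x total discharged volume. Numbers are illustrative.

def drain_load_g(c_sampler_mg_l: float, total_flow_m3: float) -> float:
    """NO3 load (g) over the deployment period; mg/L x m3 = g."""
    return c_sampler_mg_l * total_flow_m3

total_flow_m3 = 420.0   # registered total drain flow over the deployment
c_flow_avg = 11.3       # flow-averaged NO3 concentration from the SorbiCell, mg/L
print(f"NO3 load: {drain_load_g(c_flow_avg, total_flow_m3)/1000:.1f} kg")
```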
Huffman, Brad A.; Hazell, William F.; Oblinger, Carolyn J.
2017-09-06
Federal, State, and local agencies and organizations have expressed concerns regarding the detrimental effects of excessive sediment transport on aquatic resources and endangered species populations in the upper Little Tennessee River and some of its tributaries. In addition, the storage volume of Lake Emory, which is necessary for flood control and power generation, has been depleted by sediment deposition. To help address these concerns, a 2-year study was conducted in the upper Little Tennessee River Basin to characterize the ambient suspended-sediment concentrations and suspended-sediment loads upstream and downstream from Lake Emory in Franklin, North Carolina. The study was conducted by the U.S. Geological Survey in cooperation with Duke Energy. Suspended-sediment samples were collected periodically, and time series of stage and turbidity data were measured from December 2013 to January 2016 upstream and downstream from Lake Emory. The stage data were used to compute time-series streamflow. Suspended-sediment samples, along with time-series streamflow and turbidity data, were used to develop regression models that were used to estimate time-series suspended-sediment concentrations for the 2014 and 2015 calendar years. These concentrations, along with streamflow data, were used to compute suspended-sediment loads. Selected suspended-sediment samples were collected for analysis of particle-size distribution, with emphasis on high-flow events. Bed-load samples were also collected upstream from Lake Emory. The estimated annual suspended-sediment loads (yields) for the upstream site for the 2014 and 2015 calendar years were 27,000 short tons (92 short tons per square mile) and 63,300 short tons (215 short tons per square mile), respectively. The annual suspended-sediment loads (yields) for the downstream site for 2014 and 2015 were 24,200 short tons (75 short tons per square mile) and 94,300 short tons (292 short tons per square mile), respectively. Overall, the suspended-sediment load at the downstream site was about 28,300 short tons greater than at the upstream site over the study period. As expected, high-flow events (the top 5 percent of daily mean flows) accounted for the majority of the sediment load: 80 percent at the upstream site and 90 percent at the downstream site. A similar relation between turbidity (the top 5 percent of daily mean turbidity) and high loads was also noted. In general, when instantaneous streamflows at the upstream site exceeded 5,000 cubic feet per second, increased daily loads were computed at the downstream site. During low to moderate flows, estimated suspended-sediment loads were lower at the downstream site when compared to the upstream site, which suggests that sediment deposition may be occurring in the intervening reach during those conditions. During the high-flow events, the estimated suspended-sediment loads were higher at the downstream site; however, it is impossible to say with certainty whether the increase in loading was due to scouring of lake sediment, contributions from the additional source area, model error, or a combination of one or more of these factors. The computed loads for a one-week period (December 24–31, 2015), during which the two largest high-flow events of the study period occurred, were approximately 52 percent of the 2015 annual sediment load (36 percent of 2-year load) at the upstream site and approximately 72 percent of the 2015 annual sediment load (57 percent of 2-year load) at the downstream site. 
Six bedload samples were collected during three events: two high-flow events and one base-flow event. The contribution of bedload to the total sediment load was determined to be insignificant for sampled flows. In general, streamflows for long-term streamgages in the study area were below normal for the majority of the study period; however, flows during the last 3 months of the study period were above normal, including the extreme events during the last week of the study period.
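For readers unfamiliar with the surrogate-regression method summarized above, a minimal sketch follows; it fits log-SSC against log-turbidity and log-streamflow on synthetic data and applies the standard USGS unit conversion for daily loads. The coefficients are not those of the report, and the report's retransformation bias corrections are omitted.

```python
# Hedged sketch of a turbidity/streamflow surrogate regression for
# suspended-sediment concentration (SSC), fit on synthetic samples.
import numpy as np

rng = np.random.default_rng(1)
n = 60                                   # periodic samples
turb = rng.lognormal(2.0, 0.8, n)        # turbidity, FNU
q = rng.lognormal(5.0, 0.6, n)           # streamflow, ft3/s
ssc = np.exp(0.5 + 0.8*np.log(turb) + 0.3*np.log(q) + rng.normal(0, 0.2, n))

# Ordinary least squares on the log-transformed variables
X = np.column_stack([np.ones(n), np.log(turb), np.log(q)])
beta, *_ = np.linalg.lstsq(X, np.log(ssc), rcond=None)

def est_ssc(turb_t, q_t):
    """Estimated SSC (mg/L) from continuous turbidity and streamflow."""
    return np.exp(beta[0] + beta[1]*np.log(turb_t) + beta[2]*np.log(q_t))

def daily_load_tons(turb_t, q_t):
    """Daily load in short tons: SSC (mg/L) x Q (ft3/s) x 0.0027 (USGS factor)."""
    return est_ssc(turb_t, q_t) * q_t * 0.0027

print(daily_load_tons(25.0, 2000.0))
```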
Lee, C H; Sapuan, S M; Lee, J H; Hassan, M R
2016-01-01
A study of the melt volume flow rate (MVR) and the melt flow rate (MFR) of kenaf fibre (KF) reinforced Floreon (FLO) and magnesium hydroxide (MH) biocomposites under different temperatures (160-180 °C) and weight loadings (2.16, 5, 10 kg) is presented in this paper. FLO has the lowest values of MFR and MVR. Adding KF or MH increased the melt flow properties (MVR and MFR), owing to hydrolytic degradation of the polylactic acid in FLO. Reduced entanglement density at high temperature, shear thinning, and wall slip velocity are possible causes of the higher melt flow properties. Increasing the KF loading raised the melt flow properties, whereas higher MH contents created stronger bonding and greater resistance to macromolecular chain flow, so lower melt flow properties were recorded. However, the melt flow behaviour of the KF-reinforced FLO/MH biocomposites was complicated: KF-KF and KF-MH collisions become more probable at higher fibre and filler loadings, and the increased collision frequency lowers the melt flow properties.
Clark, David W.; Skinner, Kenneth D.; Pollock, David W.
2006-01-01
A flow and transport model was created with a graphical user interface to simplify the evaluation of nitrogen loading and nitrate transport in the mid-Snake region in south-central Idaho. This model and interface package, the Snake River Nitrate Scenario Simulator, uses the U.S. Geological Survey's MODFLOW 2000 and MOC3D models. The interface, which is enabled for use with geographic information systems (GIS), was created using ESRI's royalty-free MapObjects LT software. The interface lets users view initial nitrogen-loading conditions (representing conditions as of 1998), alter the nitrogen loading within selected zones by specifying a multiplication factor and applying it to the initial condition, run the flow and transport model, and view a graphical representation of the modeling results. The flow and transport model of the Snake River Nitrate Scenario Simulator was created by rediscretizing and recalibrating a clipped portion of an existing regional flow model. The new subregional model was recalibrated with newly available water-level data and spring and ground-water nitrate concentration data for the study area. An updated nitrogen input GIS layer controls the application of nitrogen to the flow and transport model. Users can alter the nitrogen application to the flow and transport model by altering the nitrogen load in predefined spatial zones contained within similar political, hydrologic, and size-constrained boundaries.
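Conceptually, the scenario mechanic the interface exposes reduces to multiplying a baseline loading field by user-chosen zone factors. The sketch below is a minimal illustration with a NumPy array standing in for the GIS layer; the zone ids and factors are hypothetical, and the real tool drives MODFLOW 2000/MOC3D through GIS layers rather than an array.

```python
# Minimal sketch of zone-based scenario editing of a nitrogen-loading field:
# start from a baseline grid, scale the cells in selected zones, and hand the
# adjusted field to the transport model. All values are hypothetical.
import numpy as np

baseline = np.full((4, 5), 10.0)   # baseline N loading per cell (e.g., kg/ha/yr)
zones = np.array([[1, 1, 2, 2, 3],
                  [1, 1, 2, 2, 3],
                  [4, 4, 2, 3, 3],
                  [4, 4, 4, 3, 3]])

factors = {2: 0.5, 3: 1.2}         # scenario: halve zone 2, raise zone 3 by 20%

scenario = baseline.copy()
for zone_id, f in factors.items():
    scenario[zones == zone_id] *= f

print(scenario)                    # adjusted loading field for the model run
```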
Evidence of accumulated stress in Achilles and anterior knee tendons in elite badminton players.
Boesen, Anders Ploug; Boesen, Morten Ilum; Koenig, Merete Juhl; Bliddal, Henning; Torp-Pedersen, Soren; Langberg, Henning
2011-01-01
Tendon-related injuries are a major problem, but the aetiology of tendinopathies is unknown. In tendinopathies as well as during unaccustomed loading, intra-tendinous flow can be detected, indicating that extensive loading can provoke intra-tendinous flow. The aim of the present study was to evaluate the vascular response, as indicated by colour Doppler (CD) activity, in both the Achilles and patella tendon after loading during high-level badminton matches. The Achilles tendon was subdivided into a mid-tendon, pre-insertional, and insertional region and the anterior knee tendons into a quadriceps, patella, and tuberositas region. Intra-tendinous flow was measured using both a semi-quantitative grading system (CD grading) and a quantitative scoring system (CF) on colour Doppler. Intra-tendinous flow in the Achilles and anterior knee tendons was examined in fourteen singles players before the tournament and after the 1st and 2nd matches, on both the dominant and non-dominant sides. All players had abnormal intra-tendinous flow (colour Doppler ≥ grade 2) in at least one tendon in at least one scan during the tournament. At baseline, only two of the 14 players had normal flow in all the tendons examined. After the 1st match, tendencies toward higher intra-tendinous flow were observed in both the dominant patella tendon and the non-dominant quadriceps tendon (P-values n.s.). After the 2nd match, intra-tendinous flow was significantly increased in the dominant patella tendon (P = 0.009). In all other locations, there was a trend towards a stepwise increase in intra-tendinous flow. The preliminary results indicate that a high amount of intra-tendinous flow is present in elite badminton players at baseline and increases after repetitive loading, especially in the patella tendon of the dominant leg. Colour Doppler measurement can be used to determine changes in intra-tendinous flow after repetitive loading.
Logistics Modeling for Lunar Exploration Systems
NASA Technical Reports Server (NTRS)
Andraschko, Mark R.; Merrill, R. Gabe; Earle, Kevin D.
2008-01-01
The extensive logistics required to support extended crewed operations in space make effective modeling of logistics requirements and deployment critical to predicting the behavior of human lunar exploration systems. This paper discusses the software that has been developed as part of the Campaign Manifest Analysis Tool in support of strategic analysis activities under the Constellation Architecture Team - Lunar. The described logistics module enables definition of logistics requirements across multiple surface locations and allows for the transfer of logistics between those locations. A key feature of the module is the loading algorithm that is used to efficiently load logistics by type into carriers and then onto landers. Attention is given to the capabilities and limitations of this loading algorithm, particularly with regard to surface transfers. These capabilities are described within the context of the object-oriented software implementation, with details provided on the applicability of this approach to modeling other human exploration scenarios. Some challenges of incorporating probabilistic methods into this type of logistics analysis model are discussed at a high level.
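As an illustration of the kind of type-aware, greedy loading pass described above (a sketch under stated assumptions, not the CMAT implementation), consider:

```python
# Hedged sketch of a greedy loading pass: sort logistics items by mass,
# first-fit them into carriers segregated by type; the same pattern would then
# first-fit carriers onto landers. Classes and capacities are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Carrier:
    kind: str
    capacity: float
    items: list = field(default_factory=list)
    def used(self):
        return sum(m for _, m in self.items)

def load_items(items, carriers):
    """items: list of (name, kind, mass). Returns the items that did not fit."""
    leftovers = []
    for name, kind, mass in sorted(items, key=lambda it: -it[2]):
        for c in carriers:
            if c.kind == kind and c.used() + mass <= c.capacity:
                c.items.append((name, mass))
                break
        else:
            leftovers.append((name, kind, mass))
    return leftovers

carriers = [Carrier("pressurized", 500.0), Carrier("unpressurized", 800.0)]
items = [("food", "pressurized", 300.0), ("spares", "unpressurized", 450.0),
         ("science", "pressurized", 250.0)]
print(load_items(items, carriers))   # -> [('science', 'pressurized', 250.0)]
```

Unfit items surfaced by such a pass are exactly what drives manifest iteration: either another carrier is added or the item is deferred to a later surface transfer.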
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Pooja Nitin; Shin, Yung C.; Sun, Tao
2017-10-03
Synchrotron X-rays are integrated with a modified Kolsky tension bar to conduct in situ tracking of the grain refinement mechanism operating during the dynamic deformation of metals. Copper with an initial average grain size of 36 μm is refined to 6.3 μm when loaded at a constant high strain rate of 1200 s⁻¹. The synchrotron measurements revealed the temporal evolution of the grain refinement mechanism in terms of the initiation and rate of refinement throughout the loading test. A multiscale coupled probabilistic cellular automata based recrystallization model has been developed to predict the microstructural evolution occurring during dynamic deformation processes. The model accurately predicts the initiation of the grain refinement mechanism with a predicted final average grain size of 2.4 μm. As a result, the model also accurately predicts the temporal evolution in terms of the initiation and extent of refinement when compared with the experimental results.
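For intuition, a toy probabilistic cellular-automaton recrystallization step is sketched below; it captures only the generic nucleation-and-growth logic of such models, not the multiscale coupling, strain-rate dependence, or calibration of the model described above, and all probabilities are illustrative.

```python
# Toy 2-D probabilistic cellular automaton for recrystallization: cells
# nucleate new grains with a stored-energy-dependent probability, and
# recrystallized neighbors stochastically consume high-energy cells.
import numpy as np

rng = np.random.default_rng(0)
N = 64
grain = np.zeros((N, N), dtype=int)        # grain id per cell (0 = parent grain)
energy = rng.uniform(0.5, 1.0, (N, N))     # normalized stored energy

next_id = 1
for step in range(30):
    # Nucleation: probability grows with stored energy
    nucleate = rng.random((N, N)) < 0.002 * energy
    for i, j in zip(*np.where(nucleate)):
        grain[i, j], energy[i, j] = next_id, 0.0
        next_id += 1
    # Growth: a recrystallized neighbor captures a high-energy parent cell
    new = grain.copy()
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = np.roll(grain, (di, dj), axis=(0, 1))
        capture = (grain == 0) & (nb > 0) & (rng.random((N, N)) < 0.5 * energy)
        new[capture] = nb[capture]
        energy[capture] = 0.0
    grain = new

print("recrystallized fraction:", (grain > 0).mean())
```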
Grubbs, J.W.; Pittman, J.R.
1997-01-01
Water flow and quality data were collected from December 1994 to September 1995 to evaluate variations in discharge, water quality, and chemical fluxes (loads) through Perdido Bay, Florida. Data were collected at a cross section parallel to the U.S. Highway 98 bridge. Discharges measured with an acoustic Doppler current profiler (ADCP) and computed from stage-area and velocity ratings varied roughly between +10,000 and -10,000 cubic feet per second during a typical tidal cycle. Large reversals in flow direction occurred rapidly (less than 1 hour), and complete reversals (resulting in near peak net-upstream or downstream discharges) occurred within a few hours of slack water. Observations of simultaneous upstream and downstream flow (bidirectional flow) were quite common in the ADCP measurements, with opposing directions of flow occurring predominantly in vertical layers. Continuous (every 15 minutes) discharge data were computed for the period from August 18, 1995, to September 28, 1995, and filtered daily mean discharge values were computed for the period from August 19 to September 26, 1995. Data were not computed prior to August 18, 1995, either because of missing data or because the velocity rating was poorly defined (because of insufficient data) for the period prior to landfall of Hurricane Erin (August 3, 1995). The results of the study indicate that acoustical techniques can yield useful estimates of continuous (instantaneous) discharge in Perdido Bay. Useful estimates of average daily net flow rates can also be obtained, but the accuracy of these estimates will be limited by small rating shifts that introduce bias into the instantaneous values that are used to compute the net flows. Instantaneous loads of total nitrogen ranged from -180 to 220 grams per second for the samples collected during the study, and instantaneous loads of total phosphorus ranged from -10 to 11 grams per second (negative loads indicate net upstream transport). The chloride concentrations from the water samples collected from Perdido Bay indicated a significant amount of mixing of saltwater and freshwater. Mixing effects could greatly reduce the accuracy of estimates of net loads of nutrients or other substances. The study results indicate that acoustical techniques can yield acceptable estimates of instantaneous loads in Perdido Bay. However, estimates of net loads should be interpreted with great caution and may have unacceptably large errors, especially when saltwater and freshwater concentrations differ greatly.
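For clarity, the sign convention for instantaneous loads quoted above reduces to the following; the concentration and discharge values are illustrative, not data from the study.

```python
# Minimal sketch of an instantaneous load with a tidal sign convention:
# load is discharge times concentration, and the sign of the reversing
# discharge carries through, so negative loads mean net upstream transport.

def instantaneous_load_g_s(q_ft3_s: float, c_mg_l: float) -> float:
    """Load in grams per second; q < 0 denotes upstream (flood-tide) flow."""
    q_l_s = q_ft3_s * 28.3168          # ft3/s -> L/s
    return q_l_s * c_mg_l / 1000.0     # mg/s -> g/s

# Flood tide with 0.8 mg/L total nitrogen: ~ -181 g/s (net upstream transport)
print(instantaneous_load_g_s(-8000.0, 0.8))
```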
NASA Astrophysics Data System (ADS)
Kim, Seokpum; Wei, Yaochi; Horie, Yasuyuki; Zhou, Min
2018-05-01
The design of new materials requires establishment of macroscopic measures of material performance as functions of microstructure. Traditionally, this process has been an empirical endeavor. An approach to computationally predict the probabilistic ignition thresholds of polymer-bonded explosives (PBXs) using mesoscale simulations is developed. The simulations explicitly account for microstructure, constituent properties, and interfacial responses and capture processes responsible for the development of hotspots and damage. The specific mechanisms tracked include viscoelasticity, viscoplasticity, fracture, post-fracture contact, frictional heating, and heat conduction. The probabilistic analysis uses sets of statistically similar microstructure samples to directly mimic relevant experiments for quantification of statistical variations of material behavior due to inherent material heterogeneities. The particular thresholds and ignition probabilities predicted are expressed in James-type and Walker-Wasley-type relations, leading to the establishment of explicit analytical expressions for the ignition probability as a function of loading. Specifically, the ignition thresholds corresponding to any given level of ignition probability and ignition probability maps are predicted for PBX 9404 for the loading regime of Up = 200-1200 m/s, where Up is the particle speed. The predicted results are in good agreement with available experimental measurements. A parametric study also shows that binder properties can significantly affect the macroscopic ignition behavior of PBXs. The capability to computationally predict the macroscopic engineering material response relations out of material microstructures and basic constituent and interfacial properties lends itself to the design of new materials as well as the analysis of existing materials.
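For context, the two threshold forms named above are commonly written as follows. This is a hedged reconstruction of the standard relations, not the paper's fitted expressions; the constants would be calibrated to the mesoscale simulation results.

```latex
% Hedged reconstruction of the named threshold forms (the paper's exact
% parameterization may differ). \Sigma = u_p^2/2 is the specific kinetic
% energy, E the energy fluence, P the shock pressure, \tau the pulse
% duration, and C, \Sigma_c, E_c calibrated constants.
\begin{align*}
  \text{Walker--Wasley type:} \quad & P^{2}\,\tau \;\ge\; C \\
  \text{James type:} \quad & \frac{\Sigma_c}{\Sigma} + \frac{E_c}{E} \;\le\; 1
\end{align*}
% An ignition-probability map then assigns each loading point (\Sigma, E) the
% fraction of statistically similar microstructure samples that ignite.
```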
Water Flow Testing and Unsteady Pressure Analysis of a Two-Bladed Liquid Oxidizer Pump Inducer
NASA Technical Reports Server (NTRS)
Schwarz, Jordan B.; Mulder, Andrew; Zoladz, Thomas
2011-01-01
The unsteady fluid dynamic performance of a cavitating two-bladed oxidizer turbopump inducer was characterized through sub-scale water flow testing. While testing a novel inlet duct design that included a cavitation suppression groove, unusual high-frequency pressure oscillations were observed. With potential implications for inducer blade loads, these high-frequency components were analyzed extensively in order to understand their origins and impacts to blade loading. Water flow testing provides a technique to determine pump performance without the costs and hazards associated with handling cryogenic propellants. Water has a similar density and Reynolds number to liquid oxygen. In a 70%-scale water flow test, the inducer-only pump performance was evaluated. Over a range of flow rates, the pump inlet pressure was gradually reduced, causing the flow to cavitate near the pump inducer. A nominal, smooth inducer inlet was tested, followed by an inlet duct with a circumferential groove designed to suppress cavitation. A subsequent 52%-scale water flow test in another facility evaluated the combined inducer-impeller pump performance. With the nominal inlet design, the inducer showed traditional cavitation and surge characteristics. Significant bearing loads were created by large side loads on the inducer during synchronous cavitation. The grooved inlet successfully mitigated these loads by greatly reducing synchronous cavitation; however, high-frequency pressure oscillations were observed over a range of frequencies. Analytical signal processing techniques showed these oscillations to be created by a rotating, multi-celled train of pressure pulses, and subsequent CFD analysis suggested that such pulses could be created by the interaction of rotating inducer blades with fluid trapped in a cavitation suppression groove. Despite their relatively low amplitude, these high-frequency pressure oscillations posed a design concern due to their sensitivity to flow conditions and test scale. The amplitude and frequency of oscillations varied considerably over the pump's operating space, making it difficult to predict blade loads.
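For context, the following is a hedged sketch of one standard way to identify a rotating, multi-celled pressure pattern from two circumferentially separated dynamic pressure sensors; the synthetic signals, sensor spacing, and SciPy-based processing are assumptions for illustration, not the test program's actual instrumentation or codes.

```python
# Cross-spectral phase between two circumferential pressure sensors at the
# oscillation frequency equals cell count x angular separation (sign depends
# on rotation direction), which identifies a rotating multi-celled pattern.
import numpy as np
from scipy.signal import csd

fs = 10240.0                      # sample rate, Hz
t = np.arange(0, 2.0, 1/fs)
n_cells, f_rot = 3, 120.0         # 3-cell pattern at 120 Hz -> 360 Hz tone
sep = np.deg2rad(30.0)            # sensor angular separation

p1 = np.sin(2*np.pi*n_cells*f_rot*t) \
     + 0.3*np.random.default_rng(0).normal(size=t.size)
p2 = np.sin(2*np.pi*n_cells*f_rot*t - n_cells*sep) \
     + 0.3*np.random.default_rng(1).normal(size=t.size)

f, Pxy = csd(p1, p2, fs=fs, nperseg=4096)
k = np.argmax(np.abs(Pxy))                  # dominant tone (~360 Hz)
phase = np.angle(Pxy[k])                    # magnitude ~ n_cells * sep
print(f"tone {f[k]:.0f} Hz, inferred cell count ~ {abs(phase)/sep:.1f}")
```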
Flow Separation Side Loads Excitation of Rocket Nozzle FEM
NASA Technical Reports Server (NTRS)
Smalley, Kurt B.; Brown, Andrew; Ruf, Joseph; Gilbert, John
2007-01-01
Modern rocket nozzles are designed to operate over a wide range of altitudes, and are also built with large aspect ratios to enable high efficiencies. Nozzles designed to operate over specific regions of a trajectory are being replaced in modern launch vehicles by those that are designed to operate from earth to orbit. This is happening in parallel with modern manufacturing and wall cooling techniques allowing for larger aspect ratio nozzles to be produced. Such nozzles, though operating over a large range of altitudes and ambient pressures, are typically designed for one specific altitude. Above that altitude the nozzle flow is 'underexpanded' and below that altitude, the nozzle flow is 'overexpanded'. In both conditions the nozzle produces less than the maximum possible thrust at that altitude. Usually the nozzle design altitude is well above sea level, leaving the nozzle flow in an overexpanded state for its start up as well as for its ground testing where, if it is a reusable nozzle such as the Space Shuttle Main Engine (SSME), the nozzle will operate for the majority of its life. Overexpansion in a rocket nozzle presents the critical, and sometimes design-driving, problem of flow separation induced side loads. To increase their understanding of nozzle side loads, engineers at MSFC began an investigation in 2000 into the phenomenon through a task entitled "Characterization and Accurate Modeling of Rocket Engine Nozzle Side Loads", led by A. Brown. The stated objective of this study was to develop a methodology to accurately predict the character and magnitude of nozzle side loads. The study included further hot-fire testing of the MC-1 engine, cold flow testing of subscale nozzles, CFD analyses of both hot-fire and cold flow nozzle testing, and finite element (FE) analysis of the MC-1 engine and cold-flow-tested nozzles. A follow-on task included an effort to formulate a simplified methodology for modeling a side load during a two-nodal-diameter fluid/structure interaction for a single moment in time.
Transient Side Load Analysis of Out-of-Round Film-Cooled Nozzle Extensions
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Lin, Jeff; Ruf, Joe; Guidos, Mike
2012-01-01
There was interest in understanding the impact of an out-of-round nozzle extension on the nozzle side load during transient startup operations. The out-of-round nozzle extension could be the result of asymmetric internal stresses, deformation induced by previous tests, or asymmetric loads induced by hardware attached to the nozzle. The objective of this study was therefore to computationally investigate the effect of out-of-round nozzle extension on the nozzle side loads during an engine startup transient. The rocket engine studied encompasses a regeneratively cooled chamber and nozzle, along with a film cooled nozzle extension. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, and transient inlet boundary flow properties derived from an engine system simulation. Six three-dimensional cases were performed with the out-of-roundness achieved by three different degrees of ovalization, elongated on the lateral y and z axes: one slightly out-of-round, one more out-of-round, and one significantly out-of-round. The results show that the separation line jump was the primary source of the peak side loads. Compared to the peak side load of the perfectly round nozzle, the peak side loads increased for the slightly and more ovalized nozzle extensions, and either increased or decreased for the two significantly ovalized nozzle extensions. A theory based on the counteraction between the destabilizing effect of an exacerbated asymmetrical flow caused by a lower degree of ovalization and the stabilizing effect of a more symmetrical flow, also created by ovalization, is presented to explain the observations obtained in this effort.
An Examination of Game-Based Learning from Theories of Flow Experience and Cognitive Load
ERIC Educational Resources Information Center
Lai, Chih-Hung; Chu, Chih-Ming; Liu, Hsiang-Hsuan; Yang, Shun-Bo; Chen, Wei-Hsuan
2013-01-01
This study aims to discuss whether game-based learning, with the integration of games and digital learning, could not only enhance the flow experience in learning but also achieve the same flow experience as in pure games. In addition, the authors examined whether game-based learning leads learners to report higher cognitive load. The…
Simulation of systems for shock wave/compression waves damping in technological plants
NASA Astrophysics Data System (ADS)
Sumskoi, S. I.; Sverchkov, A. M.; Lisanov, M. V.; Egorov, A. F.
2016-09-01
During the operation of pipeline systems, flow velocity in the pipeline can drop abruptly when pumps stop or valves close. As a result, compression waves appear in the pipeline system; these waves can propagate through the system and lead to its destruction. This phenomenon is called water hammer. The most dangerous situations occur when the flow is stopped quickly. Such abrupt flow cutoff often occurs in emergencies during the loading of liquid hydrocarbons into sea tankers: to prevent environmental pollution, the loading must be stopped urgently, and the flow is cut off within a few seconds. To limit the pressure rise in a pipeline system during water hammer, special protective systems (pressure relief systems) are installed. Approaches to modeling such protection systems are described in this paper, and a model of a specific pressure relief system is considered. It is shown that when the intensity of hydrocarbon loading onto a sea tanker is increased, the presence of the pressure relief system allows a safe loading mode to be maintained.
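For a sense of the magnitudes involved, the classical Joukowsky relation (a standard water-hammer result, not a formula quoted from this paper) links the surge pressure to the arrested velocity:

```latex
% Joukowsky relation: \Delta p is the pressure rise, \rho the liquid density,
% a the pressure-wave speed in the pipe, \Delta v the step change in velocity.
\[
  \Delta p \;=\; \rho \, a \, \Delta v
\]
% Illustrative numbers: for oil with \rho \approx 850 kg/m^3 and
% a \approx 1000 m/s, arresting \Delta v = 2 m/s gives
% \Delta p \approx 1.7 MPa, which is why relief systems are sized to bleed
% off the compression wave.
```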
NASA Astrophysics Data System (ADS)
Mottyll, S.; Skoda, R.
2015-12-01
A compressible inviscid flow solver with a barotropic cavitation model is applied to two different ultrasonic horn set-ups and compared to hydrophone, shadowgraphy, and erosion test data. The statistical analysis of single collapse events in wall-adjacent flow regions allows the determination of the flow aggressiveness via load collectives (cumulative event rate vs collapse pressure), which show an exponential decrease in agreement with studies on hydrodynamic cavitation [1]. A post-processing projection of event rate and collapse pressure onto a reference grid reduces the grid dependency significantly. In order to evaluate the erosion-sensitive areas, a statistical analysis of transient wall loads is utilised. Predicted erosion-sensitive areas as well as the temporal pressure and vapour volume evolution are in good agreement with the experimental data.
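A load collective of the kind described above can be assembled in a few lines; the sketch below uses a synthetic event list, so the rates and pressures are illustrative only.

```python
# Minimal load collective: bin detected collapse events by peak pressure and
# report the cumulative rate of events at or above each pressure threshold.
# An exponential decrease appears as a straight line on a semi-log plot.
import numpy as np

rng = np.random.default_rng(2)
duration_s = 0.5
collapse_pressures = rng.exponential(scale=20.0, size=5000)   # MPa, synthetic

thresholds = np.linspace(0, 120, 25)                          # MPa
cum_rate = [(collapse_pressures >= p).sum() / duration_s for p in thresholds]

for p, r in zip(thresholds[::6], cum_rate[::6]):
    print(f">= {p:5.1f} MPa : {r:9.0f} events/s")
```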
Phosphorus and suspended sediment load estimates for the Lower Boise River, Idaho, 1994-2002
Donato, Mary M.; MacCoy, Dorene E.
2004-01-01
The U.S. Geological Survey used LOADEST, newly developed load estimation software, to develop regression equations and estimate loads of total phosphorus (TP), dissolved orthophosphorus (OP), and suspended sediment (SS) from January 1994 through September 2002 at four sites on the lower Boise River: Boise River below Diversion Dam near Boise, Boise River at Glenwood Bridge at Boise, Boise River near Middleton, and Boise River near Parma. The objective was to help the Idaho Department of Environmental Quality develop and implement total maximum daily loads (TMDLs) by providing spatial and temporal resolution for phosphorus and sediment loads and enabling load estimates made by mass balance calculations to be refined and validated. Regression models for TP and OP generally were well fit on the basis of regression coefficients of determination (R²), but results varied in quality from site to site. The TP and OP results for Glenwood probably were affected by the upstream wastewater-treatment plant outlet, which provides a variable phosphorus input that is unrelated to river discharge. Regression models for SS generally were statistically well fit. Regression models for Middleton for all constituents, although statistically acceptable, were of limited usefulness because sparse and intermittent discharge data at that site caused many gaps in the resulting estimates. Although the models successfully simulated measured loads under predominant flow conditions, errors in TP and SS estimates at Middleton and in TP estimates at Parma were larger during high- and low-flow conditions. This shortcoming might be improved if additional concentration data for a wider range of flow conditions were available for calibrating the model. The average estimated daily TP load ranged from less than 250 pounds per day (lb/d) at Diversion to nearly 2,200 lb/d at Parma. Estimated TP loads at all four sites displayed cyclical variations coinciding with seasonal fluctuations in discharge. Estimated annual loads of TP ranged from less than 8 tons at Diversion to 570 tons at Parma. Annual loads of dissolved OP peaked in 1997 at all sites and were consistently higher at Parma than at the other sites. The ratio of OP to TP varied considerably throughout the year at all sites. Peaks in the OP:TP ratio occurred primarily when flows were at their lowest annual stages; estimated seasonal OP:TP ratios were highest in autumn at all sites. Conversely, when flows were high, the ratio was low, reflecting increased TP associated with particulate matter during high flows. Parma exhibited the highest OP:TP ratio during all seasons, at least 0.60 in spring and nearly 0.90 in autumn. Similar OP:TP ratios were estimated at Glenwood. Whereas the OP:TP ratio for Parma and Glenwood peaked in November or December, decreased from January through May, and increased again after June, estimates for Diversion showed nearly the opposite pattern: ratios were highest in July and lowest in January and February. This difference might reflect complex biological and geochemical processes involving nutrient cycling in Lucky Peak Lake, but further data are needed to substantiate this hypothesis. Estimated monthly average SS loads were highest at Diversion, about 400 tons per day (ton/d). Average annual loads from 1994 through 2002 were 144,000 tons at Diversion, 33,000 tons at Glenwood, and 88,000 tons at Parma. Estimated SS loads peaked in the spring at all sites, coinciding with high flows. 
Increases in TP in the reach from Diversion to Glenwood ranged from 200 to 350 lb/d. Decreases in TP were small in this reach only during high flows in January and February 1997. Decreases in SS were large during high-flow conditions, indicating sediment deposition in the reach. Intermittent data at Middleton indicated that increases and decreases in TP in the reach from Glenwood to Middleton were during low- and high-flow conditions, respectively. All constituents increased in the r
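For reference, LOADEST-style rating-curve models are multiple linear regressions on transformed discharge and time; the sketch below fits a seven-term form of that kind to synthetic data. LOADEST's AMLE estimation and retransformation bias corrections are omitted, and the coefficients are not those of this study.

```python
# Hedged sketch of a LOADEST-style rating-curve regression:
# ln(load) ~ lnQ, lnQ^2, seasonal sine/cosine, and linear/quadratic time.
import numpy as np

rng = np.random.default_rng(3)
n = 200
q = rng.lognormal(4.0, 0.8, n)                 # discharge samples
dt = rng.uniform(0, 1, n)                      # decimal time
lnq = np.log(q) - np.log(q).mean()             # centered, as LOADEST does
true = 1.0 + 0.9*lnq + 0.05*lnq**2 + 0.3*np.sin(2*np.pi*dt) - 0.2*np.cos(2*np.pi*dt)
lnload = true + rng.normal(0, 0.3, n)          # synthetic "observed" ln(load)

X = np.column_stack([np.ones(n), lnq, lnq**2,
                     np.sin(2*np.pi*dt), np.cos(2*np.pi*dt), dt, dt**2])
beta, *_ = np.linalg.lstsq(X, lnload, rcond=None)
print("fitted coefficients:", np.round(beta, 2))
```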
Magnus effects at high angles of attack and critical Reynolds numbers
NASA Technical Reports Server (NTRS)
Seginer, A.; Ringel, M.
1983-01-01
The Magnus force and moment experienced by a yawed, spinning cylinder were studied experimentally in low speed and subsonic flows at high angles of attack and critical Reynolds numbers. Flow-field visualization aided in describing a flow model that divides the Magnus phenomenon into a subcritical region, where reverse Magnus loads are experienced, and a supercritical region where these loads are not encountered. The roles of the spin rate, angle of attack, and crossflow Reynolds number in determining the boundaries of the subcritical region and the variations of the Magnus loads were studied.
Solid-loaded flows: applications in technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molerus, O.
1983-01-01
The evaluation of experiments and the representation of the resulting data by nondimensional groups defined ad hoc largely governs the treatment of problems arising with solid-loaded flows in practice. Without doubt, this is a result of the very complex nature of solid-loaded flows and, consequently, empiricism tends to prevail, more or less. To overcome this situation, two sets of nondimensional groups, which take into consideration the translatory, as well as the rotary, motion of particles suspended in a fluid, are derived from the equations of motion of a solid body. The intuitive meaning of these nondimensional groups arises from their derivation. With respect to applications in engineering, the influence of the rotary motion of a particle on the motion of its center of gravity can thus be taken into account. As such, a common basis for the representation of the different phenomena observed with solid-loaded flows is established. The application of the above concepts to fluidization and hydraulic and pneumatic conveying proves their usefulness. New insights into well-known facts as well as new results demonstrate that taking the real nature of solid particles (i.e., those of finite dimensions) into consideration will provide a common and profound basis for the representation of different phenomena observed with solid-loaded flows in practice.
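As a generic illustration of how such groups arise (not the specific sets derived in the paper), nondimensionalizing the Stokes-drag equation of motion of a suspended particle against a flow time scale L/U yields the Stokes number:

```latex
% Generic example only: the paper derives its own two sets of groups,
% including rotary motion, which are not reproduced here.
\[
  m_p \frac{du_p}{dt} = 3\pi\mu d\,(u_f - u_p)
  \quad\Longrightarrow\quad
  St = \frac{\tau_p U}{L}, \qquad \tau_p = \frac{\rho_p d^2}{18\,\mu}
\]
% Small St: particles track the fluid; large St: particle inertia dominates.
% An analogous group follows from the angular-momentum balance for rotation.
```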
Thiros, Susan A.
2017-03-23
The U.S. Geological Survey (USGS), in cooperation with the Colorado River Basin Salinity Control Forum, studied trends in dissolved-solids loads at selected sites in and near the Uinta Basin, Utah. The Uinta Basin study area includes the Duchesne River Basin and the Middle Green River Basin in Utah from below Flaming Gorge Reservoir to the town of Green River. Annual dissolved-solids loads for water years (WY) 1989 through 2013 were estimated for 16 gaging stations in the study area using streamflow and water-quality data from the USGS National Water Information System database. Eight gaging stations that monitored catchments with limited or no agricultural land use (natural subbasins) were used to assess loads from natural sources. Four gaging stations that monitored catchments with agricultural land in the Duchesne River Basin were used to assess loads from agricultural sources. Four other gaging stations were included in the dissolved-solids load and trend analysis to help assess the effects of agricultural areas that drain to the Green River in the Uinta Basin, but outside of the Duchesne River Basin. Estimated mean annual dissolved-solids loads for WY 1989–2013 ranged from 1,520 tons at Lake Fork River above Moon Lake, near Mountain Home, Utah (UT), to 1,760,000 tons at Green River near Green River, UT. The flow-normalized loads at gaging stations upstream of agricultural activities showed no trend or a relatively small change. The largest net change in modeled flow-normalized load was -352,000 tons (a 17.8-percent decrease) at Green River near Green River, UT. Annual streamflow and modeled dissolved-solids loads at the gaging stations were balanced between upstream and downstream sites to determine how much water and dissolved solids were transported to the Duchesne River and a section of the Green River, and how much was picked up in each drainage area. Mass-balance calculations of WY 1989–2013 mean annual dissolved-solids loads at the studied sites show that Green River near Jensen, UT, accounts for 64 percent of the load in the river at Green River, UT, while the Duchesne River and White River contribute 10 and 13 percent, respectively. The flow-normalized dissolved-solids loads estimated at Duchesne River near Randlett, UT, and White River near Watson, UT, decreased by 68,000 and 55,300 tons, or 27.8 and 20.8 percent respectively, when comparing 1989 to 2013. The drainage basins for both rivers have undergone salinity-control projects since the early 1980s to reduce the dissolved-solids load entering the Colorado River. Approximately 19 percent of the net change in flow-normalized load at Green River at Green River, UT, is from changes in load modeled at Duchesne River near Randlett, UT, and 16 percent from changes in load modeled at White River near Watson, UT. 
The net change in flow-normalized load estimated at Green River near Greendale, UT, for WY 1989–2013 accounts for about 45 percent of the net change estimated at Green River at Green River, UT. Mass-balance calculations of WY 1989–2013 mean annual dissolved-solids loads at the studied sites in the Duchesne River Basin show that 75,400 tons or 44 percent of the load at the Duchesne River near Randlett, UT, gaging station was not accounted for at any of the upstream gages. Most of this unmonitored load is derived from tributary inflow, groundwater discharge, unconsumed irrigation water, and irrigation tail water. A mass balance of WY 1989–2013 flow-normalized loads estimated at sites in the Duchesne River Basin indicates that the flow-normalized load of unmonitored inflow to the Duchesne River between the Myton and Randlett gaging stations decreased by 38 percent. The total net decrease in flow-normalized load calculated for unmonitored inflow in the drainage basin accounts for 94 percent of the decrease in WY 1989–2013 flow-normalized load modeled at the Duchesne River near Randlett, UT, gaging station. Irrigation improvements in the drainage basin have likely contributed to the decrease in flow-normalized load. Reductions in dissolved-solids load estimated by the Natural Resources Conservation Service (NRCS) and the Bureau of Reclamation (Reclamation) from on- and off-farm improvements in the Uinta Basin totaled about 135,000 tons in 2013 (81,900 tons from on-farm improvements and 53,300 tons from off-farm improvements). The reduction in dissolved-solids load resulting from on- and off-farm improvements facilitated by the NRCS and Reclamation in the Price River Basin from 1989 to 2013 was estimated to be 64,800 tons. The amount of sprinkler-irrigated land mapped in the drainage area or subbasin area for a gaging station was used to estimate the reduction in load resulting from the conversion from flood to sprinkler irrigation. Sprinkler-irrigated land mapped in the Uinta Basin totaled 109,630 acres in 2012. Assuming conversion to wheel-line sprinklers, a reduction in dissolved-solids load in the Uinta Basin of 95,800 tons in 2012 was calculated using the sprinkler-irrigation acreage and a pre-salinity-control project dissolved-solids yield of 1.04 tons per acre. A reduction of 72,800 tons in dissolved-solids load from irrigation improvements was determined from sprinkler-irrigated lands in the Ashley Valley and Jensen, Pelican Lake, and Pleasant Valley areas (mapped in 2012); and in the Price River Basin (mapped in 2011). This decrease in dissolved-solids load is 8,800 tons more than the decrease in unmonitored flow-normalized dissolved-solids load (-64,000 tons) determined for the Green River between the Jensen and Green River gaging stations. The net WY 1989–2013 change in flow-normalized dissolved-solids load at the Duchesne River near Randlett, UT, and the Green River between the Jensen and Green River, UT, gaging stations determined from mass-balance calculations was compared to reported reductions in dissolved-solids load from on- and off-farm improvements and estimated reductions in load determined from mapped sprinkler-irrigated areas in the Duchesne River Basin and the area draining to the Green River between the Jensen and Green River gaging stations. 
The combined NRCS and Reclamation estimates of reduction in dissolved-solids load from on- and off-farm improvements in the study area (200,000 tons) is more than the reduction in load estimated using the acreage with sprinkler improvements (136,000 tons) or the mass-balance of flow-normalized load (132,000 tons).
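The mass-balance bookkeeping used throughout the report reduces to subtracting monitored inflows from the downstream load. In the sketch below the station names are real, but the tonnages are back-of-envelope values consistent with the percentages quoted above, not figures from the report.

```python
# Minimal sketch of reach mass balance: the unmonitored ("pickup") load is the
# downstream station's load minus the sum of monitored upstream inflows.
loads = {                      # mean annual dissolved-solids load, tons (illustrative)
    "Green River nr Jensen":  1_130_000,
    "Duchesne R nr Randlett":   171_000,
    "White River nr Watson":    227_000,
}
downstream = 1_760_000         # Green River near Green River, UT (from report)

unmonitored = downstream - sum(loads.values())
print(f"unmonitored reach pickup: {unmonitored:,} tons "
      f"({unmonitored/downstream:.0%} of downstream load)")
```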
Shanley, J.B.; Kram, P.; Hruska, J.; Bullen, T.D.
2004-01-01
Much of the biogeochemical cycling research in catchments in the past 25 years has been driven by acid deposition research funding. This research has focused on vulnerable base-poor systems; catchments on alkaline lithologies have received little attention. In regions of high acid loadings, however, even well-buffered catchments are susceptible to forest decline and episodes of low alkalinity in streamwater. As part of a collaboration between the Czech and U.S. Geological Surveys, we compared biogeochemical patterns in two well-studied, well-buffered catchments: Pluhuv Bor in the western Czech Republic, which has received high loading of atmospheric acidity, and Sleepers River Research Watershed in Vermont, U.S.A., where acid loading has been considerably less. Despite differences in lithology, wetness, forest type, and glacial history, the catchments displayed similar patterns of solute concentrations and flow. At both catchments, base cation and alkalinity diluted with increasing flow, whereas nitrate and dissolved organic carbon increased with increasing flow. Sulfate diluted with increasing flow at Sleepers River, while at Pluhuv Bor the sulfate-flow relation shifted from positive to negative as atmospheric sulfur (S) loadings decreased and soil S pools were depleted during the 1990s. At high flow, alkalinity decreased to near 100 μeq L-1 at Pluhuv Bor compared to 400 μeq L-1 at Sleepers River. Despite the large amounts of S flushed from Pluhuv Bor soils, these alkalinity declines were caused solely by dilution, which was greater at Pluhuv Bor relative to Sleepers River due to greater contributions from shallow flow paths at high flow. Although the historical high S loading at Pluhuv Bor has caused soil acidification and possible forest damage, it has had little effect on the acid/base status of streamwater in this well-buffered catchment. © 2004 Kluwer Academic Publishers.
Cavitation erosion prediction based on analysis of flow dynamics and impact load spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mihatsch, Michael S., E-mail: michael.mihatsch@aer.mw.tum.de; Schmidt, Steffen J.; Adams, Nikolaus A.
2015-10-15
Cavitation erosion is the consequence of repeated collapse-induced high pressure-loads on a material surface. The present paper assesses the prediction of impact load spectra of cavitating flows, i.e., the rate and intensity distribution of collapse events based on a detailed analysis of flow dynamics. Data are obtained from a numerical simulation which employs a density-based finite volume method, taking into account the compressibility of both phases, and resolves collapse-induced pressure waves. To determine the spectrum of collapse events in the fluid domain, we detect and quantify the collapse of isolated vapor structures. As reference configuration we consider the expansion of a liquid into a radially divergent gap which exhibits unsteady sheet and cloud cavitation. Analysis of simulation data shows that global cavitation dynamics and dominant flow events are well resolved, even though the spatial resolution is too coarse to resolve individual vapor bubbles. The inviscid flow model recovers increasingly fine-scale vapor structures and collapses with increasing resolution. We demonstrate that frequency and intensity of these collapse events scale with grid resolution. Scaling laws based on two reference lengths are introduced for this purpose. We show that upon applying these laws impact load spectra recorded on experimental and numerical pressure sensors agree with each other. Furthermore, correlation between experimental pitting rates and collapse-event rates is found. Locations of high maximum wall pressures and high densities of collapse events near walls obtained numerically agree well with areas of erosion damage in the experiment. The investigation shows that impact load spectra of cavitating flows can be inferred from flow data that captures the main vapor structures and wave dynamics without the need for resolving all flow scales.
Study on casing treatment and stator matching on multistage fan
NASA Astrophysics Data System (ADS)
Wu, Chuangliang; Yuan, Wei; Deng, Zhe
2017-10-01
Casing treatments are required for expanding the stall margin of multistage high-load turbofans designed with high blade-tip Mach numbers and high leakage flow. In the case of a low mass flow, the casing treatment effectively reduces the blockages caused by the leakage flow and enlarges the stall margin. However, in the case of a high mass flow, the casing treatment reduces the overall flow capacity of the fan, and hence the thrust, when operating at the high speeds usually required by design-point specifications. Herein, we study a two-stage high-load fan with three-dimensional numerical simulations. We use the simulation results to propose a scheme that enlarges the stall margin of multistage high-load fans without sacrificing the flow capacity when operating with a large mass flow. Specifically, a circumferential groove casing treatment is used and adjustments are made to the upstream stator angle to match the casing treatment. The stall margin is thus increased to 16.3%, with no reduction in the maximum mass flow rate or the design thrust performance.
Probabilistic Modeling of Settlement Risk at Land Disposal Facilities - 12304
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foye, Kevin C.; Soong, Te-Yang
2012-07-01
The long-term reliability of land disposal facility final cover systems - and therefore the overall waste containment - depends on the distortions imposed on these systems by differential settlement/subsidence. The evaluation of differential settlement is challenging because of the heterogeneity of the waste mass (caused by inconsistent compaction, void space distribution, debris-soil mix ratio, waste material stiffness, time-dependent primary compression of the fine-grained soil matrix, long-term creep settlement of the soil matrix and the debris, etc.) at most land disposal facilities. Deterministic approaches to long-term final cover settlement prediction are not able to capture the spatial variability in the waste mass and sub-grade properties which control differential settlement. An alternative, probabilistic solution is to use random fields to model the waste and sub-grade properties. The modeling effort informs the design, construction, operation, and maintenance of land disposal facilities. A probabilistic method to establish design criteria for waste placement and compaction is introduced using the model. Random fields are ideally suited to problems of differential settlement modeling of highly heterogeneous foundations, such as waste. Random fields model the seemingly random spatial distribution of a design parameter, such as compressibility. When used for design, the use of these models prompts the need for probabilistic design criteria. It also allows for a statistical approach to waste placement acceptance criteria. An example design evaluation was performed, illustrating the use of the probabilistic differential settlement simulation methodology to assemble a design guidance chart. The purpose of this design evaluation is to enable the designer to select optimal initial combinations of design slopes and quality control acceptance criteria that yield an acceptable proportion of post-settlement slopes meeting some design minimum. For this specific example, relative density, which can be determined through field measurements, was selected as the field quality control parameter for waste placement. This technique can be extended to include a rigorous performance-based methodology using other parameters (void space criteria, debris-soil mix ratio, pre-loading, etc.). As shown in this example, each parameter range, or sets of parameter ranges, can be selected such that they result in an acceptable long-term differential settlement according to the probabilistic model. The methodology can also be used to re-evaluate the long-term differential settlement behavior at closed land disposal facilities to identify, if any, problematic facilities so that remedial action (e.g., reinforcement of upper and intermediate waste layers) can be implemented. Considering the inherent spatial variability in waste and earth materials and the need for engineers to apply sound quantitative practices to engineering analysis, it is important to apply the available probabilistic techniques to problems of differential settlement. One such method to implement probability-based differential settlement analyses for the design of landfill final covers has been presented. The design evaluation technique presented is one tool to bridge the gap from deterministic practice to probabilistic practice.
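A minimal sketch of the random-field approach follows, assuming an exponential autocorrelation of settlement and illustrative parameters throughout; it shows only the generic Monte Carlo logic, not the authors' model.

```python
# Simulate spatially correlated settlement along a cover profile, convert it
# to slope distortion over a gauge length, and estimate the probability that
# all post-settlement slopes stay above a design minimum.
import numpy as np

rng = np.random.default_rng(4)
x = np.arange(0.0, 101.0)                 # distance along cover, m
corr_len, mean_s, sd_s = 15.0, 0.5, 0.10  # settlement mean and st. dev., m

# Correlated Gaussian field via Cholesky of an exponential covariance matrix
C = sd_s**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
L = np.linalg.cholesky(C)

gauge = 10                                 # slope evaluated over 10-m windows
design_slope, min_slope, n_trials, ok = 0.05, 0.03, 2000, 0
for _ in range(n_trials):
    settle = mean_s + L @ rng.standard_normal(x.size)
    distortion = (settle[gauge:] - settle[:-gauge]) / gauge
    ok += (design_slope - distortion >= min_slope).all()

print(f"P(post-settlement slope >= {min_slope:.0%} everywhere) ~ {ok/n_trials:.2f}")
```

Repeating the loop over a grid of design slopes and acceptance criteria is what assembles the kind of design guidance chart described above.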
McDonough, Kathleen; Casteel, Kenneth; Zoller, Ann; Wehmeyer, Kenneth; Hulzebos, Etje; Rila, Jean-Paul; Salvito, Daniel; Federle, Thomas
2017-01-01
OTNE [1-(1,2,3,4,5,6,7,8-octahydro-2,3,8,8-tetramethyl-2-naphthyl)ethan-1-one; trade name Iso E Super] is a fragrance ingredient commonly used in consumer products that are disposed of down the drain. This research measured effluent and sludge concentrations of OTNE at 44 US wastewater treatment plants (WWTP). The mean effluent and sludge concentrations were 0.69 ± 0.65 μg/L and 20.6 ± 33.8 mg/kg dw, respectively. Distributions of OTNE effluent concentrations and dilution factors were used to predict surface water and sediment concentrations, and distributions of OTNE sludge concentrations and loading rates were used to predict terrestrial concentrations. The 90th percentile concentration of OTNE in US WWTP mixing zones was predicted to be 0.04 and 0.85 μg/L under mean and 7Q10 low flow (lowest river flow occurring over a 7-day period every 10 years) conditions, respectively. The 90th percentile sediment concentrations under mean and 7Q10 low flow conditions were predicted to be 0.081 and 1.6 mg/kg dw, respectively. Based on current US sludge application practices, the 90th percentile OTNE terrestrial concentration was 1.38 mg/kg dw. The probability of OTNE concentrations being below the predicted no effect concentration (PNEC) for the aquatic and sediment compartments was greater than 99%. For the terrestrial compartment, the probability of OTNE concentrations being lower than the PNEC was 97% for current US sludge application practices. Based on the results of this study, OTNE concentrations in US WWTP effluent and sludge do not pose an ecological risk to aquatic, sediment and terrestrial organisms. Copyright © 2016 Elsevier Ltd. All rights reserved.
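The probabilistic exposure calculation described above can be sketched as a simple Monte Carlo; only the effluent mean and standard deviation below come from the text, while the lognormal shapes and the dilution-factor distribution are assumptions for illustration.

```python
# Combine a distribution of effluent concentrations with a distribution of
# stream dilution factors and read off the 90th percentile mixing-zone
# concentration. Lognormal parameters are obtained by moment matching.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

m, s = 0.69, 0.65                     # effluent mean and sd from the text, ug/L
sg_e = np.sqrt(np.log(1 + (s/m)**2))  # lognormal sigma via moment matching
mu_e = np.log(m) - 0.5*sg_e**2        # lognormal mu
effluent = rng.lognormal(mu_e, sg_e, n)

dilution = rng.lognormal(np.log(20.0), 1.0, n)   # assumed dilution-factor spread
pec = effluent / dilution                        # predicted mixing-zone conc.

print(f"90th percentile mixing-zone PEC ~ {np.quantile(pec, 0.9):.3f} ug/L")
```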
An extravehicular suit impact load attenuation study to improve astronaut bone fracture prediction.
Sulkowski, Christina M; Gilkey, Kelly M; Lewandowski, Beth E; Samorezov, Sergey; Myers, Jerry G
2011-04-01
Understanding the contributions to the risk of bone fracture during spaceflight is essential for mission success. A pressurized extravehicular activity (EVA) suit analogue test bed was developed, impact load attenuation data were obtained, and the load at the hip of an astronaut who falls to the side during an EVA was characterized. Offset (representing the gap between the EVA suit and the astronaut's body), impact load magnitude, and EVA suit operating pressure were factors varied in the study. The attenuation data were incorporated into a probabilistic model of bone fracture risk during spaceflight, replacing the previous load attenuation value that was based on commercial hip protector data. Load attenuation was more dependent on offset than on pressurization or load magnitude, especially at small offset values. Load attenuation factors for offsets between 0.1 and 1.5 cm were 0.69 +/- 0.15, 0.49 +/- 0.22, and 0.35 +/- 0.18 for mean impact forces of 4827, 6400, and 8467 N, respectively. Load attenuation factors for offsets of 2.8-5.3 cm were 0.93 +/- 0.2, 0.94 +/- 0.1, and 0.84 +/- 0.5 for the same mean impact forces. The mean and 95th percentile bone fracture risk index predictions were each reduced by 65-83%. The mean and 95th percentile bone fracture probability predictions were both reduced by approximately 20-50%. The reduction in uncertainty and improved confidence in bone fracture predictions increased the fidelity and credibility of the fracture risk model and its benefit to mission design and in-flight operational decisions.
NASA Astrophysics Data System (ADS)
Omira, Rachid; Baptista, Maria Ana; Matias, Luis
2015-04-01
This study constitutes the first assessment of probabilistic tsunami inundation in the NE Atlantic region, using an event-tree approach. It aims to develop a probabilistic tsunami inundation approach for the NE Atlantic coast with an application to two test sites of the ASTARTE project, Tangier (Morocco) and Sines (Portugal). Only tsunamis of tectonic origin are considered here, taking into account near-, regional-, and far-field sources. The multidisciplinary approach proposed here consists of an event-tree method that combines seismic hazard assessment, tsunami numerical modelling, and statistical methods. It also presents a treatment of uncertainties related to source location and tidal stage in order to derive the likelihood of tsunami flood occurrence and of exceedance of a specific near-shore wave height during a given return period. We derive high-resolution probabilistic maximum wave heights and flood distributions for both test sites, Tangier and Sines, considering 100-, 500-, and 1000-year return periods. We find that the probability that a maximum wave height exceeds 1 m somewhere along the Sines coast reaches about 55% for the 100-year return period and is up to 100% for the 1000-year return period. Along the Tangier coast, the probability of inundation occurrence (flow depth > 0 m) is up to 45% for the 100-year return period and reaches 96% at some near-shore coastal locations for the 500-year return period. Acknowledgements: This work is funded by project ASTARTE - Assessment, STrategy And Risk Reduction for Tsunamis in Europe. Grant 603839, 7th FP (ENV.2013.6.4-3).
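The event-tree accumulation described above can be sketched as follows; source rates, modeled wave heights, and tide-stage branch weights are all hypothetical, and the Poisson occurrence model is a common assumption rather than necessarily the authors' exact choice.

```python
# Each source scenario carries an annual rate and a modeled maximum wave
# height at a coastal point; summing rates over scenario/tide branches whose
# height exceeds a threshold gives a Poissonian exceedance probability.
import math

# (annual rate of the source, modeled max wave height at the site in m)
scenarios = [(1/300, 1.8), (1/900, 3.5), (1/150, 0.6), (1/2000, 5.0)]
tide_branches = [(-1.0, 0.25), (0.0, 0.50), (+1.0, 0.25)]  # offset m, weight

def p_exceed(h_threshold: float, T_years: float) -> float:
    rate = sum(w * r
               for r, h in scenarios
               for dz, w in tide_branches
               if h + dz > h_threshold)
    return 1.0 - math.exp(-rate * T_years)   # Poisson occurrence model

print(f"P(Hmax > 1 m in 100 yr) ~ {p_exceed(1.0, 100):.0%}")
```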
Multi-parametric variational data assimilation for hydrological forecasting
NASA Astrophysics Data System (ADS)
Alvarado-Montero, R.; Schwanenberg, D.; Krahe, P.; Helmke, P.; Klein, B.
2017-12-01
Ensemble forecasting is increasingly applied in flow forecasting systems to provide users with a better understanding of forecast uncertainty and consequently to take better-informed decisions. A common practice in probabilistic streamflow forecasting is to force a deterministic hydrological model with an ensemble of numerical weather predictions. This approach aims at representing meteorological uncertainty but neglects the uncertainty of the hydrological model as well as of its initial conditions. Complementary approaches use probabilistic data assimilation techniques to derive a variety of initial states or represent model uncertainty by model pools instead of single deterministic models. This paper introduces a novel approach that extends a variational data assimilation based on Moving Horizon Estimation to enable the assimilation of observations into multi-parametric model pools. It results in a probabilistic estimate of initial model states that takes into account the parametric model uncertainty in the data assimilation. The assimilation technique is applied to the uppermost area of the River Main in Germany. We use different parametric pools, each of them with five parameter sets, to assimilate streamflow data as well as remotely sensed data from the H-SAF project. We assess the impact of the assimilation on the lead-time performance of perfect forecasts (i.e., observed data as forcing variables) as well as of deterministic and probabilistic forecasts from ECMWF. The multi-parametric assimilation shows an improvement of up to 23% in CRPS performance and approximately 20% in Brier Skill Scores with respect to the deterministic approach. It also improves the skill of the forecast in terms of the rank histogram and produces a narrower ensemble spread.
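A toy sketch of the multi-parametric assimilation idea follows: for each parameter set in the pool, a moving-horizon least-squares fit re-estimates the initial state from recent observations, yielding one initialized member per parameter set. The linear-reservoir model, the grid search, and all numbers are illustrative stand-ins for the paper's hydrological model and MHE formulation.

```python
# Moving-horizon estimation over a pool of parameter sets for a toy
# linear-reservoir model; each member gets its own assimilated initial state.
import numpy as np

def simulate(s0, k, n):
    """Linear reservoir: outflow = k * storage; storage decays each step."""
    s, q = s0, []
    for _ in range(n):
        q.append(k * s)
        s -= k * s
    return np.array(q)

obs = np.array([5.1, 4.6, 4.3, 3.9, 3.7])     # recent flow observations (horizon)
pool = [0.08, 0.10, 0.12, 0.14, 0.16]         # five parameter sets (recession k)

members = []
for k in pool:
    # Moving-horizon estimate: pick s0 minimizing squared error over the window
    s0_grid = np.linspace(10, 100, 901)
    errs = [np.sum((simulate(s0, k, obs.size) - obs)**2) for s0 in s0_grid]
    members.append((k, s0_grid[int(np.argmin(errs))]))

for k, s0 in members:
    print(f"k={k:.2f} -> assimilated initial storage s0={s0:.1f}")
```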
Dynamic load balance scheme for the DSMC algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jin; Geng, Xiangren; Jiang, Dingwu
The direct simulation Monte Carlo (DSMC) algorithm, devised by Bird, has been applied to a wide range of rarefied flow problems over the past 40 years. While DSMC is well suited to parallel implementation on powerful multi-processor architectures, it also introduces a large load imbalance across the processor array, even for small examples. The load imposed on a processor by a DSMC calculation is determined to a large extent by the total number of simulator particles upon it. Since most flows are started impulsively from an initial particle distribution that differs considerably from the steady state, the total number of simulator particles changes dramatically, and a load balance based upon the initial distribution of particles breaks down as the steady state of the flow is reached. This load imbalance, together with the large computational cost of DSMC, has limited its application to rarefied or simple transitional flows. In this paper, by taking advantage of METIS, a software package for partitioning unstructured graphs, and using the number of simulator particles in each cell as the weight information, a repartitioning based on the principle that each processor handles approximately an equal number of simulator particles has been achieved. The computation pauses several times to update the particle counts per processor and to repartition the whole domain, so that load balance across the processor array holds for the duration of the computation and the parallel efficiency is improved effectively. The benchmark problem of a cylinder submerged in hypersonic flow has been simulated numerically, as well as hypersonic flow past a complex wing-body configuration. The results show that, in both cases, the computational time is reduced by about 50%.
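The repartitioning principle described above, roughly equal particle totals per processor, can be illustrated with a toy one-dimensional analogue. The sketch below is an assumption-laden stand-in for METIS (which partitions general unstructured graphs): it greedily cuts a 1-D array of cells into contiguous chunks with approximately equal particle counts, which is the weight-balancing idea the paper applies.

import numpy as np

def repartition(cell_particles, n_procs):
    # Greedy 1-D analogue of weight-balanced repartitioning: cut the
    # cell array into contiguous chunks whose particle totals are as
    # close to equal as possible.
    w = np.asarray(cell_particles, dtype=float)
    target = w.sum() / n_procs
    cuts, acc = [], 0.0
    for i, wi in enumerate(w):
        acc += wi
        if acc >= target and len(cuts) < n_procs - 1:
            cuts.append(i + 1)
            acc = 0.0
    return np.split(np.arange(w.size), cuts)

rng = np.random.default_rng(0)
counts = rng.poisson(50, 1000) * np.linspace(0.2, 3.0, 1000)  # skewed steady state
parts = repartition(counts, n_procs=8)
print([int(counts[p].sum()) for p in parts])  # roughly equal particle totals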
Green, W. Reed; Haggard, Brian E.
2001-01-01
Water-quality sampling consisting of routine sampling every other month (bimonthly) and storm-event sampling (six storms annually) is used to estimate annual phosphorus and nitrogen loads at the Illinois River south of Siloam Springs, Arkansas. Hydrograph separation allowed assessment of base-flow and surface-runoff nutrient relations and yields. Discharge and nutrient relations indicate that water quality at the Illinois River south of Siloam Springs, Arkansas, is affected by both point and nonpoint sources of contamination. Base-flow phosphorus concentrations decreased with increasing base-flow discharge, indicating the dilution of phosphorus in water from point sources. Nitrogen concentrations increased with increasing base-flow discharge, indicating a predominant ground-water source. Nitrogen concentrations at higher base-flow discharges often were greater than median concentrations reported for ground water (from wells and springs) in the Springfield Plateau aquifer. Total estimated phosphorus and nitrogen annual loads for calendar years 1997-1999 using the regression techniques presented in this paper (35 samples) were similar to estimated loads derived from integration techniques (1,033 samples). Flow-weighted nutrient concentrations and nutrient yields at the Illinois River site were about 10 to 100 times greater than national averages for undeveloped basins and at North Sylamore Creek and Cossatot River (considered to be undeveloped basins in Arkansas). Total phosphorus and soluble reactive phosphorus were greater than 10 times, and total nitrogen and dissolved nitrite plus nitrate were greater than 10 to 100 times, the national and regional averages for undeveloped basins. These results demonstrate the utility of a strategy whereby samples are collected every other month and during selected storm events annually, with regression models used to estimate nutrient loads. Annual loads of phosphorus and nitrogen estimated using regression techniques can provide results similar to estimates using integration techniques, with much less investment.
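A hedged sketch of the two load-estimation routes the abstract compares: direct numerical integration of sampled concentration times discharge, and a log-log rating-curve regression fitted to the sparse samples and applied to the full discharge record. The variable names, units, and the smearing-type bias correction for log-retransformation are illustrative choices, not taken from the paper.

import numpy as np

def load_by_integration(c_mgL, q_m3s, dt_s):
    # mg/L equals g/m^3, so c*q is an instantaneous flux in g/s.
    return np.sum(np.asarray(c_mgL) * np.asarray(q_m3s) * dt_s)  # grams

def load_by_regression(c_mgL, q_m3s, q_daily_m3s):
    # Log-log rating curve fitted to the sparse samples, then applied to
    # the complete daily discharge record; the exp(var/2) factor is a
    # smearing-type correction for log-retransformation bias.
    logq = np.log(np.asarray(q_m3s))
    logc = np.log(np.asarray(c_mgL))
    b, a = np.polyfit(logq, logc, 1)                   # slope, intercept
    bias = np.exp(0.5 * np.var(logc - (a + b * logq)))
    c_hat = bias * np.exp(a) * np.asarray(q_daily_m3s) ** b   # g/m^3
    return np.sum(c_hat * np.asarray(q_daily_m3s) * 86400.0)  # grams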
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard
In this paper, a short-term load forecasting approach based on network reconfiguration is proposed and solved in a parallel manner. Specifically, a support vector regression (SVR) based short-term load forecasting approach is designed to provide an accurate load prediction and benefit the network reconfiguration. Because of the nonconvexity of the three-phase balanced optimal power flow, a second-order cone program (SOCP) based approach is used to relax the optimal power flow problem. Then, the alternating direction method of multipliers (ADMM) is used to compute the optimal power flow in a distributed manner. Considering the limited number of switches and the increasing computational capability, the proposed network reconfiguration is solved in a parallel way. The numerical results demonstrate the feasibility and effectiveness of the proposed approach.
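As a rough illustration of the forecasting half of the pipeline, the following sketch trains an SVR on lagged load values using scikit-learn. The lag length, kernel settings, and synthetic series are assumptions; the paper's actual feature set and data are not specified in the abstract.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def make_lagged(series, n_lags=24):
    # Predict the next value of the load series from the previous n_lags values.
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

rng = np.random.default_rng(0)
t = np.arange(2000)
load = 5.0 + np.sin(2 * np.pi * t / 24) + 0.1 * rng.normal(size=t.size)  # toy series

X, y = make_lagged(load)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:-100], y[:-100])            # hold out the last 100 hours
rmse = np.sqrt(np.mean((model.predict(X[-100:]) - y[-100:]) ** 2))
print(f"hold-out RMSE: {rmse:.3f}")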
An approach for the regularization of a power flow solution around the maximum loading point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kataoka, Y.
1992-08-01
In the conventional power flow solution, the boundary conditions are directly specified by the active and reactive power at each node, so that the singular point coincides with the maximum loading point. For this reason, the computations are often disturbed by ill-conditioning. This paper proposes a new method that achieves wide-range regularity by modifying the conventional power flow formulation, thereby eliminating the singular point or shifting it into the region where the voltage is lower than that of the maximum loading point. Continuous tracing of V-P curves through the maximum loading point is thus realized. The efficiency and effectiveness of the method are tested on a practical 598-node system in comparison with the conventional method.
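The singularity the paper works around can be reproduced on a two-bus example: a slack bus feeding a load P over a pure reactance X has its Newton-Raphson Jacobian become singular exactly at the maximum loading point P = 1/(2X). The sketch below (illustrative, not the paper's method) sweeps P toward the nose of the V-P curve and prints the shrinking Jacobian determinant.

import numpy as np

X = 0.5   # line reactance in p.u.; the nose of the V-P curve sits at P = 1/(2X)

def mismatch(u, P, Q=0.0):
    th, V = u
    return np.array([V * np.sin(th) / X + P,             # active power balance
                     (V * V - V * np.cos(th)) / X + Q])  # reactive power balance

def jacobian(u):
    th, V = u
    return np.array([[V * np.cos(th) / X, np.sin(th) / X],
                     [V * np.sin(th) / X, (2 * V - np.cos(th)) / X]])

u = np.array([0.0, 1.0])                 # flat start; warm-started along the sweep
for P in np.linspace(0.1, 0.99, 9):
    for _ in range(30):                  # plain Newton iterations
        u = u - np.linalg.solve(jacobian(u), mismatch(u, P))
    print(f"P={P:4.2f}  V={u[1]:.3f}  det(J)={np.linalg.det(jacobian(u)):+.3f}")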
Tortorelli, Robert L.; Pickup, Barbara E.
2006-01-01
The Illinois River and tributaries, Flint Creek and Baron Fork, are designated scenic rivers in Oklahoma. Recent phosphorus levels in streams in the basin have resulted in the growth of excess algae, which have limited the aesthetic benefits of water bodies in the basin, especially the Illinois River and Lake Tenkiller. The Oklahoma Water Resources Board has established a standard that total phosphorus is not to exceed a 30-day geometric mean concentration of 0.037 milligram per liter in Oklahoma Scenic Rivers. The U.S. Geological Survey, in cooperation with the Oklahoma Water Resources Board, conducted an investigation to summarize phosphorus concentrations and provide estimates of phosphorus loads, yields, and flow-weighted concentrations in the Illinois River and tributaries from January 2000 through December 2004. Data from water-quality samples collected from 2000 to 2004 were used to summarize phosphorus concentrations and estimate phosphorus loads, yields, and mean flow-weighted concentrations in the Illinois River basin for three 3-year periods - 2000-2002, 2001-2003, and 2002-2004 - to update a previous report that used data from water-quality samples from 1997 to 2001. This report provides information needed to advance knowledge of the regional hydrologic system and understanding of hydrologic processes, and provides hydrologic data and results useful to multiple parties for interstate compacts. Phosphorus concentrations in the Illinois River basin were significantly greater in runoff samples than in base-flow samples. Phosphorus concentrations generally decreased with increasing base flow, owing to dilution, and decreased in the downstream direction in the Illinois River from the Watts to Tahlequah stations. Phosphorus concentrations generally increased with runoff, possibly because of phosphorus resuspension, stream bank erosion, and the addition of phosphorus from nonpoint sources. Estimated mean annual phosphorus loads were greater at the Illinois River stations than at Flint Creek and Baron Fork. Annual total loads in the Illinois River from Watts to Tahlequah increased slightly for the period 2000-2002 and decreased slightly for the periods 2001-2003 and 2002-2004. Estimated mean annual base-flow loads at stations on the Illinois River were about 11 to 20 times greater than base-flow loads at the station on Baron Fork and 4 to 10 times greater than base-flow loads at the station on Flint Creek. Estimated mean annual runoff loads ranged from 68 to 96 percent of the estimated mean annual total phosphorus loads from 2000-2004. Estimated mean seasonal base-flow loads were generally greatest in spring (March through May) and least in fall (September through November). Estimated mean seasonal runoff loads generally were greatest in summer (June through August) for the period 2000-2002, but were greatest in winter (December through February) for the period 2001-2003, and greatest in spring for the period 2002-2004. Estimated mean total yields of phosphorus ranged from 192 to 811 pounds per year per square mile, with the greatest yields reported for the Illinois River near Watts (576 to 811 pounds per year per square mile), and the least yields reported for Baron Fork at Eldon for the periods 2000-2002 and 2001-2003 (501 and 192 pounds per year per square mile) and for the Illinois River near Tahlequah for the period 2002-2004 (370 pounds per year per square mile).
Estimated mean flow-weighted concentrations were more than 10 times greater than the median (0.022 milligram per liter) and were consistently greater than the 75th percentile of flow-weighted phosphorus concentrations in samples collected at relatively undeveloped basins of the United States (0.037 milligram per liter). In addition, flow-weighted phosphorus concentrations in 2000-2002 at all Illinois River stations and at Flint Creek near Kansas were equal to or greater than the 75th percentile of all National Water-Quality Assessment Program station
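For reference, the flow-weighted concentration used throughout these load studies is simply total load divided by total flow volume. A minimal sketch with made-up sample values (not data from the report):

import numpy as np

def flow_weighted_concentration(c_mgL, q_cms):
    # Flow-weighted mean concentration: total load over total flow,
    # i.e. sum(C_i * Q_i) / sum(Q_i) for paired samples.
    c, q = np.asarray(c_mgL), np.asarray(q_cms)
    return np.sum(c * q) / np.sum(q)

# Hypothetical paired samples (mg/L and m^3/s); the result can be compared
# against the 0.037 mg/L scenic-rivers criterion cited above.
print(flow_weighted_concentration([0.08, 0.30, 0.05], [3.4, 25.5, 2.3]))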
Fault current limiter and alternating current circuit breaker
Boenig, Heinrich J.
1998-01-01
A solid-state circuit breaker and current limiter for a load served by an alternating current source having a source impedance, the solid-state circuit breaker and current limiter comprising a thyristor bridge interposed between the alternating current source and the load, the thyristor bridge having four thyristor legs and four nodes, with a first node connected to the alternating current source, and a second node connected to the load. A coil is connected from a third node to a fourth node, the coil having an impedance of a value calculated to limit the current flowing therethrough to a predetermined value. Control means are connected to the thyristor legs for limiting the alternating current flow to the load under fault conditions to a predetermined level, and for gating the thyristor bridge under fault conditions to quickly reduce alternating current flowing therethrough to zero and thereafter to maintain the thyristor bridge in an electrically open condition preventing the alternating current from flowing therethrough for a predetermined period of time.
Loading-rate-independent delay of catastrophic avalanches in a bulk metallic glass
Chen, S. H.; Chan, K. C.; Wang, G.; ...
2016-02-25
The plastic flow of bulk metallic glasses (BMGs) is characterized by intermittent bursts of avalanches, and this trend results in disastrous failures of BMGs. In the present work, a double-side-notched BMG specimen is designed, which exhibits chaotic plastic flow consisting of several catastrophic avalanches under the applied loading. The disastrous shear avalanches are then delayed by forming a stable plastic-flow stage in specimens with tailored distances between the bottoms of the notches, where a complex stress-field distribution is obtained. Differing from conventional compressive testing results, this delaying process is independent of the loading rate. The statistical analysis shows that in the specimens with delayed catastrophic failures, the plastic flow can evolve toward critical dynamics, making the catastrophic failure more predictable than in the ones with chaotic plastic flow. These findings are of significance in understanding plastic-flow mechanisms in BMGs and in controlling avalanches in related solids.
Nonlinear analysis of NPP safety against the aircraft attack
DOE Office of Scientific and Technical Information (OSTI.GOV)
Králik, Juraj, E-mail: juraj.kralik@stuba.sk; Králik, Juraj, E-mail: kralik@fa.stuba.sk
The paper presents a nonlinear probabilistic analysis of the reinforced concrete buildings of a nuclear power plant under aircraft attack. The dynamic load is defined in time on the basis of airplane impact simulations that consider the real stiffness, mass, direction, and velocity of the flight. The dynamic response is calculated in ANSYS using the transient nonlinear analysis solution method. The damage to the concrete wall is evaluated in accordance with the NDRC standard, considering spalling, scabbing, and perforation effects. Simple and detailed calculations of the wall damage are compared.
Bypass transition and spot nucleation in boundary layers
NASA Astrophysics Data System (ADS)
Kreilos, Tobias; Khapko, Taras; Schlatter, Philipp; Duguet, Yohann; Henningson, Dan S.; Eckhardt, Bruno
2016-08-01
The spatiotemporal aspects of the transition to turbulence are considered in the case of a boundary-layer flow developing above a flat plate exposed to free-stream turbulence. Combining results on the receptivity to free-stream turbulence with the nonlinear concept of a transition threshold, a physically motivated model suggests a spatial distribution of spot nucleation events. To describe the evolution of turbulent spots a probabilistic cellular automaton is introduced, with all parameters directly obtained from numerical simulations of the boundary layer. The nucleation rates are then combined with the cellular automaton model, yielding excellent quantitative agreement with the statistical characteristics for different free-stream turbulence levels. We thus show how the recent theoretical progress on transitional wall-bounded flows can be extended to the much wider class of spatially developing boundary-layer flows.
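A probabilistic cellular automaton of the kind described can be sketched in a few lines. The rules and parameters below are illustrative assumptions (the paper derives its parameters directly from boundary-layer simulations): a laminar cell is contaminated by a turbulent neighbour with some probability, and a turbulent cell relaminarises with another.

import numpy as np

rng = np.random.default_rng(1)

def step(state, p_spread=0.35, p_decay=0.05):
    # One update of a 1-D probabilistic cellular automaton: a laminar
    # cell is contaminated by a turbulent neighbour with probability
    # p_spread; a turbulent cell relaminarises with probability p_decay.
    s = state.astype(bool)
    neighbour = np.roll(s, 1) | np.roll(s, -1)
    infect = ~s & neighbour & (rng.random(s.size) < p_spread)
    decay = s & (rng.random(s.size) < p_decay)
    return ((s | infect) & ~decay).astype(np.uint8)

state = np.zeros(512, dtype=np.uint8)
state[256] = 1                              # a single nucleated spot
for _ in range(200):
    state = step(state)
print("turbulent fraction:", state.mean())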
NASA Astrophysics Data System (ADS)
König, Diethard; Mahmoudi, Elham; Khaledi, Kavan; von Blumenthal, Achim; Schanz, Tom
2016-04-01
The excess electricity produced by renewable energy sources during off-peak periods of consumption can be used, e.g., to produce and compress hydrogen or to compress air. Afterwards the pressurized gas is stored in rock salt cavities. During this process, thermo-mechanical cyclic loading is applied to the rock salt surrounding the cavern. Compared to the operation of conventional storage caverns in rock salt, the frequencies of filling and discharging cycles, and therefore of the thermo-mechanical loading cycles, are much higher, e.g., daily or weekly rather than seasonal or yearly. The stress-strain behavior of rock salt, as well as the deformation behavior and the stability of caverns in rock salt under such loading conditions, are unknown. To overcome this, existing experimental studies have to be supplemented by exploring the behavior of rock salt under combined thermo-mechanical cyclic loading. Existing constitutive relations have to be extended to cover degradation of rock salt under thermo-mechanical cyclic loading. Finally, the complex system of a cavern in rock salt under these loading conditions has to be analyzed by numerical modeling, taking into account the uncertainties due to the limited access at great depth for investigating material composition and properties. An interactive evolution concept is presented to link the different components of such a study: experimental modeling, constitutive modeling, and numerical modeling. A triaxial experimental setup is designed to characterize the cyclic thermo-mechanical behavior of rock salt. The imposed boundary conditions in the experimental setup are assumed to be similar to the stress state obtained from a full-scale numerical simulation. The computational model relies primarily on the governing constitutive model for predicting the behavior of the rock salt cavity. Hence, a sophisticated elasto-viscoplastic creep constitutive model is developed to take into account dilatancy and damage progress, as well as temperature effects. The input parameters of the constitutive model are calibrated using the experimental measurements. The initial numerical simulation is then updated based on the introduced constitutive model implemented in a finite element code. However, because of the significant levels of uncertainty involved in the design procedure of such structures, a reliable design can be achieved by employing probabilistic approaches. Therefore, the numerical calculation is extended by statistical tools such as sensitivity analysis, probabilistic analysis, and robust reliability-based design. Uncertainties, e.g., due to site investigation that is always fragmentary at these depths, can be compensated for by using data sets of field measurements for back-calculation of input parameters with the developed numerical model. Monitoring concepts can be optimized by identifying sensor locations, e.g., using sensitivity analyses.
NASA Astrophysics Data System (ADS)
He, Jingjing; Wang, Dengjiang; Zhang, Weifang
2015-03-01
This study presents an experimental and modeling study for damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate piezoelectric (PZT) ceramic wafer-type sensors are employed to perform in-situ non-destructive testing during fatigue cyclic loading. A multi-feature integration method is developed to quantify the crack size using the signal features of correlation coefficient, amplitude change, and phase change. In addition, a probability of detection (POD) model is constructed to quantify the reliability of the developed sizing method. Using the developed crack size quantification method and the resulting POD curve, probabilistic fatigue life prediction can be performed to provide comprehensive information for decision-making. The effectiveness of the overall methodology is demonstrated and validated using several aircraft lap joint specimens from different manufacturers and under different loading conditions.
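A common functional form for POD curves of the kind constructed here is a lognormal (log-odds style) model, POD(a) = Phi((ln a - mu)/sigma). The sketch below uses illustrative parameters, not the values fitted in the study:

import numpy as np
from scipy.stats import norm

MU, SIGMA = np.log(1.0), 0.4    # illustrative parameters, not fitted values

def pod(a_mm):
    # Lognormal-CDF POD model: POD(a) = Phi((ln a - MU) / SIGMA).
    return norm.cdf((np.log(a_mm) - MU) / SIGMA)

a90 = np.exp(MU + norm.ppf(0.90) * SIGMA)   # crack size detected 90% of the time
print(f"a90 = {a90:.2f} mm, POD(0.5 mm) = {pod(0.5):.2f}")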
Definition of hydraulic stability of KVGM-100 hot-water boiler and minimum water flow rate
NASA Astrophysics Data System (ADS)
Belov, A. A.; Ozerov, A. N.; Usikov, N. V.; Shkondin, I. A.
2016-08-01
In domestic power engineering, quantitative and qualitative-quantitative methods of adjusting the load of heat supply systems are widely used; moreover, during the greater part of the heating period the actual flow of network water is less than the design values after changing to quantitative control. Hence, the hydraulic circuits of hot-water boilers should ensure water velocities that minimize scale formation and exclude the formation of stagnant zones. The article presents the results of calculations of the KVGM-100 hot-water boiler and of the minimum water flow rate for the basic and peak modes under the condition that no surface boiling occurs. The minimum water flow rates for a given underheating to the saturation state, and the thermal flows in the furnace chamber, were determined. The boiler hydraulic calculation was performed using the "Hydraulic" program, and the permissible and actual velocities of water movement in the pipes of the heating surfaces were analyzed. Based on the thermal calculations of the furnace chamber and the thermal-hydraulic calculations of the heating surfaces, the following conclusions were drawn: the minimum water velocity (by the condition of surface boiling) increases from 0.64 to 0.79 m/s for upward flow of the medium and from 1.14 to 1.38 m/s for downward flow; the minimum water flow rate through the boiler in the basic mode (by the condition of surface boiling) increases from 887 t/h at 20% load to 1074 t/h at 100% load. The minimum flow rate of 1074 t/h at nominal load is achieved at a boiler-outlet pressure of 1.1 MPa; the minimum water flow rate through the boiler in the peak mode, by the condition of surface boiling, increases from 1669 t/h at 20% load to 2021 t/h at 100% load.
Comparison of Contaminant Transport in Agricultural Drainage Water and Urban Stormwater Runoff.
Ghane, Ehsan; Ranaivoson, Andry Z; Feyereisen, Gary W; Rosen, Carl J; Moncrief, John F
2016-01-01
Transport of nitrogen and phosphorus from agricultural and urban landscapes to surface water bodies can cause adverse environmental impacts. The main objective of this long-term study was to quantify and compare contaminant transport in agricultural drainage water and urban stormwater runoff. We measured flow rate and contaminant concentration in stormwater runoff from Willmar, Minnesota, USA, and in drainage water from subsurface-drained fields with surface inlets, namely, Unfertilized and Fertilized Fields. Commercial fertilizer and turkey litter manure were applied to the Fertilized Field based on agronomic requirements. Results showed that the City Stormwater transported significantly higher loads per unit area of ammonium, total suspended solids (TSS), and total phosphorus (TP) than the Fertilized Field, but nitrate load was significantly lower. Nitrate load transport in drainage water from the Unfertilized Field was 58% of that from the Fertilized Field. Linear regression analysis indicated that a 1% increase in flow depth resulted in a 1.05% increase of TSS load from the City Stormwater, a 1.07% increase in nitrate load from the Fertilized Field, and a 1.11% increase in TP load from the Fertilized Field. This indicates an increase in concentration with a rise in flow depth, revealing that concentration variation was a significant factor influencing the dynamics of load transport. Further regression analysis showed the importance of targeting high flows to reduce contaminant transport. In conclusion, for watersheds similar to this one, management practices should be directed to load reduction of ammonium and TSS from urban areas, and nitrate from cropland while TP should be a target for both.
Markov model of fatigue of a composite material with the poisson process of defect initiation
NASA Astrophysics Data System (ADS)
Paramonov, Yu.; Chatys, R.; Andersons, J.; Kleinhofs, M.
2012-05-01
As a development of the model in which only one weak microvolume (WMV) and only pulsating cyclic loading were considered, the current version of the model takes into account the presence of several weak sites where fatigue damage can accumulate, and loading with an arbitrary (but positive) stress ratio. A Poisson process of initiation of WMVs is considered, whose rate depends on the size of the specimen. The cumulative distribution function (cdf) of the fatigue life of each individual WMV is calculated using the Markov model of fatigue. For the case where this function is approximated by a lognormal distribution, a formula for calculating the cdf of the fatigue life of the specimen (modeled as a chain of WMVs) is obtained. Using the modified energy method, a loading cycle with an arbitrary stress ratio is "transformed" into an equivalent cycle with some other stress ratio. In this way, the entire probabilistic fatigue diagram for any stress ratio with a positive cycle stress can be obtained. Numerical examples are presented.
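The chain-of-WMVs construction has a compact closed form: if the number of weak microvolumes is Poisson with mean nu and each has fatigue-life cdf F(n), the specimen survives n cycles with probability E[(1-F(n))^K] = exp(-nu F(n)). A sketch with an assumed lognormal F (parameters illustrative, not the paper's calibration):

import numpy as np
from scipy.stats import lognorm

def specimen_cdf(n_cycles, nu, median_life=1e5, sigma=0.5):
    # Weakest-link cdf with a Poisson(nu) number of weak microvolumes,
    # each with lognormal fatigue-life cdf F: P(N <= n) = 1 - exp(-nu * F(n)).
    F = lognorm.cdf(n_cycles, s=sigma, scale=median_life)
    return 1.0 - np.exp(-nu * F)

n = np.logspace(3.5, 6, 6)
print(specimen_cdf(n, nu=1.0))   # small specimen
print(specimen_cdf(n, nu=5.0))   # larger specimen: fails earlier at every n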
Advanced Software for Analysis of High-Speed Rolling-Element Bearings
NASA Technical Reports Server (NTRS)
Poplawski, J. V.; Rumbarger, J. H.; Peters, S. M.; Galatis, H.; Flower, R.
2003-01-01
COBRA-AHS is a package of advanced software for analysis of rigid or flexible shaft systems supported by rolling-element bearings operating at high speeds under complex mechanical and thermal loads. These loads can include centrifugal and thermal loads generated by motions of bearing components. COBRA-AHS offers several improvements over prior commercial bearing-analysis programs: It includes innovative probabilistic fatigue-life-estimating software that provides for computation of three-dimensional stress fields and incorporates stress-based (in contradistinction to prior load-based) mathematical models of fatigue life. It interacts automatically with the ANSYS finite-element code to generate finite-element models for estimating distributions of temperature and temperature-induced changes in dimensions in iterative thermal/dimensional analyses: thus, for example, it can be used to predict changes in clearances and thermal lockup. COBRA-AHS provides an improved graphical user interface that facilitates the iterative cycle of analysis and design by providing analysis results quickly in graphical form, enabling the user to control interactive runs without leaving the program environment, and facilitating transfer of plots and printed results for inclusion in design reports. Additional features include roller-edge stress prediction and influence of shaft and housing distortion on bearing performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Tae-Hyuk; Sandu, Adrian; Watson, Layne T.
2015-08-01
Ensembles of simulations are employed to estimate the statistics of possible future states of a system and are widely used in important applications such as climate change and biological modeling. Ensembles of runs can naturally be executed in parallel. However, when the CPU times of individual simulations vary considerably, a simple strategy of assigning an equal number of tasks per processor can lead to serious work imbalances and low parallel efficiency. This paper presents a new probabilistic framework to analyze the performance of dynamic load balancing algorithms for ensembles of simulations where many tasks are mapped onto each processor and the individual compute times vary considerably among tasks. Four load balancing strategies are discussed: most-dividing, all-redistribution, random-polling, and neighbor-redistribution. Simulation results with a stochastic budding yeast cell cycle model are consistent with the theoretical analysis. Especially significant is the provable global decrease in load imbalance achieved by the local rebalancing algorithms, since the global rebalancing algorithms raise scalability concerns. The overall simulation time is reduced by up to 25%, and the total processor idle time by 85%.
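The core effect, dynamic assignment shrinking the makespan when task times are heavy-tailed, can be seen with a toy comparison. The sketch below is not one of the paper's four strategies; it contrasts a static equal-count mapping with greedy self-scheduling, the baseline intuition behind all of them.

import numpy as np

rng = np.random.default_rng(0)
costs = rng.lognormal(0.0, 1.0, 400)        # heavy-tailed task run times
n_procs = 8

# Static mapping: an equal number of tasks per processor, no balancing.
static = np.array([chunk.sum() for chunk in np.array_split(costs, n_procs)])

# Dynamic self-scheduling: whichever processor finishes first takes the
# next task (a greedy list schedule).
clock = np.zeros(n_procs)
for c in costs:
    clock[clock.argmin()] += c

print(f"static  makespan {static.max():7.1f} (imbalance {static.max()/static.mean():.2f})")
print(f"dynamic makespan {clock.max():7.1f} (imbalance {clock.max()/clock.mean():.2f})")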
From the track to the ocean: Using flow control to improve marine bio-logging tags for cetaceans
Fiore, Giovani; Anderson, Erik; Garborg, C. Spencer; Murray, Mark; Johnson, Mark; Moore, Michael J.; Howle, Laurens
2017-01-01
Bio-logging tags are an important tool for the study of cetaceans, but superficial tags inevitably increase hydrodynamic loading. Substantial forces can be generated by tags on fast-swimming animals, potentially affecting behavior and energetics or promoting early tag removal. Streamlined forms have been used to reduce loading, but these designs can accelerate flow over the top of the tag. This non-axisymmetric flow results in large lift forces (normal to the animal) that become the dominant force component at high speeds. In order to reduce lift and minimize total hydrodynamic loading, this work presents a new tag design (Model A) that incorporates a hydrodynamic body, a channel to reduce fluid speed differences above and below the housing, and a wing to redirect flow to counter lift. Additionally, three derivatives of the Model A design were used to examine the contribution of individual flow control features to overall performance. The hydrodynamic loadings of the four models were compared using computational fluid dynamics (CFD). The Model A design eliminated all lift force and generated up to ~30 N of downward force in simulated 6 m/s aligned flow. The simulations were validated using particle image velocimetry (PIV) to experimentally characterize the flow around the tag design. The results of these experiments confirm the trends predicted by the simulations and demonstrate the potential benefit of flow control elements for the reduction of tag-induced forces on the animal.
NASA Astrophysics Data System (ADS)
Datta, T. S.; Kar, S.; Kumar, M.; Choudhury, A.; Chacko, J.; Antony, J.; Babu, S.; Sahu, S. K.
2015-12-01
Five beam line cryomodules with a total of 27 superconducting radio frequency (RF) cavities are installed and commissioned at IUAC to enhance the energy of heavy ions from the 15 UD Pelletron. To reduce the heat load at 4.2 K, a liquid nitrogen (LN2) cooled intermediate thermal shield is used for all these cryomodules. For the three linac cryomodules, forced-flow LN2 cooling is used, and for the superbuncher and rebuncher, thermo-siphon cooling is incorporated. It is observed that the shield temperature of the superbuncher varies from 90 K to 110 K depending on the liquid nitrogen level. This temperature difference cannot be explained by the basic thermo-siphon concept with the heat load placed only on the up-flow line. A simple thermo-siphon experimental setup was therefore developed to simulate the thermal shield temperature profile. The mass flow rate of liquid nitrogen is measured for different heat loads on the up-flow line and different liquid levels. It is found that a small heat load on the down-flow line has a significant effect on the mass flow rate. The present paper investigates the data generated from the thermo-siphon experimental setup, and a theoretical analysis is presented to validate the measured temperature profile of the cryomodule shield.
NASA Astrophysics Data System (ADS)
Luo, Qiankun; Wu, Jianfeng; Yang, Yun; Qian, Jiazhong; Wu, Jichun
2014-11-01
This study develops a new probabilistic multi-objective fast harmony search algorithm (PMOFHS) for optimal design of groundwater remediation systems under uncertainty associated with the hydraulic conductivity (K) of aquifers. The PMOFHS integrates the previously developed deterministic multi-objective optimization method, namely multi-objective fast harmony search algorithm (MOFHS) with a probabilistic sorting technique to search for Pareto-optimal solutions to multi-objective optimization problems in a noisy hydrogeological environment arising from insufficient K data. The PMOFHS is then coupled with the commonly used flow and transport codes, MODFLOW and MT3DMS, to identify the optimal design of groundwater remediation systems for a two-dimensional hypothetical test problem and a three-dimensional Indiana field application involving two objectives: (i) minimization of the total remediation cost through the engineering planning horizon, and (ii) minimization of the mass remaining in the aquifer at the end of the operational period, whereby the pump-and-treat (PAT) technology is used to clean up contaminated groundwater. Also, Monte Carlo (MC) analysis is employed to evaluate the effectiveness of the proposed methodology. Comprehensive analysis indicates that the proposed PMOFHS can find Pareto-optimal solutions with low variability and high reliability and is a potentially effective tool for optimizing multi-objective groundwater remediation problems under uncertainty.
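One way to realize the probabilistic sorting step is to compare designs by the fraction of Monte Carlo realizations of the uncertain K field in which one Pareto-dominates the other. The sketch below is a generic illustration of that idea, not the published PMOFHS operator; objective values and distributions are invented.

import numpy as np

def dominance_probability(fa, fb):
    # fa, fb: (K, 2) arrays of objective values (cost, residual mass) for
    # designs a and b, one row per Monte Carlo realization of the K field.
    # Returns the fraction of realizations in which a Pareto-dominates b
    # (both objectives minimized).
    better_eq = np.all(fa <= fb, axis=1)
    strictly = np.any(fa < fb, axis=1)
    return np.mean(better_eq & strictly)

rng = np.random.default_rng(2)
fa = rng.normal([1.0, 2.0], 0.3, size=(500, 2))  # cheaper on average, noisier
fb = rng.normal([1.2, 2.1], 0.1, size=(500, 2))
print(f"P(a dominates b) = {dominance_probability(fa, fb):.2f}")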
Dall'Osso, F; Dominey-Howes, D; Moore, C; Summerhayes, S; Withycombe, G
2014-12-10
Approximately 85% of Australia's population live along the coastal fringe, an area with high exposure to extreme inundations such as tsunamis. However, to date, no Probabilistic Tsunami Hazard Assessments (PTHA) that include inundation have been published for Australia. This limits the development of appropriate risk reduction measures by decision and policy makers. We describe our PTHA undertaken for the Sydney metropolitan area. Using the NOAA NCTR model MOST (Method for Splitting Tsunamis), we simulate 36 earthquake-generated tsunamis with annual probabilities of 1:100, 1:1,000 and 1:10,000, occurring under present and future predicted sea level conditions. For each tsunami scenario we generate a high-resolution inundation map of the maximum water level and flow velocity, and we calculate the exposure of buildings and critical infrastructure. Results indicate that exposure to earthquake-generated tsunamis is relatively low for present events, but increases significantly with higher sea level conditions. The probabilistic approach allowed us to undertake a comparison with an existing storm surge hazard assessment. Interestingly, the exposure to all the simulated tsunamis is significantly lower than that for the 1:100 storm surge scenarios, under the same initial sea level conditions. The results have significant implications for multi-risk and emergency management in Sydney.
Probabilistic Risk Assessment for Astronaut Post Flight Bone Fracture
NASA Technical Reports Server (NTRS)
Lewandowski, Beth; Myers, Jerry; Licata, Angelo
2015-01-01
Introduction: Space flight potentially reduces the loading that bone can resist before fracture. This reduction in bone integrity may result from a combination of factors, most commonly reported as a reduction in astronaut BMD. Although evaluating the condition of bones continues to be a critical aspect of understanding space flight fracture risk, defining the loading regime, whether on earth, in microgravity, or in reduced gravity on a planetary surface, remains a significant component of estimating the fracture risks to astronauts. This presentation summarizes the concepts, development, and application of NASA's Bone Fracture Risk Module (BFxRM) to understanding pre-, post-, and in-mission astronaut bone fracture risk. The overview includes an assessment of contributing factors utilized in the BFxRM and illustrates how new information, such as the biomechanics of space suit design or a better understanding of post-flight activities, may influence astronaut fracture risk. Opportunities for the bone mineral research community to contribute to future model development are also discussed. Methods: To investigate the conditions in which spaceflight-induced changes to bone play a critical role in post-flight fracture probability, we implement a modified version of the NASA Bone Fracture Risk Model (BFxRM). Modifications included incorporation of variations in physiological characteristics, post-flight recovery rate, and variations in lateral fall conditions within the probabilistic simulation parameter space. The modeled fracture probability estimates for different loading scenarios at preflight and at 0 and 365 days post-flight are compared. Results: For simple lateral side falls, mean post-flight fracture probability is elevated over mean preflight fracture probability due to spaceflight-induced BMD loss and is not fully recovered at 365 days post-flight. In the case of more energetic falls, such as from elevated heights or with the addition of lateral movement, the contribution of spaceflight-induced bone quality changes is much less clear, indicating that more granular assessments, such as finite element modeling, may be needed to further assess the risks in these scenarios.
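At its core, a fracture-probability estimate of this kind is a stress-versus-strength Monte Carlo calculation. The sketch below is a heavily simplified stand-in for the BFxRM, with invented distributions for fall load and BMD-scaled bone strength, purely to show the structure of the computation:

import numpy as np

rng = np.random.default_rng(3)

def fracture_probability(n=100_000, bmd_loss=0.0):
    # Stress-versus-strength Monte Carlo: a lateral fall applies a random
    # hip load; fracture occurs when the load exceeds a strength that is
    # assumed to scale linearly with BMD. All distributions are invented.
    load = rng.lognormal(np.log(2500.0), 0.25, n)                 # N
    strength = rng.normal(4000.0 * (1.0 - bmd_loss), 600.0, n)    # N
    return np.mean(load > strength)

print("preflight         :", fracture_probability())
print("0 days post-flight:", fracture_probability(bmd_loss=0.10))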
Computational Design Tool for Bridge Hydrodynamic Loading in Inundated Flows of Midwest Rivers
DOT National Transportation Integrated Search
2009-12-01
The hydraulic forces experienced by an inundated bridge deck have great importance in the design of bridges. The proper estimation of loading exerted by the flow on the structure is important for design plans and is pertinent for evaluating its vulne...
Trementozzi, Andrea N; Leung, Cheuk-Yui; Osei-Yeboah, Frederick; Irdam, Erwin; Lin, Yiqing; MacPhee, J Michael; Boulas, Pierre; Karki, Shyam B; Zawaneh, Peter N
2017-05-15
Optimizing powder flow and compaction properties is critical for ensuring a robust tablet manufacturing process. The impact of the flow and compaction properties of the active pharmaceutical ingredient (API) becomes progressively more significant for higher drug load formulations and for scaling up manufacturing processes. This study demonstrated that the flow properties of a powder blend can be improved through API particle engineering, without critically impacting blend tabletability at elevated drug loadings. In studying a jet-milled API (D50 = 24 μm) and particle-engineered wet-milled APIs (D50 = 70 μm and 90 μm), the flow functions of all API lots were similarly poor despite the vast difference in average particle size (ffc < 4). This finding strays from the common notion that powder flow properties are directly correlated with particle size distribution. Upon adding excipients, however, clear trends in flow functions based on API particle size were observed: wet-milled API blends had a much improved flow function (ffc > 10) compared with the jet-milled API blends. Investigation of the compaction properties of the wet- and jet-milled powder blends also revealed that both materials produced robust tablets at the drug loadings used. The ability to practically demonstrate this uncommon observation, that similarly poor-flowing APIs can lead to a marked difference upon blending, is important for pharmaceutical development. It is especially important in early-phase development during API selection, and is advantageous particularly when material-sparing techniques are utilized.
Iredahl, Fredrik; Högstedt, Alexandra; Henricson, Joakim; Sjöberg, Folke; Tesselaar, Erik; Farnebo, Simon
2016-10-01
Insulin causes capillary recruitment in muscle and adipose tissue, but the metabolic and microvascular effects of insulin in the skin have not been studied in detail. The aim of this study was to measure glucose metabolism and microvascular blood flow in the skin during local insulin delivery and after an oral glucose load. Microdialysis catheters were inserted intracutaneously in human subjects. In eight subjects, two microdialysis catheters were inserted, one perfused with insulin and one with a control solution. First, the local effects of insulin were studied, followed by a systemic provocation with an oral glucose load. Additionally, as a control experiment, six subjects received neither local insulin delivery nor the oral glucose load. During microdialysis, local blood flow was measured by urea clearance and by laser speckle contrast imaging (LSCI). Within 15 minutes of local insulin delivery, microvascular blood flow in the skin increased (urea clearance: P=.047; LSCI: P=.002), paralleled by increases in pyruvate (P=.01) and lactate (P=.04), indicating an increase in glucose uptake. An oral glucose load increased urea clearance from the catheters, indicating an increase in skin perfusion, although no perfusion changes were detected with LSCI. The concentrations of glucose, pyruvate, and lactate increased in the skin after the oral glucose load. Insulin has metabolic and vasodilatory effects in the skin both when given locally and after systemic delivery through an oral glucose load.
Simulation of load traffic and steeped speed control of conveyor
NASA Astrophysics Data System (ADS)
Reutov, A. A.
2017-10-01
The article examines the simulation of stepped conveyor speed control in the Mathcad, Simulink, and Stateflow environments. To check the efficiency of the control algorithms and to determine the characteristics of the control system more accurately, the speed control process must be simulated with real traffic values over a work shift or a day, while empirical values of the load flow over shorter periods are needed to evaluate belt loading and the absence of spillage. Analytical formulas for the optimal speed step values were derived using empirical load values. The simulation checks the acceptability of an algorithm and determines the optimal control parameters corresponding to the load flow characteristics. The average speed and the number of speed switchings during the simulation are used as criteria of control efficiency. A simulation example is implemented in Mathcad. The average conveyor speed decreases substantially under two-step and three-step control; a further increase in the number of control steps decreases the average speed only insignificantly but considerably increases the intensity of speed switching. An incremental speed control algorithm uses different numbers of steps for growing and falling load traffic. This algorithm allows smooth control of conveyor speed changes under monotonic variation of the load flow, whereas load flow oscillations lead to unjustified increases or decreases of speed. The results can be applied to the design of belt conveyors with adjustable drives.
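A minimal sketch of a stepped speed selector with hysteresis, in the spirit of (but not copied from) the algorithms the article simulates; the capacity model, speed steps, and hysteresis band are assumptions:

def select_speed(load_flow, prev_speed, speeds, q_max, fill_target=0.85, band=0.08):
    # Pick the lowest discrete belt speed whose conveying capacity covers
    # the current load flow with margin fill_target; only shift down when
    # the flow is clearly below the lower step's capacity, so that an
    # oscillating load flow does not cause continual switching.
    speeds = sorted(speeds)
    cap = lambda v: fill_target * q_max * v / speeds[-1]   # capacity at speed v, t/h
    v_req = next((v for v in speeds if load_flow <= cap(v)), speeds[-1])
    if v_req < prev_speed and load_flow > (1.0 - band) * cap(v_req):
        return prev_speed                                  # stay inside hysteresis band
    return v_req

speeds = [1.0, 2.0, 3.15]                 # m/s, three-step control
v = speeds[-1]
for q in [820, 545, 530, 545, 450, 200]:  # load flow, t/h
    v = select_speed(q, v, speeds, q_max=1000.0)
    print(f"{q:4d} t/h -> {v} m/s")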
Caulkins, Carrie; Ebramzadeh, Edward; Winet, Howard
2009-05-01
The direct and indirect effects of muscle contraction on bone microcirculation and fluid flow are neither well documented nor explained. However, skeletal muscle contractions may affect the acquisition and maintenance of bone via stimulation of bone circulatory and interstitial fluid flow parameters. The purposes of this study were to assess the effects of transcutaneous electrical neuromuscular stimulation (TENS)-induced muscle contractions on cortical bone blood flow and bone mineral content, and to demonstrate that alterations in blood flow could occur independently of mechanical loading and systemic circulatory mechanisms. Bone chamber implants were used in a rabbit model to observe real-time blood flow rates and TENS-induced muscle contractions. Video recording of fluorescent microspheres injected into the blood circulation was used to calculate changes in cortical blood flow rates. TENS-induced repetitive muscle contractions uncoupled from mechanical loading instantaneously increased cortical microcirculatory flow, directly increased bone blood flow rates by 130%, and significantly increased bone mineral content over 7 weeks. Heart rates and blood pressure did not significantly increase due to TENS treatment. Our findings suggest that muscle contraction therapies have potential clinical applications for improving blood flow to cortical bone in the appendicular skeleton.
Effects of front-loading and stagger angle on endwall losses of high lift low pressure turbine vanes
NASA Astrophysics Data System (ADS)
Lyall, M. Eric
Past efforts to reduce the airfoil count in low pressure turbines have produced high lift profiles with unacceptably high endwall loss. The purpose of the current work is to suggest alternative approaches for reducing endwall losses. The effects of the fluid mechanics and high lift profile geometry are considered. Mixing effects of the mean flow and turbulence fields are decoupled to show that mean flow shear in the endwall wake is negligible compared to turbulent shear, indicating that turbulence dissipation is the primary cause of total pressure loss. The mean endwall flow field does influence total pressure loss by causing excessive wake growth and perhaps outright separation on the suction surface. For equivalent stagger angles, a front-loaded high lift profile will produce less endwall loss than one aft-loaded, primarily by suppressing suction surface flow separation. Increasing the stagger setting, however, increases the endwall loss due to the static pressure field generating a stronger blockage relative to the incoming endwall boundary layer flow and causing a larger mass of fluid to become entrained in the horseshoe vortex. In short, front-loading the pressure distribution suppresses suction surface separation whereas limiting the stagger angle suppresses inlet boundary layer separation. Results of this work suggest that a front-loaded low stagger profile be used at the endwall to reduce the endwall loss.
NASA Technical Reports Server (NTRS)
2014-01-01
Topics include: Data Fusion for Global Estimation of Forest Characteristics From Sparse Lidar Data; Debris and Ice Mapping Analysis Tool - Database; Data Acquisition and Processing Software - DAPS; Metal-Assisted Fabrication of Biodegradable Porous Silicon Nanostructures; Post-Growth, In Situ Adhesion of Carbon Nanotubes to a Substrate for Robust CNT Cathodes; Integrated PEMFC Flow Field Design for Gravity-Independent Passive Water Removal; Thermal Mechanical Preparation of Glass Spheres; Mechanistic-Based Multiaxial-Stochastic-Strength Model for Transversely-Isotropic Brittle Materials; Methods for Mitigating Space Radiation Effects, Fault Detection and Correction, and Processing Sensor Data; Compact Ka-Band Antenna Feed with Double Circularly Polarized Capability; Dual-Leadframe Transient Liquid Phase Bonded Power Semiconductor Module Assembly and Bonding Process; Quad First Stage Processor: A Four-Channel Digitizer and Digital Beam-Forming Processor; Protective Sleeve for a Pyrotechnic Reefing Line Cutter; Metabolic Heat Regenerated Temperature Swing Adsorption; CubeSat Deployable Log Periodic Dipole Array; Re-entry Vehicle Shape for Enhanced Performance; NanoRacks-Scale MEMS Gas Chromatograph System; Variable Camber Aerodynamic Control Surfaces and Active Wing Shaping Control; Spacecraft Line-of-Sight Stabilization Using LWIR Earth Signature; Technique for Finding Retro-Reflectors in Flash LIDAR Imagery; Novel Hemispherical Dynamic Camera for EVAs; 360 deg Visual Detection and Object Tracking on an Autonomous Surface Vehicle; Simulation of Charge Carrier Mobility in Conducting Polymers; Observational Data Formatter Using CMOR for CMIP5; Propellant Loading Physics Model for Fault Detection Isolation and Recovery; Probabilistic Guidance for Swarms of Autonomous Agents; Reducing Drift in Stereo Visual Odometry; Future Air-Traffic Management Concepts Evaluation Tool; Examination and A Priori Analysis of a Direct Numerical Simulation Database for High-Pressure Turbulent Flows; and Resource-Constrained Application of Support Vector Machines to Imagery.
Lagrangian based methods for coherent structure detection
NASA Astrophysics Data System (ADS)
Allshouse, Michael R.; Peacock, Thomas
2015-09-01
There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows.
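The canonical double-gyre mentioned above is easy to reproduce, and all four families of methods start from trajectories of this velocity field. A sketch with the standard parameter choices (A = 0.1, epsilon = 0.25, omega = 2*pi/10) and an RK4 integrator; grid resolution and integration time are illustrative:

import numpy as np

A, EPS, OM = 0.1, 0.25, 2.0 * np.pi / 10.0   # standard double-gyre parameters

def velocity(x, y, t):
    # u = -d(psi)/dy, v = d(psi)/dx for psi = A sin(pi f(x,t)) sin(pi y),
    # with f(x,t) = a(t) x^2 + b(t) x on the domain [0,2] x [0,1].
    a = EPS * np.sin(OM * t)
    b = 1.0 - 2.0 * EPS * np.sin(OM * t)
    f = a * x**2 + b * x
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * (2.0 * a * x + b)
    return u, v

def advect(x, y, t0, t1, n=1500):
    # RK4 trajectories; FTLE fields and cluster/braid inputs are built
    # from flow maps of exactly this kind.
    dt = (t1 - t0) / n
    for i in range(n):
        t = t0 + i * dt
        k1u, k1v = velocity(x, y, t)
        k2u, k2v = velocity(x + 0.5*dt*k1u, y + 0.5*dt*k1v, t + 0.5*dt)
        k3u, k3v = velocity(x + 0.5*dt*k2u, y + 0.5*dt*k2v, t + 0.5*dt)
        k4u, k4v = velocity(x + dt*k3u, y + dt*k3v, t + dt)
        x = x + dt * (k1u + 2*k2u + 2*k3u + k4u) / 6.0
        y = y + dt * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
    return x, y

x0, y0 = np.meshgrid(np.linspace(0, 2, 50), np.linspace(0, 1, 25))
xT, yT = advect(x0, y0, 0.0, 15.0)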
NASA Astrophysics Data System (ADS)
Leary, K. C. P.; Schmeeckle, M. W.
2017-12-01
Flow separation/reattachment on the lee side of alluvial bed forms is known to produce a complex turbulence field, but the spatiotemporal details of the associated patterns of bed load sediment transport remain largely unknown. Here we report turbulence-resolving, simultaneous measurements of bed load motion and near-bed fluid velocity downstream of a backward-facing step in a laboratory flume. Two synchronized high-speed video cameras simultaneously observed bed load motion and the motion of neutrally buoyant particles in a laser light sheet 6 mm above the bed at 250 frames/s downstream of a 3.8 cm backward-facing step. Particle image velocimetry (PIV) and acoustic Doppler velocimetry (ADV) were used to characterize turbulent flow patterns, while manual particle tracking techniques were used to characterize bed load transport. Octant analysis, conducted using the ADV data and coupled with Markovian sequence probability analysis, highlights differences between the flow near reattachment and farther downstream. Near reattachment, three distinct flow patterns are apparent; farther downstream, a dominant flow sequence develops. Localized, intermittent, high-magnitude transport events are more apparent near flow reattachment. These events are composed of streamwise and cross-stream fluxes of comparable magnitudes. The transport pattern and fluid velocity data are consistent with the existence of permeable "splat events," wherein a volume of fluid moves toward and impinges on the bed (sweep), causing a radial movement of fluid in all directions around the point of impingement (outward interaction). This is congruent with the flow patterns identified with octant analysis proximal to flow reattachment.
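Octant analysis itself is straightforward to sketch: each velocity sample is classified by the sign pattern of its three fluctuation components, and the Markovian sequence analysis reduces to an empirical transition matrix between octant codes. The data below are synthetic stand-ins, not the flume measurements:

import numpy as np

def octant_codes(u, v, w):
    # Classify each velocity sample into one of eight octants by the sign
    # pattern of its fluctuations; e.g. u'<0, v'>0 is the ejection-like
    # family and u'>0, v'<0 the sweep-like family, with the sign of w'
    # splitting each quadrant in two.
    up, vp, wp = u - u.mean(), v - v.mean(), w - w.mean()
    return 4 * (up > 0) + 2 * (vp > 0) + (wp > 0)    # integer codes 0..7

rng = np.random.default_rng(4)
u, v, w = rng.normal(size=(3, 10_000))   # synthetic stand-in for an ADV record
code = octant_codes(u, v, w)
occupancy = np.bincount(code, minlength=8) / code.size

T = np.zeros((8, 8))                     # empirical octant transition matrix
for a, b in zip(code[:-1], code[1:]):
    T[a, b] += 1.0
T = T / T.sum(axis=1, keepdims=True)     # row-stochastic Markov sequence probabilities
print(occupancy)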
Trends in phosphorus loading to the western basin of Lake ...
Dave Dolan spent much of his career computing and compiling phosphorus loads to the Great Lakes. None of his work in this area has been more valuable than his continued load estimates to Lake Erie, which have allowed us to unambiguously interpret the cyanobacteria blooms and hypoxia development in the lake. To help understand the recurrence of cyanobacteria blooms in the Western Basin of Lake Erie, we have examined the phosphorus loading to the Western Basin over the past 15 years, as well as the relative contributions from the various tributaries and the Detroit River. On an annual basis the total phosphorus load has not exhibited a trend, other than being well correlated with flow from the major tributaries. However, the dissolved reactive phosphorus (DRP) load has trended upward, returning to levels observed in the mid-1970s. This increase has largely been attributed to the increase in the flow-weighted DRP concentration in the Maumee River. Over the period, about half of the phosphorus load comes from the Maumee River and the other half from the Detroit River; other tributaries contribute much smaller amounts. Seasonal analysis shows that the highest percentage of the load occurs in spring during high flow events. We are very grateful to our friend Dave for making this type of analysis possible.
Stratification and loading of fecal indicator bacteria (FIB) in a tidally muted urban salt marsh.
Johnston, Karina K; Dorsey, John H; Saez, Jose A
2015-03-01
Stratification and loading of fecal indicator bacteria (FIB) were assessed in the main tidal channel of the Ballona Wetlands, an urban salt marsh receiving muted tidal flows, to (1) determine FIB concentration versus loading within the water column at differing tidal flows, (2) identify associations of FIB with other water quality parameters, and (3) compare wetland FIB concentrations to the adjacent estuary. Sampling was conducted four times during spring-tide events; samples were analyzed for FIB and turbidity (NTU) four times over a tidal cycle at pre-allocated depths, depending on the water level. Additional water quality parameters measured included temperature, salinity, oxygen, and pH. Loadings were calculated by integrating the stratified FIB concentrations with water column cross-sectional volumes corresponding to each depth. Enterococci and Escherichia coli were stratified both by concentration and loading, although these variables portrayed different patterns over a tidal cycle. Greatest concentrations occurred in surface to mid-strata levels, during flood tides when contaminated water flowed in from the estuary, and during ebb flows when sediments were suspended. Loading was greatest during flood flows and diminished during low tide periods. FIB concentrations within the estuary often were significantly greater than those within the wetland tide channel, supporting previous studies that the wetlands act as a sink for FIB. For public health water quality monitoring, these results indicate that more accurate estimates of FIB concentrations would be obtained by sampling a number of points within a water column rather than relying only on single surface samples.
Athanasopoulos, Dimitris; Louvaris, Zafeiris; Cherouveim, Evgenia; Andrianopoulos, Vasilis; Roussos, Charis; Zakynthinos, Spyros
2010-01-01
We investigated whether expiratory muscle loading induced by the application of expiratory flow limitation (EFL) during exercise in healthy subjects causes a reduction in quadriceps muscle blood flow in favor of the blood flow to the intercostal muscles. We hypothesized that, during exercise with EFL, quadriceps muscle blood flow would be reduced, whereas intercostal muscle blood flow would be increased compared with exercise without EFL. We initially performed an incremental exercise test on eight healthy male subjects with a Starling resistor in the expiratory line limiting expiratory flow to ∼1 l/s to determine peak EFL exercise workload. On a different day, two constant-load exercise trials were performed in a balanced ordering sequence, during which subjects exercised with or without EFL at peak EFL exercise workload for 6 min. Intercostal (probe over the 7th intercostal space) and vastus lateralis muscle blood flow index (BFI) was calculated by near-infrared spectroscopy using indocyanine green, whereas cardiac output (CO) was measured by an impedance cardiography technique. At exercise termination, CO and stroke volume were not significantly different during exercise with or without EFL (CO: 16.5 vs. 15.2 l/min, stroke volume: 104 vs. 107 ml/beat). Quadriceps muscle BFI during exercise with EFL (5.4 nM/s) was significantly (P = 0.043) lower compared with exercise without EFL (7.6 nM/s), whereas intercostal muscle BFI during exercise with EFL (3.5 nM/s) was significantly (P = 0.021) greater compared with that recorded during control exercise (0.4 nM/s). In conclusion, increased respiratory muscle loading during exercise in healthy humans causes an increase in blood flow to the intercostal muscles and a concomitant decrease in quadriceps muscle blood flow. PMID:20507965
Theoretical study of hull-rotor aerodynamic interference on semibuoyant vehicles
NASA Technical Reports Server (NTRS)
Spangler, S. B.; Smith, C. A.; Mendenhall, M. R.
1977-01-01
Theoretical methods are being developed to predict the mutual interference between rotor wakes and the hull for semibuoyant vehicles. The objective of the investigation is to predict the pressure distribution and overall loads on the hull in the presence of rotors whose locations, tilt angles, and disk loading are arbitrarily specified. The methods involve development of potential flow models for the hull alone in a nonuniform onset flow, a rotor wake which has the proper features to predict induced flow outside the wake, and a wake centerline specification technique which accounts for the reactions of the wake to a nonuniform crossflow. The flow models are used in sequence to solve for the mutual influence of the hull and rotor(s) on each other and the resulting loads. A flow separation model is included to estimate the influence of separation on hull loads at high sideslip angles. Only limited results have been obtained to date. These were obtained on a configuration which was tested in the Ames Research Center 7- by 10-Foot Low Speed Tunnel under Goodyear Aircraft Corporation sponsorship and indicate the nature of the interference pressure distribution on a configuration in hover.
Development of parallel algorithms for electrical power management in space applications
NASA Technical Reports Server (NTRS)
Berry, Frederick C.
1989-01-01
The application of parallel techniques for electrical power system analysis is discussed. The Newton-Raphson method of load flow analysis was used along with the decomposition-coordination technique to perform load flow analysis. The decomposition-coordination technique enables tasks to be performed in parallel by partitioning the electrical power system into independent local problems. Each independent local problem represents a portion of the total electrical power system on which a load flow analysis can be performed. The load flow analysis is performed on these partitioned elements by using the Newton-Raphson load flow method. These independent local problems will produce results for voltage and power which can then be passed to the coordinator portion of the solution procedure. The coordinator problem uses the results of the local problems to determine if any correction is needed on the local problems. The coordinator problem is also solved by an iterative method much like the local problems. The iterative method for the coordination problem will also be the Newton-Raphson method. Therefore, each iteration at the coordination level will result in new values for the local problems. The local problems will have to be solved again along with the coordinator problem until some convergence conditions are met.
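As a point of reference for the Newton-Raphson load flow step described above, here is a minimal sketch for a two-bus system (slack bus plus one PQ bus) using a finite-difference Jacobian. The line impedance and scheduled load are illustrative assumptions, and the decomposition-coordination layer of the abstract is not reproduced.

```python
import numpy as np

# Bus 1 is the slack (V = 1.0 pu, angle 0); bus 2 is a PQ load bus.
y = 1.0 / (0.02 + 0.08j)              # series admittance of the line (pu)
Y = np.array([[y, -y], [-y, y]])      # bus admittance matrix
P2_sched, Q2_sched = -0.8, -0.4       # scheduled injection at bus 2 (load)

def mismatch(x):
    """Power mismatch at bus 2 for unknowns x = [theta2, V2]."""
    th2, v2 = x
    V = np.array([1.0, v2 * np.exp(1j * th2)])
    S = V * np.conj(Y @ V)            # complex power injections
    return np.array([S[1].real - P2_sched, S[1].imag - Q2_sched])

x = np.array([0.0, 1.0])              # flat start
for _ in range(10):
    f = mismatch(x)
    if np.max(np.abs(f)) < 1e-8:
        break
    J = np.empty((2, 2))              # numerical Jacobian keeps it short
    for j in range(2):
        dx = np.zeros(2)
        dx[j] = 1e-6
        J[:, j] = (mismatch(x + dx) - f) / 1e-6
    x -= np.linalg.solve(J, f)        # Newton-Raphson update

print(f"theta2 = {x[0]:.4f} rad, V2 = {x[1]:.4f} pu")
```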
Investigation of the Flow Field and Performances of a Centrifugal Pump at Part Load
NASA Astrophysics Data System (ADS)
Prunières, R.; Inoue, Y.; Nagahara, T.
2016-11-01
Centrifugal pump performance curve instability, characterized by a local dent at part load, can be the consequence of flow instabilities in rotating or stationary parts. Such flow instabilities often result in abnormal operating conditions which can damage both the pump and the system. In order for the pump to have reliable operation over a wide flow rate range, it is necessary to achieve a design free of instability. The present paper focuses on performance curve instability of a centrifugal pump of mid specific speed (ωs = 0.65) for which instability was observed at part load during tests. The geometry used for this research consists of the first stage of a multi-stage centrifugal pump and is composed of a suction bend, a closed-type impeller, a vaned diffuser and return guide vanes. In order to analyse the instability phenomenon, PIV and CFD analyses were performed. The two methods show relatively good qualitative agreement. It appears that the main difference before and after the head drop is an increase of reverse flow rate at the diffuser passage inlet on the hub side. This reverse flow decreases the flow passing area at the diffuser passage inlet, preventing effective flow deceleration and impairing static pressure recovery.
Continuation Power Flow Analysis for PV Integration Studies at Distribution Feeders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jiyu; Zhu, Xiangqi; Lubkeman, David L.
2017-10-30
This paper presents a method for conducting continuation power flow simulation on high-solar-penetration distribution feeders. A load disaggregation method is developed to disaggregate the daily feeder load profiles collected in substations down to each load node, where the electricity consumption of residential houses and commercial buildings is modeled using actual data collected from single family houses and commercial buildings. This allows the modeling of power flow and voltage profile along a distribution feeder in a continuous fashion for a 24-hour period at minute-by-minute resolution. By separating the feeder into load zones based on the distance between the load node and the feeder head, we studied the impact of PV penetration on distribution grid operation in different seasons and under different weather conditions for different PV placements.
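The load disaggregation step lends itself to a simple proportional-share sketch: the measured feeder-head profile is split among nodes according to modeled consumption shapes so that the node profiles sum back to the substation measurement at every time step. The function name and example shapes below are hypothetical.

```python
import numpy as np

def disaggregate_feeder_load(feeder_profile, node_shapes):
    """Split a feeder-head load profile (kW per time step) among load
    nodes in proportion to each node's modeled consumption shape, so the
    node profiles sum back to the feeder profile at every time step."""
    shapes = np.asarray(node_shapes, float)       # (n_nodes, n_steps)
    weights = shapes / shapes.sum(axis=0)         # per-step node shares
    return weights * np.asarray(feeder_profile)   # (n_nodes, n_steps)

# Illustrative: a 3-node feeder over 4 time steps
feeder = [900.0, 1200.0, 1500.0, 1100.0]
shapes = [[1, 2, 3, 2],   # residential-like shape
          [2, 2, 2, 2],   # flat commercial shape
          [1, 1, 1, 1]]
print(disaggregate_feeder_load(feeder, shapes))
```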
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero Gomez, Pedro DJ; Richmond, Marshall C.
2014-04-17
Evaluating the consequences from blade-strike of fish on marine hydrokinetic (MHK) turbine blades is essential for incorporating environmental objectives into the integral optimization of machine performance. For instance, experience with conventional hydroelectric turbines has shown that innovative shaping of the blade and other machine components can lead to improved designs that generate more power without increased impacts to fish and other aquatic life. In this work, we used unsteady computational fluid dynamics (CFD) simulations of turbine flow and discrete element modeling (DEM) of particle motion to estimate the frequency and severity of collisions between a horizontal axis MHK tidal energy device and drifting aquatic organisms or debris. Two metrics are determined with the method: the strike frequency and the survival rate estimate. To illustrate the procedure step by step, an exemplary case of a simple runner model was run and compared against a probabilistic model widely used for strike frequency evaluation. The results for the exemplary case showed a strong correlation between the two approaches. In the application case of the MHK turbine flow, turbulent flow was modeled using detached eddy simulation (DES) in conjunction with a full moving rotor at full scale. The CFD simulated power and thrust were satisfactorily comparable to experimental results conducted in a water tunnel on a reduced-scale (1:8.7) version of the turbine design. A cloud of DEM particles was injected into the domain to simulate fish or debris entrained into the turbine flow. The strike frequency was the ratio of the count of colliding particles to the crossing sample size. The fish length and approaching velocity were test conditions in the simulations of the MHK turbine. Comparisons showed that DEM-based frequencies tend to be greater than previous results from Lagrangian particles and probabilistic models, mostly because the DEM scheme accounts for both the geometric aspects of the passage event (which the probabilistic method captures) and the fluid-particle interactions (which the Lagrangian particle method captures). The DEM-based survival rates were comparable to laboratory results for small fish but not for mid-size fish because of the considerably different turbine diameters. The modeling framework can be used for applications that aim at evaluating the biological performance of MHK turbine units during the design phase and to provide information to regulatory agencies needed for the environmental permitting process.
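The two metrics named above reduce to simple ratios once the DEM collision record is available. The sketch below assumes a per-strike mortality probability as an input, which is a simplification of the severity modeling in the paper.

```python
def strike_frequency(n_strikes: int, n_crossing: int) -> float:
    """Ratio of particles that collide with a blade to all particles
    that cross the rotor plane (the crossing sample size)."""
    return n_strikes / n_crossing

def survival_rate(strike_freq: float, mortality_given_strike: float) -> float:
    """Survival estimate: a particle survives unless it both strikes a
    blade and the strike is lethal. The mortality probability is an
    assumed input here, not the paper's severity model."""
    return 1.0 - strike_freq * mortality_given_strike

# e.g. 37 colliding particles out of 5000 crossings, 30% lethal strikes
f = strike_frequency(37, 5000)
print(f, survival_rate(f, 0.3))
```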
Dynamic Response during PEM Fuel Cell Loading-up
Pei, Pucheng; Yuan, Xing; Gou, Jun; Li, Pengcheng
2009-01-01
A study on the effects of controlling and operating parameters for a Proton Exchange Membrane (PEM) fuel cell on the dynamic phenomena during the loading-up process is presented. The effect of four parameters (loading-up amplitude and rate, operating pressure, and current level) on gas supply, or even starvation, in the flow field is analyzed based on the transient characteristics of current output and voltage. Experiments are carried out in a single fuel cell with an active area of 285 cm2. The results show that increasing the loading-up amplitude inevitably increases the possibility of gas starvation in channels when a constant flow rate has been set for the cathode; with a higher operating pressure, the dynamic performance is improved and gas starvation can be relieved. The transient gas supply in the flow channel during two loading-up modes has also been discussed. The experimental results will be helpful for optimizing the control and operation strategies for PEM fuel cells in vehicles.
Zimmerman, Marc J.; Waldron, Marcus C.; DeSimone, Leslie A.
2015-01-01
Analysis of the representative constituents (total phosphorus, total chromium, and suspended sediment) upstream and downstream of impoundments indicated that the existing impoundments, such as Rice City Pond, can be sources of particulate contaminant loads in the Blackstone River. Loads of particulate phosphorus, particulate chromium, and suspended sediment were consistently higher downstream from Rice City Pond than upstream during high-flow events, and there was a positive, linear relation between streamflow and changes in these constituents from upstream to downstream of the impoundment. Thus, particulate contaminants were mobilized from Rice City Pond during high-flow events and transported downstream. In contrast, downstream loads of particulate phosphorus, particulate chromium, and suspended sediment were generally lower than or equal to upstream loads for the former Rockdale Pond impoundment. Sediments associated with the former impoundment at Rockdale Pond, breached in the late 1960s, did not appear to be mobilized during the high-flow events monitored during this study.
Cooling system for superconducting magnet
Gamble, Bruce B.; Sidi-Yekhlef, Ahmed
1998-01-01
A cooling system is configured to control the flow of a refrigerant by controlling the rate at which the refrigerant is heated, thereby providing an efficient and reliable approach to cooling a load (e.g., magnets, rotors). The cooling system includes a conduit circuit connected to the load and within which a refrigerant circulates; a heat exchanger, connected within the conduit circuit and disposed remotely from the load; a first and a second reservoir, each connected within the conduit, each holding at least a portion of the refrigerant; a heater configured to independently heat the first and second reservoirs. In a first mode, the heater heats the first reservoir, thereby causing the refrigerant to flow from the first reservoir through the load and heat exchanger, via the conduit circuit and into the second reservoir. In a second mode, the heater heats the second reservoir to cause the refrigerant to flow from the second reservoir through the load and heat exchanger via the conduit circuit and into the first reservoir.
Design Considerations for Fusible Heat Sink
NASA Technical Reports Server (NTRS)
Cognata, Thomas J.; Leimkuehler, Thomas O.; Sheth, Rubik B.
2011-01-01
Traditionally, radiator designs are based on a passive or flow-through design depending on vehicle requirements. For cyclical heat loads, a novel idea of combining a full flow-through radiator with a phase change material is currently being investigated. The flow-through radiator can be designed for an average heat load, while the phase change material can be used as a source of supplemental heat rejection when vehicle heat loads go above the average load. Furthermore, by using water as the phase change material, protection from harmful radiation can be provided to the crew. This paper discusses numerous trades conducted to understand the most optimal fusible heat sink design for a particular heat load. Trades include configuration concepts, the amount of phase change material needed for supplemental heat rejection, and the form of interstitial material needed for optimal performance. These trades culminated in a fusible heat sink design. The paper discusses the design parameters taken into account to develop an engineering development unit.
Sojda, Richard S.; Towler, Erin; Roberts, Mike; Rajagopalan, Balaji
2013-01-01
Despite the influence of hydroclimate on river ecosystems, most efforts to date have focused on using climate information to predict streamflow for water supply. However, as water demands intensify and river systems are increasingly stressed, research is needed to explicitly integrate climate into streamflow forecasts that are relevant to river ecosystem management. To this end, we present a five step risk-based framework: (1) define risk tolerance, (2) develop a streamflow forecast model, (3) generate climate forecast ensembles, (4) estimate streamflow ensembles and associated risk, and (5) manage for climate risk. The framework is successfully demonstrated for an unregulated watershed in southwest Montana, where the combination of recent drought and water withdrawals has made it challenging to maintain flows needed for healthy fisheries. We put forth a generalized linear modeling (GLM) approach to develop a suite of tools that skillfully model decision-relevant low flow characteristics in terms of climate predictors. Probabilistic precipitation forecasts are used in conjunction with the GLMs, resulting in season-ahead prediction ensembles that provide the full risk profile. These tools are embedded in an end-to-end risk management framework that directly supports proactive fish conservation efforts. Results show that the use of forecasts can be beneficial to planning, especially in wet years, but historical precipitation forecasts are quite conservative (i.e., not very “sharp”). Synthetic forecasts show that a modest “sharpening” can strongly impact risk and improve skill. We emphasize that use in management depends on defining relevant environmental flows and risk tolerance, requiring local stakeholder involvement.
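Step 2 of the framework (a GLM relating a low-flow characteristic to climate predictors) and step 4 (propagating a precipitation forecast ensemble into a flow ensemble and a risk estimate) can be sketched as follows. The gamma family with a log link, the synthetic data, and the threshold are assumptions made for illustration, not the paper's configuration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
# Synthetic stand-in data: seasonal precipitation index vs. summer low flow
precip = rng.normal(0.0, 1.0, 40)
low_flow = rng.gamma(shape=5.0, scale=np.exp(2.0 + 0.4 * precip) / 5.0)

# Gamma GLM with a log link: a common choice for positive, skewed flows
X = sm.add_constant(precip)
model = sm.GLM(low_flow, X,
               family=sm.families.Gamma(link=sm.families.links.Log()))
fit = model.fit()

# Climate forecast ensemble -> season-ahead low-flow ensemble -> risk
precip_fc = rng.normal(0.5, 0.8, 1000)              # forecast members
flow_ens = fit.predict(sm.add_constant(precip_fc))  # predicted mean flows
threshold = 10.0  # hypothetical minimum flow for healthy fisheries
print("risk of sub-threshold flow:", np.mean(flow_ens < threshold))
```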
Jeong, Hyundoo; Qian, Xiaoning; Yoon, Byung-Jun
2016-10-06
Comparative analysis of protein-protein interaction (PPI) networks provides an effective means of detecting conserved functional network modules across different species. Such modules typically consist of orthologous proteins with conserved interactions, which can be exploited to computationally predict the modules through network comparison. In this work, we propose a novel probabilistic framework for comparing PPI networks and effectively predicting the correspondence between proteins, represented as network nodes, that belong to conserved functional modules across the given PPI networks. The basic idea is to estimate the steady-state network flow between nodes that belong to different PPI networks based on a Markov random walk model. The random walker is designed to make random moves to adjacent nodes within a PPI network as well as cross-network moves between potential orthologous nodes with high sequence similarity. Based on this Markov random walk model, we estimate the steady-state network flow - or the long-term relative frequency of the transitions that the random walker makes - between nodes in different PPI networks, which can be used as a probabilistic score measuring their potential correspondence. Subsequently, the estimated scores can be used for detecting orthologous proteins in conserved functional modules through network alignment. Through evaluations based on multiple real PPI networks, we demonstrate that the proposed scheme leads to improved alignment results that are biologically more meaningful at reduced computational cost, outperforming the current state-of-the-art algorithms. The source code and datasets can be downloaded from http://www.ece.tamu.edu/~bjyoon/CUFID .
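A much-simplified sketch of the steady-state network-flow idea follows: build a combined random-walk transition matrix from the two adjacency matrices plus cross-network similarity edges, take its stationary distribution, and score cross-network node pairs by their long-run flow. This illustrates the principle only and is not the CUFID implementation.

```python
import numpy as np

def steady_state_flow(A1, A2, S):
    """Score node correspondence between two PPI networks.
    A1, A2: adjacency matrices; S: cross-network sequence similarity
    (n1 x n2). The walker moves within a network or jumps between
    putative orthologs; pi_i * P_ij on cross edges is the flow score."""
    n1, n2 = A1.shape[0], A2.shape[0]
    W = np.zeros((n1 + n2, n1 + n2))
    W[:n1, :n1], W[n1:, n1:] = A1, A2       # within-network edges
    W[:n1, n1:], W[n1:, :n1] = S, S.T       # cross-network jumps
    P = W / W.sum(axis=1, keepdims=True)    # row-stochastic transitions
    vals, vecs = np.linalg.eig(P.T)         # stationary distribution
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = pi / pi.sum()
    flow = pi[:n1, None] * P[:n1, n1:]      # flow on cross edges
    return flow + (pi[n1:, None] * P[n1:, :n1]).T  # both directions

# Toy example: two 3-node networks with one high-similarity pair
A1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
S = np.full((3, 3), 0.1)
S[0, 0] = 1.0
print(steady_state_flow(A1, A2, S).round(3))
```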
Flow Quality for Turbine Engine Loads Simulator (TELS) Facility
1980-06-01
A mathematical simulation of the turbojet engine and jet deflector was formulated to estimate the severity of the recirculating flow (gas ingestion). (AEDC-TR-79-83, R. J. Schulz, ARO, Inc., June 1980.)
Runner, Michael S.; Turnipseed, D. Phil; Coupe, Richard H.
2002-01-01
Increased nutrient loading to the Gulf of Mexico from off-continent flux has been identified as contributing to the increase in the areal extent of the low dissolved-oxygen zone that develops annually off the Louisiana and Texas coast. The proximity of the Yazoo River Basin in northwestern Mississippi to the Gulf of Mexico, and the intensive agricultural activities in the basin have led to speculation that the Yazoo River Basin contributes a disproportionate amount of nitrogen and phosphorus to the Mississippi River and ultimately to the Gulf of Mexico. An empirical measurement of the flux of nitrogen and phosphorus from the Yazoo Basin has not been possible due to the hydrology of the lower Yazoo River Basin. Streamflow for the Yazoo River below Steele Bayou is affected by backwater from the Mississippi River. Flow at the gage is non-uniform and varying, with bi-directional and reverse flows possible. Streamflow was computed by using remote sensing and acoustic and conventional discharge and velocity measurement techniques. Streamflow from the Yazoo River for the 1996-2000 period accounted for 2.8 percent of the flow of the Mississippi River for the same period. Water samples from the Yazoo River were collected from February 1996 through December 2000 and were analyzed for total nitrogen, nitrate, total phosphorus, and orthophosphorus as part of the U.S. Geological Survey National Water-Quality Assessment Program. These data were used to compute annual loads of nitrogen and phosphorus discharged from the Yazoo River for the period 1996-2000. Annual loads of nitrogen and phosphorus were calculated by two methods. The first method used multivariate regression and the second method multiplied the mean annual concentration by the total annual flow. Load estimates based on the product of the mean annual concentration and the total annual flow were within the 95 percent confidence interval for the load calculated by multivariate regression in 10 of 20 cases. The Yazoo River loads, compared to average annual loads in the Mississippi River, indicated that the Yazoo River was contributing 1.4 percent of the total nitrogen load, 0.7 percent of the nitrate load, 3.4 percent of the total phosphorus load, and 1.6 percent of the orthophosphorus load during 1996 - 2000. The total nitrogen, nitrate, and orthophosphorus loads in the Yazoo River Basin were less than expected, whereas the total phosphorus load was slightly higher than expected based on discharge.
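The two load-calculation methods named above can be sketched compactly. The second function uses a log-log rating curve as a stand-in for the multivariate regression actually used in the study; all units and data are illustrative.

```python
import numpy as np

def load_mean_conc(concs_mg_l, annual_flow_m3):
    """Method 2: mean annual concentration times total annual flow.
    mg/L * m3 gives grams; divide by 1e3 for kg/yr."""
    return np.mean(concs_mg_l) * annual_flow_m3 / 1e3

def load_rating_curve(concs_mg_l, flows_m3s, daily_flows_m3s):
    """A log-log rating-curve stand-in for the regression approach:
    ln C = a + b ln Q fitted to sampled days, applied to every day."""
    b, a = np.polyfit(np.log(flows_m3s), np.log(concs_mg_l), 1)
    daily_conc = np.exp(a + b * np.log(daily_flows_m3s))     # mg/L
    daily_load = daily_conc * daily_flows_m3s * 86400 / 1e3  # kg/day
    return daily_load.sum()

rng = np.random.default_rng(2)
q_daily = rng.lognormal(4.0, 0.8, 365)                # daily flows, m3/s
idx = rng.choice(365, 24, replace=False)              # ~monthly sampling
c_obs = 0.05 * q_daily[idx] ** 0.3 * rng.lognormal(0, 0.2, 24)
print(load_mean_conc(c_obs, q_daily.sum() * 86400))   # kg/yr
print(load_rating_curve(c_obs, q_daily[idx], q_daily))
```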
A Probabilistic Approach to Model Update
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.
2001-01-01
Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. On rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while prediction of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.
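Assuming a locally linear sensitivity of the six measured deflections to the five parameters, the probabilistic update reduces to a standard Gaussian (Kalman-style) step, sketched below with illustrative numbers; the PA tools described above are more general than this.

```python
import numpy as np

# Locally linear model: deflections d = d0 + G (p - p0), with G the
# sensitivity matrix from the finite element model. Gaussian priors on
# the 5 parameters and Gaussian measurement noise give a linear update.
rng = np.random.default_rng(3)
G = rng.normal(size=(6, 5))            # 6 deflection sensors, 5 parameters
p0 = np.zeros(5)                       # nominal parameter values
d0 = np.zeros(6)                       # nominal model deflections
P = np.eye(5) * 0.5**2                 # prior parameter covariance
R = np.eye(6) * 0.05**2                # measurement noise covariance
d_meas = G @ np.array([0.3, -0.2, 0.1, 0.0, 0.4]) + rng.normal(0, 0.05, 6)

# Posterior mean and covariance (Kalman-style update)
K = P @ G.T @ np.linalg.inv(G @ P @ G.T + R)
p_post = p0 + K @ (d_meas - d0)
P_post = (np.eye(5) - K @ G) @ P
print(p_post.round(3))                       # updated parameter values
print(np.sqrt(np.diag(P_post)).round(3))     # remaining uncertainty
```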
NASA Astrophysics Data System (ADS)
Aochi, Hideo
2014-05-01
The Marmara region (Turkey) along the North Anatolian fault is known to have a high potential for large earthquakes in the coming decades. For the purpose of seismic hazard/risk evaluation, kinematic and dynamic source models have been proposed (e.g. Oglesby and Mai, GJI, 2012). In general, the simulated earthquake scenarios depend on the hypothesis and cannot be verified before the expected earthquake. We therefore introduce a probabilistic insight to give the initial/boundary conditions and to statistically analyze the simulated scenarios. We prepare different fault geometry models, tectonic loadings and hypocenter locations. We keep the same framework of the simulation procedure as for the dynamic rupture process of the adjacent 1999 Izmit earthquake (Aochi and Madariaga, BSSA, 2003), as the previous models were able to reproduce the seismological/geodetic aspects of that event. Irregularities in fault geometry play a significant role in controlling the rupture progress, and a relatively large change in geometry may act as a barrier. The variety of the simulated earthquake scenarios should be useful for estimating the variety of the expected ground motion.
NASA Technical Reports Server (NTRS)
Borden, C. S.; Schwartz, D. L.
1984-01-01
The purpose of this study is to assess the relative economic potentials of concentrating and two-axis tracking flat-plate photovoltaic arrays for central-station applications in the mid-1990s. Specific objectives of this study are to provide information on concentrator photovoltaic collector probabilistic price and efficiency levels to illustrate critical areas of R&D for concentrator cells and collectors, and to compare concentrator and flat-plate PV price and efficiency alternatives for several locations, based on their implied costs of energy. To deal with the uncertainties surrounding research and development activities in general, a probabilistic assessment of commercially achievable concentrator photovoltaic collector efficiencies and prices (at the factory loading dock) is performed. The results of this projection of concentrator photovoltaic technology are then compared with a previous flat-plate module price analysis (performed early in 1983). To focus this analysis on specific collector alternatives and their implied energy costs for different locations, similar two-axis tracking designs are assumed for both concentrator and flat-plate options.
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Jones, Scott M.; Arcara, Philip C., Jr.; Haller, William J.
2004-01-01
NASA's Ultra Efficient Engine Technology (UEET) program features advanced aeropropulsion technologies that include highly loaded turbomachinery, an advanced low-NOx combustor, high-temperature materials, intelligent propulsion controls, aspirated seal technology, and an advanced computational fluid dynamics (CFD) design tool to help reduce airplane drag. A probabilistic system assessment is performed to evaluate the impact of these technologies on aircraft fuel burn and NOx reductions. A 300-passenger aircraft, with two 396-kN thrust (85,000-pound) engines is chosen for the study. The results show that a large subsonic aircraft equipped with the UEET technologies has a very high probability of meeting the UEET Program goals for fuel-burn (or equivalent CO2) reduction (15% from the baseline) and LTO (landing and takeoff) NOx reductions (70% relative to the 1996 International Civil Aviation Organization rule). These results are used to provide guidance for developing a robust UEET technology portfolio, and to prioritize the most promising technologies required to achieve UEET program goals for the fuel-burn and NOx reductions.
Development of Testing Methodologies for the Mechanical Properties of MEMS
NASA Technical Reports Server (NTRS)
Ekwaro-Osire, Stephen
2003-01-01
This effort is to investigate and design testing strategies to determine the mechanical properties of MicroElectroMechanical Systems (MEMS) as well as investigate the development of a MEMS Probabilistic Design Methodology (PDM). One item of potential interest is the design of a test for the Weibull size effect in pressure membranes. The Weibull size effect is a consequence of a stochastic strength response predicted from the Weibull distribution. Confirming that MEMS strength is controlled by the Weibull distribution will enable the development of a probabilistic design methodology for MEMS - similar to the GRC developed CARES/Life program for bulk ceramics. However, the primary area of investigation will most likely be analysis and modeling of material interfaces for strength as well as developing a strategy to handle stress singularities at sharp corners, fillets, and material interfaces. This will be a continuation of the previous year's work. The ultimate objective of this effort is to further develop and verify the ability of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) code to predict the time-dependent reliability of MEMS structures subjected to multiple transient loads.
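The Weibull size effect mentioned above has a compact closed form, sketched here: failure probability scales with stressed volume, so the characteristic strength of a larger specimen is lower by the factor (V2/V1)^(1/m). The numbers below are illustrative.

```python
import numpy as np

def failure_probability(sigma, volume, sigma0, m, v0=1.0):
    """Two-parameter Weibull strength model with volume scaling:
    Pf = 1 - exp(-(V/V0) * (sigma/sigma0)^m)."""
    return 1.0 - np.exp(-(volume / v0) * (sigma / sigma0) ** m)

def size_effect_strength_ratio(volume_ratio, m):
    """Characteristic strength ratio between two specimen sizes:
    sigma1/sigma2 = (V1/V2)^(-1/m). Larger specimens are weaker."""
    return volume_ratio ** (-1.0 / m)

# A membrane with 10x the stressed volume, Weibull modulus m = 10:
print(size_effect_strength_ratio(10.0, 10.0))  # ~0.79: ~21% strength drop
print(failure_probability(300e6, 2.0, 400e6, 10.0))
```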
Bonin, Jennifer L.
2010-01-01
Samples of surface water and suspended sediment were collected from the two branches that make up the Elizabeth River in New Jersey - the West Branch and the Main Stem - from October to November 2008 to determine the concentrations of selected chlorinated organic and inorganic constituents. The sampling and analyses were conducted as part of Phase II of the New York-New Jersey Harbor Estuary Plan-Contaminant Assessment and Reduction Program (CARP), which is overseen by the New Jersey Department of Environmental Protection. Phase II of the New Jersey Workplan was conducted by the U.S. Geological Survey to define upstream tributary and point sources of contaminants in those rivers sampled during Phase I work, with special emphasis on the Passaic and Elizabeth Rivers. This portion of the Phase II study was conducted on the two branches of the Elizabeth River, which were previously sampled during July and August of 2003 at low-flow conditions. Samples were collected during 2008 from the West Branch and Main Stem of the Elizabeth River just upstream from their confluence at Hillside, N.J. Both tributaries were sampled once during low-flow discharge conditions and once during high-flow discharge conditions using the protocols and analytical methods that were used in the initial part of Phase II of the Workplan. Grab samples of streamwater also were collected at each site and were analyzed for cadmium, suspended sediment, and particulate organic carbon. The measured concentrations, along with available historical suspended-sediment and stream-discharge data, were used to estimate average annual loads of suspended sediment and organic compounds in the two branches of the Elizabeth River. Total suspended-sediment loads for 1975 to 2000 were estimated using rating curves developed from historical U.S. Geological Survey suspended-sediment and discharge data, where available. Concentrations of suspended-sediment-bound polychlorinated biphenyls (PCBs) in the Main Stem and the West Branch of the Elizabeth River during low-flow conditions were 534 ng/g (nanograms per gram) and 1,120 ng/g, respectively, representing loads of 27 g/yr (grams per year) and 416 g/yr, respectively. These loads were estimated using contaminant concentrations during low flow, the assumed 25-year average discharge, and the 25-year average suspended-sediment concentration. Concentrations of suspended-sediment-bound PCBs in the Main Stem and the West Branch of the Elizabeth River during high-flow conditions were 3,530 ng/g and 623 ng/g, respectively, representing loads of 176 g/yr and 231 g/yr, respectively. These loads were estimated using contaminant concentrations during high-flow conditions, the assumed 25-year average discharge, and the 25-year average suspended-sediment concentration. Concentrations of suspended-sediment-bound polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofuran compounds (PCDD/PCDFs) during low-flow conditions were 2,880 pg/g (picograms per gram) and 5,910 pg/g in the Main Stem and West Branch, respectively, representing average annual loads of 0.14 g/yr and 2.2 g/yr, respectively. Concentrations of suspended-sediment-bound PCDD/PCDFs during high-flow conditions were 40,900 pg/g and 12,400 pg/g in the Main Stem and West Branch, respectively, representing average annual loads of 2.05 g/yr and 4.6 g/yr, respectively.
Total toxic equivalency (TEQ) loads (sum of PCDD/PCDF and PCB TEQs) were 3.1 mg/yr (milligrams per year) (as 2, 3, 7, 8-TCDD) in the Main Stem and 28 mg/yr in the West Branch during low-flow conditions. Total TEQ loads (sum of PCDD/PCDFs and PCBs) were 27 mg/yr (as 2, 3, 7, 8-TCDD) in the Main Stem and 32 mg/yr in the West Branch during high-flow conditions. All of these load estimates, however, are directly related to the assumed annual discharge for the two branches. Long-term measurement of stream discharge and suspended-sediment concentrations would be needed to verify these loads. On the basis of the loads cal
NASA Technical Reports Server (NTRS)
Katzoff, S; Faison, M Frances; Dubose, Hugh C
1954-01-01
The field of a uniformly loaded wing in subsonic flow is discussed in terms of the acceleration potential. It is shown that, for the design of such wings, the slope of the mean camber surface at any point can be determined by a line integration around the wing boundary. By an additional line integration around the wing boundary, this method is extended to include the case where the local section lift coefficient varies with spanwise location (the chordwise loading at every section still remaining uniform). For the uniformly loaded wing of polygonal plan form, the integrations necessary to determine the local slope of the surface and the further integration of the slopes to determine the ordinate can be done analytically. An outline of these integrations and the resulting formulas are included. Calculated results are given for a sweptback wing with uniform chordwise loading and a highly tapered spanwise loading, a uniformly loaded delta wing, a uniformly loaded sweptback wing, and the same sweptback wing with uniform chordwise loading but elliptical span load distribution.
Hoghooghi, Nahal; Radcliffe, David E; Habteselassie, Mussie Y; Jeong, Jaehak
2017-05-01
Onsite wastewater treatment systems (OWTSs) can be a source of nitrogen (N) pollution in both surface and ground waters. In metropolitan Atlanta, GA, >26% of homes are on OWTSs. In a previous article, we used the Soil Water Assessment Tool to model the effect of OWTSs on stream flow in the Big Haynes Creek Watershed in metropolitan Atlanta. The objective of this study was to estimate the effect of OWTSs, including failing systems, on the nitrate as N (NO3-N) load in the same watershed. Big Haynes Creek has a drainage area of 44 km2 with mainly urban land use (67%), and most of the homes use OWTSs. A USGS gauge station where stream flow was measured daily and NO3-N concentrations were measured monthly was used as the outlet. The model was simulated for 12 yr. Overall, the model showed satisfactory daily stream flow and NO3-N loads with Nash-Sutcliffe coefficients of 0.62 and 0.58 for the calibration period and 0.67 and 0.33 for the validation period at the outlet of the Big Haynes Watershed. Onsite wastewater treatment systems caused an average increase in NO3-N load of 23% at the watershed scale and 29% at the outlet of a subbasin with the highest density of OWTSs. Failing OWTSs were estimated to be 1% of the total systems and did not have a large impact on stream flow or NO3-N load. The NO3-N load was 74% of the total N load in the watershed, indicating the important effect of OWTSs on stream loads in this urban watershed. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Methods for Estimating Annual Wastewater Nutrient Loads in the Southeastern United States
McMahon, Gerard; Tervelt, Larinda; Donehoo, William
2007-01-01
This report describes an approach for estimating annual total nitrogen and total phosphorus loads from point-source dischargers in the southeastern United States. Nutrient load estimates for 2002 were used in the calibration and application of a regional nutrient model, referred to as the SPARROW (SPAtially Referenced Regression On Watershed attributes) watershed model. Loads from dischargers permitted under the National Pollutant Discharge Elimination System were calculated using data from the U.S. Environmental Protection Agency Permit Compliance System database and individual state databases. Site information from both state and U.S. Environmental Protection Agency databases, including latitude and longitude and monitored effluent data, was compiled into a project database. For sites with a complete effluent-monitoring record, effluent-flow and nutrient-concentration data were used to develop estimates of annual point-source nitrogen and phosphorus loads. When flow data were available but nutrient-concentration data were missing or incomplete, typical pollutant-concentration values of total nitrogen and total phosphorus were used to estimate load. In developing typical pollutant-concentration values, the major factors assumed to influence wastewater nutrient-concentration variability were the size of the discharger (the amount of flow), the season during which discharge occurred, and the Standard Industrial Classification code of the discharger. One insight gained from this study is that in order to gain access to flow, concentration, and location data, close communication and collaboration are required with the agencies that collect and manage the data. In addition, the accuracy and usefulness of the load estimates depend on the willingness of the states and the U.S. Environmental Protection Agency to provide guidance and review for at least a subset of the load estimates that may be problematic.
Characterization of centrifugally-loaded flame migration for ultra-compact combustors
NASA Astrophysics Data System (ADS)
LeBay, Kenneth D.
The Air Force Research Laboratory (AFRL) has designed a centrifugally-loaded Ultra-Compact Combustor (UCC) showing viable merit for reducing gas turbine combustor length by as much as 66%. The overarching goal of this research was to characterize the migration of centrifugally-loaded flames in a sectional model of the UCC to enable scaling of the design from 15 cm to the 50-75 cm diameter of most engines. Two-line Planar Laser-Induced Fluorescence (PLIF) thermometry of OH, time-resolved Particle Image Velocimetry (PIV), and high-speed video data were collected. Using a sectional UCC model, the flame migration angle was determined to be a function of the UCC/core velocity ratio (VR), while both the VR and the centrifugal or "g-load" affected the migration quantity. Higher g-loads and lower VRs yielded higher migration, but lower VRs also produced lower core flow temperatures due to higher core air mass flow. A comparison of the straight and curved UCC sections showed the centrifugal load increased the flame migration but also increased unsteadiness. The flame migration into the core was estimated using pressure and temperature measurements upstream, and PIV measurements downstream, of the core flow interface with constant density and velocity profile assumptions. The flame migration quantity was used to estimate the core flow temperature, which was in relatively good agreement with the measured PLIF values. The migration quantity scaled relatively linearly with the UCC tangential velocity, which corresponds to the g-load value, with the slope determined by the VR. A simple analytical model resulted for the dependence of the migration quantity on the tangential velocity and VR. The quantitative relationships determined in this research provide a detailed description of the migration of centrifugally-loaded flames in a sectional UCC.
Overview of aerothermodynamic loads definition study
NASA Technical Reports Server (NTRS)
Gaugler, Raymond E.
1991-01-01
The objective of the Aerothermodynamic Loads Definition Study is to develop methods of accurately predicting the operating environment in advanced Earth-to-Orbit (ETO) propulsion systems, such as the Space Shuttle Main Engine (SSME) powerhead. Development of time averaged and time dependent three dimensional viscous computer codes as well as experimental verification and engine diagnostic testing are considered to be essential in achieving that objective. Time-averaged, nonsteady, and transient operating loads must all be well defined in order to accurately predict powerhead life. Described here is work in unsteady heat flow analysis, improved modeling of preburner flow, turbulence modeling for turbomachinery, computation of three dimensional flow with heat transfer, and unsteady viscous multi-blade row turbine analysis.
Multi-Hazard Advanced Seismic Probabilistic Risk Assessment Tools and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coleman, Justin L.; Bolisetti, Chandu; Veeraraghavan, Swetha
Design of nuclear power plant (NPP) facilities to resist natural hazards has been a part of the regulatory process from the beginning of the NPP industry in the United States (US), but has evolved substantially over time. The original set of approaches and methods was entirely deterministic in nature and focused on a traditional engineering margins-based approach. However, over time probabilistic and risk-informed approaches were also developed and implemented in US Nuclear Regulatory Commission (NRC) guidance and regulation. A defense-in-depth framework has also been incorporated into US regulatory guidance over time. As a result, today, the US regulatory framework incorporates deterministic and probabilistic approaches for a range of different applications and for a range of natural hazard considerations. This framework will continue to evolve as a result of improved knowledge and newly identified regulatory needs and objectives, most notably in response to the NRC activities developed in response to the 2011 Fukushima accident in Japan. Although the US regulatory framework has continued to evolve over time, the tools, methods and data available to the US nuclear industry to meet the changing requirements have not kept pace. Notably, there is significant room for improvement in the tools and methods available for external event probabilistic risk assessment (PRA), which is the principal assessment approach used in risk-informed regulations and risk-informed decision-making applied to natural hazard assessment and design. This is particularly true if PRA is applied to natural hazards other than seismic loading. Development of a new set of tools and methods that incorporate current knowledge, modern best practice, and state-of-the-art computational resources would lead to more reliable assessment of facility risk and risk insights (e.g., the SSCs and accident sequences that are most risk-significant), with less uncertainty and reduced conservatisms.
Elwan, Ahmed; Singh, Ranvir; Patterson, Maree; Roygard, Jon; Horne, Dave; Clothier, Brent; Jones, Geoffrey
2018-01-11
Better management of water quality in streams, rivers and lakes requires precise and accurate estimates of different contaminant loads. We assessed four sampling frequencies (2 days, weekly, fortnightly and monthly) and five load calculation methods (global mean (GM), rating curve (RC), ratio estimator (RE), flow-stratified (FS) and flow-weighted (FW)) to quantify loads of nitrate-nitrogen (NO3-N), soluble inorganic nitrogen (SIN), total nitrogen (TN), dissolved reactive phosphorus (DRP), total phosphorus (TP) and total suspended solids (TSS) in the Manawatu River, New Zealand. The estimated annual river loads were compared to the reference 'true' loads, calculated using daily measurements of flow and water quality from May 2010 to April 2011, to quantify bias (i.e. accuracy) and root mean square error, RMSE (i.e. accuracy and precision). The GM method resulted in relatively higher RMSE values and a consistent negative bias (i.e. underestimation) in estimates of annual river loads across all sampling frequencies. The RC method resulted in the lowest RMSE for TN, TP and TSS at monthly sampling frequency. Yet RC highly overestimated the loads for parameters that showed a dilution effect, such as NO3-N and SIN. The FW and RE methods gave similar results, and there was no essential improvement in using RE over FW. In general, FW and RE performed better than FS in terms of bias, but FS performed slightly better than FW and RE in terms of RMSE for most of the water quality parameters (DRP, TP, TN and TSS) using a monthly sampling frequency. We found no significant decrease in RMSE values for estimates of NO3-N, SIN, TN and DRP loads when the sampling frequency was increased from monthly to fortnightly. The bias and RMSE values in estimates of TP and TSS loads (estimated by FW, RE and FS), however, showed a significant decrease in the case of weekly or 2-day sampling. This suggests potential for a higher sampling frequency during flow peaks for more precise and accurate estimates of annual river loads for TP and TSS, in the study river and under other similar conditions.
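To make the bias behavior concrete, here is a minimal sketch of the GM and FW estimators applied to a synthetic year in which concentration rises with flow; GM underestimates the true load in exactly the way reported above. Data and units are illustrative.

```python
import numpy as np

def load_gm(c_sampled, q_daily):
    """Global mean (GM): mean sampled concentration times mean daily
    flow, scaled to a year. Biased low when load is flow-driven."""
    return np.mean(c_sampled) * np.mean(q_daily) * 86400 * 365 / 1e3  # kg/yr

def load_fw(c_sampled, q_sampled, q_daily):
    """Flow-weighted (FW): flow-weighted mean concentration times total
    annual flow."""
    c_fw = np.sum(c_sampled * q_sampled) / np.sum(q_sampled)
    return c_fw * np.sum(q_daily) * 86400 / 1e3                       # kg/yr

rng = np.random.default_rng(4)
q = rng.lognormal(3.0, 0.9, 365)                  # daily flows, m3/s
c = 0.02 * q**0.5 * rng.lognormal(0, 0.15, 365)   # conc rises with flow, mg/L
samp = np.arange(0, 365, 30)                      # ~monthly sampling days
true = np.sum(c * q) * 86400 / 1e3                # reference 'true' load
print(true, load_gm(c[samp], q), load_fw(c[samp], q[samp], q))
```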
NASA Technical Reports Server (NTRS)
Jadaan, Osama
2001-01-01
Present capabilities of the NASA CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code include probabilistic life prediction of ceramic components subjected to fast fracture, slow crack growth (stress corrosion), and cyclic fatigue failure modes. Currently, this code has the capability to compute the time-dependent reliability of ceramic structures subjected to simple time-dependent loading. For example, in slow crack growth (SCG) type failure conditions CARES/Life can handle the cases of sustained and linearly increasing time-dependent loads, while for cyclic fatigue applications various types of repetitive constant-amplitude loads can be accounted for. In real applications, applied loads are rarely that simple, but rather vary with time in more complex ways, such as during engine start-up and shutdown and under dynamic and vibrational loads. In addition, when a given component is subjected to transient environmental and/or thermal conditions, the material properties also vary with time. The objective of this paper is to demonstrate a methodology capable of predicting the time-dependent reliability of components subjected to transient thermomechanical loads that takes into account the change in material response with time. In this paper, the dominant delayed failure mechanism is assumed to be SCG. This capability has been added to the NASA CARES/Life code, which has also been modified to have the ability of interfacing with commercially available FEA codes executed for transient load histories. An example involving a ceramic exhaust valve subjected to combustion cycle loads is presented to demonstrate the viability of this methodology and the CARES/Life program.
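A single-element sketch of time-dependent reliability under SCG is given below, assuming the usual power-law crack-growth parameters (n, B): the transient stress history is collapsed to an equivalent stress, mapped to an equivalent inert strength, and evaluated against a Weibull distribution. All parameter values are illustrative, and CARES/Life performs this per element over a full finite element mesh.

```python
import numpy as np

def scg_equivalent_stress(sigma_t, dt, n):
    """Collapse a transient stress history sigma(t) into a constant
    equivalent stress over the same duration, using the power-law
    slow-crack-growth exponent n:
    sigma_eq = [ (1/T) * integral(sigma(t)^n dt) ]^(1/n)."""
    T = len(sigma_t) * dt
    return (np.sum(np.asarray(sigma_t) ** n) * dt / T) ** (1.0 / n)

def scg_reliability(sigma_eq, T, m, n, sigma0, B):
    """Single-element Weibull reliability under SCG: the service
    stress/time pair maps to an equivalent inert strength
    s = (sigma_eq^n * T / B + sigma_eq^(n-2))^(1/(n-2)), then
    R = exp(-(s/sigma0)^m). Units here: MPa, seconds, B in MPa^2*s."""
    s = (sigma_eq ** n * T / B + sigma_eq ** (n - 2)) ** (1.0 / (n - 2))
    return np.exp(-(s / sigma0) ** m)

# One combustion-cycle stress pulse sampled at 10 ms (illustrative numbers)
t = np.linspace(0.0, 0.1, 11)
sigma = 150.0 + 80.0 * np.sin(np.pi * t / 0.1)   # MPa
s_eq = scg_equivalent_stress(sigma, dt=0.01, n=20)
print(scg_reliability(s_eq, T=1e7, m=10, n=20, sigma0=450.0, B=2e5))
```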
Hazard Monitoring of Growing Lava Flow Fields Using Seismic Tremor
NASA Astrophysics Data System (ADS)
Eibl, E. P. S.; Bean, C. J.; Jónsdottir, I.; Hoskuldsson, A.; Thordarson, T.; Coppola, D.; Witt, T.; Walter, T. R.
2017-12-01
An effusive eruption in 2014/15 created an 85 km2 lava flow field in a remote location in the Icelandic highlands. The lava flows did not threaten any settlements or paved roads, but they were nevertheless monitored in detail across disciplines. Images from satellites and aircraft, ground-based video monitoring, GPS and seismic recordings allowed the monitoring and reconstruction of a detailed time series of the growing lava flow field. While satellite images and probabilistic modelling of lava flows are quite common tools for monitoring current growth and forecasting the future growth direction, here we show that seismic recordings can be of use too. We installed a cluster of seismometers 15 km from the vents and recorded the ground vibrations associated with the eruption. This seismic tremor was generated not only below the vents, but also at the edges of the growing lava flow field, and it indicated the parts of the lava flow field that were most actively growing. Whilst the time resolution is in the range of days for satellites, seismic stations easily sample continuously at 100 Hz and could therefore provide a much better resolution and estimate of the lava flow hazard in real time.
The calculation of downwash behind supersonic wings with an application to triangular plan forms
NASA Technical Reports Server (NTRS)
Lomax, Harvard; Sluder, Loma; Heaslet, Max A
1950-01-01
A method is developed consistent with the assumptions of small perturbation theory which provides a means of determining the downwash behind a wing in supersonic flow for a known load distribution. The analysis is based upon the use of supersonic doublets which are distributed over the plan form and wake of the wing in a manner determined from the wing loading. The equivalence in subsonic and supersonic flow of the downwash at infinity corresponding to a given load distribution is proved.
Groundwater Remediation using Bayesian Information-Gap Decision Theory
NASA Astrophysics Data System (ADS)
O'Malley, D.; Vesselinov, V. V.
2016-12-01
Probabilistic analyses of groundwater remediation scenarios frequently fail because the probability of an adverse, unanticipated event occurring is often high. In general, models of flow and transport in contaminated aquifers are always simpler than reality. Further, when a probabilistic analysis is performed, probability distributions are usually chosen more for convenience than correctness. The Bayesian Information-Gap Decision Theory (BIGDT) was designed to mitigate the shortcomings of the models and probabilistic decision analyses by leveraging a non-probabilistic decision theory - information-gap decision theory. BIGDT considers possible models that have not been explicitly enumerated and does not require us to commit to a particular probability distribution for model and remediation-design parameters. Both the set of possible models and the set of possible probability distributions grow as the degree of uncertainty increases. The fundamental question that BIGDT asks is "How large can these sets be before a particular decision results in an undesirable outcome?". The decision that allows these sets to be the largest is considered to be the best option. In this way, BIGDT enables robust decision support for groundwater remediation problems. Here we apply BIGDT in a representative groundwater remediation scenario where different options for hydraulic containment and pump & treat are being considered. BIGDT requires many model runs, and for complex models high-performance computing resources are needed. These analyses are carried out on synthetic problems but are applicable to real-world problems such as contamination at LANL sites. BIGDT is implemented in Julia (a high-level, high-performance dynamic programming language for technical computing) and is part of the MADS framework (http://mads.lanl.gov/ and https://github.com/madsjulia/Mads.jl).
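The robustness question quoted above can be sketched numerically (in Python here, although BIGDT itself is implemented in Julia): grow the horizon of uncertainty around a nominal parameter until the worst case in the uncertainty set violates the performance requirement. The two decision models and all numbers below are hypothetical.

```python
import numpy as np

def robustness(decision_perf, nominal, requirement, h_grid):
    """Info-gap robustness sketch: the largest horizon of uncertainty h
    such that the worst-case performance over the interval
    [nominal*(1-h), nominal*(1+h)] still satisfies the requirement.
    decision_perf(k) maps an uncertain parameter (e.g. hydraulic
    conductivity) to a performance measure (smaller is better here)."""
    best_h = 0.0
    for h in h_grid:
        ks = np.linspace(nominal * (1 - h), nominal * (1 + h), 101)
        if max(decision_perf(k) for k in ks) <= requirement:
            best_h = h
        else:
            break
    return best_h

# Two hypothetical remediation designs with different sensitivity to K
pump_and_treat = lambda k: 40 + 120 * (k / 1e-4)   # peak conc., ug/L
containment = lambda k: 70 + 40 * (k / 1e-4)
h_grid = np.linspace(0.0, 1.0, 101)
for name, f in [("pump&treat", pump_and_treat), ("containment", containment)]:
    print(name, robustness(f, nominal=1e-4, requirement=200.0, h_grid=h_grid))
```

The design that tolerates the larger horizon of uncertainty before violating the requirement is the robust choice, even if it performs worse at the nominal parameter value.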
Bouchard, Kristofer E.; Ganguli, Surya; Brainard, Michael S.
2015-01-01
The majority of distinct sensory and motor events occur as temporally ordered sequences with rich probabilistic structure. Sequences can be characterized by the probability of transitioning from the current state to upcoming states (forward probability), as well as the probability of having transitioned to the current state from previous states (backward probability). Despite the prevalence of probabilistic sequencing of both sensory and motor events, the Hebbian mechanisms that mold synapses to reflect the statistics of experienced probabilistic sequences are not well understood. Here, we show through analytic calculations and numerical simulations that Hebbian plasticity (correlation, covariance, and STDP) with pre-synaptic competition can develop synaptic weights equal to the conditional forward transition probabilities present in the input sequence. In contrast, post-synaptic competition can develop synaptic weights proportional to the conditional backward probabilities of the same input sequence. We demonstrate that to stably reflect the conditional probability of a neuron's inputs and outputs, local Hebbian plasticity requires balance between competitive learning forces that promote synaptic differentiation and homogenizing learning forces that promote synaptic stabilization. The balance between these forces dictates a prior over the distribution of learned synaptic weights, strongly influencing both the rate at which structure emerges and the entropy of the final distribution of synaptic weights. Together, these results demonstrate a simple correspondence between the biophysical organization of neurons, the site of synaptic competition, and the temporal flow of information encoded in synaptic weights by Hebbian plasticity while highlighting the utility of balancing learning forces to accurately encode probability distributions, and prior expectations over such probability distributions. PMID:26257637
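A minimal simulation of the forward-probability result is sketched below: Hebbian potentiation followed by normalization of each pre-synaptic unit's outgoing weights (pre-synaptic competition) drives the weight matrix toward the conditional forward transition probabilities of the input sequence. The learning rate and chain are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
# Ground-truth forward transition matrix of a 3-state sequence
T = np.array([[0.1, 0.6, 0.3],
              [0.5, 0.2, 0.3],
              [0.3, 0.3, 0.4]])

# Generate a long probabilistic sequence from the chain
states = [0]
for _ in range(50_000):
    states.append(rng.choice(3, p=T[states[-1]]))

# Hebbian learning with pre-synaptic competition: after each correlation
# update, the outgoing weights of the active pre-synaptic unit are
# renormalized so they compete for a fixed total.
W = np.full((3, 3), 1.0 / 3.0)
eta = 0.01
for pre, post in zip(states[:-1], states[1:]):
    W[pre, post] += eta          # correlational potentiation
    W[pre] /= W[pre].sum()       # pre-synaptic competition

print(np.round(W, 2))            # approaches the forward probabilities T
```

Each normalization step is an exponential moving average toward the observed successor, so W[i, j] converges to P(next = j | current = i), the forward probability described above.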
NASA Astrophysics Data System (ADS)
Addor, N.; Jaun, S.; Fundel, F.; Zappa, M.
2012-04-01
The Sihl River flows through Zurich, Switzerland's most populated city, for which it represents the largest flood threat. To anticipate extreme discharge events and provide decision support in case of flood risk, a hydrometeorological ensemble prediction system (HEPS) was launched operationally in 2008. This model chain relies on deterministic (COSMO-7) and probabilistic (COSMO-LEPS) atmospheric forecasts, which are used to force a semi-distributed hydrological model (PREVAH) coupled to a hydraulic model (FLORIS). The resulting hydrological forecasts are eventually communicated to the stakeholders involved in the Sihl discharge management. This fully operational setting provides a real framework with which we assessed the potential of deterministic and probabilistic discharge forecasts for flood mitigation. To study the suitability of HEPS for small-scale basins and to quantify the added value conveyed by the probability information, a 31-month reforecast was produced for the Sihl catchment (336 km2). Several metrics support the conclusion that the performance gain is of up to 2 days lead time for the catchment considered. Brier skill scores show that probabilistic hydrological forecasts outperform their deterministic counterparts for all the lead times and event intensities considered. The small size of the Sihl catchment does not prevent skillful discharge forecasts, but makes them particularly dependent on correct precipitation forecasts. Our evaluation stresses that the capacity of the model to provide confident and reliable mid-term probability forecasts for high discharges is limited. We finally highlight challenges for making decisions on the basis of hydrological predictions, and discuss the need for a tool to be used in addition to forecasts to compare the different mitigation actions possible in the Sihl catchment.
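The Brier skill comparison reported above can be sketched in a few lines: score the ensemble-derived event probabilities against the binary observations, using the deterministic forecast (probabilities of 0 or 1) as the reference. The toy data below are assumptions.

```python
import numpy as np

def brier_score(p_fc, obs):
    """Brier score for probabilistic forecasts of a binary event (e.g.
    discharge above a flood threshold): mean squared difference between
    forecast probability and the 0/1 outcome."""
    return np.mean((np.asarray(p_fc, float) - np.asarray(obs, float)) ** 2)

def brier_skill_score(p_fc, p_ref, obs):
    """BSS > 0 means the forecast beats the reference (here, a
    deterministic forecast treated as probabilities of 0 or 1)."""
    return 1.0 - brier_score(p_fc, obs) / brier_score(p_ref, obs)

obs = [1, 0, 0, 1, 0, 1, 0, 0]                     # event occurred?
p_ens = [0.8, 0.2, 0.1, 0.6, 0.3, 0.9, 0.2, 0.1]   # fraction of members above threshold
p_det = [1, 0, 0, 0, 0, 1, 1, 0]                   # single deterministic run
print(brier_skill_score(p_ens, p_det, obs))        # 0.8 in this toy case
```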
Fracture Mechanics Analysis of LH2 Feed Line Flow Liners
NASA Technical Reports Server (NTRS)
James, Mark A.; Dawicke, David S.; Brzowski, Matthew B.; Raju, Ivatury S.; Elliott, Kenny B.; Harris, Charles E.
2006-01-01
Inspections of the Space Shuttle Main Engine revealed fatigue cracks growing from slots in the flow liner of the liquid hydrogen (LH2) feed lines. During flight, the flow liners experience complex loading induced by flow of LH2 and the resonance characteristics of the structure. The flow liners are made of Inconel 718 and had previously not been considered a fracture critical component. However, fatigue failure of a flow liner could have catastrophic effect on the Shuttle engines. A fracture mechanics study was performed to determine if a damage tolerance approach to life management was possible and to determine the sensitivity to the load spectra, material properties, and crack size. The load spectra were derived separately from ground tests and material properties were obtained from coupon tests. The stress-intensity factors for the fatigue cracks were determined from a shell-dynamics approach that simulated the dominant resonant frequencies. Life predictions were obtained using the NASGRO life prediction code. The results indicated that adequate life could not be demonstrated for initial crack lengths of the size that could be detected by traditional NDE techniques.
Benchmark testing of DIII-D neutral beam modeling with water flow calorimetry
Rauch, J. M.; Crowley, B. J.; Scoville, J. T.; ...
2016-06-02
In this paper, power loading on beamline components in the DIII-D neutral beam system is measured using water flow calorimetry. The results are used to benchmark beam transport models. Finally, anomalously high heat loads in the magnet region are investigated and a speculative hypothesis as to their origin is presented.
NASA Astrophysics Data System (ADS)
Guo, Hang; Liu, Xuan; Zhao, Jian Fu; Ye, Fang; Ma, Chong Fang
2017-06-01
In this work, proton exchange membrane fuel cells (PEMFCs) with transparent windows are designed to study the gas-liquid two-phase flow behaviors inside flow channels and the performance of a PEMFC with vertical channels and a PEMFC with horizontal channels in a normal gravity environment and a 3.6 s short-term microgravity environment. Experiments are conducted under high external circuit load and low external circuit load at a low temperature of 35 °C. The results of the present experimental work demonstrate that the performance and the gas-liquid two-phase flow behaviors of the PEMFC with vertical channels exhibit obvious changes when the PEMFCs enter the 3.6 s short-term microgravity environment from the normal gravity environment. Moreover, the performance of the PEMFC with vertical channels increases after the PEMFC enters the 3.6 s short-term microgravity environment under high external circuit load, while under low external circuit load, the PEMFC with horizontal channels exhibits better performance in both the normal gravity environment and the 3.6 s short-term microgravity environment.
The Effect of Laminar Flow on Rotor Hover Performance
NASA Technical Reports Server (NTRS)
Overmeyer, Austin D.; Martin, Preston B.
2017-01-01
The topic of laminar flow effects on hover performance is introduced with respect to some historical efforts where laminar flow was either measured or attempted. An analysis method is outlined using a combined blade element/momentum method coupled to an airfoil analysis method, which includes the full e^N transition model. The analysis results compared well with the measured hover performance, including the measured location of transition on both the upper and lower blade surfaces. The analysis method is then used to understand the upper limits of hover efficiency as a function of disk loading. The impact of laminar flow is higher at low disk loading, but significant improvement in terms of power loading appears possible even up to high disk loadings approaching 20 psf. An optimum planform design equation is derived for cases of zero profile drag and finite drag levels. These results are intended to be a guide for design studies and as a benchmark against which to compare higher fidelity analysis results. The details of the analysis method are given to enable other researchers to use the same approach for comparison to other approaches.
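The momentum-theory bound referenced above is easy to sketch: ideal hover power loading scales as sqrt(2*rho/DL), so it falls as disk loading rises, which is one way to see why profile-drag (laminar flow) gains matter most at low disk loading. The figure of merit and unit conversions below are illustrative, not taken from the paper.

```python
import numpy as np

RHO = 1.225  # sea-level air density, kg/m^3

def ideal_power_loading(disk_loading_pa, figure_of_merit=1.0):
    """Momentum theory: induced hover power P = T * sqrt(DL / (2*rho)),
    hence power loading T/P = FM * sqrt(2*rho / DL)."""
    return figure_of_merit * np.sqrt(2.0 * RHO / disk_loading_pa)

for dl_psf in (5.0, 10.0, 20.0):                 # disk loading in lb/ft^2
    pl_si = ideal_power_loading(dl_psf * 47.88)  # psf -> Pa; result in N/W
    print(f"DL = {dl_psf:4.1f} psf -> ideal PL = {pl_si * 745.7 / 4.448:.1f} lb/hp")
```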
Crawford, Charles G.; Wilber, William G.; Peters, James G.
1980-01-01
A digital model calibrated to conditions in the Wabash River in Huntington County, Ind., was used to predict alternatives for future waste loadings that would be compatible with Indiana stream water-quality standards defined for two critical hydrologic conditions, summer and winter low flows. The major point-source waste load affecting the Wabash River in Huntington County is the Huntington wastewater-treatment facility. The most significant factor potentially affecting the dissolved-oxygen concentration during summer low flows is nitrification. However, nitrification should not be a limiting factor on the allowable nitrogenous and carbonaceous waste loads for the Huntington wastewater-treatment facility during summer low flows if the ammonia-nitrogen toxicity standard for Indiana streams is met. The dissolved-oxygen standard for Indiana streams, an average of 5.0 milligrams per liter, should be met during summer and winter low flows if the National Pollutant Discharge Elimination System's 5-day carbonaceous biochemical-oxygen demand limits of a monthly average concentration of 30 milligrams per liter and a maximum weekly average of 45 milligrams per liter are not exceeded.
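The dissolved-oxygen analysis here is in the spirit of the classic Streeter-Phelps sag equation, which balances BOD decay against atmospheric reaeration. The sketch below uses hypothetical rate constants, not the calibrated values of the Wabash River model.

```python
import numpy as np

def streeter_phelps_do(t_days, L0, D0, kd, ka, do_sat=8.0):
    """Dissolved oxygen (mg/L) along a reach: saturation minus the oxygen
    deficit driven by BOD decay (kd, 1/day) and reaeration (ka, 1/day)."""
    deficit = (kd * L0 / (ka - kd)) * (np.exp(-kd * t_days) - np.exp(-ka * t_days)) \
              + D0 * np.exp(-ka * t_days)
    return do_sat - deficit

t = np.linspace(0.0, 10.0, 6)  # travel time below the outfall, days
print(streeter_phelps_do(t, L0=30.0, D0=1.0, kd=0.3, ka=0.6))
```

The minimum of this curve is the critical sag; a waste-load allocation asks what initial BOD load L0 keeps that minimum above the 5.0 mg/L standard.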
Berkas, Wayne R.
1995-01-01
Sediment data were collected on and along the Missouri River downstream from Garrison Dam during May 1988, May 1989, and April 1991 to characterize sediment transport in the river. Specific study objectives were to (1) identify erosional and depositional reaches during two steady-state low-flow periods and one steady-state high-flow period; (2) determine if the reaches are consistently eroding or depositing, regardless of streamflow; and (3) determine the sources of suspended sediment in the river. Erosional and depositional reaches differed between the two low-flow periods, indicating that slight changes in the channel configuration between the two periods caused changes in erosional and depositional patterns. Erosional and depositional reaches also differed between the low-flow periods and the high-flow period, indicating that channel changes and increased streamflow velocities affect erosional and depositional reaches. The significant sources of suspended sediment in the Missouri River are the riverbed and riverbanks. The riverbed contributes to the silt and sand load in the river, and the riverbanks contribute to the clay, silt, and sand load. The contribution from tributaries to the suspended-sediment load in the Missouri River usually is small. Occasionally, during low-flow periods on the Missouri River, the Knife River can contribute significantly to the suspended-sediment load in the Missouri River.
Effects of subglottal and supraglottal acoustic loading on voice production
NASA Astrophysics Data System (ADS)
Zhang, Zhaoyan; Mongeau, Luc; Frankel, Steven
2002-05-01
Speech production involves sound generation by confined jets through an orifice (the glottis) with a time-varying area. Predictive models are usually based on the quasi-steady assumption. This assumption allows the complex unsteady flows to be treated as steady flows, which are more effectively modeled computationally. Because of the reflective properties of the human lungs, trachea and vocal tract, subglottal and supraglottal resonance and other acoustic effects occur in speech, which might affect glottal impedance, especially in the regime of unsteady flow separation. Changes in the flow structure, or flow regurgitation due to a transient negative transglottal pressure, could also occur. These phenomena may affect the quasi-steady behavior of speech production. To investigate the possible effects of the subglottal and supraglottal acoustic loadings, a dynamic mechanical model of the larynx was designed and built. The subglottal and supraglottal acoustic loadings are simulated using an expansion in the tube upstream of the glottis and a finite length tube downstream, respectively. The acoustic pressures of waves radiated upstream and downstream of the orifice were measured and compared to those predicted using a model based on the quasi-steady assumption. A good agreement between the experimental data and the predictions was obtained for different operating frequencies, flow rates, and orifice shapes. This supports the validity of the quasi-steady assumption for various subglottal and supraglottal acoustic loadings.
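The quasi-steady assumption being tested can be stated in a few lines: at each instant, treat the unsteady glottal jet as a steady Bernoulli flow through the current orifice area. A sketch with illustrative values (the area waveform and lung pressure below are invented, not the experimental conditions):

```python
import numpy as np

RHO_AIR = 1.14  # approximate density of warm, humid air, kg/m^3

def quasi_steady_flow(area_m2, transglottal_dp_pa):
    """Instantaneous volume flow U = A * sqrt(2 * dP / rho), clamped for dP < 0."""
    return area_m2 * np.sqrt(2.0 * np.maximum(transglottal_dp_pa, 0.0) / RHO_AIR)

t = np.linspace(0.0, 0.01, 5)                          # one cycle at 100 Hz
area = 1e-5 * (1.0 - np.cos(2.0 * np.pi * 100.0 * t))  # time-varying glottal area, m^2
print(quasi_steady_flow(area, transglottal_dp_pa=800.0))  # ~800 Pa lung pressure
```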
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronold, K.O.; Nielsen, N.J.R.; Tura, F.
This paper demonstrates how a structural reliability method can be applied as a rational means to analyze free spans of submarine pipelines with respect to failure in ultimate loading, and to establish partial safety factors for design of such free spans against this failure mode. It is important to note that the described procedure shall be considered as an illustration of a structural reliability methodology, and that the results do not represent a set of final design recommendations. A scope of design cases, consisting of a number of available site-specific pipeline spans, is established and is assumed representative for the future occurrence of submarine pipeline spans. Probabilistic models for the wave and current loading and its transfer to stresses in the pipe wall of a pipeline span are established together with a stochastic representation of the material resistance. The event of failure in ultimate loading is considered as based on a limit state which is reached when the maximum stress over the design life of the pipeline exceeds the yield strength of the pipe material. The yielding limit state is considered an ultimate limit state (ULS).
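A minimal Monte Carlo sketch of this yielding limit state, g = R - S, with failure when the design-life maximum stress S exceeds the yield strength R. The distributions below are hypothetical stand-ins, not the paper's calibrated load and resistance models.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Hypothetical resistance and load-effect models (illustrative only):
yield_strength = rng.lognormal(mean=np.log(450.0), sigma=0.05, size=n)  # R, MPa
max_stress = rng.gumbel(loc=300.0, scale=25.0, size=n)  # S, design-life extreme, MPa

pf = np.mean(yield_strength - max_stress < 0.0)  # P[g < 0]
print(f"estimated probability of yielding over the design life: {pf:.2e}")
```

Partial safety factors are then calibrated so that deterministic design checks reproduce a target failure probability of this kind.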
NASA Astrophysics Data System (ADS)
Tinterri, R.; Muzzi Magalhaes, P.; Tagliaferri, A.; Cunha, R. S.
2016-10-01
This work discusses the significance of particular types of soft-sediment deformations very common within turbidite deposits, namely convolute laminations and load structures. Detailed facies analyses of the foredeep turbidites in the Marnoso-arenacea Formation (northern Italy) and Annot Sandstones (southeastern France) show that these deformational structures tend to increase near morphological obstacles, concomitantly with contained-reflected beds. The lateral and vertical distribution of convolute laminae and load structures, as well as their geometry, has a well-defined depositional logic related to flow decelerations and reflections against bounding slopes. This evidence suggests an interaction between fine-grained sediment and the presence of morphologic relief, and impulsive and cyclic-wave loadings, which are produced by flow impacts or reflected bores and internal waves related to impinging bipartite turbidity currents.
NASA Technical Reports Server (NTRS)
Ericsson, L. E.; Reding, J. P.
1976-01-01
An analysis of the steady and unsteady aerodynamics of the space shuttle orbiter has been performed. It is shown that slender wing theory can be modified to account for the effect of Mach number and leading edge roundness on both attached and separated flow loads. The orbiter unsteady aerodynamics can be computed by defining two equivalent slender wings, one for attached flow loads and another for the vortex-induced loads. It is found that, in the transonic speed region, the orbiter is subject to vortex-shock-boundary-layer interactions that cause highly nonlinear or discontinuous load changes which can endanger the structural integrity of the orbiter wing and possibly cause snap-roll problems. It is presently impossible to simulate these interactions in a wind tunnel test, even in the static case. Thus, a well-planned combined analytic and experimental approach is needed to solve the problem.
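The two-equivalent-slender-wing idea is closely related to the Polhamus suction analogy, which likewise splits the normal force into an attached-flow (potential) part and a vortex-induced part. A sketch with illustrative coefficients for a generic low-aspect-ratio wing, not orbiter-specific values:

```python
import numpy as np

def normal_force_coefficient(alpha_deg, Kp, Kv):
    """Polhamus-style decomposition:
    CN = Kp * sin(a) * cos(a)^2   (attached flow)
       + Kv * sin(a)^2 * cos(a)   (vortex-induced)."""
    a = np.radians(alpha_deg)
    return Kp * np.sin(a) * np.cos(a) ** 2 + Kv * np.sin(a) ** 2 * np.cos(a)

for alpha in (5.0, 10.0, 20.0):
    cn = normal_force_coefficient(alpha, Kp=2.0, Kv=3.1)
    print(f"alpha = {alpha:4.1f} deg -> CN = {cn:.3f}")
```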
Mountcastle, Andrew M.; Combes, Stacey A.
2015-01-01
Bumblebee foragers spend a significant portion of their lives transporting nectar and pollen, often carrying loads equivalent to more than half their body mass. Whereas nectar is stored in the abdomen near the bee’s center of mass, pollen is carried on the hind legs, farther from the center of mass. We examine how load position changes the rotational moment of inertia in bumblebees and whether this affects their flight maneuverability and/or stability. We applied simulated pollen or nectar loads of equal mass to Bombus impatiens bumblebees and examined flight performance in a wind tunnel under three conditions: flight in unsteady flow, tracking an oscillating flower in smooth flow, and flower tracking in unsteady flow. Using an inertial model, we estimated that carrying a load on the legs rather than in the abdomen increases a bee’s moment of inertia about the roll and yaw axes but not the pitch axis. Consistent with these predictions, we found that bees carrying a load on their legs displayed slower rotations about their roll and yaw axes, regardless of whether these rotations were driven by external perturbations or self-initiated steering maneuvers. This allowed pollen-loaded bees to maintain a more stable body orientation and higher median flight speed in unsteady flow but reduced their performance when tracking a moving flower, supporting the concept of a tradeoff between stability and maneuverability. These results demonstrate that the types of resources collected by bees affect their flight performance and energetics and suggest that wind conditions may influence resource selection. PMID:26240364
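The inertial argument can be reproduced with a point-mass sketch: an added load increases the moment of inertia in proportion to the square of its distance from the rotation axis, so leg-carried pollen penalizes roll and yaw far more than abdomen-carried nectar. The masses and distances below are hypothetical, not the study's measurements.

```python
def added_inertia(load_mass_kg, distance_m):
    """Moment-of-inertia contribution of a point load: I = m * r**2."""
    return load_mass_kg * distance_m ** 2

load = 50e-6       # 50 mg load, kg
r_abdomen = 1e-3   # near the roll/yaw axes, m
r_legs = 4e-3      # hind-leg pollen baskets sit farther out, m

ratio = added_inertia(load, r_legs) / added_inertia(load, r_abdomen)
print(f"leg load adds ~{ratio:.0f}x the roll/yaw inertia of an abdominal load")
```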
Sreekanth, J; Cui, Tao; Pickett, Trevor; Rassam, David; Gilfedder, Mat; Barrett, Damian
2018-09-01
Large scale development of coal seam gas (CSG) is occurring in many sedimentary basins around the world including Australia, where commercial production of CSG has started in the Surat and Bowen basins. CSG development often involves extraction of large volumes of water, which depressurises aquifers that overlie and/or underlie the coal seams, thus perturbing their flow regimes. This can potentially impact regional aquifer systems that are used for many purposes such as irrigation, and stock and domestic water. In this study, we adopt a probabilistic approach to quantify the depressurisation of the Gunnedah coal seams and how this impacts fluxes to, and from, the overlying Great Artesian Basin (GAB) Pilliga Sandstone aquifer. The proposed method is suitable when the effects of a new resource development activity on the regional groundwater balance need to be assessed while accounting for large-scale uncertainties in the groundwater flow system and the proposed activity. The results indicated that the extraction of water and gas from the coal seam could potentially induce additional fluxes from the Pilliga Sandstone to the deeper formations due to lowered pressure heads in the coal seams. The median value of the rise in the maximum flux from the Pilliga Sandstone to the deeper formations is estimated to be 85 ML/year, which is considered insignificant as it forms only about 0.29% of the Long Term Annual Average Extraction Limit of 30 GL/year for the groundwater management area. The probabilistic simulation of the water balance components indicates only small changes being induced by CSG development that influence interactions of the Pilliga Sandstone with the overlying and underlying formations and with the surface water courses. The current analyses, which quantify the potential maximum impacts of resource development and how it influences the regional water balance, would greatly underpin future management decisions.
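The headline numbers above amount to summarizing an uncertain flux change against a fixed extraction limit. A minimal sketch, using a hypothetical lognormal ensemble in place of the study's groundwater-model runs:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical ensemble of simulated maximum flux increases, ML/year:
flux_increase = rng.lognormal(mean=np.log(85.0), sigma=0.6, size=10_000)

extraction_limit = 30_000.0  # 30 GL/year expressed in ML/year
median = np.median(flux_increase)
print(f"median increase: {median:.0f} ML/yr "
      f"({100.0 * median / extraction_limit:.2f}% of the extraction limit)")
```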