Sample records for probability-based load

  1. Studying the effects of fuel treatment based on burn probability on a boreal forest landscape.

    PubMed

    Liu, Zhihua; Yang, Jian; He, Hong S

    2013-01-30

    Fuel treatment is assumed to be a primary tactic to mitigate intense and damaging wildfires. However, how to place treatment units across a landscape and how to assess their effectiveness are difficult questions in landscape-scale fuel management planning. In this study, we used a spatially explicit simulation model (LANDIS) to conduct wildfire risk assessments and optimize the placement of fuel treatments at the landscape scale. We first calculated a baseline burn probability map from empirical data (fuel, topography, weather, and fire ignition and size data) to assess fire risk. We then prioritized landscape-scale fuel treatment based on maps of burn probability and fuel loads (calculated from the interactions among tree composition, stand age, and disturbance history), and compared their effects on reducing fire risk. The burn probability map described the likelihood of burning at a given location; the fuel load map described the probability that a high fuel load will accumulate at a given location. Fuel treatment based on the burn probability map specified that stands with high burn probability be treated first, while fuel treatment based on the fuel load map specified that stands with high fuel loads be treated first. Our results indicated that fuel treatment based on burn probability greatly reduced the burned area and the number of fires of different intensities. Fuel treatment based on burn probability also produced more dispersed and smaller high-risk fire patches and can therefore improve the efficiency of subsequent fire suppression. The strength of our approach is that more model components (e.g., succession, fuel, and harvest) can be linked into LANDIS to map spatially explicit wildfire risk and its response to fuel management, vegetation dynamics, and harvesting. Copyright © 2012 Elsevier Ltd. All rights reserved.
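
The prioritization step described above reduces, in its simplest form, to ranking stands by a score and treating the top fraction. A minimal sketch, with entirely invented stand data and a hypothetical `treat_fraction` parameter:

```python
# Hypothetical sketch: prioritize stands for fuel treatment by burn
# probability (as in the study) versus by fuel load, treating the top
# fraction of the landscape. All stand data are invented for illustration.

def prioritize(stands, key, treat_fraction=0.2):
    """Return ids of the stands to treat first, ranked by `key`."""
    ranked = sorted(stands, key=lambda s: s[key], reverse=True)
    n_treat = max(1, int(len(ranked) * treat_fraction))
    return [s["id"] for s in ranked[:n_treat]]

stands = [
    {"id": 1, "burn_prob": 0.30, "fuel_load": 12.0},
    {"id": 2, "burn_prob": 0.05, "fuel_load": 30.0},
    {"id": 3, "burn_prob": 0.22, "fuel_load": 18.0},
    {"id": 4, "burn_prob": 0.10, "fuel_load": 25.0},
    {"id": 5, "burn_prob": 0.45, "fuel_load": 8.0},
]

print(prioritize(stands, "burn_prob"))  # treat high burn-probability stands first
print(prioritize(stands, "fuel_load"))  # treat high fuel-load stands first
```

Note that the two criteria select different stands, which is exactly the contrast the study evaluates.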

  2. Comparing fuel reduction treatments for reducing wildfire size and intensity in a boreal forest landscape of northeastern China.

    PubMed

    Wu, Zhiwei; He, Hong S; Liu, Zhihua; Liang, Yu

    2013-06-01

    Fuel load is often used to prioritize stands for fuel reduction treatments. However, wildfire size and intensity are not only related to fuel loads but also to a wide range of other spatially related factors such as topography, weather and human activity. In prioritizing fuel reduction treatments, we propose using burn probability to account for the effects of spatially related factors that can affect wildfire size and intensity. Our burn probability incorporated fuel load, ignition probability, and spread probability (spatial controls to wildfire) at a particular location across a landscape. Our goal was to assess differences in reducing wildfire size and intensity using fuel-load and burn-probability based treatment prioritization approaches. Our study was conducted in a boreal forest in northeastern China. We derived a fuel load map from a stand map and a burn probability map based on historical fire records and potential wildfire spread pattern. The burn probability map was validated using historical records of burned patches. We then simulated 100 ignitions and six fuel reduction treatments to compare fire size and intensity under two approaches of fuel treatment prioritization. We calibrated and validated simulated wildfires against historical wildfire data. Our results showed that fuel reduction treatments based on burn probability were more effective at reducing simulated wildfire size, mean and maximum rate of spread, and mean fire intensity, but less effective at reducing maximum fire intensity across the burned landscape than treatments based on fuel load. Thus, contributions from both fuels and spatially related factors should be considered for each fuel reduction treatment. Published by Elsevier B.V.
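
The abstract's burn probability incorporates ignition and spread probabilities alongside fuel load. A toy sketch of one plausible per-cell combination (the union of "ignites locally" and "fire spreads in"; the multiplicative form is an illustrative assumption, not the paper's formulation, and the fuel-load weighting is omitted):

```python
# Sketch (assumed form): combine ignition and spread probabilities into a
# per-cell burn probability, then rank cells for treatment the same way a
# fuel-load map would be ranked. Cell values are invented.

def burn_probability(p_ignite, p_spread):
    # Cell burns if it ignites locally OR fire spreads into it.
    return 1.0 - (1.0 - p_ignite) * (1.0 - p_spread)

cells = {"a": (0.02, 0.40), "b": (0.10, 0.05), "c": (0.01, 0.70)}
bp = {k: burn_probability(pi, ps) for k, (pi, ps) in cells.items()}
ranking = sorted(bp, key=bp.get, reverse=True)
print(ranking)  # cells ordered for treatment by burn probability
```

Cell "c" ranks first despite its near-zero ignition probability, illustrating how spatial spread effects can dominate a purely fuel-based ranking.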

  3. Probability of stress-corrosion fracture under random loading

    NASA Technical Reports Server (NTRS)

    Yang, J. N.

    1974-01-01

    Mathematical formulation is based on a cumulative-damage hypothesis and experimentally determined stress-corrosion characteristics. Under both stationary and nonstationary random loadings, the mean value and variance of cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy.
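
An illustrative sketch of the final step: given the mean and variance of cumulative damage D, the maximum-entropy distribution matching those two moments (on an unbounded support) is Gaussian, and fracture can be taken to occur when D reaches 1. All numbers are invented:

```python
# Sketch: maximum-entropy fracture probability from the first two moments
# of cumulative stress-corrosion damage. Mean/variance values are invented.

from statistics import NormalDist

def fracture_probability(mean_damage, var_damage, threshold=1.0):
    dist = NormalDist(mu=mean_damage, sigma=var_damage ** 0.5)
    return 1.0 - dist.cdf(threshold)  # P(D >= threshold)

p = fracture_probability(mean_damage=0.7, var_damage=0.04)
print(f"P(fracture) = {p:.4f}")
```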

  4. Methods for Combining Payload Parameter Variations with Input Environment

    NASA Technical Reports Server (NTRS)

    Merchant, D. H.; Straayer, J. W.

    1975-01-01

    Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the methods are also presented.
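
The extreme-value idea can be sketched directly: if a mission contains n independent load peaks with distribution F, the mission maximum has CDF F^n, and the design limit load is a chosen quantile of that distribution. Gaussian peaks, the peak count, and the 99% quantile below are all illustrative assumptions:

```python
# Sketch of a design limit load from extreme-value reasoning.
# Peak statistics, n_peaks, and the quantile are invented.

from statistics import NormalDist

def design_limit_load(peak_mean, peak_sigma, n_peaks, quantile=0.99):
    peak = NormalDist(peak_mean, peak_sigma)
    # Solve F(x)^n = quantile  =>  F(x) = quantile**(1/n)
    return peak.inv_cdf(quantile ** (1.0 / n_peaks))

limit = design_limit_load(peak_mean=100.0, peak_sigma=10.0, n_peaks=1000)
print(f"design limit load = {limit:.1f}")
```

With 1000 peaks, the 99% mission-maximum quantile sits far out in the tail of the single-peak distribution, which is why limit loads exceed typical peak loads by several standard deviations.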

  5. Tornado damage risk assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reinhold, T.A.; Ellingwood, B.

    1982-09-01

    Several proposed models were evaluated for predicting tornado wind speed probabilities at nuclear plant sites as part of a program to develop statistical data on tornadoes needed for probability-based load combination analysis. A unified model was developed which synthesized the desired aspects of tornado occurrence and damage potential. The sensitivity of wind speed probability estimates to various tornado modeling assumptions is examined, and the probability distributions of tornado wind speed that are needed for load combination studies are presented.

  6. Time-dependent fracture probability of bilayer, lithium-disilicate-based, glass-ceramic, molar crowns as a function of core/veneer thickness ratio and load orientation.

    PubMed

    Anusavice, Kenneth J; Jadaan, Osama M; Esquivel-Upshaw, Josephine F

    2013-11-01

    Recent reports on bilayer ceramic crown prostheses suggest that fractures of the veneering ceramic represent the most common reason for prosthesis failure. The aims of this study were to test the hypotheses that: (1) an increase in core ceramic/veneer ceramic thickness ratio for a crown thickness of 1.6mm reduces the time-dependent fracture probability (Pf) of bilayer crowns with a lithium-disilicate-based glass-ceramic core, and (2) oblique loading, within the central fossa, increases Pf for 1.6-mm-thick crowns compared with vertical loading. Time-dependent fracture probabilities were calculated for 1.6-mm-thick, veneered lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation in the central fossa area. Time-dependent fracture probability analyses were computed by CARES/Life software and finite element analysis, using dynamic fatigue strength data for monolithic discs of a lithium-disilicate glass-ceramic core (Empress 2), and ceramic veneer (Empress 2 Veneer Ceramic). Predicted fracture probabilities (Pf) for centrally loaded 1.6-mm-thick bilayer crowns over periods of 1, 5, and 10 years are 1.2%, 2.7%, and 3.5%, respectively, for a core/veneer thickness ratio of 1.0 (0.8mm/0.8mm), and 2.5%, 5.1%, and 7.0%, respectively, for a core/veneer thickness ratio of 0.33 (0.4mm/1.2mm). CARES/Life results support the proposed crown design and load orientation hypotheses. The application of dynamic fatigue data, finite element stress analysis, and CARES/Life analysis represents an optimal approach for optimizing fixed dental prosthesis designs produced from dental ceramics and for predicting time-dependent fracture probabilities of ceramic-based fixed dental prostheses that can minimize the risk for clinical failures. Copyright © 2013 Academy of Dental Materials. All rights reserved.

  7. Time-dependent fracture probability of bilayer, lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation

    PubMed Central

    Anusavice, Kenneth J.; Jadaan, Osama M.; Esquivel-Upshaw, Josephine

    2013-01-01

    Recent reports on bilayer ceramic crown prostheses suggest that fractures of the veneering ceramic represent the most common reason for prosthesis failure. Objective The aims of this study were to test the hypotheses that: (1) an increase in core ceramic/veneer ceramic thickness ratio for a crown thickness of 1.6 mm reduces the time-dependent fracture probability (Pf) of bilayer crowns with a lithium-disilicate-based glass-ceramic core, and (2) oblique loading, within the central fossa, increases Pf for 1.6-mm-thick crowns compared with vertical loading. Materials and methods Time-dependent fracture probabilities were calculated for 1.6-mm-thick, veneered lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation in the central fossa area. Time-dependent fracture probability analyses were computed by CARES/Life software and finite element analysis, using dynamic fatigue strength data for monolithic discs of a lithium-disilicate glass-ceramic core (Empress 2), and ceramic veneer (Empress 2 Veneer Ceramic). Results Predicted fracture probabilities (Pf) for centrally loaded 1.6-mm-thick bilayer crowns over periods of 1, 5, and 10 years are 1.2%, 2.7%, and 3.5%, respectively, for a core/veneer thickness ratio of 1.0 (0.8 mm/0.8 mm), and 2.5%, 5.1%, and 7.0%, respectively, for a core/veneer thickness ratio of 0.33 (0.4 mm/1.2 mm). Conclusion CARES/Life results support the proposed crown design and load orientation hypotheses. Significance The application of dynamic fatigue data, finite element stress analysis, and CARES/Life analysis represents an optimal approach for optimizing fixed dental prosthesis designs produced from dental ceramics and for predicting time-dependent fracture probabilities of ceramic-based fixed dental prostheses that can minimize the risk for clinical failures. PMID:24060349

  8. Methods for combining payload parameter variations with input environment. [calculating design limit loads compatible with probabilistic structural design criteria]

    NASA Technical Reports Server (NTRS)

    Merchant, D. H.

    1976-01-01

    Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.

  9. Probability of stress-corrosion fracture under random loading.

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1972-01-01

    A method is developed for predicting the probability of stress-corrosion fracture of structures under random loadings. The formulation is based on the cumulative damage hypothesis and the experimentally determined stress-corrosion characteristics. Under both stationary and nonstationary random loadings, the mean value and the variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy. It is shown that, under stationary random loadings, the standard deviation of the cumulative damage increases in proportion to the square root of time, while the coefficient of variation (dispersion) decreases in inverse proportion to the square root of time. Numerical examples are worked out to illustrate the general results.
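
The stated scaling is easy to check numerically: for damage accumulated as a sum of i.i.d. random increments (a crude stationary-loading surrogate), the standard deviation grows like the square root of time while the coefficient of variation decays like its inverse. Increment statistics below are invented:

```python
# Monte-Carlo check: std dev of cumulative damage ~ sqrt(t),
# coefficient of variation ~ 1/sqrt(t). Increments are invented.

import random
from statistics import mean, stdev

random.seed(2)

def damage_stats(n_steps, n_trials=4000):
    totals = [sum(random.uniform(0.0, 2.0) for _ in range(n_steps))
              for _ in range(n_trials)]
    m, s = mean(totals), stdev(totals)
    return s, s / m  # (std dev, coefficient of variation)

s1, cv1 = damage_stats(100)
s2, cv2 = damage_stats(400)   # 4x the "time"
print(f"std ratio = {s2 / s1:.2f} (expect ~2), CV ratio = {cv1 / cv2:.2f} (expect ~2)")
```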

  10. Predicting traffic load impact of alternative recreation developments

    Treesearch

    Gary H. Elsner; Ronald A. Oliveira

    1973-01-01

    Traffic load changes as a result of expansion of recreation facilities may be predicted through computations based on estimates of (a) drawing power of the recreation attracttions, overnight accommodations, and in- or out-terminals; (b) probable types of travel; (c) probable routes of travel; and (d) total number of cars in the recreation system. Once the basic model...

  11. Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow

    NASA Astrophysics Data System (ADS)

    Gupta, Atma Ram; Kumar, Ashwani

    2017-12-01

    The distribution network today faces the challenge of meeting increased load demand from the industrial, commercial, and residential sectors. The pattern of load is highly dependent on consumer behavior and temporal factors such as season of the year, day of the week, or time of the day. For deterministic radial distribution load-flow studies, load is taken as constant. However, load varies continually and with a high degree of uncertainty, so there is a need to model probable realistic load. Monte Carlo simulation is used to model probable realistic load by generating random values of active and reactive power from the mean and standard deviation of the load and solving a deterministic radial load flow with these values. The probabilistic solution is reconstructed from the deterministic data obtained for each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and comparing the voltage profile and losses under probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
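
The Monte Carlo procedure described above can be sketched on the smallest possible feeder: draw random P, Q loads from their mean and standard deviation, solve a deterministic load flow for each draw, then summarize the voltage statistically. The single-line feeder, impedance, and load statistics below are invented; a real study would use a full network model:

```python
# Monte-Carlo probabilistic load flow on a one-line radial feeder.
# Feeder impedance and load statistics (per unit) are invented.

import random
from statistics import mean, stdev

random.seed(7)
Z = complex(0.02, 0.04)   # line impedance, pu
V_SRC = 1.0               # source-bus voltage, pu

def radial_load_flow(p_load, q_load, iters=20):
    """Single-line backward/forward sweep; returns |V| at the load bus."""
    v = complex(V_SRC, 0.0)
    for _ in range(iters):
        i = (complex(p_load, q_load) / v).conjugate()  # load current from S = P + jQ
        v = V_SRC - Z * i                              # forward sweep: drop along the line
    return abs(v)

samples = [radial_load_flow(random.gauss(0.8, 0.1), random.gauss(0.4, 0.05))
           for _ in range(2000)]
print(f"|V| mean = {mean(samples):.4f} pu, std = {stdev(samples):.4f} pu")
```

The per-sample solver is the deterministic load flow; the probabilistic answer is the distribution of its outputs over the random load draws.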

  13. Effect of tornado loads on transmission lines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishac, M.F.; White, H.B.

    1994-12-31

    Of all the populated areas in Canada, southwestern Ontario has experienced the highest tornado incidence and faces the greatest tornado damage. About 1 or 2 tornadoes per 10,000 km^2 can be expected there annually. The probability of a tornado strike at a given point is very small but the probability of a transmission line being crossed by a tornado is significant. The purpose of this paper is to review the literature related to tornadoes in Ontario and to investigate the effect of tornado loads on transmission lines. Based on this investigation a design basis tornado loading for transmission towers is proposed.

  14. A screening-level modeling approach to estimate nitrogen ...

    EPA Pesticide Factsheets

    This paper presents a screening-level modeling approach that can be used to rapidly estimate nutrient loading and assess numerical nutrient standard exceedance risk of surface waters leading to potential classification as impaired for designated use. It can also be used to explore best management practice (BMP) implementation to reduce loading. The modeling framework uses a hybrid statistical and process-based approach to estimate sources of pollutants and their transport and decay in the terrestrial and aquatic parts of watersheds. The framework is developed in the ArcGIS environment and is based on the total maximum daily load (TMDL) balance model. Nitrogen (N) is currently addressed in the framework, referred to as WQM-TMDL-N. Loading for each catchment includes non-point sources (NPS) and point sources (PS). NPS loading is estimated using export coefficient or event mean concentration methods depending on the temporal scale, i.e., annual or daily. Loading from atmospheric deposition is also included. The probability of a nutrient load exceeding a target load is evaluated using probabilistic risk assessment, by including the uncertainty associated with export coefficients of various land uses. The computed risk data can be visualized as spatial maps which show the load exceedance probability for all stream segments. In an application of this modeling approach to the Tippecanoe River watershed in Indiana, USA, total nitrogen (TN) loading and risk of standard exceedance ...
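
The exceedance-risk step can be sketched directly: catchment load is the sum of area times export coefficient over land uses, the export coefficients are uncertain (lognormal here, an assumption), and the exceedance risk is the fraction of Monte Carlo draws above the target. All areas, coefficients, and the target are invented:

```python
# Monte-Carlo load exceedance risk from uncertain export coefficients.
# Land-use areas, coefficient medians/sigmas, and the target are invented.

import math
import random

random.seed(11)
# land use: (area in ha, export coefficient median in kg N/ha/yr, log-space sigma)
LAND_USES = {"cropland": (500.0, 20.0, 0.3),
             "pasture":  (300.0,  8.0, 0.3),
             "forest":   (200.0,  2.0, 0.2)}
TARGET_LOAD = 14000.0  # kg N/yr

def sampled_load():
    return sum(area * random.lognormvariate(math.log(ec_median), sigma)
               for area, ec_median, sigma in LAND_USES.values())

N = 5000
risk = sum(sampled_load() > TARGET_LOAD for _ in range(N)) / N
print(f"P(load > target) = {risk:.2f}")
```

Mapped over every stream segment, this per-catchment probability is what the spatial risk maps display.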

  15. A Methodology for Multihazards Load Combinations of Earthquake and Heavy Trucks for Bridges

    PubMed Central

    Wang, Xu; Sun, Baitao

    2014-01-01

    Issues of load combinations of earthquakes and heavy trucks are important considerations in multihazards bridge design. Current load and resistance factor design (LRFD) specifications usually treat extreme hazards alone and have no probabilistic basis for extreme load combinations. Earthquake load and heavy truck load are considered as random processes with respective characteristics, and the maximum combined load is not the simple superposition of their maximum loads. The traditional Ferry Borges-Castanheta model, which considers load duration and occurrence probability, describes well the conversion of random processes to random variables and their load combinations, but it imposes strict constraints on time-interval selection to obtain precise results. Turkstra's rule considers one load reaching its maximum value in the bridge's service life combined with the instantaneous value (or mean value) of another load, which looks more rational, but the results are generally unconservative. Therefore, a modified model is presented here combining the advantages of the Ferry Borges-Castanheta model and Turkstra's rule. The modified model is based on conditional probability, which can convert random processes to random variables relatively easily and consider the nonmaximum factor in load combinations. Earthquake load and heavy truck load combinations are employed to illustrate the model. Finally, the results of a numerical simulation are used to verify the feasibility and rationality of the model. PMID:24883347
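
Why Turkstra's rule can be unconservative is easy to show by simulation: when two pulse-type loads occupy random time slots, the true lifetime maximum of the combined load can exceed the "one load at its maximum plus the other at its mean" estimate. All pulse statistics below are invented:

```python
# Monte-Carlo comparison of the true combined-load maximum against a
# Turkstra-style estimate for two pulse processes. Statistics invented.

import random

random.seed(3)
N_SLOTS = 100   # discretized service life

def load_process(p_on, mu, sd):
    # rectangular pulses: load present in a slot with probability p_on
    return [random.gauss(mu, sd) if random.random() < p_on else 0.0
            for _ in range(N_SLOTS)]

def trial():
    a, b = load_process(0.5, 10, 2), load_process(0.5, 10, 2)
    true_max = max(x + y for x, y in zip(a, b))
    mean_a, mean_b = sum(a) / N_SLOTS, sum(b) / N_SLOTS
    turkstra = max(max(a) + mean_b, max(b) + mean_a)
    return true_max, turkstra

results = [trial() for _ in range(500)]
frac_under = sum(t > k for t, k in results) / len(results)
print(f"Turkstra underestimates the true maximum in {frac_under:.0%} of trials")
```

With loads this likely to coincide, the underestimate is nearly systematic; with rarely coinciding loads Turkstra's rule does much better, which is the trade-off the modified model targets.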

  16. Unit-Sphere Anisotropic Multiaxial Stochastic-Strength Model Probability Density Distribution for the Orientation of Critical Flaws

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel

    2013-01-01

    Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials--including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software.

  17. A Bayesian approach to infer nitrogen loading rates from crop and land-use types surrounding private wells in the Central Valley, California

    NASA Astrophysics Data System (ADS)

    Ransom, Katherine M.; Bell, Andrew M.; Barber, Quinn E.; Kourakos, George; Harter, Thomas

    2018-05-01

    This study is focused on nitrogen loading from a wide variety of crop and land-use types in the Central Valley, California, USA, an intensively farmed region with high agricultural crop diversity. Nitrogen loading rates for several crop types have been measured based on field-scale experiments, and recent research has calculated nitrogen loading rates for crops throughout the Central Valley based on a mass balance approach. However, research is lacking to infer nitrogen loading rates for the broad diversity of crop and land-use types directly from groundwater nitrate measurements. Relating groundwater nitrate measurements to specific crops must account for the uncertainty about and multiplicity in contributing crops (and other land uses) to individual well measurements, and for the variability of nitrogen loading within farms and from farm to farm for the same crop type. In this study, we developed a Bayesian regression model that allowed us to estimate land-use-specific groundwater nitrogen loading rate probability distributions for 15 crop and land-use groups based on a database of recent nitrate measurements from 2149 private wells in the Central Valley. The water and natural, rice, and alfalfa and pasture groups had the lowest median estimated nitrogen loading rates, each with a median estimate below 5 kg N ha^-1 yr^-1. Confined animal feeding operations (dairies) and citrus and subtropical crops had the greatest median estimated nitrogen loading rates at approximately 269 and 65 kg N ha^-1 yr^-1, respectively. In general, our probability-based estimates compare favorably with previous direct measurements and with mass-balance-based estimates of nitrogen loading. Nitrogen mass-balance-based estimates are larger than our groundwater-nitrate-derived estimates for manured and nonmanured forage, nuts, cotton, tree fruit, and rice crops. These discrepancies are thought to be due to groundwater age mixing, dilution from infiltrating river water, or denitrification between the time when nitrogen leaves the root zone (point of reference for mass-balance-derived loading) and the time and location of groundwater measurement.

  18. Towards a Probabilistic Preliminary Design Criterion for Buckling Critical Composite Shells

    NASA Technical Reports Server (NTRS)

    Arbocz, Johann; Hilburger, Mark W.

    2003-01-01

    A probability-based analysis method for predicting buckling loads of compression-loaded laminated-composite shells is presented, and its potential as a basis for a new shell-stability design criterion is demonstrated and discussed. In particular, a database containing information about specimen geometry, material properties, and measured initial geometric imperfections for a selected group of laminated-composite cylindrical shells is used to calculate new buckling-load "knockdown factors". These knockdown factors are shown to be substantially improved, and hence much less conservative than the corresponding deterministic knockdown factors that are presently used by industry. The probability integral associated with the analysis is evaluated by using two methods; that is, by using the exact Monte Carlo method and by using an approximate First-Order Second-Moment method. A comparison of the results from these two methods indicates that the First-Order Second-Moment method yields results that are conservative for the shells considered. Furthermore, the results show that the improved, reliability-based knockdown factor presented always yields a safe estimate of the buckling load for the shells examined.
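
The knockdown-factor idea above can be sketched with a toy shell "database": measured buckling loads scatter because of imperfections, and the reliability-based knockdown is the load level met with high confidence, compared here between an empirical (Monte-Carlo-style) quantile and a First-Order Second-Moment normal approximation. The measured ratios below are invented:

```python
# Reliability-based knockdown factor: empirical quantile vs a
# First-Order Second-Moment (mu - 3*sigma) estimate. Data invented.

from statistics import mean, stdev

# buckling load / classical (perfect-shell) load for a shell "database"
ratios = [0.72, 0.65, 0.80, 0.70, 0.68, 0.75, 0.62, 0.78, 0.71, 0.66,
          0.74, 0.69, 0.77, 0.64, 0.73]

# empirical analogue of the exact Monte Carlo result: lowest observed ratio
empirical_kdf = sorted(ratios)[0]

# FOSM analogue: assume normal scatter, take mean - 3*sigma as the reliable level
m, s = mean(ratios), stdev(ratios)
fosm_kdf = m - 3.0 * s

print(f"empirical = {empirical_kdf:.3f}, FOSM (mu - 3 sigma) = {fosm_kdf:.3f}")
```

As in the abstract, the FOSM-style estimate comes out lower (more conservative) than the empirical quantile, while both remain far above the very conservative historical deterministic knockdown factors.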

  19. An adaptive density-based routing protocol for flying Ad Hoc networks

    NASA Astrophysics Data System (ADS)

    Zheng, Xueli; Qi, Qian; Wang, Qingwen; Li, Yongqiang

    2017-10-01

    An Adaptive Density-based Routing Protocol (ADRP) for Flying Ad Hoc Networks (FANETs) is proposed in this paper. The main objective is to calculate forwarding probability adaptively in order to increase the efficiency of forwarding in FANETs. ADRP dynamically fine-tunes the rebroadcasting probability of a node for routing request packets according to the number of neighbour nodes. Indeed, it is preferable to privilege retransmission by nodes with few neighbour nodes. We describe the protocol, implement it, and evaluate its performance using the NS-2 network simulator. Simulation results reveal that ADRP achieves better performance in terms of packet delivery fraction, average end-to-end delay, normalized routing load, normalized MAC load, and throughput, as compared with AODV.
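
The density-adaptive rule can be sketched as a rebroadcast probability that falls as the neighbour count rises, so sparse nodes forward route requests eagerly while dense nodes suppress redundant rebroadcasts. The inverse-proportional form and the constants are assumptions; the paper's exact tuning rule may differ:

```python
# Assumed density-adaptive rebroadcast probability: fewer neighbours
# -> higher probability, clamped to [p_min, p_max]. Constants invented.

def rebroadcast_probability(n_neighbours, k=3.0, p_min=0.1, p_max=1.0):
    if n_neighbours <= 0:
        return p_max               # isolated node: always forward
    p = k / n_neighbours           # fewer neighbours -> higher probability
    return max(p_min, min(p_max, p))

for n in (1, 3, 10, 30):
    print(n, rebroadcast_probability(n))
```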

  20. Assessment of the transport routes of oversized and excessive loads in relation to the passage through roundabout

    NASA Astrophysics Data System (ADS)

    Petru, Jan; Dolezel, Jiri; Krivda, Vladislav

    2017-09-01

    In the past, excessive and oversized loads were carried on selected road routes that were adapted to ensure the smooth passage of transport. Over the years, maintenance of these passages was abandoned, and currently there are no earmarked routes adapted for this type of transportation. Routes for excessive and oversized loads are now planned to ensure passage of the vehicle through the critical points on the roads. Critical points are level and fly-over road crossings, bridges, toll gates, traffic signs, and electrical and other lines. The article deals with the probability-based assessment of selected critical points along the route of an excessive load on first-class roads, with respect to ensuring passage through a roundabout. The basis for assessing the passage of a vehicle with an excessive load through a roundabout is long-term video analysis of transport movements at similar intersections and the determination of a theoretical probability model of vehicle movement at selected junctions. On the basis of a virtual simulation of vehicle movement at the intersection and a Monte Carlo simulation, vehicle paths are analysed and the probability of the vehicle leaving the roadway at given points is quantified.
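
The Monte Carlo passage check can be sketched as follows: sample the vehicle's lateral tracking error through the roundabout and count the samples whose swept width no longer fits the paved width. All geometry values are invented:

```python
# Monte-Carlo probability that an oversized load leaves the paved area
# of a roundabout. Widths and the tracking-error sigma are invented.

import random

random.seed(13)
PAVED_WIDTH = 8.0      # usable width through the roundabout, m
VEHICLE_SWEPT = 6.5    # swept-path width of the oversized load, m

def exits_roadway():
    offset = abs(random.gauss(0.0, 0.5))     # driver tracking error, m
    return VEHICLE_SWEPT / 2 + offset > PAVED_WIDTH / 2

n = 20000
p_exit = sum(exits_roadway() for _ in range(n)) / n
print(f"P(vehicle leaves paved area) = {p_exit:.3f}")
```

A full analysis would replace the single offset draw with the simulated swept path of the whole vehicle combination, but the exceedance-counting logic is the same.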

  1. Load and Pi control flux through the branched kinetic cycle of myosin V.

    PubMed

    Kad, Neil M; Trybus, Kathleen M; Warshaw, David M

    2008-06-20

    Myosin V is a processive actin-based motor protein that takes multiple 36-nm steps to deliver intracellular cargo to its destination. In the laser trap, applied load slows myosin V heavy meromyosin stepping and increases the probability of backsteps. In the presence of 40 mM phosphate (P(i)), both forward and backward steps become less load-dependent. From these data, we infer that P(i) release commits myosin V to undergo a highly load-dependent transition from a state in which ADP is bound to both heads and its lead head trapped in a pre-powerstroke conformation. Increasing the residence time in this state by applying load increases the probability of backstepping or detachment. The kinetics of detachment indicate that myosin V can detach from actin at two distinct points in the cycle, one of which is turned off by the presence of P(i). We propose a branched kinetic model to explain these data. Our model includes P(i) release prior to the most load-dependent step in the cycle, implying that P(i) release and load both act as checkpoints that control the flux through two parallel pathways.
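
The load dependence described above is commonly modeled with the Bell (Boltzmann) form for a load-sensitive rate, k(F) = k0*exp(-F*d/kBT). Treating forward versus backward stepping as a competition between two rates then gives a backstep probability that grows with load. The rate constants and distance parameter below are invented, not the paper's fitted values:

```python
# Bell-model sketch of load-dependent stepping: a load-sensitive forward
# rate competing with a load-insensitive backstep rate. Values invented.

import math

KBT = 4.1  # thermal energy at room temperature, pN*nm

def rate(k0, d_nm, force_pn):
    return k0 * math.exp(-force_pn * d_nm / KBT)

def backstep_probability(force_pn):
    k_fwd = rate(k0=100.0, d_nm=2.5, force_pn=force_pn)  # load-sensitive forward step
    k_back = 2.0                                          # load-insensitive backstep
    return k_back / (k_fwd + k_back)

for f in (0.0, 2.0, 4.0):
    print(f"F = {f} pN: P(backstep) = {backstep_probability(f):.2f}")
```

In this picture, anything that shortens the residence time in the load-sensitive state (such as P(i) binding, per the abstract) flattens the load dependence.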

  2. Probabilistic Analysis of a Composite Crew Module

    NASA Technical Reports Server (NTRS)

    Mason, Brian H.; Krishnamurthy, Thiagarajan

    2011-01-01

    An approach for conducting reliability-based analysis (RBA) of a Composite Crew Module (CCM) is presented. The goal is to identify and quantify the benefits of probabilistic design methods for the CCM and future space vehicles. The coarse finite element model from a previous NASA Engineering and Safety Center (NESC) project is used as the baseline deterministic analysis model to evaluate the performance of the CCM using a strength-based failure index. The first step in the probabilistic analysis process is the determination of the uncertainty distributions for key parameters in the model. Analytical data from water landing simulations are used to develop an uncertainty distribution, but such data were unavailable for other load cases. The uncertainty distributions for the other load scale factors and the strength allowables are generated based on assumed coefficients of variation. Probability of first-ply failure is estimated using three methods: the first order reliability method (FORM), Monte Carlo simulation, and conditional sampling. Results for the three methods were consistent. The reliability is shown to be driven by first ply failure in one region of the CCM at the high altitude abort load set. The final predicted probability of failure is on the order of 10^-11 due to the conservative nature of the factors of safety on the deterministic loads.
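
The strength-based reliability calculation can be sketched with a scalar limit state g = allowable - applied, both uncertain: Monte Carlo counts the draws with g < 0, and for the linear-normal case FORM gives the same answer in closed form. The distributions below are invented and far less extreme than the abstract's 10^-11 result, purely so the Monte Carlo estimate is observable:

```python
# FORM vs Monte Carlo for a normal strength/load limit state.
# The means and standard deviations are invented.

import random
from statistics import NormalDist

random.seed(5)
STRENGTH = (1.0, 0.08)   # allowable: mean, std (normalized)
LOAD = (0.7, 0.05)       # applied load effect: mean, std

# FORM for the linear-normal case: beta = (muR - muS) / sqrt(sdR^2 + sdS^2)
beta = (STRENGTH[0] - LOAD[0]) / (STRENGTH[1] ** 2 + LOAD[1] ** 2) ** 0.5
pf_form = NormalDist().cdf(-beta)

n = 200_000
pf_mc = sum(random.gauss(*STRENGTH) < random.gauss(*LOAD) for _ in range(n)) / n
print(f"FORM Pf = {pf_form:.2e}, Monte Carlo Pf = {pf_mc:.2e}")
```

At failure probabilities near 10^-11, plain Monte Carlo becomes infeasible, which is one reason FORM and conditional sampling are used alongside it.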

  3. Prediction Interval Development for Wind-Tunnel Balance Check-Loading

    NASA Technical Reports Server (NTRS)

    Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.

    2014-01-01

    Results from the Facility Analysis Verification and Operational Reliability project revealed a critical gap in capability in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides an expected upper and lower bound on balance load prediction at a given confidence level. A method has been developed which accounts for sources of variability due to calibration and check-load application. The prediction interval method of calculation and a case study demonstrating its use are provided. Validation of the methods is demonstrated for the case study based on the probability of capture of confirmation points.

  4. Construction of estimated flow- and load-duration curves for Kentucky using the Water Availability Tool for Environmental Resources (WATER)

    USGS Publications Warehouse

    Unthank, Michael D.; Newson, Jeremy K.; Williamson, Tanja N.; Nelson, Hugh L.

    2012-01-01

    Flow- and load-duration curves were constructed from the model outputs of the U.S. Geological Survey's Water Availability Tool for Environmental Resources (WATER) application for streams in Kentucky. The WATER application was designed to access multiple geospatial datasets to generate more than 60 years of statistically based streamflow data for Kentucky. The WATER application enables a user to graphically select a site on a stream and generate an estimated hydrograph and flow-duration curve for the watershed upstream of that point. The flow-duration curves are constructed by calculating the exceedance probability of the modeled daily streamflows. User-defined water-quality criteria and (or) sampling results can be loaded into the WATER application to construct load-duration curves that are based on the modeled streamflow results. Estimates of flow and streamflow statistics were derived from TOPographically Based Hydrological MODEL (TOPMODEL) simulations in the WATER application. A modified TOPMODEL code, SDP-TOPMODEL (Sinkhole Drainage Process-TOPMODEL) was used to simulate daily mean discharges over the period of record for 5 karst and 5 non-karst watersheds in Kentucky in order to verify the calibrated model. A statistical evaluation of the model's verification simulations shows that calibration criteria, established by previous WATER application reports, were met, thus ensuring the model's ability to provide acceptably accurate estimates of discharge at gaged and ungaged sites throughout Kentucky. Flow-duration curves are constructed in the WATER application by calculating the exceedance probability of the modeled daily flow values. The flow-duration intervals are expressed as a percentage, with zero corresponding to the highest stream discharge in the streamflow record. Load-duration curves are constructed by applying the loading equation (Load = Flow*Water-quality criterion) at each flow interval.
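
The curve construction described above can be sketched directly: rank the modeled daily flows, assign each an exceedance probability, and multiply by the water-quality criterion to get the allowable load-duration curve. The flow values, criterion, and Weibull plotting position are illustrative choices (the unit conversion between flow, concentration, and load is omitted):

```python
# Flow- and load-duration curve sketch: Load = Flow * criterion at each
# exceedance level. Flow record and criterion are invented.

flows = [12.0, 3.5, 80.0, 9.1, 41.0, 5.2, 150.0, 22.0, 2.1, 17.0]  # daily flows
CRITERION = 0.5  # water-quality criterion (units conversion omitted)

ranked = sorted(flows, reverse=True)
n = len(ranked)
# Weibull plotting position: exceedance probability of the i-th largest flow
curve = [(100.0 * (i + 1) / (n + 1), q, q * CRITERION)
         for i, q in enumerate(ranked)]

for pct, q, load in curve:
    print(f"{pct:5.1f}%  flow={q:6.1f}  allowable load={load:6.1f}")
```

Observed loads plotted against this curve then show, by flow regime, where measured loads exceed the allowable load.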

  5. Reliability Constrained Priority Load Shedding for Aerospace Power System Automation

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Zhu, Jizhong; Kaddah, Sahar S.; Dolce, James L. (Technical Monitor)

    2000-01-01

    Improving load shedding on board the space station is one of the goals of aerospace power system automation. To accelerate the optimum load-shedding functions, several constraints must be considered, including the congestion margin determined by weighted-probability contingency, the component/system reliability index, and generation rescheduling. The impact of different faults and the indices for computing reliability were defined before optimization. The optimum load schedule is based on the priority, value, and location of loads. An optimization strategy capable of handling discrete decision making, such as Everett's method, is proposed; we extended Everett's method to handle the expected congestion margin and reliability index as constraints. To make the optimization effective for the real-time load dispatch process, a rule-based scheme is added that assists in selecting which feeder load to shed, along with its location, value, and priority; a cost-benefit analysis of the load profile is also included in the scheme. The scheme is tested on a benchmark NASA system consisting of generators, loads, and a network.

  6. Modeling of Abrasion and Crushing of Unbound Granular Materials During Compaction

    NASA Astrophysics Data System (ADS)

    Ocampo, Manuel S.; Caicedo, Bernardo

    2009-06-01

    Unbound compacted granular materials are commonly used in engineering structures as layers in road pavements, railroad beds, highway embankments, and foundations. These structures are generally subjected to dynamic loading by construction operations, traffic and wheel loads. These repeated or cyclic loads cause abrasion and crushing of the granular materials. Abrasion changes a particle's shape, and crushing divides the particle into a mixture of many small particles of varying sizes. Particle breakage is important because the mechanical and hydraulic properties of these materials depend upon their grain size distribution. Therefore, it is important to evaluate the evolution of the grain size distribution of these materials. In this paper an analytical model for unbound granular materials is proposed in order to evaluate particle crushing of gravels and soils subjected to cyclic loads. The model is based on a Markov chain which describes the development of grading changes in the material as a function of stress levels. In the model proposed, each particle size is a state in the system, and the evolution of the material is the movement of particles from one state to another in n steps. Each step is a load cycle, and movement between states is possible with a transition probability. The crushing of particles depends on the mechanical properties of each grain and the packing density of the granular material. The transition probability was calculated using both the survival probability defined by Weibull and the compressible packing model developed by De Larrard. Material mechanical properties are considered using the Weibull probability theory. The size and shape of the grains, as well as the method of processing the packing density are considered using De Larrard's model. Results of the proposed analytical model show a good agreement with the experimental tests carried out using the gyratory compaction test.
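
The Markov-chain idea can be illustrated with a toy grading. The size classes, transition probabilities, and cycle count below are invented for illustration only; in the paper the transition probabilities come from Weibull survival statistics combined with De Larrard's compressible packing model, not from fixed numbers.

```python
import numpy as np

# Each particle-size class is a state; one load cycle moves mass between
# states with a fixed transition probability. Crushing only moves mass
# toward finer classes, so the matrix is triangular.

# States: coarse, medium, fine (fractions of total mass)
grading = np.array([0.7, 0.2, 0.1])

# Row-stochastic transition matrix for one load cycle (assumed values): a
# coarse particle survives with prob 0.95, or breaks into medium (0.03)
# or fine (0.02) fractions, and so on.
P = np.array([
    [0.95, 0.03, 0.02],
    [0.00, 0.97, 0.03],
    [0.00, 0.00, 1.00],   # fines cannot break further in this sketch
])

def grading_after(n_cycles, grading, P):
    """Grain-size distribution after n load cycles."""
    return grading @ np.linalg.matrix_power(P, n_cycles)

g100 = grading_after(100, grading, P)
```

Iterating the chain traces the evolution of the grain-size distribution with the number of load cycles, which is exactly the quantity the compaction experiments measure.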

  7. Fast Reliability Assessing Method for Distribution Network with Distributed Renewable Energy Generation

    NASA Astrophysics Data System (ADS)

    Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming

    2018-01-01

    This paper proposes a fast reliability assessment method for a distribution grid with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distributions of wind speed and solar irradiance, respectively, and models of the wind farm, solar park, and local load are built for reliability assessment. Then, based on probability discretization of a production cost simulation and a linearized power flow, an optimal power flow problem whose objective is the minimum cost of conventional power generation is solved, so that the reliability assessment of the distribution grid is carried out quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices. A simulation of the IEEE RBTS BUS6 system in MATLAB indicates that the proposed method calculates the reliability indices much faster than the Monte Carlo method while maintaining accuracy.
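
A minimal sketch of how LOLP and EENS fall out of a discretized probability model. The capacity states and probabilities below are illustrative, not values from the paper, which builds its discretized distribution from the Weibull/Beta generation models.

```python
import numpy as np

# Discretized available-capacity distribution: (value, probability) pairs,
# as produced by the probability-discretization step (assumed numbers).
capacity = np.array([0.0, 50.0, 100.0, 150.0])   # MW states
cap_prob = np.array([0.01, 0.04, 0.15, 0.80])    # state probabilities

def lolp_eens(load, capacity, cap_prob, hours=8760.0):
    """Reliability indices for a constant load against a discrete
    capacity distribution."""
    shortfall = np.maximum(load - capacity, 0.0)
    lolp = cap_prob[shortfall > 0].sum()          # Loss Of Load Probability
    eens = hours * (cap_prob * shortfall).sum()   # Expected Energy Not Supplied, MWh/yr
    return lolp, eens

lolp, eens = lolp_eens(load=120.0, capacity=capacity, cap_prob=cap_prob)
```

Because the indices are simple sums over the discrete states, this evaluation is essentially instantaneous, which is the source of the speedup over sampling-based Monte Carlo assessment.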

  8. Comparison of two MAC protocols based on LEO satellite networks

    NASA Astrophysics Data System (ADS)

    Guan, Mingxiang; Wang, Ruichun

    2009-12-01

    With the development of LEO satellite communication, providing various kinds of services is a basic requirement. Considering the weak channel collision detection ability, long propagation delay, and heavy load in a LEO satellite communication system, a valid adaptive access control protocol, APRMA, is proposed. Different access probability functions are obtained for different services, and appropriate access probabilities for voice and data users are updated slot by slot based on estimates of the voice traffic and the channel status. Simulation results demonstrate that APRMA improves system performance compared with the conventional PRMA, with an acceptable trade-off between voice QoS and data delay. The APRMA protocol is also suitable for HAPS (high altitude platform stations), which share the characteristics of weak channel collision detection ability, long propagation delay, and heavy load.

  9. Determination of chemical-constituent loads during base-flow and storm-runoff conditions near historical mines in Prospect Gulch, upper Animas River watershed, southwestern Colorado

    USGS Publications Warehouse

    Wirt, Laurie; Leib, K.J.; Bove, D.J.; Mast, M.A.; Evans, J.B.; Meeker, G.P.

    1999-01-01

    Prospect Gulch is a major source of iron, aluminum, zinc, and other metals to Cement Creek. Information is needed to prioritize remediation and develop strategies for cleanup of historical abandoned mine sites in Prospect Gulch. Chemical-constituent loads were determined in Prospect Gulch, a high-elevation alpine stream in southwestern Colorado that is affected by natural acid drainage from weathering of hydrothermally altered igneous rock and acidic metal-laden discharge from historical abandoned mines. The objective of the study was to identify metal sources to Prospect Gulch. A tracer solution was injected into Prospect Gulch during water-quality sampling so that loading of geochemical constituents could be calculated throughout the study reach. A thunderstorm occurred during the tracer study; hence, metal loads were measured for storm runoff as well as for base flow. Data from different parts of the study reach represent different flow conditions. The beginning of the reach represents background conditions during base flow immediately upstream from the Lark and Henrietta mines (samples PG5 to PG45). Other samples were collected during storm-runoff conditions (PG100 to PG291) and during the first flush of metal runoff following the onset of rainfall (PG303 to PG504); samples PG542 to PG700 were collected during low-flow conditions. During base-flow conditions, the percentage increase in loads for major constituents and trace metals was more than an order of magnitude greater than the corresponding 36 % increase in stream discharge. Within the study reach, the highest percentage increases for dissolved loads were 740 % for iron (Fe), 465 % for aluminum (Al), 500 % for lead (Pb), 380 % for copper (Cu), 100 % for sulfate (SO4), and 50 % for zinc (Zn). Downstream loads near the mouth of Prospect Gulch often greatly exceeded the loads generated within the study reach but varied by metal species.
For example, the study reach accounts for about 6 % of the dissolved-Fe load, 13 % of the dissolved-Al load, and 18 % of the dissolved-Zn load, but probably contributes virtually all of the dissolved Cu and Pb. The greatest downstream gains in dissolved trace-metal loads occurred near waste-rock dumps for the historical mines. The major sources of trace metals to the study reach were related to mining. The major source of trace metals in the reach near the mouth is unknown but is probably related to weathering of highly altered igneous rocks, although an unknown component of trace metals could be derived from mining sources. The late-summer storm dramatically increased the loads of most dissolved and total constituents. The effects of the storm were divided into two distinct periods: (1) a first flush of higher metal concentrations that occurred soon after rainfall began and (2) the peak discharge of the storm runoff. The first flush contained the highest loads of dissolved Fe and of total and dissolved Zn, Cu, and Cd. The larger concentrations of Fe and sulfate in the first flush were likely derived from iron hydroxide minerals such as jarosite and schwertmannite, which are common on mine dumps in the Prospect Gulch drainage basin. Peak storm runoff contained the highest measured loads of total Fe and of total and dissolved calcium, magnesium, silica, and Al, which were probably derived from weathering of igneous rocks and clay minerals in the drainage basin.

  10. Biomechanical Tolerance of Calcaneal Fractures

    PubMed Central

    Yoganandan, Narayan; Pintar, Frank A.; Gennarelli, Thomas A.; Seipel, Robert; Marks, Richard

    1999-01-01

    Biomechanical studies have been conducted in the past to understand the mechanisms of injury to the foot-ankle complex. However, statistically based tolerance criteria for calcaneal complex injuries are lacking. Consequently, this research was designed to derive a probability distribution that represents human calcaneal tolerance under impact loading such as that encountered in vehicular collisions. Information for deriving the distribution was obtained by experiments on unembalmed human cadaver lower extremities. Briefly, the protocol included the following. The knee joint was disarticulated such that the entire lower extremity distal to the knee joint remained intact. The proximal tibia was fixed in polymethylmethacrylate. The specimens were aligned and impact loading was applied using mini-sled pendulum equipment. The pendulum impactor dynamically loaded the plantar aspect of the foot once. Following the test, specimens were palpated and radiographs in multiple planes were obtained. Injuries were classified into no fracture, and extra- and intra-articular fractures of the calcaneus. There were 14 cases of no injury and 12 cases of calcaneal fracture. The fracture forces (mean: 7802 N) were significantly different (p<0.01) from the forces in the no injury (mean: 4144 N) group. The probability of calcaneal fracture determined using logistic regression indicated that a force of 6.2 kN corresponds to 50 percent probability of calcaneal fracture. The derived probability distribution is useful in the design of dummies and vehicular surfaces.
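
The logistic dose-response relationship reported above can be written down directly. The 6.2 kN value for 50 % fracture probability comes from the abstract; the logistic scale parameter `s` below is an assumed value chosen only for illustration, since the fitted regression coefficients are not reported here.

```python
import math

F50 = 6.2   # kN, force at 50 % probability of calcaneal fracture (abstract)
s = 1.0     # kN, assumed logistic scale parameter (hypothetical)

def fracture_probability(force_kn):
    """Logistic injury-risk curve: P(fracture | applied force)."""
    return 1.0 / (1.0 + math.exp(-(force_kn - F50) / s))

p_mid = fracture_probability(6.2)   # 0.5 by construction at F50
```

A curve of this form is what lets a single measured dummy neck or foot force be translated into an injury probability during vehicle design.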

  11. Integrity of Ceramic Parts Predicted When Loads and Temperatures Fluctuate Over Time

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    2004-01-01

    Brittle materials are being used, and being considered for use, for a wide variety of high performance applications that operate in harsh environments, including static and rotating turbine parts for unmanned aerial vehicles, auxiliary power units, and distributed power generation. Other applications include thermal protection systems, dental prosthetics, fuel cells, oxygen transport membranes, radomes, and microelectromechanical systems (MEMS). In order for these high-technology ceramics to be used successfully for structural applications that push the envelope of materials capabilities, design engineers must consider that brittle materials are designed and analyzed differently than metallic materials. Unlike ductile metals, brittle materials display a stochastic strength response because of the combination of low fracture toughness and the random nature of the size, orientation, and distribution of inherent microscopic flaws. This, plus the fact that the strength of a component under load may degrade over time because of slow crack growth, means that a probabilistic life-prediction methodology must be used when the tradeoffs of failure probability, performance, and useful life are being optimized. The CARES/Life code (which was developed at the NASA Glenn Research Center) predicts the probability of ceramic components failing from spontaneous catastrophic rupture when these components are subjected to multiaxial loading and slow crack growth conditions. Enhancements to CARES/Life now allow for the component survival probability to be calculated when loading and temperature vary over time.
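
The stochastic strength response described above is conventionally modeled with a two-parameter Weibull distribution, which is the kind of model underlying probabilistic brittle-material design codes such as CARES/Life. The modulus and characteristic strength below are assumed values for illustration, not material data from the article.

```python
import math

m = 10.0         # Weibull modulus (scatter of strength), assumed
sigma0 = 400.0   # characteristic strength in MPa, assumed

def failure_probability(stress_mpa):
    """P_f for a uniformly, uniaxially stressed unit volume under the
    two-parameter Weibull strength model."""
    return 1.0 - math.exp(-((stress_mpa / sigma0) ** m))

def survival_probability(stress_mpa):
    return 1.0 - failure_probability(stress_mpa)

# At the characteristic strength, P_f = 1 - 1/e by definition
pf_at_sigma0 = failure_probability(sigma0)
```

A lower Weibull modulus means more scatter in strength, which is why brittle parts are designed to a target failure probability rather than a single allowable stress.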

  12. Basic Snow Pressure Calculation

    NASA Astrophysics Data System (ADS)

    Hao, Shouzhi; Su, Jian

    2018-03-01

    With extreme weather on the rise in recent years, weather-induced damage to large steel structures has become frequent in China. How to account for the effects of wind and snow loads in structural design has therefore become a focus of attention in the engineering field. Based on the serious snow disasters of recent years and on comparative analyses by other researchers, this paper describes the factors that influence snow load, its design value, and its probability model.

  13. Loading Rate Effects on the One-Dimensional Compressibility of Four Partially Saturated Soils

    DTIC Science & Technology

    1986-12-01

    representations are referred to as constitutive models. Numerous constitutive models incorporating loading rate effects have been developed (Baladi and Rohani) ... and probably more indicative of the true values of applied pressure and average strain produced during the test. A technique developed by Baladi and ... Sand," Technical Report No. AFWL-TR-66-146, Air Force Weapons Laboratory, Kirtland Air Force Base, New Mexico, June 1967. 4. Baladi, George Y., and ...

  14. Reliability analysis of redundant systems. [a method to compute transition probabilities

    NASA Technical Reports Server (NTRS)

    Yeh, H. Y.

    1974-01-01

    A method is proposed to compute the transition probability (the probability of partial or total failure) of a parallel redundant system. The effects of the geometry of the system, the direction of the load, and the degree of redundancy on the probability of complete survival of a parachute-like system are also studied. The results show that the probability of complete survival of a three-member parachute-like system is very sensitive to variation in the horizontal angle of the load; however, this sensitivity becomes insignificant as the degree of redundancy increases.

  15. The Hybrid III upper and lower neck response in compressive loading scenarios with known human injury outcomes.

    PubMed

    Toomey, D E; Yang, K H; Van Ee, C A

    2014-01-01

    Physical biomechanical surrogates are critical for testing the efficacy of injury-mitigating safety strategies. The interpretation of measured Hybrid III neck loads in test scenarios resulting in compressive loading modes would be aided by a further understanding of the correlation between the mechanical responses in the Hybrid III neck and the probability of injury in the human cervical spine. The anthropomorphic test device (ATD) peak upper and lower neck responses were measured during dynamic compressive loading conditions comparable to those of postmortem human subject (PMHS) experiments. The peak ATD response could then be compared to the PMHS injury outcomes. A Hybrid III 50th percentile ATD head and neck assembly was tested under conditions matching those of male PMHS tests conducted on an inverted drop track. This includes variation in impact plate orientation (4 sagittal plane and 2 frontal plane orientations), impact plate surface friction, and ATD initial head/neck orientation. These unique matched data with known injury outcomes were used to evaluate existing ATD neck injury criteria. The Hybrid III ATD head and neck assembly was found to be robust and repeatable under severe loading conditions. The initial axial force response of the ATD head and neck is very comparable to that of the PMHS experiments up to the point of PMHS cervical column buckle or material failure. An ATD lower neck peak compressive force as low as 6,290 N was associated with an unstable orthopedic cervical injury in a PMHS under equivalent impact conditions. The ATD upper neck peak compressive force associated with a 5% probability of unstable cervical orthopedic injury ranged from as low as 3,708 to 3,877 N depending on the initial ATD neck angle. The correlation between peak ATD compressive neck response and PMHS test outcome in the current study resulted in a relationship between axial load and injury probability consistent with the current Hybrid III injury assessment reference values. 
The results add to the current understanding of cervical injury probability based on ATD neck compressive loading in that it is the only known study, in addition to Mertz et al. (1978), formulated directly from ATD compressive loading scenarios with known human injury outcomes.

  16. Survival Predictions of Ceramic Crowns Using Statistical Fracture Mechanics

    PubMed Central

    Nasrin, S.; Katsube, N.; Seghi, R.R.; Rokhlin, S.I.

    2017-01-01

    This work establishes a survival probability methodology for interface-initiated fatigue failures of monolithic ceramic crowns under simulated masticatory loading. A complete 3-dimensional (3D) finite element analysis model of a minimally reduced molar crown was developed using commercially available hardware and software. Estimates of material surface flaw distributions and fatigue parameters for 3 reinforced glass-ceramics (fluormica [FM], leucite [LR], and lithium disilicate [LD]) and a dense sintered yttrium-stabilized zirconia (YZ) were obtained from the literature and incorporated into the model. Utilizing the proposed fracture mechanics–based model, crown survival probability as a function of loading cycles was obtained from simulations performed on the 4 ceramic materials utilizing identical crown geometries and loading conditions. The weaker ceramic materials (FM and LR) resulted in lower survival rates than the more recently developed higher-strength ceramic materials (LD and YZ). The simulated 10-y survival rate of crowns fabricated from YZ was only slightly better than those fabricated from LD. In addition, 2 of the model crown systems (FM and LD) were expanded to determine regional-dependent failure probabilities. This analysis predicted that the LD-based crowns were more likely to fail from fractures initiating from margin areas, whereas the FM-based crowns showed a slightly higher probability of failure from fractures initiating from the occlusal table below the contact areas. These 2 predicted fracture initiation locations have some agreement with reported fractographic analyses of failed crowns. In this model, we considered the maximum tensile stress tangential to the interfacial surface, as opposed to the more universally reported maximum principal stress, because it more directly impacts crack propagation. 
While the accuracy of these predictions needs to be experimentally verified, the model can provide a fundamental understanding of the importance that pre-existing flaws at the intaglio surface have on fatigue failures. PMID:28107637

  17. Optimized Vertex Method and Hybrid Reliability

    NASA Technical Reports Server (NTRS)

    Smith, Steven A.; Krishnamurthy, T.; Mason, B. H.

    2002-01-01

    A method of calculating the fuzzy response of a system is presented. This method, called the Optimized Vertex Method (OVM), is based upon the vertex method but requires considerably fewer function evaluations. The method is demonstrated by calculating the response membership function of strain-energy release rate for a bonded joint with a crack. The possibility of failure of the bonded joint was determined over a range of loads. After completing the possibilistic analysis, the possibilistic (fuzzy) membership functions were transformed to probability density functions and the probability of failure of the bonded joint was calculated. This approach is called a possibility-based hybrid reliability assessment. The possibility and probability of failure are presented and compared to a Monte Carlo Simulation (MCS) of the bonded joint.

  18. Probability distributions of bed load particle velocities, accelerations, hop distances, and travel times informed by Jaynes's principle of maximum entropy

    USGS Publications Warehouse

    Furbish, David; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan

    2016-01-01

    We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
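
Jaynes's argument implies that, among all distributions on the positive half-line with a fixed mean, the exponential has the greatest entropy. A quick numerical check of that claim, comparing an exponential against a same-mean gamma on a common grid (this is an illustration of the principle, not an analysis from the paper):

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete probability vector."""
    p = p[p > 0]
    return -(p * np.log(p)).sum()

x = np.linspace(0.01, 50.0, 5000)   # common grid, same bin width for both
mean = 2.0

p_exp = np.exp(-x / mean)                 # exponential with mean 2
p_exp /= p_exp.sum()

p_gamma = x * np.exp(-x / (mean / 2.0))   # gamma(shape=2, scale=1), mean 2
p_gamma /= p_gamma.sum()

H_exp, H_gamma = entropy(p_exp), entropy(p_gamma)
```

Because both densities are discretized with the same bin width, their discrete entropies differ by the same constant from the differential entropies, so the comparison is meaningful: the exponential wins, as maximum entropy predicts.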

  19. Combined loading criterial influence on structural performance

    NASA Technical Reports Server (NTRS)

    Kuchta, B. J.; Sealey, D. M.; Howell, L. J.

    1972-01-01

    An investigation was conducted to determine the influence of combined loading criteria on the space shuttle structural performance. The study consisted of four primary phases: Phase (1) The determination of the sensitivity of structural weight to various loading parameters associated with the space shuttle. Phase (2) The determination of the sensitivity of structural weight to various levels of loading parameter variability and probability. Phase (3) The determination of shuttle mission loading parameters variability and probability as a function of design evolution and the identification of those loading parameters where inadequate data exists. Phase (4) The determination of rational methods of combining both deterministic time varying and probabilistic loading parameters to provide realistic design criteria. The study results are presented.

  20. Load sharing in distributed real-time systems with state-change broadcasts

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Chang, Yi-Chieh

    1989-01-01

    A decentralized dynamic load-sharing (LS) method based on state-change broadcasts is proposed for a distributed real-time system. Whenever the state of a node changes from underloaded to fully loaded and vice versa, the node broadcasts this change to a set of nodes, called a buddy set, in the system. The performance of the method is evaluated with both analytic modeling and simulation. It is modeled first by an embedded Markov chain for which numerical solutions are derived. The model solutions are then used to calculate the distribution of queue lengths at the nodes and the probability of meeting task deadlines. The analytical results show that buddy sets of 10 nodes outperform those of less than 10 nodes, and the incremental benefit gained from increasing the buddy set size beyond 15 nodes is insignificant. These and other analytical results are verified by simulation. The proposed LS method is shown to meet task deadlines with a very high probability.
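
The broadcast-on-state-change mechanism can be sketched as follows. This is a minimal illustration of the protocol idea (threshold, buddy-set broadcast, transfer to a known-underloaded buddy), not the paper's embedded Markov model; the threshold and two-node buddy set are invented for the example.

```python
THRESHOLD = 3  # queue length at which a node is considered fully loaded

class Node:
    def __init__(self, name):
        self.name = name
        self.queue = 0
        self.buddies = []                # the node's buddy set
        self.known_underloaded = set()   # buddies' last broadcast state

    def broadcast_state(self):
        """Tell every buddy whether this node is underloaded or full."""
        for b in self.buddies:
            if self.queue < THRESHOLD:
                b.known_underloaded.add(self.name)
            else:
                b.known_underloaded.discard(self.name)

    def arrive(self):
        """A task arrives; shed it to an underloaded buddy if we are full."""
        if self.queue >= THRESHOLD:
            for b in self.buddies:
                if b.name in self.known_underloaded and b.queue < THRESHOLD:
                    b.queue += 1
                    b.broadcast_state()
                    return b.name
        self.queue += 1
        self.broadcast_state()
        return self.name

a, b = Node("a"), Node("b")
a.buddies, b.buddies = [b], [a]
a.broadcast_state(); b.broadcast_state()
placements = [a.arrive() for _ in range(4)]   # fourth task is shed to "b"
```

Because state is only exchanged when a node crosses the threshold, the scheme avoids the periodic polling traffic of probe-based load sharing, which is the property the analysis above quantifies.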

  1. Energy Approach-Based Simulation of Structural Materials High-Cycle Fatigue

    NASA Astrophysics Data System (ADS)

    Balayev, A. F.; Korolev, A. V.; Kochetkov, A. V.; Sklyarova, A. I.; Zakharov, O. V.

    2016-02-01

    The paper describes the mechanism of micro-crack development in solid structural materials based on the theory of brittle fracture. A probability function for the energy distribution of material cracks is obtained using a probabilistic approach. The paper states energy conditions for crack growth under high-cycle loading of the material. A formula for calculating the amount of energy absorbed during crack growth is given. The paper proposes a high-cycle fatigue evaluation criterion for determining the maximum permissible number of loading cycles of a solid body, beyond which micro-cracks grow rapidly up to destruction.

  2. Low Probability Tail Event Analysis and Mitigation in BPA Control Area: Task 2 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Shuai; Makarov, Yuri V.; McKinstry, Craig A.

    Task report detailing low probability tail event analysis and mitigation in BPA control area. Tail event refers to the situation in a power system when unfavorable forecast errors of load and wind are superposed onto fast load and wind ramps, or non-wind generators falling short of scheduled output, causing the imbalance between generation and load to become very significant.

  3. Study on probability distribution of prices in electricity market: A case study of zhejiang province, china

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Chen, B.; Han, Z. X.; Zhang, F. Q.

    2009-05-01

    The study of the probability density and distribution functions of electricity prices helps power suppliers and purchasers make accurate operating estimates, and helps the regulator monitor periods that deviate from the normal distribution. Based on the assumption of normally distributed load and the nonlinear characteristic of the aggregate supply curve, this paper derives the distribution of electricity prices as a function of the random load. The conclusion has been validated with electricity price data from the Zhejiang market. The results show that electricity prices obey an approximately normal distribution only when the supply-demand relationship is loose; otherwise the prices deviate from the normal distribution and exhibit strong right-skewness. Finally, real electricity markets also display a narrow-peak characteristic when undersupply occurs.
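
The mechanism behind the right-skewness is easy to demonstrate numerically: pushing a normally distributed load through a convex (nonlinear) aggregate supply curve yields a right-skewed price distribution. The supply curve below is an assumed convex form for illustration, not the one estimated for the Zhejiang market.

```python
import numpy as np

rng = np.random.default_rng(0)
load = rng.normal(loc=1000.0, scale=100.0, size=200_000)   # MW, assumed

def price(load_mw):
    """Assumed convex supply curve: price rises sharply as load grows."""
    return 20.0 + 1e-5 * load_mw ** 2 + 1e-11 * load_mw ** 4

p = price(load)
skewness = ((p - p.mean()) ** 3).mean() / p.std() ** 3   # positive => right skew
```

When the curve is nearly linear over the range of likely loads (a loose supply-demand relationship), the price distribution stays approximately normal; the skewness appears only where the curve bends, matching the paper's finding.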

  4. Metocean design parameter estimation for fixed platform based on copula functions

    NASA Astrophysics Data System (ADS)

    Zhai, Jinjin; Yin, Qilin; Dong, Sheng

    2017-08-01

    Considering the dependent relationship among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. A total of 30 years of wave height, wind speed, and current velocity data for the Bohai Sea are hindcast and sampled for the case study. Four kinds of distributions, namely, the Gumbel distribution, lognormal distribution, Weibull distribution, and Pearson Type III distribution, are candidate models for the marginal distributions of wave height, wind speed, and current velocity. The Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely, the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models can maximize marginal information and the dependence among the three variables. The design return values of these three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those obtained by univariate probability. By accounting for the dependence among variables, the multivariate probability distributions provide design parameters closer to the actual sea state for ocean platform design.
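
Dependence modeling with one of the Archimedean copulas named above (Clayton) can be sketched with the standard conditional-inverse sampling method. The dependence parameter `theta` is an assumed value for illustration, not a parameter fitted to the Bohai Sea data; for the Clayton copula, Kendall's tau equals theta / (theta + 2).

```python
import numpy as np

theta = 2.0            # assumed Clayton dependence parameter (tau = 0.5)
rng = np.random.default_rng(42)

def sample_clayton(n, theta, rng):
    """Draw dependent uniform pairs (u1, u2) from the Clayton copula via
    the conditional-inverse method."""
    u = rng.uniform(size=n)
    v = rng.uniform(size=n)
    u2 = (u ** (-theta) * (v ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u, u2

u1, u2 = sample_clayton(100_000, theta, rng)
tau_theory = theta / (theta + 2.0)   # 0.5 for theta = 2
```

In a full analysis each uniform margin would then be mapped through the inverse of its fitted marginal distribution (here, Pearson Type III) to obtain correlated wave height and wind speed samples.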

  5. Life prediction of different commercial dental implants as influenced by uncertainties in their fatigue material properties and loading conditions.

    PubMed

    Pérez, M A

    2012-12-01

    Probabilistic analyses allow the effect of uncertainty in system parameters to be determined. In the literature, many researchers have investigated static loading effects on dental implants. However, the intrinsic variability and uncertainty of most of the main problem parameters are not accounted for. The objective of this research was to apply a probabilistic computational approach to predict the fatigue life of three different commercial dental implants considering the variability and uncertainty in their fatigue material properties and loading conditions. For one of the commercial dental implants, the influence of its diameter in the fatigue life performance was also studied. This stochastic technique was based on the combination of a probabilistic finite element method (PFEM) and a cumulative damage approach known as B-model. After 6 million of loading cycles, local failure probabilities of 0.3, 0.4 and 0.91 were predicted for the Lifecore, Avinent and GMI implants, respectively (diameter of 3.75mm). The influence of the diameter for the GMI implant was studied and the results predicted a local failure probability of 0.91 and 0.1 for the 3.75mm and 5mm, respectively. In all cases the highest failure probability was located at the upper screw-threads. Therefore, the probabilistic methodology proposed herein may be a useful tool for performing a qualitative comparison between different commercial dental implants. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  6. Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.

    1988-01-01

    The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show that the PFEM is a very powerful tool for determining the second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties, and crack length, orientation, and location.

  7. New approach to calibrating bed load samplers

    USGS Publications Warehouse

    Hubbell, D.W.; Stevens, H.H.; Skinner, J.V.; Beverage, J.P.

    1985-01-01

    Cyclic variations in bed load discharge at a point, which are an inherent part of the process of bed load movement, complicate calibration of bed load samplers and preclude the use of average rates to define sampling efficiencies. Calibration curves, rather than efficiencies, are derived by two independent methods using data collected with prototype versions of the Helley‐Smith sampler in a large calibration facility capable of continuously measuring transport rates across a 9 ft (2.7 m) width. Results from both methods agree. Composite calibration curves, based on matching probability distribution functions of samples and measured rates from different hydraulic conditions (runs), are obtained for six different versions of the sampler. Sampled rates corrected by the calibration curves agree with measured rates for individual runs.
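
The matching of probability distribution functions described above amounts to a quantile-mapping calibration: pair the quantiles of the sampled rates with the quantiles of the measured rates to obtain a calibration curve, then correct new samples by interpolation. The rates below are invented for illustration, not the Helley-Smith calibration data.

```python
import numpy as np

measured = np.array([2.0, 5.0, 9.0, 14.0, 30.0])   # true transport rates
sampled = np.array([1.0, 3.0, 5.0, 8.0, 20.0])     # what the sampler caught

# Build the calibration curve by matching quantiles of the two distributions
q = np.linspace(0.0, 1.0, 21)
sampled_q = np.quantile(sampled, q)
measured_q = np.quantile(measured, q)

def calibrate(raw_rate):
    """Map a raw sampled rate through the quantile-matching curve."""
    return np.interp(raw_rate, sampled_q, measured_q)

corrected = calibrate(5.0)   # the sampled median maps to the measured median
```

Unlike a single efficiency ratio, a curve built this way can correct differently at low and high transport rates, which is the point of calibrating with distributions rather than averages.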

  8. 14 CFR 23.1309 - Equipment, systems, and installations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... compliance with this section with regard to the electrical power system and to equipment design and... the system must be able to supply the following power loads in probable operating combinations and for probable durations: (1) Loads connected to the power distribution system with the system functioning...

  9. The application of structural reliability techniques to plume impingement loading of the Space Station Freedom Photovoltaic Array

    NASA Technical Reports Server (NTRS)

    Yunis, Isam S.; Carney, Kelly S.

    1993-01-01

    A new aerospace application of structural reliability techniques is presented, where the applied forces depend on many probabilistic variables. This application is the plume impingement loading of the Space Station Freedom Photovoltaic Arrays. When the space shuttle berths with Space Station Freedom it must brake and maneuver towards the berthing point using its primary jets. The jet exhaust, or plume, may cause high loads on the photovoltaic arrays. The many parameters governing this problem are highly uncertain and random. An approach, using techniques from structural reliability, as opposed to the accepted deterministic methods, is presented which assesses the probability of failure of the array mast due to plume impingement loading. A Monte Carlo simulation of the berthing approach is used to determine the probability distribution of the loading. A probability distribution is also determined for the strength of the array. Structural reliability techniques are then used to assess the array mast design. These techniques are found to be superior to the standard deterministic dynamic transient analysis, for this class of problem. The results show that the probability of failure of the current array mast design, during its 15 year life, is minute.
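
    The Monte Carlo portion of such a load-versus-strength assessment can be sketched as follows; the distributions and parameters are purely illustrative placeholders, not the Space Station Freedom analysis values:

```python
import random

def monte_carlo_failure_probability(n_trials, seed=0):
    """Monte Carlo sketch of P(failure): sample a peak applied load and a
    structural strength per trial (illustrative distributions, arbitrary
    units) and count trials where load exceeds strength."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        load = rng.lognormvariate(mu=0.0, sigma=0.5)     # peak load
        strength = rng.normalvariate(mu=5.0, sigma=0.5)  # capacity
        if load > strength:
            failures += 1
    return failures / n_trials

pf = monte_carlo_failure_probability(50000, seed=42)
```

    The fixed seed makes the estimate reproducible; in practice the trial count is driven by how small a failure probability must be resolved.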

  10. Assessment of the transportation route of oversize and excessive loads in relation to the load-bearing capacity of existing bridges

    NASA Astrophysics Data System (ADS)

    Doležel, Jiří; Novák, Drahomír; Petrů, Jan

    2017-09-01

    Transportation routes of oversize and excessive loads are currently planned to ensure the transit of a vehicle through critical points on the road, such as level intersections and bridges. This article presents a comprehensive procedure to determine the reliability and load-bearing capacity level of existing bridges on highways and roads using advanced methods of reliability analysis based on Monte Carlo-type simulation techniques in combination with nonlinear finite element analysis. The safety index, as described in current structural design standards such as ISO and the Eurocodes, is considered the main criterion of the reliability level of existing structures. As an example, the load-bearing capacity of a 60-year-old single-span slab bridge made of precast prestressed concrete girders is set for the ultimate limit state and serviceability limit state. The structure's design load capacity was estimated by fully probabilistic nonlinear FEM analysis using the Latin Hypercube Sampling (LHS) simulation technique. Load-bearing capacity values based on the fully probabilistic analysis are compared with the load-bearing capacity levels estimated by deterministic methods for a critical section of the most loaded girders.
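
    The Latin Hypercube Sampling step mentioned above can be sketched minimally on the unit hypercube; mapping the samples onto the actual material and load distributions of the bridge model is omitted:

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin Hypercube Sampling on the unit hypercube: each dimension's
    range is split into n_samples equal strata, one point is drawn per
    stratum, and the stratum order is randomly permuted per dimension."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i in range(n_samples):
            # one uniform draw inside the assigned stratum
            samples[i][d] = (strata[i] + rng.random()) / n_samples
    return samples
```

    Because every stratum of every dimension is hit exactly once, far fewer samples are needed than with plain Monte Carlo for the same coverage of each marginal distribution.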

  11. Lamb wave-based damage quantification and probability of detection modeling for fatigue life assessment of riveted lap joint

    NASA Astrophysics Data System (ADS)

    He, Jingjing; Wang, Dengjiang; Zhang, Weifang

    2015-03-01

    This study presents an experimental and modeling study for damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate (PZT) piezoelectric ceramic wafer-type sensors are employed to perform in-situ non-destructive testing during cyclic fatigue loading. A multi-feature integration method is developed to quantify the crack size using the signal features of correlation coefficient, amplitude change, and phase change. In addition, a probability of detection (POD) model is constructed to quantify the reliability of the developed sizing method. Using the developed crack size quantification method and the resulting POD curve, probabilistic fatigue life prediction can be performed to provide comprehensive information for decision-making. The effectiveness of the overall methodology is demonstrated and validated using several aircraft lap joint specimens from different manufacturers and under different loading conditions.

  12. 14 CFR 23.1309 - Equipment, systems, and installations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... chapter and that requires a power supply is an “essential load” on the power supply. The power sources and the system must be able to supply the following power loads in probable operating combinations and for probable durations: (1) Loads connected to the power distribution system with the system functioning...

  13. Statistics concerning the Apollo command module water landing, including the probability of occurrence of various impact conditions, successful impact, and body X-axis loads

    NASA Technical Reports Server (NTRS)

    Whitnah, A. M.; Howes, D. B.

    1971-01-01

    Statistical information for the Apollo command module water landings is presented. This information includes the probability of occurrence of various impact conditions, a successful impact, and body X-axis loads of various magnitudes.

  14. Viscoelasticity, postseismic slip, fault interactions, and the recurrence of large earthquakes

    USGS Publications Warehouse

    Michael, A.J.

    2005-01-01

    The Brownian Passage Time (BPT) model for earthquake recurrence is modified to include transient deformation due to either viscoelasticity or deep postseismic slip. Both of these processes act to increase the rate of loading on the seismogenic fault for some time after a large event. To approximate these effects, a decaying exponential term is added to the BPT model's uniform loading term. The resulting interevent time distributions remain approximately lognormal, but the balance between the level of noise (e.g., unknown fault interactions) and the coefficient of variability of the interevent time distribution changes depending on the shape of the loading function. For a given level of noise in the loading process, transient deformation has the effect of increasing the coefficient of variability of earthquake interevent times. Conversely, the level of noise needed to achieve a given level of variability is reduced when transient deformation is included. Using less noise would then increase the effect of known fault interactions modeled as stress or strain steps because they would be larger with respect to the noise. If we only seek to estimate the shape of the interevent time distribution from observed earthquake occurrences, then the use of a transient deformation model will not dramatically change the results of a probability study because a similar shaped distribution can be achieved with either uniform or transient loading functions. However, if the goal is to estimate earthquake probabilities based on our increasing understanding of the seismogenic process, including earthquake interactions, then including transient deformation is important to obtain accurate results. For example, a loading curve based on the 1906 earthquake, paleoseismic observations of prior events, and observations of recent deformation in the San Francisco Bay region produces a 40% greater variability in earthquake recurrence than a uniform loading model with the same noise level.

  15. Investigation of Dielectric Breakdown Characteristics for Double-break Vacuum Interrupter and Dielectric Breakdown Probability Distribution in Vacuum Interrupter

    NASA Astrophysics Data System (ADS)

    Shioiri, Tetsu; Asari, Naoki; Sato, Junichi; Sasage, Kosuke; Yokokura, Kunio; Homma, Mitsutaka; Suzuki, Katsumi

    To investigate the reliability of vacuum insulation equipment, a study was carried out to clarify breakdown probability distributions in a vacuum gap; a double-break vacuum circuit breaker was also investigated. The test results show that the breakdown probability distribution of the vacuum gap can be represented by a Weibull distribution with a location parameter, which gives the voltage at which the breakdown probability is zero. The location parameter obtained from the Weibull plot depends on electrode area. The shape parameter obtained from the Weibull plot of the vacuum gap was 10∼14, and is constant irrespective of the non-uniform field factor. The breakdown probability distribution after no-load switching can also be represented by a Weibull distribution with a location parameter. The shape parameter after no-load switching was 6∼8.5, and is constant irrespective of gap length; this indicates that the scatter of breakdown voltage is increased by no-load switching. If the vacuum circuit breaker uses a double break, the breakdown probability at low voltage becomes lower than with a single break. Although potential distribution is a concern in the double-break vacuum circuit breaker, its insulation reliability is better than that of the single-break vacuum interrupter even when the bias in voltage sharing between the vacuum interrupters is taken into account.
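
    The three-parameter Weibull form described above can be written down directly. The double-break estimate below additionally assumes even voltage sharing and independent gaps, which the record notes is only approximate; all parameter values in the test are illustrative:

```python
import math

def weibull_breakdown_probability(v, shape_m, scale_eta, location_v0):
    """Three-parameter Weibull CDF: probability of breakdown at or below
    applied voltage v. location_v0 is the voltage below which the
    breakdown probability is zero."""
    if v <= location_v0:
        return 0.0
    return 1.0 - math.exp(-(((v - location_v0) / scale_eta) ** shape_m))

def double_break_probability(v, shape_m, scale_eta, location_v0):
    """Illustrative double-break estimate: assume even voltage sharing
    (each gap sees v/2) and independent gaps, with full breakdown
    requiring both gaps to break down."""
    p = weibull_breakdown_probability(v / 2.0, shape_m, scale_eta, location_v0)
    return p * p
```

    Squaring the single-gap probability is what drives the double-break probability down at low voltage, consistent with the qualitative finding above.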

  16. INTEGRATION OF RELIABILITY WITH MECHANISTIC THERMALHYDRAULICS: REPORT ON APPROACH AND TEST PROBLEM RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. S. Schroeder; R. W. Youngblood

    The Risk-Informed Safety Margin Characterization (RISMC) pathway of the Light Water Reactor Sustainability Program is developing simulation-based methods and tools for analyzing safety margin from a modern perspective. [1] There are multiple definitions of 'margin.' One class of definitions defines margin in terms of the distance between a point estimate of a given performance parameter (such as peak clad temperature) and a point-value acceptance criterion defined for that parameter (such as 2200 F). The present perspective is that margin relates to the probability of failure, and not just the distance between a nominal operating point and a criterion. In this work, margin is characterized through a probabilistic analysis of the 'loads' imposed on systems, structures, and components, and their 'capacity' to resist those loads without failing. Given the probabilistic load and capacity spectra, one can assess the probability that load exceeds capacity, leading to component failure. Within the project, we refer to a plot of these probabilistic spectra as 'the logo.' Refer to Figure 1 for a notional illustration. The implications of referring to 'the logo' are (1) RISMC is focused on being able to analyze loads and capacities probabilistically, and (2) calling it 'the logo' tacitly acknowledges that it is a highly simplified picture: meaningful analysis of a given component failure mode may require development of probabilistic spectra for multiple physical parameters, and in many practical cases, 'load' and 'capacity' will not vary independently.

  17. Quadratic partial eigenvalue assignment in large-scale stochastic dynamic systems for resilient and economic design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Sonjoy; Goswami, Kundan; Datta, Biswa N.

    2014-12-10

    Failure of structural systems under dynamic loading can be prevented via active vibration control, which shifts the damped natural frequencies of the systems away from the dominant range of the loading spectrum. The damped natural frequencies and the dynamic load typically show significant variations in practice. A computationally efficient methodology based on the quadratic partial eigenvalue assignment technique and optimization under uncertainty has been formulated in the present work that rigorously accounts for these variations and results in an economic and resilient design of structures. A novel scheme based on hierarchical clustering and importance sampling is also developed in this work for accurate and efficient estimation of the probability of failure to guarantee the desired resilience level of the designed system. Numerical examples are presented to illustrate the proposed methodology.

  18. Quantifying suspended sediment loads delivered to Cheney Reservoir, Kansas: Temporal patterns and management implications

    USGS Publications Warehouse

    Stone, Mandy L.; Juracek, Kyle E.; Graham, Jennifer L.; Foster, Guy

    2015-01-01

    Cheney Reservoir, constructed during 1962 to 1965, is the primary water supply for the city of Wichita, the largest city in Kansas. Sediment is an important concern for the reservoir as it degrades water quality and progressively decreases water storage capacity. Long-term data collection provided a unique opportunity to estimate the annual suspended sediment loads for the entire history of the reservoir. To quantify and characterize sediment loading to Cheney Reservoir, discrete suspended sediment samples and continuously measured streamflow data were collected from the North Fork Ninnescah River, the primary inflow to Cheney Reservoir, over a 48-year period. Continuous turbidity data also were collected over a 15-year period. These data were used together to develop simple linear regression models to compute continuous suspended sediment concentrations and loads from 1966 to 2013. The inclusion of turbidity as an additional explanatory variable with streamflow improved regression model diagnostics and increased the amount of variability in suspended sediment concentration explained by 14%. Using suspended sediment concentration from the streamflow-only model, the average annual suspended sediment load was 102,517 t (113,006 tn) and ranged from 4,826 t (5,320 tn) in 1966 to 967,569 t (1,066,562 tn) in 1979. The sediment load in 1979 accounted for about 20% of the total load over the 48-year history of the reservoir and 92% of the 1979 sediment load occurred in one 24-hour period during a 1% annual exceedance probability flow event (104-year flood). Nearly 60% of the reservoir sediment load during the 48-year study period occurred in 5 years with extreme flow events (9% to 1% annual exceedance probability, or 11- to 104-year flood events). A substantial portion (41%) of sediment was transported to the reservoir during five storm events spanning only eight 24-hour periods during 1966 to 2013. 
Annual suspended sediment load estimates based on streamflow were, on average, within ±20% of estimates based on streamflow and turbidity combined. Results demonstrate that large suspended sediment loads are delivered to Cheney Reservoir in very short time periods, indicating that sediment management plans eventually must address large, infrequent inflow events to be effective.
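
    The simple-linear-regression idea in this record can be sketched as a log-space rating curve relating concentration to streamflow; the fitting routine and unit conversion below are generic, not the study's fitted model, and the turbidity term is omitted:

```python
import math

def fit_simple_regression(xs, ys):
    """Ordinary least squares fit y = a + b*x; here x and y stand for
    log-streamflow and log-concentration, a common rating-curve form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def predict_load(a, b, q, k=0.0864):
    """Daily suspended sediment load (t/day) from streamflow q (m^3/s):
    concentration c = exp(a + b*ln q) in mg/L; k = 0.0864 converts
    (mg/L)*(m^3/s) to t/day (1 g/s = 86,400 g/day)."""
    c = math.exp(a + b * math.log(q))
    return k * c * q
```

    Annual loads then follow by summing the daily predictions, which is essentially how a continuous concentration record is turned into the load series discussed above.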

  19. A new model for bed load sampler calibration to replace the probability-matching method

    Treesearch

    Robert B. Thomas; Jack Lewis

    1993-01-01

    In 1977 extensive data were collected to calibrate six Helley-Smith bed load samplers with four sediment particle sizes in a flume at the St. Anthony Falls Hydraulic Laboratory at the University of Minnesota. Because sampler data cannot be collected at the same time and place as "true" trap measurements, the "probability-matching...

  20. Modal analysis of annual runoff volume and sediment load in the Yangtze river-lake system for the period 1956-2013.

    PubMed

    Chen, Huai; Zhu, Lijun; Wang, Jianzhong; Fan, Hongxia; Wang, Zhihuan

    2017-07-01

    This study focuses on detecting trends in annual runoff volume and sediment load in the Yangtze river-lake system. Time series of annual runoff volume and sediment load at 19 hydrological gauging stations for the period 1956-2013 were collected. Based on the Mann-Kendall test at the 1% significance level, annual sediment loads in the Yangtze River, Dongting Lake and Poyang Lake were found to have significantly descending trends. Power spectrum estimation indicated that predominant oscillations with periods of 8 and 20 years are embedded in the runoff volume series, probably related to the El Niño Southern Oscillation (2-7 years) and the Pacific Decadal Oscillation (20-30 years). Based on dominant components (capturing roughly 90% or more of the total energy) extracted by the proper orthogonal decomposition method, total change ratios (CRT) of runoff volume and sediment load during the last 58 years were evaluated. For sediment load, the mean CRT value in the Yangtze River is about -65%, and those in Dongting Lake and Poyang Lake are -92.2% and -87.9%, respectively. In particular, the CRT value of the sediment load in the channel inflow of Dongting Lake is -99.7%. The Three Gorges Dam has intercepted a large amount of sediment and decreased the sediment load downstream.

  1. The Extravehicular Suit Impact Load Attenuation Study for Use in Astronaut Bone Fracture Prediction

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth E.; Gilkey, Kelly M.; Sulkowski, Christina M.; Samorezov, Sergey; Myers, Jerry G.

    2011-01-01

    The NASA Integrated Medical Model (IMM) assesses the risk, including likelihood and impact of occurrence, of all credible in-flight medical conditions. Fracture of the proximal femur is a traumatic injury that would likely result in loss of mission if it were to happen during spaceflight, and the decrease in bone mineral density caused by low-gravity exposure heightens the concern. Researchers at the NASA Glenn Research Center have quantified bone fracture probability during spaceflight with a probabilistic model. It was assumed that a pressurized extravehicular activity (EVA) suit would attenuate load during a fall, but no supporting data were available, so the suit impact load attenuation study was performed to collect analogous data. METHODS: A pressurized EVA suit analog test bed was used to study how the offset (defined as the gap between the suit and the astronaut's body), impact load magnitude, and suit operating pressure affect the attenuation of impact load. The attenuation data were incorporated into the probabilistic model of bone fracture as a function of these factors, replacing a load attenuation value based on commercial hip protectors. RESULTS: Load attenuation was more dependent on offset than on pressurization or load magnitude, especially at small offsets. Load attenuation factors for offsets between 0.1 - 1.5 cm were 0.69 +/- 0.15, 0.49 +/- 0.22 and 0.35 +/- 0.18 for mean impact forces of 4827, 6400 and 8467 N, respectively. Load attenuation factors for offsets of 2.8 - 5.3 cm were 0.93 +/- 0.2, 0.94 +/- 0.1 and 0.84 +/- 0.5 for the same mean impact forces. Reductions were observed in the 95th percentile confidence interval of the bone fracture probability predictions. CONCLUSIONS: The reduction in uncertainty and improved confidence in bone fracture predictions increased the fidelity and credibility of the fracture risk model and its benefit to mission design and operational decisions.

  2. A dynamic programming approach to estimate the capacity value of energy storage

    DOE PAGES

    Sioshansi, Ramteen; Madaeni, Seyed Hossein; Denholm, Paul

    2013-09-17

    Here, we present a method to estimate the capacity value of storage. Our method uses a dynamic program to model the effect of power system outages on the operation and state of charge of storage in subsequent periods. We combine the optimized dispatch from the dynamic program with estimated system loss of load probabilities to compute a probability distribution for the state of charge of storage in each period. This probability distribution can be used as a forced outage rate for storage in standard reliability-based capacity value estimation methods. Our proposed method has the advantage over existing approximations that it explicitly captures the effect of system shortage events on the state of charge of storage in subsequent periods. We also use a numerical case study, based on five utility systems in the U.S., to demonstrate our technique and compare it to existing approximation methods.

  3. Non-Deterministic Dynamic Instability of Composite Shells

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2004-01-01

    A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics, and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio and the fiber longitudinal modulus, dynamic load and loading rate are the dominant uncertainties, in that order.

  4. Dynamic Probabilistic Instability of Composite Structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2009-01-01

    A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio and the fiber longitudinal modulus, dynamic load and loading rate are the dominant uncertainties in that order.

  5. Statistical analysis of general aviation VG-VGH data

    NASA Technical Reports Server (NTRS)

    Clay, L. E.; Dickey, R. L.; Moran, M. S.; Payauys, K. W.; Severyn, T. P.

    1974-01-01

    To represent the loads spectra of general aviation aircraft operating in the Continental United States, VG and VGH data collected since 1963 in eight operational categories were processed and analyzed. Adequacy of data sample and current operational categories, and parameter distributions required for valid data extrapolation were studied along with envelopes of equal probability of exceeding the normal load factor (n sub z) versus airspeed for gust and maneuver loads and the probability of exceeding current design maneuver, gust, and landing impact n sub z limits. The significant findings are included.

  6. Ash fallout scenarios at Vesuvius: Numerical simulations and implications for hazard assessment

    NASA Astrophysics Data System (ADS)

    Macedonio, G.; Costa, A.; Folch, A.

    2008-12-01

    Volcanic ash fallout following a possible renewal of Vesuvius activity represents a serious threat to the highly urbanized area around the volcano. In order to assess the relative hazard, we consider three possible scenarios, following Plinian, Sub-Plinian, and violent Strombolian eruptions. Reference eruptions for each scenario are similar to the 79 AD (Pompeii), 1631 AD (or 472 AD), and 1944 AD Vesuvius events, respectively. Fallout deposits for the first two scenarios are modeled using HAZMAP, a model based on a semi-analytical solution of the 2D advection-diffusion-sedimentation equation. In contrast, fallout following a violent Strombolian event is modeled by means of FALL3D, a numerical model based on the solution of the full 3D advection-diffusion-sedimentation equation, which is valid also within the atmospheric boundary layer. Inputs for the models are total erupted mass, eruption column height, bulk grain size, bulk component distribution, and a statistical set of wind profiles obtained from the NCEP/NCAR re-analysis. We computed ground load probability maps for different ash loadings. In the case of the Sub-Plinian scenario, the most representative tephra loading maps in 16 cardinal directions were also calculated. The probability maps obtained for the different scenarios are intended to support risk mitigation strategies.

  7. Loss of Load Probability Calculation for West Java Power System with Nuclear Power Plant Scenario

    NASA Astrophysics Data System (ADS)

    Azizah, I. D.; Abdullah, A. G.; Purnama, W.; Nandiyanto, A. B. D.; Shafii, M. A.

    2017-03-01

    The Loss of Load Probability (LOLP) index shows the quality and performance of an electrical system. The LOLP value is affected by load growth, the load duration curve, the forced outage rate of the plants, and the number and capacity of generating units. This reliability index calculation begins with load forecasting to 2018 using a multiple regression method. Scenario 1, with a composition of conventional plants, produces the largest LOLP in 2017, 71.609 days/year, while the best reliability index is generated in scenario 2, with the NPP, at 6.941 days/year in 2015. Improving system reliability with nuclear power is more efficient than with conventional plants because nuclear power also offers advantages such as zero emissions, inexpensive fuel costs, and a high level of plant availability.
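
    The LOLP computation itself can be sketched with a standard capacity outage probability table; the unit capacities, forced outage rates, and peak loads below are toy values, not the West Java system data:

```python
def capacity_outage_table(units):
    """Capacity outage probability table: convolve each unit's two-state
    (available / forced-out) model into P(total capacity on outage).
    units: list of (capacity_MW, forced_outage_rate)."""
    outage_prob = {0: 1.0}
    for cap, forced_outage_rate in units:
        nxt = {}
        for out, p in outage_prob.items():
            nxt[out] = nxt.get(out, 0.0) + p * (1.0 - forced_outage_rate)
            nxt[out + cap] = nxt.get(out + cap, 0.0) + p * forced_outage_rate
        outage_prob = nxt
    return outage_prob

def lolp_days_per_year(units, daily_peaks):
    """LOLP as the expected number of days per year on which available
    capacity falls short of the daily peak load."""
    table = capacity_outage_table(units)
    total_cap = sum(cap for cap, _ in units)
    expected_days = sum(
        p
        for peak in daily_peaks
        for out, p in table.items()
        if total_cap - out < peak
    )
    return expected_days * 365.0 / len(daily_peaks)
```

    Feeding in the forecast load duration curve as `daily_peaks` is what links the regression-based load forecast to the reliability index.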

  8. Taking the easy way out? Increasing implementation effort reduces probability maximizing under cognitive load.

    PubMed

    Schulze, Christin; Newell, Ben R

    2016-07-01

    Cognitive load has previously been found to have a positive effect on strategy selection in repeated risky choice. Specifically, whereas inferior probability matching often prevails under single-task conditions, optimal probability maximizing sometimes dominates when a concurrent task competes for cognitive resources. We examined the extent to which this seemingly beneficial effect of increased task demands hinges on the effort required to implement each of the choice strategies. Probability maximizing typically involves a simple repeated response to a single option, whereas probability matching requires choice proportions to be tracked carefully throughout a sequential choice task. Here, we flipped this pattern by introducing a manipulation that made the implementation of maximizing more taxing and, at the same time, allowed decision makers to probability match via a simple repeated response to a single option. The results from two experiments showed that increasing the implementation effort of probability maximizing resulted in decreased adoption rates of this strategy. This was the case both when decision makers simultaneously learned about the outcome probabilities and responded to a dual task (Exp. 1) and when these two aspects were procedurally separated in two distinct stages (Exp. 2). We conclude that the effort involved in implementing a choice strategy is a key factor in shaping repeated choice under uncertainty. Moreover, highlighting the importance of implementation effort casts new light on the sometimes surprising and inconsistent effects of cognitive load that have previously been reported in the literature.
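
    The expected-accuracy gap between the two strategies is easy to state for a binary outcome with probability p: maximizing earns max(p, 1-p) per trial, while matching earns p^2 + (1-p)^2. A sketch (a textbook formulation, not the experiments' payoff scheme):

```python
def expected_accuracy_maximizing(p):
    """Always predict the more likely of two outcomes."""
    return max(p, 1.0 - p)

def expected_accuracy_matching(p):
    """Predict each outcome in proportion to its probability: correct
    with probability p*p + (1-p)*(1-p)."""
    return p * p + (1.0 - p) * (1.0 - p)
```

    Maximizing is never worse than matching, and strictly better whenever p differs from 0.5, which is why matching is described as the inferior strategy above.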

  9. Defense and avoidance of ozone under global change

    Treesearch

    Michael Tausz; Nancy E. Grulke; Gerhard Wieser

    2007-01-01

    The level II approach of the critical loads concept adopted by the UNECE aims at a flux based evaluation and takes into account environmental factors governing stomatal conductance. These factors will probably be affected by global change. The flux concept predicts that a decrease in stomatal conductance would protect trees from air pollution effects by decreasing...

  10. A Compendium of Wind Statistics and Models for the NASA Space Shuttle and Other Aerospace Vehicle Programs

    NASA Technical Reports Server (NTRS)

    Smith, O. E.; Adelfang, S. I.

    1998-01-01

    The wind profile with all of its variations with respect to altitude has been, is now, and will continue to be important for aerospace vehicle design and operations. Wind profile databases and models are used for the vehicle ascent flight design for structural wind loading, flight control systems, performance analysis, and launch operations. This report presents the evolution of wind statistics and wind models from the empirical scalar wind profile model established for the Saturn Program through the development of the vector wind profile model used for the Space Shuttle design to the variations of this wind modeling concept for the X-33 program. Because wind is a vector quantity, the vector wind models use the rigorous mathematical probability properties of the multivariate normal probability distribution. When the vehicle ascent steering commands (ascent guidance) are wind biased to the wind profile measured on the day-of-launch, ascent structural wind loads are reduced and launch probability is increased. This wind load alleviation technique is recommended in the initial phase of vehicle development. The vehicle must fly through the largest load allowable versus altitude to achieve its mission. The Gumbel extreme value probability distribution is used to obtain the probability of exceeding (or not exceeding) the load allowable. The time conditional probability function is derived from the Gumbel bivariate extreme value distribution. This time conditional function is used for calculation of wind loads persistence increments using 3.5-hour Jimsphere wind pairs. These increments are used to protect the commit-to-launch decision. Other topics presented include the Shuttle load response to smoothed wind profiles, a new gust model, and advancements in wind profile measuring systems. From the lessons learned and knowledge gained from past vehicle programs, the development of future launch vehicles can be accelerated.
However, new vehicle programs by their very nature will require specialized support for new databases and analyses for wind, atmospheric parameters (pressure, temperature, and density versus altitude), and weather. It is for this reason that project managers are encouraged to collaborate with natural environment specialists early in the conceptual design phase. Such action will give the lead time necessary to meet the natural environment design and operational requirements, and thus, reduce development costs.
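
    The Gumbel exceedance step described above can be sketched directly; the location and scale parameters below are placeholders, not values derived from Jimsphere data:

```python
import math

def gumbel_exceedance(load_allowable, mu, beta):
    """Probability that the extreme-value load exceeds the allowable,
    using the Gumbel CDF F(x) = exp(-exp(-(x - mu) / beta))."""
    return 1.0 - math.exp(-math.exp(-(load_allowable - mu) / beta))
```

    At the distribution's mode (load_allowable = mu) the exceedance probability is 1 - e^(-1), about 0.632, and it falls off doubly exponentially as the allowable is raised.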

  11. Determination of the Crack Resistance Parameters at Equipment Nozzle Zones Under the Seismic Loads Via Finite Element Method

    NASA Astrophysics Data System (ADS)

    Kyrychok, Vladyslav; Torop, Vasyl

    2018-03-01

    This paper addresses the assessment of probable crack growth in pressure vessel nozzle zones under cyclic seismic loads. Approaches to modeling distributed pipeline systems connected to equipment are proposed. The possibility of using different finite element program packages in combination for accurate estimation of the strength of bonded pipeline and pressure vessel systems is shown and justified. The authors propose checking the danger of defects in the nozzle domain and evaluating the residual life of the system based on the developed approach.

  12. Variation of Time Domain Failure Probabilities of Jack-up with Wave Return Periods

    NASA Astrophysics Data System (ADS)

    Idris, Ahmad; Harahap, Indra S. H.; Ali, Montassir Osman Ahmed

    2018-04-01

    This study evaluated failure probabilities of jack-up units in the framework of time-dependent reliability analysis, using uncertainty from different sea states representing different return periods of the design wave. Surface elevation for each sea state was represented by the Karhunen-Loeve expansion method using the eigenfunctions of prolate spheroidal wave functions in order to obtain the wave load. The stochastic wave load was propagated through a simplified jack-up model developed in commercial software to obtain the structural response due to the wave loading. Analysis of the stochastic response to determine the failure probability for excessive deck displacement in the framework of time-dependent reliability analysis was performed with Matlab codes developed on a personal computer. Results from the study indicated that the failure probability increases with the severity of the sea state, which corresponds to a longer return period. Although these results agree with those of a study of a similar jack-up model using a time-independent method at higher values of maximum allowable deck displacement, they differ at lower values of the criterion, where that study reported that failure probability decreases as the severity of the sea state increases.

  13. Data Mining of Historical Human Data to Assess the Risk of Injury due to Dynamic Loads

    NASA Technical Reports Server (NTRS)

    Wells, Jesica; Somers, Jeffrey T.; Newby, N.; Gernhardt, Michael

    2014-01-01

    The NASA Occupant Protection Group is charged with ensuring crewmembers are protected during all dynamic phases of spaceflight. Previous work with outside experts has led to the development of a definition of acceptable risk (DAR) for space capsule vehicles. The DAR defines allowable probability rates for various categories of injuries. An important question is how to validate these probabilities for a given vehicle. One approach is to impact test human volunteers under projected nominal landing loads. The main drawback is the large number of subject tests required to attain a reasonable level of confidence that the injury probability rates would meet those outlined in the DAR. An alternative is to mine existing databases containing human responses to impact. Testing an anthropomorphic test device (ATD) at the same human-exposure levels could yield a range of ATD responses that would meet the DAR. As one aspect of future vehicle validation, the ATD could be tested in the vehicle's seat and suit configuration at nominal landing loads and compared with the ATD responses supported by the human data set. This approach could reduce the number of human-volunteer tests NASA would need to conduct to validate that a vehicle meets occupant protection standards. METHODS: The U.S. Air Force has recorded hundreds of human responses to frontal, lateral, and spinal impacts at many acceleration levels and pulse durations. All of these data are stored on the Collaborative Biomechanics Data Network (CBDN), which is maintained by the Wright Patterson Air Force Base (WPAFB). The test device for human occupant restraint (THOR) ATD was impact tested on WPAFB's horizontal impulse accelerator (HIA), matching human-volunteer exposures on the HIA to 5 frontal and 3 spinal loading conditions. No human injuries occurred as a result of these impact conditions. Peak THOR response variables for neck axial tension and compression, and thoracic-spine axial compression were collected. Maximal chest deflection was determined from motion capture video of the impact test. HIC-15 and BRIC were calculated from head acceleration responses. Given the number of human subjects for each test condition, a confidence interval of injury probability will be obtained. RESULTS: Results will be discussed in terms of injury-risk probability estimates based on the human data set evaluated. Also, gaps in the data set will be identified. These gaps could be one of two types. One is areas where additional THOR testing would increase the comparable human data set, thereby improving confidence in the injury probability rate. The other is where additional human testing would assist in obtaining information on other acceleration levels or directions. DISCUSSION: The historical human data showed validity of the THOR ATD for supplemental testing. The historical human data are limited in scope, however. Further data are needed to characterize the effects of sex, age, anthropometry, and deconditioning due to spaceflight on risk of injury.
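
    The confidence interval mentioned above can be illustrated for the simplest case, zero observed injuries in n impact tests. This exact one-sided binomial bound is a generic statistical sketch, not the study's actual procedure:

    ```python
    # With k = 0 injuries in n tests, the upper (1 - alpha) confidence bound
    # on the injury probability p solves (1 - p)^n = alpha, i.e.
    # p = 1 - alpha**(1/n)  (close to the "rule of three" 3/n for alpha = 0.05).

    def upper_bound_zero_injuries(n, alpha=0.05):
        return 1.0 - alpha ** (1.0 / n)

    for n in (8, 30, 100):  # hypothetical numbers of human-volunteer exposures
        print(n, round(upper_bound_zero_injuries(n), 4))
    ```

    The bound shrinks roughly as 3/n, which is why validating a low allowable injury rate purely with human-volunteer testing requires so many subjects.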

  14. Probabilistic safety analysis of earth retaining structures during earthquakes

    NASA Astrophysics Data System (ADS)

    Grivas, D. A.; Souflis, C.

    1982-07-01

    A procedure is presented for determining the probability of failure of earth retaining structures under static or seismic conditions. Four possible modes of failure (overturning, base sliding, bearing capacity, and overall sliding) are examined and their combined effect is evaluated with the aid of combinatorial analysis. The probability of failure is shown to be a more adequate measure of safety than the customary factor of safety. As earth retaining structures may fail in four distinct modes, a system analysis can provide a single estimate of the probability of failure. A Bayesian formulation of the safety of retaining walls is found to provide an improved measure of the predicted probability of failure under seismic loading. The presented Bayesian analysis can account for the damage incurred by a retaining wall during an earthquake to provide an improved estimate of its probability of failure during future seismic events.
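
    The combination of the four failure modes into a single system estimate can be sketched as follows; the mode probabilities are invented for illustration, and the paper's combinatorial analysis would also account for correlation between modes:

    ```python
    # Illustrative per-mode failure probabilities for a retaining wall
    p_modes = {
        "overturning": 0.002,
        "base_sliding": 0.010,
        "bearing_capacity": 0.005,
        "overall_sliding": 0.004,
    }

    # First-order bounds: the system fails if ANY mode occurs
    p_lower = max(p_modes.values())            # exact if modes fully dependent
    p_upper = min(1.0, sum(p_modes.values()))  # union (Boole) bound

    # Point estimate assuming independent modes
    p_none = 1.0
    for p in p_modes.values():
        p_none *= (1.0 - p)
    p_indep = 1.0 - p_none
    print(p_lower, round(p_indep, 5), p_upper)
    ```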

  15. Strength and life criteria for corrugated fiberboard by three methods

    Treesearch

    Thomas J. Urbanik

    1997-01-01

    The conventional test method for determining the stacking life of corrugated containers at a fixed load level does not adequately predict a safe load when storage time is fixed. This study introduced multiple load levels and related the probability of time at failure to load. A statistical analysis of logarithm-of-time failure data varying with load level predicts the...

  16. A methodology for estimating risks associated with landslides of contaminated soil into rivers.

    PubMed

    Göransson, Gunnel; Norrman, Jenny; Larson, Magnus; Alén, Claes; Rosén, Lars

    2014-02-15

    Urban areas adjacent to surface water are exposed to soil movements such as erosion and slope failures (landslides). A landslide is a potential mechanism for mobilisation and spreading of pollutants. This mechanism is in general not included in environmental risk assessments for contaminated sites, and the consequences associated with contamination in the soil are typically not considered in landslide risk assessments. This study suggests a methodology to estimate the environmental risks associated with landslides in contaminated sites adjacent to rivers. The methodology is probabilistic and allows for datasets with large uncertainties and the use of expert judgements, providing quantitative estimates of probabilities for defined failures. The approach is illustrated by a case study along the river Göta Älv, Sweden, where failures are defined and probabilities for those failures are estimated. Failures are defined from a pollution perspective and in terms of exceeding environmental quality standards (EQSs) and acceptable contaminant loads. Models are then suggested to estimate probabilities of these failures. A landslide analysis is carried out to assess landslide probabilities based on data from a recent landslide risk classification study along the river Göta Älv. The suggested methodology is meant to be a supplement to either landslide risk assessment (LRA) or environmental risk assessment (ERA), providing quantitative estimates of the risks associated with landslides in contaminated sites. The proposed methodology can also act as a basis for communication and discussion, thereby contributing to intersectoral management solutions. From the case study it was found that the defined failures are governed primarily by the probability of a landslide occurring. 
The overall probabilities for failure are low; however, if a landslide occurs the probabilities of exceeding EQS are high and the probability of having at least a 10% increase in the contamination load within one year is also high. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Time-dependent earthquake probabilities

    USGS Publications Warehouse

    Gomberg, J.; Belardinelli, M.E.; Cocco, M.; Reasenberg, P.

    2005-01-01

    We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large earthquake, particularly approaches that account for static stress perturbations to tectonic loading, as in the approaches of Stein et al. (1997) and Hardebeck (2004). We have recast both in a framework based on a simple, generalized rate-change formulation and applied it to these two approaches to show how they relate to one another. We also have attempted to show the connection between models of seismicity rate changes applied to (1) populations of independent faults, as in background and aftershock seismicity, and (2) changes in estimates of the conditional probability of failure of a single fault, in which the notion of failure rate corresponds to successive failures of different members of a population of faults. The latter application requires specification of some probability distribution (probability density function, or PDF) that describes the population of potential recurrence times. This PDF may reflect our imperfect knowledge of when past earthquakes have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault models that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch models differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault models, usually with ranges that correspond to mature faults. Thus conditional failure probability estimates usually should not differ significantly for these models. Copyright 2005 by the American Geophysical Union.
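
    For a given recurrence-time PDF, the conditional failure probability discussed above is P(t < T <= t + dt | T > t). A sketch with an assumed lognormal recurrence model (all parameters invented):

    ```python
    import math

    def lognorm_cdf(x, mu, sigma):
        """CDF of a lognormal recurrence-time distribution."""
        return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

    def conditional_prob(t, dt, mu, sigma):
        """P(t < T <= t + dt | T > t) for recurrence time T."""
        surv = 1.0 - lognorm_cdf(t, mu, sigma)
        return (lognorm_cdf(t + dt, mu, sigma) - lognorm_cdf(t, mu, sigma)) / surv

    mu, sigma = math.log(150.0), 0.5   # median recurrence ~150 yr (hypothetical)
    print(round(conditional_prob(100.0, 30.0, mu, sigma), 4))
    ```

    A static stress step would enter by shifting the effective elapsed time or the loading rate; only the baseline conditional probability is shown here.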

  18. Establishment method of a mixture model and its practical application for transmission gears in an engineering vehicle

    NASA Astrophysics Data System (ADS)

    Wang, Jixin; Wang, Zhenyu; Yu, Xiangjun; Yao, Mingyao; Yao, Zongwei; Zhang, Erping

    2012-09-01

    Highly versatile machines, such as wheel loaders, forklifts, and mining haulers, are subject to many kinds of working conditions, as well as indefinite factors that lead to the complexity of the load. The load probability distribution function (PDF) of transmission gears has many distribution centers; thus, it cannot be well represented by a single-peak function. To represent the distribution characteristics of this complicated phenomenon accurately, this paper proposes a novel method to establish a mixture model. Based on linear regression models and correlation coefficients, the proposed method can be used to automatically select the best-fitting function in the mixture model. The coefficient of determination, the mean square error, and the maximum deviation are used as judging criteria to describe the fitting precision between the theoretical distribution and the corresponding histogram of the available load data. The applicability of this modeling method is illustrated by the field testing data of a wheel loader. Meanwhile, the load spectra based on the mixture model are compiled. The comparison results show that the mixture model is more suitable for the description of the load-distribution characteristics. The proposed research improves the flexibility and intelligence of modeling, reduces the statistical error, and enhances the fitting accuracy, and the load spectra compiled by this method better reflect the actual load characteristics of the gear component.
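
    A minimal version of fitting a multi-peaked load PDF can be sketched with a two-component Gaussian mixture estimated by EM. The paper's method additionally selects among candidate component distributions via regression and correlation criteria, which is not reproduced here; the synthetic data stand in for measured gear loads:

    ```python
    import math
    import random

    random.seed(0)
    data = ([random.gauss(20.0, 3.0) for _ in range(300)] +   # light-load regime
            [random.gauss(60.0, 5.0) for _ in range(200)])    # heavy-load regime

    w, mu, sd = [0.5, 0.5], [10.0, 70.0], [10.0, 10.0]        # initial guesses

    def pdf(x, m, s):
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

    for _ in range(50):                                       # EM iterations
        # E-step: responsibility of component 0 for each point
        r = [w[0] * pdf(x, mu[0], sd[0]) /
             (w[0] * pdf(x, mu[0], sd[0]) + w[1] * pdf(x, mu[1], sd[1]))
             for x in data]
        # M-step: update weights, means, and standard deviations
        for k, rk in enumerate((r, [1.0 - ri for ri in r])):
            n_k = sum(rk)
            w[k] = n_k / len(data)
            mu[k] = sum(ri * x for ri, x in zip(rk, data)) / n_k
            sd[k] = math.sqrt(sum(ri * (x - mu[k]) ** 2
                                  for ri, x in zip(rk, data)) / n_k)

    print([round(m, 1) for m in mu], [round(x, 2) for x in w])
    ```

    The fitted means, weights, and spreads recover the two distribution centers; goodness-of-fit criteria such as the coefficient of determination would then be computed against the load histogram.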

  19. A Mathematical Model of the Illinois Interlibrary Loan Network: Project Report Number 2.

    ERIC Educational Resources Information Center

    Rouse, William B.; And Others

    The development of a mathematical model of the Illinois Library and Information Network (ILLINET) is described. Based on queueing network theory, the model predicts the probability of a request being satisfied, the average time from the initiation of a request to the receipt of the desired resources, the costs, and the processing loads. Using a…
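
    ILLINET's model is queueing-theoretic; the toy calculation below only illustrates how satisfaction probability and expected delay accumulate as a request escalates through network levels. All probabilities and delays are invented, not the project's measured values:

    ```python
    # A request is tried at successive network levels (e.g., local system,
    # regional system, state-level backup) until filled or exhausted.
    p_fill = [0.55, 0.30, 0.60]       # chance each level can supply the item
    t_level = [2.0, 5.0, 9.0]         # days added by trying each level

    p_satisfied = 0.0
    expected_days = 0.0
    p_reach = 1.0                     # probability the request reaches this level
    for p, t in zip(p_fill, t_level):
        expected_days += p_reach * t  # every request reaching the level pays its delay
        p_satisfied += p_reach * p
        p_reach *= (1.0 - p)

    print(round(p_satisfied, 3), round(expected_days, 2))
    ```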

  20. Sensitivity analysis of limit state functions for probability-based plastic design

    NASA Technical Reports Server (NTRS)

    Frangopol, D. M.

    1984-01-01

    The evaluation of the total probability of a plastic collapse failure P sub f for a highly redundant structure of random interdependent plastic moments acted on by random interdependent loads is a difficult and computationally very costly process. The evaluation of reasonable bounds to this probability requires the use of second moment algebra which involves many statistical parameters. A computer program which selects the best strategy for minimizing the interval between upper and lower bounds of P sub f is now in its final stage of development. The sensitivity of the resulting bounds of P sub f to the various uncertainties involved in the computational process is analyzed. Response sensitivities for both mode and system reliability of an ideal plastic portal frame are shown.
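
    The second-moment algebra mentioned above reduces, for a single collapse mode, to a reliability index. A sketch with invented means and standard deviations for plastic capacity and load effect:

    ```python
    import math

    def phi(x):
        """Standard normal CDF."""
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    mean_R, sd_R = 240.0, 24.0   # plastic moment capacity (illustrative kN·m)
    mean_S, sd_S = 160.0, 32.0   # applied moment from interdependent loads

    # Second-moment reliability index and mode failure probability
    beta = (mean_R - mean_S) / math.sqrt(sd_R**2 + sd_S**2)
    p_f = phi(-beta)
    print(round(beta, 2), round(p_f, 4))
    ```

    System bounds on P sub f would then be assembled from many such mode probabilities, which is the interval the paper's program seeks to minimize.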

  1. Reliability and Creep/Fatigue Analysis of a CMC Component

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Mital, Subodh K.; Gyekenyesi, John Z.; Gyekenyesi, John P.

    2007-01-01

    High temperature ceramic matrix composites (CMC) are being explored as viable candidate materials for hot section gas turbine components. These advanced composites can potentially lead to reduced weight and enable higher operating temperatures requiring less cooling; thus leading to increased engine efficiencies. There is a need for convenient design tools that can accommodate various loading conditions and material data with their associated uncertainties to estimate the minimum predicted life as well as the failure probabilities of a structural component. This paper presents a review of the life prediction and probabilistic analyses performed for a CMC turbine stator vane. A computer code, NASALife, is used to predict the life of a 2-D woven silicon carbide fiber reinforced silicon carbide matrix (SiC/SiC) turbine stator vane due to a mission cycle which induces low cycle fatigue and creep. The output from this program includes damage from creep loading, damage due to cyclic loading and the combined damage due to the given loading cycle. Results indicate that the trends predicted by NASALife are as expected for the loading conditions used for this study. In addition, a combination of woven composite micromechanics, finite element structural analysis and Fast Probability Integration (FPI) techniques has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Input variables causing scatter are identified and ranked based upon their sensitivity magnitude. Results indicate that reducing the scatter in proportional limit strength of the vane material has the greatest effect in improving the overall reliability of the CMC vane.
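
    The combination of creep and cyclic damage over a mission cycle can be sketched with linear damage summation (Miner's rule for fatigue plus a time-fraction rule for creep). NASALife's actual damage model is more detailed; all cycle counts, lives, and dwell times below are invented:

    ```python
    # (cycles per mission n_i, allowable cycles to failure N_i)
    fatigue_blocks = [(1, 30000.0), (4, 120000.0)]
    # (dwell hours per mission t_j, creep rupture life T_j in hours)
    creep_blocks = [(0.5, 4000.0), (0.1, 900.0)]

    d_fatigue = sum(n / N for n, N in fatigue_blocks)
    d_creep = sum(t / T for t, T in creep_blocks)
    d_total = d_fatigue + d_creep            # failure predicted when D >= 1

    missions_to_failure = 1.0 / d_total
    print(round(d_total, 6), int(missions_to_failure))
    ```

    A probabilistic life estimate would treat the lives N_i and T_j as random variables and propagate their scatter, as the paper does with Fast Probability Integration.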

  2. The Role of Cognitive and Perceptual Loads in Inattentional Deafness

    PubMed Central

    Causse, Mickaël; Imbert, Jean-Paul; Giraudet, Louise; Jouffrais, Christophe; Tremblay, Sébastien

    2016-01-01

    The current study examines the role of cognitive and perceptual loads in inattentional deafness (the failure to perceive an auditory stimulus) and the possibility of predicting this phenomenon with ocular measurements. Twenty participants performed Air Traffic Control (ATC) scenarios (in the Laby ATC-like microworld), guiding one (low cognitive load) or two (high cognitive load) aircraft while responding to visual notifications related to 7 (low perceptual load) or 21 (high perceptual load) peripheral aircraft. At the same time, participants were played standard tones, which they had to ignore (probability = 0.80), or deviant tones (probability = 0.20), which they had to report. Behavioral results showed that 28.76% of alarms were not reported in the low cognitive load condition and up to 46.21% in the high cognitive load condition. In contrast, perceptual load had no impact on the inattentional deafness rate. Finally, the mean pupil diameter of the fixations that preceded the target tones was significantly lower in the trials in which the participants did not report the tones, likely showing a momentary lapse of sustained attention, which in turn was associated with the occurrence of inattentional deafness. PMID:27458362

  3. Influence of maneuverability on helicopter combat effectiveness

    NASA Technical Reports Server (NTRS)

    Falco, M.; Smith, R.

    1982-01-01

    A computational procedure employing a stochastic learning method in conjunction with dynamic simulation of helicopter flight and weapon system operation was used to derive helicopter maneuvering strategies. The derived strategies maximize either survival or kill probability and take the form of a feedback control based upon threat visual or warning system cues. Maneuverability parameters implicit in the strategy development include maximum longitudinal acceleration and deceleration, maximum sustained and transient load factor turn rate at forward speed, and maximum pedal turn rate and lateral acceleration at hover. Results are presented in terms of probability of kill for all combat initial conditions for two threat categories.

  4. Modeling spot markets for electricity and pricing electricity derivatives

    NASA Astrophysics Data System (ADS)

    Ning, Yumei

    Spot prices for electricity have been very volatile, with dramatic price spikes occurring in restructured markets. The task of forecasting electricity prices and managing price risk presents a new challenge for market players. The objectives of this dissertation are: (1) to develop a stochastic model of price behavior and predict price spikes; (2) to examine the effect of weather forecasts on forecasted prices; (3) to price electricity options and value generation capacity. The volatile behavior of prices can be represented by a stochastic regime-switching model. In the model, the means of the high-price and low-price regimes and the probabilities of switching from one regime to the other are specified as functions of daily peak load. The probability of switching to the high-price regime is positively related to load, but is still not high enough at the highest loads to predict price spikes accurately. An application of this model shows how the structure of the Pennsylvania-New Jersey-Maryland market changed when market-based offers were allowed, resulting in higher price spikes. An ARIMA model including temperature, seasonal, and weekly effects is estimated to forecast daily peak load. Forecasts of load under different assumptions about weather patterns are used to predict changes of price behavior given the regime-switching model of prices. Results show that moving from a normal summer to an extremely warm summer causes relatively small increases in temperature (+1.5%) and load (+3.0%). In contrast, the increases in prices are large (+20%). The conclusion is that the seasonal outlook forecasts provided by NOAA are potentially valuable for predicting prices in electricity markets. The traditional option models, based on Geometric Brownian Motion, are not appropriate for electricity prices. An option model using the regime-switching framework is developed to value a European call option. 
The model includes volatility risk and allows changes in prices and volatility to be correlated. The results show that the value of a power plant is much higher using the financial option model than using traditional discounted cash flow.
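
    The load-dependent switching mechanism can be sketched as a logistic probability of jumping to the high-price regime. All parameters are invented for illustration and are not the dissertation's estimates:

    ```python
    import math
    import random

    def p_high(load, a=-8.0, b=0.1):
        """Logistic probability of switching into the high-price regime."""
        return 1.0 / (1.0 + math.exp(-(a + b * load)))

    mean_price = {"low": 25.0, "high": 180.0}

    def simulate(loads, p_decay=0.6):
        """Simulate daily prices; regime switches depend on daily peak load."""
        random.seed(1)
        regime, prices = "low", []
        for load in loads:
            if regime == "low" and random.random() < p_high(load):
                regime = "high"
            elif regime == "high" and random.random() < p_decay:
                regime = "low"                  # spikes die out quickly
            prices.append(random.gauss(mean_price[regime], 5.0))
        return prices

    # One year of slowly rising daily peak loads (illustrative)
    prices = simulate([60.0 + 30.0 * d / 365.0 for d in range(365)])
    print(round(p_high(60.0), 3), round(p_high(90.0), 3), len(prices))
    ```

    The steep rise of p_high with load is what ties price-spike risk to load forecasts in the abstract's argument.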

  5. A study of the application of power-spectral methods of generalized harmonic analysis to gust loads on airplanes

    NASA Technical Reports Server (NTRS)

    Press, Harry; Mazelsky, Bernard

    1954-01-01

    The applicability of some results from the theory of generalized harmonic analysis (or power-spectral analysis) to the analysis of gust loads on airplanes in continuous rough air is examined. The general relations for linear systems between power spectrums of a random input disturbance and an output response are used to relate the spectrum of airplane load in rough air to the spectrum of atmospheric gust velocity. The power spectrum of loads is shown to provide a measure of the load intensity in terms of the standard deviation (root mean square) of the load distribution for an airplane in flight through continuous rough air. For the case of a load output having a normal distribution, which appears from experimental evidence to apply to homogeneous rough air, the standard deviation is shown to describe the probability distribution of loads or the proportion of total time that the load has given values. Thus, for an airplane in flight through homogeneous rough air, the probability distribution of loads may be determined from a power-spectral analysis. In order to illustrate the application of power-spectral analysis to gust-load analysis and to obtain an insight into the relations between loads and airplane gust-response characteristics, two selected series of calculations are presented. The results indicate that both methods of analysis yield results that are consistent to a first approximation.
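
    The core input-output relation, Phi_out(w) = |H(w)|^2 Phi_in(w) with sigma^2 the integral of the output spectrum, can be sketched numerically. The gust spectrum and first-order response below are illustrative stand-ins for the report's airplane transfer functions:

    ```python
    import math

    L, sigma_w, V = 300.0, 2.0, 100.0   # gust scale (m), rms gust (m/s), speed (m/s)
    tau = L / V

    def phi_gust(w):
        """One-sided gust velocity PSD (integrates to sigma_w**2)."""
        return sigma_w**2 * (2.0 * tau / math.pi) / (1.0 + (tau * w) ** 2)

    def h_sq(w, wc=2.0):
        """|H(w)|^2 for a first-order load response with corner frequency wc."""
        return 1.0 / (1.0 + (w / wc) ** 2)

    # sigma_load**2 = integral of |H|^2 * Phi_gust over frequency
    dw, wmax = 0.001, 200.0
    var = sum(h_sq(i * dw) * phi_gust(i * dw) * dw
              for i in range(1, int(wmax / dw)))
    sigma_load = math.sqrt(var)
    print(round(sigma_load, 3))
    ```

    For a normally distributed load, this single standard deviation fixes the whole probability distribution of loads, which is the abstract's central point.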

  6. Estimate of tephra accumulation probabilities for the U.S. Department of Energy's Hanford Site, Washington

    USGS Publications Warehouse

    Hoblitt, Richard P.; Scott, William E.

    2011-01-01

    In response to a request from the U.S. Department of Energy, we estimate the thickness of tephra accumulation that has an annual probability of 1 in 10,000 of being equaled or exceeded at the Hanford Site in south-central Washington State, where a project to build the Tank Waste Treatment and Immobilization Plant is underway. We follow the methodology of a 1987 probabilistic assessment of tephra accumulation in the Pacific Northwest. For a given thickness of tephra, we calculate the product of three probabilities: (1) the annual probability of an eruption producing 0.1 km3 (bulk volume) or more of tephra, (2) the probability that the wind will be blowing toward the Hanford Site, and (3) the probability that tephra accumulations will equal or exceed the given thickness at a given distance. Mount St. Helens, which lies about 200 km upwind from the Hanford Site, has been the most prolific source of tephra fallout among Cascade volcanoes in the recent geologic past and its annual eruption probability based on this record (0.008) dominates assessment of future tephra falls at the site. The probability that the prevailing wind blows toward Hanford from Mount St. Helens is 0.180. We estimate exceedance probabilities of various thicknesses of tephra fallout from an analysis of 14 eruptions of the size expectable from Mount St. Helens and for which we have measurements of tephra fallout at 200 km. The result is that the estimated thickness of tephra accumulation that has an annual probability of 1 in 10,000 of being equaled or exceeded is about 10 centimeters. It is likely that this thickness is a maximum estimate because we used conservative estimates of eruption and wind probabilities and because the 14 deposits we used probably provide an over-estimate. The use of deposits in this analysis that were mostly compacted by the time they were studied and measured implies that the bulk density of the tephra fallout we consider here is in the range of 1,000-1,250 kg/m3. 
The load of 10 cm of such tephra fallout on a flat surface would therefore be in the range of 100-125 kg/m2; addition of water from rainfall or snowmelt would provide additional load.
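
    The three-probability product can be reproduced directly from the numbers quoted in the abstract (0.008 annual eruption probability, 0.180 wind probability, and the 1-in-10,000 annual target):

    ```python
    p_eruption = 0.008   # annual P(eruption >= 0.1 km^3 bulk tephra)
    p_wind = 0.180       # P(wind blowing toward the Hanford Site)

    def annual_prob(p_exceed_thickness):
        """Annual probability of equaling or exceeding a given thickness."""
        return p_eruption * p_wind * p_exceed_thickness

    # Implied P(thickness >= 10 cm at 200 km | eruption, wind toward site)
    # consistent with the reported 1-in-10,000 annual probability:
    p_exceed = 1e-4 / (p_eruption * p_wind)
    print(round(p_exceed, 3))

    # Load of 10 cm of compacted tephra at bulk density 1,000-1,250 kg/m^3
    thickness_m = 0.10
    loads = [thickness_m * rho for rho in (1000.0, 1250.0)]
    print(loads)   # kg/m^2, per the abstract's 100-125 kg/m^2 range
    ```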

  7. The Use of the Direct Optimized Probabilistic Calculation Method in Design of Bolt Reinforcement for Underground and Mining Workings

    PubMed Central

    Krejsa, Martin; Janas, Petr; Yilmaz, Işık; Marschalko, Marian; Bouchal, Tomas

    2013-01-01

    The load-carrying system of each construction should fulfill several conditions which represent reliability criteria in the assessment procedure. The theory of structural reliability determines the probability that a construction keeps its required properties. Using this theory, it is possible to apply probabilistic computations based on probability theory and mathematical statistics. These methods have become increasingly popular; they are used, in particular, in designs of load-carrying structures with a required level of reliability when at least some input variables in the design are random. The objective of this paper is to indicate the current scope which might be covered by the new method, Direct Optimized Probabilistic Calculation (DOProC), in assessments of reliability of load-carrying structures. DOProC uses a purely numerical approach without any simulation techniques. This provides more accurate solutions to probabilistic tasks, and, in some cases, such an approach results in considerably faster completion of computations. DOProC can be used to solve efficiently a number of probabilistic computations. A very good sphere of application for DOProC is the assessment of the bolt reinforcement in underground and mining workings. For the purposes above, a special software application, "Anchor", has been developed. PMID:23935412
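
    The essence of a purely numerical (simulation-free) approach of this kind can be sketched by discretizing the load effect and resistance into histograms and summing probabilities directly; the histograms below are invented for illustration and do not reproduce DOProC's optimization steps:

    ```python
    # Discretized resistance R and load effect E as (value, probability) pairs
    r_vals = [(180, 0.05), (200, 0.25), (220, 0.40), (240, 0.25), (260, 0.05)]
    e_vals = [(140, 0.10), (170, 0.30), (200, 0.40), (230, 0.20)]

    # Failure when the reliability function R - E is negative; the double sum
    # enumerates every histogram-bin combination exactly, with no sampling.
    p_fail = sum(pr * pe
                 for r, pr in r_vals
                 for e, pe in e_vals
                 if r - e < 0)
    print(round(p_fail, 4))
    ```

    Given the histograms, the result is exact, which is the advantage over Monte Carlo simulation that the abstract highlights.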

  8. Water quality of the Neuse River, North Carolina : variability, pollution loads, and long-term trends

    USGS Publications Warehouse

    Harned, Douglas A.

    1980-01-01

    A water-quality study of the Neuse River, N.C., based on data collected during 1956-77 at the U.S. Geological Survey stations at Clayton and Kinston, employs statistical trend analysis techniques that provide a framework for river quality assessment. Overall, water-quality of the Neuse River is satisfactory for most uses. At Clayton, fecal coliform bacteria and nutrient levels are high, but algae and total-organic-carbon data indicate water-quality improvement in recent years, due probably to a new wastewater treatment plant located downstream from Raleigh, N.C. Pollution was determined by subtracting estimated natural loads of constituents from measured total loads. Pollution makes up approximately 50% of the total dissolved material transported by the Neuse. Two different data transformation methods allowed trends to be identified in constituent concentrations. The methods recomputed the concentrations as if they were determined at a constant discharge over the period of record. Although little change since 1956 can be seen in most constituents, large changes in some constituents, such as increases in potassium and sulfate, indicate that the water quality of the Neuse River has noticeably deteriorated. Increases in sulfate are probably largely due to increased long-term inputs of sulfur compounds from airborne pollutants. (USGS)

  9. Resolution of the direct containment heating issue for all Westinghouse plants with large dry containments or subatmospheric containments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilch, M.M.; Allen, M.D.; Klamerus, E.W.

    1996-02-01

    This report uses the scenarios described in NUREG/CR-6075 and NUREG/CR-6075, Supplement 1, to address the direct containment heating (DCH) issue for all Westinghouse plants with large dry or subatmospheric containments. DCH is considered resolved if the conditional containment failure probability (CCFP) is less than 0.1. Loads versus strength evaluations of the CCFP were performed for each plant using plant-specific information. The DCH issue is considered resolved for a plant if a screening phase results in a CCFP less than 0.01, which is more stringent than the overall success criterion. If the screening phase CCFP for a plant is greater than 0.01, then refined containment loads evaluations must be performed and/or the probability of high pressure at vessel breach must be analyzed. These analyses could be used separately or could be integrated together to recalculate the CCFP for an individual plant to reduce the CCFP to meet the overall success criterion of less than 0.1. The CCFPs for all of the Westinghouse plants with dry containments were less than 0.01 at the screening phase, and thus, the DCH issue is resolved for these plants based on containment loads alone. No additional analyses are required.

  10. 14 CFR 23.613 - Material strength properties and design values.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component; 99 percent probability... would result in applied loads being safely distributed to other load carrying members; 90 percent...
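
    The 99-percent and 90-percent probability requirements in this rule are conventionally implemented as one-sided tolerance bounds on material test data (A-basis and B-basis values, both at 95 percent confidence). The sketch below uses the Natrella approximation to the normal-data tolerance factor rather than the exact noncentral-t computation, and the coupon statistics are invented:

    ```python
    import math

    def z(q):
        """Standard normal quantile via bisection on the erf-based CDF."""
        lo, hi = -10.0, 10.0
        for _ in range(100):
            mid = (lo + hi) / 2.0
            if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < q:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0

    def k_factor(n, p=0.99, conf=0.95):
        """Approximate one-sided tolerance factor k so that x_bar - k*s
        bounds the p-th percentile with confidence conf (Natrella)."""
        zp, zc = z(p), z(conf)
        a = 1.0 - zc**2 / (2.0 * (n - 1))
        b = zp**2 - zc**2 / n
        return (zp + math.sqrt(zp**2 - a * b)) / a

    # Hypothetical coupon results: sample size, mean, standard deviation (MPa)
    n, mean, sd = 30, 480.0, 25.0
    a_basis = mean - k_factor(n, 0.99) * sd   # 99% probability, 95% confidence
    b_basis = mean - k_factor(n, 0.90) * sd   # 90% probability, 95% confidence
    print(round(a_basis, 1), round(b_basis, 1))
    ```

    The single-load-path case in the rule calls for the more conservative A-basis value; redundant structure may use the B-basis value.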

  11. 14 CFR 23.613 - Material strength properties and design values.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component; 99 percent probability... would result in applied loads being safely distributed to other load carrying members; 90 percent...

  12. 14 CFR 23.613 - Material strength properties and design values.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component; 99 percent probability... would result in applied loads being safely distributed to other load carrying members; 90 percent...

  13. 14 CFR 25.613 - Material strength properties and material design values.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... following probability: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent... elements would result in applied loads being safely distributed to other load carrying members, 90 percent...

  14. 14 CFR 27.613 - Material strength properties and design values.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...

  15. 14 CFR 23.613 - Material strength properties and design values.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component; 99 percent probability... would result in applied loads being safely distributed to other load carrying members; 90 percent...

  16. 14 CFR 25.613 - Material strength properties and material design values.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... following probability: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent... elements would result in applied loads being safely distributed to other load carrying members, 90 percent...

  17. 14 CFR 29.613 - Material strength properties and design values.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...

  18. 14 CFR 25.613 - Material strength properties and material design values.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... following probability: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent... elements would result in applied loads being safely distributed to other load carrying members, 90 percent...

  19. 14 CFR 27.613 - Material strength properties and design values.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...

  20. 14 CFR 25.613 - Material strength properties and material design values.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... following probability: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent... elements would result in applied loads being safely distributed to other load carrying members, 90 percent...

  1. 14 CFR 29.613 - Material strength properties and design values.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...

  2. 14 CFR 27.613 - Material strength properties and design values.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...

  3. 14 CFR 27.613 - Material strength properties and design values.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...

  4. 14 CFR 25.613 - Material strength properties and material design values.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... following probability: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent... elements would result in applied loads being safely distributed to other load carrying members, 90 percent...

  5. 14 CFR 23.613 - Material strength properties and design values.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component; 99 percent probability... would result in applied loads being safely distributed to other load carrying members; 90 percent...

  6. 14 CFR 27.613 - Material strength properties and design values.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...

  7. 14 CFR 29.613 - Material strength properties and design values.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...

  8. 14 CFR 29.613 - Material strength properties and design values.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...

  9. 14 CFR 29.613 - Material strength properties and design values.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...

  10. Probabilistic liquefaction triggering based on the cone penetration test

    USGS Publications Warehouse

    Moss, R.E.S.; Seed, R.B.; Kayen, R.E.; Stewart, J.P.; Tokimatsu, K.

    2005-01-01

Performance-based earthquake engineering requires a probabilistic treatment of potential failure modes in order to accurately quantify the overall stability of the system. This paper is a summary of the application portions of the probabilistic liquefaction triggering correlations recently proposed by Moss and co-workers. To enable probabilistic treatment of liquefaction triggering, the variables comprising the seismic load and the liquefaction resistance were treated as inherently uncertain. Supporting data from an extensive Cone Penetration Test (CPT)-based liquefaction case history database were used to develop a probabilistic correlation. The methods used to measure the uncertainty of the load and resistance variables, how the interactions of these variables were treated using Bayesian updating, and how reliability analysis was applied to produce curves of equal probability of liquefaction are presented. The normalization for effective overburden stress, the magnitude-correlated duration weighting factor, and the non-linear shear mass participation factor used are also discussed.
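The reliability framing in this abstract, an uncertain seismic load compared against an uncertain liquefaction resistance, can be sketched as a lognormal limit-state probability. The function and the `sigma_ln` value below are illustrative assumptions, not the published Moss et al. correlation:

```python
import math

def p_liquefaction(csr, crr_median, sigma_ln=0.5):
    """Probability of liquefaction from a simple lognormal limit-state
    model: P_L = Phi(ln(CSR / CRR_median) / sigma_ln), where CSR is the
    cyclic stress ratio (load) and CRR the cyclic resistance ratio.
    sigma_ln is an assumed total uncertainty, not a fitted coefficient."""
    z = math.log(csr / crr_median) / sigma_ln
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Seismic load equal to the median resistance gives a 50% probability
print(round(p_liquefaction(0.20, 0.20), 2))  # 0.5
```

A load above the median resistance pushes the probability above 50%, and below it, under, which is the shape of the equal-probability curves the paper derives.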

  11. Trends in marine debris in the U.S. Caribbean and the Gulf of Mexico, 1996-2003

    USGS Publications Warehouse

Ribic, Christine; Sheavly, Seba B.; Rugg, David J.

    2011-01-01

    Marine debris is a widespread and globally recognized problem. Sound information is necessary to understand the extent of the problem and to inform resource managers and policy makers about potential mitigation strategies. Although there are many short-term studies on marine debris, a longer-term perspective and the ability to compare among regions has heretofore been missing in the U.S. Caribbean and the Gulf of Mexico. We used data from a national beach monitoring program to evaluate and compare amounts, composition, and trends of indicator marine debris in the U.S. Caribbean (Puerto Rico and the U.S. Virgin Islands) and the Gulf of Mexico from 1996 to 2003. Indicator items provided a standardized set that all surveys collected; each was assigned a probable source: ocean-based, land-based, or general-source. Probable ocean-based debris was related to activities such as recreational boating/fishing, commercial fishing and activities on oil/gas platforms. Probable land-based debris was related to land-based recreation and sewer systems. General-source debris represented plastic items that can come from either ocean- or land-based sources; these items were plastic bags, strapping bands, and plastic bottles (excluding motor oil containers). Debris loads were similar between the U.S. Caribbean and the western Gulf of Mexico; however, debris composition on U.S. Caribbean beaches was dominated by land-based indicators while the western Gulf of Mexico was dominated by ocean-based indicators. Beaches along the eastern Gulf of Mexico had the lowest counts of debris; composition was dominated by land-based indicators, similar to that found for the U.S. Caribbean. Debris loads on beaches in the Gulf of Mexico are likely affected by Gulf circulation patterns, reducing loads in the eastern Gulf and increasing loads in the western Gulf. 
Over the seven years of monitoring, we found a large linear decrease in total indicator debris, as well as all source categories, for the U.S. Caribbean. Lower magnitude decreases were seen in indicator debris along the eastern Gulf of Mexico. In contrast, only land-based indicators declined in the western Gulf of Mexico; total, ocean-based and general-source indicators remained unchanged. Decreases in land-based indicators were not related to human population in the coastal regions; human population increased in all regions over the time of the study. Significant monthly patterns for indicator debris were found only in the Gulf of Mexico; counts were highest during May through September, with peaks occurring in July. Inclement weather conditions before the time of the survey also accounted for some of the variation in the western Gulf of Mexico; fewer items were found when there were heavy seas or cold fronts in the weeks prior to the survey, while tropical storms (including hurricanes) increased the amount of debris. With the development around the globe of long-term monitoring programs using standardized methodology, there is the potential to help management at individual sites, as well as generate larger-scale perspectives (from regional to global) to inform decision makers. Incorporating mechanisms producing debris into marine debris programs would be a fruitful area for future research.

  12. Foot-ankle complex injury risk curves using calcaneus bone mineral density data.

    PubMed

    Yoganandan, Narayan; Chirvi, Sajal; Voo, Liming; DeVogel, Nicholas; Pintar, Frank A; Banerjee, Anjishnu

    2017-08-01

Biomechanical data from postmortem human subject (PMHS) experiments are used to derive human injury probability curves and develop injury criteria. This process has been used in previous and current automotive crashworthiness studies, Federal safety standards, and dummy design and development. Human bone strength decreases as individuals reach old age. Injury risk curves using the primary predictor variable (e.g., force) should therefore account for such strength reduction when the test data are collected from PMHS specimens of different ages (age at the time of death). This demographic variable is meant to be a surrogate for bone strength, as other parameters have not been routinely gathered in previous experiments. However, bone mineral density (BMD) can be gathered from tested specimens (as presented in this manuscript). The objective of this study is to investigate different approaches to accounting for BMD in the development of human injury risk curves. Using simulated underbody blast (UBB) loading experiments conducted with PMHS lower leg-foot-ankle complexes, a comparison is made between two methods: treating BMD as a covariate and pre-scaling test data based on BMD. Twelve PMHS lower leg-foot-ankle specimens were subjected to UBB loads. Calcaneus BMD was obtained from quantitative computed tomography (QCT) images. Fracture forces were recorded using a load cell. They were treated as uncensored data in the survival analysis model, which used the Weibull distribution in both methods. The normalized confidence interval size (NCIS) was obtained using the mean and ±95% confidence limit curves. Mean peak forces of 3.9 kN and 8.6 kN were associated with the 5% and 50% probabilities of injury for the covariate method of deriving the risk curve at the reference age of 45 years. Mean forces of 5.4 kN and 9.2 kN were associated with the 5% and 50% probabilities of injury for the pre-scaled method.
The NCIS magnitudes were greater for the covariate-based risk curves (0.52-1.00) than for the risk curves based on the pre-scaled method (0.24-0.66). The pre-scaling method resulted in generally greater injury forces and a tighter confidence interval around the injury risk curve. Although not directly applicable to foot-ankle fractures, when compared with using spine BMD from QCT scans to pre-scale the force, the calcaneus-BMD-scaled data generally produced greater force at the same risk level. Pre-scaling the force data using BMD is an alternative, and likely more accurate, method than using a covariate to account for age-related bone strength change when deriving risk curves from biomechanical experiments using PMHS. Because of the proximity of the calcaneus to the impacting load, it is suggested that the BMD of the foot-ankle bones be determined and used in future UBB and other loading conditions to derive human injury probability curves for the foot-ankle complex. Copyright © 2017. Published by Elsevier Ltd.
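The Weibull-based risk curve described in this record can be sketched as a CDF and its inverse. The scale and shape values below are placeholder assumptions, not the fitted parameters from the study:

```python
import math

def weibull_injury_risk(force_kn, scale=9.2, shape=3.0):
    """Injury probability at a given peak force from a Weibull CDF,
    the distribution used in the survival analysis above. scale (kN)
    and shape here are illustrative, not the study's fitted values."""
    return 1.0 - math.exp(-((force_kn / scale) ** shape))

def force_at_risk(p, scale=9.2, shape=3.0):
    """Invert the CDF: peak force associated with injury probability p."""
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)

# Round trip: the force at 5% risk maps back to a 5% injury probability
print(round(weibull_injury_risk(force_at_risk(0.05)), 2))  # 0.05
```

Reading forces off the inverse CDF at fixed risk levels (5%, 50%) is how injury reference values like those quoted above are extracted from the fitted curve.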

  13. Linkage of Viral Sequences among HIV-Infected Village Residents in Botswana: Estimation of Linkage Rates in the Presence of Missing Data

    PubMed Central

    Carnegie, Nicole Bohme; Wang, Rui; Novitsky, Vladimir; De Gruttola, Victor

    2014-01-01

    Linkage analysis is useful in investigating disease transmission dynamics and the effect of interventions on them, but estimates of probabilities of linkage between infected people from observed data can be biased downward when missingness is informative. We investigate variation in the rates at which subjects' viral genotypes link across groups defined by viral load (low/high) and antiretroviral treatment (ART) status using blood samples from household surveys in the Northeast sector of Mochudi, Botswana. The probability of obtaining a sequence from a sample varies with viral load; samples with low viral load are harder to amplify. Pairwise genetic distances were estimated from aligned nucleotide sequences of HIV-1C env gp120. It is first shown that the probability that randomly selected sequences are linked can be estimated consistently from observed data. This is then used to develop estimates of the probability that a sequence from one group links to at least one sequence from another group under the assumption of independence across pairs. Furthermore, a resampling approach is developed that accounts for the presence of correlation across pairs, with diagnostics for assessing the reliability of the method. Sequences were obtained for 65% of subjects with high viral load (HVL, n = 117), 54% of subjects with low viral load but not on ART (LVL, n = 180), and 45% of subjects on ART (ART, n = 126). The probability of linkage between two individuals is highest if both have HVL, and lowest if one has LVL and the other has LVL or is on ART. Linkage across groups is high for HVL and lower for LVL and ART. Adjustment for missing data increases the group-wise linkage rates by 40–100%, and changes the relative rates between groups. Bias in inferences regarding HIV viral linkage that arise from differential ability to genotype samples can be reduced by appropriate methods for accommodating missing data. PMID:24415932

  14. Linkage of viral sequences among HIV-infected village residents in Botswana: estimation of linkage rates in the presence of missing data.

    PubMed

    Carnegie, Nicole Bohme; Wang, Rui; Novitsky, Vladimir; De Gruttola, Victor

    2014-01-01

    Linkage analysis is useful in investigating disease transmission dynamics and the effect of interventions on them, but estimates of probabilities of linkage between infected people from observed data can be biased downward when missingness is informative. We investigate variation in the rates at which subjects' viral genotypes link across groups defined by viral load (low/high) and antiretroviral treatment (ART) status using blood samples from household surveys in the Northeast sector of Mochudi, Botswana. The probability of obtaining a sequence from a sample varies with viral load; samples with low viral load are harder to amplify. Pairwise genetic distances were estimated from aligned nucleotide sequences of HIV-1C env gp120. It is first shown that the probability that randomly selected sequences are linked can be estimated consistently from observed data. This is then used to develop estimates of the probability that a sequence from one group links to at least one sequence from another group under the assumption of independence across pairs. Furthermore, a resampling approach is developed that accounts for the presence of correlation across pairs, with diagnostics for assessing the reliability of the method. Sequences were obtained for 65% of subjects with high viral load (HVL, n = 117), 54% of subjects with low viral load but not on ART (LVL, n = 180), and 45% of subjects on ART (ART, n = 126). The probability of linkage between two individuals is highest if both have HVL, and lowest if one has LVL and the other has LVL or is on ART. Linkage across groups is high for HVL and lower for LVL and ART. Adjustment for missing data increases the group-wise linkage rates by 40-100%, and changes the relative rates between groups. Bias in inferences regarding HIV viral linkage that arise from differential ability to genotype samples can be reduced by appropriate methods for accommodating missing data.
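The downward bias from missing sequences can be illustrated with the independence-across-pairs estimator the paper starts from: the chance of linking to at least one member of a group depends on the group's true size, not just the sequenced subset. All numbers below are illustrative:

```python
def p_link_to_group(p_pair, n_observed, sampling_fraction):
    """P(a sequence links to at least one member of a group), assuming
    independence across pairs. Using only the sequenced subset
    (n_observed) biases the estimate downward; scaling to the true
    group size n_observed / sampling_fraction adjusts for missingness."""
    n_true = n_observed / sampling_fraction
    return 1.0 - (1.0 - p_pair) ** n_true

naive = 1.0 - (1.0 - 0.01) ** 60           # ignores unsequenced members
adjusted = p_link_to_group(0.01, 60, 0.6)  # 60 sequenced of ~100 members
print(adjusted > naive)  # True
```

The gap between the naive and adjusted figures is the same effect as the 40-100% increase in group-wise linkage rates the authors report after accommodating missing data.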

  15. The capacity credit of grid-connected photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Alsema, E. A.; van Wijk, A. J. M.; Turkenburg, W. C.

The capacity credit of photovoltaic (PV) power plants integrated into the Netherlands grid was investigated, together with an estimate of the total allowable penetration. An hourly simulation was performed based on meteorological data from five stations and considering tilted surfaces, the current grid load pattern, and the load pattern after PV-power augmentation. The reliability of the grid was assessed in terms of a loss-of-load probability analysis, assuming power drops were limited to 1 GW. A projected tolerance for 2.5 GW of PV power was calculated. Peak demand was found to occur in winter, when insolation is lowest; on a daily basis, however, insolation coincided with peak demand. Combining the PV input with an equal amount of wind turbine power production was found to augment the capacity credit for both at aggregate outputs of 2-4 GW.
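In its simplest hourly form, the loss-of-load probability analysis mentioned here reduces to counting hours in which demand exceeds available supply. A minimal sketch with toy data, not the study's full reliability model:

```python
def loss_of_load_probability(hourly_load, hourly_supply):
    """Fraction of hours in which demand exceeds available supply:
    an elementary hourly LOLP estimate (illustrative only)."""
    shortfall_hours = sum(
        1 for load, supply in zip(hourly_load, hourly_supply)
        if load > supply)
    return shortfall_hours / len(hourly_load)

# Toy data (GW): demand exceeds the 6 GW supply in 1 of 5 hours
print(loss_of_load_probability([3, 5, 7, 6, 4], [6, 6, 6, 6, 6]))  # 0.2
```

In a PV study, the supply series would vary hour by hour with simulated insolation, which is why the daily coincidence of insolation and peak demand matters for the capacity credit.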

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Shih-Jung

Dynamic strength of the High Flux Isotope Reactor (HFIR) vessel to resist hypothetical accidents is analyzed by using the method of fracture mechanics. Vessel critical stresses are estimated by applying dynamic pressure pulses of a range of magnitudes and pulse durations. The pulse-versus-time functions are assumed to be step functions. The probability of vessel fracture is then calculated by assuming a distribution of possible surface cracks of different crack depths. The probability distribution function for the crack depths is based on the form that is recommended by the Marshall report. The toughness of the vessel steel used in the analysis is based on the projected and embrittled value after 10 effective full power years from 1986. From the study made by Cheverton, Merkle and Nanstad, the weakest point on the vessel for fracture evaluation is known to be located within the region surrounding the tangential beam tube HB3. The increase in the probability of fracture is obtained as an extension of the result from that report for the regular operating condition to include conditions of higher dynamic pressures due to accident loadings. The increase in the probability of vessel fracture is plotted for a range of hoop stresses to indicate the vessel strength against hypothetical accident conditions.
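The calculation described above, integrating a crack-depth distribution against a fracture criterion, can be sketched by Monte Carlo. The exponential crack-depth law below is a simplified stand-in for the Marshall distribution, and every number is illustrative rather than HFIR data:

```python
import math
import random

def fracture_probability(hoop_stress_mpa, toughness, trials=100_000, seed=1):
    """Monte Carlo sketch of vessel fracture probability: sample surface
    crack depths a from an exponential law (a simplified stand-in for
    the Marshall distribution) and count cases where the stress
    intensity K_I = stress * sqrt(pi * a) exceeds the toughness."""
    rng = random.Random(seed)
    mean_depth_m = 0.006                # assumed mean crack depth (m)
    failures = 0
    for _ in range(trials):
        a = rng.expovariate(1.0 / mean_depth_m)
        k_i = hoop_stress_mpa * math.sqrt(math.pi * a)  # MPa*m^0.5
        if k_i > toughness:
            failures += 1
    return failures / trials

# Fracture probability rises with hoop stress, as in the plotted result
print(fracture_probability(300, 50) > fracture_probability(150, 50))  # True
```

Sweeping the stress argument reproduces the kind of probability-versus-hoop-stress curve the report plots for accident loadings.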

  17. Probabilistic structural analysis using a general purpose finite element program

    NASA Astrophysics Data System (ADS)

    Riha, D. S.; Millwater, H. R.; Thacker, B. H.

    1992-07-01

    This paper presents an accurate and efficient method to predict the probabilistic response for structural response quantities, such as stress, displacement, natural frequencies, and buckling loads, by combining the capabilities of MSC/NASTRAN, including design sensitivity analysis and fast probability integration. Two probabilistic structural analysis examples have been performed and verified by comparison with Monte Carlo simulation of the analytical solution. The first example consists of a cantilevered plate with several point loads. The second example is a probabilistic buckling analysis of a simply supported composite plate under in-plane loading. The coupling of MSC/NASTRAN and fast probability integration is shown to be orders of magnitude more efficient than Monte Carlo simulation with excellent accuracy.
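The Monte Carlo benchmark that the fast probability integration results are verified against can be sketched directly: with random load and capacity, it simply counts exceedances. The normal distributions and parameters below are illustrative assumptions:

```python
import random

def p_failure_mc(mean_load, mean_strength, cov=0.1, trials=200_000, seed=2):
    """Plain Monte Carlo estimate of P(response > capacity) with normal
    random variables. The paper's point is that fast probability
    integration reaches comparable accuracy at a tiny fraction of this
    sampling cost. Parameters are illustrative, not from the examples."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        load = rng.gauss(mean_load, cov * mean_load)
        strength = rng.gauss(mean_strength, cov * mean_strength)
        if load > strength:
            hits += 1
    return hits / trials

# A larger strength margin lowers the estimated failure probability
print(p_failure_mc(100, 150) < p_failure_mc(100, 110))  # True
```

Estimating small probabilities this way needs very many samples, which is the "orders of magnitude" efficiency gap the paper reports in favor of fast probability integration.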

  18. Integrated modeling approach using SELECT and SWAT models to simulate source loading and in-stream conditions of fecal indicator bacteria.

    NASA Astrophysics Data System (ADS)

    Ranatunga, T.

    2016-12-01

Modeling the fate and transport of fecal bacteria in a watershed is generally a process-based approach that considers releases from manure, point sources, and septic systems. Overland transport with water and sediments, infiltration into soils, transport in the vadose zone and groundwater, die-off and growth processes, and in-stream transport are the other major processes considered in bacteria simulation. This presentation will discuss a simulation of fecal indicator bacteria (E. coli) source loading and in-stream conditions in a non-tidal watershed (Cedar Bayou Watershed) in South Central Texas using two models: the Spatially Explicit Load Enrichment Calculation Tool (SELECT) and the Soil and Water Assessment Tool (SWAT). Furthermore, it will discuss a probable approach to bacteria source load reduction in order to meet the water quality standards in the streams. The selected watershed is listed by the Texas Commission on Environmental Quality (TCEQ) as having levels of fecal indicator bacteria that pose a risk for contact recreation and wading. The SELECT modeling approach was used to estimate the bacteria source loading from land categories. Major bacteria sources considered were failing septic systems, discharges from wastewater treatment facilities, excreta from livestock (cattle, horses, sheep, and goats), excreta from wildlife (feral hogs and deer), pet waste (mainly from dogs), and runoff from urban surfaces. The estimated source loads were input to the SWAT model in order to simulate transport through the land and in-stream conditions. The calibrated SWAT model was then used to estimate in-stream indicator bacteria concentrations for future years based on H-GAC's regional land use, population, and household projections (up to 2040). Based on the in-stream reductions required to meet the water quality standards, the corresponding required source load reductions were estimated.

  19. Application of SELECT and SWAT models to simulate source load, fate, and transport of fecal bacteria in watersheds.

    NASA Astrophysics Data System (ADS)

    Ranatunga, T.

    2017-12-01

Modeling the fate and transport of fecal bacteria in a watershed is a process-based approach that considers releases from manure, point sources, and septic systems. Overland transport with water and sediments, infiltration into soils, transport in the vadose zone and groundwater, die-off and growth processes, and in-stream transport are the other major processes considered in bacteria simulation. This presentation will discuss a simulation of fecal indicator bacteria source loading and in-stream conditions in a non-tidal watershed (Cedar Bayou Watershed) in South Central Texas using two models: the Spatially Explicit Load Enrichment Calculation Tool (SELECT) and the Soil and Water Assessment Tool (SWAT). Furthermore, it will discuss a probable approach to bacteria source load reduction in order to meet the water quality standards in the streams. The selected watershed is listed by the Texas Commission on Environmental Quality (TCEQ) as having levels of fecal indicator bacteria that pose a risk for contact recreation and wading. The SELECT modeling approach was used to estimate the bacteria source loading from land categories. Major bacteria sources considered were failing septic systems, discharges from wastewater treatment facilities, excreta from livestock (cattle, horses, sheep, and goats), excreta from wildlife (feral hogs and deer), pet waste (mainly from dogs), and runoff from urban surfaces. The estimated source loads from the SELECT model were input to the SWAT model to simulate bacteria transport over land and in-stream. The calibrated SWAT model was then used to estimate in-stream indicator bacteria concentrations for future years based on regional land use, population, and household forecasts (up to 2040). Based on the in-stream reductions required to meet the water quality standards, the corresponding required source load reductions were estimated.
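The SELECT-style source characterization in these two records boils down to summing count times per-head production times delivery fraction across source categories. A toy sketch; the counts and rates below are placeholders, not calibrated values:

```python
def total_source_load(sources):
    """Sum daily fecal indicator bacteria loads (cfu/day) across source
    categories, SELECT-style: each entry is (count, cfu per head per
    day, fraction delivered to the stream). All rates are placeholders."""
    return sum(count * rate * delivery for count, rate, delivery in sources)

sources = [
    (500, 1.0e10, 0.01),  # cattle: head, cfu/head/day, delivery ratio
    (200, 2.5e9, 0.02),   # feral hogs
    (50, 1.0e9, 0.05),    # failing septic systems
]
print(f"{total_source_load(sources):.2e}")  # 6.25e+10 cfu/day
```

Load reduction scenarios then scale down individual category entries until the routed in-stream concentration from the transport model meets the standard.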

  20. Lower Leg Injury Reference Values and Risk Curves from Survival Analysis for Male and Female Dummies: Meta-analysis of Postmortem Human Subject Tests.

    PubMed

    Yoganandan, Narayan; Arun, Mike W J; Pintar, Frank A; Banerjee, Anjishnu

    2015-01-01

    Derive lower leg injury risk functions using survival analysis and determine injury reference values (IRV) applicable to human mid-size male and small-size female anthropometries by conducting a meta-analysis of experimental data from different studies under axial impact loading to the foot-ankle-leg complex. Specimen-specific dynamic peak force, age, total body mass, and injury data were obtained from tests conducted by applying the external load to the dorsal surface of the foot of postmortem human subject (PMHS) foot-ankle-leg preparations. Calcaneus and/or tibia injuries, alone or in combination and with/without involvement of adjacent articular complexes, were included in the injury group. Injury and noninjury tests were included. Maximum axial loads recorded by a load cell attached to the proximal end of the preparation were used. Data were analyzed by treating force as the primary variable. Age was considered as the covariate. Data were censored based on the number of tests conducted on each specimen and whether it remained intact or sustained injury; that is, right, left, and interval censoring. The best fits from different distributions were based on the Akaike information criterion; mean and plus and minus 95% confidence intervals were obtained; and normalized confidence interval sizes (quality indices) were determined at 5, 10, 25, and 50% risk levels. The normalization was based on the mean curve. Using human-equivalent age as 45 years, data were normalized and risk curves were developed for the 50th and 5th percentile human size of the dummies. Out of the available 114 tests (76 fracture and 38 no injury) from 5 groups of experiments, survival analysis was carried out using 3 groups consisting of 62 tests (35 fracture and 27 no injury). 
Peak forces associated with 4 specific risk levels at 25, 45, and 65 years of age are given along with probability curves (mean and plus and minus 95% confidence intervals) for PMHS and normalized data applicable to male and female dummies. Quality indices increased (less tightness-of-fit) with decreasing age and risk level for all age groups and these data are given for all chosen risk levels. These PMHS-based probability distributions at different ages using information from different groups of researchers constituting the largest body of data can be used as human tolerances to lower leg injury from axial loading. Decreasing quality indices (increasing index value) at lower probabilities suggest the need for additional tests. The anthropometry-specific mid-size male and small-size female mean human risk curves along with plus and minus 95% confidence intervals from survival analysis and associated IRV data can be used as a first step in studies aimed at advancing occupant safety in automotive and other environments.

  1. Theoretical Analysis of Rain Attenuation Probability

    NASA Astrophysics Data System (ADS)

    Roy, Surendra Kr.; Jha, Santosh Kr.; Jha, Lallan

    2007-07-01

Satellite communication technologies are now highly developed, and high-quality, distance-independent services have expanded over a very wide area. The system design of the Hokkaido integrated telecommunications (HIT) network must first overcome outages of satellite links due to rain attenuation in Ka frequency bands. In this paper a theoretical analysis of rain attenuation probability on a slant path has been made. The proposed formula is based on the Weibull distribution and incorporates recent ITU-R recommendations concerning the necessary rain rate and rain height inputs. The error behaviour of the model was tested against the rain attenuation prediction model recommended by ITU-R for a large number of experiments at different probability levels. The novel slant-path rain attenuation prediction model exhibits behaviour similar to the ITU-R model at low time percentages and a better root-mean-square error performance for probability levels above 0.02%. The presented models have the advantage of low implementation complexity and are considered useful for educational and back-of-the-envelope computations.
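A Weibull exceedance model of the kind described here inverts cleanly, giving the attenuation exceeded for a given percentage of time. The scale and shape values below are illustrative, not fitted ITU-R inputs:

```python
import math

def attenuation_exceeded_db(p_percent, scale_db=2.0, shape=0.8):
    """Rain attenuation (dB) exceeded for p_percent of the time under a
    Weibull exceedance model, P(A > a) = exp(-(a/scale)^shape),
    inverted for a. scale_db and shape are assumed placeholders."""
    p = p_percent / 100.0
    return scale_db * (-math.log(p)) ** (1.0 / shape)

# Deeper fades are exceeded only at smaller time percentages
print(attenuation_exceeded_db(0.01) > attenuation_exceeded_db(1.0))  # True
```

Evaluating this at standard availability points (1%, 0.1%, 0.01% of an average year) is the back-of-the-envelope use the abstract has in mind.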

  2. Optimization of structures undergoing harmonic or stochastic excitation. Ph.D. Thesis; [atmospheric turbulence and white noise

    NASA Technical Reports Server (NTRS)

    Johnson, E. H.

    1975-01-01

The optimal design of simple structures subjected to dynamic loads, with constraints on the structures' responses, was investigated. Optimal designs were examined for one-dimensional structures excited by harmonically oscillating loads, similar structures excited by white noise, and a wing in the presence of continuous atmospheric turbulence. The first problem places constraints on the maximum allowable stress, while the last two place bounds on the probability of failure of the structure. Approximations were made to replace the time parameter with a frequency parameter. For the first problem, this involved the steady-state response; in the remaining cases, power spectral techniques were employed to find the root-mean-square values of the responses. Optimal solutions were found using computer algorithms that combined finite element methods with optimization techniques based on mathematical programming. It was found that the inertial loads for these dynamic problems result in optimal structures that are radically different from those obtained for structures loaded statically by forces of comparable magnitude.
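The power-spectral step mentioned above, trading the time parameter for a frequency parameter, amounts to integrating the response PSD over frequency to get a variance, whose square root is the RMS response. A minimal sketch with an assumed flat PSD:

```python
import math

def rms_from_psd(freqs_hz, psd_vals):
    """Root-mean-square response from a one-sided PSD by trapezoidal
    integration over frequency: variance = integral of PSD, RMS is its
    square root. Units of the PSD values are illustrative."""
    variance = 0.0
    for i in range(1, len(freqs_hz)):
        df = freqs_hz[i] - freqs_hz[i - 1]
        variance += 0.5 * (psd_vals[i] + psd_vals[i - 1]) * df
    return math.sqrt(variance)

# Flat PSD of 2.0 over 0-10 Hz: variance 20, RMS sqrt(20)
print(round(rms_from_psd([0, 5, 10], [2.0, 2.0, 2.0]), 3))  # 4.472
```

An RMS value obtained this way is what gets bounded when the design constraint is a probability of failure under stationary random excitation.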

  3. [The clinical economic analysis of the methods of ischemic heart disease diagnostics].

    PubMed

    Kalashnikov, V Iu; Mitriagina, S N; Syrkin, A L; Poltavskaia, M G; Sorokina, E G

    2007-01-01

    The clinical economic analysis was applied to assess different techniques of ischemic heart disease diagnostics: electrocardiographic monitoring, treadmill testing, stress echocardiography with dobutamine, single-photon computerized axial tomography with load, and multi-spiral computerized axial tomography with coronary artery staining, in patients with different initial probabilities of the disease. In all groups, the treadmill test had the best cost-effectiveness ratio. Patients at low risk required 17.4 rubles to refine the probability of ischemic heart disease by 1%; in the medium- and high-risk groups this indicator was 9.4 and 24.7 rubles, respectively. It is concluded that, to refine the probability of ischemic heart disease after the treadmill test, it is appropriate to use single-photon computerized axial tomography with load in patients with high probability, and multi-spiral computerized axial tomography with coronary artery staining in patients with low probability.

  4. Simulation of flight maneuver-load distributions by utilizing stationary, non-Gaussian random load histories

    NASA Technical Reports Server (NTRS)

    Leybold, H. A.

    1971-01-01

    Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
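
    The transformation described can be sketched with inverse-CDF sampling; the Weibull case is shown below with illustrative parameters (the study's actual generation procedure and peak-counting rules are not reproduced here):

```python
import math
import random

def weibull_load_history(n, scale, shape, seed=0):
    """Inverse-CDF transform of uniform random numbers into a
    Weibull-distributed discrete load history:
    x = scale * (-ln(1 - u)) ** (1 / shape)."""
    rng = random.Random(seed)
    return [scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
            for _ in range(n)]

def peaks(history):
    """Local maxima of the load history, the raw material for peak statistics."""
    return [history[i] for i in range(1, len(history) - 1)
            if history[i - 1] < history[i] >= history[i + 1]]

loads = weibull_load_history(10_000, scale=1.0, shape=2.0)
pk = peaks(loads)
```

    The same inverse-transform pattern covers the lognormal and exponential cases; Poisson and binomial histories need a discrete inverse CDF instead.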

  5. Cascading failures with local load redistribution in interdependent Watts-Strogatz networks

    NASA Astrophysics Data System (ADS)

    Hong, Chen; Zhang, Jun; Du, Wen-Bo; Sallan, Jose Maria; Lordan, Oriol

    2016-05-01

    Cascading failures of loads in isolated networks have been studied extensively over the last decade. Since 2010, such research has extended to interdependent networks. In this paper, we study cascading failures with local load redistribution in interdependent Watts-Strogatz (WS) networks. The effects of rewiring probability and coupling strength on the resilience of interdependent WS networks have been extensively investigated. It has been found that, for small values of the tolerance parameter, interdependent networks are more vulnerable as rewiring probability increases. For larger values of the tolerance parameter, the robustness of interdependent networks first decreases and then increases as rewiring probability increases. Coupling strength has a different impact on robustness. For low values of coupling strength, the resilience of interdependent networks decreases as coupling strength increases, up to a certain threshold value. For values of coupling strength above this threshold, the opposite effect is observed. Our results are helpful to understand and design resilient interdependent networks.
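
    A minimal sketch of the model class studied, assuming degree-based initial loads and a tolerance-parameter capacity rule; the paper's exact redistribution rule may differ, and the interdependence between two coupled networks is omitted here for brevity:

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice with k neighbours per side, each edge rewired with
    probability p (a minimal Watts-Strogatz construction)."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(1, k + 1):
            a, b = i, (i + j) % n
            if rng.random() < p:                  # rewire the far endpoint
                b = rng.randrange(n)
                while b == a or (min(a, b), max(a, b)) in edges:
                    b = rng.randrange(n)
            edges.add((min(a, b), max(a, b)))
    adj = {i: set() for i in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def cascade(adj, alpha, start):
    """Fail `start`, shed each failed node's load equally onto its surviving
    neighbours (local redistribution), and iterate until no capacity is
    exceeded. Initial load = degree; capacity = (1 + alpha) * initial load."""
    load = {i: float(len(adj[i])) for i in adj}
    cap = {i: (1.0 + alpha) * load[i] for i in adj}
    failed, frontier = set(), [start]
    while frontier:
        for f in frontier:
            failed.add(f)
        for f in frontier:
            alive = [v for v in adj[f] if v not in failed]
            if alive:
                share = load[f] / len(alive)
                for v in alive:
                    load[v] += share
        touched = {v for f in frontier for v in adj[f]} - failed
        frontier = [v for v in touched if load[v] > cap[v]]
    return len(failed)

net = watts_strogatz(200, 2, 0.1)
localized = cascade(net, 10.0, 0)   # generous tolerance: failure stays local
```

    Sweeping the rewiring probability p and the tolerance alpha over many runs reproduces the kind of robustness curves the paper analyses.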

  6. Enclosure fire hazard analysis using relative energy release criteria. [burning rate and combustion control

    NASA Technical Reports Server (NTRS)

    Coulbert, C. D.

    1978-01-01

    A method for predicting the probable course of fire development in an enclosure is presented. This fire modeling approach uses a graphic plot of five fire development constraints, the relative energy release criteria (RERC), to bound the heat release rates in an enclosure as a function of time. The five RERC are flame spread rate, fuel surface area, ventilation, enclosure volume, and total fuel load. They may be calculated versus time based on the specified or empirical conditions describing the specific enclosure, the fuel type and load, and the ventilation. The calculation of these five criteria, using the common basis of energy release rates versus time, provides a unifying framework for the utilization of available experimental data from all phases of fire development. The plot of these criteria reveals the probable fire development envelope and indicates which fire constraint will be controlling during a criteria time period. Examples of RERC application to fire characterization and control and to hazard analysis are presented along with recommendations for the further development of the concept.

  7. An approximate methods approach to probabilistic structural analysis

    NASA Technical Reports Server (NTRS)

    Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.

    1989-01-01

    A major research and technology program in Probabilistic Structural Analysis Methods (PSAM) is currently being sponsored by the NASA Lewis Research Center with Southwest Research Institute as the prime contractor. This program is motivated by the need to accurately predict structural response in an environment where the loadings, the material properties, and even the structure may be considered random. The heart of PSAM is a software package which combines advanced structural analysis codes with a fast probability integration (FPI) algorithm for the efficient calculation of stochastic structural response. The basic idea of PSAM is simple: make an approximate calculation of system response, including calculation of the associated probabilities, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The resulting deterministic solution should give a reasonable and realistic description of performance-limiting system responses, although some error will be inevitable. If the simple model has correctly captured the basic mechanics of the system, however, including the proper functional dependence of stress, frequency, etc., on design parameters, then the response sensitivities calculated may be of significantly higher accuracy.

  8. Wind models for the NSTS ascent trajectory biasing for wind load alleviation

    NASA Technical Reports Server (NTRS)

    Smith, O. E.; Adelfang, S. I.; Batts, G. W.; Hill, C. K.

    1989-01-01

    New concepts are presented for aerospace vehicle ascent wind profile biasing. The purpose for wind biasing the ascent trajectory is to provide ascent wind loads relief and thus decrease the probability for launch delays due to wind loads exceeding critical limits. Wind biasing trajectories to the profile of monthly mean winds have been widely used for this purpose. The wind profile models presented give additional alternatives for wind biased trajectories. They are derived from the properties of the bivariate normal probability function using the available wind statistical parameters for the launch site. The analytical expressions are presented to permit generalizations. Specific examples are given to illustrate the procedures. The wind profile models can be used to establish the ascent trajectory steering commands to guide the vehicle through the first stage. For the National Space Transportation System (NSTS) program these steering commands are called I-loads.

  9. Assuring Life in Composite Systems

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2008-01-01

    A computational simulation method is presented to assure life in composite systems, using the dynamic buckling of smart composite shells as an example. The combined use of composite mechanics, finite element computer codes, and probabilistic analysis enables effective assessment of the dynamic buckling load of smart composite shells. A universal plot is generated to estimate the dynamic buckling load of composite shells at various load rates and probabilities. The shell structure is also evaluated with smart fibers embedded in the plies just below the outer plies. The results show that, on average, the use of smart fibers improved the shell buckling resistance by about 9% at different probabilities and delayed the buckling occurrence time. The probabilistic sensitivity results indicate that uncertainties in the fiber volume ratio and ply thickness have major effects on the buckling load. Uncertainties in the electric field strength and smart material volume fraction have moderate effects on the buckling load, and thereby on the assured life of the shell.

  10. Re‐estimated effects of deep episodic slip on the occurrence and probability of great earthquakes in Cascadia

    USGS Publications Warehouse

    Beeler, Nicholas M.; Roeloffs, Evelyn A.; McCausland, Wendy

    2013-01-01

    Mazzotti and Adams (2004) estimated that rapid deep slip during typically two week long episodes beneath northern Washington and southern British Columbia increases the probability of a great Cascadia earthquake by 30–100 times relative to the probability during the ∼58 weeks between slip events. Because the corresponding absolute probability remains very low at ∼0.03% per week, their conclusion is that though it is more likely that a great earthquake will occur during a rapid slip event than during other times, a great earthquake is unlikely to occur during any particular rapid slip event. This previous estimate used a failure model in which great earthquakes initiate instantaneously at a stress threshold. We refine the estimate, assuming a delayed failure model that is based on laboratory‐observed earthquake initiation. Laboratory tests show that failure of intact rock in shear and the onset of rapid slip on pre‐existing faults do not occur at a threshold stress. Instead, slip onset is gradual and shows a damped response to stress and loading rate changes. The characteristic time of failure depends on loading rate and effective normal stress. Using this model, the probability enhancement during the period of rapid slip in Cascadia is negligible (<10%) for effective normal stresses of 10 MPa or more and only increases by 1.5 times for an effective normal stress of 1 MPa. We present arguments that the hypocentral effective normal stress exceeds 1 MPa. In addition, the probability enhancement due to rapid slip extends into the interevent period. With this delayed failure model for effective normal stresses greater than or equal to 50 kPa, it is more likely that a great earthquake will occur between the periods of rapid deep slip than during them. Our conclusion is that great earthquake occurrence is not significantly enhanced by episodic deep slip events.

  11. Bivariate extreme value distributions

    NASA Technical Reports Server (NTRS)

    Elshamy, M.

    1992-01-01

    In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extreme value distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.
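
    One standard concrete instance of the marginal-plus-dependence-function construction is the Gumbel logistic model; a sketch with Frechet marginals and illustrative parameters (not the paper's specific forms):

```python
import math

def frechet_cdf(x, scale, shape):
    """Frechet marginal: support x > 0, i.e. a lower bound of zero."""
    return math.exp(-((scale / x) ** shape)) if x > 0 else 0.0

def bivariate_logistic_cdf(x, y, fx, fy, alpha):
    """Gumbel logistic model:
    F(x, y) = exp(-[(-ln Fx)^(1/a) + (-ln Fy)^(1/a)]^a),
    with dependence parameter 0 < alpha <= 1 (alpha = 1 gives independence)."""
    u, v = -math.log(fx(x)), -math.log(fy(y))
    return math.exp(-((u ** (1.0 / alpha) + v ** (1.0 / alpha)) ** alpha))

fx = lambda x: frechet_cdf(x, 1.0, 2.0)
joint_ind = bivariate_logistic_cdf(2.0, 3.0, fx, fx, 1.0)   # independent case
joint_dep = bivariate_logistic_cdf(2.0, 3.0, fx, fx, 0.5)   # positive dependence
```

    As alpha decreases from 1 the joint probability at any point rises above the product of the marginals, which is the positive-dependence behaviour relevant to paired ascent load extremes.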

  12. Advances in the Assessment of Wind Turbine Operating Extreme Loads via More Efficient Calculation Approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graf, Peter; Damiani, Rick R.; Dykes, Katherine

    2017-01-09

    A new adaptive stratified importance sampling (ASIS) method is proposed as an alternative approach for the calculation of the 50-year extreme load under operational conditions, as in design load case 1.1 of the International Electrotechnical Commission design standard. ASIS combines elements of the binning-and-extrapolation technique currently described by the standard and of the importance sampling (IS) method to estimate load probabilities of exceedance (POEs). Whereas a Monte Carlo (MC) approach would reach the sought level of POE only with a daunting number of simulations, IS-based techniques are promising because they target the sampling of the input parameters on the parts of the distributions that are most responsible for the extreme loads, thus reducing the number of runs required. We compared the various methods on select load channels output from FAST, an aero-hydro-servo-elastic tool for the design and analysis of wind turbines developed by the National Renewable Energy Laboratory (NREL). Our newly devised method, although still in its infancy in terms of tuning of the subparameters, is comparable to the others in terms of load estimation and its variance versus computational cost, and offers great promise going forward due to the incorporation of adaptivity into the already powerful importance sampling concept.
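
    The importance-sampling idea can be illustrated on a toy tail probability, shifting the sampling density toward the rare region and reweighting by the likelihood ratio. Here the FAST load channel is replaced by a standard normal, for which the exact tail probability P(X > 4) = Phi(-4), about 3.17e-5:

```python
import math
import random

def tail_prob_is(threshold, n, shift, seed=0):
    """Estimate P(X > threshold) for X ~ N(0,1) by sampling from N(shift, 1)
    and reweighting each exceedance by the ratio phi(x) / phi(x - shift)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > threshold:
            total += math.exp(-0.5 * x * x + 0.5 * (x - shift) ** 2)
    return total / n

p_hat = tail_prob_is(4.0, 50_000, shift=4.0)
```

    Plain Monte Carlo would need on the order of millions of samples to see even a handful of exceedances at this level; the shifted proposal makes roughly half the samples land in the tail, which is the variance reduction the abstract alludes to.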

  13. Optimization of laminated stacking sequence for buckling load maximization by genetic algorithm

    NASA Technical Reports Server (NTRS)

    Le Riche, Rodolphe; Haftka, Raphael T.

    1992-01-01

    The use of a genetic algorithm to optimize the stacking sequence of a composite laminate for buckling load maximization is studied. Various genetic parameters including the population size, the probability of mutation, and the probability of crossover are optimized by numerical experiments. A new genetic operator - permutation - is proposed and shown to be effective in reducing the cost of the genetic search. Results are obtained for a graphite-epoxy plate, first when only the buckling load is considered, and then when constraints on ply contiguity and strain failure are added. The influence on the genetic search of the penalty parameter enforcing the contiguity constraint is studied. The advantage of the genetic algorithm in producing several near-optimal designs is discussed.
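
    A sketch of such a genetic search, with a made-up fitness function standing in for the buckling-load analysis; the paper's laminate encoding, constraints, and operators are richer than this toy:

```python
import random

ANGLES = [0, 45, 90]          # candidate ply orientations (degrees)
N_PLIES = 16

def fitness(stack):
    """Toy stand-in for a buckling-load analysis: plies far from the
    mid-plane contribute with weight z**2, and 45-degree plies are
    (hypothetically) the most effective. A real GA would call a laminate
    buckling code here."""
    half = len(stack) / 2.0
    gain = {0: 1.0, 45: 1.5, 90: 0.8}
    return sum(gain[a] * ((i - half + 0.5) ** 2) for i, a in enumerate(stack))

def crossover(p1, p2, pc, rng):
    if rng.random() < pc:
        cut = rng.randrange(1, len(p1))
        return p1[:cut] + p2[cut:]
    return list(p1)

def mutate(stack, pm, rng):
    return [rng.choice(ANGLES) if rng.random() < pm else a for a in stack]

def permute(stack, pp, rng):
    """Permutation operator: swap two ply positions (preserves ply counts)."""
    if rng.random() < pp:
        i, j = rng.randrange(len(stack)), rng.randrange(len(stack))
        stack[i], stack[j] = stack[j], stack[i]
    return stack

def run_ga(pop_size=30, generations=60, pc=0.9, pm=0.05, pp=0.5, seed=1):
    rng = random.Random(seed)
    pop = [[rng.choice(ANGLES) for _ in range(N_PLIES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            children.append(permute(mutate(crossover(p1, p2, pc, rng), pm, rng), pp, rng))
        pop = elite + children
    return max(pop, key=fitness)

best = run_ga()
```

    The mutation probability pm, crossover probability pc, and permutation probability pp are exactly the kind of genetic parameters the paper tunes by numerical experiment.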

  14. Limits on the prediction of helicopter rotor noise using thickness and loading sources: Validation of helicopter noise prediction techniques

    NASA Technical Reports Server (NTRS)

    Succi, G. P.

    1983-01-01

    The techniques of helicopter rotor noise prediction attempt to describe precisely the details of the noise field and remove the empiricisms and restrictions inherent in previous methods. These techniques require detailed inputs of the rotor geometry, operating conditions, and blade surface pressure distribution. The Farassat noise prediction technique was studied, and high-speed helicopter noise prediction using more detailed representations of the thickness and loading noise sources was investigated. These predictions were based on the measured blade surface pressures on an AH-1G rotor and compared to the measured sound field. Although refinements in the representation of the thickness and loading noise sources improve the calculation, there are still discrepancies between the measured and predicted sound field. Analysis of the blade surface pressure data indicates shocks on the blades, which are probably responsible for these discrepancies.

  15. A Simple and Reliable Method of Design for Standalone Photovoltaic Systems

    NASA Astrophysics Data System (ADS)

    Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.

    2017-06-01

    Standalone photovoltaic (SAPV) systems are seen as a promising method of electrifying areas of the developing world that lack power grid infrastructure. Proliferation of these systems requires a design procedure that is simple, reliable, and exhibits good performance over the system's lifetime. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, array size with energy storage. After arriving at different array sizes (areas), performance curves are obtained for optimal design of the SAPV system with a high degree of reliability, in terms of autonomy, at a specified value of loss of load probability (LOLP). Based on the array-to-load ratio (ALR) and levelized energy cost (LEC) through life cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data, and is more reliable than a conventional design using monthly average daily load and insolation.
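
    The LOLP-based sizing loop can be sketched as follows, with synthetic insolation data and hypothetical efficiency, load, and storage figures rather than the paper's empirical formulae:

```python
import random

def simulate_lolp(area_m2, insolation, load_kwh, batt_kwh, eff=0.12):
    """Fraction of days the array plus battery fails to cover the daily load
    (loss of load probability). `insolation` is daily kWh/m^2."""
    soc, failures = batt_kwh, 0
    for h in insolation:
        energy = area_m2 * h * eff + soc
        if energy < load_kwh:
            failures += 1
            soc = 0.0                       # battery fully drained on a failed day
        else:
            soc = min(batt_kwh, energy - load_kwh)
    return failures / len(insolation)

def smallest_area(target_lolp, insolation, load_kwh, batt_kwh):
    """Smallest array area (1 m^2 steps) whose simulated LOLP meets the target."""
    area = 1.0
    while simulate_lolp(area, insolation, load_kwh, batt_kwh) > target_lolp:
        area += 1.0
    return area

rng = random.Random(0)
days = [max(0.5, rng.gauss(5.0, 1.5)) for _ in range(365)]   # synthetic insolation
area = smallest_area(0.01, days, load_kwh=3.0, batt_kwh=6.0)
```

    Repeating the sweep over battery sizes yields the array-versus-storage trade-off curves from which the cost-optimal design is picked.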

  16. Fracture Tests of Etched Components Using a Focused Ion Beam Machine

    NASA Technical Reports Server (NTRS)

    Kuhn, Jonathan L.; Fettig, Rainer K.; Moseley, S. Harvey; Kutyrev, Alexander S.; Orloff, Jon; Powers, Edward I. (Technical Monitor)

    2000-01-01

    Many optical MEMS device designs involve large arrays of thin (0.5 to 1 micron) components subjected to high stresses due to cyclic loading. These devices are fabricated from a variety of materials, and their properties strongly depend on size and processing. Our objective is to develop standard and convenient test methods that can be used to measure the properties of large numbers of witness samples for every device we build. In this work we explore a variety of fracture test configurations for 0.5 micron thick silicon nitride membranes machined using the reactive ion etching (RIE) process. Testing was completed using an FEI 620 dual focused ion beam milling machine. Static loads were applied using a probe, and dynamic loads were applied through a piezoelectric stack mounted at the base of the probe. Results from the tests are presented and compared, and applications for predicting the fracture probability of large arrays of devices are considered.

  17. Hydraulic transients in the long diversion-type hydropower station with a complex differential surge tank.

    PubMed

    Yu, Xiaodong; Zhang, Jian; Zhou, Ling

    2014-01-01

    Based on the theory of hydraulic transients and the method of characteristics (MOC), a mathematical model of the differential surge tank with pressure-reduction orifices (PROs) and overflow weirs for transient calculation is proposed. The numerical model of hydraulic transients is established using the data of a practical hydropower station, and the probable transients are simulated. The results show that successive load rejection is critical for calculating the maximum pressure in the spiral case and the maximum rotating speed of the runner when the bifurcated pipe converges under the surge tank in a diversion-type hydropower station; the pressure difference between the two sides of the breast wall is large during transient conditions, and it would be more serious when simultaneous load rejections happen after load acceptance; a reasonable arrangement of PROs on the breast wall can effectively decrease the pressure difference.

  18. Hydraulic Transients in the Long Diversion-Type Hydropower Station with a Complex Differential Surge Tank

    PubMed Central

    Yu, Xiaodong; Zhang, Jian

    2014-01-01

    Based on the theory of hydraulic transients and the method of characteristics (MOC), a mathematical model of the differential surge tank with pressure-reduction orifices (PROs) and overflow weirs for transient calculation is proposed. The numerical model of hydraulic transients is established using the data of a practical hydropower station, and the probable transients are simulated. The results show that successive load rejection is critical for calculating the maximum pressure in the spiral case and the maximum rotating speed of the runner when the bifurcated pipe converges under the surge tank in a diversion-type hydropower station; the pressure difference between the two sides of the breast wall is large during transient conditions, and it would be more serious when simultaneous load rejections happen after load acceptance; a reasonable arrangement of PROs on the breast wall can effectively decrease the pressure difference. PMID:25133213

  19. Bridge reliability assessment based on the PDF of long-term monitored extreme strains

    NASA Astrophysics Data System (ADS)

    Jiao, Meiju; Sun, Limin

    2011-04-01

    Structural health monitoring (SHM) systems can provide valuable information for the evaluation of bridge performance. With the development and implementation of SHM technology in recent years, the mining and use of monitoring data have received increasing attention and interest in civil engineering. Based on the principles of probability and statistics, a reliability approach provides a rational basis for analysis of the randomness in loads and their effects on structures. This paper presents a novel approach that combines SHM systems with reliability methods to evaluate the reliability of a cable-stayed bridge instrumented with an SHM system. In this study, the reliability of the steel girder of the cable-stayed bridge was expressed directly as a failure probability rather than as a reliability index, as is common. Under the assumption that the probability distribution of the resistance is independent of the responses of the structure, a formulation of the failure probability was deduced. Then, as a main factor in the formulation, the probability density function (PDF) of the strain at sensor locations was evaluated from the monitoring data and verified. The Donghai Bridge was taken as an example to demonstrate the proposed approach. In the case study, four years of monitoring data collected since the SHM system entered operation were processed, and the reliability assessment results were discussed. Finally, the sensitivity and accuracy of the novel approach compared with FORM were discussed.
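
    The failure-probability formulation described, with resistance independent of response, reduces to integrating the monitored-strain PDF against the resistance CDF. A sketch assuming normal distributions for both (the paper instead fits the strain PDF from monitoring data, and the moments below are purely illustrative):

```python
import math

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def norm_cdf(x, mu, sd):
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def failure_probability(mu_s, sd_s, mu_r, sd_r, n=20_000):
    """P_f = integral of f_S(s) * P(R <= s) ds by the midpoint rule, where
    f_S is the strain (load effect) PDF and R the independent resistance."""
    lo, hi = mu_s - 8.0 * sd_s, mu_s + 8.0 * sd_s
    ds = (hi - lo) / n
    return sum(norm_pdf(lo + (i + 0.5) * ds, mu_s, sd_s)
               * norm_cdf(lo + (i + 0.5) * ds, mu_r, sd_r)
               for i in range(n)) * ds

# Illustrative strain/resistance moments (microstrain); for independent
# normals the exact answer is Phi((mu_s - mu_r) / sqrt(sd_s^2 + sd_r^2)).
pf = failure_probability(mu_s=400.0, sd_s=60.0, mu_r=800.0, sd_r=80.0)
```

    The closed-form check (here Phi(-4), about 3.17e-5) is what makes this toy useful: the same numerical integral works unchanged once the strain PDF is replaced by an empirical fit.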

  20. Safety envelope for load tolerance of structural element design based on multi-stage testing

    DOE PAGES

    Park, Chanyoung; Kim, Nam H.

    2016-09-06

    Structural elements, such as stiffened panels and lap joints, are basic components of aircraft structures. For aircraft structural design, designers select predesigned elements satisfying the design load requirement based on their load-carrying capabilities. Therefore, estimating a safety envelope for the load tolerance of structural elements would be a good investment for design purposes. In this article, a method of estimating the safety envelope is presented using probabilistic classification, which can estimate a specific level of failure probability under both aleatory and epistemic uncertainties. An important contribution of this article is that the calculation uncertainty is reflected in building a safety envelope using a Gaussian process, and the effect of element test data on reducing the calculation uncertainty is incorporated by updating the Gaussian process model with the element test data. It is shown that even one element test can significantly reduce the calculation uncertainty due to lacking knowledge of the actual physics, so that conservativeness in the safety envelope is significantly reduced. The proposed approach was demonstrated with a cantilever beam example, which represents a structural element. The example shows that calculation uncertainty provides about 93% conservativeness against the uncertainty due to a few element tests. As a result, it is shown that even a single element test can increase the load tolerance modeled with the safety envelope by 20%.

  1. Ignition threshold of aluminized HMX-based PBXs

    NASA Astrophysics Data System (ADS)

    Miller, Christopher; Zhou, Min

    2017-06-01

    We report the results of micromechanical simulations of the ignition of aluminized HMX-based PBX under loading due to impact by thin flyers. The conditions analyzed concern loading pulses on the order of 20 nanoseconds to 0.8 microseconds in duration and impact piston velocities on the order of 300-1000 m/s. The samples consist of a stochastically similar bimodal distribution of HMX grains, an Estane binder, and 50 μm aluminum particles. The computational model accounts for constituent elasto-viscoplasticity, viscoelasticity, bulk compressibility, fracture, interfacial debonding, internal contact, bulk and frictional heating, and heat conduction. The analysis focuses on the development of hotspots under different material settings and loading conditions. In particular, the ignition threshold in the form of the James relation and the corresponding ignition probability are calculated for PBXs containing 0%, 6%, 10%, and 18% aluminum by volume. It is found that the addition of aluminum increases the ignition threshold, making the materials less sensitive. The dissipation and heating mechanism changes responsible for this trend are delineated. Support by DOE NNSA SSGF is gratefully acknowledged.

  2. Load-Based Lower Neck Injury Criteria for Females from Rear Impact from Cadaver Experiments.

    PubMed

    Yoganandan, Narayan; Pintar, Frank A; Banerjee, Anjishnu

    2017-05-01

    The objectives of this study were to derive lower neck injury metrics/criteria and injury risk curves for the force, moment, and interaction criterion in rear impacts for females. Biomechanical data were obtained from previous intact and isolated post mortem human subjects and head-neck complexes subjected to posteroanterior accelerative loading. Censored data were used in the survival analysis model. The primary shear force, sagittal bending moment, and interaction (lower neck injury criterion, LNic) metrics were significant predictors of injury. The optimal distribution (Weibull, lognormal, or log-logistic) was selected using the Akaike information criterion, according to the latest ISO recommendations for deriving risk curves. The Kolmogorov-Smirnov test was used to quantify the robustness of the assumed parametric model. The intercepts for the interaction index were extracted from the primary risk curves. Normalized confidence interval sizes (NCIS) were reported at discrete probability levels, along with the risk curves and 95% confidence intervals. A mean force of 214 N, moment of 54 Nm, and LNic of 0.89 were associated with a five percent probability of injury. The NCIS for these metrics were 0.90, 0.95, and 0.85. These preliminary results can be used as a first step in the definition of lower neck injury criteria for women under posteroanterior accelerative loading in crashworthiness evaluations.
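
    A Weibull risk curve of the kind fitted here maps a load metric to an injury probability. The sketch below anchors a 5% risk at the reported 214 N shear force using an assumed shape parameter; the paper's fitted parameters are not reproduced:

```python
import math

def injury_risk(force, scale, shape):
    """Weibull injury risk curve: probability of injury at a given load metric."""
    return 1.0 - math.exp(-((force / scale) ** shape))

def scale_for_risk(force, risk, shape):
    """Scale parameter placing probability `risk` at load `force`."""
    return force / (-math.log(1.0 - risk)) ** (1.0 / shape)

# Assumed shape parameter; only the 5% anchor (214 N shear) comes from the text.
shape = 2.0
scale = scale_for_risk(214.0, 0.05, shape)
```

    With a fitted shape and scale, the same one-liner produces the full risk curve, and survival analysis supplies the confidence bands around it.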

  3. On-line prognosis of fatigue crack propagation based on Gaussian weight-mixture proposal particle filter.

    PubMed

    Chen, Jian; Yuan, Shenfang; Qiu, Lei; Wang, Hui; Yang, Weibo

    2018-01-01

    Accurate on-line prognosis of fatigue crack propagation is of great importance for prognostics and health management (PHM) technologies to ensure structural integrity. It is a challenging task because of uncertainties arising from sources such as intrinsic material properties, loading, and environmental factors. The particle filter algorithm has been proved to be a powerful tool for prognostic problems that are affected by uncertainties. However, most studies have adopted the basic particle filter algorithm, which uses the transition probability density function as the importance density and may suffer from a serious particle degeneracy problem. This paper proposes an on-line fatigue crack propagation prognosis method based on a novel Gaussian weight-mixture proposal particle filter and active guided-wave-based on-line crack monitoring. Based on the on-line crack measurement, the mixture of the measurement probability density function and the transition probability density function is proposed as the importance density. In addition, an on-line dynamic update procedure is proposed to adjust the parameter of the state equation. The proposed method is verified on fatigue tests of attachment lugs, an important class of joint components in aircraft structures. Copyright © 2017 Elsevier B.V. All rights reserved.
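
    The mixture importance density can be sketched in a toy particle filter for a Paris-type crack growth law. All parameters are illustrative and the resampling step is simplified; this is not the paper's implementation:

```python
import math
import random

rng = random.Random(3)

def growth(a, c=0.01, p=1.3):
    """One-block Paris-type crack increment (toy parameters)."""
    return a + c * a ** p

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def pf_step(particles, z, sd_proc=0.02, sd_meas=0.05, mix=0.5):
    """One update of a particle filter whose importance density is a mixture
    of the transition prior and a measurement-centred density."""
    new, weights = [], []
    for a in particles:
        pred = growth(a)
        if rng.random() < mix:                # sample near the measurement
            x = rng.gauss(z, sd_meas)
        else:                                 # sample from the transition prior
            x = rng.gauss(pred, sd_proc)
        q = mix * norm_pdf(x, z, sd_meas) + (1.0 - mix) * norm_pdf(x, pred, sd_proc)
        new.append(x)
        weights.append(norm_pdf(z, x, sd_meas) * norm_pdf(x, pred, sd_proc) / q)
    total = sum(weights)
    # multinomial resampling for brevity (systematic resampling is typical)
    return rng.choices(new, weights=[w / total for w in weights], k=len(new))

# Track a synthetic crack: the truth grows by the same law, measured noisily.
truth = 1.0
particles = [rng.gauss(1.0, 0.05) for _ in range(500)]
for _ in range(20):
    truth = growth(truth)
    z = truth + rng.gauss(0.0, 0.05)
    particles = pf_step(particles, z)
estimate = sum(particles) / len(particles)
```

    Blending the measurement into the proposal keeps particles near the observed crack length, which is the degeneracy mitigation the abstract describes; a prior-only proposal (mix = 0) reduces this to the basic particle filter.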

  4. Estimating the number of terrestrial organisms on the moon.

    NASA Technical Reports Server (NTRS)

    Dillon, R. T.; Gavin, W. R.; Roark, A. L.; Trauth, C. A., Jr.

    1973-01-01

    Methods used to obtain estimates for the biological loadings on moon-bound spacecraft prior to launch are reviewed, along with the mathematical models used to calculate the microorganism density on the lunar surface (such as it results from contamination deposited by manned and unmanned flights) and the probability of lunar soil sample contamination. Some of the results obtained by the use of a lunar inventory system based on these models are presented.

  5. Occupant dynamics in rollover crashes: influence of roof deformation and seat belt performance on probable spinal column injury.

    PubMed

    Bidez, Martha W; Cochran, John E; King, Dottie; Burke, Donald S

    2007-11-01

    Motor vehicle crashes are the leading cause of death in the United States for people ages 3-33, and rollover crashes have a higher fatality rate than any other crash mode. At the request and under the sponsorship of Ford Motor Company, Autoliv conducted a series of dynamic rollover tests on Ford Explorer sport utility vehicles (SUV) during 1998 and 1999. Data from those tests were made available to the public and were analyzed in this study to investigate the magnitude of and the temporal relationship between roof deformation, lap-shoulder seat belt loads, and restrained anthropometric test dummy (ATD) neck loads. During each of the three FMVSS 208 dolly rollover tests of Ford Explorer SUVs, the far-side, passenger ATDs exhibited peak neck compression and flexion loads, which indicated a probable spinal column injury in all three tests. In those same tests, the near-side, driver ATD neck loads never predicted a potential injury. In all three tests, objective roof/pillar deformation occurred prior to the occurrence of peak neck loads (Fz, My) for far-side, passenger ATDs, and peak neck loads were predictive of probable spinal column injury. The production lap and shoulder seat belts in the SUVs, which restrained both driver and passenger ATDs, consistently allowed ATD head contact with the roof while the roof was contacting the ground during this 1000 ms test series. Local peak neck forces and moments were noted each time the far-side, passenger ATD head contacted ("dived into") the roof while the roof was in contact with the ground; however, the magnitude of these local peaks was only 2-13% of peak neck loads in all three tests. "Diving-type" neck loads were not predictive of injury for either driver or passenger ATD in any of the three tests.

  6. Occupant Dynamics in Rollover Crashes: Influence of Roof Deformation and Seat Belt Performance on Probable Spinal Column Injury

    PubMed Central

    Cochran, John E.; King, Dottie; Burke, Donald S.

    2007-01-01

    Motor vehicle crashes are the leading cause of death in the United States for people ages 3–33, and rollover crashes have a higher fatality rate than any other crash mode. At the request and under the sponsorship of Ford Motor Company, Autoliv conducted a series of dynamic rollover tests on Ford Explorer sport utility vehicles (SUV) during 1998 and 1999. Data from those tests were made available to the public and were analyzed in this study to investigate the magnitude of and the temporal relationship between roof deformation, lap–shoulder seat belt loads, and restrained anthropometric test dummy (ATD) neck loads. During each of the three FMVSS 208 dolly rollover tests of Ford Explorer SUVs, the far-side, passenger ATDs exhibited peak neck compression and flexion loads, which indicated a probable spinal column injury in all three tests. In those same tests, the near-side, driver ATD neck loads never predicted a potential injury. In all three tests, objective roof/pillar deformation occurred prior to the occurrence of peak neck loads (Fz, My) for far-side, passenger ATDs, and peak neck loads were predictive of probable spinal column injury. The production lap and shoulder seat belts in the SUVs, which restrained both driver and passenger ATDs, consistently allowed ATD head contact with the roof while the roof was contacting the ground during this 1000 ms test series. Local peak neck forces and moments were noted each time the far-side, passenger ATD head contacted (“dived into”) the roof while the roof was in contact with the ground; however, the magnitude of these local peaks was only 2–13% of peak neck loads in all three tests. “Diving-type” neck loads were not predictive of injury for either driver or passenger ATD in any of the three tests. PMID:17641975

  7. An Analytic Equation Partitioning Climate Variation and Human Impacts on River Sediment Load

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Gao, G.; Fu, B.

    2017-12-01

Spatial or temporal patterns and process-based equations can co-exist in a hydrologic model. Yet existing approaches for quantifying the impacts of these variables on river sediment load (RSL) changes are severely limited, and new ways to evaluate their contributions are needed. Newtonian modeling is hardly achievable for this process because of the limits of both observations and mechanistic knowledge, whereas laws based on the Darwinian approach can provide one component of a hydrologic model. Because streamflow is the carrier of suspended sediment, sediment load changes are recorded in changes of streamflow and in the suspended sediment concentration (SSC) - water discharge relationship. Consequently, an analytic equation for river sediment load changes is proposed to explicitly quantify the relative contributions of climate variation and direct human impacts. First, the sediment rating curve, which is central to RSL change analysis, was decomposed into the probability distribution of streamflow and the corresponding SSC - water discharge relationships at equally spaced discharge classes. A segmentation algorithm based on fractal theory was then proposed to attribute RSL changes to these two components. The water balance framework was applied and the corresponding elasticity parameters were calculated. Finally, the contributions of changes in climate variables (i.e., precipitation and potential evapotranspiration) and of direct human impacts on river sediment load could be determined. Data simulation verified the efficiency of the segmentation algorithm. The analytic equation provides a practical Darwinian approach for partitioning climate and human impacts on RSL changes, as only series of precipitation, potential evapotranspiration, and SSC - water discharge data are required.
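The per-discharge-class decomposition described above can be sketched numerically: the load is the sum over discharge classes of p(Q) · SSC(Q) · Q. In this minimal illustration the rating-curve coefficients, flow probabilities, and discharges are invented values, with SSC taken as a simple power law of discharge:

```python
import numpy as np

def sediment_load(flow_probs, discharges, rating_a=0.05, rating_b=1.5):
    """River sediment load as the sum over discharge classes of
    p(Q) * SSC(Q) * Q, with a power-law rating curve SSC = a * Q**b.
    The rating coefficients are illustrative, not fitted values."""
    q = np.asarray(discharges, dtype=float)
    p = np.asarray(flow_probs, dtype=float)
    ssc = rating_a * q ** rating_b
    return float(np.sum(p * ssc * q))

# A shift in the flow distribution (climate) vs. a shift in the rating
# curve (human impacts) changes the load through different terms:
base = sediment_load([0.5, 0.3, 0.2], [10.0, 50.0, 100.0])
drier = sediment_load([0.7, 0.2, 0.1], [10.0, 50.0, 100.0])
```

Holding the rating curve fixed while changing the flow probabilities isolates the climate term; changing the rating coefficients at fixed flow probabilities isolates the human-impact term.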

  8. Crane-Load Contact Sensor

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert; Mata, Carlos; Cox, Robert

    2005-01-01

    An electronic instrument has been developed as a prototype of a portable crane-load contact sensor. Such a sensor could be helpful in an application in which the load rests on a base in a horizontal position determined by vertical alignment pins (see Figure 1). If the crane is not positioned to lift the load precisely vertically, then the load can be expected to swing once it has been lifted clear of the pins. If the load is especially heavy, large, and/or fragile, it could hurt workers and/or damage itself and nearby objects. By indicating whether the load remains in contact with the pins when it has been lifted a fraction of the length of the pins, the crane-load contact sensor helps the crane operator determine whether it is safe to lift the load clear of the pins: If there is contact, then the load is resting against the sides of the pins and, hence, it may not be safe to lift; if contact is occasionally broken, then the load is probably not resting against the pins, so it should be safe to lift. It is assumed that the load and base, or at least the pins and the surfaces of the alignment holes in the load, are electrically conductive, so the instrument can use electrical contact to indicate mechanical contact. However, DC resistance cannot be used as an indicator of contact for the following reasons: The load and the base are both electrically grounded through cables (the load is grounded through the lifting cable of the crane) to prevent discharge of static electricity. In other words, the DC resistance between the load and the pins is always low, as though they were always in direct contact. Therefore, instead of DC resistance, the instrument utilizes the AC electrical impedance between the pins and the load. 
The signal frequency used in the measurement is high enough (0.1 MHz) that the impedance contributed by the cables and the electrical ground network of the building in which the crane and the base are situated is significantly greater than the contact impedance between the pins and the load. The instrument includes a signal generator and voltage-measuring circuitry, and is connected to the load and the base as shown in Figure 2. The output of the signal generator (typically having an amplitude of the order of a volt) is applied to the load via a 50-Ω resistor, and the voltage between the load and the pins is measured. When the load and the pins are not in contact, the impedance between them is relatively high, causing the measured voltage to exceed a threshold value. When the load and the pins are in contact, the impedance between them falls to a much lower value, causing the voltage to fall below the threshold value. The voltage-measuring circuitry turns on a red light-emitting diode (LED) to indicate the lower-voltage/contact condition. Whenever the contact has been broken and the non-contact/higher-voltage condition has lasted for more than 2 ms, the voltage-measuring circuitry indicates this condition by blinking a green LED.
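The divider-and-threshold logic described above can be sketched as follows. The 50-Ω source resistor comes from the text; the generator amplitude, impedance values, and threshold are illustrative assumptions, not measured values:

```python
R_SOURCE = 50.0          # ohms, series resistor between generator and load
V_GEN = 1.0              # volts, generator amplitude (order of a volt)

def measured_voltage(z_contact: float) -> float:
    """Voltage across the pin-load gap in a simple voltage-divider model."""
    return V_GEN * z_contact / (R_SOURCE + z_contact)

def in_contact(z_contact: float, threshold: float = 0.5) -> bool:
    """Contact drives the impedance, and hence the measured voltage, low:
    below the threshold -> red LED (contact); above it -> no contact."""
    return measured_voltage(z_contact) < threshold

# High impedance (no contact) -> voltage near V_GEN -> above threshold.
# Low impedance (contact)     -> voltage collapses  -> below threshold.
```

The 2 ms debounce for the green LED would simply require the no-contact condition to persist across consecutive samples before blinking.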

  9. Space Station laboratory module power loading analysis

    NASA Astrophysics Data System (ADS)

    Fu, S. J.

    1994-07-01

    The electrical power system of Space Station Freedom is an isolated electrical power generation and distribution network designed to meet the demands of a large number of electrical loads. An algorithm is developed to determine the power bus loading status under normal operating conditions to ensure the supply meets demand. The probabilities of power availability for payload operations (experiments) are also derived.

  10. Statistical Evaluation and Improvement of Methods for Combining Random and Harmonic Loads

    NASA Technical Reports Server (NTRS)

    Brown, A. M.; McGhee, D. S.

    2003-01-01

Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This Technical Publication examines the cumulative distribution function (CDF) percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica to calculate the combined value corresponding to any desired percentile is then presented along with a curve fit to this value. Another Excel macro that calculates the combination using Monte Carlo simulation is shown. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Additionally, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can substantially lower the design loading without losing any of the identified structural reliability.
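A minimal Monte Carlo version of the combination, assuming a random-phase sine plus a zero-mean Gaussian random load (the amplitudes here are illustrative, and this is a generic sketch rather than the publication's Excel/Mathematica implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def combined_load_percentile(a_sine, sigma_random, percentile, n=200_000):
    """Monte Carlo combination of a harmonic load with uniformly random
    phase and a zero-mean Gaussian random load; returns the requested
    percentile of the combined-load CDF."""
    phase = rng.uniform(0.0, 2.0 * np.pi, n)
    load = a_sine * np.sin(phase) + rng.normal(0.0, sigma_random, n)
    return float(np.quantile(load, percentile / 100.0))

# Compare with the traditional "peak sine + 3-sigma random" combination,
# which attaches no consistent percentile to the result:
p9987 = combined_load_percentile(1.0, 1.0, 99.87)
traditional = 1.0 + 3.0 * 1.0
```

The Monte Carlo percentile sits below the peak-plus-3-sigma value, illustrating why a consistent-percentile combination can lower the design load without losing reliability.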

  11. Statistical Comparison and Improvement of Methods for Combining Random and Harmonic Loads

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; McGhee, David S.

    2004-01-01

Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This paper examines the cumulative distribution function (CDF) percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica is then used to calculate the combined value corresponding to any desired percentile along with a curve fit to this value. Another Excel macro is used to calculate the combination using a Monte Carlo simulation. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Also, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can lower the design loading substantially without losing any of the identified structural reliability.

  12. Introduction of an Emergency Response Plan for flood loading of Sultan Abu Bakar Dam in Malaysia

    NASA Astrophysics Data System (ADS)

    Said, N. F. Md; Sidek, L. M.; Basri, H.; Muda, R. S.; Razad, A. Z. Abdul

    2016-03-01

The Sultan Abu Bakar Dam Emergency Response Plan (ERP) is designed to assist employees in identifying, monitoring, responding to, and mitigating dam safety emergencies. This paper outlines the organization chart, the responsibilities of the emergency management team, and the triggering levels in the Sultan Abu Bakar Dam ERP. The ERP is a plan that assigns responsibilities for the proper operation of the Sultan Abu Bakar Dam in response to emergency incidents affecting the dam. Based on this study, four major roles are needed for the Sultan Abu Bakar Dam to protect against probable risks downstream: Incident Commander, Deputy Incident Commander, On-Scene Commander, and Civil Engineer. In conclusion, organization charts based on ERP studies can help decrease the probable risks in projects such as the Sultan Abu Bakar Dam, and they provide a way to identify suspected and actual dam safety emergencies.

  13. Transitional probability-based model for HPV clearance in HIV-1-positive adolescent females.

    PubMed

    Kravchenko, Julia; Akushevich, Igor; Sudenga, Staci L; Wilson, Craig M; Levitan, Emily B; Shrestha, Sadeep

    2012-01-01

HIV-1-positive patients clear human papillomavirus (HPV) infection less frequently than HIV-1-negative patients. Datasets for estimating HPV clearance probability often have irregular measurements of HPV status and risk factors. A new transitional probability-based model for estimating the probability of HPV clearance was developed to fully incorporate information on HIV-1-related clinical data, such as CD4 counts, HIV-1 viral load (VL), highly active antiretroviral therapy (HAART), and risk factors (measured quarterly), and HPV infection status (measured at 6-month intervals). Data from 266 HIV-1-positive and 134 at-risk HIV-1-negative adolescent females from the Reaching for Excellence in Adolescent Care and Health (REACH) cohort were used in this study. First, the associations were evaluated using the Cox proportional hazard model, and the variables that demonstrated significant effects on HPV clearance were included in transitional probability models. The new model established the efficacy of CD4 cell counts as the main clearance predictor for all type-specific HPV phylogenetic groups. The 3-month probability of HPV clearance in HIV-1-infected patients significantly increased with increasing CD4 counts for HPV16/16-like (p<0.001), HPV18/18-like (p<0.001), HPV56/56-like (p = 0.05), and low-risk HPV (p<0.001) phylogenetic groups, with the lowest probability found for HPV16/16-like infections (21.60±1.81% at a CD4 level of 200 cells/mm³, p<0.05; and 28.03±1.47% at a CD4 level of 500 cells/mm³). HIV-1 VL was a significant predictor for clearance of low-risk HPV infections (p<0.05). HAART (with protease inhibitor) was a significant predictor of the probability of HPV16 clearance (p<0.05). HPV16/16-like and HPV18/18-like groups showed heterogeneity (p<0.05) in terms of how CD4 counts, HIV VL, and HAART affected the probability of clearance of each HPV infection. 
This new model predicts the 3-month probability of HPV infection clearance based on CD4 cell counts and other HIV-1-related clinical measurements.
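The core of a transition-probability estimate of this kind can be sketched with invented data: within each covariate stratum (here a CD4 category), count the fraction of observation intervals in which the infection cleared. This is only the counting skeleton, not the paper's full model:

```python
# Toy sketch of a transition-probability estimate: the probability of moving
# from "infected" to "cleared" over one interval, stratified by a covariate.
# The observations below are invented for illustration only.

def clearance_probability(transitions):
    """transitions: iterable of (cd4_group, cleared: bool) pairs, one per
    observation interval. Returns {cd4_group: estimated clearance prob}."""
    counts = {}
    for group, cleared in transitions:
        n, k = counts.get(group, (0, 0))
        counts[group] = (n + 1, k + int(cleared))
    return {g: k / n for g, (n, k) in counts.items()}

obs = [("cd4<200", False), ("cd4<200", False), ("cd4<200", True),
       ("cd4>=500", True), ("cd4>=500", True), ("cd4>=500", False)]
probs = clearance_probability(obs)
```

In this toy data the higher-CD4 stratum clears more often, mirroring the direction of the reported effect.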

  14. Surface Fractal Analysis for Estimating the Fracture Energy Absorption of Nanoparticle Reinforced Composites

    PubMed Central

    Pramanik, Brahmananda; Tadepalli, Tezeswi; Mantena, P. Raju

    2012-01-01

    In this study, the fractal dimensions of failure surfaces of vinyl ester based nanocomposites are estimated using two classical methods, Vertical Section Method (VSM) and Slit Island Method (SIM), based on the processing of 3D digital microscopic images. Self-affine fractal geometry has been observed in the experimentally obtained failure surfaces of graphite platelet reinforced nanocomposites subjected to quasi-static uniaxial tensile and low velocity punch-shear loading. Fracture energy and fracture toughness are estimated analytically from the surface fractal dimensionality. Sensitivity studies show an exponential dependency of fracture energy and fracture toughness on the fractal dimensionality. Contribution of fracture energy to the total energy absorption of these nanoparticle reinforced composites is demonstrated. For the graphite platelet reinforced nanocomposites investigated, surface fractal analysis has depicted the probable ductile or brittle fracture propagation mechanism, depending upon the rate of loading. PMID:28817017

  15. An application of extreme value theory to the management of a hydroelectric dam.

    PubMed

    Minkah, Richard

    2016-01-01

    Assessing the probability of very low or high water levels is an important issue in the management of hydroelectric dams. In the case of the Akosombo dam, very low and high water levels result in load shedding of electrical power and flooding in communities downstream respectively. In this paper, we use extreme value theory to estimate the probability and return period of very low water levels that can result in load shedding or a complete shutdown of the dam's operations. In addition, we assess the probability and return period of high water levels near the height of the dam and beyond. This provides a framework for a possible extension of the dam to sustain the generation of electrical power and reduce the frequency of spillage that causes flooding in communities downstream. The results show that an extension of the dam can reduce the probability and prolong the return period of a flood. In addition, we found a negligible probability of a complete shutdown of the dam due to inadequate water level.
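The empirical version of the return-period calculation is simple: the return period of a level is the reciprocal of its annual exceedance probability. The water levels below are invented; a fitted extreme value model, as used in the paper, would refine the tail estimate:

```python
import numpy as np

def return_period(annual_maxima, level):
    """Empirical return period (years) of exceeding `level`, estimated as
    1 / P(annual maximum > level). A plotting-position formula or a fitted
    GEV model would refine this; this is the minimal empirical version."""
    annual_maxima = np.asarray(annual_maxima, dtype=float)
    p_exceed = np.mean(annual_maxima > level)
    return float('inf') if p_exceed == 0 else 1.0 / p_exceed

# Invented annual maximum water levels (metres) for illustration:
levels = [74.1, 75.3, 73.8, 76.0, 74.9, 75.7, 74.4, 75.1, 73.9, 75.9]
t_75_5 = return_period(levels, 75.5)
```

Levels never observed in the record (e.g., near the dam crest) return an infinite empirical period, which is exactly why a fitted tail model is needed for extrapolation.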

  16. Long-term effectiveness of initiating non-nucleoside reverse transcriptase inhibitor- versus ritonavir-boosted protease inhibitor-based antiretroviral therapy: implications for first-line therapy choice in resource-limited settings

    PubMed Central

    Lima, Viviane D; Hull, Mark; McVea, David; Chau, William; Harrigan, P Richard; Montaner, Julio SG

    2016-01-01

    Introduction In many resource-limited settings, combination antiretroviral therapy (cART) failure is diagnosed clinically or immunologically. As such, there is a high likelihood that patients may stay on a virologically failing regimen for a substantial period of time. Here, we compared the long-term impact of initiating non-nucleoside reverse transcriptase inhibitor (NNRTI)- versus boosted protease inhibitor (bPI)-based cART in British Columbia (BC), Canada. Methods We followed prospectively 3925 ART-naïve patients who started NNRTIs (N=1963, 50%) or bPIs (N=1962; 50%) from 1 January 2000 until 30 June 2013 in BC. At six months, we assessed whether patients virologically failed therapy (a plasma viral load (pVL) >50 copies/mL), and we stratified them based on the pVL at the time of failure ≤500 versus >500 copies/mL. We then followed these patients for another six months and calculated their probability of achieving subsequent viral suppression (pVL <50 copies/mL twice consecutively) and of developing drug resistance. These probabilities were adjusted for fixed and time-varying factors, including cART adherence. Results At six months, virologic failure rates were 9.5 and 14.3 cases per 100 person-months for NNRTI and bPI initiators, respectively. NNRTI initiators who failed with a pVL ≤500 copies/mL had a 16% higher probability of achieving subsequent suppression at 12 months than bPI initiators (0.81 (25th–75th percentile 0.75–0.83) vs. 0.72 (0.61–0.75)). However, if failing NNRTI initiators had a pVL >500 copies/mL, they had a 20% lower probability of suppressing at 12 months than pVL-matched bPI initiators (0.37 (0.29–0.45) vs. 0.46 (0.38–0.54)). In terms of evolving HIV drug resistance, those who failed on NNRTI performed worse than bPI in all scenarios, especially if they failed with a viral load >500 copies/mL. 
Conclusions Our results show that patients who virologically failed at six months on NNRTI and continued on the same regimen had a lower probability of subsequently achieving viral suppression and a higher chance of evolving HIV drug resistance. These results suggest that improving access to regular virologic monitoring is critically important, especially if NNRTI-based cART is to remain a preferred choice for first-line therapy in resource-limited settings. PMID:27499064

  17. Effect of bow-type initial imperfection on reliability of minimum-weight, stiffened structural panels

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Krishnamurthy, Thiagaraja; Sykes, Nancy P.; Elishakoff, Isaac

    1993-01-01

    Computations were performed to determine the effect of an overall bow-type imperfection on the reliability of structural panels under combined compression and shear loadings. A panel's reliability is the probability that it will perform the intended function - in this case, carry a given load without buckling or exceeding in-plane strain allowables. For a panel loaded in compression, a small initial bow can cause large bending stresses that reduce both the buckling load and the load at which strain allowables are exceeded; hence, the bow reduces the reliability of the panel. In this report, analytical studies on two stiffened panels quantified that effect. The bow is in the shape of a half-sine wave along the length of the panel. The size e of the bow at panel midlength is taken to be the single random variable. Several probability density distributions for e are examined to determine the sensitivity of the reliability to details of the bow statistics. In addition, the effects of quality control are explored with truncated distributions.
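A Monte Carlo sketch of the reliability calculation, with an invented linear capacity knockdown standing in for the report's buckling/strain analysis and a truncated half-normal bow distribution representing quality control:

```python
import numpy as np

rng = np.random.default_rng(4)

def panel_reliability(load, n=100_000, e_std=1.0, e_max=2.0):
    """Reliability of a panel whose capacity drops with the bow size e.
    e ~ |N(0, e_std)| truncated at e_max (a quality-control limit); the
    linear capacity knockdown and all numbers are illustrative."""
    e = np.abs(rng.normal(0.0, e_std, n))
    e = e[e <= e_max]                      # truncation via quality control
    capacity = 10.0 * (1.0 - 0.2 * e)      # capacity falls as the bow grows
    return float(np.mean(capacity > load))

r = panel_reliability(load=7.0)
```

Tightening `e_max` (stricter quality control) truncates the bow distribution harder and raises the reliability, which is the effect the report explores.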

  18. Deductibles in health insurance

    NASA Astrophysics Data System (ADS)

    Dimitriyadis, I.; Öney, Ü. N.

    2009-11-01

    This study is an extension to a simulation study that has been developed to determine ruin probabilities in health insurance. The study concentrates on inpatient and outpatient benefits for customers of varying age bands. Loss distributions are modelled through the Allianz tool pack for different classes of insureds. Premiums at different levels of deductibles are derived in the simulation and ruin probabilities are computed assuming a linear loading on the premium. The increase in the probability of ruin at high levels of the deductible clearly shows the insufficiency of proportional loading in deductible premiums. The PH-transform pricing rule developed by Wang is analyzed as an alternative pricing rule. A simple case, where an insured is assumed to be an exponential utility decision maker while the insurer's pricing rule is a PH-transform is also treated.
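A minimal ruin-probability simulation in the spirit of the study, with exponential annual losses standing in for the Allianz loss models and a flat proportional loading on the premium (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def ruin_probability(premium, initial_capital, n_years, n_paths=20_000):
    """Monte Carlo ruin probability: surplus earns a flat premium and pays
    exponential annual losses (mean 1); ruin occurs if surplus goes negative."""
    ruined = 0
    for _ in range(n_paths):
        surplus = initial_capital
        for _ in range(n_years):
            surplus += premium - rng.exponential(1.0)
            if surplus < 0:
                ruined += 1
                break
    return ruined / n_paths

# A 10% proportional loading on the expected loss (premium = 1.1):
p_ruin = ruin_probability(premium=1.1, initial_capital=2.0, n_years=10)
```

Repeating the run with a heavier-tailed loss (as deductible layers induce) shows why a purely proportional loading becomes insufficient at high deductibles.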

  19. Wind/tornado design criteria, development to achieve required probabilistic performance goals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ng, D.S.

    1991-06-01

This paper describes the strategy for developing new design criteria for a critical facility to withstand loading induced by the wind/tornado hazard. The proposed design requirements for resisting wind/tornado loads are based on probabilistic performance goals. The proposed design criteria were prepared by a Working Group consisting of six experts in wind/tornado engineering and meteorology. Utilizing their best technical knowledge and judgment in the wind/tornado field, they met and discussed the methodologies and reviewed available data. A review of the available wind/tornado hazard model for the site, structural response evaluation methods, and conservative acceptance criteria led to proposed design criteria that have a high probability of achieving the required performance goals.

  1. Examination of the gamma equilibrium point hypothesis when applied to single degree of freedom movements performed with different inertial loads.

    PubMed

    Bellomo, A; Inbar, G

    1997-01-01

One of the theories of human motor control is the gamma Equilibrium Point Hypothesis. It is an attractive theory since it offers a simple control scheme in which the planned trajectory shifts monotonically from an initial to a final equilibrium state. The feasibility of this model was tested by reconstructing the virtual trajectory and the stiffness profiles for movements performed with different inertial loads and examining them. Three types of movements were tested: passive movements, targeted movements, and repetitive movements. Each of the movements was performed with five different inertial loads. Plausible virtual trajectories and stiffness profiles were reconstructed based on the gamma Equilibrium Point Hypothesis for the three different types of movements performed with different inertial loads. However, the simple control strategy supported by the model, where the planned trajectory shifts monotonically from an initial to a final equilibrium state, could not be supported for targeted movements performed with added inertial load. To test the feasibility of the model further, we must examine the probability that the human motor control system would choose a trajectory more complicated than the actual trajectory to control.

  2. Calibration of micromechanical parameters for DEM simulations by using the particle filter

    NASA Astrophysics Data System (ADS)

    Cheng, Hongyang; Shuku, Takayuki; Thoeni, Klaus; Yamamoto, Haruyuki

    2017-06-01

The calibration of DEM models is typically accomplished by trial and error. However, the procedure lacks objectivity and involves several uncertainties. To address these issues, the particle filter is employed as a novel approach to calibrating DEM models of granular soils. The posterior probability distribution of the micro-parameters that give numerical results in good agreement with the experimental response of a Toyoura sand specimen is approximated by independent model trajectories, referred to as `particles', based on Monte Carlo sampling. The soil specimen is modeled by polydisperse packings with different numbers of spherical grains. Prepared in `stress-free' states, the packings are subjected to triaxial quasistatic loading. Given the experimental data, the posterior probability distribution is incrementally updated until convergence is reached. The resulting `particles' with higher weights are identified as the calibration results. The evolutions of the weighted averages and the posterior probability distribution of the micro-parameters are plotted to show the advantage of using a particle filter, i.e., multiple solutions are identified for each parameter with known probabilities of reproducing the experimental response.
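One importance-weighting step of such a particle filter can be sketched on a toy one-parameter model. The stiffness model, noise level, and data here are invented; a DEM calibration would replace `model` with a full simulation run per particle:

```python
import numpy as np

rng = np.random.default_rng(2)

def particle_filter_update(particles, weights, observation, noise_std, model):
    """One importance-weighting step: weight each parameter 'particle' by the
    Gaussian likelihood of the observation under its model prediction, then
    renormalize the weights."""
    predictions = np.array([model(p) for p in particles])
    likelihood = np.exp(-0.5 * ((observation - predictions) / noise_std) ** 2)
    weights = weights * likelihood
    return weights / weights.sum()

# Toy calibration: recover the stiffness k in y = k * x from one noisy point.
true_k, x, noise_std = 2.0, 3.0, 0.1
particles = rng.uniform(0.0, 5.0, 1000)          # prior samples of k
weights = np.full(1000, 1.0 / 1000)
obs = true_k * x + rng.normal(0.0, noise_std)
weights = particle_filter_update(particles, weights, obs, noise_std,
                                 model=lambda k: k * x)
k_est = float(np.sum(weights * particles))
```

The weighted particles approximate the posterior over k, so multiple high-weight particles would directly expose non-unique calibrations, as the abstract notes.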

  3. Splash evaluation of SRB designs

    NASA Technical Reports Server (NTRS)

    Counter, D. N.

    1974-01-01

    A technique is developed to optimize the shuttle solid rocket booster (SRB) design for water impact loads. The SRB is dropped by parachute and recovered at sea for reuse. Loads experienced at water impact are design critical. The probability of each water impact load is determined using a Monte Carlo technique and an aerodynamic analysis of the SRB parachute system. Meteorological effects are included and four configurations are evaluated.
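A Monte Carlo sketch of the load-probability idea: sample the impact conditions, propagate them through a load model, and estimate the probability that a design load is exceeded. The load model and all numbers below are a toy stand-in for the SRB/parachute aerodynamic analysis:

```python
import numpy as np

rng = np.random.default_rng(3)

def impact_load_exceedance(design_load, n=100_000):
    """Monte Carlo estimate of P(water-impact load > design value).
    Vertical velocity scatter and a wave-slope factor are illustrative
    stand-ins for the parachute/meteorological variability in the study."""
    v_vertical = rng.normal(25.0, 2.0, n)         # m/s at splashdown
    wave_factor = 1.0 + 0.1 * rng.standard_normal(n)
    load = 0.5 * v_vertical ** 2 * wave_factor    # dynamic-pressure proxy
    return float(np.mean(load > design_load))

p_exceed = impact_load_exceedance(design_load=400.0)
```

Re-running this for each candidate SRB configuration gives the load-exceedance curves on which the design optimization would be based.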

  4. On the extinction probability in models of within-host infection: the role of latency and immunity.

    PubMed

    Yan, Ada W C; Cao, Pengxing; McCaw, James M

    2016-10-01

    Not every exposure to virus establishes infection in the host; instead, the small amount of initial virus could become extinct due to stochastic events. Different diseases and routes of transmission have a different average number of exposures required to establish an infection. Furthermore, the host immune response and antiviral treatment affect not only the time course of the viral load provided infection occurs, but can prevent infection altogether by increasing the extinction probability. We show that the extinction probability when there is a time-dependent immune response depends on the chosen form of the model-specifically, on the presence or absence of a delay between infection of a cell and production of virus, and the distribution of latent and infectious periods of an infected cell. We hypothesise that experimentally measuring the extinction probability when the virus is introduced at different stages of the immune response, alongside the viral load which is usually measured, will improve parameter estimates and determine the most suitable mathematical form of the model.
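For the simplest such stochastic model, a branching process with Poisson offspring (mean R0), the extinction probability q is the smallest root of q = G(q), where G is the offspring generating function; fixed-point iteration finds it. This is a generic sketch, not the specific within-host model of the paper:

```python
import math

def extinction_probability(r0, n_iter=200):
    """Extinction probability q of a branching process with Poisson
    offspring (mean r0): smallest root of q = exp(r0 * (q - 1)), found by
    fixed-point iteration starting from q = 0."""
    q = 0.0
    for _ in range(n_iter):
        q = math.exp(r0 * (q - 1.0))
    return q

q_sub = extinction_probability(0.8)    # subcritical: extinction is certain
q_super = extinction_probability(2.0)  # supercritical: extinction prob < 1
```

Latency and immune effects enter by changing the effective offspring distribution, which is why the abstract finds the extinction probability model-form dependent.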

  5. Unified nano-mechanics based probabilistic theory of quasibrittle and brittle structures: I. Strength, static crack growth, lifetime and scaling

    NASA Astrophysics Data System (ADS)

    Le, Jia-Liang; Bažant, Zdeněk P.; Bazant, Martin Z.

    2011-07-01

Engineering structures must be designed for an extremely low failure probability such as 10⁻⁶, which is beyond the means of direct verification by histogram testing. This is not a problem for brittle or ductile materials because the type of probability distribution of structural strength is fixed and known, making it possible to predict the tail probabilities from the mean and variance. It is a problem, though, for quasibrittle materials for which the type of strength distribution transitions from Gaussian to Weibullian as the structure size increases. These are heterogeneous materials with brittle constituents, characterized by material inhomogeneities that are not negligible compared to the structure size. Examples include concrete, fiber composites, coarse-grained or toughened ceramics, rocks, sea ice, rigid foams and bone, as well as many materials used in nano- and microscale devices. This study presents a unified theory of strength and lifetime for such materials, based on activation energy controlled random jumps of the nano-crack front, and on the nano-macro multiscale transition of tail probabilities. Part I of this study deals with the case of monotonic and sustained (or creep) loading, and Part II with fatigue (or cyclic) loading. On the scale of the representative volume element of material, the probability distribution of strength has a Gaussian core onto which a remote Weibull tail is grafted at failure probability of the order of 10⁻³. With increasing structure size, the Weibull tail penetrates into the Gaussian core. The probability distribution of static (creep) lifetime is related to the strength distribution by the power law for the static crack growth rate, for which a physical justification is given. The present theory yields a simple relation between the exponent of this law and the Weibull moduli for strength and lifetime. 
The benefit is that the lifetime distribution can be predicted from short-time tests of the mean size effect on strength and tests of the power law for the crack growth rate. The theory is shown to match closely numerous test data on strength and static lifetime of ceramics and concrete, and explains why their histograms deviate systematically from the straight line in Weibull scale. Although the present unified theory is built on several previous advances, new contributions are here made to address: (i) a crack in a disordered nano-structure (such as that of hydrated Portland cement), (ii) tail probability of a fiber bundle (or parallel coupling) model with softening elements, (iii) convergence of this model to the Gaussian distribution, (iv) the stress-life curve under constant load, and (v) a detailed random walk analysis of crack front jumps in an atomic lattice. The nonlocal behavior is captured in the present theory through the finiteness of the number of links in the weakest-link model, which explains why the mean size effect coincides with that of the previously formulated nonlocal Weibull theory. Brittle structures correspond to the large-size limit of the present theory. An important practical conclusion is that the safety factors for strength and tolerable minimum lifetime for large quasibrittle structures (e.g., concrete structures and composite airframes or ship hulls, as well as various micro-devices) should be calculated as a function of structure size and geometry.
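The weakest-link reasoning behind the size effect can be illustrated directly: if each representative volume element "link" fails independently with tail probability P1, a structure of n links fails with probability 1 − (1 − P1)^n, so the same far tail matters enormously more in a large structure. A minimal numerical illustration (the P1 and n values are invented):

```python
def failure_probability(p1, n_links):
    """Weakest-link chain: a structure of n RVE 'links' fails if any single
    link fails, so P_f = 1 - (1 - P_1)**n. This is why a tail grafted at
    P_1 ~ 1e-3 controls the safety of large quasibrittle structures."""
    return 1.0 - (1.0 - p1) ** n_links

# The same RVE tail probability in a small vs. a large structure:
small = failure_probability(1e-6, 10)        # ~1e-5
large = failure_probability(1e-6, 100_000)   # ~1 - exp(-0.1), about 9.5%
```

This finite-n weakest-link form is also what the abstract invokes to explain why the mean size effect matches the nonlocal Weibull theory, with brittle behavior recovered as n grows large.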

  6. 18 CFR 12.35 - Specific inspection requirements.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...) Seismicity; (ix) Internal stress and hydrostatic pressures in project structures or their foundations or... structures; (iii) The structural adequacy and stability of structures under all credible loading conditions... project works to withstand the loading or overtopping which may occur from a flood up to the probable...

  7. 18 CFR 12.35 - Specific inspection requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...) Seismicity; (ix) Internal stress and hydrostatic pressures in project structures or their foundations or... structures; (iii) The structural adequacy and stability of structures under all credible loading conditions... project works to withstand the loading or overtopping which may occur from a flood up to the probable...

  8. Live fire testing requirements - Assessing the impact

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Bryon, J.F.

    1992-08-01

    Full-up live-fire testing (LFT) of aircraft configured for combat is evaluated in terms of the practical implications of the technique. LFT legislation requires the testing of tactical fighters, helicopters, and other aircraft when they are loaded with the flammables and explosives associated with combat. LFT permits the study of damage mechanisms and battle-damage repair techniques during the design phase, and probability-of-kill estimates and novel systems designs can be developed based on LFT data.

  9. Real time network traffic monitoring for wireless local area networks based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Balouchestani, Mohammadreza

    2017-05-01

    A wireless local area network (WLAN) is an important type of wireless network that connects different wireless nodes in a local area. WLANs suffer from important problems such as network load balancing, large energy consumption, and the load of sampling. This paper presents a new network traffic approach based on Compressed Sensing (CS) for improving the quality of WLANs. The proposed architecture reduces the Data Delay Probability (DDP) to 15%, which is a good record for WLANs. It also increases Data Throughput (DT) by 22% and the Signal-to-Noise (S/N) ratio by 17%, providing a good basis for establishing high-quality local area networks. This architecture enables continuous data acquisition and compression of WLAN signals that are suitable for a variety of other wireless networking applications. At the transmitter side of each wireless node, an analog-CS framework is applied at the sensing step, before the analog-to-digital converter, in order to generate a compressed version of the input signal. At the receiver side of each wireless node, a reconstruction algorithm is applied to reconstruct the original signals from the compressed signals with high probability and sufficient accuracy. The proposed algorithm outperforms existing algorithms by achieving a good level of Quality of Service (QoS), reducing the Bit Error Rate (BER) at each wireless node by 15%.

  10. A Brownian model for recurrent earthquakes

    USGS Publications Warehouse

    Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.

    2002-01-01

    We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate because the coefficient of variation is less than, equal to, or greater than 1/√2 ≈ 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step perturbations occur.
Transient effects may be much stronger than would be predicted by the "clock change" method and characteristically decay inversely with elapsed time after the perturbation.
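
    The Brownian passage-time distribution is the inverse Gaussian, which has a closed-form CDF, so properties (1)-(3) above can be checked numerically. A minimal stdlib sketch, with an illustrative 100-year mean recurrence and aperiodicity 0.5 (not values from the paper):

```python
import math
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def bpt_pdf(t, mu, alpha):
    """Brownian passage-time (inverse Gaussian) density with mean mu
    and aperiodicity (coefficient of variation) alpha."""
    lam = mu / alpha**2
    return math.sqrt(lam / (2 * math.pi * t**3)) * math.exp(
        -lam * (t - mu) ** 2 / (2 * mu**2 * t))

def bpt_cdf(t, mu, alpha):
    """Closed-form inverse Gaussian CDF."""
    lam = mu / alpha**2
    a = math.sqrt(lam / t)
    return Phi(a * (t / mu - 1)) + math.exp(2 * lam / mu) * Phi(-a * (t / mu + 1))

def hazard(t, mu, alpha):
    """Instantaneous failure rate h(t) = f(t) / (1 - F(t))."""
    return bpt_pdf(t, mu, alpha) / (1.0 - bpt_cdf(t, mu, alpha))

mu, alpha = 100.0, 0.5
print(hazard(5, mu, alpha))    # ~0: no immediate rerupture
print(hazard(800, mu, alpha))  # near quasi-stationary level 1/(2*mu*alpha**2)
```

    With alpha = 0.5 < 1/√2, the quasi-stationary rate 1/(2·mu·alpha²) = 0.02/yr exceeds the mean rate 1/mu = 0.01/yr, matching property (3).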

  11. Seismic fragility assessment of low-rise stone masonry buildings

    NASA Astrophysics Data System (ADS)

    Abo-El-Ezz, Ahmad; Nollet, Marie-José; Nastev, Miroslav

    2013-03-01

    Many historic buildings in old urban centers in Eastern Canada are made of stone masonry reputed to be highly vulnerable to seismic loads. Seismic risk assessment of stone masonry buildings is therefore the first step in the risk mitigation process to provide adequate planning for retrofit and preservation of historical urban centers. This paper focuses on development of analytical displacement-based fragility curves reflecting the characteristics of existing stone masonry buildings in Eastern Canada. The old historic center of Quebec City has been selected as a typical study area. The standard fragility analysis combines the inelastic spectral displacement, a structure-dependent earthquake intensity measure, and the building damage state correlated to the induced building displacement. The proposed procedure consists of a three-step development process: (1) mechanics-based capacity model, (2) displacement-based damage model and (3) seismic demand model. The damage estimation for a uniform hazard scenario of 2% in 50 years probability of exceedance indicates that slight to moderate damage is the most probable damage experienced by these stone masonry buildings. Comparison is also made with fragility curves implicit in the seismic risk assessment tools Hazus and ELER. Hazus shows the highest probability of the occurrence of no to slight damage, whereas the highest probability of extensive and complete damage is predicted with ELER. This comparison shows the importance of the development of fragility curves specific to the generic construction characteristics in the study area and emphasizes the need for critical use of regional risk assessment tools and generated results.
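
    Displacement-based fragility curves of this kind are commonly expressed in lognormal form, P(damage ≥ state | Sd) = Φ(ln(Sd/Sd_median)/β). A sketch with hypothetical damage-state medians and dispersion (not the study's fitted values):

```python
from math import log
from statistics import NormalDist

Phi = NormalDist().cdf

def fragility(sd, sd_median, beta):
    """Probability that a damage state is reached or exceeded, given
    inelastic spectral displacement sd (lognormal fragility form)."""
    return Phi(log(sd / sd_median) / beta)

# Hypothetical damage-state median displacements (cm) and dispersion.
states = {"slight": 0.6, "moderate": 1.4, "extensive": 3.0, "complete": 5.5}
beta = 0.6
sd = 1.0  # illustrative demand from a uniform-hazard scenario
for name, med in states.items():
    print(name, round(fragility(sd, med, beta), 3))
```

    With these placeholder numbers the slight/moderate states dominate at moderate demand, qualitatively echoing the abstract's finding for the 2%-in-50-years scenario.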

  12. Desmopressin resistant nocturnal polyuria secondary to increased nocturnal osmotic excretion.

    PubMed

    Dehoorne, Jo L; Raes, Ann M; van Laecke, Erik; Hoebeke, Piet; Vande Walle, Johan G

    2006-08-01

    We investigated the role of increased solute excretion in children with desmopressin resistant nocturnal enuresis and nocturnal polyuria. A total of 42 children with monosymptomatic nocturnal enuresis and significant nocturnal polyuria with high nocturnal urinary osmolality (more than 850 mmol/l) were not responding to desmopressin. A 24-hour urinary concentration profile was obtained with measurement of urine volume, osmolality, osmotic excretion and creatinine. The control group consisted of 100 children without enuresis. Based on osmotic excretion patients were classified into 3 groups. Group 1 had 24-hour increased osmotic excretion, most likely secondary to a high renal osmotic load. This was probably diet related since 11 of these 12 patients were obese. Group 2 had increased osmotic excretion in the evening and night, probably due to a high renal osmotic load caused by the diet characteristics of the evening meal. Group 3 had deficient osmotic excretion during the day, secondary to extremely low fluid intake to compensate for small bladder capacity. Nocturnal polyuria with high urinary osmolality in our patients with desmopressin resistant monosymptomatic nocturnal enuresis is related to abnormal increased osmotic excretion. This may be explained by their fluid and diet habits, eg daytime fluid restriction, and high protein and salt intake.

  13. Risk-based targeting: A new approach in environmental protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, C.A.

    1995-12-31

    Risk-based targeting has recently emerged as an effective tool to help prioritize efforts to identify and manage geographic areas, chemicals, facilities, and agricultural activities that cause the most environmental degradation. This paper focuses on how the Environmental Protection Agency (EPA) has recently used risk-based targeting to identify and screen Federal, industrial, commercial and municipal facilities which contribute to probable human health (fish consumption advisories and contaminated fish tissue) and aquatic life (contaminated sediments) impacts. Preliminary results identified several hundred potential contributors of problem chemicals to probable impacts within the same river reach in 1991--93. Analysis by industry sector showed that the majority of the facilities identified were publicly owned treatment works (POTWs), in addition to industry organic and inorganic chemical manufacturers, petroleum refineries, and electric services, coatings, engravings, and allied services, among others. Both compliant and non-compliant potentially contributing facilities were identified to some extent in all EPA regions. Additional results identifying possible linkages of other pollutant sources to probable impacts, as well as estimation of potential exposure of these contaminants to minority and/or poverty populations are also presented. Out of these analyses, a number of short and long-term strategies are being developed that EPA may use to reduce loadings of problem contaminants to impacted waterbodies.

  14. Probabilistic Simulation of Progressive Fracture in Bolted-Joint Composite Laminates

    NASA Technical Reports Server (NTRS)

    Minnetyan, L.; Singhal, S. N.; Chamis, C. C.

    1996-01-01

    This report describes computational methods to probabilistically simulate fracture in bolted composite structures. An innovative approach that is independent of stress intensity factors and fracture toughness was used to simulate progressive fracture. The effect of design variable uncertainties on structural damage was also quantified. A fast probability integrator assessed the scatter in the composite structure response before and after damage. Then the sensitivity of the response to design variables was computed. General-purpose methods, which are applicable to bolted joints in all types of structures and in all fracture processes-from damage initiation to unstable propagation and global structure collapse-were used. These methods were demonstrated for a bolted joint of a polymer matrix composite panel under edge loads. The effects of the fabrication process were included in the simulation of damage in the bolted panel. Results showed that the most effective way to reduce end displacement at fracture is to control both the load and the ply thickness. The cumulative probability for longitudinal stress in all plies was most sensitive to the load; in the 0 deg. plies it was very sensitive to ply thickness. The cumulative probability for transverse stress was most sensitive to the matrix coefficient of thermal expansion. In addition, fiber volume ratio and fiber transverse modulus both contributed significantly to the cumulative probability for the transverse stresses in all the plies.

  15. Detection of drug resistance mutations at low plasma HIV-1 RNA load in a European multicentre cohort study.

    PubMed

    Prosperi, Mattia C F; Mackie, Nicola; Di Giambenedetto, Simona; Zazzi, Maurizio; Camacho, Ricardo; Fanti, Iuri; Torti, Carlo; Sönnerborg, Anders; Kaiser, Rolf; Codoñer, Francisco M; Van Laethem, Kristel; Bansi, Loveleen; van de Vijver, David A M C; Geretti, Anna Maria; De Luca, Andrea

    2011-08-01

    Guidelines indicate a plasma HIV-1 RNA load of 500-1000 copies/mL as the minimal threshold for antiretroviral drug resistance testing. Resistance testing at lower viral load levels may be useful to guide timely treatment switches, although data on the clinical utility of this remain limited. We report here the influence of viral load levels on the probability of detecting drug resistance mutations (DRMs) and other mutations by routine genotypic testing in a large multicentre European cohort, with a focus on tests performed at a viral load <1000 copies/mL. A total of 16 511 HIV-1 reverse transcriptase and protease sequences from 11 492 treatment-experienced patients were identified, and linked to clinical data on viral load, CD4 T cell counts and antiretroviral treatment history. Test results from 3162 treatment-naive patients served as controls. Multivariable analysis was employed to identify predictors of reverse transcriptase and protease DRMs. Overall, 2500/16 511 (15.14%) test results were obtained at a viral load <1000 copies/mL. Individuals with viral load levels of 1000-10000 copies/mL showed the highest probability of drug resistance to any drug class. Independently from other measurable confounders, treatment-experienced patients showed a trend for DRMs and other mutations to decrease at viral load levels <500 copies/mL. Genotypic testing at low viral load may identify emerging antiretroviral drug resistance at an early stage, and thus might be successfully employed in guiding prompt management strategies that may reduce the accumulation of resistance and cross-resistance, viral adaptive changes, and larger viral load increases.

  16. Quantifying Safety Margin Using the Risk-Informed Safety Margin Characterization (RISMC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, David; Bucknor, Matthew; Brunett, Acacia

    2015-04-26

    The Risk-Informed Safety Margin Characterization (RISMC), developed by Idaho National Laboratory as part of the Light-Water Reactor Sustainability Project, utilizes a probabilistic safety margin comparison between a load and capacity distribution, rather than a deterministic comparison between two values, as is usually done in best-estimate plus uncertainty analyses. The goal is to determine the failure probability, or in other words, the probability of the system load equaling or exceeding the system capacity. While this method has been used in pilot studies, there has been little work conducted investigating the statistical significance of the resulting failure probability. In particular, it is difficult to determine how many simulations are necessary to properly characterize the failure probability. This work uses classical (frequentist) statistics and confidence intervals to examine the impact in statistical accuracy when the number of simulations is varied. Two methods are proposed to establish confidence intervals related to the failure probability established using a RISMC analysis. The confidence interval provides information about the statistical accuracy of the method utilized to explore the uncertainty space, and offers a quantitative method to gauge the increase in statistical accuracy due to performing additional simulations.
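
    The load-versus-capacity comparison and a frequentist confidence interval on the resulting failure probability can be sketched as follows; the normal load and capacity distributions and the Wald-type interval are illustrative choices, not the specific methods proposed in the report.

```python
import math, random

random.seed(2)

def failure_prob_ci(n_sims, z=1.96):
    """Monte Carlo estimate of P(load >= capacity) with a normal-approximation
    (Wald) 95% confidence interval. Distributions are illustrative."""
    failures = sum(
        random.gauss(50, 5) >= random.gauss(70, 5)  # one load/capacity draw
        for _ in range(n_sims))
    p = failures / n_sims
    half = z * math.sqrt(p * (1 - p) / n_sims)  # CI half-width
    return p, half

# The interval tightens as more simulations are performed.
for n in (1000, 10000, 100000):
    p, half = failure_prob_ci(n)
    print(f"n={n:6d}  p={p:.4f}  +/- {half:.4f}")
```

    For these example distributions the true failure probability is Φ((50-70)/√50) ≈ 0.0023, so the shrinking interval can be checked against a known answer.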

  17. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.

    Cloud computing is a promising technology to manage and improve utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving there is no need to unnecessarily operate many servers under light loads, and they are switched off. On the other hand, some servers should be switched on in heavy load cases to prevent very long delays. Thus, waiting times and system operating cost can be maintained on an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is significant server setup costs and activation times. For better energy efficiency, a cloud computing system should not react to an instantaneous increase or instantaneous decrease of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server threshold-based infinite-capacity queuing system with hysteresis and noninstantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows estimation of a number of performance measures.
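
    The paper's hysteresis model with setup times is considerably more involved, but the basic step of deriving performance measures from steady-state probabilities can be illustrated with a plain M/M/c queue (no hysteresis, instantaneous activation); the arrival and service rates below are assumptions.

```python
import math

def mmc_metrics(lam, mu, c):
    """Steady-state measures of an M/M/c queue: empty-system probability p0,
    probability an arriving job must wait (Erlang C), mean queue length."""
    a = lam / mu          # offered load in Erlangs
    rho = a / c           # per-server utilisation, must be < 1
    assert rho < 1, "unstable system"
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    p_wait = a**c / (math.factorial(c) * (1 - rho)) * p0
    lq = p_wait * rho / (1 - rho)   # mean number waiting
    return p0, p_wait, lq

p0, p_wait, lq = mmc_metrics(lam=1.0, mu=1.0, c=2)
print(p0, p_wait, lq)
```

    In the threshold-based system, the same kind of balance-equation solution is carried out over an enlarged state space that also records how many servers are currently active.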

  18. Carbonate-H2O2 Leaching for Sequestering Uranium from Seawater

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Horng-Bin; Weisheng, Liao; Wai, Chien

    Uranium adsorbed on amidoxime-based polyethylene fiber in simulated seawater can be quantitatively eluted at room temperature using 1M Na2CO3 containing 0.1 M H2O2. This efficient elution process is probably due to formation of an extremely stable uranyl-peroxo-carbonato complex in the carbonate solution. After washing with water, the sorbent can be reused with little loss of uranium loading capacity. Possible existence of this stable uranyl species in ocean water is also discussed.

  19. Carbonate-H₂O₂ leaching for sequestering uranium from seawater.

    PubMed

    Pan, Horng-Bin; Liao, Weisheng; Wai, Chien M; Oyola, Yatsandra; Janke, Christopher J; Tian, Guoxin; Rao, Linfeng

    2014-07-28

    Uranium adsorbed on amidoxime-based polyethylene fiber in simulated seawater can be quantitatively eluted at room temperature using 1 M Na2CO3 containing 0.1 M H2O2. This efficient elution process is probably due to the formation of an extremely stable uranyl-peroxo-carbonato complex in the carbonate solution. After washing with water, the sorbent can be reused with minimal loss of uranium loading capacity. Possible existence of this stable uranyl species in ocean water is also discussed.

  20. A reliability-based cost effective fail-safe design procedure

    NASA Technical Reports Server (NTRS)

    Hanagud, S.; Uppaluri, B.

    1976-01-01

    The authors have developed a methodology for cost-effective fatigue design of structures subject to random fatigue loading. A stochastic model for fatigue crack propagation under random loading has been discussed. Fracture mechanics is then used to estimate the parameters of the model and the residual strength of structures with cracks. The stochastic model and residual strength variations have been used to develop procedures for estimating the probability of failure and its changes with inspection frequency. This information on reliability is then used to construct an objective function in terms of either a total weight function or cost function. A procedure for selecting the design variables, subject to constraints, by optimizing the objective function has been illustrated by examples. In particular, optimum design of stiffened panel has been discussed.

  1. A summary risk score for the prediction of Alzheimer disease in elderly persons.

    PubMed

    Reitz, Christiane; Tang, Ming-Xin; Schupf, Nicole; Manly, Jennifer J; Mayeux, Richard; Luchsinger, José A

    2010-07-01

    To develop a simple summary risk score for the prediction of Alzheimer disease in elderly persons based on their vascular risk profiles. A longitudinal, community-based study. New York, New York. Patients: One thousand fifty-one Medicare recipients aged 65 years or older and residing in New York who were free of dementia or cognitive impairment at baseline. We separately explored the associations of several vascular risk factors with late-onset Alzheimer disease (LOAD) using Cox proportional hazards models to identify factors that would contribute to the risk score. Then we estimated the score values of each factor based on their beta coefficients and created the LOAD vascular risk score by summing these individual scores. Risk factors contributing to the risk score were age, sex, education, ethnicity, APOE epsilon4 genotype, history of diabetes, hypertension or smoking, high-density lipoprotein levels, and waist to hip ratio. The resulting risk score predicted dementia well. According to the vascular risk score quintiles, the relative risk of developing probable LOAD was 1.0 (reference) for persons with a score of 0 to 14 and increased 3.7-fold for persons with a score of 15 to 18, 3.6-fold for persons with a score of 19 to 22, 12.6-fold for persons with a score of 23 to 28, and 20.5-fold for persons with a score higher than 28. While additional studies in other populations are needed to validate and further develop the score, our study suggests that this vascular risk score could be a valuable tool to identify elderly individuals who might be at risk of LOAD. This risk score could be used to identify persons at risk of LOAD, but can also be used to adjust for confounders in epidemiologic studies.
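
    The score-to-risk mapping reported in the abstract can be sketched directly; the per-band relative risks come from the abstract, but the per-factor point values below are hypothetical placeholders (the paper derives them from Cox-model beta coefficients, which the abstract does not list).

```python
# Relative risks per score band as reported in the abstract.
RISK_BANDS = [(0, 14, 1.0), (15, 18, 3.7), (19, 22, 3.6),
              (23, 28, 12.6), (29, 999, 20.5)]

def relative_risk(score):
    """Map a summed LOAD vascular risk score to the reported relative risk."""
    for lo, hi, rr in RISK_BANDS:
        if lo <= score <= hi:
            return rr
    raise ValueError("score out of range")

# Hypothetical point assignments for a few factors, for illustration only.
points = {"age>=80": 8, "APOE_e4": 4, "diabetes": 3, "hypertension": 2}
score = points["age>=80"] + points["APOE_e4"] + points["diabetes"]  # = 15
print(score, relative_risk(score))  # 15 -> 3.7
```

    Note the non-monotone step between the 15-18 and 19-22 bands (3.7 vs. 3.6) is as reported in the abstract.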

  2. An extravehicular suit impact load attenuation study to improve astronaut bone fracture prediction.

    PubMed

    Sulkowski, Christina M; Gilkey, Kelly M; Lewandowski, Beth E; Samorezov, Sergey; Myers, Jerry G

    2011-04-01

    Understanding the contributions to the risk of bone fracture during spaceflight is essential for mission success. A pressurized extravehicular activity (EVA) suit analogue test bed was developed, impact load attenuation data were obtained, and the load at the hip of an astronaut who falls to the side during an EVA was characterized. Offset (representing the gap between the EVA suit and the astronaut's body), impact load magnitude, and EVA suit operating pressure were factors varied in the study. The attenuation data were incorporated into a probabilistic model of bone fracture risk during spaceflight, replacing the previous load attenuation value that was based on commercial hip protector data. Load attenuation was more dependent on offset than on pressurization or load magnitude, especially at small offset values. Load attenuation factors for offsets between 0.1-1.5 cm were 0.69 +/- 0.15, 0.49 +/- 0.22, and 0.35 +/- 0.18 for mean impact forces of 4827, 6400, and 8467 N, respectively. Load attenuation factors for offsets of 2.8-5.3 cm were 0.93 +/- 0.2, 0.94 +/- 0.1, and 0.84 +/- 0.5 for the same mean impact forces. The mean and 95th percentile bone fracture risk index predictions were each reduced by 65-83%. The mean and 95th percentile bone fracture probability predictions were both reduced approximately 20-50%. The reduction in uncertainty and improved confidence in bone fracture predictions increased the fidelity and credibility of the fracture risk model and its benefit to mission design and in-flight operational decisions.

  3. Probability of nitrate contamination of recently recharged groundwaters in the conterminous United States

    USGS Publications Warehouse

    Nolan, B.T.; Hitt, K.J.; Ruddy, B.C.

    2002-01-01

    A new logistic regression (LR) model was used to predict the probability of nitrate contamination exceeding 4 mg/L in predominantly shallow, recently recharged ground waters of the United States. The new model contains variables representing (1) N fertilizer loading (p 2 = 0.875), indicating that the LR model fits the data well. The likelihood of nitrate contamination is greater in areas with high N loading and well-drained surficial soils over unconsolidated sand and gravels. The LR model correctly predicted the status of nitrate contamination in 75% of wells in a validation data set. Considering all wells used in both calibration and validation, observed median nitrate concentration increased from 0.24 to 8.30 mg/L as the mapped probability of nitrate exceeding 4 mg/L increased from less than or equal to 0.17 to > 0.83.
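
    The logistic-regression form of such a model maps the predictor variables to an exceedance probability through the logistic function. The coefficients below are hypothetical stand-ins chosen only to show the shape of the calculation, not the published model's fitted values.

```python
import math

def p_nitrate_gt4(n_load, pct_well_drained, sand_gravel):
    """Probability that shallow groundwater nitrate exceeds 4 mg/L,
    in logistic-regression form. All coefficients are hypothetical."""
    z = (-4.0                        # intercept (hypothetical)
         + 0.06 * n_load             # N fertilizer loading, kg/ha
         + 0.03 * pct_well_drained   # % well-drained surficial soils
         + 1.2 * sand_gravel)        # 1 if unconsolidated sand/gravel aquifer
    return 1.0 / (1.0 + math.exp(-z))

print(p_nitrate_gt4(10, 20, 0))   # low N loading, poorly drained
print(p_nitrate_gt4(60, 80, 1))   # high N loading, well-drained sand/gravel
```

    The sketch reproduces the qualitative finding: contamination likelihood rises with N loading and with well-drained soils over sand and gravel.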

  4. CARES/Life Software for Designing More Reliable Ceramic Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Powers, Lynn M.; Baker, Eric H.

    1997-01-01

    Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion, and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CAPES/Life software eases this task by providing a tool to optimize the design and manufacture of brittle material components using probabilistic reliability analysis techniques. Probabilistic component design involves predicting the probability of failure for a thermomechanically loaded component from specimen rupture data. Typically, these experiments are performed using many simple geometry flexural or tensile test specimens. A static, dynamic, or cyclic load is applied to each specimen until fracture. Statistical strength and SCG (fatigue) parameters are then determined from these data. Using these parameters and the results obtained from a finite element analysis, the time-dependent reliability for a complex component geometry and loading is then predicted. Appropriate design changes are made until an acceptable probability of failure has been reached.

  5. Dynamic Response of an Optomechanical System to a Stationary Random Excitation in the Time Domain

    DOE PAGES

    Palmer, Jeremy A.; Paez, Thomas L.

    2011-01-01

    Modern electro-optical instruments are typically designed with assemblies of optomechanical members that support optics such that alignment is maintained in service environments that include random vibration loads. This paper presents a nonlinear numerical analysis that calculates statistics for the peak lateral response of optics in an optomechanical sub-assembly subject to random excitation of the housing. The work is unique in that the prior art does not address peak response probability distribution for stationary random vibration in the time domain for a common lens-retainer-housing system with Coulomb damping. Analytical results are validated by using displacement response data from random vibration testing of representative prototype sub-assemblies. A comparison of predictions to experimental results yields reasonable agreement. The Type I Asymptotic form provides the cumulative distribution function for peak response probabilities. Probabilities are calculated for actual lens centration tolerances. The probability that peak response will not exceed the centration tolerance is greater than 80% for prototype configurations where the tolerance is high (on the order of 30 micrometers). Conversely, the probability is low for those where the tolerance is less than 20 micrometers. The analysis suggests a design paradigm based on the influence of lateral stiffness on the magnitude of the response.
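
    The Type I Asymptotic form mentioned here is the Gumbel distribution for the largest extreme value. A sketch of the tolerance check, with location and scale parameters that are purely illustrative (not fitted to the paper's data):

```python
import math

def gumbel_cdf(x, loc, scale):
    """Type I Asymptotic (Gumbel) CDF for the largest extreme value:
    probability the peak response does not exceed x."""
    return math.exp(-math.exp(-(x - loc) / scale))

# Hypothetical peak-response parameters in micrometers.
loc, scale = 18.0, 4.0
for tol in (20, 30):  # candidate centration tolerances, micrometers
    print(tol, round(gumbel_cdf(tol, loc, scale), 3))
```

    With these placeholder parameters the probability of staying within a 30-micrometer tolerance exceeds 0.9 while a 20-micrometer tolerance gives only ~0.55, qualitatively mirroring the abstract's high- versus low-tolerance contrast.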

  6. Effect of Cyclic Thermo-Mechanical Loads on Fatigue Reliability in Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Murthy, P. L. N.; Chamis, C. C.

    1996-01-01

    A methodology to compute probabilistic fatigue life of polymer matrix laminated composites has been developed and demonstrated. Matrix degradation effects caused by long term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress dependent multi-factor interaction relationship developed at NASA Lewis Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability integration method is used to compute probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) computed and their significance in the reliability- based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/- 45/90)(sub s) graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes has been studied. The results show that, at low mechanical cyclic loads and low thermal cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness. Whereas at high mechanical cyclic loads and high thermal cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.

  7. Optimized lower leg injury probability curves from postmortem human subject tests under axial impacts.

    PubMed

    Yoganandan, Narayan; Arun, Mike W J; Pintar, Frank A; Szabo, Aniko

    2014-01-01

    Derive optimum injury probability curves to describe human tolerance of the lower leg using parametric survival analysis. The study reexamined lower leg postmortem human subjects (PMHS) data from a large group of specimens. Briefly, axial loading experiments were conducted by impacting the plantar surface of the foot. Both injury and noninjury tests were included in the testing process. They were identified by pre- and posttest radiographic images and detailed dissection following the impact test. Fractures included injuries to the calcaneus and distal tibia-fibula complex (including pylon), representing severities at the Abbreviated Injury Score (AIS) level 2+. For the statistical analysis, peak force was chosen as the main explanatory variable and the age was chosen as the covariable. Censoring statuses depended on experimental outcomes. Parameters from the parametric survival analysis were estimated using the maximum likelihood approach and the dfbetas statistic was used to identify overly influential samples. The best fit from the Weibull, log-normal, and log-logistic distributions was based on the Akaike information criterion. Plus and minus 95% confidence intervals were obtained for the optimum injury probability distribution. The relative sizes of the interval were determined at predetermined risk levels. Quality indices were described at each of the selected probability levels. The mean age, stature, and weight were 58.2±15.1 years, 1.74±0.08 m, and 74.9±13.8 kg, respectively. Excluding all overly influential tests resulted in the tightest confidence intervals. The Weibull distribution was the most optimum function compared to the other 2 distributions. A majority of quality indices were in the good category for this optimum distribution when results were extracted for 25-, 45-, and 65-year-olds at 5, 25, and 50% risk levels for lower leg fracture.
For 25, 45, and 65 years, peak forces were 8.1, 6.5, and 5.1 kN at 5% risk; 9.6, 7.7, and 6.1 kN at 25% risk; and 10.4, 8.3, and 6.6 kN at 50% risk, respectively. This study derived axial loading-induced injury risk curves based on survival analysis using peak force and specimen age; adopting different censoring schemes; considering overly influential samples in the analysis; and assessing the quality of the distribution at discrete probability levels. Because procedures used in the present survival analysis are accepted by international automotive communities, current optimum human injury probability distributions can be used at all risk levels with more confidence in future crashworthiness applications for automotive and other disciplines.
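
    The reported force-risk pairs behave like a Weibull risk function with an age-dependent scale. The sketch below uses a shape of 10 and an exponential age model chosen only to be in the ballpark of the quoted 45-year-old values; these are our illustrative assumptions, not the paper's fitted survival model.

```python
import math

def injury_risk(force_kn, age, k=10.0):
    """AIS 2+ lower-leg injury risk as a Weibull function of peak axial
    force, with a scale that decreases with age. Shape k and the scale
    model are hypothetical placeholders roughly calibrated to the
    abstract's 45-year-old risk points."""
    eta = 8.7 * math.exp(-0.011 * (age - 45))  # scale in kN, illustrative
    return 1.0 - math.exp(-((force_kn / eta) ** k))

# Risk at 7.7 kN for the three reference ages.
for age in (25, 45, 65):
    print(age, round(injury_risk(7.7, age), 3))
```

    Under these placeholder parameters a 7.7 kN impact carries roughly 25% risk for a 45-year-old, with risk rising for older specimens, consistent with the trend in the abstract.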

  8. Performance evaluation of data center service localization based on virtual resource migration in software defined elastic optical network.

    PubMed

    Yang, Hui; Zhang, Jie; Ji, Yuefeng; Tan, Yuanlong; Lin, Yi; Han, Jianrui; Lee, Young

    2015-09-07

    Data center interconnection with elastic optical networks is a promising scenario to meet the high burstiness and high-bandwidth requirements of data center services. In our previous work, we implemented cross-stratum optimization of optical network and application stratum resources, allowing data center services to be accommodated. In view of this, this study extends the data center resources to the user side to enhance the end-to-end quality of service. We propose a novel data center service localization (DCSL) architecture based on virtual resource migration in a software defined elastic data center optical network. A migration evaluation scheme (MES) is introduced for DCSL based on the proposed architecture. The DCSL can enhance the responsiveness to dynamic end-to-end data center demands, and effectively reduce the blocking probability to globally optimize optical network and application resources. The overall feasibility and efficiency of the proposed architecture are experimentally verified on the control plane of our OpenFlow-based enhanced SDN testbed. The performance of the MES scheme under a heavy traffic load scenario is also quantitatively evaluated based on the DCSL architecture in terms of path blocking probability, provisioning latency and resource utilization, compared with another provisioning scheme.
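
    Path blocking probability of the kind evaluated here is classically approximated, for Poisson arrivals on a link with a fixed number of channels, by the Erlang B formula. A stdlib sketch using the numerically stable recursion; the channel count and offered load are assumptions, and the paper itself measures blocking experimentally rather than analytically.

```python
def erlang_b(servers, offered_load):
    """Erlang B blocking probability via the stable recursion
    B(0) = 1,  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# e.g. 10 wavelength slots on a link carrying 7 Erlangs (illustrative)
print(round(erlang_b(10, 7.0), 4))
```

    The recursion avoids the large factorials of the closed-form expression, which matters once links carry dozens of spectrum slots.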

  9. Geochemical and lithological factors in acid precipitation

    Treesearch

    James R. Kramer

    1976-01-01

    Acid precipitation is altered by interaction with rocks, sediment and soil. A calcareous region buffers even the most intense loading at pH ~8; an alumino silicate region with unconsolidated sediment buffers acid loadings at pH ~6.5; alumino silicate outcrops are generally acidified. Either FeOOH or alumino silicates are probable H+...

  10. Active controls technology to maximize structural efficiency

    NASA Technical Reports Server (NTRS)

    Hoy, J. M.; Arnold, J. M.

    1978-01-01

    The implication of the dependence on active controls technology during the design phase of transport structures is considered. Critical loading conditions are discussed along with probable ways of alleviating these loads. Why fatigue requirements may be critical and can only be partially alleviated is explained. The significance of certain flutter suppression system criteria is examined.

  11. Runoff and phosphorus loads from two Iowa fields with and without applied manure, 2000-2011

    USDA-ARS?s Scientific Manuscript database

    Understanding the dynamics of field-edge runoff water quality and responses to changes in management practices and climate through monitoring will probably require decade-duration data sets. This study compared runoff volumes and phosphorus loads from two fields in central Iowa, where the glacial la...

  12. Internalizing and externalizing problems in adolescence: general and dimension-specific effects of familial loadings and preadolescent temperament traits.

    PubMed

    Ormel, J; Oldehinkel, A J; Ferdinand, R F; Hartman, C A; De Winter, A F; Veenstra, R; Vollebergh, W; Minderaa, R B; Buitelaar, J K; Verhulst, F C

    2005-12-01

    We investigated the links between familial loading, preadolescent temperament, and internalizing and externalizing problems in adolescence, thereby distinguishing effects on maladjustment in general from dimension-specific effects on either internalizing or externalizing problems. In a population-based sample of 2230 preadolescents (10-11 years), familial loading (parental lifetime psychopathology) and offspring temperament were assessed at baseline by parent report, and offspring psychopathology at 2.5-year follow-up by self-report, teacher report, and parent report. We used purified measures of temperament and psychopathology and partialled out shared variance between internalizing and externalizing problems. Familial loading of internalizing psychopathology predicted offspring internalizing but not externalizing problems, whereas familial loading of externalizing psychopathology predicted offspring externalizing but not internalizing problems. Both familial loadings were associated with Frustration, low Effortful Control, and Fear. Frustration acted as a general risk factor predicting severity of maladjustment; low Effortful Control and Fear acted as dimension-specific risk factors that predicted a particular type of psychopathology; and Shyness, High-Intensity Pleasure, and Affiliation acted as direction markers that steered the conditional probability of internalizing versus externalizing problems in the event of maladjustment. Temperament traits mediated one-third of the association between familial loading and psychopathology. Findings were robust across different composite measures of psychopathology and applied to girls as well as boys. With regard to familial loading and temperament, it is important to distinguish general risk factors (Frustration) from dimension-specific risk factors (familial loadings, Effortful Control, Fear), and direction markers that act as pathoplastic factors (Shyness, High-Intensity Pleasure, Affiliation) from both types of risk factors. About one-third of familial loading effects on psychopathology in early adolescence are mediated by temperament.

  13. Promise and problems in using stress triggering models for time-dependent earthquake hazard assessment

    NASA Astrophysics Data System (ADS)

    Cocco, M.

    2001-12-01

    Earthquake stress changes can promote failures on favorably oriented faults and modify the seismicity pattern over broad regions around the causative faults. Because the induced stress perturbations modify the rate of production of earthquakes, they alter the probability of seismic events in a specified time window. Comparing the Coulomb stress changes with seismicity rate changes and aftershock patterns can statistically test the role of stress transfer in earthquake occurrence. The interaction probability may represent a further tool to test the stress-trigger or stress-shadow model. The probability model, which incorporates stress transfer, has the main advantage of including the contributions of the induced stress perturbation (a static step in its present formulation), the loading rate, and the fault constitutive properties. Because the mechanical conditions of the secondary faults at the time of application of the induced load are largely unknown, stress triggering can only be tested on fault populations and not on single earthquake pairs with a specified time delay. The interaction probability can represent the most suitable tool to test the interaction between large-magnitude earthquakes. Despite these important implications and stimulating perspectives, there exist problems in understanding earthquake interaction that should motivate future research but at the same time limit its immediate social applications. One major limitation is that we are unable to predict whether and how the induced stress perturbations modify the ratio of small to large magnitude earthquakes: we cannot distinguish between a change in this ratio in favor of small events or of large-magnitude earthquakes, because the interaction probability is independent of magnitude. Another problem concerns the reconstruction of the stressing history. The interaction probability model is based on the response to a static step; however, we know that other processes contribute to the stressing history perturbing the faults, such as dynamic stress changes and post-seismic stress changes caused by viscoelastic relaxation or fluid flow. If, for instance, we believe that dynamic stress changes can trigger aftershocks or earthquakes years after the passing of the seismic waves through the fault, the prospect of calculating interaction probability is untenable. It is therefore clear that we have learned a great deal about earthquake interaction by incorporating fault constitutive properties, resolving existing controversies but leaving open questions for future research.

  14. Cache-enabled small cell networks: modeling and tradeoffs.

    PubMed

    Baştuǧ, Ejder; Bennis, Mehdi; Kountouris, Marios; Debbah, Mérouane

    We consider a network model where small base stations (SBSs) have caching capabilities as a means to alleviate the backhaul load and satisfy users' demand. The SBSs are stochastically distributed over the plane according to a Poisson point process (PPP) and serve their users either (i) by bringing the content from the Internet through a finite rate backhaul or (ii) by serving them from the local caches. We derive closed-form expressions for the outage probability and the average delivery rate as a function of the signal-to-interference-plus-noise ratio (SINR), SBS density, target file bitrate, storage size, file length, and file popularity. We then analyze the impact of key operating parameters on the system performance. It is shown that a certain outage probability can be achieved either by increasing the number of base stations or the total storage size. Our results and analysis provide key insights into the deployment of cache-enabled small cell networks (SCNs), which are seen as a promising solution for future heterogeneous cellular networks.
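    The closed-form outage and delivery-rate expressions in this record depend on the full PPP/SINR model. One much smaller ingredient of such models, the local cache hit probability when each small base station stores the most popular files under a Zipf popularity law (a standard assumption in cache-enabled network analyses, not taken from this paper), can be sketched as follows; the catalog size and skew are hypothetical.

```python
def zipf_popularity(n_files, alpha):
    """Request probability of each file, ranked by popularity (Zipf law)."""
    weights = [1.0 / rank ** alpha for rank in range(1, n_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def cache_hit_probability(n_files, alpha, cache_size):
    """Probability a request is served from the local cache when each small
    base station stores the cache_size most popular files."""
    return sum(zipf_popularity(n_files, alpha)[:cache_size])

# Hypothetical catalog: caching 10% of 1000 files under alpha = 1 already
# serves a sizable fraction of requests locally.
hit = cache_hit_probability(1000, 1.0, 100)
```

This mirrors the storage-size tradeoff noted in the record: growing the cache raises the hit probability, reducing the load placed on the finite-rate backhaul.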

  15. A Bayesian Approach for Sensor Optimisation in Impact Identification

    PubMed Central

    Mallardo, Vincenzo; Sharif Khodaei, Zahra; Aliabadi, Ferri M. H.

    2016-01-01

    This paper presents a Bayesian approach for optimizing the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data is represented by statistical distributions of the recorded signals. An optimisation strategy based on a genetic algorithm is proposed to find the sensor combination that best locates impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both uniform and non-uniform probabilities of impact occurrence. PMID:28774064

  16. The use of subjective expert opinions in cost optimum design of aerospace structures. [probabilistic failure models

    NASA Technical Reports Server (NTRS)

    Thomas, J. M.; Hanagud, S.

    1975-01-01

    The results of two questionnaires sent to engineering experts are statistically analyzed and compared with objective data from Saturn V design and testing. Engineers were asked how likely it was for structural failure to occur at load increments above and below analysts' stress limit predictions. They were requested to estimate the relative probabilities of different failure causes, and of failure at each load increment given a specific cause. Three mathematical models are constructed based on the experts' assessment of causes. The experts' overall assessment of prediction strength fits the Saturn V data better than the models do, but a model test option (T-3) based on the overall assessment gives more design change likelihood to overstrength structures than does an older standard test option. T-3 compares unfavorably with the standard option in a cost optimum structural design problem. The report reflects a need for subjective data when objective data are unavailable.

  17. Reading skill and word skipping: Implications for visual and linguistic accounts of word skipping.

    PubMed

    Eskenazi, Michael A; Folk, Jocelyn R

    2015-11-01

    We investigated whether high-skill readers skip more words than low-skill readers as a result of parafoveal processing differences based on reading skill. We manipulated foveal load and word length, two variables that strongly influence word skipping, and measured reading skill using the Nelson-Denny Reading Test. We found that reading skill did not influence the probability of skipping five-letter words, but low-skill readers were less likely to skip three-letter words when foveal load was high. Thus, reading skill is likely to influence word skipping when the amount of information in the parafovea falls within the word identification span. We interpret the data in the context of visual-based (extended optimal viewing position model) and linguistic-based (E-Z Reader model) accounts of word skipping. The models make different predictions about how and why a word is skipped; however, the data indicate that both models should take into account the fact that different factors influence skipping rates for high- and low-skill readers. (c) 2015 APA, all rights reserved.

  18. Probabilistic Sizing and Verification of Space Ceramic Structures

    NASA Astrophysics Data System (ADS)

    Denaux, David; Ballhause, Dirk; Logut, Daniel; Lucarelli, Stefano; Coe, Graham; Laine, Benoit

    2012-07-01

    Sizing of ceramic parts is best optimised using a probabilistic approach which takes into account the preexisting flaw distribution in the ceramic part to compute a probability of failure of the part depending on the applied load, instead of a maximum allowable load as for a metallic part. This requires extensive knowledge of the material itself but also an accurate control of the manufacturing process. In the end, risk reduction approaches such as proof testing may be used to lower the final probability of failure of the part. Sizing and verification of ceramic space structures have been performed by Astrium for more than 15 years, both with Zerodur and SiC: Silex telescope structure, Seviri primary mirror, Herschel telescope, Formosat-2 instrument, and other ceramic structures flying today. Throughout this period of time, Astrium has investigated and developed experimental ceramic analysis tools based on the Weibull probabilistic approach. In the scope of the ESA/ESTEC study: “Mechanical Design and Verification Methodologies for Ceramic Structures”, which is to be concluded in the beginning of 2012, existing theories, technical state-of-the-art from international experts, and Astrium experience with probabilistic analysis tools have been synthesized into a comprehensive sizing and verification method for ceramics. Both classical deterministic and more optimised probabilistic methods are available, depending on the criticality of the item and on optimisation needs. The methodology, based on proven theory, has been successfully applied to demonstration cases and has shown its practical feasibility.
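    The Weibull probabilistic approach named in this record can be illustrated with a minimal sketch: a two-parameter Weibull failure probability under an assumed uniform stress state, plus the conditional failure probability after a survived proof test (the risk-reduction step the record mentions). All numeric values are hypothetical, not Astrium data.

```python
import math

def weibull_failure_probability(stress, sigma0, m, volume_ratio=1.0):
    """Two-parameter Weibull failure probability for a ceramic part under
    (assumed) uniform stress: sigma0 is the characteristic strength of the
    reference volume, m the Weibull modulus, volume_ratio = V / V_ref."""
    return 1.0 - math.exp(-volume_ratio * (stress / sigma0) ** m)

def failure_probability_after_proof_test(stress, proof_stress, sigma0, m):
    """Failure probability at service stress conditional on having survived
    a proof test at proof_stress: 1 - S(stress) / S(proof_stress)."""
    if stress <= proof_stress:
        return 0.0  # proof test already screened this load level
    survive_proof = 1.0 - weibull_failure_probability(proof_stress, sigma0, m)
    survive_service = 1.0 - weibull_failure_probability(stress, sigma0, m)
    return 1.0 - survive_service / survive_proof
```

The proof-test update shows why the record calls it a risk-reduction approach: conditioning on survival at the proof load always lowers the predicted failure probability at service loads.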

  19. Annual suspended-sediment loads in the Colorado River near Cisco, Utah, 1930-82

    USGS Publications Warehouse

    Thompson, K.R.

    1985-01-01

    The Colorado River upstream of gaging station 09180500 near Cisco, Utah, drains about 24,100 square miles in Utah and Colorado. Altitudes in the basin range from 12,480 feet near the headwaters to 4,090 feet at station 09180500. The average annual precipitation for 1894-1982 near the station was 7.94 inches. The average annual precipitation near the headwaters often exceeds 50 inches. Rocks ranging in age from Precambrian to Holocene are exposed in the drainage basin upstream from station 09180500. Shale, limestone, siltstone, mudstone, and sandstone probably are the most easily eroded rocks in the basin, and they contribute large quantities of sediment to the Colorado River. During 1930-82, the U.S. Geological Survey collected records of fluvial sediment at station 09180500. Based on these records, the mean annual suspended-sediment load was 11,390,000 tons, ranging from 2,038,000 tons in water year 1981 to 35,700,000 tons in water year 1938. The minimum daily load of 14 tons was on August 22, 1960, and the maximum daily load of 2,790,000 tons was on October 14, 1941. (USGS)

  20. Two tradeoffs between economy and reliability in loss of load probability constrained unit commitment

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Wang, Mingqiang; Ning, Xingyao

    2018-02-01

    Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by computing the loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, two tradeoffs, a primary and a secondary tradeoff between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model, are explored and analyzed in a small system and in the IEEE-RTS system. The analysis of the two tradeoffs can help in establishing new, efficient simplified LOLP formulations and new SR optimization models.
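    The record contrasts exact and simplified LOLP formulations. As a minimal illustration of the exact computation those methods approximate, the sketch below builds a capacity outage probability table by convolving independent two-state (up/down) generating units and reads off the LOLP for a given load; the three-unit fleet is hypothetical, not from the paper.

```python
def capacity_outage_table(units):
    """Exact capacity outage probability table for independent units.

    units: list of (capacity_mw, forced_outage_rate) pairs. Returns a dict
    mapping total capacity on outage -> probability, built by convolving
    each unit's two-state outage distribution into the table."""
    table = {0: 1.0}
    for cap, fo_rate in units:
        new_table = {}
        for outage, prob in table.items():
            new_table[outage] = new_table.get(outage, 0.0) + prob * (1.0 - fo_rate)
            new_table[outage + cap] = new_table.get(outage + cap, 0.0) + prob * fo_rate
        table = new_table
    return table

def lolp(units, load_mw):
    """Loss of load probability: chance that available capacity < load."""
    total = sum(cap for cap, _ in units)
    return sum(prob for outage, prob in capacity_outage_table(units).items()
               if total - outage < load_mw)

# Hypothetical fleet: two 100 MW units and one 50 MW unit (250 MW total).
units = [(100, 0.02), (100, 0.02), (50, 0.05)]
risk = lolp(units, 220)  # at 220 MW, load is lost whenever any unit is out
```

The table doubles in size with each unit, which is exactly the intractability the simplified formulations in the record are designed to avoid on large systems.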

  1. Probability-based methodology for buckling investigation of sandwich composite shells with and without cut-outs

    NASA Astrophysics Data System (ADS)

    Alfano, M.; Bisagni, C.

    2017-01-01

    The objective of the running EU project DESICOS (New Robust DESign Guideline for Imperfection Sensitive COmposite Launcher Structures) is to formulate an improved shell design methodology to meet the aerospace industry's demand for lighter structures. Within the project, this article discusses a probability-based methodology developed at Politecnico di Milano. It combines the Stress-Strength Interference Method and the Latin Hypercube Method to predict the buckling response of three sandwich composite cylindrical shells under a loading condition of pure compression. The three shells are made of the same material but have different stacking sequences and geometric dimensions; one of them presents three circular cut-outs. Different types of input imperfections, treated as random variables, are taken into account independently and in combination: variability in longitudinal Young's modulus, ply misalignment, geometric imperfections, and boundary imperfections. The methodology enables a first assessment of the structural reliability of the shells through the calculation of a probabilistic buckling factor for a specified level of probability. The factor depends highly on the reliability level, on the number of adopted samples, and on the assumptions made in modeling the input imperfections. The main advantage of the developed procedure is its versatility, as it can be applied to the buckling analysis of laminated composite shells and sandwich composite shells including different types of imperfections.
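    The Latin Hypercube Method used above draws imperfection samples so that each random variable is stratified across its whole range. A minimal sketch of generating such a design on the unit hypercube (the four-variable imperfection space and sample count are hypothetical; real use would map each column through the variable's inverse CDF):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """One Latin Hypercube design on the unit hypercube: in every dimension
    the n_samples points occupy distinct equal-probability strata, paired
    across dimensions by independent random permutations."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # random pairing of strata across dimensions
        columns.append([(s + rng.random()) / n_samples for s in strata])
    return [list(point) for point in zip(*columns)]

# Hypothetical 4-variable imperfection space (e.g. Young's modulus, ply
# misalignment, geometric and boundary imperfections), 50 samples.
design = latin_hypercube(50, 4)
```

Compared with plain Monte Carlo, this stratification is why far fewer buckling analyses suffice for a first reliability estimate.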

  2. A hybrid deterministic-probabilistic approach to model the mechanical response of helically arranged hierarchical strands

    NASA Astrophysics Data System (ADS)

    Fraldi, M.; Perrella, G.; Ciervo, M.; Bosia, F.; Pugno, N. M.

    2017-09-01

    Very recently, a Weibull-based probabilistic strategy has been successfully applied to bundles of wires to determine their overall stress-strain behaviour, also capturing previously unpredicted nonlinear and post-elastic features of hierarchical strands. This approach is based on the so-called "Equal Load Sharing (ELS)" hypothesis, by virtue of which, when a wire breaks, the load acting on the strand is homogeneously redistributed among the surviving wires. Despite the overall effectiveness of the method, discrepancies between theoretical predictions and in silico Finite Element-based simulations or experimental findings can arise when more complex structures are analysed, e.g. helically arranged bundles. To overcome these limitations, an enhanced hybrid approach is proposed in which the probability of rupture is combined with a deterministic mechanical model of a strand constituted by helically arranged and hierarchically organized wires. The analytical model is validated by comparing its predictions with both Finite Element simulations and experimental tests. The results show that generalized stress-strain responses - incorporating tension/torsion coupling - are naturally found and, once one or more elements break, the competition between the geometry and mechanics of the strand microstructure, i.e. the different cross sections and helical angles of the wires in the different hierarchical levels of the strand, determines a stress redistribution among the surviving wires that is no longer homogeneous; their fate is hence governed by a "Hierarchical Load Sharing" criterion.
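    Under the ELS hypothesis described above, the peak load of a bundle has a simple order-statistic form: just before the k-th weakest wire breaks, each of the n - k + 1 survivors carries that wire's strength. A hedged sketch of this classical result, with hypothetical Weibull wire-strength parameters (not the helical hierarchical model of the paper):

```python
import random

def els_peak_load(strengths):
    """Peak force of an equal-load-sharing bundle: sort wire strengths
    ascending; just before the k-th weakest wire fails, the surviving
    wires each carry its strength, so the peak force is the maximum of
    strength * (number of survivors)."""
    ordered = sorted(strengths)
    n = len(ordered)
    return max(ordered[k] * (n - k) for k in range(n))

def mean_bundle_strength(n_wires, weibull_shape, weibull_scale, n_trials, seed=0):
    """Monte Carlo mean peak load with Weibull-distributed wire strengths
    (hypothetical parameters, for illustration only)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        wires = [rng.weibullvariate(weibull_scale, weibull_shape)
                 for _ in range(n_wires)]
        total += els_peak_load(wires)
    return total / n_trials
```

The nonlinear, post-elastic features the record mentions emerge from exactly this competition between failing weak wires and the load picked up by the survivors.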

  3. Constituent loads in small streams: the process and problems of estimating sediment flux

    Treesearch

    R. B. Thomas

    1989-01-01

    Constituent loads in small streams are often estimated poorly. This is especially true for discharge-related constituents like sediment, since their flux is highly variable and mainly occurs during infrequent high-flow events. One reason for low-quality estimates is that most prevailing data collection methods ignore sampling probabilities and only partly account for...

  4. Bed load transport over a broad range of timescales: Determination of three regimes of fluctuations

    NASA Astrophysics Data System (ADS)

    Ma, Hongbo; Heyman, Joris; Fu, Xudong; Mettra, Francois; Ancey, Christophe; Parker, Gary

    2014-12-01

    This paper describes the relationship between the statistics of bed load transport flux and the timescale over which it is sampled. A stochastic formulation is developed for the probability distribution function of bed load transport flux, based on the Ancey et al. (2008) theory. An analytical solution for the variance of bed load transport flux over differing sampling timescales is presented. The solution demonstrates that the timescale dependence of the variance of bed load transport flux reduces to a three-regime relation demarcated by an intermittency timescale (tI) and a memory timescale (tc). As the sampling timescale increases, this variance passes through an intermittent stage (≪tI), an invariant stage (tI < t < tc), and a memoryless stage (≫ tc). We propose a dimensionless number (Ra) to represent the relative strength of fluctuation, which provides a common ground for comparison of fluctuation strength among different experiments, as well as different sampling timescales for each experiment. Our analysis indicates that correlated motion and the discrete nature of bed load particles are responsible for this three-regime behavior. We use the data from three experiments with high temporal resolution of bed load transport flux to validate the proposed three-regime behavior. The theoretical solution for the variance agrees well with all three sets of experimental data. Our findings contribute to the understanding of the observed fluctuations of bed load transport flux over monosize/multiple-size grain beds, to the characterization of an inherent connection between short-term measurements and long-term statistics, and to the design of appropriate sampling strategies for bed load transport flux.

  5. Developing the fuzzy c-means clustering algorithm based on maximum entropy for multitarget tracking in a cluttered environment

    NASA Astrophysics Data System (ADS)

    Chen, Xiao; Li, Yaan; Yu, Jing; Li, Yuxing

    2018-01-01

    For fast and more effective tracking of multiple targets in a cluttered environment, we propose a multiple target tracking (MTT) algorithm, maximum entropy fuzzy c-means clustering joint probabilistic data association, that combines fuzzy c-means clustering with the joint probabilistic data association (JPDA) algorithm. The algorithm uses the membership value to express the probability that a target originated a given measurement. The membership value is obtained by optimizing the fuzzy c-means clustering objective function under the maximum entropy principle. To account for the effect of measurements shared between targets, we use a correction factor to adjust the association probability matrix before estimating the state of each target. Because this algorithm avoids splitting the confirmation matrix, it solves the high computational load problem of the JPDA algorithm. Simulations and analysis of tracking neighboring parallel targets and crossing targets in cluttered environments of different densities show that the proposed algorithm can realize MTT quickly and efficiently in a cluttered environment. Further, the performance of the proposed algorithm remains constant with increasing process noise variance. The proposed algorithm has the advantages of efficiency and low computational load, which can ensure optimum performance when tracking multiple targets in a dense cluttered environment.
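    Maximum-entropy memberships of the kind described above take a softmax form in the squared measurement-to-target distances; the sketch below illustrates that form under this assumption (it is not the paper's full algorithm), with `beta` standing in for the Lagrange multiplier of the distortion constraint and the distances hypothetical.

```python
import math

def max_entropy_memberships(sq_dists, beta=1.0):
    """Membership of one measurement to each candidate target: the
    maximum-entropy distribution subject to a mean-distortion constraint
    is a softmax in the negative squared distances; larger beta gives
    harder (more confident) assignments."""
    weights = [math.exp(-beta * d) for d in sq_dists]
    total = sum(weights)
    return [w / total for w in weights]

# Closer targets receive larger membership (association probability).
memberships = max_entropy_memberships([0.25, 1.0, 4.0], beta=1.0)
```

Because every measurement gets a full membership vector in one pass, no enumeration of joint association events (the expensive step in classical JPDA) is needed.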

  6. Fishnet statistics for probabilistic strength and scaling of nacreous imbricated lamellar materials

    NASA Astrophysics Data System (ADS)

    Luo, Wen; Bažant, Zdeněk P.

    2017-12-01

    Similar to nacre (or brick masonry), imbricated (or staggered) lamellar structures are widely found in nature and man-made materials, and are of interest for biomimetics. They can achieve high defect insensitivity and fracture toughness, as demonstrated in previous studies. But their strength probability distribution with a realistic far-left tail has apparently been unknown. Here, strictly for statistical purposes, the microstructure of nacre is approximated by a diagonally pulled fishnet with quasibrittle links representing the shear bonds between parallel lamellae (or platelets). The probability distribution of fishnet strength is calculated as the sum of a rapidly convergent series of the failure probabilities after the rupture of one, two, three, etc., links. Each term represents a combination of joint probabilities and of additive probabilities of disjoint events, modified near the zone of failed links by the stress redistributions caused by previously failed links. Based on previous nano- and multi-scale studies at Northwestern, the strength distribution of each link, characterizing the interlamellar shear bond, is assumed to be a Gauss-Weibull graft, but with a deeper Weibull tail than in Type 1 failure of non-imbricated quasibrittle materials. The autocorrelation length is considered equal to the link length. The size of the zone of failed links at maximum load increases with the coefficient of variation (CoV) of link strength, and also with fishnet size. With an increasing width-to-length aspect ratio, a rectangular fishnet gradually transits from the weakest-link chain to the fiber bundle as the limit cases. The fishnet strength at failure probability 10^-6 grows with the width-to-length ratio. For a square fishnet boundary, the strength at 10^-6 failure probability is about 11% higher, while at fixed load the failure probability is about 25 times higher, than for the non-imbricated case. This is a major safety advantage of the fishnet architecture over particulate or fiber-reinforced materials. There is also a strong size effect, partly similar to that of Type 1, although the curves of log-strength versus log-size could cross each other. The predicted behavior is verified by about a million Monte Carlo simulations for each of many fishnet geometries, sizes, and CoVs of link strength. In addition to the weakest-link chain and the fiber bundle, the fishnet becomes the third analytically tractable statistical model of structural strength, with the former two as limit cases.
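    The record treats the weakest-link chain and the fiber bundle as the fishnet's limit cases. A minimal sketch of the series (weakest-link) limit for n links with identical strength distribution, including the classical consequence that a chain of Weibull links is again Weibull with characteristic strength scaled by n^(-1/m); the parameter values in the usage are hypothetical.

```python
import math

def weakest_link_pf(p_link, n_links):
    """Series (chain) limit: the structure fails if any one link fails,
    so P_f = 1 - (1 - P_1)^n for independent, identical links."""
    return 1.0 - (1.0 - p_link) ** n_links

def chain_characteristic_strength(s0, m, n_links):
    """For Weibull links, P_1(sigma) = 1 - exp(-(sigma/s0)^m), the chain of
    n links is again Weibull with characteristic strength s0 * n**(-1/m)."""
    return s0 * n_links ** (-1.0 / m)
```

This size-scaling of the characteristic strength is the statistical size effect the record's far-left-tail analysis refines for the imbricated (fishnet) case.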

  7. Numerical simulation of large-scale bed load particle tracer advection-dispersion in rivers with free bars

    USGS Publications Warehouse

    Iwasaki, Toshiki; Nelson, Jonathan M.; Shimizu, Yasuyuki; Parker, Gary

    2017-01-01

    Asymptotic characteristics of the transport of bed load tracer particles in rivers have been described by advection-dispersion equations. Here we perform numerical simulations designed to study the role of free bars, and more specifically single-row alternate bars, on streamwise tracer particle dispersion. In treating the conservation of tracer particle mass, we use two alternative formulations for the Exner equation of sediment mass conservation: the flux-based formulation, in which bed elevation varies with the divergence of the bed load transport rate, and the entrainment-based formulation, in which bed elevation changes with the net deposition rate. Under the condition of no net bed aggradation/degradation, a 1-D flux-based deterministic model that does not describe free bars yields no streamwise dispersion. The entrainment-based 1-D formulation, on the other hand, models stochasticity via the probability density function (PDF) of particle step length, and as a result does show tracer dispersion. When the formulation is generalized to 2-D to include free alternate bars, however, both models yield almost identical asymptotic advection-dispersion characteristics, in which streamwise dispersion is dominated by randomness inherent in free bar morphodynamics. This randomness can result in a heavy-tailed PDF of waiting time. In addition, migrating bars may constrain the travel distance through temporary burial, causing a thin-tailed PDF of travel distance. The superdiffusive character of streamwise particle dispersion predicted by the model is attributable to the interaction of these two effects.

  8. Numerical simulation of large-scale bed load particle tracer advection-dispersion in rivers with free bars

    NASA Astrophysics Data System (ADS)

    Iwasaki, Toshiki; Nelson, Jonathan; Shimizu, Yasuyuki; Parker, Gary

    2017-04-01

    Asymptotic characteristics of the transport of bed load tracer particles in rivers have been described by advection-dispersion equations. Here we perform numerical simulations designed to study the role of free bars, and more specifically single-row alternate bars, on streamwise tracer particle dispersion. In treating the conservation of tracer particle mass, we use two alternative formulations for the Exner equation of sediment mass conservation: the flux-based formulation, in which bed elevation varies with the divergence of the bed load transport rate, and the entrainment-based formulation, in which bed elevation changes with the net deposition rate. Under the condition of no net bed aggradation/degradation, a 1-D flux-based deterministic model that does not describe free bars yields no streamwise dispersion. The entrainment-based 1-D formulation, on the other hand, models stochasticity via the probability density function (PDF) of particle step length, and as a result does show tracer dispersion. When the formulation is generalized to 2-D to include free alternate bars, however, both models yield almost identical asymptotic advection-dispersion characteristics, in which streamwise dispersion is dominated by randomness inherent in free bar morphodynamics. This randomness can result in a heavy-tailed PDF of waiting time. In addition, migrating bars may constrain the travel distance through temporary burial, causing a thin-tailed PDF of travel distance. The superdiffusive character of streamwise particle dispersion predicted by the model is attributable to the interaction of these two effects.

  9. Pigment epithelial-derived factor gene loaded novel COOH-PEG-PLGA-COOH nanoparticles promoted tumor suppression by systemic administration.

    PubMed

    Yu, Ting; Xu, Bei; He, Lili; Xia, Shan; Chen, Yan; Zeng, Jun; Liu, Yongmei; Li, Shuangzhi; Tan, Xiaoyue; Ren, Ke; Yao, Shaohua; Song, Xiangrong

    2016-01-01

    Anti-angiogenesis has been proposed as an effective therapeutic strategy for cancer treatment. Pigment epithelium-derived factor (PEDF) is one of the most powerful endogenous anti-angiogenic reagents discovered to date and PEDF gene therapy has been recognized as a promising treatment option for various tumors. There is an urgent need to develop a safe and valid vector for its systemic delivery. Herein, a novel gene delivery system based on the newly synthesized copolymer COOH-PEG-PLGA-COOH (CPPC) was developed in this study, which was probably capable of overcoming the disadvantages of viral vectors and cationic lipids/polymers-based nonviral carriers. PEDF gene loaded CPPC nanoparticles (D-NPs) were fabricated by a modified double-emulsion water-in-oil-in-water (W/O/W) solvent evaporation method. D-NPs with uniform spherical shape had relatively high drug loading (~1.6%), probably because the introduced carboxyl group in the poly (D,L-lactide-co-glycolide) terminal enhanced the interaction of the copolymer with the PEDF gene complexes. An excellent in vitro antitumor effect was found in both C26 and A549 cells treated by D-NPs, in which PEDF levels were dramatically elevated due to the successful transfection of the PEDF gene. D-NPs also showed a strong inhibitory effect on proliferation of human umbilical vein endothelial cells in vitro and inhibited tumor-induced angiogenesis in vivo by an alginate-encapsulated tumor cell assay. Further in vivo antitumor investigation, carried out in a C26 subcutaneous tumor model by intravenous injection, demonstrated that D-NPs could achieve significant antitumor activity with sharply reduced microvessel density and significantly promoted tumor cell apoptosis. Additionally, in vitro hemolysis analysis and in vivo serological and biochemical analyses revealed that D-NPs had no obvious toxicity. All the data indicated that the novel CPPC nanoparticles were ideal vectors for the systemic delivery of the PEDF gene and might be widely used as systemic gene vectors.

  10. Effectiveness of two conventional methods for seismic retrofit of steel and RC moment resisting frames based on damage control criteria

    NASA Astrophysics Data System (ADS)

    Beheshti Aval, Seyed Bahram; Kouhestani, Hamed Sadegh; Mottaghi, Lida

    2017-07-01

    This study investigates the efficiency of two rehabilitation methods on the basis of economic justification, which can support rational decision making between retrofitting schemes. Among the various rehabilitation methods, concentric chevron bracing (CCB) and cylindrical friction dampers (CFD) were selected. The performance assessment procedure for the frames is divided into two distinct phases. First, the limit state probabilities of the structures before and after rehabilitation are investigated. In the second phase, the seismic risk of the structures in terms of life safety and financial losses (decision variables) is evaluated using the recently published FEMA P58 methodology. The results show that the proposed retrofitting methods improve the serviceability and life safety performance levels of steel and RC structures at different rates when subjected to earthquake loads. Moreover, these procedures reveal that financial losses are greatly decreased, and the reduction is more tangible with CFD than with CCB. Although both retrofitting methods reduced damage state probabilities, incorporating a site-specific seismic hazard curve to evaluate the mean annual occurrence frequency at the collapse prevention limit state produced unexpected results: contrary to CFD, the collapse probability of the structures retrofitted with CCB increased when compared with the primary structures.

  11. A data-driven wavelet-based approach for generating jumping loads

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Li, Guo; Racic, Vitomir

    2018-06-01

    This paper suggests an approach to generating human jumping loads using the wavelet transform and a database of individual jumping force records. A total of 970 individual jumping force records of various frequencies were first collected in three experiments from 147 test subjects. For each record, every jumping pulse was extracted and decomposed into seven levels by wavelet transform. All the decomposition coefficients were stored in an information database. Probability distributions of the jumping cycle period, contact ratio and energy of the jumping pulse were statistically analyzed. Inspired by the theory of DNA recombination, an approach was developed that interchanges the wavelet coefficients between different jumping pulses. To generate a jumping force time history with N pulses, wavelet coefficients were first selected randomly from the database at each level. They were then used to reconstruct N pulses by the inverse wavelet transform. Jumping cycle periods and contact ratios were then generated randomly based on their probabilistic functions. These parameters were assigned to each of the N pulses, which were in turn scaled by the amplitude factors βi to account for the energy relationship between successive pulses. The final jumping force time history was obtained by linking all the N cycles end to end. This simulation approach preserves the non-stationary features of the jumping load in the time-frequency domain. Applications indicate that this approach can be used to generate jumping force time histories due to a single person jumping and can also be extended to stochastic jumping loads due to groups and crowds.
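
    The recombination idea described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's method: it uses a simple three-level Haar transform in place of the seven-level wavelet used on the measured records, a hypothetical "database" of half-sine pulses instead of real jumping data, and omits the period/contact-ratio randomization and the βi energy scaling.

```python
import numpy as np

rng = np.random.default_rng(42)

def haar_dwt(x, levels):
    # Multi-level Haar decomposition; a stand-in for the seven-level
    # transform applied to each measured jumping pulse.
    a, details = x.astype(float), []
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        details.append(d)
    return details + [a]

def haar_idwt(coeffs):
    # Inverse transform: rebuild the signal level by level.
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * a.size)
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

# Hypothetical pulse database: half-sine pulses of varying amplitude,
# each stored as its decomposition coefficients, level by level.
n, levels = 64, 3
t = np.linspace(0.0, np.pi, n)
database = [haar_dwt(amp * np.sin(t), levels) for amp in rng.uniform(1.5, 3.0, 20)]

def generate_force(n_pulses):
    # "DNA recombination": at every level, borrow the coefficients of a
    # randomly chosen recorded pulse, invert the transform, and link
    # the reconstructed pulses end to end.
    pulses = []
    for _ in range(n_pulses):
        coeffs = [database[rng.integers(len(database))][lev]
                  for lev in range(levels + 1)]
        pulses.append(haar_idwt(coeffs))
    return np.concatenate(pulses)

force = generate_force(5)
```

    Because each level's coefficients come from a different real pulse, the synthetic pulse inherits measured time-frequency content rather than an idealized shape.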

  12. Offshore fatigue design turbulence

    NASA Astrophysics Data System (ADS)

    Larsen, Gunner C.

    2001-07-01

    Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.
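
    The final step of this procedure — combining a lognormal distribution of the turbulence standard deviation with a power-law fatigue load model — has a simple closed form. Assuming damage scales as σ^m for a Wöhler exponent m, the damage-equivalent design value is (E[σ^m])^(1/m), which for a lognormal σ is exp(μ + m·s²/2). The sketch below uses hypothetical fit parameters for one wind-speed bin and checks the closed form against Monte Carlo:

```python
import numpy as np

def damage_equivalent_sigma(mu_log, s_log, m):
    # (E[sigma**m])**(1/m) for lognormal sigma:
    # E[sigma**m] = exp(m*mu + m**2 * s**2 / 2).
    return np.exp(mu_log + 0.5 * m * s_log**2)

# Hypothetical lognormal fit of sigma | V at one wind-speed bin,
# with a Wohler exponent typical of welded steel details.
mu_log, s_log, m = np.log(1.2), 0.2, 10.0

rng = np.random.default_rng(7)
sigma = rng.lognormal(mu_log, s_log, 500_000)
mc_estimate = (sigma**m).mean() ** (1.0 / m)
```

    Note that the design value always exceeds the median exp(μ): the scatter in σ contributes extra fatigue damage, which is exactly the conservatism the article quantifies.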

  13. Some Interesting Applications of Probabilistic Techniques in Structural Dynamic Analysis of Rocket Engines

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.

    2014-01-01

    Numerical and analytical methods were developed to determine damage accumulation in specific engine components when speed variation is included. The Dither Life Ratio (DLR) is shown to be well over a factor of 2 for a specific example, and the steady-state assumption is shown to be accurate for most turbopump cases, allowing rapid calculation of DLR. If hot-fire speed data are unknown, a Monte Carlo method that uses speed statistics for similar engines was developed. Application of these techniques allows the analyst to reduce both uncertainty and excess conservatism, and high values of DLR could allow a previously unacceptable part to pass HCF criteria without redesign. Given the benefit and ease of implementation, it is recommended that any finite-life turbomachine component analysis adopt these techniques. Probability values were calculated, compared, and evaluated for several industry-proposed methods for combining random and harmonic loads. Two new Excel macros were written to calculate the combined load for any specific probability level, and closed-form curve fits were generated for the widely used 3(sigma) and 2(sigma) probability levels. For the design of lightweight aerospace components, obtaining an accurate, reproducible, and statistically meaningful answer is critical.
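
    The random-plus-harmonic combination problem can be illustrated with a short Monte Carlo sketch (not the industry methods evaluated in the abstract): sample the instantaneous combined load of a random-phase sinusoid plus Gaussian noise, take the quantile at the one-sided "3σ" probability level (P(exceed) ≈ 0.00135), and compare it with the classical conservative peak sum A + 3σ. Amplitudes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

A, sigma_r = 1.0, 1.0          # hypothetical harmonic amplitude / random RMS
p_exceed = 0.00135             # one-sided "3-sigma" probability level

# Instantaneous combined load: harmonic with random phase + Gaussian noise.
phase = rng.uniform(0.0, 2.0 * np.pi, 1_000_000)
combined = A * np.sin(phase) + rng.normal(0.0, sigma_r, phase.size)

load_3sigma = np.quantile(combined, 1.0 - p_exceed)
peak_sum = A + 3.0 * sigma_r   # classical conservative combination
```

    The Monte Carlo quantile falls between the random-only value 3σ and the peak sum A + 3σ, which is why probability-consistent combination rules can shave conservatism from the simple sum.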

  14. Photos for Estimating Residue Loadings Before and After Burning in Southern Appalachian Mixed Pine - Hardwood Clearcuts

    Treesearch

    Bradford M. Sanders; David H. van Lear

    1988-01-01

    Paired photographs show fuel conditions before and after burning in recently clearcut stands of mixed pine-hardwoods in the Southern Appalachians. Comparison with the photos permits fast assessment of fuel loading and probable burning success. Information with each photo includes measured weights, volumes, and other residue data, information about the timber stand and...

  15. Probabilistic Dynamic Buckling of Smart Composite Shells

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2003-01-01

    A computational simulation method is presented to evaluate the deterministic and nondeterministic dynamic buckling of smart composite shells. The combined use of composite mechanics, finite element computer codes, and probabilistic analysis enables an effective assessment of the dynamic buckling load of smart composite shells. A universal plot is generated to estimate the dynamic buckling load of composite shells at various load rates and probabilities. The shell structure is also evaluated with smart fibers embedded in the plies right below the outer plies. The results show that, on average, the use of smart fibers improved the shell buckling resistance by about 10 percent at different probabilities and delayed the buckling occurrence time. The probabilistic sensitivity results indicate that uncertainties in the fiber volume ratio and ply thickness have major effects on the buckling load, while uncertainties in the electric field strength and smart material volume fraction have moderate effects. For the specific shell considered in this evaluation, the use of smart composite material is not recommended because the shell buckling resistance can be improved by simply rearranging the orientation of the outer plies, as shown in the dynamic buckling analysis results presented in this report.

  16. Probabilistic Dynamic Buckling of Smart Composite Shells

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2007-01-01

    A computational simulation method is presented to evaluate the deterministic and nondeterministic dynamic buckling of smart composite shells. The combined use of intraply hybrid composite mechanics, finite element computer codes, and probabilistic analysis enables an effective assessment of the dynamic buckling load of smart composite shells. A universal plot is generated to estimate the dynamic buckling load of composite shells at various load rates and probabilities. The shell structure is also evaluated with smart fibers embedded in the plies right next to the outer plies. The results show that, on average, the use of smart fibers improved the shell buckling resistance by about 10% at different probabilities and delayed the buckling occurrence time. The probabilistic sensitivity results indicate that uncertainties in the fiber volume ratio and ply thickness have major effects on the buckling load, while uncertainties in the electric field strength and smart material volume fraction have moderate effects. For the specific shell considered in this evaluation, the use of smart composite material is not recommended because the shell buckling resistance can be improved by simply rearranging the orientation of the outer plies, as shown in the dynamic buckling analysis results presented in this report.

  17. Reliability, Risk and Cost Trade-Offs for Composite Designs

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.

    1996-01-01

    Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties, including those in constituent material properties, fabrication variables, structure geometry and loading conditions. The probability density function of the first buckling load for a set of uncertain variables is computed. The probabilistic sensitivity factors of the uncertain variables with respect to the first buckling load are calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to the requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of the probabilistic sensitivity factors. With optimum design parameters such as the mean and coefficient of variation (representing the range of scatter) of the uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of the fiber modulus (design parameter) in the longitudinal direction.
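
    One common, simple way to obtain probabilistic sensitivity factors of the kind described above is to correlate sampled uncertain inputs with the sampled response. The sketch below is purely illustrative: it uses a hypothetical linearized buckling-load model, not the paper's fuselage panel analysis, and correlation-based factors, not necessarily the factors used in the paper's optimization.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

# Hypothetical normalized uncertain variables; the coefficients mimic a
# linearized buckling-load response (larger coefficient -> more influence).
Ef  = rng.normal(1.0, 0.05, n)   # fiber modulus
tpl = rng.normal(1.0, 0.03, n)   # ply thickness
ecc = rng.normal(0.0, 0.10, n)   # load eccentricity
buckling_load = 2.0 * Ef + 1.0 * tpl - 0.3 * ecc

def sensitivity(x, y):
    # Probabilistic sensitivity factor as a correlation coefficient.
    return float(np.corrcoef(x, y)[0, 1])

factors = {name: sensitivity(v, buckling_load)
           for name, v in [("fiber modulus", Ef),
                           ("ply thickness", tpl),
                           ("eccentricity", ecc)]}
```

    Variables with large factors are the ones whose scatter most drives the scatter in the buckling load, and hence the ones worth tightening in manufacturing.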

  18. A Neutron Based Interrogation System For SNM In Cargo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, Steven Z.; Koltick, David S.

    A complete system has been simulated using experimentally obtained input parameters for the detection of special nuclear materials (SNM). A variation of the associated particle imaging (API) technique, referred to as reverse associated particle imaging detection (RAPID), has been developed in the context of detecting 5-kg spherical samples of U-235 in cargo. The RAPID technique allows for the interrogation of containers at neutron production rates between ~1×10^8 neutrons/s and ~3×10^8 neutrons/s. The figure of merit for system performance is the time to detect the threat material with a 95% probability of detection and a 10^-4 false positive rate per interrogated voxel of cargo. Detection times of 5 minutes were found for a maximally loaded cargo container uniformly filled with iron, and as low as 1 second for containers loaded to 1/4 of full capacity with either iron or wood. The worst-case system performance, a 30-minute interrogation time, occurs for a maximally loaded container of wood at 0.4 g/cm^3.
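
    The detection-time figure of merit can be illustrated with standard counting statistics, which is not the paper's full simulation but shows the shape of the trade-off. Under a Gaussian approximation to Poisson counting, the decision threshold is set by the 10^-4 false-positive requirement on the background counts, and the interrogation time grows until the signal-plus-background distribution clears that threshold with 95% probability. The rates below are hypothetical counts per second for one voxel.

```python
import math

Z_FP = 3.719    # one-sided z-score for a 1e-4 false-positive rate
Z_DET = 1.645   # z-score for 95% detection probability

def detection_time(s, b, dt=0.1):
    # Smallest time t (in arbitrary units) at which the count threshold
    # b*t + Z_FP*sqrt(b*t) is exceeded with >= 95% probability when the
    # true rate is s + b.  Gaussian approximation to Poisson counting.
    t = 0.0
    while True:
        t += dt
        threshold = b * t + Z_FP * math.sqrt(b * t)
        z = ((s + b) * t - threshold) / math.sqrt((s + b) * t)
        if z >= Z_DET:
            return t

t_detect = detection_time(5.0, 10.0)   # hypothetical signal/background rates
```

    The required time rises quickly as the signal rate falls or the background rises, which mirrors the contrast between the 1-second quarter-loaded case and the 30-minute dense-wood case.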

  19. Probabilistic Simulation of Combined Thermo-Mechanical Cyclic Fatigue in Composites

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2011-01-01

    A methodology to compute the probabilistically combined thermo-mechanical fatigue life of polymer matrix laminated composites has been developed and demonstrated. Matrix degradation effects caused by long-term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress-dependent multifactor-interaction relationship developed at NASA Glenn Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability-integration method is used to compute the probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) are computed, and their significance in reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/-45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes, has been studied. The results show that, at low mechanical-cyclic loads and low thermal-cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to the matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness, whereas at high mechanical-cyclic loads and high thermal-cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the matrix shear strength, longitudinal fiber modulus, matrix modulus, and ply thickness.

  20. Probabilistic Simulation for Combined Cycle Fatigue in Composites

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2010-01-01

    A methodology to compute the probabilistic fatigue life of polymer matrix laminated composites has been developed and demonstrated. Matrix degradation effects caused by long-term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress-dependent multifactor interaction relationship developed at NASA Glenn Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability integration method is used to compute the probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) are computed, and their significance in reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/-45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes, has been studied. The results show that, at low mechanical cyclic loads and low thermal cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to the matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness, whereas at high mechanical cyclic loads and high thermal cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the matrix shear strength, longitudinal fiber modulus, matrix modulus, and ply thickness.

  1. Probabilistic Simulation of Combined Thermo-Mechanical Cyclic Fatigue in Composites

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2010-01-01

    A methodology to compute the probabilistically combined thermo-mechanical fatigue life of polymer matrix laminated composites has been developed and demonstrated. Matrix degradation effects caused by long-term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress-dependent multifactor-interaction relationship developed at NASA Glenn Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability-integration method is used to compute the probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) are computed, and their significance in reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/-45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes, has been studied. The results show that, at low mechanical-cyclic loads and low thermal-cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to the matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness, whereas at high mechanical-cyclic loads and high thermal-cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the matrix shear strength, longitudinal fiber modulus, matrix modulus, and ply thickness.

  2. Analysis of various descent trajectories for a hypersonic-cruise, cold-wall research airplane

    NASA Technical Reports Server (NTRS)

    Lawing, P. L.

    1975-01-01

    The probable descent operating conditions for a hypersonic air-breathing research airplane were examined. Descents selected were cruise angle of attack, high dynamic pressure, high lift coefficient, turns, and descents with drag brakes. The descents were parametrically exercised and compared from the standpoint of cold-wall (367 K) aircraft heat load. The descent parameters compared were total heat load, peak heating rate, time to landing, time to end of heat pulse, and range. Trends in total heat load as a function of cruise Mach number, cruise dynamic pressure, angle-of-attack limitation, pull-up g-load, heading angle, and drag-brake size are presented.

  3. Nutrient Concentrations, Loads, and Yields in the Eucha-Spavinaw Basin, Arkansas and Oklahoma, 2002-2004

    USGS Publications Warehouse

    Tortorelli, Robert L.

    2006-01-01

    The City of Tulsa, Oklahoma, uses Lake Eucha and Spavinaw Lake in the Eucha-Spavinaw basin in northwestern Arkansas and northeastern Oklahoma for public water supply. Taste and odor problems in the water attributable to blue-green algae have increased in frequency over time. Changes in the algae community in the lakes may be attributable to increases in nutrient levels in the lakes, and in the waters feeding the lakes. The U.S. Geological Survey, in cooperation with the City of Tulsa, conducted an investigation to summarize nitrogen and phosphorus concentrations and provide estimates of nitrogen and phosphorus loads, yields, and flow-weighted concentrations in the Eucha-Spavinaw basin for a 3-year period from January 2002 through December 2004. This report provides information needed to advance knowledge of the regional hydrologic system and understanding of hydrologic processes, and provides hydrologic data and results useful to multiple parties for interstate compacts. Nitrogen and phosphorus concentrations were significantly greater in runoff samples than in base-flow samples at Spavinaw Creek near Maysville, Arkansas; Spavinaw Creek near Colcord, Oklahoma, and Beaty Creek near Jay, Oklahoma. Runoff concentrations were not significantly greater than in base-flow samples at Spavinaw Creek near Cherokee, Arkansas; and Spavinaw Creek near Sycamore, Oklahoma. Nitrogen concentrations in base-flow samples significantly increased in the downstream direction in Spavinaw Creek from the Maysville to Sycamore stations then significantly decreased from the Sycamore to the Colcord stations. Nitrogen in base-flow samples from Beaty Creek was significantly less than in those from Spavinaw Creek. Phosphorus concentrations in base-flow samples significantly increased from the Maysville to Cherokee stations in Spavinaw Creek, probably due to a point source between those stations, then significantly decreased downstream from the Cherokee to Colcord stations. 
Phosphorus in base-flow samples from Beaty Creek was significantly less than phosphorus in base-flow samples from Spavinaw Creek downstream from the Maysville station. Nitrogen concentrations in runoff samples were not significantly different among the stations on Spavinaw Creek; however, the concentrations at Beaty Creek were significantly less than at all other stations. Phosphorus concentrations in runoff samples were not significantly different among the three downstream stations on Spavinaw Creek, and not significantly different at the Maysville station on Spavinaw Creek and the Beaty Creek station. Phosphorus and nitrogen concentrations in runoff samples from all stations generally increased with increasing streamflow. Estimated mean annual nitrogen total loads from 2002-2004 were substantially greater at the Spavinaw Creek stations than at Beaty Creek and increased in a downstream direction from Maysville to Colcord in Spavinaw Creek, with the load at the Colcord station about 2 times that of Maysville station. Estimated mean annual nitrogen base-flow loads at the Spavinaw Creek stations were about 5 to 11 times greater than base-flow loads at Beaty Creek. The runoff component of the annual nitrogen total load for Beaty Creek was 85 percent, whereas, at the Spavinaw Creek stations, the range in the runoff component was 60 to 66 percent. Estimated mean annual phosphorus total loads from 2002-2004 were greater at the Spavinaw Creek stations from Cherokee to Colcord than at Beaty Creek and increased in a downstream direction from Maysville to Colcord in Spavinaw Creek, with the load at the Colcord station about 2.5 times that of Maysville station. Estimated mean annual phosphorus base-flow loads at the Spavinaw Creek stations were about 2.5 to 19 times greater than at Beaty Creek. Phosphorus base-flow loads increased about 8 times from Maysville to Cherokee in Spavinaw Creek; the base-flow loads were about the same at the three downstream stations. 
The runoff component

  4. Micro- and macromechanics of fracture of structural elements

    NASA Astrophysics Data System (ADS)

    Zavoychinskaya, E. B.

    2012-05-01

    A mathematical model for the description of bulk microfracture processes in metals, which are understood as nucleation and coalescence of submicrocracks, microcracks, and short nonpropagating microcracks, and of brittle macrofracture processes in metals is presented. This model takes into account the laws of formation and propagation of short propagating, medium, and significant microcracks. The basic notions of this model are the reduced length of cracks and the probability of micro- and macrofracture. The model is based on the mechanical parameters of metal strength and fracture, which are studied experimentally. The expressions for determining the probability in the case of one-dimensional symmetric loading are given. The formulas for determining the threshold number of cycles at the beginning of crack formation are obtained for cracks of each type. For the first time, the data on standard parameters of fatigue strength were used to construct the fatigue curve of metals and alloys for macrocracks.

  5. Focused sunlight factor of forest fire danger assessment using Web-GIS and RS technologies

    NASA Astrophysics Data System (ADS)

    Baranovskiy, Nikolay V.; Sherstnyov, Vladislav S.; Yankovich, Elena P.; Engel, Marina V.; Belov, Vladimir V.

    2016-08-01

    The Timiryazevskiy forestry of Tomsk region (Siberia, Russia) is the study area of the current research. Forest fire danger assessment is based on a unique technology using a probabilistic criterion, statistical data on forest fires, meteorological conditions, forest site classification and remote sensing data. MODIS products are used for estimating some meteorological conditions and the current forest fire situation. Geoinformation technologies are used for geospatial analysis of the forest fire danger situation on controlled forested territories. The GIS engine provides opportunities to construct electronic maps with different levels of forest fire probability and supports a raster layer for satellite remote sensing data on current forest fires. A web interface is used for loading data onto a specific web site and for representing forest fire danger data via the World Wide Web. Special web forms provide an interface for choosing relevant input data in order to process the forest fire danger data and assess the forest fire probability.

  6. A self-analysis of the NASA-TLX workload measure.

    PubMed

    Noyes, Jan M; Bruneau, Daniel P J

    2007-04-01

    Computer use and, more specifically, the administration of tests and materials online continue to proliferate. A number of subjective, self-report workload measures exist, but the National Aeronautics and Space Administration-Task Load Index (NASA-TLX) is probably the best known and most widely used. The aim of this paper is to consider the workload costs associated with the computer-based and paper versions of the NASA-TLX measure. It was found that there is a significant difference between the workload scores for the two media, with the computer version of the NASA-TLX incurring more workload. This has implications for the practical use of the NASA-TLX as well as for other computer-based workload measures.

  7. Photoanode Thickness Optimization and Impedance Spectroscopic Analysis of Dye-Sensitized Solar Cells based on a Carbazole-Containing Ruthenium Dye

    NASA Astrophysics Data System (ADS)

    Choi, Jongwan; Kim, Felix Sunjoo

    2018-03-01

    We studied the influence of photoanode thickness on the photovoltaic characteristics and impedance responses of dye-sensitized solar cells based on a ruthenium dye containing a hexyloxyl-substituted carbazole unit (Ru-HCz). As the thickness of the photoanode increases from 4.2 μm to 14.8 μm, the dye-loading amount and the efficiency increase. Devices with even thicker photoanodes show a decrease in efficiency due to the higher probability of recombination of electron-hole pairs before charge extraction. We also analyzed the electron-transfer and recombination characteristics as a function of photoanode thickness through detailed electrochemical impedance spectroscopy analysis.

  8. Impact of adherence on duration of virological suppression among patients receiving combination antiretroviral therapy.

    PubMed

    Raboud, J M; Harris, M; Rae, S; Montaner, J S G

    2002-04-01

    To assess the effect of adherence to antiretroviral therapy on the duration of virological suppression after controlling for whether or not the patient ever attained a plasma viral load below the limit of detection of sensitive HIV-1 RNA assays. Data were combined from three randomized, blinded clinical trials (INCAS, AVANTI-2, and AVANTI-3) that compared the antiviral effects of two- and three-drug antiretroviral regimens. Virological suppression was defined as maintaining a plasma viral load below 1000 copies/mL. Adherence was defined prospectively and measured by patient self-report. Adherence did not have a major impact on the probability of achieving virological suppression for patients receiving dual therapy. However, for patients receiving triple therapy, adherence increased the probability of virological suppression, whether the plasma viral load nadir was above or below the lower limit of quantification. Compared to adherent patients with a plasma viral load nadir below the lower limit of quantification, the relative risk of virological failure was 3.0 for non-adherent patients with a nadir below the limit, 18.1 for adherent patients with a nadir above the limit, and 32.1 for non-adherent patients with a nadir above the limit. For patients receiving current three-drug antiretroviral regimens, adherence to therapy and plasma viral load nadir are important factors determining the duration of virological suppression.

  9. Predicting the Reliability of Brittle Material Structures Subjected to Transient Proof Test and Service Loading

    NASA Astrophysics Data System (ADS)

    Nemeth, Noel N.; Jadaan, Osama M.; Palfi, Tamas; Baker, Eric H.

    Brittle materials today are being used, or considered, for a wide variety of high-tech applications that operate in harsh environments, including static and rotating turbine parts, thermal protection systems, dental prosthetics, fuel cells, oxygen transport membranes, radomes, and MEMS. Designing brittle material components to sustain repeated load without fracturing, while using the minimum amount of material, requires the use of a probabilistic design methodology. The NASA CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code provides a general-purpose analysis tool that predicts the probability of failure of a ceramic component as a function of its time in service. This capability includes predicting the time-dependent failure probability of ceramic components against catastrophic rupture when subjected to transient thermomechanical loads (including cyclic loads). The developed methodology allows for changes in material response that can occur with temperature or time (i.e., changing fatigue and Weibull parameters with temperature or time). This article gives an overview of the transient reliability methodology and describes how it is extended to account for proof testing. The CARES/Life code has been modified to interface with commercially available finite element analysis (FEA) codes executed for transient load histories. Examples are provided to demonstrate the features of the methodology as implemented in the CARES/Life program.
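
    The proof-test aspect mentioned above has a compact textbook form, sketched below. This is not the CARES/Life transient implementation; it only shows the truncation idea for a two-parameter Weibull strength distribution with hypothetical parameters: components that survive a proof stress have strength above it, so the in-service failure probability is conditional on that survival.

```python
import math

def weibull_pf(sigma, m, sigma0):
    # Two-parameter Weibull probability of failure at peak stress sigma.
    return 1.0 - math.exp(-((sigma / sigma0) ** m))

def pf_after_proof(sigma_service, sigma_proof, m, sigma0):
    # Survivors of the proof test have strength > sigma_proof, so the
    # service failure probability is the truncated (conditional) one.
    f_s = weibull_pf(sigma_service, m, sigma0)
    f_p = weibull_pf(sigma_proof, m, sigma0)
    return max(0.0, (f_s - f_p) / (1.0 - f_p))

m, sigma0 = 10.0, 400.0          # hypothetical Weibull modulus / scale (MPa)
pf_raw   = weibull_pf(300.0, m, sigma0)
pf_proof = pf_after_proof(300.0, 350.0, m, sigma0)
```

    When the service stress stays below the proof stress (and the flaw population is unchanged by the proof load), the conditional failure probability drops to zero; in general it is always below the untested value.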

  10. Optimized lower leg injury probability curves from post-mortem human subject tests under axial impacts

    PubMed Central

    Yoganandan, Narayan; Arun, Mike W.J.; Pintar, Frank A.; Szabo, Aniko

    2015-01-01

    Objective Derive optimum injury probability curves to describe human tolerance of the lower leg using parametric survival analysis. Methods The study re-examined lower leg PMHS data from a large group of specimens. Briefly, axial loading experiments were conducted by impacting the plantar surface of the foot. Both injury and non-injury tests were included in the testing process. They were identified by pre- and post-test radiographic images and detailed dissection following the impact test. Fractures included injuries to the calcaneus and distal tibia-fibula complex (including pylon), representing severities at Abbreviated Injury Scale (AIS) level 2+. For the statistical analysis, peak force was chosen as the main explanatory variable and age was chosen as the covariable. Censoring statuses depended on experimental outcomes. Parameters of the parametric survival analysis were estimated using the maximum likelihood approach, and the dfbetas statistic was used to identify overly influential samples. The best fit among the Weibull, log-normal and log-logistic distributions was selected based on the Akaike Information Criterion. Plus and minus 95% confidence intervals were obtained for the optimum injury probability distribution, and the relative sizes of the interval were determined at predetermined risk levels. Quality indices were described at each of the selected probability levels. Results The mean age, stature and weight were 58.2 ± 15.1 years, 1.74 ± 0.08 m and 74.9 ± 13.8 kg. Excluding all overly influential tests resulted in the tightest confidence intervals. The Weibull distribution was the most optimum function compared to the other two distributions. A majority of the quality indices were in the good category for this optimum distribution when results were extracted for the 25-, 45- and 65-year-old age groups at 5, 25 and 50% risk levels for lower leg fracture.
For the 25-, 45-, and 65-year-old groups, peak forces were 8.1, 6.5, and 5.1 kN at 5% risk; 9.6, 7.7, and 6.1 kN at 25% risk; and 10.4, 8.3, and 6.6 kN at 50% risk, respectively. Conclusions This study derived axial-loading-induced injury risk curves based on survival analysis using peak force and specimen age; adopting different censoring schemes; considering overly influential samples in the analysis; and assessing the quality of the distribution at discrete probability levels. Because the procedures used in the present survival analysis are accepted by international automotive communities, the optimum human injury probability distributions can be used at all risk levels with greater confidence in future crashworthiness applications for automotive and other disciplines. PMID:25307381
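
    The distribution-selection step described above (choosing among Weibull, log-normal, and log-logistic fits by the Akaike Information Criterion) can be sketched as follows. The sample data and the omission of censoring are simplifications for illustration; this is not the study's PMHS data or its full survival model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical peak-force sample (kN); real PMHS data are not reproduced here.
forces = stats.weibull_min(c=4.0, scale=8.0).rvs(size=60, random_state=rng)

candidates = {
    "weibull": stats.weibull_min,
    "log-normal": stats.lognorm,
    "log-logistic": stats.fisk,
}

def aic(dist, data):
    # MLE fit with location fixed at zero; AIC = 2k - 2*logL.
    params = dist.fit(data, floc=0)
    loglik = np.sum(dist.logpdf(data, *params))
    k = len(params) - 1  # loc was fixed, not estimated
    return 2 * k - 2 * loglik

scores = {name: aic(dist, forces) for name, dist in candidates.items()}
best = min(scores, key=scores.get)
print(best, scores)
```

The distribution with the lowest AIC is retained; in the study this was the Weibull.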

  11. Risk Assessment of Bone Fracture During Space Exploration Missions to the Moon and Mars

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth E.; Myers, Jerry G.; Nelson, Emily S.; Licatta, Angelo; Griffin, Devon

    2007-01-01

    The possibility of a traumatic bone fracture in space is a concern due to the observed decrease in astronaut bone mineral density (BMD) during spaceflight and because of the physical demands of the mission. The Bone Fracture Risk Module (BFxRM) was developed to quantify the probability of fracture at the femoral neck and lumbar spine during space exploration missions. The BFxRM is scenario-based, providing predictions for specific activities or events during a particular space mission. The key elements of the BFxRM are the mission parameters, the biomechanical loading models, the bone loss and fracture models and the incidence rate of the activity or event. Uncertainties in the model parameters arise due to variations within the population and unknowns associated with the effects of the space environment. Consequently, parameter distributions were used in Monte Carlo simulations to obtain an estimate of fracture probability under real mission scenarios. The model predicts an increase in the probability of fracture as the mission length increases and fracture is more likely in the higher gravitational field of Mars than on the moon. The resulting probability predictions and sensitivity analyses of the BFxRM can be used as an engineering tool for mission operation and resource planning in order to mitigate the risk of bone fracture in space.
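
    A Monte Carlo scheme of the kind the BFxRM uses can be sketched as below. Every distribution and constant here is an invented placeholder rather than a calibrated BFxRM input, but the structure (sample uncertain strength and load, count exceedances) and the Moon-versus-Mars gravity comparison follow the description above.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000
mission_days = 180

def fracture_probability(g_fraction: float) -> float:
    """Monte Carlo estimate; all distributions are illustrative stand-ins."""
    baseline_strength = rng.normal(7.2, 1.1, N)        # femoral strength, kN
    daily_loss = rng.uniform(0.004, 0.010, N) / 30.0   # BMD fraction lost per day
    strength = baseline_strength * (1.0 - daily_loss * mission_days)
    # Fall-induced load scales with local gravity (Moon ~0.17 g, Mars ~0.38 g).
    load = g_fraction * rng.lognormal(np.log(8.0), 0.35, N)  # kN
    return float(np.mean(load > strength))

p_moon, p_mars = fracture_probability(0.17), fracture_probability(0.38)
print(p_moon, p_mars)
```

Consistent with the model's prediction, the estimated fracture probability is higher in the stronger Martian gravity.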

  13. Effect of cyclic and static tensile loading on water content and solute diffusion in canine flexor tendons: an in vitro study.

    PubMed

    Hannafin, J A; Arnoczky, S P

    1994-05-01

    This study was designed to determine the effects of various loading conditions (no load and static and cyclic tensile load) on the water content and pattern of nutrient diffusion of canine flexor tendons in vitro. Region D (designated by Okuda et al.) of the flexor digitorum profundus was subjected to a cyclic or static tensile load of 100 g for times ranging from 5 minutes to 24 hours. The results demonstrated a statistically significant loss of water in tendons subjected to both types of load as compared with the controls (no load). This loss appeared to progress with time. However, neither static nor cyclic loading appeared to alter the diffusion of 3H-glucose into the tendon over a 24-hour period compared with the controls. These results suggest that any benefit in tendon repair derived from intermittent passive motion is probably not a result of an increase in the diffusion of small nutrients in response to intermittent tensile load.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Chanyoung; Kim, Nam H.

    Structural elements, such as stiffened panels and lap joints, are basic components of aircraft structures. For aircraft structural design, designers select predesigned elements satisfying the design load requirement based on their load-carrying capabilities. Therefore, estimating the safety envelope of structural elements for load tolerances would be a good investment for design purposes. In this article, a method of estimating the safety envelope is presented using probabilistic classification, which can estimate a specific level of failure probability under both aleatory and epistemic uncertainties. An important contribution of this article is that the calculation uncertainty is reflected in building a safety envelope using a Gaussian process, and the effect of element test data on reducing the calculation uncertainty is incorporated by updating the Gaussian process model with the element test data. It is shown that even one element test can significantly reduce the calculation uncertainty due to lack of knowledge of the actual physics, so that conservativeness in a safety envelope is significantly reduced. The proposed approach was demonstrated with a cantilever beam example, which represents a structural element. The example shows that calculation uncertainty provides about 93% conservativeness against the uncertainty due to a few element tests. As a result, it is shown that even a single element test can increase the load tolerance modeled with the safety envelope by 20%.
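
    A toy version of such a Gaussian-process safety envelope might look like the sketch below. The linear margin model, noise level, and all numbers are invented for illustration and are not the article's element data; the idea shown is only that the envelope is the largest load whose predicted failure probability, under the GP's calculation uncertainty, stays below a target.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

# "Computed" failure margins at a few load levels (margin < 0 means failure).
# The linear margin model stands in for expensive structural analyses.
loads = np.linspace(0.0, 10.0, 8).reshape(-1, 1)
margins = 5.0 - 0.6 * loads.ravel() + rng.normal(0.0, 0.1, loads.shape[0])

# The GP captures calculation uncertainty between the analyzed load levels.
kernel = ConstantKernel(1.0, (1e-2, 1e3)) * RBF(length_scale=3.0)
gp = GaussianProcessRegressor(kernel=kernel, alpha=0.1**2, normalize_y=True)
gp.fit(loads, margins)

grid = np.linspace(0.0, 10.0, 201).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)
p_fail = norm.cdf(0.0, loc=mean, scale=std)   # P(margin < 0) at each load

# Safety envelope: largest load whose predicted failure probability < 1%.
envelope = grid.ravel()[p_fail < 0.01].max()
print(round(envelope, 2))
```

Adding element test data would correspond to updating the GP with new (load, margin) observations, shrinking `std` and widening the envelope.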

  15. Hereditary hemochromatosis is characterized by a clinically definable arthropathy that correlates with iron load.

    PubMed

    Carroll, G J; Breidahl, W H; Bulsara, M K; Olynyk, J K

    2011-01-01

    To determine the frequency and character of arthropathy in hereditary hemochromatosis (HH) and to investigate the relationship between this arthropathy, nodal interphalangeal osteoarthritis, and iron load. Participants were recruited from the community by newspaper advertisement and assigned to diagnostic confidence categories for HH (definite/probable or possible/unlikely). Arthropathy was determined by use of a predetermined clinical protocol, radiographs of the hands of all participants, and radiographs of other joints in which clinical criteria were met. An arthropathy considered typical for HH, involving metacarpophalangeal joints 2-5 and bilateral specified large joints, was observed in 10 of 41 patients with definite or probable HH (24%), all of whom were homozygous for the C282Y mutation in the HFE gene, while only 2 of 62 patients with possible/unlikely HH had such an arthropathy (P=0.0024). Arthropathy in definite/probable HH was more common with increasing age and was associated with ferritin concentrations>1,000 μg/liter at the time of diagnosis (odds ratio 14.0 [95% confidence interval 1.30-150.89], P=0.03). A trend toward more episodes requiring phlebotomy was also observed among those with arthropathy, but this was not statistically significant (odds ratio 1.03 [95% confidence interval 0.99-1.06], P=0.097). There was no significant association between arthropathy in definite/probable HH and a history of intensive physical labor (P=0.12). An arthropathy consistent with that commonly attributed to HH was found to occur in 24% of patients with definite/probable HH. The association observed between this arthropathy, homozygosity for C282Y, and serum ferritin concentrations at the time of diagnosis suggests that iron load is likely to be a major determinant of arthropathy in HH and to be more important than occupational factors. Copyright © 2011 by the American College of Rheumatology.

  16. Modeling Operator Performance in Low Task Load Supervisory Domains

    DTIC Science & Technology

    2011-06-01

    PDF Probability Distribution Function SAFE System for Aircrew Fatigue Evaluation SAFTE Sleep, Activity, Fatigue, and Task Effectiveness SCT...attentional capacity due to high mental workload. In low task load settings, fatigue is mainly caused by lack of sleep and boredom experienced by...performance decrements. Also, psychological fatigue is strongly correlated with lack of sleep. Not surprisingly, operators of the morning shift reported the

  17. Assessment of Mental, Emotional and Physical Stress through Analysis of Physiological Signals Using Smartphones.

    PubMed

    Mohino-Herranz, Inma; Gil-Pita, Roberto; Ferreira, Javier; Rosa-Zurera, Manuel; Seoane, Fernando

    2015-10-08

    Determining the stress level of a subject in real time could be of special interest in certain professional activities to allow the monitoring of soldiers, pilots, emergency personnel and other professionals responsible for human lives. Assessment of current mental fitness for executing a task at hand might avoid unnecessary risks. To obtain this knowledge, two physiological measurements were recorded in this work using customized non-invasive wearable instrumentation that measures electrocardiogram (ECG) and thoracic electrical bioimpedance (TEB) signals. The relevant information from each measurement is extracted via evaluation of a reduced set of selected features. These features are primarily obtained from filtered and processed versions of the raw time measurements with calculations of certain statistical and descriptive parameters. Selection of the reduced set of features was performed using genetic algorithms, thus constraining the computational cost of the real-time implementation. Different classification approaches have been studied, but neural networks were chosen for this investigation because they represent a good tradeoff between the intelligence of the solution and computational complexity. Three different application scenarios were considered. In the first scenario, the proposed system is capable of distinguishing among different types of activity with a 21.2% probability error, for activities coded as neutral, emotional, mental and physical. In the second scenario, the proposed solution distinguishes among the three different emotional states of neutral, sadness and disgust, with a probability error of 4.8%. In the third scenario, the system is able to distinguish between low mental load and mental overload with a probability error of 32.3%. The computational cost was calculated, and the solution was implemented in commercially available Android-based smartphones. 
The results indicate that the computational cost of executing such a monitoring solution is negligible compared to the nominal computational load of current smartphones.

  18. Organic matter dynamics and stable isotopes for tracing sources of suspended sediment

    NASA Astrophysics Data System (ADS)

    Schindler Wildhaber, Y.; Liechti, R.; Alewell, C.

    2012-01-01

    Suspended sediment (SS) and organic matter in rivers can harm brown trout Salmo trutta by impact on the health and fitness of free-swimming fish and by siltation of the riverbed. The latter results in a decrease of hydraulic conductivity and therefore a smaller oxygen supply to the salmonid embryos. Additionally, oxygen demand within riverbeds will increase as the pool of organic matter increases. We assessed the temporal and spatial dynamics of sediment, carbon (C) and nitrogen (N) during the brown trout spawning season and used C isotopes as well as the C/N atomic ratio to distinguish autochthonous and allochthonous sources of organic matter in SS loads. The Visual Basic program IsoSource, with δ13Ctot and δ15N as input isotopes, was used to quantify the sources of SS with respect to time and space. Organic matter fractions in the infiltrated and suspended sediment were highest during low flow periods with small sediment loads and lowest during high flow periods with high sediment loads. Peak values in nitrate and dissolved organic C were measured during high flow and precipitation, probably due to leaching from pasture and arable land. The organic matter was from allochthonous sources, as indicated by the C/N atomic ratio and δ13Corg. Organic matter in SS increased from up- to downstream due to pasture and arable land. The fraction of SS originating from upper watershed riverbed sediment increased at all sites during high flow. Its mean fraction decreased from up- to downstream. During base flow conditions, the major sources of SS are pasture and arable land. The latter increased during rainy and warmer periods, probably due to snow melting and erosion processes. These modeling results support the measured increases in DOC and NO3 concentrations during high flow.

  19. Probabilistic Design of a Plate-Like Wing to Meet Flutter and Strength Requirements

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Krishnamurthy, T.; Mason, Brian H.; Smith, Steven A.; Naser, Ahmad S.

    2002-01-01

    An approach is presented for carrying out reliability-based design of a metallic, plate-like wing to meet strength and flutter requirements that are given in terms of risk/reliability. The design problem is to determine the thickness distribution such that wing weight is a minimum and the probability of failure is less than a specified value. Failure is assumed to occur if either the flutter speed is less than a specified allowable or the stress caused by a pressure loading is greater than a specified allowable. Four uncertain quantities are considered: wing thickness, calculated flutter speed, allowable stress, and magnitude of a uniform pressure load. The reliability-based design optimization approach described herein starts with a design obtained using conventional deterministic design optimization with margins on the allowables. Reliability is calculated using Monte Carlo simulation with response surfaces that provide values of stresses and flutter speed. During the reliability-based design optimization, the response surfaces and move limits are coordinated to ensure accuracy of the response surfaces. Studies carried out in the paper show the relationship between reliability and weight and indicate that, for the design problem considered, increases in reliability can be obtained with modest increases in weight.

  20. Emerging organic contaminants in vertical subsurface flow constructed wetlands: influence of media size, loading frequency and use of active aeration.

    PubMed

    Avila, Cristina; Nivala, Jaime; Olsson, Linda; Kassa, Kinfe; Headley, Tom; Mueller, Roland A; Bayona, Josep Maria; García, Joan

    2014-10-01

    Four side-by-side pilot-scale vertical flow (VF) constructed wetlands of different designs were evaluated for the removal of eight widely used emerging organic contaminants from municipal wastewater (i.e. ibuprofen, acetaminophen, diclofenac, tonalide, oxybenzone, triclosan, ethinylestradiol, bisphenol A). Three of the systems were free-draining, with one containing a gravel substrate (VGp), while the other two contained sand substrate (VS1p and VS2p). The fourth system had a saturated gravel substrate and active aeration supplied across the bottom of the bed (VAp). All beds were pulse-loaded on an hourly basis, except VS2p, which was pulse-loaded every 2 h. Each system had a surface area of 6.2 m(2), received a hydraulic loading rate of 95 mm/day and was planted with Phragmites australis. The beds received an organic loading rate of 7-16 g TOC/m(2)·d. The sand-based VF (VS1p) performed significantly better (p<0.05) than the gravel-based wetland (VGp) both in the removal of conventional water quality parameters (TSS, TOC, NH4-N) and of the studied emerging organic contaminants, except for diclofenac (85 ± 17% vs. 74 ± 15% average emerging organic contaminant removal for VS1p and VGp, respectively). Although loading frequency (hourly vs. bi-hourly) was not observed to affect the removal efficiency of the cited conventional water quality parameters, significantly lower removal efficiencies were found for tonalide and bisphenol A in the VF wetland that received bi-hourly dosing (VS2p) (higher volume per pulse), probably due to the more reducing conditions observed in that system. However, diclofenac was the only contaminant showing an opposite trend to the rest of the compounds, achieving higher elimination rates in the wetlands that exhibited less-oxidizing conditions (VS2p and VGp). 
The use of active aeration in the saturated gravel bed (VAp) generally improved the treatment performance compared to the free-draining gravel bed (VGp) and achieved a similar performance to the free-draining sand-based VF wetlands (VS1p). Copyright © 2014 Elsevier B.V. All rights reserved.

  1. ZERODUR - bending strength: review of achievements

    NASA Astrophysics Data System (ADS)

    Hartmann, Peter

    2017-08-01

    Increased demand for using the glass ceramic ZERODUR® under high mechanical loads called for strength data based on larger statistical samples. Design calculations for a failure probability target value below 1:100,000 cannot be made reliable with parameters derived from 20-specimen samples. The data now available for a variety of surface conditions, ground with different grain sizes and acid-etched for full micro crack removal, allow stresses four to ten times higher than before. The large sample revealed that breakage stresses of ground surfaces follow the three-parameter Weibull distribution instead of the two-parameter version. This is more reasonable considering that the micro cracks of such surfaces have a maximum depth, which is reflected in the existence of a threshold breakage stress below which the breakage probability is zero. This minimum strength allows calculating minimum lifetimes. Fatigue under load can be taken into account by using the stress corrosion coefficient for the actual environmental humidity. For fully etched surfaces, Weibull statistics fail: the precondition of the Weibull distribution, the existence of one unique failure mechanism, is no longer given. ZERODUR® with fully etched surfaces free from damage introduced after etching easily endures 100 MPa tensile stress. The possibility of using ZERODUR® for combined high-precision and high-stress applications was confirmed by the successful launch and continuing operation of LISA Pathfinder, the precursor experiment for the gravitational wave antenna satellite array eLISA.
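
    The three-parameter Weibull form described above, with its threshold stress below which breakage probability is zero, can be written down directly. The parameter values in this sketch are invented for illustration and are not measured ZERODUR® values.

```python
import numpy as np

def weibull3_failure_prob(stress, threshold, scale, shape):
    """Three-parameter Weibull CDF: failure probability is exactly zero
    below the threshold stress (the maximum-crack-depth cutoff)."""
    stress = np.asarray(stress, dtype=float)
    z = np.clip(stress - threshold, 0.0, None) / scale
    return 1.0 - np.exp(-z**shape)

# Illustrative parameters for a ground surface (hypothetical, in MPa):
# threshold 30, scale 25, shape 3.
for s in (20.0, 30.0, 50.0, 80.0):
    print(s, weibull3_failure_prob(s, threshold=30.0, scale=25.0, shape=3.0))
```

A two-parameter fit would instead assign a small but nonzero failure probability to every stress level, which is why the threshold form matters for minimum-lifetime calculations.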

  2. Reliability analysis of structures under periodic proof tests in service

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1976-01-01

    A reliability analysis of structures subjected to random service loads and periodic proof tests treats gust loads and maneuver loads as random processes. Crack initiation, crack propagation, and strength degradation are treated as the fatigue process. The time to fatigue crack initiation and ultimate strength are random variables. Residual strength decreases during crack propagation, so that failure rate increases with time. When a structure fails under periodic proof testing, a new structure is built and proof-tested. The probability of structural failure in service is derived from treatment of all the random variables, strength degradations, service loads, proof tests, and the renewal of failed structures. Some numerical examples are worked out.

  3. Generating electricity while walking with loads.

    PubMed

    Rome, Lawrence C; Flynn, Louis; Goldman, Evan M; Yoo, Taeseung D

    2005-09-09

    We have developed the suspended-load backpack, which converts mechanical energy from the vertical movement of carried loads (weighing 20 to 38 kilograms) to electricity during normal walking [generating up to 7.4 watts, or a 300-fold increase over previous shoe devices (20 milliwatts)]. Unexpectedly, little extra metabolic energy (as compared to that expended carrying a rigid backpack) is required during electricity generation. This is probably due to a compensatory change in gait or loading regime, which reduces the metabolic power required for walking. This electricity generation can help give field scientists, explorers, and disaster-relief workers freedom from the heavy weight of replacement batteries and thereby extend their ability to operate in remote areas.

  4. An Approach to Risk-Based Design Incorporating Damage Tolerance Analyses

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Glaessgen, Edward H.; Sleight, David W.

    2002-01-01

    Incorporating risk-based design as an integral part of spacecraft development is becoming more and more common. Assessment of uncertainties associated with design parameters and environmental aspects such as loading provides increased knowledge of the design and its performance. Results of such studies can contribute to mitigating risk through a system-level assessment. Understanding the risk of an event occurring, the probability of its occurrence, and the consequences of its occurrence can lead to robust, reliable designs. This paper describes an approach to risk-based structural design incorporating damage-tolerance analysis. The application of this approach to a candidate Earth-entry vehicle is described. The emphasis of the paper is on describing an approach for establishing damage-tolerant structural response inputs to a system-level probabilistic risk assessment.

  5. A procedure for combining acoustically induced and mechanically induced loads (first passage failure design criterion)

    NASA Technical Reports Server (NTRS)

    Crowe, D. R.; Henricks, W.

    1983-01-01

    The combined load statistics are developed by taking the acoustically induced load to be a random population, assumed to be stationary. Each element of this ensemble of acoustically induced loads is assumed to have the same power spectral density (PSD), obtained previously from a random response analysis employing the given acoustic field in the STS cargo bay as a stationary random excitation. The mechanically induced load is treated as either (1) a known deterministic transient, or (2) a nonstationary random variable of known first and second statistical moments which vary with time. A method is then shown for determining the probability that the combined load would, at any time, have a value equal to or less than a certain level. Having obtained a statistical representation of how the acoustic and mechanical loads are expected to combine, an analytical approximation for defining design levels for these loads is presented using the First Passage failure criterion.

  6. An approximate methods approach to probabilistic structural analysis

    NASA Technical Reports Server (NTRS)

    Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.

    1989-01-01

    A probabilistic structural analysis method (PSAM) is described which makes an approximate calculation of the structural response of a system, including the associated probabilistic distributions, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The method employs the fast probability integration (FPI) algorithm of Wu and Wirsching. Typical solution strategies are illustrated by formulations for a representative critical component chosen from the Space Shuttle Main Engine (SSME) as part of a major NASA-sponsored program on PSAM. Typical results are presented to demonstrate the role of the methodology in engineering design and analysis.

  7. Validation of the CNS Penetration-Effectiveness Rank for Quantifying Antiretroviral Penetration Into the Central Nervous System

    PubMed Central

    Letendre, Scott; Marquie-Beck, Jennifer; Capparelli, Edmund; Best, Brookie; Clifford, David; Collier, Ann C.; Gelman, Benjamin B.; McArthur, Justin C.; McCutchan, J. Allen; Morgello, Susan; Simpson, David; Grant, Igor; Ellis, Ronald J.

    2009-01-01

    Objective To evaluate whether penetration of a combination regimen into the central nervous system (CNS), as estimated by the CNS Penetration-Effectiveness (CPE) rank, is associated with lower cerebrospinal fluid (CSF) viral load. Design Data were analyzed from 467 participants who were human immunodeficiency virus (HIV) seropositive and who reported antiretroviral (ARV) drug use. Individual ARV drugs were assigned a penetration rank of 0 (low), 0.5 (intermediate), or 1 (high) based on their chemical properties, concentrations in CSF, and/or effectiveness in the CNS in clinical studies. The CPE rank was calculated by summing the individual penetration ranks for each ARV in the regimen. Results The median CPE rank was 1.5 (interquartile range, 1–2). Lower CPE ranks correlated with higher CSF viral loads. Ranks less than 2 were associated with an 88% increase in the odds of detectable CSF viral load. In multivariate regression, lower CPE ranks were associated with detectable CSF viral loads even after adjusting for total number of ARV drugs, ARV drug adherence, plasma viral load, duration and type of the current regimen, and CD4 count. Conclusions Poorer penetration of ARV drugs into the CNS appears to allow continued HIV replication in the CNS as indicated by higher CSF HIV viral loads. Because inhibition of HIV replication in the CNS is probably critical in treating patients who have HIV-associated neurocognitive disorders, ARV treatment strategies that account for CNS penetration should be considered in consensus treatment guidelines and validated in clinical studies. PMID:18195140
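
    The CPE calculation itself is just a sum of per-drug penetration ranks; a minimal sketch (with placeholder drug names and ranks, not the published per-drug assignments) is:

```python
# Each antiretroviral in a regimen gets a penetration rank of 0 (low),
# 0.5 (intermediate), or 1 (high); the regimen's CPE rank is their sum.
# These ranks are placeholders for illustration only.
PENETRATION_RANK = {"drugA": 1.0, "drugB": 0.5, "drugC": 0.0, "drugD": 0.5}

def cpe_rank(regimen):
    return sum(PENETRATION_RANK[drug] for drug in regimen)

regimen = ["drugA", "drugB", "drugC"]
print(cpe_rank(regimen))  # 1.5, matching the study's median CPE rank
```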

  8. A comparison of reliability and conventional estimation of safe fatigue life and safe inspection intervals

    NASA Technical Reports Server (NTRS)

    Hooke, F. H.

    1972-01-01

    Both the conventional and reliability analyses for determining safe fatigue life are predicated on a population having a specified (usually log normal) distribution of life to collapse under a fatigue test load. Under a random service load spectrum, random occurrences of loads larger than the fatigue test load may confront, and cause the collapse of, structures which are weakened, though not yet to the fatigue test load level. These collapses are included in reliability analysis but excluded in conventional analysis. The theory of risk determination by each method is given, and several reasonably typical examples have been worked out, in which it transpires that if one excludes collapse through exceedance of the uncracked strength, the reliability and conventional analyses give virtually identical probabilities of failure or survival.

  9. Probabilistic evaluation of uncertainties and risks in aerospace components

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Shiao, M. C.; Nagpal, V. K.; Chamis, C. C.

    1992-01-01

    This paper summarizes a methodology developed at NASA Lewis Research Center which computationally simulates the structural, material, and load uncertainties associated with Space Shuttle Main Engine (SSME) components. The methodology was applied to evaluate the scatter in the static, buckling, dynamic, fatigue, and damage behavior of the SSME turbopump blade. Also calculated are the probability densities of typical critical blade responses, such as effective stress, natural frequency, damage initiation, and most probable damage path. Risk assessments were performed for different failure modes, and the effect of material degradation on the fatigue and damage behaviors of a blade was calculated using a multi-factor interaction equation. Failure probabilities for different fatigue cycles were computed, and the uncertainties associated with damage initiation and damage propagation due to different load cycles were quantified. The effects of mistuned blades on a rotor were evaluated; uncertainties in the excitation frequency were found to significantly amplify the blade responses of a mistuned rotor. The effects of the number of blades on a rotor were studied. The autocorrelation function of displacements and the probability density function of the first passage time for deterministic and random barriers for structures subjected to random processes were also computed. A brief discussion is included on the future direction of probabilistic structural analysis.

  10. Fatigue Behavior of Computer-Aided Design/Computer-Assisted Manufacture Ceramic Abutments as a Function of Design and Ceramics Processing.

    PubMed

    Kelly, J Robert; Rungruanganunt, Patchnee

    2016-01-01

    Zirconia is being widely used, at times apparently by simply copying a metal design into ceramic. Structurally, ceramics are sensitive to both design and processing (fabrication) details. The aim of this work was to examine four computer-aided design/computer-assisted manufacture (CAD/CAM) abutments using a modified International Standards Organization (ISO) implant fatigue protocol to determine performance as a function of design and processing. Two full zirconia and two hybrid (Ti-based) abutments (n = 12 each) were tested wet at 15 Hz at a variety of loads to failure. Failure probability distributions were examined at each load, and when found to be the same, data from all loads were combined for lifetime analysis from accelerated to clinical conditions. Two distinctly different failure modes were found for both full zirconia and Ti-based abutments. One of these for zirconia has been reported clinically in the literature, and one for the Ti-based abutments has been reported anecdotally. The ISO protocol modification in this study forced failures in the abutments; no implant bodies failed. Extrapolated cycles for 10% failure at 70 N were: full zirconia, Atlantis 2 × 10(7) and Straumann 3 × 10(7); and Ti-based, Glidewell 1 × 10(6) and Nobel 1 × 10(21). Under accelerated conditions (200 N), performance differed significantly: Straumann clearly outperformed Astra (t test, P = .013), and the Glidewell Ti-base abutment also outperformed Atlantis zirconia at 200 N (Nobel ran out; t test, P = .035). The modified ISO protocol in this study produced failures that are seen clinically. Manufacturing matters; differences in design and fabrication that influence performance cannot be discerned clinically.

  11. Developing cross entropy genetic algorithm for solving Two-Dimensional Loading Heterogeneous Fleet Vehicle Routing Problem (2L-HFVRP)

    NASA Astrophysics Data System (ADS)

    Paramestha, D. L.; Santosa, B.

    2018-04-01

    The Two-Dimensional Loading Heterogeneous Fleet Vehicle Routing Problem (2L-HFVRP) is a combination of the Heterogeneous Fleet VRP and a packing problem well known as the Two-Dimensional Bin Packing Problem (BPP). 2L-HFVRP is a Heterogeneous Fleet VRP in which customer demands are formed by a set of two-dimensional rectangular weighted items. These demands must be served from the depot by a heterogeneous fleet of vehicles with fixed and variable costs. The objective of 2L-HFVRP is to minimize the total transportation cost. All formed routes must be consistent with the capacity and loading process of the vehicle. Sequential and unrestricted scenarios are considered in this paper. We propose a metaheuristic that combines the Genetic Algorithm (GA) and Cross Entropy (CE), named the Cross Entropy Genetic Algorithm (CEGA), to solve the 2L-HFVRP. The mutation concept from GA is used to speed up the CE algorithm in finding the optimal solution. The mutation mechanism is based on local improvement (2-opt, 1-1 Exchange, and 1-0 Exchange). The probability transition matrix mechanism of CE is used to avoid getting stuck in a local optimum. The effectiveness of CEGA was tested on benchmark 2L-HFVRP instances. The experimental results show competitive performance compared with other algorithms.
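
    As a concrete illustration of the 2-opt local improvement used in the mutation step, the sketch below repeatedly reverses route segments while doing so shortens a closed tour. The coordinates are made up, and the real CEGA operates on 2L-HFVRP routes with loading-feasibility checks that are omitted here.

```python
import math
import itertools

def route_length(route, pts):
    """Length of the closed tour visiting pts in the given order."""
    return sum(math.dist(pts[a], pts[b]) for a, b in zip(route, route[1:] + route[:1]))

def two_opt(route, pts):
    """First-improvement 2-opt: reverse a segment whenever it shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(1, len(route) + 1), 2):
            candidate = route[:i] + route[i:j][::-1] + route[j:]
            if route_length(candidate, pts) < route_length(route, pts) - 1e-9:
                route, improved = candidate, True
    return route

# Four customers on a unit square; the self-crossing tour 0-2-1-3 is untangled.
pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
best = two_opt([0, 2, 1, 3], pts)
print(best, route_length(best, pts))  # [0, 2, 3, 1] 4.0
```

In CEGA this kind of deterministic improvement acts as the mutation operator, while the CE transition matrix keeps the search from collapsing into one local optimum.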

  12. Rotorcraft fatigue life-prediction: Past, present, and future

    NASA Technical Reports Server (NTRS)

    Everett, Richard A., Jr.; Elber, W.

    1994-01-01

    In this paper the methods used for calculating the fatigue life of metallic dynamic components in rotorcraft are reviewed. In the past, rotorcraft fatigue design has combined constant-amplitude tests of full-scale parts with flight loads and usage data in a conservative manner to provide 'safe life' component replacement times. This is in contrast to other industries, such as the automobile industry, where spectrum loading in fatigue testing is a part of the design procedure. Traditionally, the linear cumulative damage rule has been used in a deterministic manner with a conservative value for fatigue strength based on a one-in-a-thousand probability of failure. Conservatism on load and usage is also often employed. This procedure is discussed along with the current U.S. Army fatigue life specification for new rotorcraft, the so-called 'six nines' reliability requirement. If the six nines requirement is to be met with a minimum impact on structural weight, the exploration and adoption of new approaches in design and fleet management may be necessary. To this end, a fracture mechanics approach to fatigue life design may be required to provide a more accurate estimate of damage progression. Also reviewed in this paper is a fracture mechanics approach for calculating total fatigue life that is based on crack-closure and small-crack considerations.
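    The linear cumulative damage (Palmgren-Miner) rule mentioned above sums fractional damage over the blocks of a load spectrum and predicts failure when the sum reaches one. A minimal sketch, with hypothetical S-N curve constants and a hypothetical flight-load spectrum (not values from the paper):

```python
# Palmgren-Miner linear cumulative damage: failure is predicted when
# D = sum(n_i / N_i) reaches 1. S-N constants below are hypothetical.
def cycles_to_failure(stress_amplitude, C=1e12, m=3.0):
    # Basquin-type S-N curve: N = C * S^-m
    return C * stress_amplitude ** -m

def miner_damage(spectrum):
    # spectrum: list of (stress_amplitude_MPa, applied_cycles) blocks
    return sum(n / cycles_to_failure(s) for s, n in spectrum)

# Hypothetical spectrum: cycles accumulated per 1000 flight hours.
spectrum = [(100.0, 2.0e5), (150.0, 3.0e4), (200.0, 5.0e3)]
D = miner_damage(spectrum)
life_blocks = 1.0 / D   # 1000-hour blocks until predicted failure
print(round(D, 5), round(life_blocks, 2))
```

In the safe-life practice described above, the fatigue strength entering `cycles_to_failure` would be a conservative (e.g., one-in-a-thousand) percentile rather than the mean curve.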

  13. 14 CFR 23.21 - Proof of compliance.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... center of gravity within the range of loading conditions for which certification is requested. This must... each probable combination of weight and center of gravity, if compliance cannot be reasonably inferred...

  14. 14 CFR 23.21 - Proof of compliance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... center of gravity within the range of loading conditions for which certification is requested. This must... each probable combination of weight and center of gravity, if compliance cannot be reasonably inferred...

  15. 14 CFR 23.21 - Proof of compliance.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... center of gravity within the range of loading conditions for which certification is requested. This must... each probable combination of weight and center of gravity, if compliance cannot be reasonably inferred...

  16. 14 CFR 23.21 - Proof of compliance.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... center of gravity within the range of loading conditions for which certification is requested. This must... each probable combination of weight and center of gravity, if compliance cannot be reasonably inferred...

  17. 14 CFR 23.21 - Proof of compliance.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... center of gravity within the range of loading conditions for which certification is requested. This must... each probable combination of weight and center of gravity, if compliance cannot be reasonably inferred...

  18. Bayesian Parameter Estimation for Heavy-Duty Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Eric; Konan, Arnaud; Duran, Adam

    2017-03-28

    Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of each parameter set. Acceptance of a proposed parameter set is determined from the probability ratio to the current state, so that the chain history gives a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of the estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
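    The acceptance rule described — comparing modeled to measured road load and accepting a proposed parameter set via the probability ratio to the current state — is the Metropolis algorithm. Below is a minimal sketch on synthetic data; the road load equation form, the noise level, and all parameter values are hypothetical, not those of the report:

```python
import math, random

random.seed(0)
g, rho = 9.81, 1.2   # gravity (m/s^2), air density (kg/m^3)

def road_load(v, a, mass, cda, crr):
    # Simplified road load: inertia + aerodynamic drag + rolling resistance
    return mass * a + 0.5 * rho * cda * v * v + crr * mass * g

# Synthetic "logged" data from hypothetical true parameters.
true = dict(mass=15000.0, cda=6.0, crr=0.007)
data = []
for _ in range(200):
    v, a = random.uniform(5, 30), random.uniform(-1, 1)
    data.append((v, a, road_load(v, a, **true) + random.gauss(0, 200.0)))

def log_prob(mass, cda, crr, sigma=200.0):
    # Gaussian likelihood of measured load given modeled load.
    return -sum((f - road_load(v, a, mass, cda, crr)) ** 2
                for v, a, f in data) / (2 * sigma ** 2)

# Metropolis sampling: accept via the probability ratio to the current state.
cur = [14000.0, 5.0, 0.01]
cur_lp = log_prob(*cur)
chain = []
for step in range(4000):
    prop = [cur[0] + random.gauss(0, 100.0),
            cur[1] + random.gauss(0, 0.05),
            cur[2] + random.gauss(0, 0.0005)]
    lp = log_prob(*prop)
    if random.random() < math.exp(min(0.0, lp - cur_lp)):
        cur, cur_lp = prop, lp
    chain.append(cur[0])

# Discard burn-in; the chain history approximates the mass posterior.
burned = chain[2000:]
mass_est = sum(burned) / len(burned)
print(round(mass_est, 1))
```

The spread of `burned` (not just its mean) is what gives the quality-of-estimate information the abstract highlights.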

  19. Monte Carlo simulation methodology for the reliability of aircraft structures under damage tolerance considerations

    NASA Astrophysics Data System (ADS)

    Rambalakos, Andreas

    Current federal aviation regulations in the United States and around the world mandate that aircraft structures meet damage tolerance requirements throughout the service life. These requirements imply that the damaged aircraft structure must maintain adequate residual strength to sustain its integrity, which is ensured through a continuous inspection program. The multifold objective of this research is to develop a methodology based on a direct Monte Carlo simulation process and to assess the reliability of aircraft structures. Initially, the structure is modeled as a parallel system with active redundancy, composed of elements with uncorrelated (statistically independent) strengths and subjected to an equal load distribution. Closed-form expressions for the system capacity cumulative distribution function (CDF) are developed by extending the known expression for the capacity CDF of a three-element parallel system to parallel systems with up to six elements. These newly developed expressions are used to check the accuracy of a Monte Carlo simulation algorithm that determines the probability of failure of a parallel system composed of an arbitrary number of statistically independent elements. The second objective of this work is to compute the probability of failure of a fuselage skin lap joint under static load conditions through a Monte Carlo simulation scheme, utilizing the residual strength of the fasteners subjected to various initial load distributions and then to a new unequal load distribution resulting from subsequent sequential fastener failures. The final and main objective of this thesis is to present a methodology for computing the gradual deterioration of the reliability of an aircraft structural component by employing a direct Monte Carlo simulation approach.
The uncertainties associated with the time to crack initiation, the probability of crack detection, the exponent in the crack propagation rate (Paris equation) and the yield strength of the elements are considered in the analytical model. The structural component is assumed to consist of a prescribed number of elements. This Monte Carlo simulation methodology is used to determine the required non-periodic inspections so that the reliability of the structural component will not fall below a prescribed minimum level. A sensitivity analysis is conducted to determine the effect of three key parameters on the specification of the non-periodic inspection intervals: namely a parameter associated with the time to crack initiation, the applied nominal stress fluctuation and the minimum acceptable reliability level.
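    The first step above — checking a Monte Carlo estimate of parallel-system failure probability against a closed form — can be illustrated for an equal-load-sharing parallel system. The closed form below is for the simple case of two elements with Uniform(0,1) strengths, derived here for illustration (not taken from the thesis); the strength distributions and load levels are hypothetical.

```python
import random

random.seed(7)

def bundle_capacity(strengths):
    # Equal load sharing: after the j weakest of k elements fail, the
    # remaining k - j share the load equally; the system capacity is the
    # largest load survivable along that failure sequence.
    s = sorted(strengths)
    k = len(s)
    return max((k - j) * s[j] for j in range(k))

def mc_failure_prob(k, load, draw, trials=200_000):
    fails = sum(bundle_capacity([draw() for _ in range(k)]) < load
                for _ in range(trials))
    return fails / trials

# Closed-form check: for k = 2 independent Uniform(0,1) strengths,
# failure requires min < L/2 and max < L, so P(capacity < L) = (3/4) L^2.
L = 0.5
pf_mc = mc_failure_prob(2, L, random.random)
pf_exact = 0.75 * L * L
print(round(pf_mc, 4), pf_exact)

# The verified simulator then handles larger systems (normal strengths
# with hypothetical parameters), as done for up to six elements above.
pf6 = mc_failure_prob(6, 36.0, lambda: random.gauss(10.0, 2.0))
print(round(pf6, 4))
```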

  20. Scaling of load in communications networks.

    PubMed

    Narayan, Onuttom; Saniee, Iraj

    2010-09-01

    We show that the load at each node in a preferential attachment network scales as a power of the degree of the node. For a network whose degree distribution is p(k) ∼ k^{-γ}, we show that the load is l(k) ∼ k^{η} with η = γ - 1, implying that the probability distribution for the load is p(l) ∼ 1/l^{2}, independent of γ. The results are obtained through scaling arguments supported by finite-size scaling studies. They contradict earlier claims, but are in agreement with the exact solution for the special case of tree graphs. Results are also presented for real communications networks at the IP layer, using the latest available data. Our analysis of the data shows relatively poor power-law degree distributions as compared to the scaling of the load versus degree. This emphasizes the importance of the load in network analysis.
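    The quoted exponents follow from a change of variables from the degree distribution to the load distribution:

```latex
% Given p(k) \sim k^{-\gamma} and l(k) \sim k^{\eta}, conservation of
% probability under the change of variables k \to l gives
p(l)\,dl = p(k)\,dk
\quad\Rightarrow\quad
p(l) \sim k^{-\gamma}\,\frac{dk}{dl}
     \sim l^{-\gamma/\eta}\, l^{1/\eta - 1}
     = l^{-\,(\gamma - 1)/\eta \;-\; 1}.
% Substituting \eta = \gamma - 1 yields p(l) \sim l^{-2}, independent of \gamma.
```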

  1. Load-carriage distance run and push-ups tests: no body mass bias and occupationally relevant.

    PubMed

    Vanderburgh, Paul M; Mickley, Nicholas S; Anloague, Philip A

    2011-09-01

    Recent research has demonstrated body mass (M) bias in military physical fitness tests favoring lighter, not just leaner, service members. Mathematical modeling predicts that a distance run carrying a backpack of 30 lbs would eliminate M-bias. The purpose of this study was to empirically test this prediction for the U.S. Army push-ups and 2-mile run tests. Two tests were performed for both events for each of 56 university Reserve Officer Training Corps male cadets: with (loaded) and without backpack (unloaded). Results indicated significant M-bias in the unloaded and no M-bias in the loaded condition for both events. Allometrically scaled scores for both events were worse in the loaded vs. unloaded conditions, supporting a hypothesis not previously tested. The loaded push-ups and 2-mile run appear to remove M-bias and are probably more occupationally relevant as military personnel are often expected to carry external loads.

  2. Prediction of shock initiation thresholds and ignition probability of polymer-bonded explosives using mesoscale simulations

    NASA Astrophysics Data System (ADS)

    Kim, Seokpum; Wei, Yaochi; Horie, Yasuyuki; Zhou, Min

    2018-05-01

    The design of new materials requires establishment of macroscopic measures of material performance as functions of microstructure. Traditionally, this process has been an empirical endeavor. An approach to computationally predict the probabilistic ignition thresholds of polymer-bonded explosives (PBXs) using mesoscale simulations is developed. The simulations explicitly account for microstructure, constituent properties, and interfacial responses and capture the processes responsible for the development of hotspots and damage. The specific mechanisms tracked include viscoelasticity, viscoplasticity, fracture, post-fracture contact, frictional heating, and heat conduction. The probabilistic analysis uses sets of statistically similar microstructure samples to directly mimic relevant experiments for quantification of the statistical variations of material behavior due to inherent material heterogeneities. The predicted thresholds and ignition probabilities are expressed in James-type and Walker-Wasley-type relations, leading to explicit analytical expressions for the ignition probability as a function of loading. Specifically, the ignition thresholds corresponding to any given level of ignition probability and ignition probability maps are predicted for PBX 9404 for the loading regime Up = 200-1200 m/s, where Up is the particle speed. The predicted results are in good agreement with available experimental measurements. A parametric study also shows that binder properties can significantly affect the macroscopic ignition behavior of PBXs. The capability to computationally predict the macroscopic engineering material response relations from material microstructures and basic constituent and interfacial properties lends itself to the design of new materials as well as the analysis of existing materials.
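    A Walker-Wasley-type relation characterizes shock initiation through a P²τ variable (pressure squared times pulse duration), and a probabilistic threshold can be expressed as a distribution over that variable. The sketch below uses a lognormal threshold with hypothetical constants — not the fitted PBX 9404 values from the paper:

```python
import math

def walker_wasley_metric(pressure_gpa, duration_us):
    # Walker-Wasley-type shock initiation variable: P^2 * tau
    return pressure_gpa ** 2 * duration_us

def ignition_probability(metric, mu=math.log(20.0), sigma=0.5):
    # Probabilistic threshold: lognormal CDF in the P^2*tau variable.
    # mu and sigma are hypothetical fit constants for illustration only.
    z = (math.log(metric) - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

# Sweep pulse pressure at fixed duration; probability rises through 50%
# where P^2*tau equals the median threshold exp(mu) = 20 GPa^2*us.
for p in (2.0, 4.0, 6.0):
    m = walker_wasley_metric(p, duration_us=1.0)
    print(p, round(ignition_probability(m), 3))
```

Inverting `ignition_probability` at a chosen probability level gives the corresponding threshold curve, which is how probability-level thresholds like those in the abstract can be read off.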

  3. Probability-Based Design Criteria of the ASCE 7 Tsunami Loads and Effects Provisions (Invited)

    NASA Astrophysics Data System (ADS)

    Chock, G.

    2013-12-01

    Mitigation of tsunami risk requires a combination of emergency preparedness for evacuation and structural resilience of the critical facilities, infrastructure, and key resources necessary for immediate response and for economic and social recovery. Critical facilities include emergency response, medical, tsunami refuges and shelters, ports and harbors, lifelines, transportation, telecommunications, power, financial institutions, and major industrial/commercial facilities. The Tsunami Loads and Effects Subcommittee of the ASCE/SEI 7 Standards Committee is developing a proposed new Chapter 6, Tsunami Loads and Effects, for the 2016 edition of the ASCE 7 Standard. ASCE 7 provides the minimum design loads and requirements for structures subject to building codes such as the International Building Code utilized in the USA. In this paper we review the intent of these new code provisions and explain the design methodology. The ASCE 7 provisions for Tsunami Loads and Effects enable a set of analysis and design methodologies that are consistent with performance-based engineering based on probabilistic criteria. The ASCE 7 Tsunami Loads and Effects chapter will initially be applicable only to the states of Alaska, Washington, Oregon, California, and Hawaii. Ground shaking effects and subsidence from a preceding local offshore Maximum Considered Earthquake will also be considered prior to tsunami arrival for Alaska and the Pacific Northwest states governed by nearby offshore subduction earthquakes. For national tsunami design provisions to achieve a consistent reliability standard of structural performance for community resilience, a new generation of tsunami inundation hazard maps for design is required. The lesson of recent tsunamis is that historical records alone do not provide a sufficient measure of the potential heights of future tsunamis. 
Engineering design must consider the occurrence of events greater than scenarios in the historical record, and should properly be based on the underlying seismicity of subduction zones. Therefore, Probabilistic Tsunami Hazard Analysis (PTHA) consistent with source seismicity must be performed in addition to consideration of historical event scenarios. A method of Probabilistic Tsunami Hazard Analysis has been established that is generally consistent with Probabilistic Seismic Hazard Analysis in the treatment of uncertainty. These new tsunami design zone maps will define the coastal zones where structures of greater importance would be designed for tsunami resistance and community resilience. Structural member acceptability criteria will be based on performance objectives for a 2,500-year Maximum Considered Tsunami. The approach developed by the ASCE Tsunami Loads and Effects Subcommittee of the ASCE 7 Standard would result in the first national unification of tsunami hazard criteria for design codes reflecting the modern approach of Performance-Based Engineering.

  4. The Study of the Relationship between Probabilistic Design and Axiomatic Design Methodology. Volume 2

    NASA Technical Reports Server (NTRS)

    Onwubiko, Chin-Yere; Onyebueke, Landon

    1996-01-01

    The structural design, or the design of machine elements, has traditionally been based on deterministic design methodology. The deterministic method considers all design parameters to be known with certainty. This methodology is, therefore, inadequate for the design of complex structures that are subjected to a variety of complex, severe loading conditions. A nonlinear behavior that depends on stress, stress rate, temperature, number of load cycles, and time is observed in all components subjected to such complex conditions. These complex conditions introduce uncertainties; hence, the actual margin of safety remains unknown. In the deterministic methodology, the contingency of failure is discounted; hence, a high factor of safety is used. The deterministic approach is most useful where the structures being designed are simple. The probabilistic method is concerned with the probability of non-failure performance of structures or machine elements. It is much more useful where the design is characterized by complex geometry, possibility of catastrophic failure, or sensitive loads and material properties. Also included: Comparative Study of the Use of AGMA Geometry Factors and Probabilistic Design Methodology in the Design of Compact Spur Gear Set.
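    The probability of non-failure performance that the probabilistic method computes can be illustrated with the classic stress-strength interference model: for independent normal load and strength, the failure probability follows from the reliability index of the safety margin. All numbers below are hypothetical.

```python
import math

def prob_of_failure(mu_load, sd_load, mu_strength, sd_strength):
    # Stress-strength interference with independent normal variables:
    # failure occurs when the safety margin M = strength - load < 0.
    mu_m = mu_strength - mu_load
    sd_m = math.hypot(sd_strength, sd_load)
    beta = mu_m / sd_m                       # reliability index
    pf = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))  # Phi(-beta)
    return beta, pf

# Hypothetical cases: identical mean safety factor (2.0), different scatter.
for sd_s in (20.0, 60.0):
    beta, pf = prob_of_failure(100.0, 15.0, 200.0, sd_s)
    print(round(beta, 2), f"{pf:.2e}")
```

The two cases show the point of the abstract: the same nominal factor of safety can hide failure probabilities that differ by orders of magnitude once uncertainty is accounted for.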

  5. Image-based Lagrangian Particle Tracking in Bed-load Experiments.

    PubMed

    Radice, Alessio; Sarkar, Sankar; Ballio, Francesco

    2017-07-20

    Image analysis has been increasingly used for the measurement of river flows owing to its capability to furnish detailed quantitative depictions at relatively low cost. This manuscript describes an application of particle tracking velocimetry (PTV) to a bed-load experiment with lightweight sediment. The key characteristics of the investigated sediment transport conditions were the presence of a covered flow and of a fixed rough bed above which particles were released in limited number at the flume inlet. Under the applied flow conditions, the motion of the individual bed-load particles was intermittent, with alternating periods of movement and stillness. The flow pattern was preliminarily characterized by acoustic measurements of vertical profiles of the stream-wise velocity. During process visualization, a large field of view was obtained using two action-cameras placed at different locations along the flume. The experimental protocol is described in terms of channel calibration, experiment realization, image pre-processing, automatic particle tracking, and post-processing of particle track data from the two cameras. The presented proof-of-concept results include probability distributions of the particle hop length and duration. The achievements of this work are compared to those of existing literature to demonstrate the validity of the protocol.
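    The linking step at the heart of particle tracking — associating particle centroids between consecutive frames — can be sketched with a greedy nearest-neighbour scheme. This is a simplified stand-in for the automatic tracking used in the protocol; the synthetic centroids and the displacement threshold are hypothetical.

```python
import math

def link_tracks(frames, max_disp=5.0):
    # frames: list of per-frame centroid lists [(x, y), ...].
    # Greedy nearest-neighbour linking between consecutive frames,
    # rejecting matches farther than max_disp (hypothetical threshold).
    tracks = [[p] for p in frames[0]]
    active = list(range(len(tracks)))
    for frame in frames[1:]:
        unmatched = list(frame)
        next_active = []
        for ti in active:
            if not unmatched:
                continue
            last = tracks[ti][-1]
            d, p = min((math.dist(last, q), q) for q in unmatched)
            if d <= max_disp:
                tracks[ti].append(p)
                unmatched.remove(p)
                next_active.append(ti)
        for p in unmatched:        # unmatched detections start new tracks
            tracks.append([p])
            next_active.append(len(tracks) - 1)
        active = next_active
    return tracks

# Two synthetic particles: one hopping downstream, one essentially still.
frames = [[(0.0, 0.0), (10.0, 10.0)],
          [(2.0, 0.1), (10.0, 10.0)],
          [(4.1, 0.0), (10.1, 10.0)]]
tracks = link_tracks(frames)
hop_lengths = [math.dist(t[0], t[-1]) for t in tracks]
print(len(tracks), [round(h, 2) for h in hop_lengths])
```

Post-processing such as the hop length/duration distributions reported above would then be computed over many such tracks, after splitting them at the detected stillness periods.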

  6. Implications of Cognitive Load for Hypothesis Generation and Probability Judgment

    PubMed Central

    Sprenger, Amber M.; Dougherty, Michael R.; Atkins, Sharona M.; Franco-Watkins, Ana M.; Thomas, Rick P.; Lange, Nicholas; Abbs, Brandon

    2011-01-01

    We tested the predictions of HyGene (Thomas et al., 2008) that divided attention at both encoding and judgment should affect the degree to which participants’ probability judgments violate the principle of additivity. In two experiments, we showed that divided attention during judgment leads to an increase in subadditivity, suggesting that the comparison process for probability judgments is capacity limited. Contrary to the predictions of HyGene, a third experiment revealed that divided attention during encoding leads to an increase in the subadditivity of later probability judgments made under full attention. The effect of divided attention during encoding on judgment was completely mediated by the number of hypotheses participants generated, indicating that limitations in both encoding and recall can cascade into biases in judgments. PMID:21734897

  7. Effects of Age and Working Memory Load on Syntactic Processing: An Event-Related Potential Study.

    PubMed

    Alatorre-Cruz, Graciela C; Silva-Pereyra, Juan; Fernández, Thalía; Rodríguez-Camacho, Mario A; Castro-Chavira, Susana A; Sanchez-Lopez, Javier

    2018-01-01

    Cognitive changes in aging include working memory (WM) decline, which may hamper language comprehension. An increase in WM demands in older adults would probably provoke a poorer sentence processing performance in this age group. A way to increase the WM load is to separate two lexical units in an agreement relation (i.e., adjective and noun), in a given sentence. To test this hypothesis, event-related potentials (ERPs) were collected from Spanish speakers (30 older adults, mean age = 66.06 years old; and 30 young adults, mean age = 25.7 years old) who read sentences to detect grammatical errors. The sentences varied with regard to (1) the gender agreement of the noun and adjective, where the gender of the adjective either agreed or disagreed with the noun, and (2) the WM load (i.e., the number of words between the noun and adjective in the sentence). No significant behavioral differences between groups were observed in the accuracy of the response, but older adults showed longer reaction times regardless of WM load condition. Compared with young participants, older adults showed a different pattern of ERP components characterized by smaller amplitudes of LAN, P600a, and P600b effects when the WM load was increased. A smaller LAN effect probably reflects greater difficulties in processing the morpho-syntactic features of the sentence, while smaller P600a and P600b effects could be related to difficulties in recovering and mapping all sentence constituents. We concluded that the ERP pattern in older adults showed subtle problems in syntactic processing when the WM load was increased, which was not sufficient to affect response accuracy but was only observed to result in a longer reaction time.

  8. Arsenic loads in Spearfish Creek, western South Dakota, water years 1989-91

    USGS Publications Warehouse

    Driscoll, Daniel G.; Hayes, Timothy S.

    1995-01-01

    Numerous small tributaries on the eastern flank of Spearfish Creek originate within a mineralized area with a long history of gold-mining activity. Some streams draining this area are known to have elevated concentrations of arsenic. One such tributary is Annie Creek, where arsenic concentrations regularly approach the Maximum Contaminant Level of 50 µg/L (micrograms per liter) established by the U.S. Environmental Protection Agency. A site on Annie Creek was proposed for inclusion on the National Priorities List by the Environmental Protection Agency in 1991. This report presents information about arsenic loads and concentrations in Spearfish Creek and its tributaries, including Annie Creek. Stream types were classified according to geologic characteristics and in-stream arsenic concentrations. The first type includes streams that lack significant arsenic sources and have low in-stream arsenic concentrations. The second type has abundant arsenic sources and high in-stream concentrations. The third type has abundant arsenic sources but only moderate in-stream concentrations. The fourth type is a mixture of the first three types. Annual loads of dissolved arsenic were calculated for two reaches of Spearfish Creek to quantify arsenic loads at selected gaging stations during water years 1989-91. Mass-balance calculations also were performed to estimate arsenic concentrations for ungaged inflows to Spearfish Creek. The drainage area of the upstream reach includes significant mineralized areas, whereas the drainage area of the downstream reach generally is without known arsenic sources. The average load of dissolved arsenic transported from the upstream reach of Spearfish Creek, which is representative of a type 4 stream, was 158 kilograms per year, calculated for station 06430900, Spearfish Creek above Spearfish. Gaged headwater tributaries draining unmineralized areas (type 1) contributed only 16 percent of the arsenic load in 63 percent of the discharge. 
Annie Creek (type 2), which has the highest measured arsenic concentrations in the Spearfish Creek drainage, contributed about 15 percent of the arsenic load in about 2 percent of the discharge of the upstream reach. Squaw Creek, which drains another mineralized area but has only moderate in-stream concentrations (type 3), contributed 4 percent of the arsenic load in 5 percent of the discharge. Ungaged inflows to the reach contributed the remaining 65 percent of the arsenic load in 30 percent of the discharge. The calculated loads from ungaged inflows include all arsenic contributed by surface- and ground-water sources, as well as any additions of arsenic from dissolution of arsenic-bearing solid phases, or from desorption of arsenic from solid surfaces, within the streambed of the upstream reach. Mass-balance calculations indicate that dissolved arsenic concentrations of the ungaged inflows in the upstream reach averaged about 9 µg/L. In-stream arsenic concentrations of ungaged inflows from the unmineralized western flank of Spearfish Creek probably are generally low (type 1). Thus, in-stream arsenic concentrations for ungaged inflows draining the mineralized eastern flank of Spearfish Creek probably average almost twice that level, or about 18 µg/L. Some ungaged, eastern-flank inflows probably are derived from type 3 drainages, with only moderate arsenic concentrations. If so, other ungaged, eastern-flank inflows could have in-stream arsenic concentrations similar to those of Annie Creek. No significant arsenic sources were apparent in the downstream reach of Spearfish Creek. Over the course of the downstream reach, arsenic concentrations decreased somewhat, probably resulting from dilution, as well as from possible chemical adsorption to sediment surfaces or arsenic-phase precipitation. A decrease in arsenic loads resulted from various diversions from the creek and from the potential chemical removal processes. 
Because of a large margin of error associated with calculation o

  9. [The Load of Injustice: A Longitudinal Analysis of the Impact of Subjectively Perceived Income Injustice on the Risk of Stress-Associated Diseases Based on the German Socio-Economic Panel Study].

    PubMed

    Boscher, Claudia; Arnold, Laura; Lange, Andreas; Szagun, Bertram

    2018-03-01

    Income injustice is regarded as a psychosocial strain and is associated with an increased risk of stress-related diseases, with the physiological stress response considered a central link. The aim of the study is to reveal the influence of subjectively perceived income injustice on stress-associated diseases, taking the load duration into consideration. Based on the German Socio-Economic Panel Study, data on 5,657 workers in the survey years 2005-2013 were analyzed. The dependent variable reflects physician-diagnosed new cases of diabetes, asthma, cardiopathy, stroke, hypertension, and depression in the years 2009-2013 as an index. The key predictor is the perception of one's own income as unjust. To operationalize the duration of this perception, the values of the variable for the years 2005, 2007, and 2009 were accumulated. Using logit models stratified by gender and volume of employment, factors were identified that affect the probability of stress-related diseases. If income was perceived as unjust for over 5 years, the odds of stress-related diseases were strongly elevated for women (OR 1.64; 95% CI 1.17-2.30). Women working full-time seemed to be particularly affected (OR 2.43; 95% CI 1.54-3.84). Men working full-time who perceived their income as unjust also showed an increased risk of stress diseases (OR 1.43; CI 1.03-1.98). The more often income was assessed as unjust, the higher the probability of stress-related diseases. Perceived income injustice thus appears to be a significant risk factor for stress-related diseases, with a dose-response relationship of increasing risk with increasing duration of exposure. Findings from stress research suggest that this reflects 'allostatic load'. Gender-specific differences in stress reactions and in the appraisal of stressors can be associated with gender-specific work and life conditions and therefore provide explanatory approaches for the observed effects. 
© Georg Thieme Verlag KG Stuttgart · New York.

  10. Uncertainty quantification for personalized analyses of human proximal femurs.

    PubMed

    Wille, Hagen; Ruess, Martin; Rank, Ernst; Yosibash, Zohar

    2016-02-29

    Computational models for the personalized analysis of human femurs contain uncertainties in bone material properties and loads, which affect the simulation results. To quantify this influence, we developed a probabilistic framework based on polynomial chaos (PC) that propagates stochastic input variables through any computational model. We considered a stochastic E-ρ relationship and a stochastic hip contact force, representing realistic variability of experimental data. Their influence on the prediction of principal strains (ϵ1 and ϵ3) was quantified for one human proximal femur, including sensitivity and reliability analyses. Large variabilities in the principal strain predictions were found in the cortical shell of the femoral neck, with coefficients of variation of ≈40%. Between 60 and 80% of the variance in ϵ1 and ϵ3 is attributable to the uncertainty in the E-ρ relationship, while ≈10% is caused by the load magnitude and 5-30% by the load direction. Principal strain directions were unaffected by material and loading uncertainties. The antero-superior and medial-inferior sides of the neck exhibited the largest probabilities of tensile and compressive failure; however, all were very small (p_f < 0.001). In summary, uncertainty quantification with PC has been demonstrated to efficiently and accurately describe the influence of very different stochastic inputs, which increases the credibility and explanatory power of personalized analyses of human proximal femurs. Copyright © 2015 Elsevier Ltd. All rights reserved.
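    The efficiency of PC-type propagation over brute-force sampling can be seen already in one dimension: a low-order Gaussian quadrature over a standard normal input recovers output statistics that Monte Carlo needs many model evaluations to match. The "model" below is a hypothetical nonlinear response, not the femur model.

```python
import math, random

# Hypothetical nonlinear model: response to a stochastic input X ~ N(0,1).
def model(x):
    return math.exp(0.3 * x) + 0.1 * x * x

# 3-point Gauss-Hermite quadrature for a standard normal input:
# nodes 0 and +/-sqrt(3), weights 2/3 and 1/6 (exact to polynomial degree 5).
nodes = [(-math.sqrt(3.0), 1.0 / 6.0),
         (0.0, 2.0 / 3.0),
         (math.sqrt(3.0), 1.0 / 6.0)]
mean_q = sum(w * model(x) for x, w in nodes)                 # 3 model calls
var_q = sum(w * model(x) ** 2 for x, w in nodes) - mean_q ** 2

# Brute-force Monte Carlo reference: 200,000 model calls.
random.seed(3)
samples = [model(random.gauss(0.0, 1.0)) for _ in range(200_000)]
mean_mc = sum(samples) / len(samples)

print(round(mean_q, 4), round(mean_mc, 4), round(var_q, 4))
```

A full PC framework generalizes this: it expands the response in orthogonal polynomials of several stochastic inputs, from which mean, variance, and the sensitivity (variance-decomposition) fractions quoted above follow directly.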

  11. Science, practice, and human errors in controlling Clostridium botulinum in heat-preserved food in hermetic containers.

    PubMed

    Pflug, Irving J

    2010-05-01

    The incidence of botulism in canned food in the last century is reviewed along with the background science; a few conclusions are reached based on analysis of published data. There are two primary aspects to botulism control: the design of an adequate process and the delivery of the adequate process to containers of food. The probability that the designed process will not be adequate to control Clostridium botulinum is very small, probably less than 1.0 × 10^-6, based on containers of food, whereas the failure of the operator of the processing equipment to deliver the specified process to containers of food may be of the order of 1 in 40 to 1 in 100, based on processing units (retort loads). In the commercial food canning industry, failure to deliver the process will probably be of the order of 1.0 × 10^-4 to 1.0 × 10^-6 when U.S. Food and Drug Administration (FDA) regulations are followed. Botulism incidents have occurred in food canning plants that have not followed the FDA regulations. It is possible but very rare for botulism to result from postprocessing contamination. It may thus be concluded that botulism incidents in canned food are primarily the result of human failure in the delivery of the designed or specified process to containers of food, which in turn results in the survival, outgrowth, and toxin production of C. botulinum spores. Therefore, efforts in C. botulinum control should be concentrated on reducing human errors in the delivery of the specified process to containers of food.
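    Treating the two control aspects as independent failure modes (an illustrative assumption, not a claim from the review), the per-container risk combines the design and delivery probabilities quoted above:

```python
# Per-container probability that the process fails to control C. botulinum,
# assuming design inadequacy and delivery failure are independent events.
def total_risk(p_design, p_delivery):
    # P(design fails OR delivery fails), for two independent events
    return 1.0 - (1.0 - p_design) * (1.0 - p_delivery)

p_design = 1.0e-6                       # design-inadequacy probability
for p_delivery in (1.0e-4, 1.0e-6):     # FDA-regulated delivery-failure range
    print(f"{total_risk(p_design, p_delivery):.2e}")
```

With these figures the combined risk is dominated by delivery failure at the upper end of the range, which is consistent with the review's conclusion that effort belongs on reducing human error in process delivery.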

  12. Advanced Offshore Wind Turbine/Foundation Concept for the Great Lakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afjeh, Abdollah A.; Windpower, Nautica; Marrone, Joseph

    2013-08-29

    This project investigated a conceptual 2-bladed rotor wind turbine design and assessed its feasibility for installation in the Great Lakes. The levelized cost of energy was used for this purpose. A location in Lake Erie near the coast of Cleveland, Ohio was selected as the application site. The loading environment was defined using wind and wave data collected at a weather station in Lake Erie near Cleveland. In addition, the probability distributions of the annual significant wave height and wind speed were determined. A model of the dependence between these two quantities was also developed and used in the study of wind turbine system loads. Loads from ice floes and ridges were also included. The NREL 5 MW 3-bladed rotor wind turbine concept was used as the baseline design. The proposed turbine design employs variable-pitch blade control with tip brakes and a teeter mechanism. The rotor diameter, rated power, and tower dimensions were selected to closely match those of the NREL 5 MW wind turbine. A semi-floating gravity base foundation was designed for this project, primarily to adapt to regional logistical constraints on transporting and installing the gravity base foundation. This foundation consists of, from bottom to top, a base plate, a buoyancy chamber, a taper zone, a column (with ice cone), and a service platform. A compound upward-downward ice cone was selected to secure the foundation against movement under ice impact. The turbine loads analysis was based on International Electrotechnical Commission (IEC) Standard 61400-1, Class III winds. The NREL software FAST was the primary computational tool used in this study to determine all design load cases. An initial set of studies of the dynamics of wind turbines using Automatic Dynamic Analysis of Mechanical Systems (ADAMS) demonstrated that FAST and ADAMS load predictions were comparable. Because of its relative simplicity and short run times, FAST was selected for this study. 
For ice load calculations, a method was developed and implemented in FAST to extend its capability for ice load modeling. Both upwind and downwind 2-bladed rotor wind turbine designs were developed and studied. The new rotor blade uses a new twist-angle distribution and a new pitch control algorithm compared with the baseline model. The coning and tilt angles were selected for both the upwind and downwind configurations to maximize annual energy production. The risk of blade-tower impact is greater for the downwind design, particularly under a power grid fault; however, this risk was effectively reduced by adjusting the tilt angle of the downwind configuration.
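    The distribution fitting mentioned above can be sketched for wind speed. A common choice for annual wind speed is a two-parameter Weibull distribution; the method-of-moments estimator below, with its empirical power-law shape approximation, is illustrative only (the project report does not state which fitting procedure was used):

```python
import math

def fit_weibull_moments(speeds):
    """Method-of-moments Weibull fit for annual wind-speed samples,
    using the common power-law shape approximation k = (sd/mean)^-1.086.
    Illustrative only; not necessarily the project's actual procedure."""
    n = len(speeds)
    mean = sum(speeds) / n
    sd = (sum((v - mean) ** 2 for v in speeds) / n) ** 0.5
    k = (sd / mean) ** -1.086             # shape parameter
    c = mean / math.gamma(1.0 + 1.0 / k)  # scale parameter, m/s
    return k, c
```

    The same approach extends to significant wave height; modeling the dependence between the two quantities would additionally require a joint (e.g. copula-based) model.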

  13. Determination of barge impact probabilities for bridge design.

    DOT National Transportation Integrated Search

    2016-04-01

    Waterway bridges in the United States are designed to resist vessel collision loads according to design provisions released by the American Association of State Highway and Transportation Officials (AASHTO). These provisions provide detailed proced...

  14. Damage evaluation by a guided wave-hidden Markov model based method

    NASA Astrophysics Data System (ADS)

    Mei, Hanfei; Yuan, Shenfang; Qiu, Lei; Zhang, Jinjin

    2016-02-01

    Guided wave based structural health monitoring has shown great potential in aerospace applications. However, one of the key challenges for practical engineering applications is the accurate interpretation of guided wave signals under time-varying environmental and operational conditions. This paper presents a guided wave-hidden Markov model (HMM) based method to improve the damage evaluation reliability of real aircraft structures under time-varying conditions. In the proposed approach, an HMM based unweighted moving average trend estimation method, which can capture the trend of damage propagation from the posterior probability obtained by HMM modeling, is used to achieve a probabilistic evaluation of the structural damage. To validate the developed method, experiments are performed on a hole-edge crack specimen under fatigue loading conditions and on a real aircraft wing spar under changing structural boundary conditions. Experimental results show the advantage of the proposed method.
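    The trend estimation step described above reduces to smoothing the per-measurement posterior probabilities produced by the HMM. A minimal sketch (the window length is an illustrative choice, not a value from the paper):

```python
def posterior_trend(posteriors, window=5):
    """Unweighted moving average of damage-state posterior probabilities
    from the HMM, exposing the cumulative propagation trend that is
    otherwise mixed with time-varying influences."""
    out = []
    for i in range(len(posteriors)):
        seg = posteriors[max(0, i - window + 1):i + 1]
        out.append(sum(seg) / len(seg))
    return out
```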

  15. A Hybrid Demand Response Simulator Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-05-02

    A hybrid demand response simulator (HDRS) is developed to test different control algorithms for centralized and distributed demand response (DR) programs in a small distribution power grid. The HDRS is designed to model a wide variety of DR services such as peak shaving, load shifting, arbitrage, spinning reserves, load following, regulation, emergency load shedding, etc. The HDRS does not model the dynamic behaviors of the loads; rather, it simulates the load scheduling and dispatch process. The load models include TCAs (water heaters, air conditioners, refrigerators, freezers, etc.) and non-TCAs (lighting, washers, dishwashers, etc.). The ambient temperature changes, thermal resistance, capacitance, and the unit control logic can be modeled for TCA loads. The use patterns of the non-TCAs can be modeled by probability of use and probabilistic durations. Some communication network characteristics, such as delays and errors, can also be modeled. Most importantly, because the simulator is modular and greatly simplifies the thermal models for TCA loads, it can be used easily and quickly to test and validate different control algorithms in a simulated environment.

  16. Alternate Methods in Refining the SLS Nozzle Plug Loads

    NASA Technical Reports Server (NTRS)

    Burbank, Scott; Allen, Andrew

    2013-01-01

    Numerical analysis has shown that the SLS nozzle environmental barrier (nozzle plug) design is inadequate for the prelaunch condition, which consists of two dominant loads: 1) the main engines startup pressure and 2) an environmentally induced pressure. Efforts to reduce load conservatisms included a dynamic analysis which showed a 31% higher safety factor compared to the standard static analysis. The environmental load is typically approached with a deterministic method using the worst possible combinations of pressures and temperatures. An alternate probabilistic approach, utilizing the distributions of pressures and temperatures, resulted in a 54% reduction in the environmental pressure load. A Monte Carlo simulation of environmental load that used five years of historical pressure and temperature data supported the results of the probabilistic analysis, indicating the probabilistic load is reflective of a 3-sigma condition (1 in 370 probability). Utilizing the probabilistic load analysis eliminated excessive conservatisms and will prevent a future overdesign of the nozzle plug. Employing a similar probabilistic approach to other design and analysis activities can result in realistic yet adequately conservative solutions.
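    A minimal sketch of the Monte Carlo idea described above, with entirely hypothetical pressure and temperature distributions and a toy load model (the actual SLS analysis drew on five years of historical data): sample the inputs, propagate them through the load model, and read off the 1-in-370 quantile rather than stacking worst-case values.

```python
import random

def mc_load_quantile(n=200_000, seed=1):
    """Monte Carlo estimate of the 1-in-370 (~3-sigma) environmental load.
    The pressure/temperature distributions and the load model itself are
    hypothetical placeholders, not the SLS values."""
    rng = random.Random(seed)
    loads = []
    for _ in range(n):
        dp = rng.gauss(2.0, 0.4)        # pressure differential, kPa (assumed)
        temp = rng.gauss(25.0, 8.0)     # ambient temperature, deg C (assumed)
        loads.append(dp * (1.0 + 0.002 * temp))  # toy load model
    loads.sort()
    return loads[int((1.0 - 1.0 / 370.0) * n)]  # 99.73rd percentile
```

    The resulting quantile sits well below the deterministic worst-case combination of inputs, which is the mechanism behind the load reduction reported above.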

  17. A multi-source probabilistic hazard assessment of tephra dispersal in the Neapolitan area

    NASA Astrophysics Data System (ADS)

    Sandri, Laura; Costa, Antonio; Selva, Jacopo; Folch, Arnau; Macedonio, Giovanni; Tonini, Roberto

    2015-04-01

    In this study we present the results obtained from a long-term Probabilistic Hazard Assessment (PHA) of tephra dispersal in the Neapolitan area. Usual PHA for tephra dispersal requires the definition of eruptive scenarios (usually by grouping eruption sizes and possible vent positions into a limited number of classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. PHA then results from combining simulations considering different volcanological and meteorological conditions through weights associated with their specific probability of occurrence. However, volcanological parameters (i.e., erupted mass, eruption column height, eruption duration, bulk granulometry, fraction of aggregates) typically encompass a wide range of values. Because of such natural variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. In the present study, we use a method that accounts for this within-size-class variability in the framework of Event Trees. The variability of each parameter is modeled with specific Probability Density Functions, and meteorological and volcanological input values are chosen using a stratified sampling method. This procedure allows for quantifying hazard without relying on the definition of scenarios, thus avoiding potential biases introduced by selecting single representative scenarios. Embedding this procedure into the Bayesian Event Tree scheme enables quantification of tephra fall PHA and of its epistemic uncertainties. We have applied this scheme to analyze long-term tephra fall PHA from Vesuvius and Campi Flegrei in a multi-source paradigm. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used to explore different meteorological conditions. 
The results show that hazard estimates accounting for the whole natural variability are broadly consistent with previous probability maps elaborated for Vesuvius and Campi Flegrei on the basis of single representative scenarios, but show significant differences. In particular, the area characterized by a 300 kg/m2-load exceedance probability larger than 5%, accounting for the whole range of variability (that is, from small violent strombolian to plinian eruptions), is similar to that displayed in the maps based on the medium magnitude reference eruption, but smaller in extent. This is due to the relatively higher weight of the small magnitude eruptions considered in this study but neglected in the reference scenario maps. On the other hand, in our new maps the area characterized by a 300 kg/m2-load exceedance probability larger than 1% is much larger than that of the medium magnitude reference eruption, due to the contribution of plinian eruptions at lower probabilities, again neglected in the reference scenario maps.
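    The stratified sampling used to cover the volcanological variability can be illustrated on a single normalized parameter: taking exactly one uniform draw per equal-probability stratum guarantees coverage of the whole parameter range, unlike plain random sampling. This sketch assumes a parameter rescaled to [0, 1]:

```python
import random

def stratified_sample(n, seed=7):
    """One uniform draw per equal-probability stratum of [0, 1]:
    sample i always falls in [i/n, (i+1)/n)."""
    rng = random.Random(seed)
    return [(i + rng.random()) / n for i in range(n)]
```

    In practice each draw would then be mapped through the inverse CDF of the parameter's Probability Density Function.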

  18. Crack propagation monitoring in a full-scale aircraft fatigue test based on guided wave-Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Qiu, Lei; Yuan, Shenfang; Bao, Qiao; Mei, Hanfei; Ren, Yuanqiang

    2016-05-01

    For aerospace application of structural health monitoring (SHM) technology, the problem of reliable damage monitoring under time-varying conditions must be addressed, and the SHM technology has to be fully validated on real aircraft structures under realistic load conditions on the ground before it can reach the status of flight test. In this paper, the guided wave (GW) based SHM method is applied to a full-scale aircraft fatigue test, one of the test conditions most similar to flight test. To deal with the time-varying problem, a GW-Gaussian mixture model (GW-GMM) is proposed. The probability characteristics of GW features introduced by time-varying conditions are modeled by the GW-GMM. The weak cumulative variation trend of crack propagation, which is mixed with time-varying influences, can be tracked through GW-GMM migration during the on-line damage monitoring process. A best-match-based Kullback-Leibler divergence is proposed to measure the degree of GW-GMM migration and thereby reveal crack propagation. The method is validated in the full-scale aircraft fatigue test. The validation results indicate that reliable crack propagation monitoring of the left landing gear spar and the right wing panel under realistic load conditions is achieved.

  19. Probabilistic structural analysis methods and applications

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Wu, Y.-T.; Dias, B.; Rajagopal, K. R.

    1988-01-01

    An advanced algorithm for simulating the probabilistic distribution of structural responses due to statistical uncertainties in loads, geometry, material properties, and boundary conditions is reported. The method effectively combines an advanced algorithm for calculating probability levels for multivariate problems (fast probability integration) together with a general-purpose finite-element code for stress, vibration, and buckling analysis. Application is made to a space propulsion system turbine blade for which the geometry and material properties are treated as random variables.

  20. Immediate effects of modified landing pattern on a probabilistic tibial stress fracture model in runners.

    PubMed

    Chen, T L; An, W W; Chan, Z Y S; Au, I P H; Zhang, Z H; Cheung, R T H

    2016-03-01

    Tibial stress fracture is a common injury in runners. This condition has been associated with increased impact loading. Since vertical loading rates are related to the landing pattern, many heelstrike runners attempt to modify their footfalls to lower the risk of tibial stress fracture. The effect of such landing pattern modification, however, remains unknown. This study examined the immediate effects of landing pattern modification on the probability of tibial stress fracture. Fourteen experienced heelstrike runners ran on an instrumented treadmill and were given augmented feedback for the landing pattern switch. We measured their running kinematics and kinetics during different landing patterns. Ankle joint contact force and peak tibial strains were estimated using computational models. We used an established mathematical model to determine the effect of landing pattern on stress fracture probability. Heelstrike runners experienced greater impact loading immediately after the landing pattern switch (P<0.004). There was an increase in the longitudinal ankle joint contact force when they landed with the forefoot (P=0.003). However, there was no significant difference in either peak tibial strains or the risk of tibial stress fracture between landing patterns (P>0.986). Immediate transitioning of the landing pattern in heelstrike runners may not offer timely protection against tibial stress fracture, despite a reduction of impact loading. The long-term effects of a landing pattern switch remain unknown. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. The Use of Input-Output Control System Analysis for Sustainable Development of Multivariable Environmental Systems

    NASA Astrophysics Data System (ADS)

    Koliopoulos, T. C.; Koliopoulou, G.

    2007-10-01

    We present an input-output solution for simulating the behavior and optimized physical needs of an environmental system. The simulations and numerical analysis determined the boundary loads and the interacting areas required for the proper physical operation of a complicated environmental system. A case study was conducted to simulate the optimum balance of an environmental system based on an artificial-intelligence multi-interacting input-output numerical scheme. The numerical results were focused on probable further environmental management techniques, with the objective of minimizing risks and associated environmental impacts to protect public health and the environment. Our conclusions allowed us to minimize the associated risks, focusing on probable emergency cases to protect the surrounding anthropogenic or natural environment. Therefore, the lining magnitude could be determined for any associated technical works needed to support the environmental system under examination, taking into account its particular boundary necessities and constraints.

  2. Traffic handling capability of a broadband indoor wireless network using CDMA multiple access

    NASA Astrophysics Data System (ADS)

    Zhang, Chang G.; Hafez, H. M.; Falconer, David D.

    1994-05-01

    CDMA (code division multiple access) may be an attractive technique for wireless access to broadband services because of its multiple access simplicity and other appealing features. In order to investigate the traffic handling capabilities of a future network providing a variety of integrated services, this paper presents a study of a broadband indoor wireless network supporting high-speed traffic using CDMA multiple access. The results are obtained through simulation of an indoor environment and of the traffic capabilities of wireless access to broadband 155.5 Mb/s ATM-SONET networks using the mm-wave band. A distributed system architecture is employed and the system performance is measured in terms of call blocking probability and dropping probability. The impacts of base station density, traffic load, average holding time, and variable traffic sources on the system performance are examined. The improvement of system performance by implementing various techniques such as handoff, admission control, power control, and sectorization is also investigated.

  3. Improvement in precipitation-runoff model simulations by recalibration with basin-specific data, and subsequent model applications, Onondaga Lake Basin, Onondaga County, New York

    USGS Publications Warehouse

    Coon, William F.

    2011-01-01

    Simulation of streamflows in small subbasins was improved by adjusting model parameter values to match base flows, storm peaks, and storm recessions more precisely than had been done with the original model. Simulated recessional and low flows were either increased or decreased as appropriate for a given stream, and simulated peak flows generally were lowered in the revised model. The use of suspended-sediment concentrations rather than concentrations of the surrogate constituent, total suspended solids, resulted in increases in the simulated low-flow sediment concentrations and, in most cases, decreases in the simulated peak-flow sediment concentrations. Simulated orthophosphate concentrations in base flows generally increased but decreased for peak flows in selected headwater subbasins in the revised model. Compared with the original model, phosphorus concentrations simulated by the revised model were comparable in forested subbasins, generally decreased in developed and wetland-dominated subbasins, and increased in agricultural subbasins. A final revision to the model was made by the addition of the simulation of chloride (salt) concentrations in the Onondaga Creek Basin to help water-resource managers better understand the relative contributions of salt from multiple sources in this particular tributary. The calibrated revised model was used to (1) compute loading rates for the various land types that were simulated in the model, (2) conduct a watershed-management analysis that estimated the portion of the total load that was likely to be transported to Onondaga Lake from each of the modeled subbasins, (3) compute and assess chloride loads to Onondaga Lake from the Onondaga Creek Basin, and (4) simulate precolonization (forested) conditions in the basin to estimate the probable minimum phosphorus loads to the lake.

  4. Gaussian vs non-Gaussian turbulence: impact on wind turbine loads

    NASA Astrophysics Data System (ADS)

    Berg, J.; Mann, J.; Natarajan, A.; Patton, E. G.

    2014-12-01

    In wind energy applications the turbulent velocity field of the Atmospheric Boundary Layer (ABL) is often characterised by Gaussian probability density functions. When estimating the dynamical loads on wind turbines this has been the rule more than anything else. From numerous studies in the laboratory, in Direct Numerical Simulations, and from in-situ measurements of the ABL we know, however, that turbulence is not purely Gaussian: the smallest and fastest scales often exhibit extreme behaviour characterised by strong non-Gaussian statistics. In this contribution we investigate whether these non-Gaussian effects are important when determining wind turbine loads, and hence of utmost importance to the design criteria and lifetime of a wind turbine. We devise a method based on Principal Orthogonal Decomposition in which non-Gaussian velocity fields generated by high-resolution pseudo-spectral Large-Eddy Simulation (LES) of the ABL are transformed so that they maintain the exact same second-order statistics, including variations of the statistics with height, but are otherwise Gaussian. In that way we can investigate in isolation whether it is important for wind turbine loads to include non-Gaussian properties of atmospheric turbulence. As an illustration, the figure shows both a non-Gaussian velocity field (left) from our LES and its transformed Gaussian counterpart (right). Whereas the horizontal velocity components (top) look close to identical, the vertical components (bottom) do not: the non-Gaussian case is much more fluid-like (like in a sketch by Michelangelo). The question is then: does the wind turbine see this? Using the load simulation software HAWC2 with the non-Gaussian and the newly constructed Gaussian fields, respectively, we show that the fatigue loads and most of the extreme loads are unaltered when using non-Gaussian velocity fields. 
The turbine thus acts like a low-pass filter which averages out the non-Gaussian behaviour on time scales close to and faster than the revolution time of the turbine. For a few of the extreme load estimations there is, on the other hand, a tendency for non-Gaussian effects to increase the overall dynamical load, and hence they can be of importance in wind energy load estimations.

  5. Material Surface Damage under High Pulse Loads Typical for ELM Bursts and Disruptions in ITER

    NASA Astrophysics Data System (ADS)

    Landman, I. S.; Pestchanyi, S. E.; Safronov, V. M.; Bazylev, B. N.; Garkusha, I. E.

    The divertor armour material for the tokamak ITER will probably be carbon manufactured as fibre composites (CFC) and tungsten as either brush-like structures or thin plates. Disruptive pulse loads, where the heat deposition Q may reach 10² MJ/m² on a time scale τ of 3 ms, or operation in the ELMy H-mode at repetitive loads with Q ≈ 3 MJ/m² and τ ≈ 0.3 ms, deteriorate armour performance. This work surveys recent numerical and experimental investigations of erosion mechanisms in these off-normal regimes carried out at FZK, TRINITI, and IPP-Kharkov. The modelling uses the anisotropic thermomechanics code PEGASUS-3D for the simulation of CFC brittle destruction, the surface melt motion code MEMOS-1.5D for tungsten targets, and the radiation-magnetohydrodynamics code FOREV-2D for calculating the plasma impact and simulating the heat loads for the ITER regime. Experiments aimed at validating these codes are being carried out at the plasma gun facilities MK-200UG, QSPA-T, and QSPA-Kh50, which produce powerful streams of hydrogen plasma with Q = 10-30 MJ/m² and τ = 0.03-0.5 ms. Essential results are, for CFC targets, the experiments at high heat loads and the development of a local overheating model incorporated in PEGASUS-3D, and for tungsten targets the analysis of evaporation and melt motion erosion on the basis of MEMOS-1.5D calculations for repetitive ELMs.

  6. Using regression analysis to predict emergency patient volume at the Indianapolis 500 mile race.

    PubMed

    Bowdish, G E; Cordell, W H; Bock, H C; Vukov, L F

    1992-10-01

    Emergency physicians often plan and provide on-site medical care for mass gatherings. Most of the mass gathering literature is descriptive. Only a few studies have looked at factors such as crowd size, event characteristics, or weather in predicting numbers and types of patients at mass gatherings. We used regression analysis to relate patient volume on Race Day at the Indianapolis Motor Speedway to weather conditions and race characteristics. Race Day weather data for the years 1983 to 1989 were obtained from the National Oceanic and Atmospheric Administration. Data regarding patients treated on 1983 to 1989 Race Days were obtained from the facility hospital (Hannah Emergency Medical Center) database. Regression analysis was performed using weather factors and race characteristics as independent variables and number of patients seen as the dependent variable. Data from 1990 were used to test the validity of the model. There was a significant relationship between dew point (which is calculated from temperature and humidity) and patient load (P less than .01). Dew point, however, failed to predict patient load during the 1990 race. No relationships could be established between humidity, sunshine, wind, or race characteristics and number of patients. Although higher dew point was associated with higher patient load during the 1983 to 1989 races, dew point was a poor predictor of patient load during the 1990 race. Regression analysis may be useful in identifying relationships between event characteristics and patient load but is probably inadequate to explain the complexities of crowd behavior and too simplified to use as a prediction tool.
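    With one significant predictor, the regression step reduces to ordinary least squares on a single variable. A sketch with hypothetical dew-point/patient-count pairs (not the actual Speedway data):

```python
def fit_line(x, y):
    """Ordinary least squares for a single predictor, e.g. Race Day
    dew point (x) against patient count (y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx
```

    As the abstract notes, a fitted line of this kind can summarize an association in past data while still predicting poorly out of sample.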

  7. Modelling the vertical distribution of canopy fuel load using national forest inventory and low-density airborne laser scanning data.

    PubMed

    González-Ferreiro, Eduardo; Arellano-Pérez, Stéfano; Castedo-Dorado, Fernando; Hevia, Andrea; Vega, José Antonio; Vega-Nieva, Daniel; Álvarez-González, Juan Gabriel; Ruiz-González, Ana Daría

    2017-01-01

    The fuel complex variables canopy bulk density and canopy base height are often used to predict crown fire initiation and spread. Direct measurement of these variables is impractical, and they are usually estimated indirectly by modelling. Recent advances in predicting crown fire behaviour require accurate estimates of the complete vertical distribution of canopy fuels. The objectives of the present study were to model the vertical profile of available canopy fuel in pine stands by using data from the Spanish national forest inventory plus low-density airborne laser scanning (ALS) metrics. In a first step, the vertical distribution of the canopy fuel load was modelled using the Weibull probability density function. In a second step, two different systems of models were fitted to estimate the canopy variables defining the vertical distributions; the first system related these variables to stand variables obtained in a field inventory, and the second system related the canopy variables to airborne laser scanning metrics. The models of each system were fitted simultaneously to compensate the effects of the inherent cross-model correlation between the canopy variables. Heteroscedasticity was also analyzed, but no correction in the fitting process was necessary. The estimated canopy fuel load profiles from field variables explained 84% and 86% of the variation in canopy fuel load for maritime pine and radiata pine respectively; whereas the estimated canopy fuel load profiles from ALS metrics explained 52% and 49% of the variation for the same species. The proposed models can be used to assess the effectiveness of different forest management alternatives for reducing crown fire hazard.
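    The first modelling step above can be sketched directly: with fitted Weibull parameters (the values below are hypothetical, not the fitted ones from the study), the fuel load held between two canopy heights follows from the Weibull CDF:

```python
import math

def weibull_cdf(h, shape, scale):
    """Weibull CDF modelling the cumulative vertical canopy fuel profile."""
    return 1.0 - math.exp(-((h / scale) ** shape)) if h > 0 else 0.0

def fuel_load_between(h1, h2, shape, scale, total_load):
    """Canopy fuel load (kg/m2) held between heights h1 and h2 (m above
    crown base), given hypothetical fitted Weibull parameters."""
    return total_load * (weibull_cdf(h2, shape, scale) - weibull_cdf(h1, shape, scale))
```

    Quantities such as canopy bulk density and canopy base height can then be derived from this profile.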

  8. Computational Prediction of Shock Ignition Thresholds and Ignition Probability of Polymer-Bonded Explosives

    NASA Astrophysics Data System (ADS)

    Wei, Yaochi; Kim, Seokpum; Horie, Yasuyuki; Zhou, Min

    2017-06-01

    A computational approach is developed to predict the probabilistic ignition thresholds of polymer-bonded explosives (PBXs). The simulations explicitly account for microstructure, constituent properties, and interfacial responses and capture processes responsible for the development of hotspots and damage. The specific damage mechanisms considered include viscoelasticity, viscoplasticity, fracture, post-fracture contact, frictional heating, and heat conduction. The probabilistic analysis uses sets of statistically similar microstructure samples to mimic relevant experiments for statistical variations of material behavior due to inherent material heterogeneities. The ignition thresholds and corresponding ignition probability maps are predicted for PBX 9404 and PBX 9501 for the impact loading regime of Up = 200-1200 m/s. James and Walker-Wasley relations are utilized to establish explicit analytical expressions for the ignition probability as a function of load intensities. The predicted results are in good agreement with available experimental measurements. The capability to computationally predict the macroscopic response out of material microstructures and basic constituent properties lends itself to the design of new materials and the analysis of existing materials. The authors gratefully acknowledge the support from Air Force Office of Scientific Research (AFOSR) and the Defense Threat Reduction Agency (DTRA).

  9. Distributed photovoltaic system impact upon utility load/supply management practices

    NASA Astrophysics Data System (ADS)

    Vachtsevanos, G. J.; Meliopoulos, A. P.; Paraskevopoulos, B. K.

    A methodology is described for simulation of the economic and technical factors of photovoltaic (PV) installations interfacing with utility load/management operations. A probabilistic technique is used to model the expected demand, reliability of the generating units, costs and profits from each unit, expected unserved energy, and the loss of load probability. The available power from PV arrays is treated stochastically with statistical weighting on the basis of site meteorological data. The goal is to include the PV power while minimizing operational costs, taking into account the level of penetration of the total PV output. Two sample simulations for a utility with a diverse generating mix demonstrate that overall costs would decrease in both cases with PVs on-line through the emphasis on cheaper-fueled generators and peak-load shaving when possible.
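    The loss-of-load probability (LOLP) component of such a probabilistic production-costing model can be sketched by enumerating unit outage states. Exhaustive enumeration is fine for the small illustrative mix below; real studies use capacity outage probability tables:

```python
from itertools import product

def lolp(units, demand):
    """Loss-of-load probability for independent generating units.
    units: list of (capacity_MW, forced_outage_rate); demand in MW."""
    p_loss = 0.0
    for state in product((True, False), repeat=len(units)):
        p, cap = 1.0, 0.0
        for on, (c, q) in zip(state, units):
            p *= (1.0 - q) if on else q
            if on:
                cap += c
        if cap < demand:
            p_loss += p
    return p_loss
```

    Stochastic PV output can be folded in by treating the PV array as an additional unit whose availability follows the site's weather statistics.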

  10. Blocking performance of the hose model and the pipe model for VPN service provisioning over WDM optical networks

    NASA Astrophysics Data System (ADS)

    Wang, Haibo; Swee Poo, Gee

    2004-08-01

    We study the provisioning of virtual private network (VPN) service over WDM optical networks. For this purpose, we investigate the blocking performance of the hose model versus the pipe model for the provisioning. Two techniques are presented: an analytical queuing model and a discrete event simulation. The queuing model is developed from the multirate reduced-load approximation technique. The simulation is done with the OPNET simulator. Several experimental situations were used. The blocking probabilities calculated from the two approaches show a close match, indicating that the multirate reduced-load approximation technique is capable of predicting the blocking performance for the pipe model and the hose model in WDM networks. A comparison of the blocking behavior of the two models shows that the hose model has superior blocking performance compared with the pipe model. By and large, the blocking probability of the hose model is better than that of the pipe model by a few orders of magnitude, particularly at low load regions. The flexibility of the hose model, which allows resources on a link to be shared among all connections, accounts for its superior performance.
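    The single-rate building block underlying multirate reduced-load approximations is the Erlang-B formula, usually computed with the numerically stable recurrence B(0) = 1, B(n) = aB(n-1) / (n + aB(n-1)):

```python
def erlang_b(servers, offered_erlangs):
    """Blocking probability for an Erlang loss system with the given
    number of servers (e.g. wavelengths on a link) and offered load,
    via the stable Erlang-B recurrence."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b
```

    Reduced-load approximation then iterates formulas of this kind per link, thinning each route's offered load by the blocking on its other links until the link blocking probabilities converge.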

  11. Load Balancing in Hypergraphs

    NASA Astrophysics Data System (ADS)

    Delgosha, Payam; Anantharam, Venkat

    2018-03-01

    Consider a simple locally finite hypergraph on a countable vertex set, where each edge represents one unit of load which should be distributed among the vertices defining the edge. An allocation of load is called balanced if load cannot be moved from a vertex to another that is carrying less load. We analyze the properties of balanced allocations of load. We extend the concept of balancedness from finite hypergraphs to their local weak limits in the sense of Benjamini and Schramm (Electron J Probab 6(23):13, 2001) and Aldous and Steele (in: Probability on discrete structures. Springer, Berlin, pp 1-72, 2004). To do this, we define a notion of unimodularity for hypergraphs which could be considered an extension of unimodularity in graphs. We give a variational formula for the balanced load distribution and, in particular, we characterize it in the special case of unimodular hypergraph Galton-Watson processes. Moreover, we prove the convergence of the maximum load under some conditions. Our work is an extension to hypergraphs of Anantharam and Salez (Ann Appl Probab 26(1):305-327, 2016), which considered load balancing in graphs, and is aimed at more comprehensively resolving conjectures of Hajek (IEEE Trans Inf Theory 36(6):1398-1414, 1990).
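    For a finite hypergraph, a balanced allocation can be approximated by a simple local rebalancing heuristic: repeatedly let each edge shift a small amount of its unit load from its currently most-loaded vertex to its least-loaded one. This is a sketch of the balancedness condition only, not the paper's variational construction:

```python
def balance(edges, iters=2000, step=0.001):
    """Approximate a balanced allocation: one unit of load per edge,
    spread over the edge's vertices so that no load can profitably be
    moved to a less-loaded vertex of the same edge."""
    alloc = {(i, v): 1.0 / len(e) for i, e in enumerate(edges) for v in e}

    def loads():
        out = {v: 0.0 for e in edges for v in e}
        for (i, v), a in alloc.items():
            out[v] += a
        return out

    for _ in range(iters):
        load = loads()
        for i, e in enumerate(edges):
            hi = max(e, key=lambda v: load[v])
            lo = min(e, key=lambda v: load[v])
            d = min(step, alloc[(i, hi)])  # never drive an allocation negative
            alloc[(i, hi)] -= d
            alloc[(i, lo)] += d
    return loads()
```

    On the path hypergraph with edges {a,b} and {b,c}, the iteration approaches the balanced allocation in which all three vertices carry load 2/3.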

  12. Considering the ranges of uncertainties in the New Probabilistic Seismic Hazard Assessment of Germany - Version 2016

    NASA Astrophysics Data System (ADS)

    Grunthal, Gottfried; Stromeyer, Dietrich; Bosse, Christian; Cotton, Fabrice; Bindi, Dino

    2017-04-01

    The seismic load parameters for the upcoming National Annex to Eurocode 8 result from the reassessment of the seismic hazard supported by the German Institution for Civil Engineering. This 2016 version of the hazard assessment for Germany was based on a comprehensive incorporation of all accessible uncertainties in models and parameters, within a rational framework that treats these uncertainties transparently. The developed seismic hazard model represents significant improvements; i.e., it is based on updated and extended databases, comprehensive ranges of models, robust methods, and a selection of a set of ground motion prediction equations of the latest generation. The output specifications were designed according to user-oriented needs as suggested by the two review teams supervising the entire project. In particular, seismic load parameters were calculated for rock conditions with a vS30 of 800 m/s for three hazard levels (10%, 5% and 2% probability of occurrence or exceedance within 50 years) in the form of, e.g., uniform hazard spectra (UHS) based on 19 spectral periods in the range of 0.01-3 s, and seismic hazard maps for spectral response accelerations at different spectral periods or for macroseismic intensities. The developed hazard model consists of a logic tree with 4040 end branches and essential innovations employed to capture epistemic uncertainties and aleatory variabilities. The computation scheme enables the sound calculation of the mean and any quantile of the required seismic load parameters. Mean, median and 84th-percentile load parameters were provided together with the full calculation model to clearly illustrate the uncertainties of such a probabilistic assessment for a region of low-to-moderate seismicity. The regional variations of these uncertainties (e.g., ratios between the mean and median hazard estimates) were analyzed and discussed.
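    Under the usual Poisson occurrence assumption, the three hazard levels map onto mean return periods of roughly 475, 975 and 2475 years, via T = -t / ln(1 - p) for exceedance probability p in a window of t years:

```python
import math

def return_period(p_exceed, window_years=50.0):
    """Mean return period implied by an exceedance probability over a
    time window, assuming Poisson-distributed event occurrences."""
    return -window_years / math.log(1.0 - p_exceed)
```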

  13. Use of herd information for predicting Salmonella status in pig herds.

    PubMed

    Baptista, F M; Alban, L; Nielsen, L R; Domingos, I; Pomba, C; Almeida, V

    2010-11-01

    Salmonella surveillance-and-control programs in pigs are highly resource-demanding, so alternative cost-effective approaches are desirable. The aim of this study was to develop and evaluate a tool for predicting the Salmonella test status of pig herds based on herd information collected from 108 industrial farrow-to-finish pig herds in Portugal. A questionnaire covering known risk factors for Salmonella was used. A factor analysis model was developed to identify relevant factors, which were then tested for association with Salmonella status. Three factors were identified and labelled: general biosecurity (factor 1), herd size (factor 2), and sanitary gap implementation (factor 3). Based on the loadings in factor 1 and factor 3, herds were classified according to their biosecurity practices. In total, 59% of the herds had a good level of biosecurity (interpreted as a loading below zero in factor 1), and 37% of the farms had good biosecurity and an implemented sanitary gap (loading below zero in factor 1 and loading above zero in factor 3). This implied that they, among other things, implemented preventive measures for visitors and workers entering the herd, controlled biological vectors, had hygiene procedures in place, assessed water quality, and used a sanitary gap in the fattening and growing sections. In total, 50 herds were tested for Salmonella. Logistic regression analysis showed that factor 1 was significantly associated with Salmonella test status (P = 0.04). Herds with poor biosecurity had a higher probability of testing Salmonella positive than herds with good biosecurity. This study shows the potential for using herd information to classify herds according to their Salmonella status in the absence of good testing options.
The method might be used as a potentially cost-effective tool for future development of risk-based approaches to surveillance, targeting interventions to high-risk herds or differentiating sampling strategies in herds with different levels of infection. © 2010 Blackwell Verlag GmbH.
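
    A minimal sketch of the kind of herd classification the study describes: herds are split by the sign of a factor score, and the association with test status is summarized as an odds ratio. The scores and test results below are invented for illustration; the study itself used factor analysis of questionnaire data followed by logistic regression.

```python
import numpy as np

# Hypothetical factor-1 scores (biosecurity; below zero = good, per the
# study's cutoff) and Salmonella test results (1 = positive) for 10 herds.
factor1  = np.array([-1.2, -0.8, -0.5, -0.1, 0.2, 0.4, 0.7, 1.0, 1.3, -0.3])
positive = np.array([   0,    0,    0,    1,   1,   1,   0,   1,   1,    0])

good = factor1 < 0                  # good-biosecurity herds
p_good = positive[good].mean()      # positive fraction, good biosecurity
p_poor = positive[~good].mean()     # positive fraction, poor biosecurity

# Odds ratio of testing positive for poor- vs good-biosecurity herds
odds = lambda p: p / (1 - p)
odds_ratio = odds(p_poor) / odds(p_good)
```

    With these toy numbers the poor-biosecurity herds are far more likely to test positive, mirroring the direction of the association reported in the abstract.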

  14. Stereocomplex micelle from nonlinear enantiomeric copolymers efficiently transports antineoplastic drug

    NASA Astrophysics Data System (ADS)

    Wang, Jixue; Shen, Kexin; Xu, Weiguo; Ding, Jianxun; Wang, Xiaoqing; Liu, Tongjun; Wang, Chunxi; Chen, Xuesi

    2015-05-01

    Nanoscale polymeric micelles have attracted increasing attention as promising nanocarriers for the controlled delivery of antineoplastic drugs. Herein, the doxorubicin (DOX)-loaded poly(D-lactide)-based micelle (PDM/DOX), poly(L-lactide)-based micelle (PLM/DOX), and stereocomplex micelle (SCM/DOX) from an equimolar mixture of the enantiomeric four-armed poly(ethylene glycol)-polylactide (PEG-PLA) copolymers were successfully fabricated. In phosphate-buffered saline (PBS) at pH 7.4, SCM/DOX exhibited the smallest hydrodynamic diameter (Dh) of 90 ± 4.2 nm and the slowest DOX release compared with PDM/DOX and PLM/DOX. Moreover, PDM/DOX, PLM/DOX, and SCM/DOX exhibited nearly stable Dh values of around 115, 105, and 90 nm, respectively, at above-normal physiological conditions, which endowed them with great potential in controlled drug delivery. The intracellular DOX fluorescence intensity after incubation with the drug-laden micelles was weaker, to differing degrees, than that after incubation with free DOX·HCl within 12 h, probably due to the slow DOX release from the micelles. As the incubation time reached 24 h, all the cells incubated with the laden micelles, especially SCM/DOX, demonstrated a stronger intracellular DOX fluorescence intensity than the free DOX·HCl-cultured ones. More importantly, all the DOX-loaded micelles, especially SCM/DOX, exhibited potent antineoplastic efficacy in vitro, excellent serum albumin-tolerance stability, and satisfactory hemocompatibility. These encouraging data indicate that drug-loaded micelles from nonlinear enantiomeric copolymers, especially SCM/DOX, might be promising for clinical systemic chemotherapy through intravenous injection.

  15. Life Predicted in a Probabilistic Design Space for Brittle Materials With Transient Loads

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Palfi, Tamas; Reh, Stefan

    2005-01-01

    Analytical techniques have progressively become more sophisticated, and now we can consider the probabilistic nature of the entire space of random input variables on the lifetime reliability of brittle structures. This was demonstrated with NASA's CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code combined with the commercially available ANSYS/Probabilistic Design System (ANSYS/PDS), a probabilistic analysis tool that is an integral part of the ANSYS finite-element analysis program. ANSYS/PDS allows probabilistic loads, component geometry, and material properties to be considered in the finite-element analysis. CARES/Life predicts the time-dependent probability of failure of brittle material structures under generalized thermomechanical loading, such as that found in a turbine engine hot section. Glenn researchers coupled ANSYS/PDS with CARES/Life to assess the effects of the stochastic variables of component geometry, loading, and material properties on the predicted life of the component for fully transient thermomechanical and cyclic loading.

  16. Probabilistic Risk Assessment for Astronaut Post Flight Bone Fracture

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth; Myers, Jerry; Licata, Angelo

    2015-01-01

    Introduction: Space flight potentially reduces the loading that bone can resist before fracture. This reduction in bone integrity may result from a combination of factors, the most commonly reported being a reduction in astronaut BMD. Although evaluating the condition of bones continues to be a critical aspect of understanding space flight fracture risk, defining the loading regime, whether on earth, in microgravity, or in reduced gravity on a planetary surface, remains a significant component of estimating the fracture risks to astronauts. This presentation summarizes the concepts, development, and application of NASA's Bone Fracture Risk Module (BFxRM) to understanding pre-, post-, and in-mission astronaut bone fracture risk. The overview includes an assessment of contributing factors utilized in the BFxRM and illustrates how new information, such as the biomechanics of space suit design or a better understanding of post-flight activities, may influence astronaut fracture risk. Opportunities for the bone mineral research community to contribute to future model development are also discussed. Methods: To investigate the conditions in which spaceflight-induced changes to bone play a critical role in post-flight fracture probability, we implement a modified version of the NASA Bone Fracture Risk Model (BFxRM). Modifications included the incorporation of variations in physiological characteristics, post-flight recovery rate, and lateral fall conditions within the probabilistic simulation parameter space. The modeled fracture probability estimates for different loading scenarios at preflight and at 0 and 365 days post-flight are compared. Results: For simple lateral side falls, mean post-flight fracture probability is elevated over mean preflight fracture probability due to spaceflight-induced BMD loss and is not fully recovered at 365 days post-flight. In the case of more energetic falls, such as from elevated heights or with the addition of lateral movement, the contribution of spaceflight-induced bone changes is much less clear, indicating that more granular assessments, such as finite element modeling, may be needed to further assess the risks in these scenarios.

  17. Influence of triaxial braid denier on ribbon-based fiber reinforced dental composites.

    PubMed

    Karbhari, Vistasp M; Wang, Qiang

    2007-08-01

    The aim of the study was to compare the mechanical characteristics of two ultrahigh molecular weight polyethylene (UHMWPE) fiber-based triaxial braided reinforcements having different-denier braider yarns used in fiber reinforced dental composites, to elucidate differences in response and damage under flexural loading. Two commercially available triaxial braided reinforcing systems, differing in the denier of the axial and braider yarns, were used to reinforce rectangular bars toward the tensile surface, and the bars were tested in flexure. Mechanical characteristics including energy absorption were determined, and results were compared based on Tukey post-test analysis and Weibull probability. Limited fatigue testing was also conducted for 100, 1000, and 10,000 cycles at a level of 75% of peak load. The effect of the braid denier on damage mechanisms was studied microscopically. The use of the triaxially braided ribbon as fiber reinforcement in the dental composite results in significant enhancement in flexural performance over that of the unreinforced dental composite (179% and 183% increases for the "thin" and "dense" braid reinforced specimens, respectively), with a fairly ductile, non-catastrophic post-peak response. With the exception of strain at peak load, there was very little difference between the performance of the two braid architectures. The intrinsic nature of the triaxial braid also results in very little decrease in flexural strength as a result of fatigue cycling at 75% of peak load. Use of the braids results in peak load levels substantially higher than those at which the dentin and unreinforced dental composites would fail. The total energy at peak load is 56.8 and 60.7 times that at the level at which dentin would fail if the reinforcement were not placed, for the "thin" and "dense" braid reinforced composites, respectively. The research shows that in addition to enhancing flexural performance characteristics, the use of a triaxial braid provides significant damage tolerance and fatigue resistance through its characteristic architecture, wherein axial fibers are uncrimped and braider yarns provide shear resistance and enable local arrest of microcracks. Further, it is demonstrated that the decrease in braider yarn denier does not have a detrimental effect, with differences in performance characteristics being, in the main, statistically insignificant. This allows use of a thinner reinforcement, which provides ease of placement and better bonding without loss in performance.
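
    The Weibull probability analysis mentioned above models the scatter of flexural strength. A minimal sketch with invented parameters follows (the study's actual Weibull fits are not reproduced here):

```python
import math

# Two-parameter Weibull model of flexural strength, as commonly used to
# compare reinforced vs unreinforced specimens. Parameters are illustrative.
m, sigma0 = 8.0, 120.0   # Weibull modulus and characteristic strength (MPa)

def failure_probability(stress):
    """P(strength <= stress) under the two-parameter Weibull CDF."""
    return 1.0 - math.exp(-(stress / sigma0) ** m)

# At the characteristic strength the failure probability is 1 - 1/e
p_char = failure_probability(120.0)
```

    The Weibull modulus m controls scatter: a higher m concentrates failures near the characteristic strength sigma0, while a lower m spreads them over a wider stress range.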

  18. Wind Energy Management System Integration Project Incorporating Wind Generation and Load Forecast Uncertainties into Power Grid Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarov, Yuri V.; Huang, Zhenyu; Etingov, Pavel V.

    2010-09-01

    The power system balancing process, which includes the scheduling, real-time dispatch (load following), and regulation processes, is traditionally based on deterministic models. Since conventional generation needs time to be committed and dispatched to a desired megawatt level, the scheduling and load following processes use load and wind power production forecasts to achieve a future balance between conventional generation and energy storage on the one side, and system load, intermittent resources (such as wind and solar generation), and scheduled interchange on the other side. Although in real life the forecasting procedures imply some uncertainty around the load and wind forecasts (caused by forecast errors), only their mean values are actually used in the generation dispatch and commitment procedures. Since the actual load and intermittent generation can deviate from their forecasts, it becomes increasingly unclear (especially with the increasing penetration of renewable resources) whether the system will actually be able to meet the conventional generation requirements within the look-ahead horizon, what additional balancing efforts will be needed as we get closer to real time, and what additional costs will be incurred by those needs. In order to improve system control performance characteristics, maintain system reliability, and minimize expenses related to the system balancing functions, it becomes necessary to incorporate the predicted uncertainty ranges into the scheduling, load following, and, to some extent, regulation processes. It is also important to address the uncertainty problem comprehensively, by taking all sources of uncertainty (load, intermittent generation, generators' forced outages, etc.) into consideration. All aspects of uncertainty, such as the imbalance size (which equals the capacity needed to mitigate the imbalance) and the generation ramping requirement, must be taken into account.
    These unique features make this work a significant step forward toward the objective of incorporating wind, solar, load, and other uncertainties into power system operations. In this report, a new methodology to predict the uncertainty ranges for the required balancing capacity, ramping capability, and ramp duration is presented. Uncertainties created by system load forecast errors, wind and solar forecast errors, and generation forced outages are taken into account. The uncertainty ranges are evaluated for different confidence levels of having the actual generation requirements within the corresponding limits. The methodology helps to identify system balancing reserve requirements based on desired system performance levels, identify system “breaking points” where the generation system becomes unable to follow the generation requirement curve with the user-specified probability level, and determine the time remaining until these potential events. The approach includes three stages: statistical and actual data acquisition, statistical analysis of retrospective information, and prediction of future grid balancing requirements for specified time horizons and confidence intervals. Assessment of the capacity and ramping requirements is performed using a specially developed probabilistic algorithm based on a histogram analysis incorporating all sources of uncertainty and parameters of a continuous (wind forecast and load forecast errors) and discrete (forced generator outages and failures to start up) nature. Preliminary simulations using California Independent System Operator (California ISO) real-life data have shown the effectiveness of the proposed approach. A tool developed based on the new methodology described in this report will be integrated with the California ISO systems. Contractual work is currently in place to integrate the tool with the AREVA EMS system.
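
    The report's histogram-based algorithm is not reproduced here, but the core idea of combining continuous forecast errors with discrete outage events, and reading a balancing-capacity requirement off a confidence level, can be sketched by sampling. All numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Continuous part: combined load + wind forecast error (MW), zero-mean
errors = rng.normal(0.0, 100.0, size=n)

# Discrete part: forced outage of a 300 MW unit with probability 0.05
outage = rng.choice([0.0, 300.0], p=[0.95, 0.05], size=n)

# Total imbalance to be covered by balancing capacity
imbalance = errors + outage

# Balancing capacity needed to cover imbalances with 95% confidence
capacity_95 = np.percentile(imbalance, 95)
```

    A histogram of `imbalance` would play the role of the report's combined distribution; higher confidence levels correspond to larger required balancing capacity.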

  19. Patent Foramen Ovale as a Risk Factor for Altitude Decompression Illness

    DTIC Science & Technology

    2001-06-01

    TTE). In 48 volunteers at DCIEM screened for the PRP studies with a TTE, 14 (29%) were found to have an echo-probable PFO. In 29 altitude-exposed...subjects who had a TTE, there were 5 echo-probable PFOs. None of these 5 subjects experienced DCI. Two of these subjects had a high bubble load with grade...16% incidence (by TTE) of PFO amongst 24 cases of Type II altitude DCI using trans-esophageal echocardiography (TEE), suggesting that there was no

  20. Intradiscal pressure variation under spontaneous ventilation

    NASA Astrophysics Data System (ADS)

    Roriz, Paulo; Ferreira, J.; Potes, J. C.; Oliveira, M. T.; Santos, J. L.; Simões, J. A.; Frazão, O.

    2014-05-01

    The pressure measured in the intervertebral discs is a response to the loads acting on the spine. External loads, such as the reaction forces resulting from locomotion, manual handling, and collisions, are probably the most relevant in studying spine trauma. However, physiological functions such as breathing and heart rate also produce subtle variations of intradiscal pressure that can be observed only in vivo at rest. The present work is an effort to measure the effect of breathing on the intradiscal pressure of an anesthetized sheep.

  1. Tornado risks and design windspeeds for the Oak Ridge Plant Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1975-08-01

    The effects of tornadoes and other extreme winds should be considered in establishing design criteria for structures to resist wind loads. Design standards that are incorporated in building codes do not normally include the effects of tornadoes in their wind load criteria. Some tornado risk models ignore the presence of nontornadic extreme winds. The purpose of this study is to determine the probability of tornadic and straight winds exceeding a threshold value in the geographical region surrounding the Oak Ridge, Tennessee plant site.

  2. Long-term strength and damage accumulation in laminates

    NASA Astrophysics Data System (ADS)

    Dzenis, Yuris A.; Joshi, Shiv P.

    1993-04-01

    A modified version of the probabilistic model developed by the authors for damage evolution analysis of laminates subjected to random loading is utilized to predict the long-term strength of laminates. The model assumes that each ply in a laminate consists of a large number of mesovolumes. Probabilistic variation functions for mesovolume stiffnesses as well as strengths are used in the analysis. Stochastic strains are calculated using lamination theory and random function theory. Deterioration of ply stiffnesses is calculated on the basis of the probabilities of mesovolume failures, using the theory of excursions of a random process beyond limits. Long-term strength and damage accumulation in a Kevlar/epoxy laminate under tension and complex in-plane loading are investigated. Effects of the mean level and stochastic deviation of loading on damage evolution and time-to-failure of the laminate are discussed. Long-term cumulative damage at the time of final failure is greater at low loading levels than at high loading levels. The effect of the deviation in loading is more pronounced at lower mean loading levels.
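
    The excursion-based failure idea can be illustrated with a crude Monte Carlo sketch: sample discretized load histories and count those whose peak exceeds a strength limit. Treating load values as independent draws is a deliberate simplification of the random-process theory the paper uses, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

mean_load, load_std, strength = 50.0, 10.0, 90.0   # illustrative units
n_paths, n_steps = 50_000, 100                     # paths x time steps

# Sample discretized load histories (independent steps, a simplification)
paths = rng.normal(mean_load, load_std, size=(n_paths, n_steps))

# A path "fails" if its load ever crosses the strength limit (an excursion)
failed = paths.max(axis=1) >= strength
p_failure = failed.mean()
```

    Raising `mean_load` or `load_std` sharply increases `p_failure`, which is the qualitative dependence of failure probability on the mean level and deviation of loading that the abstract discusses.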

  3. Smisc - A collection of miscellaneous functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landon Sego, PNNL

    2015-08-31

    A collection of functions for statistical computing and data manipulation. These include routines for rapidly aggregating heterogeneous matrices, manipulating file names, loading R objects, sourcing multiple R files, formatting datetimes, multi-core parallel computing, stream editing, specialized plotting, etc. Selected functions include:

    - allMissing: identifies missing rows or columns in a data frame or matrix
    - as.numericSilent: silent wrapper for coercing a vector to numeric
    - comboList: produces all possible combinations of a set of linear model predictors
    - cumMax: computes the maximum of a vector up to the current index
    - cumsumNA: computes the cumulative sum of a vector without propagating NAs
    - d2binom, p2binom: probability functions for the sum of two independent binomials
    - dkbinom, pkbinom: probability functions for the sum of k independent binomials
    - dbb, pbb, qbb, rbb: the beta-binomial distribution
    - dataIn: a flexible way to import data into R
    - df2list: row-wise conversion of a data frame to a list
    - dfplapply: parallelized single-row processing of a data frame
    - dframeEquiv: examines the equivalence of two data frames or matrices
    - factor2character: converts all factor variables in a data frame to character variables
    - findDepMat: identifies linearly dependent rows or columns in a matrix
    - formatDT: converts date or datetime strings into alternate formats
    - getExtension, getPath, grabLast: filename manipulations (remove or extract the extension or path)
    - ifelse1: non-vectorized version of ifelse
    - integ: simple numerical integration routine
    - interactionPlot: two-way interaction plot with error bars
    - linearMap: linear mapping of a numerical vector or scalar
    - list2df: converts a list to a data frame
    - loadObject: loads and returns the object(s) in an ".Rdata" file
    - more: displays the contents of a file in the R terminal
    - movAvg2: calculates the moving average using a two-sided window
    - openDevice: opens a graphics device based on the filename extension
    - padZero: pads a vector of numbers with zeros
    - parseJob: parses a collection of elements into (almost) equal-sized groups
    - pcbinom: a continuous version of the binomial cdf
    - plapply: simple parallelization of lapply
    - plotFun: plots one or more functions on a single plot
    - PowerData: an example of power data
    - pvar: prints the name and value of one or more objects
    - and numerous others (space limits reporting)

  4. Mechanical failure probability of glasses in Earth orbit

    NASA Technical Reports Server (NTRS)

    Kinser, Donald L.; Wiedlocher, David E.

    1992-01-01

    Results of five years of earth-orbital exposure on the mechanical properties of glasses indicate that radiation effects on the mechanical properties of the glasses examined are less than the probable error of measurement. During the 5-year exposure, seven micrometeorite or space debris impacts occurred on the samples examined. These impacts occurred at locations that were not subjected to effective mechanical testing, so only limited information on their influence on mechanical strength was obtained. Combining these results with the micrometeorite and space debris impact frequencies obtained by other experiments permits estimates of the failure probability of glasses exposed to mechanical loading under earth-orbit conditions. This probabilistic failure prediction is described and illustrated with examples.
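
    One simple way such a failure estimate can be framed (an assumption of this sketch, not necessarily the authors' formulation) is as a Poisson impact process in which each impact fails a loaded pane with some probability:

```python
import math

# Illustrative inputs (invented): impact rate, exposed area, mission time,
# and the chance that a single impact fails a mechanically loaded pane.
lam = 0.4                 # impacts per m^2 per year
area = 0.5                # exposed glass area (m^2)
t = 5.0                   # exposure time (years)
p_fail_per_impact = 0.2   # P(failure | impact) at the applied load

# Expected number of failure-causing impacts, and Poisson failure probability
expected_critical_impacts = lam * area * t * p_fail_per_impact
p_failure = 1.0 - math.exp(-expected_critical_impacts)
```

    The same structure lets the load enter through `p_fail_per_impact`: a higher applied stress makes each impact more likely to be critical, raising the overall failure probability.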

  5. System reliability of randomly vibrating structures: Computational modeling and laboratory testing

    NASA Astrophysics Data System (ADS)

    Sundar, V. S.; Ammanagi, S.; Manohar, C. S.

    2015-09-01

    The problem of determining the system reliability of randomly vibrating structures arises in many application areas of engineering. We discuss in this paper approaches based on Monte Carlo simulations and laboratory testing to tackle problems of time-variant system reliability estimation. The strategy we adopt is based on the application of Girsanov's transformation to the governing stochastic differential equations, which enables estimation of the probability of failure with significantly fewer samples than are needed in a direct simulation study. Notably, we show that the ideas from Girsanov-transformation-based Monte Carlo simulations can be extended to laboratory testing, so that the system reliability of engineering structures can be assessed with fewer samples and hence reduced testing times. Illustrative examples include computational studies on a 10-degree-of-freedom nonlinear system model and laboratory/computational investigations of the road load response of an automotive system tested on a four-post test rig.
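
    The variance-reduction idea behind the Girsanov approach can be shown in a toy static setting: sample from a distribution shifted toward the failure region and reweight each sample by its likelihood ratio. This one-dimensional sketch is an analogy, not the paper's stochastic-differential-equation formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
n, shift, level = 100_000, 4.0, 4.0

# Estimate the rare-event probability P(X > 4) for X ~ N(0,1) by sampling
# from the shifted density N(shift, 1), which concentrates samples near the
# failure region.
x = rng.normal(shift, 1.0, size=n)

# Likelihood ratio dN(0,1)/dN(shift,1): exp(-shift*x + shift^2/2)
lr = np.exp(-shift * x + 0.5 * shift**2)

# Unbiased importance-sampling estimate of P(X > level)
p_hat = np.mean((x > level) * lr)
```

    A direct simulation would see only a handful of exceedances of this event (P is about 3e-5) in 10^5 samples; the shifted sampler places roughly half its samples in the failure region, which is what cuts the required sample count.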

  6. Analysis of composite laminates with multiple fasteners by boundary collocation technique

    NASA Astrophysics Data System (ADS)

    Sergeev, Boris Anatolievich

    Mechanical fasteners remain the primary means of load transfer between structural components made of composite laminates. As operational loads continue to grow in pursuit of greater structural efficiency, the load carried by each fastener increases accordingly. This accelerates the initiation of fatigue-related cracks near the fastener holes and increases the probability of failure. Therefore, the assessment of the stresses around the fastener holes and the stress intensity factors associated with edge cracks becomes critical for damage-tolerant design. Because of the presence of unknown contact stresses and an unknown contact region between the fastener and the laminate, the analysis of a pin-loaded hole is considerably more complex than that of a traction-free hole. The accurate prediction of the contact stress distribution along the hole boundary is critical for determining the stress intensity factors and is essential for reliable strength evaluation and failure prediction. This study concerns the development of an analytical methodology, based on the boundary collocation technique, to determine the contact stresses and stress intensity factors required for strength and life prediction of bolted joints with many fasteners. It provides an analytical capability for determining the nonlinear contact stresses in mechanically fastened composite laminates while capturing the effects of finite geometry, the presence of edge cracks, interaction among fasteners, material anisotropy, fastener flexibility, fastener-hole clearance, friction between the pin and the laminate, and by-pass loading. Also, the proposed approach permits the determination of the fastener load distribution, which significantly influences the failure load of a multi-fastener joint. The well-known influence of fastener tightening torque (clamping force) on the load distribution among the fasteners in a multi-fastener joint is taken into account by means of a bilinear representation of the elastic fastener deflection. Finally, two different failure criteria, maximum strains averaged over characteristic distances and the Tsai-Wu criterion, were used to predict the failure load and failure mode in two composite-aluminum joints. The comparison of the present predictions with published experimental results shows good agreement.

  7. Modulations of stratospheric ozone by volcanic eruptions

    NASA Technical Reports Server (NTRS)

    Blanchette, Christian; Mcconnell, John C.

    1994-01-01

    We have used a time series of aerosol surface area based on the measurements of Hofmann to investigate the modulation of total column ozone caused by the perturbation of gas-phase chemistry by the reaction N2O5(gas) + H2O(aero) yields 2HNO3(gas) on the surface of stratospheric aerosols. We have tested a range of values for its reaction probability, gamma = 0.02, 0.13, and 0.26, which we compared to unperturbed homogeneous chemistry. Our analysis spans a period from Jan. 1974 to Oct. 1994. The results suggest that if lower values of gamma are the norm, then we would expect larger ozone losses for highly enhanced aerosol content than for larger values of gamma. The ozone layer is more sensitive to the magnitude of the reaction probability under background conditions than during volcanically active periods. For most conditions, the conversion of NO2 to HNO3 is saturated for reaction probabilities in the range of laboratory measurements, but it is fully saturated only following major volcanic eruptions, when the heterogeneous loss dominates the losses of N2O5. The ozone loss due to this heterogeneous reaction increases with increasing chlorine load. The calculated total ozone losses are comparable to the ozone losses reported from TOMS and Dobson data.
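
    The first-order heterogeneous loss rate implied by a reaction probability gamma is conventionally k = gamma * v_mean * SA / 4, with v_mean the mean molecular speed and SA the aerosol surface area density. The sketch below uses this standard formula with illustrative stratospheric values, an assumption rather than the paper's actual model inputs.

```python
import math

# Standard free-molecular uptake expression: k = gamma * v_mean * SA / 4.
# Inputs are illustrative; not the Hofmann aerosol time series of the study.
R, T, M = 8.314, 220.0, 0.108   # gas constant (J/mol/K), stratospheric T (K), N2O5 molar mass (kg/mol)
v_mean = math.sqrt(8 * R * T / (math.pi * M))   # mean molecular speed (m/s)

SA = 1.0e-6   # aerosol surface area density, m^2 per m^3 (~background level)

# First-order N2O5 loss rates (1/s) for the reaction probabilities tested
ks = {gamma: gamma * v_mean * SA / 4.0 for gamma in (0.02, 0.13, 0.26)}
```

    Even at background aerosol loading the implied N2O5 lifetimes (1/k) are on the order of days for these gammas, which is consistent with the conversion saturating once aerosol surface area is volcanically enhanced by an order of magnitude or more.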

  8. Preliminary analysis of the span-distributed-load concept for cargo aircraft design

    NASA Technical Reports Server (NTRS)

    Whitehead, A. H., Jr.

    1975-01-01

    A simplified computer analysis of the span-distributed-load airplane (in which payload is placed within the wing structure) has shown that the span-distributed-load concept has high potential for application to future air cargo transport design. Significant increases in payload fraction over current wide-bodied freighters are shown for gross weights in excess of 0.5 Gg (1,000,000 lb). A cruise-matching calculation shows that the trend toward higher aspect ratio improves overall efficiency; that is, less thrust and fuel are required. The optimal aspect ratio probably is not determined by structural limitations. Terminal-area constraints and increasing design-payload density, however, tend to limit aspect ratio.

  9. Optimizing Aggregation Scenarios for Integrating Renewable Energy into the U.S. Electric Grid

    NASA Astrophysics Data System (ADS)

    Corcoran, B. A.; Jacobson, M. Z.

    2010-12-01

    This study is an analysis of 2006 and 2007 electric load data, wind speed and solar irradiance data, and existing hydroelectric, geothermal, and other power plant data to quantify the benefits of aggregating clean electric power from various Federal Energy Regulatory Commission (FERC) regions in the contiguous United States. First, various time series, statistics, and probability methods are applied to the electric load data to determine whether combining the electricity demands from geographically and temporally diverse areas yields desirable demand-side results, specifically reduced variability and/or coincidence of peak events, which could reduce the number of required carbon-based generators. Second, an optimization algorithm is applied to determine the least-cost portfolio of energy resources to meet the electric load for a range of renewable portfolio standards (RPSs) for each FERC region and for various aggregation scenarios. Finally, the installed capacities, ramp rates, standard deviations, and corresponding generator requirements from these optimization runs are compared against the transmission requirements to determine the most economical organizational structure for the contiguous U.S. electric grid. Ideally, results from this study will help to justify and identify a possible structure for a federal RPS and offer insight into how best to organize regions for transmission planning.

  10. Mesoscale Fracture Analysis of Multiphase Cementitious Composites Using Peridynamics

    PubMed Central

    Yaghoobi, Amin; Chorzepa, Mi G.; Kim, S. Sonny; Durham, Stephan A.

    2017-01-01

    Concrete is a complex heterogeneous material, and thus, it is important to develop numerical modeling methods to enhance the prediction accuracy of the fracture mechanism. In this study, a two-dimensional mesoscale model is developed using a non-ordinary state-based peridynamic (NOSBPD) method. Fracture in a concrete cube specimen subjected to pure tension is studied. The heterogeneous materials, consisting of coarse aggregates, interfacial transition zones, air voids, and cementitious matrix, are characterized as particle points in a two-dimensional mesoscale model. Coarse aggregates and voids are generated using uniform probability distributions, and a statistical study is provided to capture the effect of random distributions of the constituent materials. In obtaining the steady-state response, an incremental and iterative solver is adopted for the dynamic relaxation method. Load-displacement curves and damage patterns are compared with available experimental and finite element analysis (FEA) results. Although the proposed model uses much simpler material damage models and discretization schemes, the load-displacement curves show no appreciable difference from the FEA results. Furthermore, no mesh refinement is necessary, as fracture is inherently characterized by bond breakages. Finally, a sensitivity study is conducted to understand the effect of aggregate volume fraction and porosity on the load capacity of the proposed mesoscale model. PMID:28772518

  11. Thermal Stimuli-Triggered Drug Release from a Biocompatible Porous Metal-Organic Framework.

    PubMed

    Jiang, Ke; Zhang, Ling; Hu, Quan; Zhang, Qi; Lin, Wenxin; Cui, Yuanjing; Yang, Yu; Qian, Guodong

    2017-07-26

    Drug delivery carriers with a high drug loading capacity and biocompatibility, especially for controlled drug release, are urgently needed due to the side effects and frequent dosing of traditional therapeutic methods. In our work, a Zr-based metal-organic framework named ZJU-801, which is isoreticular with NU-801, has been designed and further demonstrated as an excellent drug delivery system (DDS) with a high drug loading of 41.7%. Such a high drug loading capacity may be ascribed to the appropriate match between the drug size and the large pore volume of this Zr-MOF material. Compared with DS@NU-801, this DDS successfully achieves on-command, heating-activated drug release, which is probably attributable to the bulkier ligand, the better stability, and the intense π-π interaction between ZJU-801 and diclofenac sodium (DS), as demonstrated comprehensively by SEM, powder X-ray diffraction (PXRD), FTIR, and 13C solid-state NMR spectroscopy as well as computer simulations. It is worth noting that premature drug release was avoided effectively without any complicated post-modifications. The low cytotoxicity and good biocompatibility of our DDS were confirmed by favorable in vitro results from an MTT assay, a WST-1 assay, and confocal microscopy imaging. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Thermo-sensitively and magnetically ordered mesoporous carbon nanospheres for targeted controlled drug release and hyperthermia application.

    PubMed

    Chen, Lin; Zhang, Huan; Zheng, Jing; Yu, Shiping; Du, Jinglei; Yang, Yongzhen; Liu, Xuguang

    2018-03-01

    A multifunctional nanoplatform based on thermo-sensitively and magnetically ordered mesoporous carbon nanospheres (TMOMCNs) is developed for effective targeted, controlled release of doxorubicin hydrochloride (DOX) and for hyperthermia. The morphology, specific surface area, porosity, thermo-stability, thermo-sensitivity, and magnetic properties of the TMOMCNs were verified by high-resolution transmission electron microscopy, field emission scanning electron microscopy, thermo-gravimetric analysis, X-ray diffraction, Brunauer-Emmett-Teller surface area analysis, dynamic light scattering and vibrating sample magnetometry. The results indicate that the TMOMCNs have an average diameter of ~146 nm with a lower critical solution temperature at around 39.5°C. They are superparamagnetic with a magnetization of 10.15 emu/g at 20 kOe. They generate heat when an inductive magnetic field is applied and have a normalized specific absorption rate of 30.23 W/g at 230 kHz and 290 Oe, showing good potential for hyperthermia. The DOX loading and release results show that the loading capacity is 135.10 mg/g and that the release performance can be regulated by changing pH and temperature. The good targeting, DOX loading and release, and hyperthermia properties of TMOMCNs offer new possibilities for highly effective, low-toxicity cancer chemotherapy. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Consecutive rebounds in plasma viral load are associated with virological failure at 52 weeks among HIV-infected patients.

    PubMed

    Raboud, Janet M; Rae, Sandra; Woods, Ryan; Harris, Marianne; Montaner, Julio S G

    2002-08-16

    To describe the characteristics and predictors of transient plasma viral load (pVL) rebounds among patients on stable antiretroviral therapy and to determine the effect of one or more pVL rebounds on virological response at week 52. Individual data were combined from 358 patients from the INCAS, AVANTI-2 and AVANTI-3 studies. Logistic regression models were used to determine the relationship between the magnitude of an increase in pVL and the probability of returning to the lower limit of quantification (LLOQ: 20-50 copies/ml) and to determine the odds of virological success at 52 weeks associated with single and consecutive pVL rebounds. A group of 165 patients achieved a pVL nadir < LLOQ; of these, 85 patients experienced pVL rebounds within 52 weeks. The probability of a pVL rebound was greater among patients who did not adhere to treatment (68% vs 49%; P < 0.05). The probability of reachieving virological suppression after a pVL rebound was not associated with the magnitude of the rebound [odds ratio (OR), 0.86; P = 0.56] but was associated with triple therapy (OR, 2.22; P = 0.06) or non-adherence (OR, 0.40; P = 0.04). The probability of virological success at week 52 was not associated with an isolated pVL rebound but was less likely after detectable pVL at two consecutive visits. An isolated pVL rebound was not associated with virological success at 52 weeks but rebounds at two consecutive visits decreased the probability of later virological success. Given their high risk of short-term virological failure, patients who present with consecutive detectable pVL measurements following complete suppression should be considered ideal candidates for intervention studies.
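
    As a small illustration of how the reported odds ratios act on a probability, the sketch below updates a baseline probability of re-suppression by a logistic-regression odds ratio; the 50% baseline is a hypothetical value chosen for illustration, and only the OR of 2.22 for triple therapy comes from the abstract:

```python
def apply_odds_ratio(p_base, oratio):
    """Update a baseline probability by a logistic-regression odds ratio:
    convert to odds, scale by the OR, convert back to a probability."""
    odds = p_base / (1.0 - p_base) * oratio
    return odds / (1.0 + odds)

# reported OR for triple therapy was 2.22; the 50% baseline is illustrative
print(f"{apply_odds_ratio(0.50, 2.22):.2f}")  # → 0.69
```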

  14. Multi-Hazard Assessment of Scour Damaged Bridges with UAS-Based Measurements

    NASA Astrophysics Data System (ADS)

    Özcan, O.; Ozcan, O.

    2017-12-01

    Flood- and stream-induced scour at bridge piers constructed on rivers is one of the most commonly observed causes of bridge failure. Assessing scour-induced failure risk in bridges and determining how bridge safety changes under seismic effects is therefore of critical importance. To determine bridge safety under scour effects, the amount of scour at the bridge piers should be estimated realistically and should be tracked and updated continuously. In this way, scour-induced failures of bridge foundation systems can be prevented and bridge substructures designed safely. In this study, unmanned aircraft system (UAS) based measurement methods were implemented to measure the amount of scour in the bridge load-bearing system (pile foundations and pile abutments) and to obtain very high-resolution three-dimensional models of the river flood plain for flood analysis. UAS-based measurement systems provide a new and practical approach, offering high-precision and reliable solutions compared with conventional measurement systems. The reinforced concrete (RC) bridge on the Antalya Boğaçayı River, Turkey, which failed in 2003 due to flood-induced scour, was selected as the case study. The amount of scour that occurred at the bridge piers and piles was determined realistically, and the behavior of the bridge piers under scour effects was investigated. Future flood effects and the resulting amount of scour were determined with HEC-RAS software, using digital surface models of the riverbed obtained at regular intervals with the UAS. In light of the measured scour and the scour expected after a probable flood event, the behavior of the scour-damaged RC bridge was investigated by pushover and time-history analyses under lateral and vertical seismic loadings. In the analyses, the load and displacement capacity of the bridge was observed to diminish significantly under the expected scour.
    The deterioration in the multi-hazard performance of the bridge was thus quantified in light of the updated capacity of the bridge load-bearing system. Based on the case study, a UAS-based, continuously updated multi-hazard risk detection system was established that can be used for bridges located on riverbeds.

  15. Meso-Scale Modeling of Spall in a Heterogeneous Two-Phase Material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Springer, Harry Keo

    2008-07-11

    The influence of the heterogeneous second-phase particle structure and applied loading conditions on the ductile spall response of a model two-phase material was investigated. Quantitative metallography, three-dimensional (3D) meso-scale simulations (MSS), and small-scale spall experiments provided the foundation for this study. Nodular ductile iron (NDI) was selected as the model two-phase material for this study because it contains a large and readily identifiable second-phase particle population. Second-phase particles serve as the primary void nucleation sites in NDI and are, therefore, central to its ductile spall response. A mathematical model was developed for the NDI second-phase volume fraction that accounted for the non-uniform particle size and spacing distributions within the framework of a length-scale dependent Gaussian probability distribution function (PDF). This model was based on novel multiscale sampling measurements. A methodology was also developed for the computer generation of representative particle structures based on their mathematical description, enabling 3D MSS. MSS were used to investigate the effects of second-phase particle volume fraction and particle size, loading conditions, and physical domain size of simulation on the ductile spall response of a model two-phase material. MSS results reinforce existing model predictions, where the spall strength metric (SSM) logarithmically decreases with increasing particle volume fraction. While SSM predictions are nearly independent of applied load conditions at lower loading rates, which is consistent with previous studies, loading dependencies are observed at higher loading rates. There is also a logarithmic decrease in SSM for increasing (initial) void size. A model was developed to account for the effects of loading rate, particle size, matrix sound-speed, and, in the NDI-specific case, the probabilistic particle volume fraction model.
    Small-scale spall experiments were designed and executed for the purpose of validating closely-coupled 3D MSS. While the spall strength is nearly independent of specimen thickness, the fragment morphology varies widely. Detailed MSS demonstrate that the interactions between the tensile release waves are altered by specimen thickness and that these interactions are primarily responsible for fragment formation. MSS also provided insights on the regional amplification of damage, which enables the development of predictive void evolution models.

  16. Deterministic and reliability based optimization of integrated thermal protection system composite panel using adaptive sampling techniques

    NASA Astrophysics Data System (ADS)

    Ravishankar, Bharani

    Conventional space vehicles have thermal protection systems (TPS) that protect an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential but complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS, subjected to thermal and mechanical loads, through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce this expense, a finite element based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, homogenization was found to be applicable only for panels that are much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence a single unit cell was used in the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This demands further computationally expensive finite element analyses, which were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately in order to find the optimum design. Instead of building global surrogate models from a large number of designs, the computational resources were directed towards target regions near the constraint boundaries for accurate representation of the constraints using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space.
    EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that with adaptive sampling the number of designs required to find the optimum was reduced drastically while the accuracy improved. The system reliability of the ITPS was estimated using a Monte Carlo simulation (MCS) based method. A separable Monte Carlo method was employed that allows separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties, and loading conditions of the panel, as well as error in the finite element modeling. These uncertainties further increased the computational cost of the MCS techniques, which was again reduced by employing surrogate models. To estimate the error in the probability of failure estimate, a bootstrapping method was applied. This research thus demonstrates optimization of the ITPS composite panel with multiple failure modes and a large number of uncertainties using adaptive sampling techniques.
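
    A minimal sketch of the separable Monte Carlo idea, using a single load variable and a single capacity variable with assumed normal distributions (the actual analysis used surrogate models of the ITPS failure constraints, not these toy distributions):

```python
import random

def separable_mc_pf(n_cap, n_load, seed=0):
    """Separable Monte Carlo: sample capacity and load independently and
    compare every capacity sample against every load sample, so the failure
    indicator is averaged over n_cap * n_load pairs rather than n pairs."""
    rng = random.Random(seed)
    caps = [rng.gauss(100.0, 10.0) for _ in range(n_cap)]    # structural capacity
    loads = [rng.gauss(70.0, 12.0) for _ in range(n_load)]   # applied load
    fails = sum(1 for c in caps for ld in loads if ld > c)
    return fails / (n_cap * n_load)

pf = separable_mc_pf(2000, 2000)
print(f"estimated probability of failure: {pf:.4f}")
```

    With these distributions the exact answer is about 0.027; the separable estimator reaches it with far fewer samples of each variable than crude Monte Carlo would need pairs.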

  17. 25-Hydroxycholecalciferol response to single oral cholecalciferol loading in the normal weight, overweight, and obese.

    PubMed

    Camozzi, V; Frigo, A C; Zaninotto, M; Sanguin, F; Plebani, M; Boscaro, M; Schiavon, L; Luisetto, G

    2016-08-01

    After a single cholecalciferol load, peak serum 25-hydroxycholecalciferol (25OHD) is lower in individuals with a higher body mass index (BMI), probably due to it being distributed in a greater volume. Its subsequent disappearance from the serum is slower the higher the individual's BMI, probably due to the combination of a larger body volume and a slower release into the circulation of vitamin D stored in adipose tissue. The aim of the study is to examine 25-hydroxycholecalciferol (25OHD) response to a single oral load of cholecalciferol in the normal weight, overweight, and obese. We considered 55 healthy women aged from 25 to 67 years (mean ± SD, 50.8 ± 9.5) with a BMI ranging from 18.7 to 42 kg/m(2) (mean ± SD, 27.1 ± 6.0). The sample was divided into three groups by BMI: 20 were normal weight (BMI ≤ 25 kg/m(2)), 21 overweight (25.1 ≤ BMI ≤ 29.9 kg/m(2)), and 14 obese (BMI ≥ 30 kg/m(2)). Each subject was given 300,000 IU of cholecalciferol orally during lunch. A fasting blood test was obtained before cholecalciferol loading and then 7, 30, and 90 days afterwards to measure serum 25OHD, 1,25 dihydroxyvitamin D [1,25 (OH)2D], parathyroid hormone (PTH), calcium (Ca), and phosphorus (P). Participants' absolute fat mass was measured using dual energy X-ray absorptiometry (DEXA). The fat mass of the normal weight subjects was significantly lower than that of the overweight, which in turn was lower than that of the obese participants. Serum 25OHD levels increased significantly in all groups, peaking 1 week after the cholecalciferol load. Peak serum 25OHD levels were lower the higher the individuals' BMI. After peaking, the 25OHD levels gradually decreased, following a significantly different trend in the three groups. The slope was similar for the overweight and obese, declining significantly more slowly than in the normal weight group. 
In the sample as a whole, there was a weakly significant negative correlation between fat mass and baseline 25OHD level, while this correlation became strongly significant at all time points after cholecalciferol loading. The lower peak 25OHD levels seen in the obese and overweight is probably due to the cholecalciferol load being distributed in a larger body volume. The longer persistence of 25OHD in their serum could be due to both their larger body volume and a slower release into the circulation of the vitamin D stored in their adipose tissue.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler

    This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance constraints; particularly, the mean and covariance matrix of the forecast errors are updated online, and leveraged to enforce voltage regulation with predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
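
    The Chebyshev-based (one-sided Cantelli) bound behind such distributionally robust voltage constraints can be sketched as follows; the voltage mean, standard deviation, and limit below are hypothetical values, not from the paper:

```python
import math

def cheb_voltage_bound(mu, sigma, eps):
    """Distributionally robust (Cantelli) bound: for ANY distribution with
    mean mu and standard deviation sigma, P(V <= mu + k*sigma) >= 1 - eps
    when k = sqrt((1 - eps)/eps). Returns the worst-case (1-eps)-quantile."""
    k = math.sqrt((1.0 - eps) / eps)
    return mu + k * sigma

# enforce a voltage limit with at least 95% probability (eps = 0.05):
# the constraint "worst-case quantile <= 1.05 p.u." replaces the chance constraint
vmax_robust = cheb_voltage_bound(mu=1.02, sigma=0.01, eps=0.05)
print(f"robust 95% voltage bound: {vmax_robust:.4f} p.u.")  # ≈ 1.0636
```

    The bound is conservative by construction, which is the price paid for making no distributional assumption beyond the online-estimated mean and covariance.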

  19. Load and resistance factor design of bridge foundations accounting for pile group-soil interaction.

    DOT National Transportation Integrated Search

    2015-11-01

    Pile group foundations are used in most foundation solutions for transportation structures. Rigorous and reliable pile design methods are : required to produce designs whose level of safety (probability of failure) is known. By utilizing recently dev...

  20. Feasibility of Estimating Constituent Concentrations and Loads Based on Data Recorded by Acoustic Instrumentation

    USGS Publications Warehouse

    Lietz, A.C.

    2002-01-01

    The acoustic Doppler current profiler (ADCP) and acoustic Doppler velocity meter (ADVM) were used to estimate constituent concentrations and loads at a sampling site along the Hendry-Collier County boundary in southwestern Florida. The sampling site is strategically placed within a highly managed canal system that exhibits low and rapidly changing water conditions. With the ADCP and ADVM, flow can be gaged more accurately than by conventional field-data collection methods. An ADVM velocity rating relates measured velocity determined by the ADCP (dependent variable) with the ADVM velocity (independent variable) by means of regression analysis techniques. The coefficient of determination (R2) for this rating is 0.99 at the sampling site. Concentrations and loads of total phosphorus, total Kjeldahl nitrogen, and total nitrogen (dependent variables) were related to instantaneous discharge, acoustic backscatter, stage, or water temperature (independent variables) recorded at the time of sampling. Only positive discharges were used for this analysis. Discharges less than 100 cubic feet per second generally are considered inaccurate (probably as a result of acoustic ray bending and vertical temperature gradients in the water column). Of the concentration models, only total phosphorus was statistically significant at the 95-percent confidence level (p-value less than 0.05). Total phosphorus had an adjusted R2 of 0.93, indicating most of the variation in the concentration can be explained by the discharge. All of the load models for total phosphorus, total Kjeldahl nitrogen, and total nitrogen were statistically significant. Most of the variation in load can be explained by the discharge as reflected in the adjusted R2 for total phosphorus (0.98), total Kjeldahl nitrogen (0.99), and total nitrogen (0.99).
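
    The regression step relating a constituent load to discharge can be sketched as an ordinary least squares fit in log space; the data below are synthetic, chosen only so that the recovered exponent is known in advance:

```python
import math

def fit_log_log(qs, loads):
    """Fit log(load) = a + b*log(Q) by ordinary least squares (stdlib only)."""
    xs = [math.log(q) for q in qs]
    ys = [math.log(ld) for ld in loads]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# synthetic rating data: load proportional to Q^1.5
qs = [120.0, 180.0, 260.0, 400.0, 510.0]
loads = [0.02 * q ** 1.5 for q in qs]
a, b = fit_log_log(qs, loads)
print(f"recovered exponent b ≈ {b:.2f}")  # ≈ 1.50
```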

  1. Modelling the vertical distribution of canopy fuel load using national forest inventory and low-density airborne laser scanning data

    PubMed Central

    Castedo-Dorado, Fernando; Hevia, Andrea; Vega, José Antonio; Vega-Nieva, Daniel; Ruiz-González, Ana Daría

    2017-01-01

    The fuel complex variables canopy bulk density and canopy base height are often used to predict crown fire initiation and spread. Direct measurement of these variables is impractical, and they are usually estimated indirectly by modelling. Recent advances in predicting crown fire behaviour require accurate estimates of the complete vertical distribution of canopy fuels. The objectives of the present study were to model the vertical profile of available canopy fuel in pine stands by using data from the Spanish national forest inventory plus low-density airborne laser scanning (ALS) metrics. In a first step, the vertical distribution of the canopy fuel load was modelled using the Weibull probability density function. In a second step, two different systems of models were fitted to estimate the canopy variables defining the vertical distributions; the first system related these variables to stand variables obtained in a field inventory, and the second system related the canopy variables to airborne laser scanning metrics. The models of each system were fitted simultaneously to compensate for the effects of the inherent cross-model correlation between the canopy variables. Heteroscedasticity was also analyzed, but no correction in the fitting process was necessary. The estimated canopy fuel load profiles from field variables explained 84% and 86% of the variation in canopy fuel load for maritime pine and radiata pine respectively; whereas the estimated canopy fuel load profiles from ALS metrics explained 52% and 49% of the variation for the same species. The proposed models can be used to assess the effectiveness of different forest management alternatives for reducing crown fire hazard. PMID:28448524
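
    The role of the Weibull probability density function in describing the vertical fuel profile can be sketched as follows; the shape and scale parameters (in metres) are hypothetical, not the fitted values from the study:

```python
import math

def weibull_pdf(h, shape, scale):
    """Weibull density describing the vertical distribution of canopy
    fuel load as a function of height h (h >= 0)."""
    z = (h / scale) ** shape
    return (shape / scale) * (h / scale) ** (shape - 1) * math.exp(-z)

def weibull_cdf(h, shape, scale):
    """Cumulative fraction of total canopy fuel below height h."""
    return 1.0 - math.exp(-((h / scale) ** shape))

shape, scale = 2.8, 9.0  # hypothetical fitted parameters (metres)
# fraction of the canopy fuel load held between 6 m and 12 m
frac = weibull_cdf(12.0, shape, scale) - weibull_cdf(6.0, shape, scale)
print(f"fuel fraction between 6 m and 12 m: {frac:.2f}")
print(f"fuel density at 9 m: {weibull_pdf(9.0, shape, scale):.3f} per metre")
```

    Integrating the density between two heights in this way is what lets a fitted profile feed crown fire behaviour models that need canopy bulk density over arbitrary height bands.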

  2. Sensitivity Analysis of the Bone Fracture Risk Model

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth; Myers, Jerry; Sibonga, Jean Diane

    2017-01-01

    Introduction: The probability of bone fracture during and after spaceflight is quantified to aid in mission planning, to determine required astronaut fitness standards and training requirements, and to inform countermeasure research and design. Probability is quantified with a probabilistic modeling approach in which distributions of model parameter values, instead of single deterministic values, capture the parameter variability within the astronaut population, and fracture predictions are probability distributions with a mean value and an associated uncertainty. Because of this uncertainty, the model in its current state cannot discern an effect of countermeasures on fracture probability, for example between use and non-use of bisphosphonates or between spaceflight exercise performed with the Advanced Resistive Exercise Device (ARED) or on devices prior to installation of ARED on the International Space Station. This is thought to be due to the inability to measure key contributors to bone strength, for example geometry and volumetric distributions of bone mass, with areal bone mineral density (BMD) measurement techniques. To further the applicability of the model, we performed a parameter sensitivity study aimed at identifying the parameter uncertainties that most affect the model forecasts, in order to determine which areas of the model need enhancement to reduce uncertainty. Methods: The bone fracture risk model (BFxRM), originally published in (Nelson et al), is a probabilistic model that can assess the risk of astronaut bone fracture. This is accomplished by utilizing biomechanical models to assess the applied loads; utilizing models of spaceflight BMD loss in at-risk skeletal locations; quantifying bone strength through a relationship between areal BMD and bone failure load; and relating fracture risk index (FRI), the ratio of applied load to bone strength, to fracture probability.
    There are many factors associated with these calculations, including environmental factors, factors associated with the fall event, mass and anthropometric values of the astronaut, BMD characteristics, characteristics of the relationship between BMD and bone strength, and bone fracture characteristics. The uncertainty in these factors is captured through the use of parameter distributions, and the fracture predictions are probability distributions with a mean value and an associated uncertainty. To determine parameter sensitivity, a correlation coefficient is found between the sample set of each model parameter and the calculated fracture probabilities. Each parameter's contribution to the variance is found by squaring the correlation coefficients, dividing by the sum of the squared correlation coefficients, and multiplying by 100. Results: Sensitivity analyses of BFxRM simulations of preflight, 0 days post-flight, and 365 days post-flight falls onto the hip revealed a subset of the twelve factors within the model that cause the most variation in the fracture predictions. These factors include the spring constant used in the hip biomechanical model, the midpoint FRI parameter within the equation used to convert FRI to fracture probability, and preflight BMD values. Future work: Plans are underway to update the BFxRM by incorporating bone strength information from finite element models (FEM) into the bone strength portion of the BFxRM. Also, FEM bone strength information, along with fracture outcome data, will be incorporated into the FRI-to-fracture-probability relationship.
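
    The sensitivity recipe described above (square each parameter's correlation with the output, normalize by the sum of squared correlations, express as a percentage of variance) can be sketched as follows; the two-parameter toy model is hypothetical:

```python
import math
import random

def variance_contributions(param_samples, outputs):
    """Percent contribution of each parameter to the output variance:
    square each parameter's correlation coefficient with the output,
    divide by the sum of squared correlations, and multiply by 100."""
    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)
    r2 = {name: corr(vals, outputs) ** 2 for name, vals in param_samples.items()}
    total = sum(r2.values())
    return {name: 100.0 * v / total for name, v in r2.items()}

# toy model: the output depends strongly on k and weakly on b
rng = random.Random(3)
k = [rng.gauss(0, 1) for _ in range(5000)]
b = [rng.gauss(0, 1) for _ in range(5000)]
out = [3.0 * ki + 0.5 * bi for ki, bi in zip(k, b)]
contrib = variance_contributions({"k": k, "b": b}, out)
print({name: round(v, 1) for name, v in contrib.items()})
```

    For this linear toy model, k should account for roughly 97% of the output variance, matching the ratio of the squared coefficients (9 vs 0.25).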

  3. Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT.

    PubMed

    Pang, Cunsuo; Han, Yan; Hou, Huiling; Liu, Shengheng; Zhang, Nan

    2016-09-24

    This paper proposes a time-frequency algorithm based on short-time fractional order Fourier transformation (STFRFT) for the identification of complicated moving targets. The algorithm, consisting of an STFRFT order-changing and quick selection method, is effective in reducing the computational load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency feature of each micro-Doppler component signal. This algorithm improves the estimation accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate STFRFT's performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to LFM (linear frequency modulated) pulse radar, SAR (synthetic aperture radar), or ISAR (inverse synthetic aperture radar) to improve the probability of target recognition.

  4. Bayesian decision and mixture models for AE monitoring of steel-concrete composite shear walls

    NASA Astrophysics Data System (ADS)

    Farhidzadeh, Alireza; Epackachi, Siamak; Salamone, Salvatore; Whittaker, Andrew S.

    2015-11-01

    This paper presents an approach based on an acoustic emission technique for the health monitoring of steel-concrete (SC) composite shear walls. SC composite walls consist of plain (unreinforced) concrete sandwiched between steel faceplates. Although SC construction has been studied extensively for nearly 20 years, little-to-no attention has been devoted to the development of structural health monitoring techniques for inspecting damage to the concrete behind the steel plates. In this work an unsupervised pattern recognition algorithm based on probability theory is proposed to assess the soundness of the concrete infill, and eventually provide a diagnosis of the SC wall’s health. The approach is validated through an experimental study on a large-scale SC shear wall subjected to displacement-controlled reversed cyclic loading.

  5. A study of environmental characterization of conventional and advanced aluminum alloys for selection and design. Phase 2: The breaking load test method

    NASA Technical Reports Server (NTRS)

    Sprowls, D. O.; Bucci, R. J.; Ponchel, B. M.; Brazill, R. L.; Bretz, P. E.

    1984-01-01

    A technique is demonstrated for accelerated stress corrosion testing of high-strength aluminum alloys. The method offers better precision and shorter exposure times than traditional pass/fail procedures. The approach uses data from tension tests performed on replicate groups of smooth specimens after various lengths of exposure to static stress. The breaking strength measures the degradation in the test specimen's load-carrying ability due to environmental attack. Analysis of the breaking load data by extreme value statistics enables the calculation of survival probabilities and a statistically defined threshold stress applicable to the specific test conditions. A fracture mechanics model is given which quantifies the depth of attack in the stress-corroded specimen by an effective flaw size calculated from the breaking stress and the material's strength and fracture toughness properties. Comparisons are made with experimental results from three tempers of 7075 alloy plate tested by the breaking load method and by traditional tests of statically loaded smooth tension bars and conventional precracked specimens.
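
    A minimal sketch of the extreme-value step, using a two-parameter Weibull survival function for breaking stress; the scale and shape parameters below are hypothetical, not fitted values from the 7075 data:

```python
import math

def weibull_survival(stress, scale, shape):
    """Two-parameter Weibull survival probability for breaking stress:
    probability that a specimen survives the given applied stress."""
    return math.exp(-((stress / scale) ** shape))

def stress_at_survival(p, scale, shape):
    """Invert the survival function: the stress whose survival probability is p,
    i.e. a statistically defined threshold stress."""
    return scale * (-math.log(p)) ** (1.0 / shape)

# hypothetical fitted parameters (MPa)
scale, shape = 320.0, 14.0
print(f"P(survive 280 MPa) = {weibull_survival(280.0, scale, shape):.3f}")
print(f"90% survival threshold = {stress_at_survival(0.90, scale, shape):.1f} MPa")
```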

  6. Time-Dependent Stress Rupture Strength Degradation of Hi-Nicalon Fiber-Reinforced Silicon Carbide Composites at Intermediate Temperatures

    NASA Technical Reports Server (NTRS)

    Sullivan, Roy M.

    2016-01-01

    The stress rupture strength of silicon carbide fiber-reinforced silicon carbide composites with a boron nitride fiber coating decreases with time within the intermediate temperature range of 700 to 950 degrees Celsius. Various theories have been proposed to explain the cause of the time-dependent stress rupture strength. The objective of this paper is to investigate the relative significance of the various theories for the time-dependent strength of silicon carbide fiber-reinforced silicon carbide composites. This is achieved through the development of a numerically based progressive failure analysis routine and through the application of the routine to simulate the composite stress rupture tests. The progressive failure routine is a time-marching routine with an iterative loop between a probability of fiber survival equation and a force equilibrium equation within each time step. Failure of the composite is assumed to initiate near a matrix crack, and the progression of fiber failures occurs by global load sharing. The probability of survival equation is derived from consideration of the strength of ceramic fibers with randomly occurring and slow-growing flaws, as well as the mechanical interaction between the fibers and matrix near a matrix crack. The force equilibrium equation follows from the global load sharing presumption. The results of the progressive failure analyses of the composite tests suggest that the relationship between time and stress-rupture strength can be attributed almost entirely to slow flaw growth within the fibers. Although other mechanisms may be present, they appear to have only a minor influence on the observed time-dependent behavior.

  7. GNSS Signal Tracking Performance Improvement for Highly Dynamic Receivers by Gyroscopic Mounting Crystal Oscillator.

    PubMed

    Abedi, Maryam; Jin, Tian; Sun, Kewen

    2015-08-31

    In this paper, the efficiency of the gyroscopic mounting method for the reference oscillator of a highly dynamic GNSS receiver is studied as a means of reducing signal loss. Analyses are performed separately for two flight phases: atmospheric and upper-atmospheric. Results show that the proposed mounting reduces signal loss, especially in the parts of the trajectory where its probability is highest. This reduction appears especially for crystal oscillators with a low elevation-angle g-sensitivity vector. The gyroscopic mounting influences the frequency deviation or jitter caused by dynamic loads on the replica carrier and affects the frequency locked loop (FLL) as the dominant tracking loop in highly dynamic GNSS receivers. In terms of steady-state load, the proposed mounting mostly reduces the frequency deviation below the one-sigma threshold of the FLL (1σ(FLL)). The mounting method also reduces the frequency jitter caused by sinusoidal vibrations, and reduces the probability of signal loss in parts of the trajectory where other error sources accompany this vibration load. In the case of random vibration, which is the main disturbance source for the FLL, gyroscopic mounting is even able to suppress disturbances greater than the three-sigma threshold of the FLL (3σ(FLL)). In this way, signal tracking performance can be improved by the gyroscopic mounting method for highly dynamic GNSS receivers.

  8. Probabilistic assessment of landslide tsunami hazard for the northern Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Pampell-Manis, A.; Horrillo, J.; Shigihara, Y.; Parambath, L.

    2016-01-01

    The devastating consequences of recent tsunamis affecting Indonesia and Japan have prompted a scientific response to better assess unexpected tsunami hazards. Although much uncertainty exists regarding the recurrence of large-scale tsunami events in the Gulf of Mexico (GoM), geological evidence indicates that a tsunami is possible and would most likely come from a submarine landslide triggered by an earthquake. This study customizes for the GoM a first-order probabilistic landslide tsunami hazard assessment. Monte Carlo Simulation (MCS) is employed to determine landslide configurations based on distributions obtained from observational submarine mass failure (SMF) data. Our MCS approach incorporates a Cholesky decomposition method for correlated landslide size parameters to capture correlations seen in the data as well as uncertainty inherent in these events. Slope stability analyses are performed using landslide and sediment properties and regional seismic loading to determine landslide configurations which fail and produce a tsunami. The probability of each tsunamigenic failure is calculated based on the joint probability of slope failure and probability of the triggering earthquake. We are thus able to estimate sizes and return periods for probabilistic maximum credible landslide scenarios. We find that the Cholesky decomposition approach generates landslide parameter distributions that retain the trends seen in observational data, improving the statistical validity and relevancy of the MCS technique in the context of landslide tsunami hazard assessment. Estimated return periods suggest that probabilistic maximum credible SMF events in the north and northwest GoM have a recurrence of 5000-8000 years, in agreement with age dates of observed deposits.
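
    The Cholesky decomposition step for drawing correlated landslide size parameters in a Monte Carlo simulation can be sketched as follows; the means, variances, and correlation below are hypothetical values, not the Gulf of Mexico statistics:

```python
import math
import random

def cholesky_2x2(var1, var2, rho):
    """Lower-triangular Cholesky factor of a 2x2 covariance matrix
    with variances var1, var2 and correlation rho."""
    s1, s2 = math.sqrt(var1), math.sqrt(var2)
    return [[s1, 0.0],
            [rho * s2, s2 * math.sqrt(1.0 - rho ** 2)]]

def sample_correlated(mu, L, rng):
    """Draw one correlated sample: x = mu + L @ z, with z ~ N(0, I)."""
    z = [rng.gauss(0, 1), rng.gauss(0, 1)]
    return [mu[0] + L[0][0] * z[0],
            mu[1] + L[1][0] * z[0] + L[1][1] * z[1]]

# hypothetical landslide length/width statistics (log10 km), correlation 0.8
rng = random.Random(7)
L = cholesky_2x2(var1=0.09, var2=0.04, rho=0.8)
samples = [sample_correlated([0.7, 0.4], L, rng) for _ in range(20000)]

# verify the sampled correlation recovers the target value
xs = [s[0] for s in samples]
ys = [s[1] for s in samples]
n = len(samples)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
r = cov / (sx * sy)
print(f"sample correlation ≈ {r:.2f}")  # should recover ~0.8
```

    This is what lets the MCS retain the correlations seen in observational SMF data instead of sampling length and width independently.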

  9. Probabilistic Structural Evaluation of Uncertainties in Radiator Sandwich Panel Design

    NASA Technical Reports Server (NTRS)

    Kuguoglu, Latife; Ludwiczak, Damian

    2006-01-01

    The Jupiter Icy Moons Orbiter (JIMO) Space System is part of NASA's Prometheus Program. As part of the JIMO engineering team at NASA Glenn Research Center, the structural design of the JIMO Heat Rejection Subsystem (HRS) is evaluated. An initial goal of this study was to perform sensitivity analyses to determine the relative importance of the input variables on the structural responses of the radiator panel. The desire was to let the sensitivity analysis information identify the important parameters. The probabilistic analysis methods illustrated here support this objective. The probabilistic structural performance evaluation of an HRS radiator sandwich panel was performed. The radiator panel structural performance was assessed in the presence of uncertainties in the loading, fabrication process variables, and material properties. A deterministic structural analysis at the mean values of the input variables was performed first, and the resulting stress and displacement contours are presented. It is followed by a probabilistic evaluation to determine the effect of the primitive variables on the radiator panel structural performance. Based on uncertainties in material properties, structural geometry, and loading, the results of the displacement and stress analysis are used as an input file for the probabilistic analysis of the panel. The sensitivity of the structural responses, such as the maximum displacement, the maximum tensile and compressive stresses of the facesheet in the x and y directions, and the maximum von Mises stresses of the tube, to the loading and design variables is determined under the boundary condition where all edges of the radiator panel are pinned. Based on this study, design-critical material and geometric parameters of the considered sandwich panel are identified.
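
    A minimal sketch of this kind of probabilistic sensitivity screening: sample the uncertain inputs, evaluate a structural response, and rank inputs by their correlation with the response. The plate-deflection response used here (w proportional to q L^4 / (E t^3)) and all input distributions are illustrative stand-ins, not the study's model.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000

# Uncertain inputs: pressure load q, panel span L, modulus E, thickness t.
q = rng.normal(1000.0, 100.0, n)   # Pa   (10% scatter, assumed)
L = rng.normal(1.0, 0.02, n)       # m    (2% scatter, assumed)
E = rng.normal(70e9, 5e9, n)       # Pa   (~7% scatter, assumed)
t = rng.normal(0.01, 0.0005, n)    # m    (5% scatter, assumed)

# Response: maximum deflection up to a constant geometric factor.
w = q * L**4 / (E * t**3)

# Rank inputs by absolute correlation with the response.
inputs = {"q": q, "L": L, "E": E, "t": t}
sensitivity = {name: abs(np.corrcoef(x, w)[0, 1]) for name, x in inputs.items()}
ranked = sorted(sensitivity, key=sensitivity.get, reverse=True)
print(ranked)
```

    Because the response scales with t to the third power, thickness dominates the ranking despite its modest scatter, which is exactly the kind of insight a sensitivity screening is meant to surface.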

  10. Comparison of cadmium hydroxide nanowires and silver nanoparticles loaded on activated carbon as new adsorbents for efficient removal of Sunset yellow: Kinetics and equilibrium study.

    PubMed

    Ghaedi, Mehrorang

    2012-08-01

    Adsorption of Sunset yellow (SY) onto cadmium hydroxide nanowires loaded on activated carbon (Cd(OH)(2)-NW-AC) and silver nanoparticles loaded on activated carbon (Ag-NP-AC) was investigated. The effects of pH, contact time, amount of adsorbent, initial dye concentration, agitation speed and temperature on Sunset yellow removal by both adsorbents were studied. Following the optimization of these variables, the experimental data were fitted to conventional isotherm models including Langmuir, Freundlich, Tempkin and Dubinin-Radushkevich (D-R). Based on the linear regression coefficient R(2), the Langmuir isotherm was found to be the best-fitting model, and the maximum monolayer adsorption capacities calculated from this model for Cd(OH)(2)-NW-AC and Ag-NP-AC were 76.9 and 37.03 mg g(-1) at room temperature, respectively. Fitting of the time dependency of SY adsorption onto both adsorbents showed that a pseudo-second-order kinetic model best describes the kinetic data. Thermodynamic parameters such as enthalpy, entropy, activation energy, sticking probability, and Gibbs free energy changes were also calculated. It was found that the sorption of SY over Cd(OH)(2)-NW-AC and Ag-NP-AC was spontaneous and endothermic in nature. The efficiency of the adsorbents was also investigated using real effluents, and more than 95% SY removal was observed for both. Copyright © 2012 Elsevier B.V. All rights reserved.
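
    The linearized Langmuir fit referred to above takes the form Ce/qe = Ce/q_max + 1/(K_L q_max), so the slope of Ce/qe against Ce gives 1/q_max. The sketch below generates noiseless synthetic equilibrium data from assumed q_max and K_L values (not the study's measurements) and recovers them by linear regression.

```python
import numpy as np

q_max_true, K_L = 76.9, 0.15          # assumed parameters (mg/g, L/mg)
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # equilibrium conc. (mg/L)
qe = q_max_true * K_L * Ce / (1.0 + K_L * Ce)  # Langmuir model (mg/g)

# Linear regression of Ce/qe against Ce: slope = 1/q_max,
# intercept = 1/(K_L * q_max).
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
q_max_fit = 1.0 / slope
K_L_fit = slope / intercept

print(f"q_max = {q_max_fit:.1f} mg/g, K_L = {K_L_fit:.3f} L/mg")
```

    With real equilibrium data the same regression yields the reported monolayer capacities, and R(2) from this fit is what the abstract uses to compare isotherm models.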

  11. Computational modeling of unsteady loads in tidal boundary layers

    NASA Astrophysics Data System (ADS)

    Alexander, Spencer R.

    As ocean current turbines move from the design stage into production and installation, a better understanding of oceanic turbulent flows and localized loading is required to more accurately predict turbine performance and durability. In the present study, large eddy simulations (LES) are used to measure the unsteady loads and bending moments that would be experienced by an ocean current turbine placed in a tidal channel. The LES model captures currents due to winds, waves, thermal convection, and tides, thereby providing a high degree of physical realism. Probability density functions, means, and variances of unsteady loads are calculated, and further statistical measures of the turbulent environment are also examined, including vertical profiles of Reynolds stresses, two-point correlations, and velocity structure functions. The simulations show that waves and tidal velocity had the largest impact on the strength of off-axis turbine loads. By contrast, boundary layer stability and wind speeds were shown to have minimal impact on the strength of off-axis turbine loads. It is shown both analytically and using simulation results that either transverse velocity structure functions or two-point transverse velocity spatial correlations are good predictors of unsteady loading in tidal channels.
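
    The second-order transverse velocity structure function mentioned above, D(r) = <[v(x + r) - v(x)]^2>, can be estimated directly from a sampled velocity record. The sketch below uses a synthetic random-walk signal as a stand-in for LES output; the sample count and spacing are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dx = 4096, 0.5                              # samples and spacing (m), assumed
v = np.cumsum(rng.standard_normal(n)) * 0.01   # synthetic transverse velocity (m/s)

def structure_function(v, dx, max_lag):
    """Second-order structure function D(r) for integer lags 1..max_lag."""
    lags = np.arange(1, max_lag + 1)
    D = np.array([np.mean((v[lag:] - v[:-lag]) ** 2) for lag in lags])
    return lags * dx, D

r, D = structure_function(v, dx, max_lag=100)
# For a random-walk signal, D grows roughly linearly with separation r.
print(D[0], D[-1])
```

    Against real LES velocity fields, the scaling of D(r) at small separations is what makes it useful as a predictor of unsteady off-axis loading.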

  12. 14 CFR 29.563 - Structural ditching provisions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... speed landing conditions. The rotorcraft must initially contact the most critical wave for reasonably... the mean water surface. Rotor lift may be used to act through the center of gravity throughout the..., unsymmetrical rotorcraft loading, water wave action, rotorcraft inertia, and probable structural damage and...

  13. 14 CFR 27.563 - Structural ditching provisions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... speed landing conditions. The rotorcraft must initially contact the most critical wave for reasonably... the mean water surface. Rotor lift may be used to act through the center of gravity throughout the..., unsymmetrical rotorcraft loading, water wave action, rotorcraft inertia, and probable structural damage and...

  14. 14 CFR 27.563 - Structural ditching provisions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... speed landing conditions. The rotorcraft must initially contact the most critical wave for reasonably... the mean water surface. Rotor lift may be used to act through the center of gravity throughout the..., unsymmetrical rotorcraft loading, water wave action, rotorcraft inertia, and probable structural damage and...

  15. 14 CFR 27.563 - Structural ditching provisions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... speed landing conditions. The rotorcraft must initially contact the most critical wave for reasonably... the mean water surface. Rotor lift may be used to act through the center of gravity throughout the..., unsymmetrical rotorcraft loading, water wave action, rotorcraft inertia, and probable structural damage and...

  16. 14 CFR 27.563 - Structural ditching provisions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... speed landing conditions. The rotorcraft must initially contact the most critical wave for reasonably... the mean water surface. Rotor lift may be used to act through the center of gravity throughout the..., unsymmetrical rotorcraft loading, water wave action, rotorcraft inertia, and probable structural damage and...

  17. 14 CFR 29.563 - Structural ditching provisions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... speed landing conditions. The rotorcraft must initially contact the most critical wave for reasonably... the mean water surface. Rotor lift may be used to act through the center of gravity throughout the..., unsymmetrical rotorcraft loading, water wave action, rotorcraft inertia, and probable structural damage and...

  18. 14 CFR 29.563 - Structural ditching provisions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... speed landing conditions. The rotorcraft must initially contact the most critical wave for reasonably... the mean water surface. Rotor lift may be used to act through the center of gravity throughout the..., unsymmetrical rotorcraft loading, water wave action, rotorcraft inertia, and probable structural damage and...

  19. 14 CFR 29.563 - Structural ditching provisions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... speed landing conditions. The rotorcraft must initially contact the most critical wave for reasonably... the mean water surface. Rotor lift may be used to act through the center of gravity throughout the..., unsymmetrical rotorcraft loading, water wave action, rotorcraft inertia, and probable structural damage and...

  20. 14 CFR 29.563 - Structural ditching provisions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... speed landing conditions. The rotorcraft must initially contact the most critical wave for reasonably... the mean water surface. Rotor lift may be used to act through the center of gravity throughout the..., unsymmetrical rotorcraft loading, water wave action, rotorcraft inertia, and probable structural damage and...

  1. 14 CFR 27.563 - Structural ditching provisions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... speed landing conditions. The rotorcraft must initially contact the most critical wave for reasonably... the mean water surface. Rotor lift may be used to act through the center of gravity throughout the..., unsymmetrical rotorcraft loading, water wave action, rotorcraft inertia, and probable structural damage and...

  2. Coupled Multi-Disciplinary Optimization for Structural Reliability and Affordability

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2003-01-01

    A computational simulation method is presented for Non-Deterministic Multidisciplinary Optimization of engine composite materials and structures. A hypothetical engine duct made with ceramic matrix composites (CMC) is evaluated probabilistically in the presence of combined thermo-mechanical loading. The structure is tailored by quantifying the uncertainties in all relevant design variables such as fabrication, material, and loading parameters. The probabilistic sensitivities are used to select critical design variables for optimization. In this paper, two approaches for non-deterministic optimization are presented. The non-deterministic minimization of a combined failure stress criterion is carried out by: (1) performing probabilistic evaluation first and then optimization and (2) performing optimization first and then probabilistic evaluation. The first approach shows that the optimization feasible region can be bounded by a set of prescribed probability limits and that the optimization follows the cumulative distribution function between those limits. The second approach shows that the optimization feasible region is bounded by 0.50 and 0.999 probabilities.

  3. Evolution of the microstructure during the process of consolidation and bonding in soft granular solids.

    PubMed

    Yohannes, B; Gonzalez, M; Abebe, A; Sprockel, O; Nikfar, F; Kiang, S; Cuitiño, A M

    2016-04-30

    The evolution of microstructure during the powder compaction process was investigated using discrete particle modeling, which accounts for particle size distribution and material properties such as plasticity, elasticity, and inter-particle bonding. The material properties were calibrated based on powder compaction experiments and validated based on tensile strength test experiments for lactose monohydrate and microcrystalline cellulose, which are commonly used excipients in the pharmaceutical industry. The probability distribution function and the orientation of contact forces were used to study the evolution of the microstructure during the application of compaction pressure, unloading, and ejection of the compact from the die. The probability distribution function reveals that the compression contact forces increase as the compaction force increases (or the relative density increases), while the maximum value of the tensile contact forces remains the same. During unloading of the compaction pressure, the distribution approaches a normal distribution with a mean value of zero. As the contact forces evolve, the anisotropy of the powder bed also changes. In particular, during loading, the compression contact forces are aligned along the direction of the compaction pressure, whereas the tensile contact forces are oriented perpendicular to the direction of the compaction pressure. After ejection, the contact forces become isotropic. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. A novel Bayesian framework for discriminative feature extraction in Brain-Computer Interfaces.

    PubMed

    Suk, Heung-Il; Lee, Seong-Whan

    2013-02-01

    As there has been a paradigm shift in the learning load from a human subject to a computer, machine learning has been considered as a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of the probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method by extending a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing the results and its success on three public databases.
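
    The particle-based posterior approximation described above can be illustrated on a toy problem: propagate particles from a prior, apply a small diffusion step to maintain diversity, then weight each particle by an observation likelihood. The 1-D Gaussian setup below stands in for the actual EEG feature space and information-theoretic observation model, and all parameter values are assumed.

```python
import numpy as np

rng = np.random.default_rng(3)
n_particles = 20_000

# Prior over a latent state x (assumed standard normal).
particles = rng.standard_normal(n_particles)

# Diffusion step: perturb particles to maintain sample diversity.
particles += rng.normal(0.0, 0.1, n_particles)

# Observation model: Gaussian likelihood of observing y given x
# (observation noise sigma assumed).
y, sigma_obs = 1.0, 0.5
weights = np.exp(-0.5 * ((y - particles) / sigma_obs) ** 2)
weights /= weights.sum()

# The weighted particle cloud approximates the posterior pdf; its mean
# should match the analytic Gaussian posterior mean tau^2 y / (tau^2 + sigma^2).
posterior_mean = np.sum(weights * particles)
print(f"posterior mean ~ {posterior_mean:.3f}")
```

    The same weight-then-summarize pattern underlies factored sampling generally; the BCI framework replaces the Gaussian likelihood with its discriminative, information-theoretic observation model.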

  5. Probabilistic models for reactive behaviour in heterogeneous condensed phase media

    NASA Astrophysics Data System (ADS)

    Baer, M. R.; Gartling, D. K.; DesJardin, P. E.

    2012-02-01

    This work presents statistically-based models to describe reactive behaviour in heterogeneous energetic materials. Mesoscale effects are incorporated in continuum-level reactive flow descriptions using probability density functions (pdfs) that are associated with thermodynamic and mechanical states. A generalised approach is presented that includes multimaterial behaviour by treating the volume fraction as a random kinematic variable. Model simplifications are then sought to reduce the complexity of the description without compromising the statistical approach. Reactive behaviour is first considered for non-deformable media having a random temperature field as an initial state. A pdf transport relationship is derived and an approximate moment approach is incorporated in finite element analysis to model an example application whereby a heated fragment impacts a reactive heterogeneous material which leads to a delayed cook-off event. Modelling is then extended to include deformation effects associated with shock loading of a heterogeneous medium whereby random variables of strain, strain-rate and temperature are considered. A demonstrative mesoscale simulation of a non-ideal explosive is discussed that illustrates the joint statistical nature of the strain and temperature fields during shock loading to motivate the probabilistic approach. This modelling is derived in a Lagrangian framework that can be incorporated in continuum-level shock physics analysis. Future work will consider particle-based methods for a numerical implementation of this modelling approach.

  6. Cellular and molecular mechanisms for the bone response to mechanical loading

    NASA Technical Reports Server (NTRS)

    Bloomfield, S. A.

    2001-01-01

    To define the cellular and molecular mechanisms for the osteogenic response of bone to increased loading, several key steps must be defined: sensing of the mechanical signal by cells in bone, transduction of the mechanical signal to a biochemical one, and transmission of that biochemical signal to effector cells. Osteocytes are likely to serve as sensors of loading, probably via interstitial fluid flow produced during loading. Evidence is presented for the role of integrins, the cell's actin cytoskeleton, G proteins, and various intracellular signaling pathways in transducing that mechanical signal to a biochemical one. Nitric oxide, prostaglandins, and insulin-like growth factors all play important roles in these pathways. There is growing evidence for modulation of these mechanotransduction steps by endocrine factors, particularly parathyroid hormone and estrogen. The efficiency of this process is also impaired in the aged animal, yet what remains undefined is at what step mechanotransduction is affected.

  7. Estimating annual suspended-sediment loads in the northern and central Appalachian Coal region

    USGS Publications Warehouse

    Koltun, G.F.

    1985-01-01

    Multiple-regression equations were developed for estimating the annual suspended-sediment load, for a given year, from small to medium-sized basins in the northern and central parts of the Appalachian coal region. The regression analysis was performed with data for land use, basin characteristics, streamflow, rainfall, and suspended-sediment load for 15 sites in the region. Two variables, the maximum mean-daily discharge occurring within the year and the annual peak discharge, explained much of the variation in the annual suspended-sediment load. Separate equations were developed employing each of these discharge variables. Standard errors for both equations are relatively large, which suggests that future predictions will probably have a low level of precision. This level of precision, however, may be acceptable for certain purposes. It is therefore left to the user to assess whether the level of precision provided by these equations is acceptable for the intended application.
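
    A minimal sketch of the regression approach described above: annual load regressed in log space on a discharge variable such as the maximum mean-daily discharge, with the residual standard error quantifying prediction precision. The 15-point data set below is synthetic, not the report's data.

```python
import numpy as np

rng = np.random.default_rng(1)
log_q = rng.uniform(1.0, 4.0, 15)                         # log10 discharge (assumed)
log_load = 0.8 + 1.3 * log_q + rng.normal(0, 0.3, 15)     # log10 load (assumed)

# Ordinary least squares via the normal equations.
X = np.column_stack([np.ones(15), log_q])
beta, *_ = np.linalg.lstsq(X, log_load, rcond=None)
residuals = log_load - X @ beta
standard_error = np.sqrt(residuals @ residuals / (15 - 2))

print(f"intercept={beta[0]:.2f}, slope={beta[1]:.2f}, SE={standard_error:.2f}")
```

    With only 15 sites and substantial scatter, the residual standard error stays large, which mirrors the report's caution that predictions will have a low level of precision.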

  8. Stochastic modeling of total suspended solids (TSS) in urban areas during rain events.

    PubMed

    Rossi, Luca; Krejci, Vladimir; Rauch, Wolfgang; Kreikenbaum, Simon; Fankhauser, Rolf; Gujer, Willi

    2005-10-01

    The load of total suspended solids (TSS) is one of the most important parameters for evaluating wet-weather pollution in urban sanitation systems. In fact, pollutants such as heavy metals, polycyclic aromatic hydrocarbons (PAHs), phosphorous and organic compounds are adsorbed onto these particles so that a high TSS load indicates the potential impact on the receiving waters. In this paper, a stochastic model is proposed to estimate the TSS load and its dynamics during rain events. Information on the various simulated processes was extracted from different studies of TSS in urban areas. The model thus predicts the probability of TSS loads arising from combined sewer overflows (CSOs) in combined sewer systems as well as from stormwater in separate sewer systems in addition to the amount of TSS retained in treatment devices in both sewer systems. The results of this TSS model illustrate the potential of the stochastic modeling approach for assessing environmental problems.
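
    A stochastic treatment of event TSS loads can be sketched by treating both the runoff volume and the event mean concentration as random variables and propagating them by Monte Carlo, in the spirit of the model above. All distribution parameters below are assumed placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(7)
n_events = 50_000

# Runoff volume per rain event (m^3), log-normal (assumed).
volume = rng.lognormal(mean=7.0, sigma=0.8, size=n_events)
# Event mean TSS concentration (g/m^3), log-normal (assumed).
concentration = rng.lognormal(mean=4.5, sigma=0.6, size=n_events)

# TSS load per event, converted to kg.
load_kg = volume * concentration / 1000.0

# Summaries of the simulated load distribution.
median_load = np.median(load_kg)
p95_load = np.percentile(load_kg, 95)
print(f"median = {median_load:.0f} kg, 95th percentile = {p95_load:.0f} kg")
```

    The output is a full probability distribution of event loads rather than a single estimate, which is what makes the stochastic approach useful for assessing CSO impacts on receiving waters.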

  9. Retro-dimension-cue benefit in visual working memory.

    PubMed

    Ye, Chaoxiong; Hu, Zhonghua; Ristaniemi, Tapani; Gendron, Maria; Liu, Qiang

    2016-10-24

    In visual working memory (VWM) tasks, participants' performance can be improved by a retro-object-cue. However, previous studies have not investigated whether participants' performance can also be improved by a retro-dimension-cue. Three experiments investigated this issue. We used a recall task with a retro-dimension-cue in all experiments. In Experiment 1, we found benefits from retro-dimension-cues compared to neutral cues. This retro-dimension-cue benefit is reflected in an increased probability of reporting the target, but not in the probability of reporting the non-target, as well as increased precision with which this item is remembered. Experiment 2 replicated the retro-dimension-cue benefit and showed that the length of the blank interval after the cue disappeared did not influence recall performance. Experiment 3 replicated the results of Experiment 2 with a lower memory load. Our studies provide evidence that there is a robust retro-dimension-cue benefit in VWM. Participants can use internal attention to flexibly allocate cognitive resources to a particular dimension of memory representations. The results also support the feature-based storing hypothesis.

  10. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    NASA Technical Reports Server (NTRS)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-01-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability would provide a needed input for estimating the success rate for any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, together with a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface, and illustrates the stepwise process the interface uses with an example.

  11. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    NASA Astrophysics Data System (ADS)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-10-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability would provide a needed input for estimating the success rate for any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, together with a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface, and illustrates the stepwise process the interface uses with an example.

  12. Retro-dimension-cue benefit in visual working memory

    PubMed Central

    Ye, Chaoxiong; Hu, Zhonghua; Ristaniemi, Tapani; Gendron, Maria; Liu, Qiang

    2016-01-01

    In visual working memory (VWM) tasks, participants’ performance can be improved by a retro-object-cue. However, previous studies have not investigated whether participants’ performance can also be improved by a retro-dimension-cue. Three experiments investigated this issue. We used a recall task with a retro-dimension-cue in all experiments. In Experiment 1, we found benefits from retro-dimension-cues compared to neutral cues. This retro-dimension-cue benefit is reflected in an increased probability of reporting the target, but not in the probability of reporting the non-target, as well as increased precision with which this item is remembered. Experiment 2 replicated the retro-dimension-cue benefit and showed that the length of the blank interval after the cue disappeared did not influence recall performance. Experiment 3 replicated the results of Experiment 2 with a lower memory load. Our studies provide evidence that there is a robust retro-dimension-cue benefit in VWM. Participants can use internal attention to flexibly allocate cognitive resources to a particular dimension of memory representations. The results also support the feature-based storing hypothesis. PMID:27774983

  13. Comparison of human and southern sea otter (Enhydra lutris nereis) health risks for infection with protozoa in nearshore waters.

    PubMed

    Adell, A D; McBride, G; Wuertz, S; Conrad, P A; Smith, W A

    2016-11-01

    Cryptosporidium and Giardia spp. are waterborne, fecally-transmitted pathogens that cause economic loss due to gastroenteritis and beach closures. We applied quantitative microbial risk assessment (QMRA) to determine the health risks for humans and sea otters due to waterborne exposure to Cryptosporidium and Giardia spp. when swimming in three types of surface waters: river, stormwater and wastewater effluent during the wet and dry seasons in the central coast of California. This is the first application of QMRA to estimate both the probability of infection in southern sea otters and the probability of illness in humans, using microbial source tracking (MST) as a variable. Children swimming close to stormwater discharges had an estimated Cryptosporidium-associated illness probability that exceeded the accepted U.S. EPA criteria (32 illnesses/1000 swimmers or 3.2%). Based on the assumption that sea otters are as susceptible as humans to Cryptosporidium infection, the infection probabilities were close to 2% and 16% when sea otters were swimming at the end points of rivers and stormwater discharges, respectively. In the case of Giardia, infection probabilities of 11% and 23% were estimated for sea otters swimming at the end points of wastewater discharges, assuming that sea otters are as susceptible as gerbils and humans, respectively. The results of this QMRA suggest that 1) humans and sea otters are at risk when swimming at outflow sites for rivers, stormwater and treated wastewater effluent; 2) reduced loads of viable protozoan cysts and oocysts in recreational water can lessen the probability of infection of humans and sea otters; and 3) the risk of infection of humans and sea otters can be reduced with the treatment of wastewater to decrease oocyst and cyst viability before effluent is released into the sea. Copyright © 2016 Elsevier Ltd. All rights reserved.
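
    At the core of a QMRA like this sits a dose-response model; for Cryptosporidium an exponential model P_inf = 1 - exp(-r x dose) is commonly used. The sketch below shows the calculation structure with illustrative placeholder values for r, concentration, and ingestion volume (not the study's parameters).

```python
from math import exp

r = 0.004                     # dose-response parameter (assumed)
oocysts_per_litre = 2.0       # oocyst concentration in water (assumed)
ingestion_litres = 0.05       # water swallowed per swim event (assumed)

# Expected dose per event and single-event infection probability.
dose = oocysts_per_litre * ingestion_litres
p_infection_single = 1.0 - exp(-r * dose)

# Annual risk over n independent swim events.
n_events = 20
p_infection_annual = 1.0 - (1.0 - p_infection_single) ** n_events

print(f"per-event risk = {p_infection_single:.2e}, "
      f"annual risk = {p_infection_annual:.2e}")
```

    In a full QMRA, the point values above become distributions (e.g. from MST-informed monitoring data), and the calculation is run by Monte Carlo to produce the probability estimates reported in the abstract.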

  14. A pragmatic multi-centre randomised controlled trial of fluid loading in high-risk surgical patients undergoing major elective surgery--the FOCCUS study.

    PubMed

    Cuthbertson, Brian H; Campbell, Marion K; Stott, Stephen A; Elders, Andrew; Hernández, Rodolfo; Boyers, Dwayne; Norrie, John; Kinsella, John; Brittenden, Julie; Cook, Jonathan; Rae, Daniela; Cotton, Seonaidh C; Alcorn, David; Addison, Jennifer; Grant, Adrian

    2011-01-01

    Fluid strategies may impact on patient outcomes in major elective surgery. We aimed to study the effectiveness and cost-effectiveness of pre-operative fluid loading in high-risk surgical patients undergoing major elective surgery. This was a pragmatic, non-blinded, multi-centre, randomised, controlled trial. We sought to recruit 128 consecutive high-risk surgical patients undergoing major abdominal surgery. The patients underwent pre-operative fluid loading with 25 ml/kg of Ringer's solution in the six hours before surgery. The control group had no pre-operative fluid loading. The primary outcome was the number of hospital days after surgery with cost-effectiveness as a secondary outcome. A total of 111 patients were recruited within the study time frame in agreement with the funder. The median pre-operative fluid loading volume was 1,875 ml (IQR 1,375 to 2,025) in the fluid group compared to 0 (IQR 0 to 0) in controls with days in hospital after surgery 12.2 (SD 11.5) days compared to 17.4 (SD 20.0) and an adjusted mean difference of 5.5 days (median 2.2 days; 95% CI -0.44 to 11.44; P = 0.07). There was a reduction in adverse events in the fluid intervention group (P = 0.048) and no increase in fluid based complications. The intervention was less costly and more effective (adjusted average cost saving: £2,047; adjusted average gain in benefit: 0.0431 quality adjusted life year (QALY)) and has a high probability of being cost-effective. Pre-operative intravenous fluid loading leads to a non-significant reduction in hospital length of stay after high-risk major surgery and is likely to be cost-effective. Confirmatory work is required to determine whether these effects are reproducible, and to confirm whether this simple intervention could allow more cost-effective delivery of care. Prospective Clinical Trials, ISRCTN32188676.

  15. A pragmatic multi-centre randomised controlled trial of fluid loading in high-risk surgical patients undergoing major elective surgery - the FOCCUS study

    PubMed Central

    2011-01-01

    Introduction Fluid strategies may impact on patient outcomes in major elective surgery. We aimed to study the effectiveness and cost-effectiveness of pre-operative fluid loading in high-risk surgical patients undergoing major elective surgery. Methods This was a pragmatic, non-blinded, multi-centre, randomised, controlled trial. We sought to recruit 128 consecutive high-risk surgical patients undergoing major abdominal surgery. The patients underwent pre-operative fluid loading with 25 ml/kg of Ringer's solution in the six hours before surgery. The control group had no pre-operative fluid loading. The primary outcome was the number of hospital days after surgery with cost-effectiveness as a secondary outcome. Results A total of 111 patients were recruited within the study time frame in agreement with the funder. The median pre-operative fluid loading volume was 1,875 ml (IQR 1,375 to 2,025) in the fluid group compared to 0 (IQR 0 to 0) in controls with days in hospital after surgery 12.2 (SD 11.5) days compared to 17.4 (SD 20.0) and an adjusted mean difference of 5.5 days (median 2.2 days; 95% CI -0.44 to 11.44; P = 0.07). There was a reduction in adverse events in the fluid intervention group (P = 0.048) and no increase in fluid based complications. The intervention was less costly and more effective (adjusted average cost saving: £2,047; adjusted average gain in benefit: 0.0431 quality adjusted life year (QALY)) and has a high probability of being cost-effective. Conclusions Pre-operative intravenous fluid loading leads to a non-significant reduction in hospital length of stay after high-risk major surgery and is likely to be cost-effective. Confirmatory work is required to determine whether these effects are reproducible, and to confirm whether this simple intervention could allow more cost-effective delivery of care. Trial registration Prospective Clinical Trials, ISRCTN32188676 PMID:22177541

  16. Bacteria survival probability in bactericidal filter paper.

    PubMed

    Mansur-Azzam, Nura; Hosseinidoust, Zeinab; Woo, Su Gyeong; Vyhnalkova, Renata; Eisenberg, Adi; van de Ven, Theo G M

    2014-05-01

    Bactericidal filter papers offer the simplicity of gravity filtration to simultaneously eradicate microbial contaminants and particulates. We previously detailed the development of biocidal block copolymer micelles that could be immobilized on a filter paper to actively eradicate bacteria. Despite the many advantages offered by this system, its widespread use is hindered by its unknown mechanism of action which can result in non-reproducible outcomes. In this work, we sought to investigate the mechanism by which a certain percentage of Escherichia coli cells survived when passing through the bactericidal filter paper. Through the process of elimination, the possibility that the bacterial survival probability was controlled by the initial bacterial load or the existence of resistant sub-populations of E. coli was dismissed. It was observed that increasing the thickness or the number of layers of the filter significantly decreased bacterial survival probability for the biocidal filter paper but did not affect the efficiency of the blank filter paper (no biocide). The survival probability of bacteria passing through the antibacterial filter paper appeared to depend strongly on the number of collisions between each bacterium and the biocide-loaded micelles. It was thus hypothesized that during each collision a certain number of biocide molecules were directly transferred from the hydrophobic core of the micelle to the bacterial lipid bilayer membrane. Therefore, each bacterium must encounter a certain number of collisions to take up enough biocide to kill the cell, and cells that do not undergo the threshold number of collisions are expected to survive. Copyright © 2014 Elsevier B.V. All rights reserved.
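
    The collision-threshold hypothesis above can be sketched with a simple probabilistic model: if the number of bacterium-micelle collisions in one pass is Poisson with mean proportional to filter thickness, a cell survives when it experiences fewer than some threshold number of collisions. The mean collision rate and threshold below are illustrative assumptions, not fitted values.

```python
from math import exp, factorial

def survival_probability(mean_collisions, threshold):
    """P(N < threshold) for N ~ Poisson(mean_collisions)."""
    return sum(exp(-mean_collisions) * mean_collisions ** n / factorial(n)
               for n in range(threshold))

lam_per_layer = 3.0   # mean collisions per filter layer (assumed)
k = 5                 # collisions needed for a lethal biocide dose (assumed)

# Survival drops sharply as layers (i.e. filter thickness) increase.
for layers in (1, 2, 3):
    p = survival_probability(lam_per_layer * layers, k)
    print(layers, round(p, 4))
```

    This reproduces the qualitative observation in the abstract: stacking layers multiplies the expected collision count, so the fraction of cells staying below the lethal threshold falls off rapidly with thickness.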

  17. 14 CFR 23.1353 - Storage battery design and installation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... pilots to recognize the loss of generated power and take appropriate load shedding action. [Doc. No. 4080... and pressures must be maintained during any probable charging and discharging condition. No... shown that maintaining safe cell temperatures and pressures presents no problem. (d) No explosive or...

  18. 14 CFR 23.1353 - Storage battery design and installation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... pilots to recognize the loss of generated power and take appropriate load shedding action. [Doc. No. 4080... and pressures must be maintained during any probable charging and discharging condition. No... shown that maintaining safe cell temperatures and pressures presents no problem. (d) No explosive or...

  19. Efficacy of Visual Surveys for White-Nose Syndrome at Bat Hibernacula.

    PubMed

    Janicki, Amanda F; Frick, Winifred F; Kilpatrick, A Marm; Parise, Katy L; Foster, Jeffrey T; McCracken, Gary F

    2015-01-01

    White-Nose Syndrome (WNS) is an epizootic disease in hibernating bats caused by the fungus Pseudogymnoascus destructans. Surveillance for P. destructans at bat hibernacula consists primarily of visual surveys of bats, collection of potentially infected bats, and submission of these bats for laboratory testing. Cryptic infections (bats that are infected but display no visual signs of fungus) could lead to the mischaracterization of the infection status of a site and the inadvertent spread of P. destructans. We determined the efficacy of visual detection of P. destructans by examining visual signs and molecular detection of P. destructans on 928 bats of six species at 27 sites during surveys conducted from January through March in 2012-2014 in the southeastern USA on the leading edge of the disease invasion. Cryptic infections were widespread with 77% of bats that tested positive by qPCR showing no visible signs of infection. The probability of exhibiting visual signs of infection increased with sampling date and pathogen load, the latter of which was substantially higher in three species (Myotis lucifugus, M. septentrionalis, and Perimyotis subflavus). In addition, M. lucifugus was more likely to show visual signs of infection than other species given the same pathogen load. Nearly all infections were cryptic in three species (Eptesicus fuscus, M. grisescens, and M. sodalis), which had much lower fungal loads. The presence of M. lucifugus or M. septentrionalis at a site increased the probability that P. destructans was visually detected on bats. Our results suggest that cryptic infections of P. destructans are common in all bat species, and visible infections rarely occur in some species. However, due to very high infection prevalence and loads in some species, we estimate that visual surveys examining at least 17 individuals of M. lucifugus and M. septentrionalis, or 29 individuals of P. subflavus are still effective to determine whether a site has bats infected with P. destructans. In addition, because the probability of visually detecting the fungus was higher later in winter, surveys should be done as close to the end of the hibernation period as possible.

  20. Efficacy of Visual Surveys for White-Nose Syndrome at Bat Hibernacula

    PubMed Central

    Janicki, Amanda F.; Frick, Winifred F.; Kilpatrick, A. Marm; Parise, Katy L.; Foster, Jeffrey T.; McCracken, Gary F.

    2015-01-01

    White-Nose Syndrome (WNS) is an epizootic disease in hibernating bats caused by the fungus Pseudogymnoascus destructans. Surveillance for P. destructans at bat hibernacula consists primarily of visual surveys of bats, collection of potentially infected bats, and submission of these bats for laboratory testing. Cryptic infections (bats that are infected but display no visual signs of fungus) could lead to the mischaracterization of the infection status of a site and the inadvertent spread of P. destructans. We determined the efficacy of visual detection of P. destructans by examining visual signs and molecular detection of P. destructans on 928 bats of six species at 27 sites during surveys conducted from January through March in 2012–2014 in the southeastern USA on the leading edge of the disease invasion. Cryptic infections were widespread with 77% of bats that tested positive by qPCR showing no visible signs of infection. The probability of exhibiting visual signs of infection increased with sampling date and pathogen load, the latter of which was substantially higher in three species (Myotis lucifugus, M. septentrionalis, and Perimyotis subflavus). In addition, M. lucifugus was more likely to show visual signs of infection than other species given the same pathogen load. Nearly all infections were cryptic in three species (Eptesicus fuscus, M. grisescens, and M. sodalis), which had much lower fungal loads. The presence of M. lucifugus or M. septentrionalis at a site increased the probability that P. destructans was visually detected on bats. Our results suggest that cryptic infections of P. destructans are common in all bat species, and visible infections rarely occur in some species. However, due to very high infection prevalence and loads in some species, we estimate that visual surveys examining at least 17 individuals of M. lucifugus and M. septentrionalis, or 29 individuals of P. subflavus are still effective to determine whether a site has bats infected with P. destructans. In addition, because the probability of visually detecting the fungus was higher later in winter, surveys should be done as close to the end of the hibernation period as possible. PMID:26197236

  1. Divided Attention and Processes Underlying Sense of Agency

    PubMed Central

    Wen, Wen; Yamashita, Atsushi; Asama, Hajime

    2016-01-01

    Sense of agency refers to the subjective feeling of controlling events through one’s behavior or will. Sense of agency results from matching predictions of one’s own actions with actual feedback regarding the action. Furthermore, when an action involves a cued goal, performance-based inference contributes to sense of agency. That is, if people achieve their goal, they would believe themselves to be in control. Previous studies have shown that both action-effect comparison and performance-based inference contribute to sense of agency; however, the dominance of one process over the other may shift based on task conditions such as the presence or absence of specific goals. In this study, we examined the influence of divided attention on these two processes underlying sense of agency in two conditions. In the experimental task, participants continuously controlled a moving dot for 10 s while maintaining a string of three or seven digits in working memory. We found that when there was no cued goal (no-cued-goal condition), sense of agency was impaired by high cognitive load. Contrastingly, when participants controlled the dot based on a cued goal (cued-goal-directed condition), their sense of agency was lower than in the no-cued-goal condition and was not affected by cognitive load. The results suggest that the action-effect comparison process underlying sense of agency requires attention. On the other hand, the weaker influence of divided attention in the cued-goal-directed condition could be attributed to the dominance of performance-based inference, which is probably automatic. PMID:26858680

  2. Effects of footwear and stride length on metatarsal strains and failure in running.

    PubMed

    Firminger, Colin R; Fung, Anita; Loundagin, Lindsay L; Edwards, W Brent

    2017-11-01

    The metatarsal bones of the foot are particularly susceptible to stress fracture owing to the high strains they experience during the stance phase of running. Shoe cushioning and stride length reduction represent two potential interventions to decrease metatarsal strain and thus stress fracture risk. Fourteen male recreational runners ran overground at a 5-km pace while motion capture and plantar pressure data were collected during four experimental conditions: traditional shoe at preferred and 90% preferred stride length, and minimalist shoe at preferred and 90% preferred stride length. Combined musculoskeletal and finite element modeling based on motion analysis and computed tomography data was used to quantify metatarsal strains, and the probability of failure was determined using stress-life predictions. No significant interactions between footwear and stride length were observed. Running in minimalist shoes increased strains for all metatarsals by 28.7% (SD 6.4%; p<0.001) and probability of failure for metatarsals 2-4 by 17.3% (SD 14.3%; p≤0.005). Running at 90% preferred stride length decreased strains for metatarsal 4 by 4.2% (SD 2.0%; p≤0.007), and no differences in probability of failure were observed. Significant increases in metatarsal strains and the probability of failure were observed for recreational runners acutely transitioning to minimalist shoes. Running with a 10% reduction in stride length did not appear to be a beneficial technique for reducing the risk of metatarsal stress fracture; however, the increased number of loading cycles for a given distance was not detrimental either. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Nutrient Loads Flowing into Coastal Waters from the Main Rivers of China (2006–2012)

    PubMed Central

    Tong, Yindong; Zhao, Yue; Zhen, Gengchong; Chi, Jie; Liu, Xianhua; Lu, Yiren; Wang, Xuejun; Yao, Ruihua; Chen, Junyue; Zhang, Wei

    2015-01-01

    Based on monthly monitoring data of unfiltered water, the nutrient discharges of the eight main rivers flowing into the coastal waters of China were calculated from 2006 to 2012. In 2012, the total load of NH3-N (calculated in nitrogen), total nitrogen (TN, calculated in nitrogen) and total phosphorus (TP, calculated in phosphorus) was 5.1 × 10^5, 3.1 × 10^6 and 2.8 × 10^5 tons, respectively, while in 2006, the nutrient load was 7.4 × 10^5, 2.2 × 10^6 and 1.6 × 10^5 tons, respectively. The nutrient loading from the eight major rivers into the coastal waters peaked in summer and autumn, probably due to the large water discharge in the wet season. The Yangtze River was the largest riverine nutrient source for the coastal waters, contributing 48% of the NH3-N discharges, 66% of the TN discharges and 84% of the TP discharges of the eight major rivers in 2012. The East China Sea received the majority of the nutrient discharges, i.e. 50% of NH3-N (2.7 × 10^5 tons), 70% of TN (2.2 × 10^6 tons) and 87% of TP (2.5 × 10^5 tons) in 2012. The riverine discharge of TN into the Yellow Sea and Bohai Sea was lower than that from the direct atmospheric deposition, while for the East China Sea, the riverine TN input was larger. PMID:26582206

  4. Nutrient Loads Flowing into Coastal Waters from the Main Rivers of China (2006-2012).

    PubMed

    Tong, Yindong; Zhao, Yue; Zhen, Gengchong; Chi, Jie; Liu, Xianhua; Lu, Yiren; Wang, Xuejun; Yao, Ruihua; Chen, Junyue; Zhang, Wei

    2015-11-19

    Based on monthly monitoring data of unfiltered water, the nutrient discharges of the eight main rivers flowing into the coastal waters of China were calculated from 2006 to 2012. In 2012, the total load of NH3-N (calculated in nitrogen), total nitrogen (TN, calculated in nitrogen) and total phosphorus (TP, calculated in phosphorus) was 5.1 × 10^5, 3.1 × 10^6 and 2.8 × 10^5 tons, respectively, while in 2006, the nutrient load was 7.4 × 10^5, 2.2 × 10^6 and 1.6 × 10^5 tons, respectively. The nutrient loading from the eight major rivers into the coastal waters peaked in summer and autumn, probably due to the large water discharge in the wet season. The Yangtze River was the largest riverine nutrient source for the coastal waters, contributing 48% of the NH3-N discharges, 66% of the TN discharges and 84% of the TP discharges of the eight major rivers in 2012. The East China Sea received the majority of the nutrient discharges, i.e. 50% of NH3-N (2.7 × 10^5 tons), 70% of TN (2.2 × 10^6 tons) and 87% of TP (2.5 × 10^5 tons) in 2012. The riverine discharge of TN into the Yellow Sea and Bohai Sea was lower than that from the direct atmospheric deposition, while for the East China Sea, the riverine TN input was larger.

  5. Docetaxel-Loaded Nanoparticles Assembled from β-Cyclodextrin/Calixarene Giant Surfactants: Physicochemical Properties and Cytotoxic Effect in Prostate Cancer and Glioblastoma Cells.

    PubMed

    Gallego-Yerga, Laura; Posadas, Inmaculada; de la Torre, Cristina; Ruiz-Almansa, Jesús; Sansone, Francesco; Ortiz Mellet, Carmen; Casnati, Alessandro; García Fernández, José M; Ceña, Valentín

    2017-01-01

    Giant amphiphiles encompassing a hydrophilic β-cyclodextrin (βCD) component and a hydrophobic calix[4]arene (CA4) module undergo self-assembly in aqueous media to afford core-shell nanospheres or nanocapsules, depending on the nanoprecipitation protocol, with high docetaxel (DTX) loading capacity. The blank and loaded nanoparticles have been fully characterized by dynamic light scattering (DLS), ζ-potential measurements and cryo-transmission electron microscopy (cryo-TEM). The data are compatible with the distribution of the drug between the nanoparticle core and the shell, where it is probably anchored by inclusion of the DTX aromatic moieties in βCD cavities. Indeed, the release kinetics profiles evidenced an initial fast release of the drug, which likely accounts for the fraction hosted on the surface, followed by a slow and sustained release rate, corresponding to diffusion of DTX in the core, which can be finely tuned by modification of the giant amphiphile chemical structure. The ability of the docetaxel-loaded nanoparticles to induce cellular death in different prostate (human LNCaP and PC3) and glioblastoma (human U87 and rat C6) cells was also explored. Giant amphiphile-based DTX formulations surpassing or matching the antitumoral activity of the free DTX formulation were identified in all cases with no need to employ any organic co-solvent, thus overcoming the DTX water solubility problems. Moreover, the presence of the βCD shell at the surface of the assemblies is intended to impart stealth properties against serum proteins while permitting nanoparticle surface decoration by supramolecular approaches, paving the way for a new generation of molecularly well-defined antitumoral drug delivery systems with improved specificity and efficiency. Altogether, the results provide a proof of concept of the suitability of the approach based on βCD-CA4 giant amphiphiles to access DTX carriers with tunable properties.

  6. Markers of preparatory attention predict visual short-term memory performance.

    PubMed

    Murray, Alexandra M; Nobre, Anna C; Stokes, Mark G

    2011-05-01

    Visual short-term memory (VSTM) is limited in capacity. Therefore, it is important to encode only visual information that is most likely to be relevant to behaviour. Here we asked which aspects of selective biasing of VSTM encoding predict subsequent memory-based performance. We measured EEG during a selective VSTM encoding task, in which we varied parametrically the memory load and the precision of recall required to compare a remembered item to a subsequent probe item. On half the trials, a spatial cue indicated that participants only needed to encode items from one hemifield. We observed a typical sequence of markers of anticipatory spatial attention: early attention directing negativity (EDAN), anterior attention directing negativity (ADAN), late directing attention positivity (LDAP); as well as of VSTM maintenance: contralateral delay activity (CDA). We found that individual differences in preparatory brain activity (EDAN/ADAN) predicted cue-related changes in recall accuracy, indexed by memory-probe discrimination sensitivity (d'). Importantly, our parametric manipulation of memory-probe similarity also allowed us to model the behavioural data for each participant, providing estimates for the quality of the memory representation and the probability that an item could be retrieved. We found that selective encoding primarily increased the probability of accurate memory recall, and that ERP markers of preparatory attention predicted the cue-related changes in recall probability. Copyright © 2011. Published by Elsevier Ltd.

  7. Extreme prices in electricity balancing markets from an approach of statistical physics

    NASA Astrophysics Data System (ADS)

    Mureddu, Mario; Meyer-Ortmanns, Hildegard

    2018-01-01

    An increase in energy production from renewable energy sources is viewed as a crucial achievement in most industrialized countries. The higher variability of power production via renewables leads to a rise in ancillary service costs over the power system, in particular costs within the electricity balancing markets, mainly due to an increased number of extreme price spikes. This study analyzes the impact of an increased share of renewable energy sources on the behavior of price and volumes of the Italian balancing market. Starting from configurations of load and power production, which guarantee a stable performance, we implement fluctuations in the load and in renewables; in particular we artificially increase the contribution of renewables as compared to conventional power sources to cover the total load. We then determine the amount of requested energy in the balancing market and its fluctuations, which are induced by production and consumption. Within an approach of agent-based modeling we estimate the resulting energy prices and costs. While their average values turn out to be only slightly affected by an increased contribution from renewables, the probability for extreme price events is shown to increase along with undesired peaks in the costs. Our methodology provides a tool for estimating outliers in prices obtained in the energy balancing market, once data of consumption, production and their typical fluctuations are provided.
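
    The qualitative effect reported here can be illustrated with a deliberately simple Monte Carlo sketch (not the study's agent-based model; every number below is an invented assumption): if the variance of the system imbalance grows with the renewable share, and the balancing price is convex in the imbalance, then the probability of extreme price spikes rises much faster than the average price.

    ```python
    import random

    def spike_probability(renewable_share: float, n: int = 20000,
                          price_cap: float = 500.0, seed: int = 1) -> float:
        """Toy Monte Carlo: fraction of intervals whose balancing price
        exceeds price_cap. Imbalance volatility grows with the share of
        (more variable) renewable generation; the price is convex in the
        imbalance, so extremes become disproportionately likely."""
        rng = random.Random(seed)
        sigma = 0.05 + 0.30 * renewable_share   # hypothetical volatility model
        spikes = 0
        for _ in range(n):
            imbalance = rng.gauss(0.0, sigma)   # net energy to be balanced
            price = 50.0 + 2000.0 * imbalance**2  # convex merit-order proxy
            spikes += price > price_cap
        return spikes / n

    low, high = spike_probability(0.1), spike_probability(0.6)
    print(f"spike probability at 10% renewables: {low:.4f}, at 60%: {high:.4f}")
    ```

    With these toy parameters, spikes are essentially absent at a 10% renewable share and become clearly visible at 60%, mirroring the paper's finding that extreme-event probability, rather than the mean, is the sensitive quantity.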

  8. Mechanics of evolutionary digit reduction in fossil horses (Equidae).

    PubMed

    McHorse, Brianna K; Biewener, Andrew A; Pierce, Stephanie E

    2017-08-30

    Digit reduction is a major trend that characterizes horse evolution, but its causes and consequences have rarely been quantitatively tested. Using beam analysis on fossilized centre metapodials, we tested how locomotor bone stresses changed with digit reduction and increasing body size across the horse lineage. Internal bone geometry was captured from 13 fossil horse genera that covered the breadth of the equid phylogeny and the spectrum of digit reduction and body sizes, from Hyracotherium to Equus. To account for the load-bearing role of side digits, a novel, continuous measure of digit reduction was also established: the toe reduction index (TRI). Our results show that without accounting for side digits, three-toed horses as late as Parahippus would have experienced physiologically untenable bone stresses. Conversely, when side digits are modelled as load-bearing, species at the base of the horse radiation through Equus probably maintained a similar safety factor to fracture stress. We conclude that the centre metapodial compensated for evolutionary digit reduction and body mass increases by becoming more resistant to bending through substantial positive allometry in internal geometry. These results lend support to two historical hypotheses: that increasing body mass selected for a single, robust metapodial rather than several smaller ones; and that, as horse limbs became elongated, the cost of inertia from the side toes outweighed their utility for stabilization or load-bearing. © 2017 The Author(s).
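
    The kind of calculation beam analysis performs here can be shown as a worked example (the load, bone length, radius, load-sharing fraction, and the ~150 MPa fracture stress below are illustrative assumptions, not the paper's measurements): peak bending stress in a solid circular beam is sigma = M*c/I, and shifting part of the load to side digits lowers the centre-bone stress in proportion.

    ```python
    import math

    def bending_stress(force_n: float, length_m: float, radius_m: float) -> float:
        """Peak bending stress (Pa) in a solid circular cantilever beam:
        sigma = M * c / I, with M = F * L, c = r, and I = pi * r**4 / 4."""
        moment = force_n * length_m
        inertia = math.pi * radius_m**4 / 4.0
        return moment * radius_m / inertia

    # Hypothetical numbers: 2000 N limb load, 0.2 m metapodial, 15 mm radius.
    full = bending_stress(2000.0, 0.2, 0.015)          # centre digit alone
    shared = bending_stress(2000.0 * 0.7, 0.2, 0.015)  # side digits carry 30%
    FRACTURE_STRESS = 150e6  # assumed ~150 MPa bone fracture stress
    print(f"safety factor alone:  {FRACTURE_STRESS / full:.2f}")
    print(f"safety factor shared: {FRACTURE_STRESS / shared:.2f}")
    ```

    With these toy numbers the safety factor dips below 1 when the centre digit carries everything and recovers above 1 once side digits share load, echoing the paper's contrast between the two modelling assumptions.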

  9. Today's sediment budget of the Rhine River channel, focusing on the Upper Rhine Graben and Rhenish Massif

    NASA Astrophysics Data System (ADS)

    Frings, Roy M.; Gehres, Nicole; Promny, Markus; Middelkoop, Hans; Schüttrumpf, Holger; Vollmer, Stefan

    2014-01-01

    The river bed of the Rhine River is subject to severe erosion and sedimentation. Such high geomorphological process rates are unwanted for economical, ecological, and safety reasons. The objectives of this study were (1) to quantify the geomorphological development of the Rhine River between 1985 and 2006; (2) to investigate the bed erosion process; and (3) to distinguish between tectonic, hydrological, and human controls. We used a unique data set with thousands of bedload and suspended-load measurements and quantified the fluxes of gravel, sand, silt, and clay through the northern Upper Rhine Graben and the Rhenish Massif. Furthermore, we calculated bed level changes and evaluated the sediment budget of the channel. Sediment transport rates were found to change in the downstream direction: silt and clay loads increase because of tributary supply; sand loads increase because of erosion of sand from the bed; and gravel loads decrease because of reduced sediment mobility caused by the base-level control exerted by the uplifting Rhenish Massif. This base-level control shows that tectonic setting, in addition to hydrology and human interventions, represents a major control on morphodynamics in the Rhine. The Rhine bed appears to be in a state of disequilibrium, with an average net bed degradation of 3 mm/a. Sand being eroded from the bed is primarily washed away in suspension, indicating a rapid supply of sand to the Rhine delta. The degradation is the result of an increased sediment transport capacity caused by nineteenth- and twentieth-century river training works. In order to reduce degradation, huge amounts of sediment are fed into the river by river managers. Bed degradation and artificial sediment feeding represent the major sources of sand and gravel to the study area; only small amounts of sediment are supplied naturally from upstream or by tributaries. Sediment sinks include dredging, abrasion, and the sediment output to the downstream area. Large uncertainties exist about the amounts of sediment deposited on floodplains and in groyne fields. Compared to the natural situation during the middle Holocene, the present-day gravel and sand loads seem to be lower, whereas the silt and clay loads seem to be higher. This is probably caused by the present-day absence of meander migration, the deforestation, and the reduced sediment trapping efficiency of the floodplains. Even under natural conditions no equilibrium bed level existed.

  10. Nutrient Concentrations, Loads, and Yields in the Eucha-Spavinaw Basin, Arkansas and Oklahoma, 2002-2006

    USGS Publications Warehouse

    Tortorelli, Robert L.

    2008-01-01

    The City of Tulsa, Oklahoma, uses Lake Eucha and Spavinaw Lake in the Eucha-Spavinaw basin in northwestern Arkansas and northeastern Oklahoma for public water supply. Taste and odor problems in the water attributable to blue-green algae have increased in frequency. Changes in the algae community in the lakes may be attributable to increases in nutrient levels in the lakes, and in the waters feeding the lakes. The U.S. Geological Survey, in cooperation with the City of Tulsa, investigated and summarized nitrogen and phosphorus concentrations and provided estimates of nitrogen and phosphorus loads, yields, and flow-weighted concentrations in the Eucha-Spavinaw basin for three 3-year periods - 2002-2004, 2003-2005, and 2004-2006, to update a previous report that used data from water-quality samples for a 3-year period from January 2002 through December 2004. This report provides information needed to advance knowledge of the regional hydrologic system and understanding of hydrologic processes, and provides hydrologic data and results useful to multiple agencies for interstate agreements. Nitrogen and phosphorus concentrations were significantly greater in runoff samples than in base-flow samples for all three periods at Spavinaw Creek near Maysville, Arkansas; Spavinaw Creek near Colcord, Oklahoma, and Beaty Creek near Jay, Oklahoma. Runoff concentrations were not significantly greater than base-flow concentrations at Spavinaw Creek near Cherokee, Arkansas; and Spavinaw Creek near Sycamore, Oklahoma except for phosphorus during 2003-2005. Nitrogen concentrations in base-flow samples significantly increased downstream in Spavinaw Creek from the Maysville to Sycamore stations then significantly decreased from the Sycamore to the Colcord stations for all three periods. Nitrogen in base-flow samples from Beaty Creek was significantly less than in samples from Spavinaw Creek. 
Phosphorus concentrations in base-flow samples significantly increased from the Maysville to Cherokee stations in Spavinaw Creek for all three periods, probably because of a wastewater-treatment plant point source between those stations, and then significantly decreased downstream from the Cherokee to Colcord stations. Phosphorus in base-flow samples from Beaty Creek was significantly less than phosphorus in base-flow samples from Spavinaw Creek downstream from the Maysville station. Nitrogen concentrations in runoff samples were not significantly different among the stations on Spavinaw Creek for most of the three periods, except during 2003-2005 when runoff samples at the Colcord station were less than at the Sycamore station; however, the concentrations at Beaty Creek were significantly less than at all other stations. Phosphorus concentrations in runoff samples were not significantly different among the three downstream stations on Spavinaw Creek and were significantly different at the Maysville station on Spavinaw Creek and the Beaty Creek station, only during 2004-2006. Phosphorus and nitrogen concentrations in runoff samples from all stations generally increased with increasing streamflow. Estimated mean annual nitrogen total loads for the three 3-year periods were substantially greater at the Spavinaw Creek stations than at Beaty Creek and increased downstream from Maysville to Colcord in Spavinaw Creek, with the load at the Colcord station about 2 times that at Maysville station. Estimated mean annual nitrogen base-flow loads at the Spavinaw Creek stations were about 5 to 11 times greater than base-flow loads at Beaty Creek. The runoff component of the annual nitrogen total load for Beaty Creek was 85 to 89 percent; whereas, the range in the runoff component at the Spavinaw Creek stations was 60 to 71 percent. 
Estimated mean annual phosphorus total loads for the three 3-year periods were greater at the Spavinaw Creek stations from Cherokee to Colcord than at Beaty Creek and increased downstream from Maysville to Colcord in Spavinaw Creek, wit

  11. 14 CFR 27.571 - Fatigue evaluation of flight structure.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... § 27.309, except that maneuvering load factors need not exceed the maximum values expected in operation... paragraph (a)(3) of this section. (b) Fatigue tolerance evaluation. It must be shown that the fatigue tolerance of the structure ensures that the probability of catastrophic fatigue failure is extremely remote...

  12. Effect of topographic characteristics on compound topographic index for identification of gully channel initiation locations

    USDA-ARS?s Scientific Manuscript database

    Sediment loads from gully erosion can be a significant sediment source within a watershed, resulting in major contributions to water quality problems, reduction of crop productivity by removal of nutrient-rich topsoil, and damage to downstream ecosystems. Areas containing a high probability of forming ...

  13. 14 CFR 29.729 - Retracting mechanism.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 29.777 and 29.779. (g...

  14. 14 CFR 27.729 - Retracting mechanism.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 27.777 and 27.779. (g...

  15. 14 CFR 29.729 - Retracting mechanism.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 29.777 and 29.779. (g...

  16. 14 CFR 27.729 - Retracting mechanism.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 27.777 and 27.779. (g...

  17. 14 CFR 29.729 - Retracting mechanism.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 29.777 and 29.779. (g...

  18. 14 CFR 29.729 - Retracting mechanism.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 29.777 and 29.779. (g...

  19. 14 CFR 27.729 - Retracting mechanism.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 27.777 and 27.779. (g...

  20. 14 CFR 27.729 - Retracting mechanism.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 27.777 and 27.779. (g...

  1. 14 CFR 27.729 - Retracting mechanism.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 27.777 and 27.779. (g...

  2. 14 CFR 29.729 - Retracting mechanism.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 29.777 and 29.779. (g...

  3. 14 CFR 23.1353 - Storage battery design and installation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... take appropriate load shedding action. [Doc. No. 4080, 29 FR 17955, Dec. 18, 1964; 30 FR 258, Jan. 9... any probable charging and discharging condition. No uncontrolled increase in cell temperature may... temperatures and pressures presents no problem. (d) No explosive or toxic gases emitted by any battery in...

  4. 14 CFR 23.1353 - Storage battery design and installation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... generated power and to take appropriate load shedding action. [Doc. No. 4080, 29 FR 17955, Dec. 18, 1964; 30... and pressures must be maintained during any probable charging and discharging condition. No... shown that maintaining safe cell temperatures and pressures presents no problem. (d) No explosive or...

  5. 14 CFR 23.1353 - Storage battery design and installation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... generated power and to take appropriate load shedding action. [Doc. No. 4080, 29 FR 17955, Dec. 18, 1964; 30... and pressures must be maintained during any probable charging and discharging condition. No... shown that maintaining safe cell temperatures and pressures presents no problem. (d) No explosive or...

  6. Lighten the Load

    ERIC Educational Resources Information Center

    Kennedy, Mike

    2008-01-01

    The green movement in school design encompasses many techniques to improve the environmental friendliness and energy efficiency of a facility. Some are more complicated than others--probably not many people can explain the intricacies of a geothermal heating system, or the specifics of how solar or wind energy is harnessed. Most people, however,…

  7. 30. Photocopy of blueprint. PLAN, ELEVATION, END SECTION, DETAIL OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    30. Photocopy of blueprint. PLAN, ELEVATION, END SECTION, DETAIL OF DECK SYSTEM AND LOAD COMPUTATION. Preparer unknown, date unknown, but probably ca. 1932. (Original in possession of the Washington County Highway Department.) - Hegeman-Hill Street Bridge, Spanning Batten Kill, .65 mile West of Greenwich, Easton, Washington County, NY

  8. Autonomous power expert system

    NASA Technical Reports Server (NTRS)

    Walters, Jerry L.; Petrik, Edward J.; Roth, Mary Ellen; Truong, Long Van; Quinn, Todd; Krawczonek, Walter M.

    1990-01-01

    The Autonomous Power Expert (APEX) system was designed to monitor and diagnose fault conditions that occur within the Space Station Freedom Electrical Power System (SSF/EPS) Testbed. APEX is designed to interface with SSF/EPS testbed power management controllers to provide enhanced autonomous operation and control capability. The APEX architecture consists of three components: (1) a rule-based expert system, (2) a testbed data acquisition interface, and (3) a power scheduler interface. Fault detection, fault isolation, justification of probable causes, recommended actions, and incipient fault analysis are the main functions of the expert system component. The data acquisition component requests and receives pertinent parametric values from the EPS testbed and asserts the values into a knowledge base. Power load profile information is obtained from a remote scheduler through the power scheduler interface component. The current APEX design and development work is discussed. Operation and use of APEX by way of the user interface screens is also covered.

  9. Understanding the effects of different HIV transmission models in individual-based microsimulation of HIV epidemic dynamics in people who inject drugs

    PubMed Central

    MONTEIRO, J.F.G.; ESCUDERO, D.J.; WEINREB, C.; FLANIGAN, T.; GALEA, S.; FRIEDMAN, S.R.; MARSHALL, B.D.L.

    2017-01-01

    SUMMARY We investigated how different models of HIV transmission, and assumptions regarding the distribution of unprotected sex and syringe-sharing events (‘risk acts’), affect quantitative understanding of the HIV transmission process in people who inject drugs (PWID). The individual-based model simulated HIV transmission in a dynamic sexual and injecting network representing New York City. We constructed four HIV transmission models: model 1, constant probabilities; model 2, random number of sexual and parenteral acts; model 3, individually assigned viral load; and model 4, two groups of partnerships (low and high risk). Overall, models with less heterogeneity were more sensitive to changes in the number of risk acts, producing HIV incidence up to four times higher than that empirically observed. Although all models overestimated HIV incidence, microsimulations with greater heterogeneity in the HIV transmission modelling process produced more robust results and better reproduced empirical epidemic dynamics. PMID:26753627

  10. Probabilistic structural analysis methods of hot engine structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Hopkins, D. A.

    1989-01-01

    Development of probabilistic structural analysis methods for hot engine structures at Lewis Research Center is presented. Three elements of the research program are: (1) composite load spectra methodology; (2) probabilistic structural analysis methodology; and (3) probabilistic structural analysis application. Recent progress includes: (1) quantification of the effects of uncertainties for several variables on high pressure fuel turbopump (HPFT) turbine blade temperature, pressure, and torque of the space shuttle main engine (SSME); (2) the evaluation of the cumulative distribution function for various structural response variables based on assumed uncertainties in primitive structural variables; and (3) evaluation of the failure probability. Collectively, the results demonstrate that the structural durability of hot engine structural components can be effectively evaluated in a formal probabilistic/reliability framework.
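
    The second element above — evaluating a cumulative distribution function for a structural response variable from assumed uncertainties in primitive variables — can be sketched with plain Monte Carlo sampling. The response function, variable names, and all distribution parameters below are illustrative stand-ins, not values from the SSME program:

```python
import random

def blade_temperature(coolant_temp, heat_flux, conductivity):
    # Hypothetical surrogate response function standing in for a real
    # thermal model of a turbine blade.
    return coolant_temp + heat_flux / conductivity

def empirical_cdf(samples, x):
    # Fraction of Monte Carlo samples at or below x.
    return sum(1 for s in samples if s <= x) / len(samples)

random.seed(0)
samples = []
for _ in range(20000):
    # Assumed (illustrative) uncertainties in the primitive variables.
    ct = random.gauss(600.0, 20.0)   # coolant temperature
    q = random.gauss(1.0e5, 1.0e4)   # heat flux
    k = random.gauss(25.0, 1.0)      # conductivity
    samples.append(blade_temperature(ct, q, k))

median = sorted(samples)[len(samples) // 2]
p_exceed = 1.0 - empirical_cdf(samples, 5000.0)  # illustrative temperature limit
```

The same empirical CDF, read at a design limit, directly yields a failure-probability estimate (the third element of the program).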

  11. Bending and Shear Behavior of Pultruded Glass Fiber Reinforced Polymer Composite Beams With Closed and Open Sections

    NASA Astrophysics Data System (ADS)

    Estep, Daniel Douglas

    Several advantages, such as high strength-to-weight ratio, high stiffness, superior corrosion resistance, and high fatigue and impact resistance, among others, make FRPs an attractive alternative to conventional construction materials for use in developing new structures as well as rehabilitating in-service infrastructure. As the number of infrastructure applications using FRPs grows, the need for the development of a uniform Load and Resistance Factor Design (LRFD) approach, including design procedures and examples, has become paramount. Step-by-step design procedures and easy-to-use design formulas are necessary to assure the quality and safety of FRP structural systems by reducing the possibility of design and construction errors. Since 2008, the American Society of Civil Engineers (ASCE), in coordination with the American Composites Manufacturers Association (ACMA), has overseen the development of the Pre-Standard for Load and Resistance Factor Design (LRFD) of Pultruded Fiber Reinforced Polymer (FRP) Structures using probability-based limit states design. The fifth chapter of the pre-standard focuses on the design of members in flexure and shear under different failure modes, where the current failure load prediction models proposed within have been shown to be highly inaccurate based on experimental data and evaluation performed by researchers at the West Virginia University Constructed Facilities Center. A new prediction model for determining the critical flexural load capacity of pultruded GFRP square and rectangular box beams is presented within. This model shows that the type of failure can be related to threshold values of the beam span-to-depth ratio (L/h) and total flange width-to-thickness ratio (bf /t), resulting in three governing modes of failure: local buckling failure in the compression flange (4 ≤ L/h < 6), combined strain failure at the web-flange junction (6 ≤ L/h ≤ 10), and bending failure in the tension flange (10 < L/h ≤ 42). 
Broadly, the proposed equations predict critical flexural load capacities within +/-22.3% of experimental data for all cases, with over 70% of all experimental data falling within +/-10% error. A second prediction model was developed for predicting the critical lateral-torsional buckling (LTB) load for pultruded GFRP open sections, including wide flange (WF) sections and channels. Multiple LTB equations from several sources were considered and applied but yielded inaccurate results, leading to the development of this new critical buckling load prediction model based on the well-established elastic LTB strength equation for steel. By making a series of modifications to equations for calculating the weak axis moment of inertia, torsional warping constant, and torsion constant for open sections, as well as recognizing the influence of the shear lag phenomenon, the critical LTB load is predicted within +/-15.2% of experimental data for all channel and WF specimens tested and evaluated in the study.
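
    The span-to-depth thresholds reported above amount to a simple classification rule. A minimal sketch, with mode labels paraphrasing the text (the function itself is not from the pre-standard):

```python
def box_beam_failure_mode(span, depth):
    """Governing flexural failure mode of a pultruded GFRP box beam from
    its span-to-depth ratio, per the thresholds reported in the study."""
    r = span / depth
    if r < 4 or r > 42:
        raise ValueError("L/h outside the range covered by the model")
    if r < 6:
        return "local buckling of the compression flange"
    if r <= 10:
        return "combined strain failure at the web-flange junction"
    return "bending failure in the tension flange"
```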

  12. Fracture mechanics analysis of cracked structures using weight function and neural network method

    NASA Astrophysics Data System (ADS)

    Chen, J. G.; Zang, F. G.; Yang, Y.; Shi, K. K.; Fu, X. L.

    2018-06-01

    Stress intensity factors (SIFs) due to thermal-mechanical loads were established using the weight function method. Two reference stress states were used to determine the coefficients in the weight function. Results were evaluated against data from the literature and show good agreement. The SIFs of cracks subjected to arbitrary loads can therefore be determined quickly with the obtained weight function, and the presented method can be used for probabilistic fracture mechanics analysis. A probabilistic methodology combining Monte Carlo simulation with a neural network (MCNN) was also developed. The results indicate that an accurate probabilistic characterisation of K_I can be obtained using the developed method. The probability of failure increases with increasing load, and the relationship between the two is nonlinear.
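
    The probabilistic step — comparing a computed K_I against the fracture toughness over random loads and crack sizes — can be illustrated with a plain Monte Carlo loop. The closed-form SIF below replaces the paper's weight-function solution, and every numeric value is an assumed placeholder:

```python
import math
import random

def stress_intensity(sigma, a, Y=1.12):
    # Handbook K_I for a shallow surface crack under remote tension; the
    # paper's weight-function solution would replace this closed form.
    return Y * sigma * math.sqrt(math.pi * a)

random.seed(1)
K_IC = 60.0   # fracture toughness, MPa*sqrt(m) -- assumed
N = 50000
failures = 0
for _ in range(N):
    sigma = random.gauss(200.0, 30.0)           # load stress, MPa -- assumed scatter
    a = max(random.gauss(0.02, 0.003), 1e-6)    # crack depth, m; clamped to keep sqrt valid
    if stress_intensity(sigma, a) >= K_IC:
        failures += 1
p_f = failures / N   # Monte Carlo estimate of the probability of failure
```

Raising the mean load shifts the K_I distribution toward K_IC, so p_f grows with load, and (because K_I is nonlinear in crack size) the growth is itself nonlinear.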

  13. The Stress-strain Behavior of Polymer-Nanotube Composites from Molecular Dynamics Simulations

    NASA Technical Reports Server (NTRS)

    Frankland, S. J. V.; Harik, V. M.; Odegard, G. M.; Brenner, D. W.; Gates, T. S.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Stress-strain curves of polymer-carbon nanotube composites are derived from molecular dynamics simulations of a single-walled carbon nanotube embedded in polyethylene. A comparison is made between the response to mechanical loading of a composite with a long, continuous nanotube (replicated via periodic boundary conditions) and the response of a composite with a short, discontinuous nanotube. Both composites are mechanically loaded in the direction of and transverse to the NT axis. The long-nanotube composite shows an increase in the stiffness relative to the polymer and behaves anisotropically under the different loading conditions. The short-nanotube composite shows no enhancement relative to the polymer, most probably because of its low aspect ratio. The stress-strain curves are compared with rule-of-mixtures predictions.

  14. Probabilistic safety assessment of the design of a tall buildings under the extreme load

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Králik, Juraj, E-mail: juraj.kralik@stuba.sk

    2016-06-08

    The paper describes experience from deterministic and probabilistic analyses of the safety of a tall building structure. The methods and requirements of Eurocode EN 1990, standard ISO 2394, and the JCSS are presented. Uncertainties in the model and in the resistance of the structure are considered using simulation methods. The Monte Carlo, LHS, and RSM probabilistic methods are compared with the deterministic results. An example probability analysis of tall-building safety demonstrates the effectiveness of probability-based design of structures using the finite element method.
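
    Of the sampling schemes named above, Latin hypercube sampling (LHS) is the one that differs structurally from plain Monte Carlo: each input's range is stratified so that every stratum is sampled exactly once. A minimal sketch on the unit cube:

```python
import random

def latin_hypercube(n, dims, rng):
    """Latin hypercube sample on the unit cube: each dimension is split
    into n equal-probability strata, each stratum receives exactly one
    point, and the strata are shuffled independently per dimension."""
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))  # n points, each a dims-tuple

rng = random.Random(42)
pts = latin_hypercube(10, 2, rng)
```

Mapping each coordinate through an inverse CDF turns these uniform points into stratified samples of the physical input distributions.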

  15. Probabilistic safety assessment of the design of a tall buildings under the extreme load

    NASA Astrophysics Data System (ADS)

    Králik, Juraj

    2016-06-01

    The paper describes experience from deterministic and probabilistic analyses of the safety of a tall building structure. The methods and requirements of Eurocode EN 1990, standard ISO 2394, and the JCSS are presented. Uncertainties in the model and in the resistance of the structure are considered using simulation methods. The Monte Carlo, LHS, and RSM probabilistic methods are compared with the deterministic results. An example probability analysis of tall-building safety demonstrates the effectiveness of probability-based design of structures using the finite element method.

  16. Performance evaluation of an importance sampling technique in a Jackson network

    NASA Astrophysics Data System (ADS)

    Mahdipour, Ebrahim; Rahmani, Amir Masoud; Setayeshi, Saeed

    2014-03-01

    Importance sampling is a technique that is commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network settings. Estimating probabilities associated with rare events has been a topic of great importance in queueing theory, and in applied probability at large. In this article, we analyse the performance of an importance sampling estimator for a rare event probability in a Jackson network. The article applies strict deadlines to a two-node Jackson network with feedback whose arrival and service rates are modulated by an exogenous finite-state Markov process. We estimated the probability of network blocking for various sets of parameters, as well as the probability of customers missing their deadlines for different loads and deadlines. Finally, we show that the probability of total population overflow may be affected by the various deadline values, service rates and arrival rates.
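
    The core idea of importance sampling for rare events — sample from a tilted distribution under which the rare event is common, then reweight each hit by the likelihood ratio — can be shown on a toy tail probability. The exponential model and tilt value below are a textbook illustration, not the article's Jackson-network estimator:

```python
import math
import random

def rare_event_prob_is(threshold, rate, tilt, n, rng):
    """Estimate P(X > threshold) for X ~ Exp(rate) by sampling from the
    tilted density Exp(tilt), tilt << rate, and reweighting each sample
    that lands in the rare region by the likelihood ratio f(x)/g(x)."""
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(tilt)
        if x > threshold:
            total += (rate / tilt) * math.exp(-(rate - tilt) * x)
    return total / n

rng = random.Random(7)
# Tilt chosen near 1/threshold so the "rare" region is hit often.
est = rare_event_prob_is(threshold=20.0, rate=1.0, tilt=0.05, n=20000, rng=rng)
exact = math.exp(-20.0)  # true tail probability, ~2.1e-9
```

Naive Monte Carlo would need on the order of 10^9 samples to see this event even once; the tilted estimator recovers it to a few percent with 20,000.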

  17. Probability mass first flush evaluation for combined sewer discharges.

    PubMed

    Park, Inhyeok; Kim, Hongmyeong; Chae, Soo-Kwon; Ha, Sungryong

    2010-01-01

    The Korean government has put considerable effort into constructing sanitation facilities for controlling non-point source pollution. The first flush phenomenon is a prime example of such pollution. However, to date, several serious problems have arisen in the operation and treatment effectiveness of these facilities due to unsuitable design flow volumes and pollution loads. It is difficult to assess the optimal flow volume and pollution mass when considering both monetary and temporal limitations. The objective of this article was to characterize the discharge of storm runoff pollution from urban catchments in Korea and to estimate the probability of mass first flush (MFFn) using the storm water management model and probability density functions. A review of the storms gauged over the last two years, which used probability density functions of rainfall volume to test representativeness, found all gauged storms to be valid representative precipitation events. Both the observed MFFn and probability MFFn in BE-1 showed similarly large magnitudes of first flush, with roughly 40% of the total pollution mass contained in the first 20% of the runoff. In the case of BE-2, however, there was a significant difference between the observed MFFn and probability MFFn.
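
    The MFFn statistic referenced above is the fraction of total pollutant mass delivered in the first n% of runoff volume, divided by n%. A sketch with made-up interval data, chosen so that (as in BE-1) about 40% of the mass arrives in the first 20% of the volume:

```python
def mass_first_flush(volumes, masses, volume_fraction):
    """MFF ratio for one storm: (fraction of total pollutant mass in the
    first `volume_fraction` of runoff volume) / volume_fraction. Inputs
    are per-interval runoff volumes and pollutant masses."""
    tot_v, tot_m = sum(volumes), sum(masses)
    target = volume_fraction * tot_v
    cum_v = cum_m = 0.0
    for v, m in zip(volumes, masses):
        if cum_v + v >= target:
            frac = (target - cum_v) / v   # linear interpolation inside the interval
            cum_m += m * frac
            break
        cum_v += v
        cum_m += m
    return (cum_m / tot_m) / volume_fraction

# Illustrative storm: pollutant mass concentrated early in the hydrograph.
vols = [10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
mass = [25, 15, 10, 8, 8, 8, 8, 6, 6, 6]
mff20 = mass_first_flush(vols, mass, 0.20)
```

Here mff20 = 2.0, i.e. 40% of the mass in the first 20% of the volume; a value of 1.0 would mean no first flush at all.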

  18. Survival Model for Foot and Leg High Rate Axial Impact Injury Data.

    PubMed

    Bailey, Ann M; McMurry, Timothy L; Poplin, Gerald S; Salzar, Robert S; Crandall, Jeff R

    2015-01-01

    Understanding how lower extremity injuries from automotive intrusion and underbody blast (UBB) differ is of key importance when determining whether automotive injury criteria can be applied to blast rate scenarios. This article provides a review of existing injury risk analyses and outlines an approach to improve injury prediction for an expanded range of loading rates. This analysis will address issues with existing injury risk functions including inaccuracies due to inertial and potential viscous resistance at higher loading rates. This survival analysis attempts to minimize these errors by considering injury location statistics and a predictor variable selection process dependent upon failure mechanisms of bone. Distribution of foot/ankle/leg injuries induced by axial impact loading at rates characteristic of UBB as well as automotive intrusion was studied and calcaneus injuries were found to be the most common injury; thus, footplate force was chosen as the main predictor variable because of its proximity to injury location to prevent inaccuracies associated with inertial differences due to loading rate. A survival analysis was then performed with age, sex, dorsiflexion angle, and mass as covariates. This statistical analysis uses data from previous axial postmortem human surrogate (PMHS) component leg tests to provide perspectives on how proximal boundary conditions and loading rate affect injury probability in the foot/ankle/leg (n = 82). Tibia force-at-fracture proved to be up to 20% inaccurate in previous analyses because of viscous resistance and inertial effects within the data set used, suggesting that previous injury criteria are accurate only for specific rates of loading and boundary conditions. The statistical model presented in this article predicts 50% probability of injury for a plantar force of 10.2 kN for a 50th percentile male with a neutral ankle position. 
Force rate was found to be an insignificant covariate because of the limited range of loading rate differences within the data set; however, compensation for inertial effects caused by measuring the force-at-fracture in a location closer to expected injury location improved the model's predictive capabilities for the entire data set. This study provides better injury prediction capabilities for both automotive and blast rates because of reduced sensitivity to inertial effects and tibia-fibula load sharing. Further, a framework is provided for future injury criteria generation for high rate loading scenarios. This analysis also suggests key improvements to be made to existing anthropomorphic test device (ATD) lower extremities to provide accurate injury prediction for high rate applications such as UBB.
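
    The reported anchor point — 50% injury probability at a 10.2 kN plantar force — implies a monotone risk curve. A hypothetical logistic form is sketched below; the spread parameter is an assumption for illustration, since the survival model's actual shape is not given here:

```python
import math

def injury_probability(force_kn, f50=10.2, scale=1.5):
    """Hypothetical logistic risk curve anchored at the reported 50%
    injury probability at a 10.2 kN plantar force (50th-percentile male,
    neutral ankle). `scale` (kN) is an assumed spread, not a fitted value."""
    return 1.0 / (1.0 + math.exp(-(force_kn - f50) / scale))
```

In the paper itself, covariates such as age, sex, dorsiflexion angle, and mass would shift this curve per subject.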

  19. Design for cyclic loading endurance of composites

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Murthy, Pappu L. N.; Chamis, Christos C.; Liaw, Leslie D. G.

    1993-01-01

    The application of the computer code IPACS (Integrated Probabilistic Assessment of Composite Structures) to aircraft wing type structures is described. The code performs a complete probabilistic analysis for composites taking into account the uncertainties in geometry, boundary conditions, material properties, laminate lay-ups, and loads. Results of the analysis are presented in terms of cumulative distribution functions (CDF) and probability density function (PDF) of the fatigue life of a wing type composite structure under different hygrothermal environments subjected to the random pressure. The sensitivity of the fatigue life to a number of critical structural/material variables is also computed from the analysis.

  20. Transient Reliability of Ceramic Structures For Heat Engine Applications

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Jadaan, Osama M.

    2002-01-01

    The objective of this report was to develop a methodology to predict the time-dependent reliability (probability of failure) of brittle material components subjected to transient thermomechanical loading, taking into account the change in material response with time. This methodology for computing the transient reliability of ceramic components subjected to fluctuating thermomechanical loading was developed assuming slow crack growth (SCG) as the delayed mode of failure. It takes into account the effect of the Weibull modulus and material parameters varying with time. The methodology was also coded into a beta version of NASA's CARES/Life code, and an example demonstrating its viability was presented.

  1. Temporal trends in postseroconversion CD4 cell count and HIV load: the Concerted Action on Seroconversion to AIDS and Death in Europe Collaboration, 1985-2002.

    PubMed

    Dorrucci, Maria; Rezza, Giovanni; Porter, Kholoud; Phillips, Andrew

    2007-02-15

    To determine whether early postseroconversion CD4 cell counts and human immunodeficiency virus (HIV) loads have changed over time. Our analysis was based on 22 cohorts of people with known dates of seroconversion from Europe, Australia, and Canada (Concerted Action on Seroconversion to AIDS and Death in Europe Collaboration). We focused on individuals seroconverting between 1985 and 2002 who had the first CD4 cell count (n=3687) or HIV load (n=1584) measured within 2 years of seroconversion and before antiretroviral use. Linear regression models were used to assess time trends in postseroconversion CD4 cell count and HIV load. Trends in time to key thresholds were also assessed, using survival analysis. The overall median initial CD4 cell count was 570 cells/μL (interquartile range [IQR], 413-780 cells/μL). The median initial HIV load was 35,542 copies/mL (IQR, 7600-153,050 copies/mL; on log10 scale, 3.9-5.2 log10 copies/mL). The postseroconversion CD4 cell count changed by an average of -6.33 cells/μL/year (95% confidence interval [CI], -8.47 to -4.20 cells/μL/year; P<.001), whereas an increase was observed in log10 HIV load (+0.044 log10 copies/mL/year; 95% CI, +0.034 to +0.053 log10 copies/mL/year). These trends remained after adjusting for potential confounders. The probability of progressing to a CD4 cell count of <500 cells/μL by 24 months from seroconversion increased from 0.66 (95% CI, 0.63-0.69) for individuals who seroconverted before 1991 to 0.80 (95% CI, 0.75-0.84) for those who seroconverted during 1999-2002. These data suggest that, in Europe, there has been a trend of decrease in the early CD4 cell count and of increase in the early HIV load. Additional research will be necessary to determine whether similar trends exist in other geographical areas.
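
    The calendar-time trends were quantified with linear regression. An ordinary least-squares sketch on made-up yearly values that roughly follow the reported decline of -6.33 cells/μL per calendar year:

```python
def linear_trend(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Made-up seroconversion-year medians (not the study's data).
years = [1985, 1990, 1995, 2000]
cd4 = [620, 588, 557, 525]
slope, intercept = linear_trend(years, cd4)
```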

  2. GNSS Signal Tracking Performance Improvement for Highly Dynamic Receivers by Gyroscopic Mounting Crystal Oscillator

    PubMed Central

    Abedi, Maryam; Jin, Tian; Sun, Kewen

    2015-01-01

    In this paper, the efficiency of the gyroscopic mounting method is studied for a highly dynamic GNSS receiver’s reference oscillator for reducing signal loss. Analyses are performed separately in two phases, atmospheric and upper atmospheric flights. Results show that the proposed mounting reduces signal loss, especially in parts of the trajectory where its probability is the highest. This reduction effect appears especially for crystal oscillators with a low elevation angle g-sensitivity vector. The gyroscopic mounting influences frequency deviation or jitter caused by dynamic loads on replica carrier and affects the frequency locked loop (FLL) as the dominant tracking loop in highly dynamic GNSS receivers. In terms of steady-state load, the proposed mounting mostly reduces the frequency deviation below the one-sigma threshold of FLL (1σFLL). The mounting method can also reduce the frequency jitter caused by sinusoidal vibrations and reduces the probability of signal loss in parts of the trajectory where the other error sources accompany this vibration load. In the case of random vibration, which is the main disturbance source of FLL, gyroscopic mounting is even able to suppress the disturbances greater than the three-sigma threshold of FLL (3σFLL). In this way, signal tracking performance can be improved by the gyroscopic mounting method for highly dynamic GNSS receivers. PMID:26404286

  3. Early fluid loading for septic patients: Any safety limit needed?

    PubMed

    Gong, Yi-Chun; Liu, Jing-Tao; Ma, Peng-Lin

    2018-02-01

    Early adequate fluid loading was the cornerstone of hemodynamic optimization for sepsis and septic shock. Meanwhile, the recently recommended protocol for fluid resuscitation has been increasingly debated with regard to hemodynamic stability vs the risk of overloading. Recent publications indicate that priority is often given to hemodynamic stability rather than organ function alteration in the early fluid resuscitation of sepsis, yet no safety limits were used at all in most of these reports. In this article, the rationality and safety of early aggressive fluid loading for septic patients are discussed. It is concluded that early aggressive fluid loading improves hemodynamics transiently but is probably traded off against subsequent organ function impairment, such as worsened oxygenation through reduced lung aeration, in at least a subset of septic patients. Thus, a safeguard is needed against unnecessary excessive fluids during early aggressive fluid loading for septic patients. Copyright © 2017 Daping Hospital and the Research Institute of Surgery of the Third Military Medical University. Production and hosting by Elsevier B.V. All rights reserved.

  4. Response of removal rates to various organic carbon and ammonium loads in laboratory-scale constructed wetlands treating artificial wastewater.

    PubMed

    Wu, Shubiao; Kuschk, Peter; Wiessner, Arndt; Kästner, Matthias; Pang, Changle; Dong, Renjie

    2013-01-01

    High levels (92 and 91%) of organic carbon were successfully removed from artificial wastewater by a laboratory-scale constructed wetland under inflow loads of 670 mg/m²·d (100 mg/d) and 1600 mg/m²·d (240 mg/d), respectively. Acidification to pH 3.0 was observed at the low organic carbon load, which further inhibited the denitrification process. An increase in carbon load, however, was associated with a significant elevation of pH to 6.0. In general, sulfate and nitrate reduction were relatively high, with mean levels of 87 and 90%, respectively. However, inhibition of nitrification was initiated with an increase in carbon loads. This effect was probably a result of competition for oxygen by heterotrophic bacteria and an inhibitory effect of sulfide (S²⁻) toxicity (concentration approximately 3 mg/L). In addition, the number of healthy stalks of Juncus effusus (common rush) decreased from 14 000 to 10 000/m² with an increase in sulfide concentration, indicating the negative effect of sulfide toxicity on the wetland plants.

  5. Consideration effect of wind farms on the network reconfiguration in the distribution systems in an uncertain environment

    NASA Astrophysics Data System (ADS)

    Rahmani, Kianoosh; Kavousifard, Farzaneh; Abbasi, Alireza

    2017-09-01

    This article proposes a novel probabilistic Distribution Feeder Reconfiguration (DFR) method that takes uncertainty impacts into account with high accuracy. To achieve this aim, different scenarios are generated to capture the uncertainty in the investigated elements, namely the active and reactive load consumption and the active power generation of the wind power units. Notably, a normal Probability Density Function (PDF) is divided into several class intervals for each uncertain parameter, based on the desired accuracy. The Weibull PDF is used to model wind generators and to capture the variation of their power production. The proposed problem is solved using Fuzzy Adaptive Modified Particle Swarm Optimisation to find the optimal switching scheme during the multi-objective DFR. Moreover, the paper proposes two new mutation methods that adjust the inertia weight of PSO via fuzzy rules to enhance its global searching ability within the entire search space.
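
    The scenario-generation step described above — equal-probability class intervals of a normal PDF for loads, plus Weibull-distributed wind speeds mapped through a turbine power curve — can be sketched as follows. All distribution and turbine parameters are illustrative assumptions, not values from the article:

```python
import math
import random

def normal_class_intervals(mu, sigma, k):
    """Split a normal PDF into k equal-probability class intervals and
    return each interval's central quantile as its representative value."""
    def inv_norm(p):
        # Invert the normal CDF by bisection on erf (slow but dependency-free).
        lo, hi = -10.0, 10.0
        for _ in range(80):
            mid = (lo + hi) / 2
            if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
                lo = mid
            else:
                hi = mid
        return mid
    return [mu + sigma * inv_norm((i + 0.5) / k) for i in range(k)]

def weibull_wind_power(rng, shape=2.0, scale=8.0, rated=12.0, cut_in=3.0, cut_out=25.0):
    """Sample a wind speed from a Weibull PDF and map it through a
    simplified per-unit turbine power curve."""
    v = rng.weibullvariate(scale, shape)
    if v < cut_in or v > cut_out:
        return 0.0
    return min(1.0, (v / rated) ** 3)

load_scenarios = normal_class_intervals(100.0, 10.0, 7)  # illustrative load levels
rng = random.Random(11)
wind = [weibull_wind_power(rng) for _ in range(1000)]
```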

  6. Probabilistic analysis for fatigue strength degradation of materials

    NASA Technical Reports Server (NTRS)

    Royce, Lola

    1989-01-01

    This report presents the results of the first year of a research program conducted for NASA-LeRC by the University of Texas at San Antonio. The research included development of methodology that provides a probabilistic treatment of lifetime prediction of structural components of aerospace propulsion systems subjected to fatigue. Material strength degradation models, based on primitive variables, include both a fatigue strength reduction model and a fatigue crack growth model. Linear elastic fracture mechanics is utilized in the latter model. Probabilistic analysis is based on simulation, and both maximum entropy and maximum penalized likelihood methods are used for the generation of probability density functions. The resulting constitutive relationships are included in several computer programs, RANDOM2, RANDOM3, and RANDOM4. These programs determine the random lifetime of an engine component, in mechanical load cycles, to reach a critical fatigue strength or crack size. The material considered was a cast nickel base superalloy, one typical of those used in the Space Shuttle Main Engine.
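
    The fatigue crack growth side of such a model can be sketched as a Paris-law integration inside a Monte Carlo loop over random initial flaw size and stress range. The constants below are generic illustrations, not the superalloy data used in the RANDOM programs:

```python
import math
import random

def cycles_to_critical(a0, a_crit, C, m, dsigma, Y=1.0, step=1000):
    """Forward-Euler integration of the Paris law da/dN = C*(dK)^m,
    advancing `step` cycles at a time until the crack reaches its
    critical size; returns the cycle count."""
    a, n = a0, 0
    while a < a_crit:
        dK = Y * dsigma * math.sqrt(math.pi * a)   # stress intensity range
        a += C * dK ** m * step
        n += step
    return n

random.seed(3)
lives = []
for _ in range(200):
    # Assumed scatter in initial flaw size and stress range (illustrative).
    a0 = random.uniform(0.5e-3, 1.5e-3)   # m
    ds = random.gauss(300.0, 20.0)        # MPa
    lives.append(cycles_to_critical(a0, 5.0e-3, 1.0e-12, 3.0, ds))
median_life = sorted(lives)[len(lives) // 2]
```

The resulting sample of lifetimes is exactly the kind of output from which a probability density of cycles-to-failure would be estimated.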

  7. Objective and subjective methods for quantifying training load in wheelchair basketball small-sided games.

    PubMed

    Iturricastillo, Aitor; Granados, Cristina; Los Arcos, Asier; Yanci, Javier

    2017-04-01

    The aim of the present study was to analyse the training load in wheelchair basketball small-sided games and determine the relationship between heart rate (HR)-based and perceived exertion (RPE)-based training load methods across small-sided game bouts. HR-based measurements of training load included Edwards' training load and Stagno's training impulses (TRIMPMOD), while RPE-based training load measurements included cardiopulmonary (session RPEres) and muscular (session RPEmus) values. Data were collected from 12 wheelchair basketball players during five consecutive weeks. The total load for the small-sided games sessions was 67.5 ± 6.7 and 55.3 ± 12.5 AU in HR-based training load (Edwards' training load and TRIMPMOD), while the RPE-based training loads were 99.3 ± 26.9 (session RPEres) and 100.8 ± 31.2 AU (session RPEmus). Bout-to-bout analysis identified greater session RPEmus in the third [P < 0.05; effect size (ES) = 0.66, moderate] and fourth bouts (P < 0.05; ES = 0.64, moderate) than in the first bout, but other measures did not differ. Mean correlations indicated a trivial-to-small relationship between HR-based and RPE-based training loads. It is suggested that HR-based and RPE-based training loads provide different information, but the two methods could be complementary because one method could help us to understand the limitations of the other.
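
    The two families of load measures compared above are simple to compute: Edwards' training load weights the minutes spent in five HR zones (50-60% to 90-100% of HRmax) by 1-5 and sums them, while the session-RPE load multiplies session duration by the 0-10 rating. The example session values below are arbitrary:

```python
def edwards_tl(minutes_in_zone):
    """Edwards' training load: minutes in each of five HR zones
    (50-60% ... 90-100% HRmax) weighted 1-5 and summed (AU)."""
    assert len(minutes_in_zone) == 5
    return sum(w * m for w, m in zip(range(1, 6), minutes_in_zone))

def session_rpe_load(duration_min, rpe):
    """Foster's session-RPE load: duration times the 0-10 rating (AU)."""
    return duration_min * rpe

tl = edwards_tl([5, 10, 15, 8, 2])   # arbitrary example session
srpe = session_rpe_load(40, 6)
```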

  8. White-nose syndrome pathology grading in Nearctic and Palearctic bats

    PubMed Central

    Pikula, Jiri; Amelon, Sybill K.; Bandouchova, Hana; Bartonička, Tomáš; Berkova, Hana; Brichta, Jiri; Hooper, Sarah; Kokurewicz, Tomasz; Kolarik, Miroslav; Köllner, Bernd; Kovacova, Veronika; Linhart, Petr; Piacek, Vladimir; Turner, Gregory G.; Zukal, Jan; Martínková, Natália

    2017-01-01

    While white-nose syndrome (WNS) has decimated hibernating bat populations in the Nearctic, species from the Palearctic appear to cope better with the fungal skin infection causing WNS. This has encouraged multiple hypotheses on the mechanisms leading to differential survival of species exposed to the same pathogen. To facilitate intercontinental comparisons, we proposed a novel pathogenesis-based grading scheme consistent with WNS diagnosis histopathology criteria. UV light-guided collection was used to obtain single biopsies from Nearctic and Palearctic bat wing membranes non-lethally. The proposed scheme scores eleven grades associated with WNS on histopathology. Given weights reflective of grade severity, the sum of findings from an individual results in weighted cumulative WNS pathology score. The probability of finding fungal skin colonisation and single, multiple or confluent cupping erosions increased with increase in Pseudogymnoascus destructans load. Increasing fungal load mimicked progression of skin infection from epidermal surface colonisation to deep dermal invasion. Similarly, the number of UV-fluorescent lesions increased with increasing weighted cumulative WNS pathology score, demonstrating congruence between WNS-associated tissue damage and extent of UV fluorescence. In a case report, we demonstrated that UV-fluorescence disappears within two weeks of euthermy. Change in fluorescence was coupled with a reduction in weighted cumulative WNS pathology score, whereby both methods lost diagnostic utility. While weighted cumulative WNS pathology scores were greater in the Nearctic than Palearctic, values for Nearctic bats were within the range of those for Palearctic species. Accumulation of wing damage probably influences mortality in affected bats, as demonstrated by a fatal case of Myotis daubentonii with natural WNS infection and healing in Myotis myotis. 
The proposed semi-quantitative pathology score provided good agreement between experienced raters, showing it to be a powerful and widely applicable tool for defining WNS severity. PMID:28767673
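
The weighted cumulative score described above is, at its core, a weighted sum over graded findings. A minimal sketch of that bookkeeping, with invented grade names and weights (the paper's eleven grades and their actual weights are not reproduced here):

```python
# Hypothetical illustration of a weighted cumulative pathology score:
# each histopathology grade found in a biopsy contributes a severity
# weight, and the per-individual score is the sum of those weights over
# all findings. Grade names and weights are invented for illustration.

GRADE_WEIGHTS = {
    "surface_colonisation": 1,
    "single_cupping_erosion": 2,
    "multiple_cupping_erosions": 3,
    "confluent_cupping_erosions": 4,
    "dermal_invasion": 5,
}

def cumulative_score(findings):
    """Sum severity weights over the grades observed in one individual."""
    return sum(GRADE_WEIGHTS[f] for f in findings)

bat_a = cumulative_score(["surface_colonisation", "single_cupping_erosion"])
bat_b = cumulative_score(["multiple_cupping_erosions", "dermal_invasion"])
```

A higher score then corresponds to more extensive WNS-associated tissue damage, which is the quantity compared between Nearctic and Palearctic bats.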

  9. Damage prognosis of adhesively-bonded joints in laminated composite structural components of unmanned aerial vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrar, Charles R; Gobbato, Maurizio; Conte, Joel

    2009-01-01

    The extensive use of lightweight advanced composite materials in unmanned aerial vehicles (UAVs) drastically increases the sensitivity to both fatigue- and impact-induced damage of their critical structural components (e.g., wings and tail stabilizers) during service life. The spar-to-skin adhesive joints are considered one of the most fatigue sensitive subcomponents of a lightweight UAV composite wing with damage progressively evolving from the wing root. This paper presents a comprehensive probabilistic methodology for predicting the remaining service life of adhesively-bonded joints in laminated composite structural components of UAVs. Non-destructive evaluation techniques and Bayesian inference are used to (i) assess the current state of damage of the system and (ii) update the probability distribution of the damage extent at various locations. A probabilistic model for future loads and a mechanics-based damage model are then used to stochastically propagate damage through the joint. Combined local (e.g., exceedance of a critical damage size) and global (e.g., flutter instability) failure criteria are finally used to compute the probability of component failure at future times. The applicability and the partial validation of the proposed methodology are then briefly discussed by analyzing the debonding propagation, along a pre-defined adhesive interface, in a simply supported laminated composite beam with solid rectangular cross section, subjected to a concentrated load applied at mid-span. A specially developed Euler-Bernoulli beam finite element with interlaminar slip along the damageable interface is used in combination with a cohesive zone model to study the fatigue-induced degradation in the adhesive material. The preliminary numerical results presented are promising for the future validation of the methodology.
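
The two probabilistic steps named above can be sketched numerically: a Bayesian update of the damage-extent distribution from a noisy inspection, followed by damage propagation and a probability of exceeding a critical size. All numbers and the simple growth law below are placeholders, not the paper's cohesive-zone model:

```python
import numpy as np

# Sketch, under invented numbers: (i) Bayesian update of the
# damage-extent distribution from a noisy NDE measurement, then
# (ii) propagation of damage and P(failure) = P(size > critical size).
# The constant growth-per-block law is a placeholder, not the paper's
# mechanics-based damage model.

a_grid = np.linspace(0.0, 50.0, 501)          # debond length [mm]
prior = np.exp(-0.5 * ((a_grid - 10.0) / 5.0) ** 2)
prior /= prior.sum()

# (i) NDE measures 14 mm with 2 mm Gaussian noise; Bayes' rule on the grid
a_meas, sigma = 14.0, 2.0
like = np.exp(-0.5 * ((a_meas - a_grid) / sigma) ** 2)
post = prior * like
post /= post.sum()
post_mean = (a_grid * post).sum()

# (ii) placeholder growth over n future load blocks, then failure criterion
growth_per_block, n_blocks, a_crit = 0.8, 20, 30.0
a_future = a_grid + growth_per_block * n_blocks
p_fail = post[a_future > a_crit].sum()
```

The posterior mean lands between the prior guess (10 mm) and the measurement (14 mm), weighted by their uncertainties; `p_fail` is then the posterior mass carried past the critical size by the assumed growth.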

  10. Geologic Assessment of Undiscovered Oil and Gas Resources of the North Cuba Basin, Cuba

    USGS Publications Warehouse

    Schenk, Christopher J.

    2010-01-01

    Petroleum generation in the North Cuba Basin is primarily the result of thrust loading of Jurassic and Cretaceous source rocks during formation of the North Cuba fold and thrust belt in the Late Cretaceous to Paleogene. The fold and thrust belt formed as Cuban arc-forearc rocks along the leading edge of the Caribbean plate translated northward during the opening of the Yucatan Basin and collided with the passive margin of southern North America in the Paleogene. Petroleum fluids generated during thrust loading migrated vertically into complex structures in the fold and thrust belt, into structures in the foreland basin, and possibly into carbonate reservoirs along the margins of the Yucatan and Bahama carbonate platforms. The U.S. Geological Survey (USGS) defined a Jurassic-Cretaceous Composite Total Petroleum System (TPS) and three assessment units (AU)-North Cuba Fold and Thrust Belt AU, North Cuba Foreland Basin AU, and the North Cuba Platform Margin Carbonate AU-within this TPS based mainly on structure and reservoir type (fig. 1). There is considerable geologic uncertainty as to the extent of petroleum migration that might have occurred within this TPS to form potential petroleum accumulations. Taking this geologic uncertainty into account, especially in the offshore area, the mean volumes of undiscovered resources in the composite TPS of the North Cuba Basin are estimated at (1) 4.6 billion barrels of oil (BBO), with means ranging from an F95 probability of 1 BBO to an F5 probability of 9 BBO; and (2) 9.8 trillion cubic feet of gas (TCFG), of which 8.6 TCFG is associated with oil fields, and about 1.2 TCFG is in nonassociated gas fields in the North Cuba Foreland Basin AU.

  11. White-nose syndrome pathology grading in Nearctic and Palearctic bats.

    PubMed

    Pikula, Jiri; Amelon, Sybill K; Bandouchova, Hana; Bartonička, Tomáš; Berkova, Hana; Brichta, Jiri; Hooper, Sarah; Kokurewicz, Tomasz; Kolarik, Miroslav; Köllner, Bernd; Kovacova, Veronika; Linhart, Petr; Piacek, Vladimir; Turner, Gregory G; Zukal, Jan; Martínková, Natália

    2017-01-01

    While white-nose syndrome (WNS) has decimated hibernating bat populations in the Nearctic, species from the Palearctic appear to cope better with the fungal skin infection causing WNS. This has encouraged multiple hypotheses on the mechanisms leading to differential survival of species exposed to the same pathogen. To facilitate intercontinental comparisons, we proposed a novel pathogenesis-based grading scheme consistent with WNS diagnosis histopathology criteria. UV light-guided collection was used to obtain single biopsies from Nearctic and Palearctic bat wing membranes non-lethally. The proposed scheme scores eleven grades associated with WNS on histopathology. Given weights reflective of grade severity, the sum of findings from an individual results in weighted cumulative WNS pathology score. The probability of finding fungal skin colonisation and single, multiple or confluent cupping erosions increased with increase in Pseudogymnoascus destructans load. Increasing fungal load mimicked progression of skin infection from epidermal surface colonisation to deep dermal invasion. Similarly, the number of UV-fluorescent lesions increased with increasing weighted cumulative WNS pathology score, demonstrating congruence between WNS-associated tissue damage and extent of UV fluorescence. In a case report, we demonstrated that UV-fluorescence disappears within two weeks of euthermy. Change in fluorescence was coupled with a reduction in weighted cumulative WNS pathology score, whereby both methods lost diagnostic utility. While weighted cumulative WNS pathology scores were greater in the Nearctic than Palearctic, values for Nearctic bats were within the range of those for Palearctic species. Accumulation of wing damage probably influences mortality in affected bats, as demonstrated by a fatal case of Myotis daubentonii with natural WNS infection and healing in Myotis myotis. 
The proposed semi-quantitative pathology score provided good agreement between experienced raters, showing it to be a powerful and widely applicable tool for defining WNS severity.

  12. Design prediction for long term stress rupture service of composite pressure vessels

    NASA Technical Reports Server (NTRS)

    Robinson, Ernest Y.

    1992-01-01

    Extensive stress rupture studies on glass composites and Kevlar composites were conducted by the Lawrence Radiation Laboratory beginning in the late 1960s and extending to about 8 years in some cases. Some of the data from these studies published over the years were incomplete or were tainted by spurious failures, such as grip slippage. Updated data sets were defined for both fiberglass and Kevlar composite strand test specimens. These updated data are analyzed in this report by a convenient form of the bivariate Weibull distribution, to establish a consistent set of design prediction charts that may be used as a conservative basis for predicting the stress rupture life of composite pressure vessels. The updated glass composite data exhibit an invariant Weibull modulus with lifetime. The data are analyzed in terms of homologous service load (referenced to the observed median strength). The equations relating life, homologous load, and probability are given, and corresponding design prediction charts are presented. A similar approach is taken for Kevlar composites, where the updated strand data do show a turndown tendency at long life accompanied by a corresponding change (increase) of the Weibull modulus. The turndown characteristic is not present in stress rupture test data of Kevlar pressure vessels. A modification of the stress rupture equations is presented to incorporate a latent, but limited, strength drop, and design prediction charts are presented that incorporate such behavior. The methods presented utilize Cartesian plots of the probability distributions (which are a more natural display for the design engineer), based on median normalized data that are independent of statistical parameters and are readily defined for any set of test data.
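
A model of the kind described, relating life, homologous load, and failure probability, can be sketched as a Weibull-distributed lifetime whose characteristic life falls as a power of the homologous load. The modulus, exponent, and reference life below are illustrative stand-ins, not the report's fitted constants:

```python
import math

# Minimal stress-rupture sketch: lifetime is Weibull-distributed, with a
# characteristic life that decreases as a power of the homologous load
# (applied load / observed median strength). Parameter values are
# illustrative, not the report's fitted constants.

def failure_probability(t_hours, homologous_load,
                        m=1.5,        # Weibull modulus for lifetime
                        n=20.0,       # load-life exponent
                        t_ref=1e6):   # characteristic life at 50% load
    """P(failure by time t) at a constant homologous load."""
    t_char = t_ref * (homologous_load / 0.5) ** (-n)
    return 1.0 - math.exp(-((t_hours / t_char) ** m))

p_low = failure_probability(1e4, 0.5)   # modest load: long expected life
p_high = failure_probability(1e4, 0.7)  # higher load: life collapses
```

The steep load-life exponent is what makes the design charts so sensitive to homologous load: a shift from 50% to 70% of median strength moves the same service time from a remote failure probability to near-certain rupture.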

  13. Gain drift compensation with no-feedback-loop developed for the X-IFU/ATHENA readout chain

    NASA Astrophysics Data System (ADS)

    Prêle, D.; Voisin, F.; Beillimaz, C.; Chen, S.; Goldwurm, A.

    2016-07-01

    The focal plane of the X-ray Integral Field Unit (X-IFU) instrument of the Athena observatory is composed of about 4000 micro-calorimeters. These sensors, based on superconducting Transition Edge Sensors, are read out through a frequency multiplexer and a base-band feedback to linearize SQUIDs. However, the loop gain of this feedback is lower than 10 in the modulated TES signal bandwidth, which is not enough to fix the gain of the full readout chain. Calibration of the instrument is planned to be done at a time scale larger than a dozen minutes, and the challenging energy resolution goal of 2.5 eV at 6 keV will probably require a gain stability better than 10⁻⁴ over a long duration. A large part of this gain is provided by a Low-Noise Amplifier (LNA) in the Warm Front-End Electronics (WFEE). To reach such gain stability over more than a dozen minutes, this non-cooled amplifier has to cope with temperature and supply voltage variations. Moreover, mainly for noise reasons, a common large loop gain with feedback cannot be used. We propose a new amplifier topology using diodes as loads of a differential amplifier to provide a fixed voltage gain, independent of the temperature and of the bias fluctuations. This amplifier is designed using a 350 nm SiGe BiCMOS technology and is part of an integrated circuit developed for the WFEE. Our simulations provide the expected gain drift and noise performance of such a structure. Comparison with a standard resistively loaded differential pair clearly shows the advantages of the proposed amplifier topology, with the gain drift decreasing by more than an order of magnitude. Performances of this diode-loaded amplifier are discussed in the context of the X-IFU requirements.
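
A back-of-envelope check of why a diode-loaded stage can have a gain that is first-order independent of temperature and bias: the input transconductance gm = I/VT and the diode incremental resistance r_d = n·VT/I both contain the thermal voltage VT and the branch current I, which cancel in the product gm·r_d. The diode count and ideality factor below are illustrative; the actual WFEE circuit is not reproduced here:

```python
# Illustration (not the actual WFEE design): for a bipolar input device,
# gm = I/VT; for a stack of n_diodes junction diodes carrying the same
# branch current, r_load = n_diodes * n_ideality * VT / I. The product
# gm * r_load = n_diodes * n_ideality, with VT (hence temperature) and
# the bias current cancelling out.

K_B, Q = 1.380649e-23, 1.602176634e-19  # Boltzmann constant, electron charge

def diode_load_gain(temp_kelvin, branch_current, n_ideality=1.0, n_diodes=4):
    vt = K_B * temp_kelvin / Q                      # thermal voltage
    gm = branch_current / vt                        # input transconductance
    r_load = n_diodes * n_ideality * vt / branch_current
    return gm * r_load                              # = n_diodes * n_ideality

g_cold = diode_load_gain(250.0, 1e-3)  # cold, nominal bias
g_hot = diode_load_gain(350.0, 2e-3)   # hot, doubled bias current
```

Both conditions give the same first-order gain, which is the cancellation the abstract invokes; a resistive load, by contrast, leaves gm·R proportional to I/VT and therefore drifting with both.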

  14. The Specific Features of design and process engineering in branch of industrial enterprise

    NASA Astrophysics Data System (ADS)

    Sosedko, V. V.; Yanishevskaya, A. G.

    2017-06-01

    Production at an industrial enterprise is organized through well-established working mechanisms at each stage of the product life cycle, from initial design documentation through manufacture to disposal. The topic of this article is a mathematical model of the design and process engineering system in a branch of an industrial enterprise, the statistical processing of the results of implementing the developed model in that branch, and a demonstration of the advantages of applying it at this enterprise. To build the model, the flows of information, orders, parts, and modules among the branch's groups of divisions were classified. From an analysis of the divisions' activity and of the flows of data, parts, and documents, a state graph of design and process engineering was constructed, its transitions were described, and coefficients were assigned. For each state of the constructed graph, the corresponding limiting state probabilities were defined and Kolmogorov's equations were written. Integrating the system of Kolmogorov equations yields the probability that the specified divisions and production are active as a function of time at each instant. On the basis of the developed mathematical model of a unified system of design, process engineering, and manufacture, and of the state graph, the authors statistically processed the results of applying the model and demonstrated its advantages at this enterprise. The probability of loading the branch's services and third-party contractors (orders received from the branch within a month) was also studied. The developed mathematical model can be applied to determine the probability that divisions and manufacture are active as a function of time at each instant, which makes it possible to account for the distribution of workload across branches of the enterprise.
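
The limiting-state-probability computation described above is standard for a continuous-time Markov chain: the stationary probabilities solve π·Q = 0 with Σπ = 1, the steady state of Kolmogorov's forward equations. A minimal sketch with an invented 3-state generator (e.g. idle / design / process engineering; the paper's actual graph and coefficients are not reproduced):

```python
import numpy as np

# Stationary distribution of a continuous-time Markov chain: solve
# pi @ Q = 0 subject to sum(pi) = 1. The 3-state generator matrix below
# (rows sum to zero) is invented for illustration.

Q = np.array([
    [-0.5,  0.3,  0.2],
    [ 0.4, -0.6,  0.2],
    [ 0.1,  0.5, -0.6],
])

# Q.T @ pi = 0 has a one-dimensional null space; replace one balance
# equation with the normalisation constraint to get a unique solution.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
```

Each component of `pi` is the long-run fraction of time the system spends in that state, which is how the abstract's "loading" of divisions would be read off.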

  15. Role of epistasis on the fixation probability of a non-mutator in an adapted asexual population.

    PubMed

    James, Ananthu

    2016-10-21

    The mutation rate of a well-adapted population is prone to reduction, since this lowers the mutational load. We aim to understand the role of epistatic interactions between fitness-affecting mutations in this process. Using a multitype branching process, the fixation probability of a single non-mutator emerging in a large asexual mutator population is analytically calculated here. The mutator population undergoes deleterious mutations at a constant rate that is much higher than that of the non-mutator. We find that antagonistic epistasis lowers the chances of mutation rate reduction, while synergistic epistasis enhances it. Below a critical value of epistasis, the fixation probability behaves non-monotonically with variation in the mutation rate of the background population. Moreover, the variation of this critical value of the epistasis parameter with the strength of the mutator is discussed in the appendix. For synergistic epistasis, when selection is varied, the fixation probability decreases overall, with damped oscillations. Copyright © 2016 Elsevier Ltd. All rights reserved.
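
Stripped of the paper's multitype structure and epistasis parameter, the core branching-process calculation can be sketched for a single type: with Poisson-distributed offspring of mean w > 1 (the non-mutator's effective fitness advantage from its lower mutational load), the extinction probability q is the smallest root of q = exp(w(q−1)) and the fixation probability is 1 − q. Numbers are illustrative:

```python
import math

# Fixation probability of a rare lineage via a single-type branching
# process with Poisson(w) offspring: the extinction probability q is the
# smallest fixed point of q = exp(w*(q-1)); fixation probability = 1 - q.
# Iterating from q = 0 converges monotonically to that smallest root.
# This is a simplified sketch, not the paper's multitype calculation.

def fixation_probability(w, iters=5000):
    q = 0.0
    for _ in range(iters):
        q = math.exp(w * (q - 1.0))
    return 1.0 - q

# e.g. non-mutators whose lower deleterious mutation rate gives them a
# ~2% (resp. ~10%) effective advantage over the mutator background
pi_small = fixation_probability(1.02)
pi_large = fixation_probability(1.10)
```

For a small advantage s = w − 1, this recovers Haldane's classic ≈ 2s fixation probability, which is the baseline the epistasis-dependent corrections in the paper modify.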

  16. Micro-macro correlations and anisotropy in granular assemblies under uniaxial loading and unloading.

    PubMed

    Imole, Olukayode I; Wojtkowski, Mateusz; Magnanimo, Vanessa; Luding, Stefan

    2014-04-01

    The influence of contact friction on the behavior of dense, polydisperse granular assemblies under uniaxial (oedometric) loading and unloading deformation is studied using discrete element simulations. Even though the uniaxial deformation protocol is one of the "simplest" element tests possible, the evolution of the structural anisotropy necessitates its careful analysis and understanding, since it is the source of interesting and unexpected observations. On the macroscopic, homogenized, continuum scale, the deviatoric stress ratio and the deviatoric fabric, i.e., the microstructure, behave in different fashions during uniaxial loading and unloading. The maximal stress ratio and strain increase with increasing contact friction. In contrast, the deviatoric fabric reaches its maximum at a unique strain level independent of friction, with the maximal value decreasing with friction. For unloading, both stress and fabric respond to unloading strain with a friction-dependent delay but at different strains. On the micro-level, a friction-dependent non-symmetry of the proportion of weak (strong) and sliding (sticking) contacts with respect to the total contacts during loading and unloading is observed. Coupled to this, from the directional probability distribution, the "memory" and history-dependent behavior of granular systems is confirmed. Surprisingly, while a rank-2 tensor is sufficient to describe the evolution of the normal force directions, a sixth order harmonic approximation is necessary to describe the probability distribution of contacts, tangential force, and mobilized friction. We conclude that the simple uniaxial deformation activates microscopic phenomena not only in the active Cartesian directions, but also at intermediate orientations, with the tilt angle being dependent on friction, so that these microstructural features cause the interesting, nontrivial macroscopic behavior.

  17. 29 CFR 782.5 - Loaders.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... come within the exemption as a partial-duty loader. (Levinson v. Spector Motor Service, 330 U.S. 649... limited handling of them, that his activities will not come within the kind of “loading” which directly... it is light in weight, probably could not be loaded in a manner which would adversely affect “safety...

  18. 29 CFR 782.5 - Loaders.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... come within the exemption as a partial-duty loader. (Levinson v. Spector Motor Service, 330 U.S. 649... limited handling of them, that his activities will not come within the kind of “loading” which directly... it is light in weight, probably could not be loaded in a manner which would adversely affect “safety...

  19. Physical consequences of large organic debris in Pacific Northwest streams.

    Treesearch

    Frederick J. Swanson; George W. Lienkaemper

    1978-01-01

    Large organic debris in streams controls the distribution of aquatic habitats, the routing of sediment through stream systems, and the stability of streambed and banks. Management activities directly alter debris loading by addition or removal of material and indirectly by increasing the probability of debris torrents and removing standing streamside trees. We propose...

  20. Development of the rules governing the strength of airplanes. Part III : loading conditions in France, Italy, Holland, and Russia - aims at standardization

    NASA Technical Reports Server (NTRS)

    Kussner, H G; Thalau, Karl

    1933-01-01

    The historical development of the rules for structural strength of aircraft in the leading countries is traced from the beginning of flight to date. The term "factor of safety" is critically analyzed; its replacement by probability considerations has been considered desirable.

  1. 14 CFR 25.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... certification or under operating rules and that requires a power supply is an “essential load” on the power supply. The power sources and the system must be able to supply the following power loads in probable... source of power is required, after any failure or malfunction in any one power supply system...

  2. 14 CFR 25.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... certification or under operating rules and that requires a power supply is an “essential load” on the power supply. The power sources and the system must be able to supply the following power loads in probable... source of power is required, after any failure or malfunction in any one power supply system...

  3. Design of high temperature ceramic components against fast fracture and time-dependent failure using cares/life

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jadaan, O.M.; Powers, L.M.; Nemeth, N.N.

    1995-08-01

    A probabilistic design methodology which predicts the fast fracture and time-dependent failure behavior of thermomechanically loaded ceramic components is discussed using the CARES/LIFE integrated design computer program. Slow crack growth (SCG) is assumed to be the mechanism responsible for delayed failure behavior. Inert strength and dynamic fatigue data obtained from testing coupon specimens (O-ring and C-ring specimens) are initially used to calculate the fast fracture and SCG material parameters as a function of temperature using the parameter estimation techniques available with the CARES/LIFE code. Finite element analysis (FEA) is used to compute the stress distributions for the tube as a function of applied pressure. Knowing the stress and temperature distributions and the fast fracture and SCG material parameters, the lifetime for a given tube can be computed. A stress-failure probability-time to failure (SPT) diagram is subsequently constructed for these tubes. Such a diagram can be used by design engineers to estimate the time to failure at a given failure probability level for a component subjected to a given thermomechanical load.

  4. Fastener load tests and retention systems tests for cryogenic wind-tunnel models

    NASA Technical Reports Server (NTRS)

    Wallace, J. W.

    1984-01-01

    A-286 stainless steel screws were tested to determine the tensile load capability and failure mode of various screw sizes and types at both cryogenic and room temperature. Additionally, five fastener retention systems were tested by using A-286 screws with specimens made from the primary metallic alloys that are currently used for cryogenic models. The locking system effectiveness was examined by simple no-load cycling to cryogenic temperatures (-275 F) as well as by dynamic and static loading at cryogenic temperatures. In general, most systems were found to be effective retention devices. There are some differences between the various devices with respect to ease of application, cleanup, and reuse. Results of tests at -275 F imply that the cold temperatures act to improve screw retention. The improved retention is probably the result of differential thermal contraction and/or increased friction (thread-binding effects). The data provided are useful in selecting screw sizes, types, and locking devices for model systems to be tested in cryogenic wind tunnels.

  5. Failure Maps for Rectangular 17-4PH Stainless Steel Sandwiched Foam Panels

    NASA Technical Reports Server (NTRS)

    Raj, S. V.; Ghosn, L. J.

    2007-01-01

    A new and innovative concept is proposed for designing lightweight fan blades for aircraft engines using commercially available 17-4PH precipitation hardened stainless steel. Rotating fan blades in aircraft engines experience a complex loading state consisting of combinations of centrifugal, distributed pressure and torsional loads. Theoretical plastic collapse failure maps, showing plots of the foam relative density versus face sheet thickness, t, normalized by the fan blade span length, L, have been generated for rectangular 17-4PH sandwiched foam panels under these three loading modes, considering three plastic collapse failure modes. These maps show that the 17-4PH sandwiched foam panels can fail by either yielding of the face sheets, yielding of the foam core, or wrinkling of the face sheets, depending on the foam relative density, the magnitude of t/L, and the loading mode. The design envelope of a generic fan blade is superimposed on the maps to provide valuable insights on the probable failure modes in a sandwiched foam fan blade.

  6. Role of Grooming in Reducing Tick Load in Wild Baboons (Papio cynocephalus)

    PubMed Central

    Akinyi, Mercy Y.; Tung, Jenny; Jeneby, Maamun; Patel, Nilesh B.; Altmann, Jeanne; Alberts, Susan C.

    2013-01-01

    Nonhuman primate species spend a conspicuous amount of time grooming during social interactions, a behavior that probably serves both social and health-related functions. While the social implications of grooming have been relatively well studied, less attention has been paid to the health benefits, especially the removal of ectoparasites, which may act as vectors in disease transmission. In this study, we examined the relationship between grooming behavior, tick load (number of ticks), and haemoprotozoan infection status in a population of wild free-ranging baboons (Papio cynocephalus). We found that the amount of grooming received was influenced by an individual’s age, sex and dominance rank. The amount of grooming received, in turn, affected the tick load of an individual. Baboons with higher tick loads had lower packed red cell volume (PCV or haematocrit), one general measure of health status. We detected a tick-borne haemoprotozoan, Babesia microti, but its low prevalence in the population precluded identifying sources of variance in infection. PMID:24659824

  7. Probability based models for estimation of wildfire risk

    Treesearch

    Haiganoush Preisler; D. R. Brillinger; R. E. Burgan; John Benoit

    2004-01-01

    We present a probability-based model for estimating fire risk. Risk is defined using three probabilities: the probability of fire occurrence; the conditional probability of a large fire given ignition; and the unconditional probability of a large fire. The model is based on grouped data at the 1 km²-day cell level. We fit a spatially and temporally explicit non-...
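
The three-probability decomposition described above can be illustrated numerically for a single hypothetical 1 km²-day cell: the unconditional probability of a large fire is the product of the occurrence probability and the conditional probability of a large fire given ignition. The numbers below are invented:

```python
# Worked example of the risk decomposition for one hypothetical
# 1 km^2-day cell; the probabilities are illustrative, not fitted values.

p_ignition = 0.004             # P(fire occurrence) in the cell-day
p_large_given_ignition = 0.05  # P(large fire | ignition)
p_large = p_ignition * p_large_given_ignition  # unconditional P(large fire)
```

Fitting these component probabilities separately from gridded historical data, rather than modelling large fires directly, is what lets the model remain spatially and temporally explicit.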

  8. The contractile adaption to preload depends on the amount of afterload

    PubMed Central

    Schotola, Hanna; Sossalla, Samuel T.; Renner, André; Gummert, Jan; Danner, Bernhard C.; Schott, Peter

    2017-01-01

    Abstract Aims The Frank–Starling mechanism (rapid response (RR)) and the secondary slow response (SR) are known to contribute to increased contractile performance. The contractility of the heart muscle is influenced by pre‐load and after‐load. Because the effect of pre‐load vs. after‐load on these mechanisms is not completely understood, we studied the effect in isolated muscle strips. Methods and results Progressive stretch led to an increase in shortening/force development under isotonic (only pre‐load) and isometric conditions (pre‐ and after‐load). Muscle length with maximal function was reached earlier under isotonic (L max‐isotonic) compared with isometric conditions (L max‐isometric) in nonfailing rabbit, in human atrial and in failing ventricular muscles. Also, SR after stretch from slack to L max‐isotonic was comparable under isotonic and isometric conditions (human: isotonic 10 ± 4%, isometric 10 ± 4%). Moreover, a switch from isotonic to isometric conditions at L max‐isometric showed no SR, proving independence of after‐load. To further analyse the degree of SR in the total contractile performance at higher pre‐load, muscles were stretched from slack to 98% L max‐isometric under isotonic conditions. Thereby, the SR was 60 ± 9% in rabbit and 51 ± 14% in human muscle strips. Conclusions This work shows that the acute contractile response largely depends on the degree and type of mechanical load. Increased filling of the heart elevates pre‐load and prolongs the isotonic part of contraction. The reduction in shortening at higher levels of pre‐load is thereby partially compensated by the pre‐load‐induced SR. After‐load shifts the contractile curve to a better ‘myofilament function’ by probably influencing thin fibers and calcium sensitivity, but has no effect on the SR. PMID:29154423

  9. Sensor Fusion Based on an Integrated Neural Network and Probability Density Function (PDF) Dual Kalman Filter for On-Line Estimation of Vehicle Parameters and States.

    PubMed

    Vargas-Melendez, Leandro; Boada, Beatriz L; Boada, Maria Jesus L; Gauchia, Antonio; Diaz, Vicente

    2017-04-29

    Vehicles with a high center of gravity (COG), such as light trucks and heavy vehicles, are prone to rollover. This kind of accident causes nearly 33% of all deaths from passenger vehicle crashes. Nowadays, these vehicles are incorporating roll stability control (RSC) systems to improve their safety. Most of the RSC systems require the vehicle roll angle as a known input variable to predict the lateral load transfer. The vehicle roll angle can be directly measured by a dual antenna global positioning system (GPS), but it is expensive. For this reason, it is important to estimate the vehicle roll angle from sensors installed onboard in current vehicles. On the other hand, knowledge of the vehicle's parameter values is essential to obtain an accurate vehicle response. Some vehicle parameters cannot be easily obtained and they can vary over time. In this paper, an algorithm for the simultaneous on-line estimation of the vehicle's roll angle and parameters is proposed. This algorithm uses a probability density function (PDF)-based truncation method in combination with a dual Kalman filter (DKF), to guarantee that both the vehicle's states and parameters are within bounds that have a physical meaning, using the information obtained from sensors mounted on vehicles. Experimental results show the effectiveness of the proposed algorithm.
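
The core idea, a Kalman measurement update followed by a PDF-truncation step that constrains the estimate to a physically meaningful interval, can be sketched with a scalar toy (e.g. a roll angle bounded to ±30°). This is not the paper's dual state-and-parameter filter, and all numbers are illustrative:

```python
import math

# Scalar sketch: one Kalman measurement update, then truncation of the
# Gaussian posterior N(x, P) to a physical interval [lo, hi], replacing
# (x, P) by the mean and variance of the truncated normal.
# Toy numbers throughout; not the paper's dual Kalman filter.

def phi(z):   # standard normal pdf
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def Phi(z):   # standard normal cdf
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def kalman_update(x, P, meas, R):
    K = P / (P + R)                       # Kalman gain
    return x + K * (meas - x), (1 - K) * P

def truncate(x, P, lo, hi):
    """Mean and variance of N(x, P) truncated to [lo, hi]."""
    s = math.sqrt(P)
    a, b = (lo - x) / s, (hi - x) / s
    Z = Phi(b) - Phi(a)
    mean = x + s * (phi(a) - phi(b)) / Z
    var = P * (1 + (a * phi(a) - b * phi(b)) / Z
               - ((phi(a) - phi(b)) / Z) ** 2)
    return mean, var

x, P = 0.0, 100.0                               # vague prior [deg, deg^2]
x, P = kalman_update(x, P, meas=45.0, R=25.0)   # noisy, outlier-ish reading
x_t, P_t = truncate(x, P, -30.0, 30.0)          # enforce physical bounds
```

The truncation pulls the estimate back inside the physically meaningful range and shrinks its variance, which is what keeps downstream roll-stability logic from acting on impossible states.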

  10. Sensor Fusion Based on an Integrated Neural Network and Probability Density Function (PDF) Dual Kalman Filter for On-Line Estimation of Vehicle Parameters and States

    PubMed Central

    Vargas-Melendez, Leandro; Boada, Beatriz L.; Boada, Maria Jesus L.; Gauchia, Antonio; Diaz, Vicente

    2017-01-01

    Vehicles with a high center of gravity (COG), such as light trucks and heavy vehicles, are prone to rollover. This kind of accident causes nearly 33% of all deaths from passenger vehicle crashes. Nowadays, these vehicles are incorporating roll stability control (RSC) systems to improve their safety. Most of the RSC systems require the vehicle roll angle as a known input variable to predict the lateral load transfer. The vehicle roll angle can be directly measured by a dual antenna global positioning system (GPS), but it is expensive. For this reason, it is important to estimate the vehicle roll angle from sensors installed onboard in current vehicles. On the other hand, knowledge of the vehicle’s parameter values is essential to obtain an accurate vehicle response. Some vehicle parameters cannot be easily obtained and they can vary over time. In this paper, an algorithm for the simultaneous on-line estimation of the vehicle’s roll angle and parameters is proposed. This algorithm uses a probability density function (PDF)-based truncation method in combination with a dual Kalman filter (DKF), to guarantee that both the vehicle’s states and parameters are within bounds that have a physical meaning, using the information obtained from sensors mounted on vehicles. Experimental results show the effectiveness of the proposed algorithm. PMID:28468252

  11. Trends in Marine Debris along the U.S. Pacific Coast and Hawai’i 1998-2007

    USGS Publications Warehouse

    Ribic, Christine; Seba B. Sheavly,; Rugg, David J.; Erdmann, Eric S.

    2012-01-01

    We assessed amounts, composition, and trends of marine debris for the U.S. Pacific Coast and Hawai’i using National Marine Debris Monitoring Program data. Hawai’i had the highest debris loads; the North Pacific Coast region had the lowest debris loads. The Southern California Bight region had the highest land-based debris loads. Debris loads decreased over time for all source categories in all regions except for land-based and general-source loads in the North Pacific Coast region, which were unchanged. General-source debris comprised 30–40% of the items in all regions. Larger local populations were associated with higher land-based debris loads across regions; the effect declined at higher population levels. Upwelling affected deposition of ocean-based and general-source debris loads but not land-based loads along the Pacific Coast. ENSO decreased debris loads for both land-based and ocean-based debris but not general-source debris in Hawai’i, a more complex climate-ocean effect than had previously been found.

  12. A mechanistic understanding of the wear coefficient: From single to multiple asperities contact

    NASA Astrophysics Data System (ADS)

    Frérot, Lucas; Aghababaei, Ramin; Molinari, Jean-François

    2018-05-01

    Sliding contact between solids leads to material detaching from their surfaces in the form of debris particles, a process known as wear. According to the well-known Archard wear model, the wear volume (i.e. the volume of detached particles) is proportional to the load and the sliding distance, while being inversely proportional to the hardness. The influence of other parameters is empirically merged into a factor, referred to as the wear coefficient, which does not stem from any theoretical development, thus limiting the predictive capacity of the model. Based on a recent understanding of a critical length-scale controlling wear particle formation, we present two novel derivations of the wear coefficient: one based on Archard's interpretation of the wear coefficient as the probability of wear particle detachment and one that follows naturally from the up-scaling of asperity-level physics into a generic multi-asperity wear model. As a result, the variation of wear rate and wear coefficient are discussed in terms of the properties of the interface, surface roughness parameters and applied load for various rough contact situations. Both new wear interpretations are evaluated analytically and numerically, and recover some key features of wear observed in experiments. This work shines new light on the understanding of wear, potentially opening a pathway for calculating the wear coefficient from first principles.
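
The Archard relation the abstract builds on is V = k·F·s/H, with k the dimensionless wear coefficient, F the normal load, s the sliding distance, and H the hardness; in Archard's reading, k is the probability that an asperity contact detaches a particle. A worked instance with round illustrative numbers:

```python
# Archard wear model: wear volume V = k * F * s / H.
# k  : dimensionless wear coefficient (detachment probability)
# F  : normal load [N],  s : sliding distance [m],  H : hardness [Pa]
# The numbers below are round illustrative values.

def archard_wear_volume(k, load_n, sliding_m, hardness_pa):
    return k * load_n * sliding_m / hardness_pa

# e.g. k ~ 1e-4, 100 N load, 1000 m of sliding, H ~ 2 GPa
v = archard_wear_volume(1e-4, 100.0, 1000.0, 2e9)  # volume in m^3
v_mm3 = v * 1e9                                    # -> mm^3
```

Everything not captured by F, s, and H is lumped into k, which is exactly why deriving k from asperity-level physics, as the paper sets out to do, would make the model predictive rather than empirical.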

  13. Strength of inserts in titanium alloy machining

    NASA Astrophysics Data System (ADS)

    Kozlov, V.; Huang, Z.; Zhang, J.

    2016-04-01

    In this paper, the stressed state of a non-worn cutting wedge during machining of a titanium alloy (Ti6Al2Mo2Cr) is analyzed. The distribution of contact loads on the face of the cutting tool was obtained experimentally with the use of a ‘split cutting tool’. Calculation of internal stresses in the indexable insert made from cemented carbide (WC8Co) was carried out with the help of ANSYS 14.0 software. Investigations showed that a small thickness of the cutting insert leads to extremely high compressive stresses near the cutting edge, stresses that exceed the ultimate compressive strength of cemented carbide. The face and the base of the insert experience high tensile stresses, which approach the ultimate tensile strength of cemented carbide and increase the probability of cutting insert destruction. If the thickness of the cutting insert is greater than 5 mm, compressive stresses near the cutting edge decrease, and tensile stresses on the face and base decrease to zero. The dependences of the greatest normal and tangential stresses on the thickness of the cutting insert were found. Abbreviations and symbols: m/s - meter per second (cutting speed v); mm/r - millimeter per revolution (feed rate f); MPa - megapascal (dimension of specific contact loads and stresses); γ - rake angle of the cutting tool [°]; α - clearance angle of the sharp cutting tool [°].

  14. Anomalous diffusion for bed load transport with a physically-based model

    NASA Astrophysics Data System (ADS)

    Fan, N.; Singh, A.; Foufoula-Georgiou, E.; Wu, B.

    2013-12-01

    Diffusion of bed load particles shows both normal and anomalous behavior at different spatial-temporal scales. Understanding and quantifying these different types of diffusion is important not only for the development of theoretical models of particle transport but also for practical purposes, e.g., river management. Here we extend a recently proposed physically-based model of particle transport by Fan et al. [2013] to develop an Episodic Langevin equation (ELE) for individual particle motion, which reproduces the episodic movement (start and stop) of sediment particles. Using the proposed ELE we simulate particle movements for a large number of uniform-size particles, incorporating different probability distribution functions (PDFs) of particle waiting time. For exponential PDFs of waiting times, particles reveal ballistic motion at short time scales and normal diffusion at long time scales. The PDF of simulated particle travel distances also changes shape from exponential to Gamma to Gaussian with increasing timescale, implying different diffusion scaling regimes. For power-law PDFs (with power −μ) of waiting times, the asymptotic behavior of particles at long time scales reveals both super-diffusion and sub-diffusion; however, only very heavy-tailed waiting times (i.e., 1.0 < μ < 1.5) result in sub-diffusion. We suggest that the contrast between our results and previous studies (e.g., those based on fractional advection-diffusion models with thin/heavy-tailed particle hops and waiting times) could be due to the assumption in those studies that hops are achieved instantaneously; in reality, particles achieve their hops within finite times (as we simulate here), even if the hop times are much shorter than the waiting times. 
In summary, this study stresses the need to rethink alternatives to previous models, such as fractional advection-diffusion equations, for studying the anomalous diffusion of bed load particles. The implications of these results for modeling sediment transport are discussed.
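
    A minimal sketch of the kind of simulation described, with invented parameter values: each particle alternates exponentially distributed waiting times with hops of finite duration at constant velocity (hops are not instantaneous), and the spread of particle positions grows with time, which is the diffusion behavior discussed above.

```python
import random

def simulate_positions(n_particles, t_max, v=1.0, mean_wait=5.0,
                       mean_hop=1.0, seed=42):
    """Each particle alternates exponential rest periods and finite-duration
    hops at velocity v. Returns the final position of every particle."""
    rng = random.Random(seed)
    positions = []
    for _ in range(n_particles):
        t, x = 0.0, 0.0
        while t < t_max:
            t += rng.expovariate(1.0 / mean_wait)       # rest (waiting time)
            if t >= t_max:
                break
            hop = min(rng.expovariate(1.0 / mean_hop), t_max - t)
            x += v * hop                                # hop takes finite time
            t += hop
        positions.append(x)
    return positions

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

var_short = variance(simulate_positions(2000, t_max=50.0))
var_long = variance(simulate_positions(2000, t_max=200.0))
# For exponential waiting times the positional spread keeps growing with
# time, consistent with normal diffusion at long time scales.
```

    Swapping `expovariate` for a power-law waiting-time generator would reproduce the anomalous regimes discussed above; this sketch only shows the exponential (normal-diffusion) case.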

  15. A novel frequent probability pattern mining algorithm based on circuit simulation method in uncertain biological networks.

    PubMed

    He, Jieyue; Wang, Chunyan; Qiu, Kunpu; Zhong, Wei

    2014-01-01

    Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, due to inevitable experimental error and noisy data, biological network data represented with a probability model better reflect authenticity and biological significance; it is therefore more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible-world model and has relatively high computational complexity. In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to non-tree-like subgraph mining, where the probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. The probability isomorphism test combines analysis of circuit topology with related physical properties of voltage in order to evaluate the probability isomorphism between probability subgraphs. The circuit-simulation-based probability isomorphism avoids the traditional possible-world model. Finally, based on the probability subgraph isomorphism algorithm, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Experimental results on data sets of Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover frequent probability subgraphs. The discovered subgraphs in our study contain all probability motifs reported in the experiments published in other related papers. 
The algorithm for probability graph isomorphism evaluation based on circuit simulation excludes most subgraphs that are not probability isomorphic and reduces the search space of probability isomorphism subgraphs using the mismatch values in the node voltage set. It is an innovative way to find frequent probability patterns and can be efficiently applied to probability motif discovery problems in further studies.

  16. A novel frequent probability pattern mining algorithm based on circuit simulation method in uncertain biological networks

    PubMed Central

    2014-01-01

    Background Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, due to inevitable experimental error and noisy data, biological network data represented with a probability model better reflect authenticity and biological significance; it is therefore more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible-world model and has relatively high computational complexity. Methods In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to non-tree-like subgraph mining, where the probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. The probability isomorphism test combines analysis of circuit topology with related physical properties of voltage in order to evaluate the probability isomorphism between probability subgraphs. The circuit-simulation-based probability isomorphism avoids the traditional possible-world model. Finally, based on the probability subgraph isomorphism algorithm, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Results Experimental results on data sets of Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover frequent probability subgraphs. The discovered subgraphs in our study contain all probability motifs reported in the experiments published in other related papers. 
Conclusions The algorithm for probability graph isomorphism evaluation based on circuit simulation excludes most subgraphs that are not probability isomorphic and reduces the search space of probability isomorphism subgraphs using the mismatch values in the node voltage set. It is an innovative way to find frequent probability patterns and can be efficiently applied to probability motif discovery problems in further studies. PMID:25350277

  17. Sources of metal loads to the Alamosa River and estimation of seasonal and annual metal loads for the Alamosa River basin, Colorado, 1995-97

    USGS Publications Warehouse

    Ortiz, Roderick F.; Edelmann, Patrick; Ferguson, Sheryl; Stogner, Robert

    2002-01-01

    Metal contamination in the upper Alamosa River Basin has occurred for decades from the Summitville Mine site, from other smaller mines, and from natural, metal-enriched acidic drainage in the basin. In 1995, the need to quantify contamination from various source areas in the basin and to quantify the spatial, seasonal, and annual metal loads in the basin was identified. Data collection occurred from 1995 through 1997 at numerous sites to address data gaps. Metal loads were calculated and the percentages of metal load contributions from tributaries to three risk exposure areas were determined. Additionally, a modified time-interval method was used to estimate seasonal and annual metal loads in the Alamosa River and Wightman Fork. Sources of dissolved and total-recoverable aluminum, copper, iron, and zinc loads were determined for Exposure Areas 3a, 3b, and 3c. Alum Creek is the predominant contributor of aluminum, copper, iron, and zinc loads to Exposure Area 3a. In general, Wightman Fork was the predominant source of metals to Exposure Area 3b, particularly during the snowmelt and summer-flow periods. During the base-flow period, however, aluminum and iron loads from Exposure Area 3a were the dominant source of these metals to Exposure Area 3b. Jasper and Burnt Creeks generally contributed less than 10 percent of the metal loads to Exposure Area 3b. On a few occasions, however, Jasper and Burnt Creeks contributed a substantial percentage of the loads to the Alamosa River. The metal loads calculated for Exposure Area 3c result from upstream sources; the primary upstream sources are Wightman Fork, Alum Creek, and Iron Creek. Tributaries in Exposure Area 3c did not contribute substantially to the metal load in the Alamosa River. 
In many instances, the percentage of dissolved and/or total-recoverable metal load contribution from a tributary or the combined percentage of metal load contribution was greater than 100 percent of the metal load at the nearest downstream site on the Alamosa River. These data indicate that metal partitioning and metal deposition from the water column to the streambed may be occurring in Exposure Areas 3a, 3b, and 3c. Metals that are deposited to the streambed probably are resuspended and transported downstream during high streamflow periods such as during snowmelt runoff and rainfall runoff. Seasonal and annual dissolved and total-recoverable aluminum, copper, iron, and zinc loads for 1995–97 were estimated for Exposure Areas 1, 2, 3a, 3b, and 3c. During 1995–97, many tons of metals were transported annually through each exposure area. Generally, the largest estimated annual total-recoverable metal mass for most metals was in 1995. The smallest estimated annual total-recoverable metal mass was in 1996, which also had the smallest annual streamflow. In 1995 and 1997, more than 60 percent of the annual total-recoverable metal loads generally was transported through each exposure area during the snowmelt period. A comparison of the estimated storm load at each site to the corresponding annual load indicated that storms contribute less than 2 percent of the annual load at any site and about 5 to 20 percent of the load during the summer-flow period.
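
    Instantaneous constituent loads of the kind tabulated in such studies are computed as concentration × streamflow × a unit-conversion constant. A minimal sketch (the conversion constants are standard; the concentration and flow values are invented for illustration):

```python
# Standard unit conversions for load = concentration * discharge.
L_PER_CFS = 28.316846592    # litres per second in one cubic foot per second
SECONDS_PER_DAY = 86400.0
MG_PER_SHORT_TON = 907.18474e6

def load_tons_per_day(conc_mg_per_L, flow_cfs):
    """Instantaneous constituent load in short tons per day."""
    mg_per_day = conc_mg_per_L * flow_cfs * L_PER_CFS * SECONDS_PER_DAY
    return mg_per_day / MG_PER_SHORT_TON

# Illustrative only: 5 mg/L dissolved iron at 120 ft^3/s of streamflow.
fe_load = load_tons_per_day(5.0, 120.0)
```

    The product of the three constants reduces to the familiar factor of about 0.0027 (tons/day per mg/L per ft³/s) used in USGS load computations.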

  18. CD4 responses in the setting of suboptimal virological responses to antiretroviral therapy: features, outcomes, and associated factors.

    PubMed

    Collazos, Julio; Asensi, Víctor; Cartón, José Antonio

    2009-07-01

    The factors associated with discordant viroimmunological responses following antiretroviral therapy are unclear. We studied 1380 patients who initiated a protease inhibitor (PI)-based antiretroviral regimen and who fulfilled the criteria for inclusion. Of them, 255 (18.5%) had CD4 increases ≥100 cells/μL after 1 year of therapy despite detectable viral load (immunological responders); they were compared with 669 patients (48.5%) who had CD4 increases <100 cells/μL regardless of their final viral load (immunological nonresponders). Immunological responders had higher rates of sexual acquisition of HIV (p = 0.03), lower rates of clinical progression (p = 0.02), higher probabilities of being naive to antiretroviral therapy (p = 0.006) or to PI if antiretroviral experienced (p = 0.03), higher rates of receiving only nucleoside reverse transcriptase inhibitors in addition to the PI (p = 0.04), and lower baseline CD4 counts (p = 0.007) and higher viral loads (p = 0.009), as compared with nonresponders. Multivariate analysis revealed that sexual transmission of HIV (homosexual p = 0.004, heterosexual p = 0.03), no prior PI experience (p = 0.005), absence of clinical progression (p = 0.02), and lower baseline CD4 counts (p = 0.03) were independently associated with immunological response. However, these factors differed according to the patients' prior antiretroviral status, as higher baseline viral load was also associated with immunological response in antiretroviral-experienced patients (p = 0.02), whereas baseline CD4 count (p = 0.007) was the only predictive parameter in antiretroviral-naive patients. We conclude that immunological responses despite suboptimal viral suppression are common. Prior PI experience, HIV transmission category, baseline CD4 counts, and clinical progression were independently predictive of this condition, although the associated factors were different depending on the patient's prior antiretroviral history.

  19. An Experimental Study of Fluctuating Pressure Loads Beneath Swept Shock Wave/Boundary Layer Interactions

    NASA Astrophysics Data System (ADS)

    Garg, Sanjay

    An experimental research program providing basic knowledge and establishing a database on the fluctuating pressure loads produced on aerodynamic surfaces beneath three-dimensional shock wave/boundary layer interactions is described. Such loads constitute a fundamental problem of critical concern to future supersonic and hypersonic flight vehicles. A turbulent boundary layer on a flat plate is subjected to interactions with swept planar shock waves generated by sharp fins. Fin angles from 10° to 20° at freestream Mach numbers of 3 and 4 produce a variety of interaction strengths from weak to very strong. Miniature pressure transducers flush-mounted in the flat plate have been used to measure interaction-induced wall pressure fluctuations. The distributions of properties of the pressure fluctuations, such as their rms level, amplitude distribution and power spectra, are also determined. Measurements have been made for the first time in the aft regions of these interactions, revealing fluctuating pressure levels as high as 155 dB, which places them in the category of significant aeroacoustic load generators. The fluctuations near the foot of the fin are dominated by low frequency (0-5 kHz) components, and are caused by a previously unrecognized random motion of the primary attachment line. This phenomenon is probably intimately linked to the unsteadiness of the separation shock at the start of the interaction. The characteristics of the pressure fluctuations are explained in light of the features of the interaction flowfield. In particular, physical mechanisms responsible for the generation of high levels of surface pressure fluctuations are proposed based on the results of the study. The unsteadiness of the flowfield over the surface is also examined via a novel, non-intrusive optical technique. Results show that the entire shock structure generated by the interaction undergoes relatively low-frequency oscillations.

  20. Longitudinal Study of Hepatitis A Infection by Saliva Sampling: The Kinetics of HAV Markers in Saliva Revealed the Application of Saliva Tests for Hepatitis A Study.

    PubMed

    Amado Leon, Luciane Almeida; de Almeida, Adilson José; de Paula, Vanessa Salete; Tourinho, Renata Santos; Villela, Daniel Antunes Maciel; Gaspar, Ana Maria Coimbra; Lewis-Ximenez, Lia Laura; Pinto, Marcelo Alves

    2015-01-01

    Despite the increasing number of studies investigating hepatitis A diagnosis through saliva, the frequency and pattern of hepatitis A virus (HAV) markers in this fluid remain unknown. To address this issue, we carried out a longitudinal study to examine the kinetics of HAV markers in saliva, in comparison with serum samples. The study followed up ten patients with acute hepatitis A infection during 180 days post diagnosis (dpd). Total anti-HAV was detected in paired serum and saliva samples until the end of the follow-up, with a peak titer at the 90th dpd. However, total anti-HAV levels were higher in serum than in saliva samples. This HAV marker showed a 100% probability of detection in both serum and saliva during 180 dpd. IgM anti-HAV could be detected in saliva up to 150 dpd, with the highest frequency at the 30th dpd, when it was detected in all individuals. During the first month of HAV infection, this acute HAV marker showed a detection probability of 100% in paired samples. The detection of IgM anti-HAV in saliva was not dependent on its level in serum, HAV-RNA detection and/or viral load, since no association was found between IgM anti-HAV positivity in saliva and any of these parameters (p>0.05). Most of the patients (80%) were found to have HAV-RNA in saliva, mainly in the early acute phase (30th dpd). However, it was possible to demonstrate the presence of HAV RNA in paired samples for more than 90 days, even after seroconversion. No significant relationship was observed between salivary HAV-RNA positivity and serum viral load, demonstrating that serum viral load is not predictive of HAV-RNA detection in saliva. Similar viral loads were seen in paired samples (on average 10⁴ copies/mL). These data demonstrate that the best diagnostic coverage can be achieved by salivary anti-HAV antibody and HAV-RNA tests during 30-90 dpd. 
The long detection window and high probability of HAV-specific antibody positivity in saliva samples make the assessment of salivary antibodies a useful tool for diagnosis and epidemiological studies. The high frequency of HAV-RNA in saliva and its detection probability of about 50% during the first 30 dpd demonstrate that saliva is also useful for molecular investigation of hepatitis A cases, mainly during the early course of infection. Therefore, the collection of saliva may provide a simple, cheap, and non-invasive means of diagnosis, epidemiological surveys, and monitoring of hepatitis A infection.

  1. Longitudinal Study of Hepatitis A Infection by Saliva Sampling: The Kinetics of HAV Markers in Saliva Revealed the Application of Saliva Tests for Hepatitis A Study

    PubMed Central

    Amado Leon, Luciane Almeida; de Almeida, Adilson José; de Paula, Vanessa Salete; Tourinho, Renata Santos; Villela, Daniel Antunes Maciel; Gaspar, Ana Maria Coimbra; Lewis-Ximenez, Lia Laura; Pinto, Marcelo Alves

    2015-01-01

    Despite the increasing number of studies investigating hepatitis A diagnosis through saliva, the frequency and pattern of hepatitis A virus (HAV) markers in this fluid remain unknown. To address this issue, we carried out a longitudinal study to examine the kinetics of HAV markers in saliva, in comparison with serum samples. The study followed up ten patients with acute hepatitis A infection during 180 days post diagnosis (dpd). Total anti-HAV was detected in paired serum and saliva samples until the end of the follow-up, with a peak titer at the 90th dpd. However, total anti-HAV levels were higher in serum than in saliva samples. This HAV marker showed a 100% probability of detection in both serum and saliva during 180 dpd. IgM anti-HAV could be detected in saliva up to 150 dpd, with the highest frequency at the 30th dpd, when it was detected in all individuals. During the first month of HAV infection, this acute HAV marker showed a detection probability of 100% in paired samples. The detection of IgM anti-HAV in saliva was not dependent on its level in serum, HAV-RNA detection and/or viral load, since no association was found between IgM anti-HAV positivity in saliva and any of these parameters (p>0.05). Most of the patients (80%) were found to have HAV-RNA in saliva, mainly in the early acute phase (30th dpd). However, it was possible to demonstrate the presence of HAV RNA in paired samples for more than 90 days, even after seroconversion. No significant relationship was observed between salivary HAV-RNA positivity and serum viral load, demonstrating that serum viral load is not predictive of HAV-RNA detection in saliva. Similar viral loads were seen in paired samples (on average 10⁴ copies/mL). These data demonstrate that the best diagnostic coverage can be achieved by salivary anti-HAV antibody and HAV-RNA tests during 30–90 dpd. 
The long detection window and high probability of HAV-specific antibody positivity in saliva samples make the assessment of salivary antibodies a useful tool for diagnosis and epidemiological studies. The high frequency of HAV-RNA in saliva and its detection probability of about 50% during the first 30 dpd demonstrate that saliva is also useful for molecular investigation of hepatitis A cases, mainly during the early course of infection. Therefore, the collection of saliva may provide a simple, cheap, and non-invasive means of diagnosis, epidemiological surveys, and monitoring of hepatitis A infection. PMID:26690904

  2. A robust method to forecast volcanic ash clouds

    USGS Publications Warehouse

    Denlinger, Roger P.; Pavolonis, Mike; Sieglaff, Justin

    2012-01-01

    Ash clouds emanating from volcanic eruption columns often form trails of ash extending thousands of kilometers through the Earth's atmosphere, disrupting air traffic and posing a significant hazard to air travel. To mitigate such hazards, the community charged with reducing flight risk must accurately assess risk of ash ingestion for any flight path and provide robust forecasts of volcanic ash dispersal. In response to this need, a number of different transport models have been developed for this purpose and applied to recent eruptions, providing a means to assess uncertainty in forecasts. Here we provide a framework for optimal forecasts and their uncertainties given any model and any observational data. This involves random sampling of the probability distributions of input (source) parameters to a transport model and iteratively running the model with different inputs, each time assessing the predictions that the model makes about ash dispersal by direct comparison with satellite data. The results of these comparisons are embodied in a likelihood function whose maximum corresponds to the minimum misfit between model output and observations. Bayes theorem is then used to determine a normalized posterior probability distribution and from that a forecast of future uncertainty in ash dispersal. The nature of ash clouds in heterogeneous wind fields creates a strong maximum likelihood estimate in which most of the probability is localized to narrow ranges of model source parameters. This property is used here to accelerate probability assessment, producing a method to rapidly generate a prediction of future ash concentrations and their distribution based upon assimilation of satellite data as well as model and data uncertainties. 
Applying this method to the recent eruption of Eyjafjallajökull in Iceland, we show that the 3 and 6 h forecasts of ash cloud location probability encompassed the location of observed satellite-determined ash cloud loads, providing an efficient means to assess all of the hazards associated with these ash clouds.
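
    The sampling-and-weighting loop described above can be sketched in toy form. Everything below is invented for illustration (a one-parameter stand-in "transport model" and synthetic observations); it shows only the Bayes-theorem bookkeeping, not an actual dispersal model.

```python
import math
import random

def transport_model(strength, x):
    """Stand-in transport model: ash load decays with distance from source."""
    return strength * math.exp(-x / 50.0)

def likelihood(strength, obs_x, obs_load, sigma=0.5):
    """Gaussian likelihood from the misfit between model output and data."""
    misfit = sum((transport_model(strength, x) - y) ** 2
                 for x, y in zip(obs_x, obs_load))
    return math.exp(-misfit / (2.0 * sigma ** 2))

random.seed(0)
obs_x = [10.0, 30.0, 60.0]          # "satellite" observation locations
true_strength = 2.0
obs_load = [transport_model(true_strength, x) for x in obs_x]

# Random sampling of the source-parameter prior, one model run per sample.
samples = [random.uniform(0.5, 4.0) for _ in range(5000)]
weights = [likelihood(s, obs_x, obs_load) for s in samples]

# Bayes' theorem: normalize the weights into a posterior distribution.
total = sum(weights)
posterior = [w / total for w in weights]
best = samples[max(range(len(samples)), key=lambda i: weights[i])]
```

    As in the paper, the likelihood is sharply peaked, so most of the posterior mass concentrates on a narrow parameter range around the maximum-likelihood estimate.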

  3. Residual stresses and vector hysteresis modeling

    NASA Astrophysics Data System (ADS)

    Ktena, Aphrodite

    2016-04-01

    Residual stresses in magnetic materials, whether the result of processing or intentional loading, leave their footprint on macroscopic data, such as hysteresis loops and differential permeability measurements. A Preisach-type vector model is used to reproduce the observed phenomenology based on assumptions deduced from the data: internal stresses lead to smaller and misaligned grains, hence increased domain wall pinning and angular dispersion of local easy axes, favouring rotation as a magnetization reversal mechanism; misaligned grains contribute to magnetostatic fields opposing the direction of the applied field. The model uses a vector operator that accounts for both reversible and irreversible processes; the Preisach interaction concept to capture the role of stress-related demagnetizing fields; and a characteristic probability density function constructed as a weighted sum of constituent functions: the material is modeled as consisting of various subsystems, e.g. reversal mechanisms or areas subject to strong/weak long-range interactions, and each subsystem is represented by a constituent probability density function. Our assumptions are validated since the model reproduces the hysteresis loops and differential permeability curves observed experimentally, and calculations involving rotating inputs at various residual stress levels are consistent and in agreement with experimental evidence.
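
    As a rough scalar illustration of the Preisach ingredient mentioned above (the paper's vector model is far richer), magnetization can be sketched as a weighted sum of rectangular hysterons; the weights and switching thresholds below are invented for illustration.

```python
def hysteron_state(field_history, alpha, beta, state=-1):
    """Rectangular hysteron: switches up at alpha, down at beta (alpha > beta),
    and otherwise remembers its previous state."""
    for h in field_history:
        if h >= alpha:
            state = 1
        elif h <= beta:
            state = -1
    return state

# (weight, alpha, beta) triples -- a purely illustrative Preisach distribution.
hysterons = [(0.5, 0.4, -0.4), (0.3, 0.8, -0.8), (0.2, 0.5, 0.1)]

def magnetization(field_history):
    return sum(w * hysteron_state(field_history, a, b)
               for w, a, b in hysterons)

M_virgin = magnetization([])                  # all hysterons down
M_remanent = magnetization([-1.0, 1.0, 0.0])  # saturate down, up, return to H=0
# M_remanent differs from M_virgin at the same (zero) field: memory/hysteresis.
```

    In the paper this scalar construction is generalized to a vector operator, and the single density here becomes a weighted sum of constituent densities, one per subsystem.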

  4. Enhanced Handoff Scheme for Downlink-Uplink Asymmetric Channels in Cellular Systems

    PubMed Central

    2013-01-01

    In the latest cellular networks, data services like SNS and UCC can create asymmetric packet generation rates over the downlink and uplink channels. This asymmetry can lead to a downlink-uplink asymmetric channel condition being experienced by cell edge users. This paper proposes a handoff scheme to cope effectively with downlink-uplink asymmetric channels. The proposed handoff scheme exploits the uplink channel quality as well as the downlink channel quality to determine the appropriate timing and direction of handoff. We first introduce downlink and uplink channel models that consider the intercell interference, to verify the downlink-uplink channel asymmetry. Based on these results, we propose an enhanced handoff scheme that exploits both the uplink and downlink channel qualities to reduce the handoff-call dropping probability and the service interruption time. The simulation results show that the proposed handoff scheme reduces the handoff-call dropping probability by about 30% and increases the satisfaction of the service interruption time requirement by about 7% under high offered load, compared to conventional mobile-assisted handoff. In particular, the proposed handoff scheme is more efficient when the uplink QoS requirement is much stricter than the downlink QoS requirement or the uplink channel quality is worse than the downlink channel quality. PMID:24501576
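
    A toy decision rule in the spirit of the scheme described above. The thresholds, hysteresis margin, and quality values are all invented; the paper's scheme additionally models intercell interference and handoff timing.

```python
def should_handoff(dl_dBm, ul_dBm, target_dl_dBm, target_ul_dBm,
                   dl_threshold=-100.0, ul_threshold=-105.0, hysteresis=3.0):
    """Trigger handoff when either the downlink OR the uplink degrades and
    the target cell is better on both links by a hysteresis margin."""
    current_bad = dl_dBm < dl_threshold or ul_dBm < ul_threshold
    target_better = (target_dl_dBm > dl_dBm + hysteresis and
                     target_ul_dBm > ul_dBm + hysteresis)
    return current_bad and target_better

# Uplink-only degradation still triggers a handoff -- the asymmetric case
# that a conventional downlink-only (RSS) rule would miss.
uplink_limited = should_handoff(-90.0, -110.0, -85.0, -100.0)
```

    The hysteresis margin plays the usual role of suppressing ping-pong handoffs between two cells of similar quality.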

  5. Fracture of Reduced-Diameter Zirconia Dental Implants Following Repeated Insertion.

    PubMed

    Karl, Matthias; Scherg, Stefan; Grobecker-Karl, Tanja

    Achievement of high insertion torque values indicating good primary stability is a goal during dental implant placement. The objective of this study was to evaluate whether or not two-piece implants made from zirconia ceramic may be damaged as a result of torque application. A total of 10 two-piece zirconia implants were repeatedly inserted into polyurethane foam material with increasing density and decreasing osteotomy size. The insertion torque applied was measured, and implants were checked for fractures by applying the fluorescent penetrant method. Weibull probability of failure was calculated based on the recorded insertion torque values. Catastrophic failures could be seen in five of the implants from two different batches at insertion torques ranging from 46.0 to 70.5 Ncm, while the remaining implants (all belonging to one batch) survived. Weibull probability of failure seems to be low at the manufacturer-recommended maximum insertion torque of 35 Ncm. Chipping fractures at the thread tips as well as tool marks were the only otherwise observed irregularities. While high insertion torques may be desirable for immediate loading protocols, zirconia implants may fracture when manufacturer-recommended insertion torques are exceeded. Evaluating bone quality prior to implant insertion may be useful.
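
    The Weibull probability of failure used in such reliability analyses has the closed form P_f(T) = 1 − exp[−(T/η)^β]. A sketch with invented shape and scale parameters (not the study's fitted values):

```python
import math

def weibull_failure_probability(torque, eta, beta):
    """Two-parameter Weibull cumulative failure probability at a given torque:
    P_f(T) = 1 - exp(-(T/eta)^beta)."""
    return 1.0 - math.exp(-((torque / eta) ** beta))

# Illustrative parameters only: scale eta = 60 Ncm, shape beta = 8.
p_35 = weibull_failure_probability(35.0, eta=60.0, beta=8.0)  # recommended torque
p_70 = weibull_failure_probability(70.0, eta=60.0, beta=8.0)  # near the upper observed fracture torque
```

    With a steep shape parameter, the failure probability stays small at the manufacturer-recommended 35 Ncm but rises sharply once torques approach the observed fracture range, mirroring the qualitative conclusion above.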

  6. Enhanced Handover Decision Algorithm in Heterogeneous Wireless Network

    PubMed Central

    Abdullah, Radhwan Mohamed; Zukarnain, Zuriati Ahmad

    2017-01-01

    Transferring a huge amount of data between different network locations over the network links depends on the network’s traffic capacity and data rate. Traditionally, a mobile device may be moved to achieve the operations of vertical handover, considering only one criterion, the Received Signal Strength (RSS). The use of a single criterion may cause service interruption, an unbalanced network load and an inefficient vertical handover. In this paper, we propose an enhanced vertical handover decision algorithm based on multiple criteria in the heterogeneous wireless network. The algorithm considers three technology interfaces: Long-Term Evolution (LTE), Worldwide interoperability for Microwave Access (WiMAX) and Wireless Local Area Network (WLAN). It also employs three types of vertical handover decision algorithms: equal priority, mobile priority and network priority. The simulation results illustrate that the three types of decision algorithms outperform the traditional network decision algorithm in terms of the number of handovers and the handover failure probability. In addition, it is noticed that the network priority handover decision algorithm produces better results compared to the equal priority and the mobile priority handover decision algorithms. Finally, the simulation results are validated by the analytical model. PMID:28708067

  7. A Monte Carlo analysis of the Viking lander dynamics at touchdown. [soft landing simulation

    NASA Technical Reports Server (NTRS)

    Muraca, R. J.; Campbell, J. W.; King, C. A.

    1975-01-01

    The performance of the Viking lander has been evaluated by using a Monte Carlo simulation, and all results are presented in statistical form. The primary objectives of this analysis were as follows: (1) to determine the three sigma design values of maximum rigid body accelerations and the minimum clearance of the lander body during landing; (2) to determine the probability of an unstable landing; and (3) to determine the probability of the lander body striking a rock. Two configurations were analyzed with the only difference being in the ability of the primary landing gear struts to carry tension loads.
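
    The probability estimates described are sample fractions from repeated randomized trials. A toy Monte Carlo sketch (the stability criterion and the touchdown-condition distributions below are invented for illustration, not Viking design values):

```python
import random

def unstable(vertical_speed_mps, tilt_deg):
    """Invented stability criterion for illustration only: the landing fails
    if the touchdown is too fast or the lander tilts too far."""
    return vertical_speed_mps > 3.0 or tilt_deg > 30.0

random.seed(1)
trials = 100_000
failures = sum(
    unstable(random.gauss(2.4, 0.3), abs(random.gauss(0.0, 12.0)))
    for _ in range(trials)
)
p_unstable = failures / trials  # Monte Carlo estimate of an unstable landing
```

    Three-sigma design values of accelerations and clearances fall out of the same machinery: collect each quantity over the trials and read off its mean plus three standard deviations.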

  8. Risk Analysis of a Fuel Storage Terminal Using HAZOP and FTA

    PubMed Central

    Baixauli-Pérez, Mª Piedad

    2017-01-01

    The size and complexity of industrial chemical plants, together with the nature of the products handled, means that an analysis and control of the risks involved is required. This paper presents a methodology for risk analysis in chemical and allied industries that is based on a combination of HAZard and OPerability analysis (HAZOP) and a quantitative analysis of the most relevant risks through the development of fault trees, fault tree analysis (FTA). Results from FTA allow prioritizing the preventive and corrective measures to minimize the probability of failure. An analysis of a case study is performed; it consists of the terminal for unloading chemical and petroleum products, and the fuel storage facilities of two companies, in the port of Valencia (Spain). HAZOP analysis shows that loading and unloading areas are the most sensitive areas of the plant and where the most significant danger is a fuel spill. FTA analysis indicates that the most likely event is a fuel spill in the tank truck loading area. A sensitivity analysis of the FTA results shows the importance of the human factor in all sequences of the possible accidents, so improving the training of plant staff should be mandatory. PMID:28665325
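
    The FTA quantification step combines basic-event probabilities through AND/OR gates, assuming independent events. The fault-tree structure and probabilities below are invented for illustration; they are not the Valencia terminal study's fault tree.

```python
def p_and(*ps):
    """AND gate: all basic events must occur (independence assumed)."""
    prob = 1.0
    for p in ps:
        prob *= p
    return prob

def p_or(*ps):
    """OR gate: at least one basic event occurs (independence assumed)."""
    prob = 1.0
    for p in ps:
        prob *= (1.0 - p)
    return 1.0 - prob

# Hypothetical basic events for a tank-truck loading spill.
p_hose_failure = 0.01
p_overfill = 0.005
p_operator_error = 0.02   # the human factor highlighted in the abstract
p_alarm_fails = 0.1

# Top event: spill if hose fails, OR (overfill AND alarm fails AND operator error).
p_top = p_or(p_hose_failure,
             p_and(p_overfill, p_alarm_fails, p_operator_error))
```

    Prioritizing corrective measures then amounts to recomputing `p_top` with each basic-event probability reduced in turn and ranking the resulting drops.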

  9. A quantitative risk analysis approach to port hydrocarbon logistics.

    PubMed

    Ronza, A; Carol, S; Espejo, V; Vílchez, J A; Arnaldos, J

    2006-01-16

    A method is presented that allows quantitative risk analysis to be performed on marine hydrocarbon terminals sited in ports. A significant gap was identified in the technical literature on QRA for the handling of hazardous materials in harbours published prior to this work. The analysis is extended to tanker navigation through port waters and loading and unloading facilities. The steps of the method are discussed, beginning with data collecting. As to accident scenario identification, an approach is proposed that takes into account minor and massive spills due to loading arm failures and tank rupture. Frequency estimation is thoroughly reviewed and a shortcut approach is proposed for frequency calculation. This allows for the two-fold possibility of a tanker colliding/grounding at/near the berth or while navigating to/from the berth. A number of probability data defining the possibility of a cargo spill after an external impact on a tanker are discussed. As to consequence and vulnerability estimates, a scheme is proposed for the use of ratios between the numbers of fatal victims, injured and evacuated people. Finally, an example application is given, based on a pilot study conducted in the Port of Barcelona, where the method was tested.

  10. Risk Analysis of a Fuel Storage Terminal Using HAZOP and FTA.

    PubMed

    Fuentes-Bargues, José Luis; González-Cruz, Mª Carmen; González-Gaya, Cristina; Baixauli-Pérez, Mª Piedad

    2017-06-30

    The size and complexity of industrial chemical plants, together with the nature of the products handled, mean that analysis and control of the risks involved are required. This paper presents a methodology for risk analysis in chemical and allied industries based on a combination of HAZard and OPerability analysis (HAZOP) and a quantitative analysis of the most relevant risks through the development of fault trees, i.e., fault tree analysis (FTA). Results from the FTA allow the preventive and corrective measures that minimize the probability of failure to be prioritized. A case study is analyzed; it consists of the terminal for unloading chemical and petroleum products, and the fuel storage facilities of two companies, in the port of Valencia (Spain). The HAZOP analysis shows that the loading and unloading areas are the most sensitive areas of the plant and that the most significant danger there is a fuel spill. The FTA indicates that the most likely event is a fuel spill in the tank-truck loading area. A sensitivity analysis of the FTA results shows the importance of the human factor in all sequences of the possible accidents, so improving the training of plant staff should be mandatory.
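
    The top-event probability in a fault tree is computed by combining basic-event probabilities through AND and OR gates. A minimal sketch, with a hypothetical gate structure and illustrative probabilities (not the paper's data):

```python
# Basic-event probabilities (illustrative values, not from the paper)
p_hose_rupture = 1e-3
p_overfill     = 5e-3
p_human_error  = 2e-2
p_alarm_fails  = 1e-1

def p_and(*ps):
    """AND gate: product of probabilities of independent events."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):
    """OR gate: 1 minus the product of the complements."""
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical top event "fuel spill during tank-truck loading":
# spill if the hose ruptures, OR the tank overfills while the high-level
# alarm fails, OR an operator error occurs.
p_spill = p_or(p_hose_rupture,
               p_and(p_overfill, p_alarm_fails),
               p_human_error)
print(p_spill)
```

    With these numbers the human-error branch dominates the top event, which is the kind of conclusion the paper's sensitivity analysis draws.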

  11. Variability in large-scale wind power generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiviluoma, Juha; Holttinen, Hannele; Weir, David

    2015-10-25

    The paper demonstrates the characteristics of wind power variability and net load variability in multiple power systems, based on real data from multiple years. The demonstrated characteristics include the probability distribution for different ramp durations, seasonal and diurnal variability, and low net load events. The comparison distinguishes regions with low variability (Sweden, Spain and Germany), medium variability (Portugal, Ireland, Finland and Denmark) and higher variability (Quebec, Bonneville Power Administration and Electric Reliability Council of Texas in North America; Gansu, Jilin and Liaoning in China; and Norway and offshore wind power in Denmark). For regions with low variability, the maximum 1 h wind ramps are below 10% of nominal capacity, and for regions with high variability they may be close to 30%. Wind power variability is mainly explained by the extent of geographical spread, but a higher capacity factor also leads to higher variability. It was also shown that wind power ramps are autocorrelated and dependent on the operating output level. When wind power was concentrated in a smaller area, there were outliers with large changes in wind output that were not present in large areas with well-dispersed wind power.
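
    The ramp statistics the paper reports can be reproduced from any hourly generation series; the sketch below uses a synthetic series for a hypothetical 1000 MW region, not the paper's data.

```python
import math

# Synthetic hourly wind-power series for a hypothetical 1000 MW region:
# a smooth product of two oscillations standing in for weather variation.
capacity = 1000.0
power = [capacity * 0.5 * (1 + 0.8 * math.sin(2 * math.pi * t / 37)
                               * math.cos(2 * math.pi * t / 11))
         for t in range(2000)]
power = [min(max(p, 0.0), capacity) for p in power]

# 1-hour ramps as a fraction of nominal capacity
ramps = [(power[t + 1] - power[t]) / capacity for t in range(len(power) - 1)]

max_up   = max(ramps)
max_down = min(ramps)
# empirical probability that the absolute ramp exceeds 10% of capacity
p_gt_10 = sum(abs(r) > 0.10 for r in ramps) / len(ramps)
print(max_up, max_down, p_gt_10)
```

    On real data the same few lines yield the ramp-duration distributions and maximum-ramp percentages the paper compares across regions.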

  12. Probabilistic evaluation of fuselage-type composite structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1992-01-01

    A methodology is developed to computationally simulate the uncertain behavior of composite structures. The uncertain behavior includes buckling loads, natural frequencies, displacements, stress/strain etc., which are the consequences of the random variation (scatter) of the primitive (independent random) variables in the constituent, ply, laminate and structural levels. This methodology is implemented in the IPACS (Integrated Probabilistic Assessment of Composite Structures) computer code. A fuselage-type composite structure is analyzed to demonstrate the code's capability. The probability distribution functions of the buckling loads, natural frequency, displacement, strain and stress are computed. The sensitivity of each primitive (independent random) variable to a given structural response is also identified from the analyses.

  13. Fatigue damage prognosis of internal delamination in composite plates under cyclic compression loadings using affine arithmetic as uncertainty propagation tool

    NASA Astrophysics Data System (ADS)

    Gbaguidi, Audrey J.-M.

    Structural health monitoring (SHM) has become indispensable for reducing maintenance costs and increasing the in-service capacity of a structure. The increased use of lightweight composite materials in aircraft structures drastically increased the effects of fatigue induced damage on their critical structural components and thus the necessity to predict the remaining life of those components. Damage prognosis, one of the least investigated fields in SHM, uses the current damage state of the system to forecast its future performance by estimating the expected loading environments. A successful damage prediction model requires the integration of technologies in areas like measurements, materials science, mechanics of materials, and probability theories, but most importantly the quantification of uncertainty in all these areas. In this study, Affine Arithmetic is used as a method for incorporating the uncertainties due to the material properties into the fatigue life prognosis of composite plates subjected to cyclic compressive loadings. When loadings are compressive in nature, the composite plates undergo repeated buckling-unloading of the delaminated layer which induces mixed modes I and II states of stress at the tip of the delamination in the plates. The Kardomateas model-based prediction law is used to predict the growth of the delamination, while the integration of the effects of the uncertainties for modes I and II coefficients in the fatigue life prediction model is handled using Affine arithmetic. The Mode I and Mode II interlaminar fracture toughness and fatigue characterization of the composite plates are first experimentally studied to obtain the material coefficients and fracture toughness, respectively. Next, these obtained coefficients are used in the Kardomateas law to predict the delamination lengths in the composite plates while using Affine Arithmetic to handle their uncertainties. 
Finally, the fatigue characterization of the composite plates during compressive-buckling loadings is experimentally studied, and the delamination lengths obtained are compared with the predicted values to assess the performance of affine arithmetic as an uncertainty propagation tool.
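
    The propagation of material-coefficient uncertainty through a fatigue-growth law can be sketched with plain interval arithmetic (the simpler, more conservative cousin of the affine arithmetic the thesis uses, which additionally tracks correlated error terms). The Paris-type law, bounds, and fixed energy-release rate below are illustrative assumptions, not the Kardomateas model parameters.

```python
# Interval propagation of a Paris-type delamination growth law
#   da/dN = C * (dG)^m
# with interval-valued material constants C and m.  For simplicity the
# energy-release-rate range dG is held fixed; in the real model it would
# depend on the current delamination length.
C_lo, C_hi = 1.0e-10, 2.0e-10   # illustrative coefficient bounds
m_lo, m_hi = 3.0, 3.4           # illustrative exponent bounds
dG = 150.0                      # energy-release-rate range, J/m^2

a_lo = a_hi = 10.0              # initial delamination length, mm
for _ in range(10000):          # load cycles
    a_lo += C_lo * dG ** m_lo   # slowest possible growth
    a_hi += C_hi * dG ** m_hi   # fastest possible growth
print(a_lo, a_hi)
```

    The [a_lo, a_hi] envelope brackets the predicted delamination length; affine arithmetic would give a tighter envelope by keeping track of which uncertainties are shared between terms.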

  14. Summary of sediment data from the Yampa river and upper Green river basins, Colorado and Utah, 1993-2002

    USGS Publications Warehouse

    Elliott, John G.; Anders, Steven P.

    2004-01-01

    The water resources of the Upper Colorado River Basin have been extensively developed for water supply, irrigation, and power generation through water storage in upstream reservoirs during spring runoff and subsequent releases during the remainder of the year. The net effect of water-resource development has been to substantially modify the predevelopment annual hydrograph as well as the timing and amount of sediment delivery from the upper Green River and the Yampa River Basins tributaries to the main-stem reaches where endangered native fish populations have been observed. The U.S. Geological Survey, in cooperation with the Colorado Division of Wildlife and the U.S. Fish and Wildlife Service, began a study to identify sediment source reaches in the Green River main stem and the lower Yampa and Little Snake Rivers and to identify sediment-transport relations that would be useful in assessing the potential effects of hydrograph modification by reservoir operation on sedimentation at identified razorback spawning bars in the Green River. The need for additional data collection is evaluated at each sampling site. Sediment loads were calculated at five key areas within the watershed by using instantaneous measurements of streamflow, suspended-sediment concentration, and bedload. Sediment loads were computed at each site for two modes of transport (suspended load and bedload), as well as for the total-sediment load (suspended load plus bedload) where both modes were sampled. Sediment loads also were calculated for sediment particle-size range (silt-and-clay, and sand-and-gravel sizes) if laboratory size analysis had been performed on the sample, and by hydrograph season. Sediment-transport curves were developed for each type of sediment load by a least-squares regression of logarithmic-transformed data. 
Transport equations for suspended load and total load had coefficients of determination of at least 0.72 at all of the sampling sites except Little Snake River near Lily, Colorado. Bedload transport equations at the five sites had coefficients of determination that ranged from 0.40 (Yampa River at Deerlodge Park, Colorado) to 0.80 (Yampa River above Little Snake River near Maybell, Colorado). Transport equations for silt and clay-size material had coefficients of determination that ranged from 0.46 to 0.82. Where particle-size data were available (Yampa River at Deerlodge Park, Colorado, and Green River near Jensen, Utah), transport equations for the smaller particle sizes (fine sand) tended to have higher coefficients of determination than the equations for coarser sizes (medium and coarse sand, and very coarse sand and gravel). Because the data had to be subdivided into at least two subsets (rising-limb, falling-limb and, occasionally, base-flow periods), the seasonal transport equations generally were based on relatively few samples. All transport equations probably could be improved by additional data collected at strategically timed periods.
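
    A sediment-transport curve of the kind described here is a least-squares fit in log space, L = a * Q^b. The paired streamflow/load values below are synthetic illustrations, not the USGS measurements.

```python
import math

# Synthetic paired observations: streamflow Q (m^3/s) and suspended-
# sediment load L (t/day), roughly following a power law with scatter.
Q = [5, 8, 12, 20, 35, 60, 90, 140, 220, 350]
L = [2.1, 4.9, 9.8, 24, 70, 180, 390, 880, 2100, 5200]

# least-squares regression of log-transformed data
x = [math.log10(q) for q in Q]
y = [math.log10(l) for l in L]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)
log_a = my - b * mx

# coefficient of determination in log space
yhat = [log_a + b * xi for xi in x]
ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
ss_tot = sum((yi - my) ** 2 for yi in y)
r2 = 1.0 - ss_res / ss_tot
print(b, 10 ** log_a, r2)
```

    The exponent b and the coefficient of determination r2 are the quantities the report compares across sampling sites and particle-size classes.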

  15. Occurrence and load of selected herbicides and metabolites in the lower Mississippi River

    USGS Publications Warehouse

    Clark, G.M.; Goolsby, D.A.

    2000-01-01

    Analyses of water samples collected from the Mississippi River at Baton Rouge, Louisiana, during 1991-1997 indicate that hundreds of metric tons of herbicides and herbicide metabolites are being discharged annually to the Gulf of Mexico. Atrazine, metolachlor, and the ethane-sulfonic acid metabolite of alachlor (alachlor ESA) were the most frequently detected herbicides and, in general, were present in the largest concentrations. Almost 80% of the annual herbicide load to the Gulf of Mexico occurred during the growing season from May to August. The concentrations and loads of alachlor in the Mississippi River decreased dramatically after 1993 in response to decreased use in the basin. In contrast, the concentrations and loads of acetochlor increased after 1994, reflecting its role as a replacement for alachlor. The peak annual herbicide load occurred in 1993, when approximately 640 metric tons (t) of atrazine, 320 t of cyanazine, 215 t of metolachlor, 53 t of simazine, and 50 t of alachlor were discharged to the Gulf of Mexico. The annual loads of atrazine and cyanazine were generally 1-2% of the amount annually applied in the Mississippi River drainage basin; the annual loads of acetochlor, alachlor, and metolachlor were generally less than 1%. Despite a reduction in atrazine use, historical data do not indicate a long-term downward trend in the atrazine load to the Gulf of Mexico. Although a relation (r2=0.62) exists between the atrazine load and stream discharge during May to August, variations in herbicide use and rainfall patterns within subbasins can have a large effect on herbicide loads in the Mississippi River Basin and probably explain a large part of the annual variation in atrazine load to the Gulf of Mexico. Copyright (C) 2000 Elsevier Science B.V.

  16. Occurrence and load of selected herbicides and metabolites in the lower Mississippi River

    USGS Publications Warehouse

    Clark, Gregory M.; Goolsby, Donald A.

    2000-01-01

    Analyses of water samples collected from the Mississippi River at Baton Rouge, Louisiana, during 1991–1997 indicate that hundreds of metric tons of herbicides and herbicide metabolites are being discharged annually to the Gulf of Mexico. Atrazine, metolachlor, and the ethane-sulfonic acid metabolite of alachlor (alachlor ESA) were the most frequently detected herbicides and, in general, were present in the largest concentrations. Almost 80% of the annual herbicide load to the Gulf of Mexico occurred during the growing season from May to August. The concentrations and loads of alachlor in the Mississippi River decreased dramatically after 1993 in response to decreased use in the basin. In contrast, the concentrations and loads of acetochlor increased after 1994, reflecting its role as a replacement for alachlor. The peak annual herbicide load occurred in 1993, when approximately 640 metric tons (t) of atrazine, 320 t of cyanazine, 215 t of metolachlor, 53 t of simazine, and 50 t of alachlor were discharged to the Gulf of Mexico. The annual loads of atrazine and cyanazine were generally 1–2% of the amount annually applied in the Mississippi River drainage basin; the annual loads of acetochlor, alachlor, and metolachlor were generally less than 1%. Despite a reduction in atrazine use, historical data do not indicate a long-term downward trend in the atrazine load to the Gulf of Mexico. Although a relation (r2=0.62) exists between the atrazine load and stream discharge during May to August, variations in herbicide use and rainfall patterns within subbasins can have a large effect on herbicide loads in the Mississippi River Basin and probably explain a large part of the annual variation in atrazine load to the Gulf of Mexico.

  17. Dynamic Capacity and Surface Fatigue Life for Spur and Helical Gears

    NASA Technical Reports Server (NTRS)

    Coy, J. J.; Townsend, D. P.; Zaretsky, E. V.

    1975-01-01

    A mathematical model for the surface fatigue life of a gear, a pinion, or an entire meshing gear train is given. The theory is based on a previous statistical approach for rolling-element bearings. Equations are presented which give the dynamic capacity of the gear set. The dynamic capacity is the transmitted tangential load that gives a 90 percent probability of survival of the gear set for one million pinion revolutions. The analytical results are compared with test data for a set of AISI 9310 spur gears operating at a maximum Hertz stress of 1.71 billion N/sq m and 10,000 rpm. The theoretical life predictions are shown to be good when material constants obtained from rolling-element bearing tests are used in the gear life model.
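
    The load-life-reliability relationship behind a "dynamic capacity" can be sketched with a two-parameter Weibull survival model and a load-life exponent; the constants below are generic illustrations, not the AISI 9310 values from the report.

```python
import math

# Illustrative constants (not the paper's values):
C = 4000.0   # dynamic capacity: tangential load (N) giving 90% survival
             # at one million revolutions
p = 4.3      # load-life exponent typical of surface-fatigue models
e = 2.5      # Weibull slope

def l10_life(load):
    """Life (millions of revolutions) at 90% survival for a given load."""
    return (C / load) ** p

def survival(load, life):
    """Weibull survival probability at 'life' (millions of revolutions):
    ln(1/S) scales as (life / L10)^e * ln(1/0.9)."""
    l10 = l10_life(load)
    return math.exp(math.log(0.9) * (life / l10) ** e)

# Sanity check: at the capacity load, one million revolutions gives
# exactly 90% survival by construction.
s = survival(C, 1.0)
print(s, l10_life(2000.0))
```

    Halving the load multiplies the 90%-reliability life by 2^p, which is why the load-life exponent dominates gear-life predictions.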

  18. Overload Control for Signaling Congestion of Machine Type Communications in 3GPP Networks

    PubMed Central

    Lu, Zhaoming; Pan, Qi; Wang, Luhan; Wen, Xiangming

    2016-01-01

    Because of the limited resources on the radio access channels of 3rd Generation Partnership Project (3GPP) networks, one of the most challenging tasks posed by 3GPP cellular-based machine type communications (MTC) is congestion due to massive requests for connection to the radio access network (RAN). In this paper, an overload control algorithm in the 3GPP RAN is proposed which proactively disperses the simultaneous access attempts over an evenly distributed time window. Through a periodic reservation strategy, the massive access requests of MTC devices are dispersed in time, which reduces the probability of signaling conflicts. Through a compensation and prediction mechanism, each device can communicate with the MTC server under a dynamic air-interface load. Numerical results show that the proposed method makes MTC applications friendly to the 3GPP cellular network. PMID:27936011

  19. Overload Control for Signaling Congestion of Machine Type Communications in 3GPP Networks.

    PubMed

    Lu, Zhaoming; Pan, Qi; Wang, Luhan; Wen, Xiangming

    2016-01-01

    Because of the limited resources on the radio access channels of 3rd Generation Partnership Project (3GPP) networks, one of the most challenging tasks posed by 3GPP cellular-based machine type communications (MTC) is congestion due to massive requests for connection to the radio access network (RAN). In this paper, an overload control algorithm in the 3GPP RAN is proposed which proactively disperses the simultaneous access attempts over an evenly distributed time window. Through a periodic reservation strategy, the massive access requests of MTC devices are dispersed in time, which reduces the probability of signaling conflicts. Through a compensation and prediction mechanism, each device can communicate with the MTC server under a dynamic air-interface load. Numerical results show that the proposed method makes MTC applications friendly to the 3GPP cellular network.
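
    The benefit of dispersing access attempts can be illustrated with a slotted-access collision simulation; the device counts and window sizes below are illustrative, not taken from the paper's 3GPP model.

```python
import random

random.seed(7)

def collision_fraction(n_devices, n_slots, trials=2000):
    """Fraction of access attempts that land in a slot chosen by more
    than one device (a signaling conflict), estimated by simulation."""
    collided = 0
    for _ in range(trials):
        slots = [random.randrange(n_slots) for _ in range(n_devices)]
        counts = {}
        for s in slots:
            counts[s] = counts.get(s, 0) + 1
        collided += sum(c for c in counts.values() if c > 1)
    return collided / (trials * n_devices)

# (a) synchronized burst: all devices contend within a short window
burst = collision_fraction(100, 200)
# (b) attempts dispersed evenly over a much longer reservation window
dispersed = collision_fraction(100, 4000)
print(burst, dispersed)
```

    Spreading the same number of attempts over a wider window sharply cuts the conflict probability, which is the core idea of the proposed overload control.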

  20. Probabilistic fracture finite elements

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Belytschko, T.; Lua, Y. J.

    1991-01-01

    Probabilistic Fracture Mechanics (PFM) is a promising method for estimating the fatigue life and inspection cycles of mechanical and structural components. The Probability Finite Element Method (PFEM), which is based on second-moment analysis, has proved to be a promising, practical approach for handling problems with uncertainties. Because the PFEM provides a powerful computational tool to determine the first and second moments of random parameters, the second-moment reliability method can be easily combined with the PFEM to obtain measures of the reliability of the structural system. The method is also being applied to fatigue crack growth. Uncertainties in the material properties of advanced materials such as polycrystalline alloys, ceramics, and composites are commonly observed in experimental tests. This is mainly attributed to intrinsic microcracks, which are randomly distributed as a result of the applied load and the residual stress.

  1. Reliability and risk assessment of structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1991-01-01

    Development of reliability and risk assessment of structural components and structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) the evaluation of the various uncertainties in terms of cumulative distribution functions for various structural response variables based on known or assumed uncertainties in primitive structural variables; (2) evaluation of the failure probability; (3) reliability and risk-cost assessment; and (4) an outline of an emerging approach for eventual certification of man-rated structures by computational methods. Collectively, the results demonstrate that the structural durability/reliability of man-rated structural components and structures can be effectively evaluated by using formal probabilistic methods.

  2. Probabilistic fracture finite elements

    NASA Astrophysics Data System (ADS)

    Liu, W. K.; Belytschko, T.; Lua, Y. J.

    1991-05-01

    Probabilistic Fracture Mechanics (PFM) is a promising method for estimating the fatigue life and inspection cycles of mechanical and structural components. The Probability Finite Element Method (PFEM), which is based on second-moment analysis, has proved to be a promising, practical approach for handling problems with uncertainties. Because the PFEM provides a powerful computational tool to determine the first and second moments of random parameters, the second-moment reliability method can be easily combined with the PFEM to obtain measures of the reliability of the structural system. The method is also being applied to fatigue crack growth. Uncertainties in the material properties of advanced materials such as polycrystalline alloys, ceramics, and composites are commonly observed in experimental tests. This is mainly attributed to intrinsic microcracks, which are randomly distributed as a result of the applied load and the residual stress.
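
    Second-moment analysis of the kind underlying the PFEM can be shown on a scalar response: propagate the mean and variance of random inputs through a first-order Taylor expansion (the first-order second-moment, or FOSM, method). The bending-stress response and all numbers below are a toy example, not the paper's formulation.

```python
import math

# FOSM propagation: for a response g(X1, X2) with independent random
# inputs, mean(g) ~ g(mu) and var(g) ~ sum (dg/dXi)^2 * var(Xi).
# Toy response: bending stress in a plate, g = 6*M / (w * t^2),
# with random moment M and thickness t.
mu_M, sd_M = 1000.0, 100.0     # moment, N*mm
mu_t, sd_t = 10.0, 0.5         # thickness, mm
w = 50.0                       # width, mm (deterministic)

def g(M, t):
    return 6.0 * M / (w * t ** 2)

# analytic partial derivatives evaluated at the means
dg_dM = 6.0 / (w * mu_t ** 2)
dg_dt = -12.0 * mu_M / (w * mu_t ** 3)

mean_g = g(mu_M, mu_t)
var_g = (dg_dM * sd_M) ** 2 + (dg_dt * sd_t) ** 2

# second-moment reliability index against an allowable stress of 2.5
beta = (2.5 - mean_g) / math.sqrt(var_g)
print(mean_g, math.sqrt(var_g), beta)
```

    In the PFEM the same first- and second-moment information is computed for the full finite-element response vector rather than a scalar.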

  3. Distribution-Agnostic Stochastic Optimal Power Flow for Distribution Grids: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler

    2016-09-01

    This paper outlines a data-driven, distributionally robust approach to solving chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance constraints; in particular, the mean and covariance matrix of the forecast errors are updated online and leveraged to enforce voltage regulation with a predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance-constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
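
    The Chebyshev-based reformulation of a chance constraint can be shown in a few lines: the one-sided Chebyshev (Cantelli) inequality bounds the violation probability for any distribution with known mean and variance. The voltage numbers below are illustrative, not from the paper.

```python
import math

# One-sided Chebyshev (Cantelli) bound: for ANY distribution with mean
# mu and standard deviation sigma,
#   P(V >= mu + k*sigma) <= 1 / (1 + k^2).
# Choosing k = sqrt((1 - eps) / eps) therefore enforces
#   P(V > v_max) <= eps  whenever  mu + k*sigma <= v_max.
eps   = 0.05          # allowed violation probability
v_max = 1.05          # per-unit voltage limit
mu_v  = 1.02          # predicted voltage (mean), p.u.
sig_v = 0.01          # forecast-error standard deviation, p.u.

k = math.sqrt((1.0 - eps) / eps)
robust_margin = mu_v + k * sig_v       # must be <= v_max to certify
feasible = robust_margin <= v_max
print(k, robust_margin, feasible)
```

    Here the robust margin exceeds the limit, so a distributionally robust scheme would curtail PV (or otherwise adjust the operating point) until the deterministic surrogate constraint holds; this conservatism is the price of making no distributional assumption.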

  4. Model-based prognostics for batteries which estimates useful life and uses a probability density function

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor)

    2012-01-01

    This invention develops a mathematical model to describe battery behavior during individual discharge cycles as well as over its cycle life. The basis for the form of the model has been linked to the internal processes of the battery and validated using experimental data. Effects of temperature and load current have also been incorporated into the model. Subsequently, the model has been used in a Particle Filtering framework to make predictions of remaining useful life for individual discharge cycles as well as for cycle life. The prediction performance was found to be satisfactory as measured by performance metrics customized for prognostics for a sample case. The work presented here provides initial steps towards a comprehensive health management solution for energy storage devices.
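
    A minimal particle-filter sketch in the spirit of this record, under heavy simplifying assumptions: a one-parameter exponential capacity-fade model with particles over the fade rate, weighted by noisy capacity measurements. The model, noise levels, and end-of-life threshold are illustrative, not the patent's.

```python
import random
import math

random.seed(3)

# Toy fade model: capacity after k cycles is (1 - r)^k of nominal,
# with uncertain fade rate r.  Particles sample r; weights come from
# the likelihood of noisy capacity measurements.
N = 500
true_r = 0.004
particles = [random.uniform(0.001, 0.01) for _ in range(N)]
weights = [1.0 / N] * N

cap_true = 1.0
for k in range(50):
    cap_true *= (1.0 - true_r)
    z = cap_true + random.gauss(0.0, 0.005)       # noisy measurement
    new_w = []
    for r, w in zip(particles, weights):
        pred = (1.0 - r) ** (k + 1)
        lik = math.exp(-0.5 * ((z - pred) / 0.005) ** 2)
        new_w.append(w * lik + 1e-300)            # guard against underflow
    total = sum(new_w)
    weights = [w / total for w in new_w]

r_hat = sum(r * w for r, w in zip(particles, weights))  # posterior mean rate
# remaining useful life: cycles until capacity falls below 80% of nominal
n_eol = math.log(0.8) / math.log(1.0 - r_hat)
rul = max(0.0, n_eol - 50)
print(r_hat, rul)
```

    A production prognostic would also resample particles and model temperature and load-current effects, as the record describes.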

  5. Fire and explosion hazards to flora and fauna from explosives.

    PubMed

    Merrifield, R

    2000-06-30

    Deliberate or accidental initiation of explosives can produce a range of potentially damaging fire and explosion effects. Quantification of the consequences of such effects upon the surroundings, particularly on people and structures, has always been of paramount importance. Information on the effects on flora and fauna, however, is limited; the weakest area is probably the fragmentation of buildings and its effects on different small mammals. Available information has been used here to gain an appreciation of the likely magnitude of the potential fire and explosion effects on flora and fauna. This is based on a number of broad assumptions and a variety of data sources, including World War II bomb damage, experiments performed with animals 30-40 years ago, and more recent field trials on building break-up under explosive loading.

  6. Probabilistic load simulation: Code development status

    NASA Astrophysics Data System (ADS)

    Newell, J. F.; Ho, H.

    1991-05-01

    The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are included in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system that also includes probabilistic structural analyses, reliability, and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to implement a knowledge based system coupled with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs. The load simulation module performs the numerical computation to generate the probabilistic loads with load information supplied from the CLS knowledge base.

  7. Power generation in microbial fuel cells using platinum group metal-free cathode catalyst: Effect of the catalyst loading on performance and costs

    NASA Astrophysics Data System (ADS)

    Santoro, Carlo; Kodali, Mounika; Herrera, Sergio; Serov, Alexey; Ieropoulos, Ioannis; Atanassov, Plamen

    2018-02-01

    A platinum group metal-free (PGM-free) catalyst at different loadings was investigated in air-breathing-electrode microbial fuel cells (MFCs). First, the electrocatalytic activity of the catalyst towards the oxygen reduction reaction (ORR) was investigated in a rotating ring disk electrode (RRDE) setup at different catalyst loadings. The results showed that higher loading led to an increase in the half-wave potential and the limiting current, and to a further decrease in peroxide production. The number of electrons transferred also increased slightly with the catalyst loading, up to a value of ≈3.75. This variation probably indicates that the catalyst investigated follows a 2x2e- transfer mechanism. The catalyst was integrated within an activated-carbon, pellet-like, air-breathing cathode at eight different loadings varying between 0.1 mg cm-2 and 10 mg cm-2. Performance was enhanced gradually with the increase in catalyst content. Power densities varied between 90 ± 9 μW cm-2 and 262 ± 4 μW cm-2 at catalyst loadings of 0.1 mg cm-2 and 10 mg cm-2, respectively. Cost assessments related to the catalyst performance are presented. Greater catalyst utilization led to an increase in power generated, but with a substantial increase in overall cost. A decrease in performance due to cathode/catalyst deterioration over time also led to a further increase in the costs.
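
    The electron-transfer number and peroxide yield quoted in the abstract come from standard RRDE relations; a quick sketch with illustrative disk/ring currents (not the paper's measurements):

```python
# Standard RRDE relations for the oxygen reduction reaction:
#   n     = 4 * I_d / (I_d + I_r / N_c)
#   %H2O2 = 200 * (I_r / N_c) / (I_d + I_r / N_c)
# where I_d is the disk current, I_r the ring current, and N_c the
# collection efficiency of the ring.
def rrde_metrics(i_disk, i_ring, n_collection):
    ir = i_ring / n_collection
    n_e = 4.0 * i_disk / (i_disk + ir)
    h2o2 = 200.0 * ir / (i_disk + ir)
    return n_e, h2o2

# Illustrative currents (mA) and a typical collection efficiency
n_e, h2o2 = rrde_metrics(i_disk=1.00, i_ring=0.0125, n_collection=0.37)
print(n_e, h2o2)
```

    The two quantities are linked algebraically (n = 4 - %H2O2/50), so an electron-transfer number near 3.75 corresponds directly to a low peroxide yield.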

  8. Effect of cognitive load on articulation rate and formant frequencies during simulator flights.

    PubMed

    Huttunen, Kerttu H; Keränen, Heikki I; Pääkkönen, Rauno J; Päivikki Eskelinen-Rönkä, R; Leino, Tuomo K

    2011-03-01

    This study explored how three types of intensive cognitive load typical of military aviation (load on situation awareness, information processing, or decision-making) affect speech. The utterances of 13 male military pilots were recorded during simulated combat flights. Articulation rate was calculated from the speech samples, and the first formant (F1) and second formant (F2) were tracked from first-syllable short vowels in pre-defined phoneme environments. Articulation rate was found to correlate negatively (albeit with low coefficients) with loads on situation awareness and decision-making, but not with changes in F1 or F2. Changes were seen in the spectra of the vowels: the mean F1 of front vowels usually increased and their mean F2 decreased as a function of cognitive load, and both F1 and F2 of back vowels increased. The strongest associations were seen between the three types of cognitive load and the F1 and F2 changes in back vowels. Because fluent and clear radio speech communication is vital to safety in aviation, and temporal and spectral changes may affect speech intelligibility, careful use of standard aviation phraseology and training in the production of clear speech under a high level of cognitive load are important measures for diminishing the probability of misunderstandings. © 2011 Acoustical Society of America

  9. Extreme winds and tornadoes: an overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, J.R.

    1985-01-01

    The objective of this course on extreme winds, hurricanes and tornadoes is to provide an overview of these natural phenomena from the perspective of the design of new buildings and structures or the evaluation of existing ones. The information is directly applicable to design and evaluation processes. The premise is that the facility under consideration, which may consist of various buildings, structures, processing equipment, stacks, ventilation ducts, etc., can be classified into certain categories, depending on the importance of the mission performed in the facility or the hazard presented by the particular operation. Classifying the facility into an appropriate category automatically defines certain design goals for the facility. The design goals are then met by selecting a design wind speed appropriate for the specified exceedance probability and by following certain specified design procedures. The problem then is to determine the appropriate wind loads and other applicable loads, including dead loads, live loads, seismic loads and other loads that may act on the structures. The design process can then proceed in the usual manner. In the case of existing facilities, the strengths of the various structural elements, subsystems and systems are evaluated, and these strengths are related to the wind speeds that would result in failure to meet the design goals. 12 refs.
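
    The link between a design wind speed's exceedance probability and the risk over a facility's life is a standard return-period calculation; the 100-year/50-year numbers below are just an example.

```python
# If the design wind speed has annual exceedance probability 1/T
# (return period T years), the chance of at least one exceedance during
# an n-year facility life is
#   P = 1 - (1 - 1/T) ** n
def lifetime_exceedance(T_years, life_years):
    return 1.0 - (1.0 - 1.0 / T_years) ** life_years

# e.g. a "100-year" wind over a 50-year facility life
p = lifetime_exceedance(100, 50)
print(p)
```

    A roughly 40% lifetime chance of exceeding the "100-year" wind is why critical-facility categories are assigned much smaller annual exceedance probabilities.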

  10. Post retention and post/core shear bond strength of four post systems.

    PubMed

    Stockton, L W; Williams, P T; Clarke, C T

    2000-01-01

    As clinicians we continue to search for a post system that gives maximum retention while maximizing resistance to root fracture. The introduction of several new post systems, with claims of high retention and high resistance to root fracture, requires that independent studies be performed to evaluate these claims. This study tested the tensile and shear dislodgment forces of four post designs that were luted into roots 10 mm apical of the CEJ. The Para Post Plus (P1) is a parallel-sided, passive design; the Para Post XT (P2) is a combination active/passive design; the Flexi-Post (F1) and the Flexi-Flange (F2) are active post designs. All systems tested were stainless steel. This study compared the test results of the four post designs for tensile and shear dislodgment. All mounted samples were loaded until failure occurred. The tensile load was applied parallel to the long axis of the root, while the shear load was applied at 45° to the long axis of the root. The Flexi-Post (F1) was significantly different from the other three in the tensile test; however, the Para Post XT (P2) was significantly different from the other three in the shear test and had a better probability of survival in the Kaplan-Meier survival function test. Based on the results of this study, our recommendation is the Para Post XT (P2).
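
    The Kaplan-Meier survival function mentioned here can be computed from (load-at-failure, event) pairs, where censored samples are those that never failed within the test range. The data below are illustrative, not the study's measurements.

```python
# (dislodgment load, failed?) pairs; False marks a censored sample
samples = [(310, True), (340, True), (360, False), (410, True),
           (450, True), (450, False), (520, True), (600, False)]

def kaplan_meier(samples):
    """Return [(load, S(load))] stepping down at each observed failure:
    at each failure load t, S *= (n_at_risk - deaths) / n_at_risk."""
    samples = sorted(samples)
    n_at_risk = len(samples)
    s, curve = 1.0, []
    i = 0
    while i < len(samples):
        t = samples[i][0]
        deaths = sum(1 for x, e in samples if x == t and e)
        ties = sum(1 for x, _ in samples if x == t)
        if deaths:
            s *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, s))
        n_at_risk -= ties
        i += ties
    return curve

curve = kaplan_meier(samples)
print(curve)
```

    Comparing such curves between post systems (e.g. with a log-rank test) is how a "better probability of survival" claim is supported.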

  11. Mass burden and estimated flux of heavy metals in Pakistan coast: sedimentary pollution and eco-toxicological concerns.

    PubMed

    Ali, Usman; Malik, Riffat Naseem; Syed, Jabir Hussain; Mehmood, Ch Tahir; Sánchez-García, Laura; Khalid, Azeem; Chaudhry, Muhammad Jamshed Iqbal

    2015-03-01

    Heavy-metal contamination in coastal areas poses a serious threat to aquatic life and public health due to the high toxicity and bio-accumulation potential of these metals. In the present study, levels of different heavy metals (Cu, Cd, Cr, Ni, Co, Pb, Zn, and Mn), their spatial distribution, geochemical status, and enrichment indices (Cu, Cd, Cr, Ni, Co, Pb, Zn) were investigated in sediment samples from 18 coastal sites of Pakistan. The analyses of coastal sediments indicated the presence of heavy metals in the order Cr > Zn > Cu > Pb > Ni > Mn > Co > Cd. The geo-accumulation index (Igeo), enrichment factor (EF), and contamination factor (CF) showed a diverse range of heavy-metal enrichment from site to site. The pollution load index (PLI) showed that the average pollution load along the entire coastal belt was not significant. Based on the mean effect range-median quotient, coastal sediments of Pakistan had a 21% probability of toxicity. The estimated sedimentary load of selected heavy metals was in the range of 0.3-44.7 g/cm²/year, while the depositional flux was in the range of 0.07-43.5 t/year. Heavy-metal inventories of 9.8 × 10²-3.8 × 10⁵ t were estimated in the coastal sediments of Pakistan. The enrichment and contamination factors (EF and CF) suggested significant influence of anthropogenic and industrial activities along the coastal belt of Pakistan.
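    The index definitions behind this kind of assessment are standard in the sediment-quality literature. The sketch below shows how Igeo, CF, and PLI are typically computed; the concentrations and background values are illustrative assumptions, not the paper's data:

    ```python
    import math

    def igeo(conc, background):
        """Geo-accumulation index: Igeo = log2(Cn / (1.5 * Bn))."""
        return math.log2(conc / (1.5 * background))

    def contamination_factor(conc, background):
        """CF = measured concentration / background concentration."""
        return conc / background

    def pollution_load_index(cfs):
        """PLI = n-th root of the product of the contamination factors."""
        prod = 1.0
        for cf in cfs:
            prod *= cf
        return prod ** (1.0 / len(cfs))

    # Illustrative (not measured) concentrations in mg/kg: Cr, Zn, Cu
    measured = {"Cr": 90.0, "Zn": 95.0, "Cu": 45.0}
    background = {"Cr": 100.0, "Zn": 95.0, "Cu": 55.0}  # assumed crustal averages

    cfs = [contamination_factor(measured[m], background[m]) for m in measured]
    pli = pollution_load_index(cfs)
    print(round(pli, 3))  # PLI < 1 indicates no significant overall pollution load
    ```

    A PLI below 1, as in this toy example, corresponds to the paper's finding that the average pollution load along the coast was not significant.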

  12. Pulmonary Function Measures before and after Exposure of Human Subjects to +G(z) and +G(x) Acceleration Loads.

    DTIC Science & Technology

    1981-09-28

    If pure oxygen is breathed instead of air (as would most probably be the case for pilots in an air combat situation), absorptional atelectasis could...occur in the most dependent parts of the lungs, with resulting arterial desaturation. However, even when air is breathed, atelectasis can occur and

  13. A cellular automaton model of wildfire propagation and extinction

    Treesearch

    Keith C. Clarke; James A. Brass; Phillip J. Riggan

    1994-01-01

    We propose a new model to predict the spatial and temporal behavior of wildfires. Fire spread and intensity were simulated using a cellular automaton model. Monte Carlo techniques were used to provide fire risk probabilities for areas where fuel loadings and topography are known. The model assumes predetermined or measurable environmental variables such as wind...
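    The snippet does not give the model's rule set; the following minimal cellular-automaton sketch, under assumed rules (von Neumann neighborhood, uniform spread probability, central ignition), illustrates how per-cell Monte Carlo burn probabilities of the kind described can be derived:

    ```python
    import random

    def spread_fire(grid, p_spread, steps, rng):
        """One realization: cell states 0 = unburnable, 1 = fuel, 2 = burning, 3 = burnt."""
        rows, cols = len(grid), len(grid[0])
        for _ in range(steps):
            ignitions = []
            for r in range(rows):
                for c in range(cols):
                    if grid[r][c] == 2:
                        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                            rr, cc = r + dr, c + dc
                            if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == 1:
                                if rng.random() < p_spread:
                                    ignitions.append((rr, cc))
            for r in range(rows):
                for c in range(cols):
                    if grid[r][c] == 2:
                        grid[r][c] = 3          # burning cells burn out
            for rr, cc in ignitions:
                grid[rr][cc] = 2                # newly ignited cells
        return grid

    def burn_probability(r, c, n_runs, p_spread, size, steps, seed=0):
        """Fraction of Monte Carlo runs in which cell (r, c) burns."""
        rng = random.Random(seed)
        burnt = 0
        for _ in range(n_runs):
            grid = [[1] * size for _ in range(size)]
            grid[size // 2][size // 2] = 2      # central ignition
            g = spread_fire(grid, p_spread, steps, rng)
            if g[r][c] in (2, 3):
                burnt += 1
        return burnt / n_runs

    print(burn_probability(0, 0, n_runs=200, p_spread=0.5, size=9, steps=20))
    ```

    In a real application the spread probability would vary per cell with fuel loading, topography, and wind, which is where the measured environmental variables enter.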

  14. 14 CFR 27.497 - Ground loading conditions: landing gear with tail wheels.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... be resisted by angular inertia forces. (c) Level landing attitude with all wheels contacting the... gravity; or (2) The probability of landing with initial contact on the rear wheel must be shown to be... paragraph (f)(1) of this section must be applied— (i) At the ground contact point with the wheel in the...

  15. 14 CFR 29.497 - Ground loading conditions: landing gear with tail wheels.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... be resisted by angular inertia forces. (c) Level landing attitude with all wheels contacting the... gravity; or (2) The probability of landing with initial contact on the rear wheel must be shown to be... paragraph (f)(1) of this section must be applied— (i) At the ground contact point with the wheel in the...

  16. 14 CFR 29.497 - Ground loading conditions: landing gear with tail wheels.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... be resisted by angular inertia forces. (c) Level landing attitude with all wheels contacting the... gravity; or (2) The probability of landing with initial contact on the rear wheel must be shown to be... paragraph (f)(1) of this section must be applied— (i) At the ground contact point with the wheel in the...

  17. 14 CFR 27.497 - Ground loading conditions: landing gear with tail wheels.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... be resisted by angular inertia forces. (c) Level landing attitude with all wheels contacting the... gravity; or (2) The probability of landing with initial contact on the rear wheel must be shown to be... paragraph (f)(1) of this section must be applied— (i) At the ground contact point with the wheel in the...

  18. 14 CFR 29.497 - Ground loading conditions: landing gear with tail wheels.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... be resisted by angular inertia forces. (c) Level landing attitude with all wheels contacting the... gravity; or (2) The probability of landing with initial contact on the rear wheel must be shown to be... paragraph (f)(1) of this section must be applied— (i) At the ground contact point with the wheel in the...

  19. 14 CFR 29.497 - Ground loading conditions: landing gear with tail wheels.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... be resisted by angular inertia forces. (c) Level landing attitude with all wheels contacting the... gravity; or (2) The probability of landing with initial contact on the rear wheel must be shown to be... paragraph (f)(1) of this section must be applied— (i) At the ground contact point with the wheel in the...

  20. 14 CFR 27.497 - Ground loading conditions: landing gear with tail wheels.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... be resisted by angular inertia forces. (c) Level landing attitude with all wheels contacting the... gravity; or (2) The probability of landing with initial contact on the rear wheel must be shown to be... paragraph (f)(1) of this section must be applied— (i) At the ground contact point with the wheel in the...

  1. 14 CFR 27.497 - Ground loading conditions: landing gear with tail wheels.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... be resisted by angular inertia forces. (c) Level landing attitude with all wheels contacting the... gravity; or (2) The probability of landing with initial contact on the rear wheel must be shown to be... paragraph (f)(1) of this section must be applied— (i) At the ground contact point with the wheel in the...

  2. Nutrient storage rates in a natural marsh receiving waste water

    Treesearch

    J.A. Nyman

    2000-01-01

    Artificial wetlands are commonly used to improve water quality in rivers and the coastal zone. In most wetlands associated with rivers, denitrification is probably the primary process that reduces nutrient loading. Where rivers meet oceans, however, significant amounts of nutrients might be permanently buried in wetlands because of global sea-level rise and regional...

  3. Self-organization of critical behavior in controlled general queueing models

    NASA Astrophysics Data System (ADS)

    Blanchard, Ph.; Hongler, M.-O.

    2004-03-01

    We consider general queueing models of the (G/G/1) type with service times controlled by the busy period. For feedback control mechanisms driving the system to very high traffic load, it is shown that the busy-period probability density exhibits a generic -3/2 power law, which is a typical mean-field behavior of SOC models.
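    The -3/2 exponent is the first-return-time exponent of a critical random walk. A Monte Carlo sketch for an M/M/1 queue (a special case of G/G/1; parameters are illustrative, not from the paper) shows the heavy busy-period tail emerging as the load approaches criticality:

    ```python
    import random

    def busy_period_lengths(lam, mu, n_periods, seed=0):
        """Simulate M/M/1 busy periods; return the number of customers served in each.
        Near critical load (lam -> mu) the length distribution develops the heavy
        ~n^(-3/2) tail characteristic of a random walk's first return time."""
        rng = random.Random(seed)
        lengths = []
        for _ in range(n_periods):
            served, in_system = 0, 1            # busy period starts with one arrival
            while in_system > 0:
                # next event: arrival w.p. lam/(lam+mu), departure w.p. mu/(lam+mu)
                if rng.random() < lam / (lam + mu):
                    in_system += 1
                else:
                    in_system -= 1
                    served += 1
                if served > 10_000:             # guard against near-critical excursions
                    break
            lengths.append(served)
        return lengths

    near_critical = busy_period_lengths(lam=0.95, mu=1.0, n_periods=2000)
    light_load = busy_period_lengths(lam=0.30, mu=1.0, n_periods=2000)
    # Heavy tail: long busy periods are far more common near critical load.
    frac_long_nc = sum(l > 50 for l in near_critical) / len(near_critical)
    frac_long_ll = sum(l > 50 for l in light_load) / len(light_load)
    print(frac_long_nc, frac_long_ll)
    ```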

  4. Molecular-Level Study of the Effect of Prior Axial Compression/Torsion on the Axial-Tensile Strength of PPTA Fibers

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Yavari, R.; Ramaswami, S.; Snipes, J. S.; Yen, C.-F.; Cheeseman, B. A.

    2013-11-01

    A comprehensive all-atom molecular-level computational investigation is carried out in order to identify and quantify: (i) the effect of prior longitudinal-compressive or axial-torsional loading on the longitudinal-tensile behavior of p-phenylene terephthalamide (PPTA) fibrils/fibers; and (ii) the role various microstructural/topological defects play in affecting this behavior. Experimental and computational results available in the relevant open literature were utilized to construct various defects within the molecular-level model and to assign the concentration to these defects consistent with the values generally encountered under "prototypical" PPTA-polymer synthesis and fiber fabrication conditions. When quantifying the effect of the prior longitudinal-compressive/axial-torsional loading on the longitudinal-tensile behavior of PPTA fibrils, the stochastic nature of the size/potency of these defects was taken into account. The results obtained revealed that: (a) due to the stochastic nature of the defect type, concentration/number density and size/potency, the PPTA fibril/fiber longitudinal-tensile strength is a statistical quantity possessing a characteristic probability density function; (b) application of the prior axial compression or axial torsion to the PPTA imperfect single-crystalline fibrils degrades their longitudinal-tensile strength and only slightly modifies the associated probability density function; and (c) introduction of the fibril/fiber interfaces into the computational analyses showed that prior axial torsion can induce major changes in the material microstructure, causing significant reductions in the PPTA-fiber longitudinal-tensile strength and appreciable changes in the associated probability density function.

  5. Earthquake Prediction in Large-scale Faulting Experiments

    NASA Astrophysics Data System (ADS)

    Junger, J.; Kilgore, B.; Beeler, N.; Dieterich, J.

    2004-12-01

    We study repeated earthquake slip of a 2 m long laboratory granite fault surface with approximately homogeneous frictional properties. In this apparatus earthquakes follow a period of controlled, constant-rate shear stress increase, analogous to tectonic loading. Slip initiates and accumulates within a limited area of the fault surface while the surrounding fault remains locked. Dynamic rupture propagation and slip of the entire fault surface is induced when slip in the nucleating zone becomes sufficiently large. We report on the event-to-event reproducibility of loading time (recurrence interval), failure stress, stress drop, and precursory activity. We tentatively interpret these variations as indications of the intrinsic variability of small earthquake occurrence and source physics in this controlled setting. We use the results to produce measures of earthquake predictability based on the probability density of repeating occurrence and the reproducibility of near-field precursory strain. At 4 MPa normal stress and a loading rate of 0.0001 MPa/s, the loading time is ˜25 min, with a coefficient of variation of around 10%. Static stress drop has a similar variability, which results almost entirely from variability of the final (rather than initial) stress. Thus, the initial stress has low variability and event times are slip-predictable. The variability of loading time to failure is comparable to the lowest variability of recurrence time of small repeating earthquakes at Parkfield (Nadeau et al., 1998), and our result may be a good estimate of the intrinsic variability of recurrence. Distributions of loading time can be adequately represented by a log-normal or Weibull distribution, but long-term prediction of the next event time based on a probabilistic representation of previous occurrence is not dramatically better than for field-observed small- or large-magnitude earthquake datasets.
The gradually accelerating precursory aseismic slip observed in the region of nucleation in these experiments is consistent with the observations and theory of Dieterich and Kilgore (1996). Precursory strains can typically be detected after 50% of the total loading time. The Dieterich and Kilgore approach implies an alternative method of earthquake prediction based on comparing real-time strain monitoring with previous precursory strain records or with physically-based models of accelerating slip. Near failure, time to failure t is approximately inversely proportional to precursory slip rate V. Based on a least squares fit to accelerating slip velocity from ten or more events, the standard deviation of the residual between predicted and observed log t is typically 0.14. Scaling these results to natural recurrence suggests that a year prior to an earthquake, failure time can be predicted from measured fault slip rate with a typical error of 140 days, and a day prior to the earthquake with a typical error of 9 hours. However, such predictions require detecting aseismic nucleating strains, which have not yet been found in the field, and distinguishing earthquake precursors from other strain transients. There is some field evidence of precursory seismic strain for large earthquakes (Bufe and Varnes, 1993) which may be related to our observations. In instances where precursory activity is spatially variable during the interseismic period, as in our experiments, distinguishing precursory activity might be best accomplished with deep arrays of near-fault instruments and pattern recognition algorithms such as principal component analysis (Rundle et al., 2000).
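    The near-failure relation t ∝ 1/V can be recovered with a least-squares fit of the kind described. The sketch below uses synthetic (hypothetical) slip-rate data in arbitrary units, not the experimental records:

    ```python
    import math

    def fit_loglog(times_to_failure, slip_rates):
        """Least-squares fit of log10(t) = a + b*log10(V); inverse proportionality gives b ~ -1."""
        xs = [math.log10(v) for v in slip_rates]
        ys = [math.log10(t) for t in times_to_failure]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
        a = my - b * mx
        return a, b

    def predict_t(a, b, slip_rate):
        """Predicted time to failure for a measured slip rate."""
        return 10 ** (a + b * math.log10(slip_rate))

    # Synthetic example: t = 1/V exactly (units arbitrary, data hypothetical)
    V = [0.001, 0.01, 0.1, 1.0]
    t = [1000.0, 100.0, 10.0, 1.0]
    a, b = fit_loglog(t, V)
    print(round(b, 3))                  # slope near -1, i.e. t inversely proportional to V
    print(round(predict_t(a, b, 0.5), 3))
    ```

    With real data the scatter of the residual in log t (typically 0.14 here) sets the prediction error quoted in the abstract.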

  6. [Rapid startup and nitrogen removal characteristic of anaerobic ammonium oxidation reactor in packed bed biofilm reactor with suspended carrier].

    PubMed

    Chen, Sheng; Sun, De-zhi; Yu, Guang-lu

    2010-03-01

    A packed bed biofilm reactor with suspended carriers was used to cultivate ANAMMOX bacteria, inoculated with sludge from a WWTP secondary settler. The startup of the ANAMMOX reactor was comparatively studied using a high nitrogen loading method and a low nitrogen loading method with biofilm aerobically established on the carriers, and the nitrogen removal characteristics were further investigated. The results showed that the reactor could be started up successfully within 90 days using the low nitrogen loading method: the removal efficiencies of ammonium and nitrite were nearly 100% and the TN removal efficiency was over 75%. The high nitrogen loading method, however, proved unsuccessful for startup of the ANAMMOX reactor, probably because of the inhibitory effect of high concentrations of ammonium and nitrite. The pH value of the effluent was slightly higher than that of the influent, and the pH value can be used as an indicator of the progress of the ANAMMOX reaction. The packed bed ANAMMOX reactor with suspended carriers showed good characteristics of high nitrogen loading and high removal efficiency; 100% removal efficiency could be achieved when the influent ammonium and nitrite concentrations were lower than 800 mg/L.

  7. Associations Between Socioeconomic Status and Allostatic Load: Effects of Neighborhood Poverty and Tests of Mediating Pathways

    PubMed Central

    Mentz, Graciela; Lachance, Laurie; Johnson, Jonetta; Gaines, Causandra; Israel, Barbara A.

    2012-01-01

    Objectives. We examined relationships between neighborhood poverty and allostatic load in a low- to moderate-income multiracial urban community. We tested the hypothesis that neighborhood poverty is associated with allostatic load, controlling for household poverty. We also examined the hypotheses that this association was mediated by psychosocial stress and health-related behaviors. Methods. We conducted multilevel analyses using cross-sectional data from a probability sample survey in Detroit, Michigan (n = 919) and the 2000 US Census. The outcome measure was allostatic load. Independent variables included neighborhood and household poverty, psychosocial stress, and health-related behaviors. Covariates included neighborhood and individual demographic characteristics. Results. Neighborhood poverty was positively associated with allostatic load (P < .05), independent of household poverty and controlling for potential confounders. Relationships between neighborhood poverty and allostatic load were mediated by self-reported neighborhood environment stress but not by health-related behaviors. Conclusions. Neighborhood poverty is associated with wear and tear on physiological systems, and this relationship is mediated through psychosocial stress. These relationships are evident after accounting for household poverty levels. Efforts to promote health equity should focus on neighborhood poverty, associated stressful environmental conditions, and household poverty. PMID:22873478

  8. Quantification of Nonproteolytic Clostridium botulinum Spore Loads in Food Materials.

    PubMed

    Barker, Gary C; Malakar, Pradeep K; Plowman, June; Peck, Michael W

    2016-01-04

    We have produced data and developed an analysis to build representations for the concentration of spores of nonproteolytic Clostridium botulinum in materials that are used during the manufacture of minimally processed chilled foods in the United Kingdom. Food materials are categorized into homogeneous groups which include meat, fish, shellfish, cereals, fresh plant material, dairy liquid, dairy nonliquid, mushroom and fungi, and dried herbs and spices. Models are constructed in a Bayesian framework and represent a combination of information from a literature survey of spore loads, from positive-control experiments that establish a detection limit, and from dedicated microbiological tests for real food materials. The detection of nonproteolytic C. botulinum employed an optimized protocol that combines selective enrichment culture with multiplex PCR, and the majority of tests on food materials were negative. Posterior beliefs about spore loads center on a concentration range of 1 to 10 spores kg⁻¹. Posterior beliefs for larger spore loads were most significant for dried herbs and spices and were most sensitive to the detailed results from control experiments. Probability distributions for spore loads are represented in a convenient form that can be used for numerical analysis and risk assessments. Copyright © 2016 Barker et al.
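    The paper's Bayesian models are not reproduced here; a simplified grid-posterior sketch, assuming a Poisson presence/absence likelihood, a flat prior, and hypothetical test data (all assumptions), illustrates how a run of mostly negative tests concentrates posterior belief on low spore concentrations:

    ```python
    import math

    def posterior_grid(masses_kg, positives, lam_grid, prior):
        """Grid-based posterior for spore concentration lam (spores/kg).
        Per-test likelihood for sample mass m: P(positive) = 1 - exp(-lam*m),
        i.e. Poisson presence/absence with perfect detection (an assumption)."""
        post = []
        for lam, p in zip(lam_grid, prior):
            like = 1.0
            for m, pos in zip(masses_kg, positives):
                p_pos = 1.0 - math.exp(-lam * m)
                like *= p_pos if pos else (1.0 - p_pos)
            post.append(p * like)
        z = sum(post)
        return [x / z for x in post]

    # Hypothetical survey: 30 tests of 0.05 kg each, all negative
    masses = [0.05] * 30
    results = [False] * 30
    grid = [0.1 * i for i in range(1, 401)]        # candidate concentrations, 0.1-40 spores/kg
    prior = [1.0 / len(grid)] * len(grid)          # flat prior over the grid
    post = posterior_grid(masses, results, grid, prior)
    mean_lam = sum(l * p for l, p in zip(grid, post))
    print(round(mean_lam, 2))                      # posterior mean concentration, spores/kg
    ```

    As in the paper, a detection limit (here, the total tested mass) controls how sharply negative results can bound the spore load.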

  9. Quantification of Nonproteolytic Clostridium botulinum Spore Loads in Food Materials

    PubMed Central

    Barker, Gary C.; Malakar, Pradeep K.; Plowman, June

    2016-01-01

    We have produced data and developed an analysis to build representations for the concentration of spores of nonproteolytic Clostridium botulinum in materials that are used during the manufacture of minimally processed chilled foods in the United Kingdom. Food materials are categorized into homogeneous groups which include meat, fish, shellfish, cereals, fresh plant material, dairy liquid, dairy nonliquid, mushroom and fungi, and dried herbs and spices. Models are constructed in a Bayesian framework and represent a combination of information from a literature survey of spore loads, from positive-control experiments that establish a detection limit, and from dedicated microbiological tests for real food materials. The detection of nonproteolytic C. botulinum employed an optimized protocol that combines selective enrichment culture with multiplex PCR, and the majority of tests on food materials were negative. Posterior beliefs about spore loads center on a concentration range of 1 to 10 spores kg−1. Posterior beliefs for larger spore loads were most significant for dried herbs and spices and were most sensitive to the detailed results from control experiments. Probability distributions for spore loads are represented in a convenient form that can be used for numerical analysis and risk assessments. PMID:26729721

  10. Associations between socioeconomic status and allostatic load: effects of neighborhood poverty and tests of mediating pathways.

    PubMed

    Schulz, Amy J; Mentz, Graciela; Lachance, Laurie; Johnson, Jonetta; Gaines, Causandra; Israel, Barbara A

    2012-09-01

    We examined relationships between neighborhood poverty and allostatic load in a low- to moderate-income multiracial urban community. We tested the hypothesis that neighborhood poverty is associated with allostatic load, controlling for household poverty. We also examined the hypotheses that this association was mediated by psychosocial stress and health-related behaviors. We conducted multilevel analyses using cross-sectional data from a probability sample survey in Detroit, Michigan (n = 919) and the 2000 US Census. The outcome measure was allostatic load. Independent variables included neighborhood and household poverty, psychosocial stress, and health-related behaviors. Covariates included neighborhood and individual demographic characteristics. Neighborhood poverty was positively associated with allostatic load (P < .05), independent of household poverty and controlling for potential confounders. Relationships between neighborhood poverty and allostatic load were mediated by self-reported neighborhood environment stress but not by health-related behaviors. Neighborhood poverty is associated with wear and tear on physiological systems, and this relationship is mediated through psychosocial stress. These relationships are evident after accounting for household poverty levels. Efforts to promote health equity should focus on neighborhood poverty, associated stressful environmental conditions, and household poverty.

  11. [Comparison of two algorithms for development of design space-overlapping method and probability-based method].

    PubMed

    Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu

    2018-05-01

    In this work, two algorithms for design space calculation (the overlapping method and the probability-based method) were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 simulations and a calculation step length of 0.02 led to a satisfactory design space. In general, the overlapping method is easy to understand and can be realized by several kinds of commercial software without coding, but it does not indicate the reliability of the process evaluation indexes when operating within the design space. The probability-based method is computationally more complex, but it quantifies the reliability with which the process indexes reach the standard at the acceptable probability threshold. In addition, there is no abrupt change of probability at the edge of the design space with the probability-based method. Therefore, the probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
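    A minimal sketch of the probability-based method, with an assumed quadratic response model and acceptance limit (neither is from the paper): simulate experimental error at each grid point of the coded factors and keep the points whose pass probability exceeds the acceptable threshold:

    ```python
    import random

    def reach_probability(x, y, n_sim, rng):
        """Probability that a hypothetical process index meets its standard at
        operating point (x, y), with simulated experimental error.
        The response model, error level, and limit are illustrative assumptions."""
        passed = 0
        for _ in range(n_sim):
            yield_pred = 50 + 20 * x + 10 * y - 8 * x * x + rng.gauss(0, 3)
            if yield_pred >= 60:            # acceptance criterion (assumed)
                passed += 1
        return passed / n_sim

    def design_space(threshold=0.9, step=0.02, n_sim=1000, seed=0):
        """Grid points of the coded factors whose pass probability exceeds the threshold."""
        rng = random.Random(seed)
        pts = []
        steps = int(round(2 / step))
        for i in range(steps + 1):
            for j in range(steps + 1):
                x, y = -1 + i * step, -1 + j * step
                if reach_probability(x, y, n_sim, rng) >= threshold:
                    pts.append((x, y))
        return pts

    space = design_space(step=0.2, n_sim=500)   # coarse grid for speed
    print(len(space))
    ```

    Raising the threshold shrinks the design space smoothly, which is the "no abrupt probability change at the edge" property noted above.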

  12. Research on virtual network load balancing based on OpenFlow

    NASA Astrophysics Data System (ADS)

    Peng, Rong; Ding, Lei

    2017-08-01

    Networks based on OpenFlow technology separate the control module from the data forwarding module. Global deployment of a load balancing strategy through the network view of the control plane is fast and highly efficient. This paper proposes a Weighted Round-Robin scheduling algorithm for virtual networks and a server load balancing scheme based on OpenFlow. Both the load of the service nodes and the distribution algorithm for load balancing tasks are taken into account.
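    The abstract does not specify the exact algorithm; the "smooth" weighted round-robin interleaving (popularized by Nginx's upstream balancer) is one common realization, sketched here with hypothetical backends and weights:

    ```python
    def smooth_wrr(servers, n_picks):
        """Smooth weighted round-robin: each pick, add each server's weight to a
        running score, choose the maximum, then subtract the total weight from it.
        This spreads heavy servers evenly instead of scheduling them in bursts."""
        current = {name: 0 for name in servers}
        total = sum(servers.values())
        order = []
        for _ in range(n_picks):
            for name, w in servers.items():
                current[name] += w
            best = max(current, key=current.get)
            current[best] -= total
            order.append(best)
        return order

    # Hypothetical backends with capacity-based weights
    backends = {"s1": 5, "s2": 1, "s3": 1}
    seq = smooth_wrr(backends, 7)
    print(seq)  # s1 appears 5 times per 7-pick cycle, interleaved with s2 and s3
    ```

    In an OpenFlow setting, the controller would run a scheduler like this over its global view of server load and install the resulting forwarding rules on the switches.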

  13. Constraints on the mechanism of long-term, steady subsidence at Medicine Lake volcano, northern California, from GPS, leveling, and InSAR

    USGS Publications Warehouse

    Poland, Michael P.; Burgmann, Roland; Dzurisin, Daniel; Lisowski, Michael; Masterlark, Timothy; Owen, Susan; Fink, Jonathan

    2006-01-01

    Leveling surveys across Medicine Lake volcano (MLV) have documented subsidence that is centered on the summit caldera and decays symmetrically on the flanks of the edifice. Possible mechanisms for this deformation include fluid withdrawal from a subsurface reservoir, cooling/crystallization of subsurface magma, loading by the volcano and dense intrusions, and crustal thinning due to tectonic extension (Dzurisin et al., 1991 [Dzurisin, D., Donnelly-Nolan, J.M., Evans, J.R., Walter, S.R., 1991. Crustal subsidence, seismicity, and structure near Medicine Lake Volcano, California. Journal of Geophysical Research 96, 16,319-16,333.]; Dzurisin et al., 2002 [Dzurisin, D., Poland, M.P., Bürgmann, R., 2002. Steady subsidence of Medicine Lake Volcano, Northern California, revealed by repeated leveling surveys. Journal of Geophysical Research 107, 2372, doi:10.1029/2001JB000893.]). InSAR data that approximate vertical displacements are similar to the leveling results; however, vertical deformation data alone are not sufficient to distinguish between source mechanisms. Horizontal displacements from GPS were collected in the Mt. Shasta/MLV region in 1996, 1999, 2000, 2003, and 2004. These results suggest that the region is part of the western Oregon block that is rotating about an Euler pole in eastern Oregon. With this rotation removed, most sites in the network have negligible velocities except for those near MLV caldera. There, measured horizontal velocities are less than predicted from ∼10 km deep point and dislocation sources of volume loss based on the leveling data; therefore volumetric losses simulated by these sources are probably not causing the observed subsidence at MLV. This result demonstrates that elastic models of subsurface volume change can provide misleading results where additional geophysical and geological constraints are unavailable, or if only vertical deformation is known.
The deformation source must be capable of causing broad vertical deformation with comparatively smaller horizontal displacements. Thermoelastic contraction of a column of hot rock beneath the volcano cannot reproduce the observed ratio of vertical to horizontal surface displacements. Models of deformation due to loading by the volcano and dense intrusions can be made to fit the pattern of vertical displacements by assuming a weak upper crust beneath MLV, though the subsidence rates due to surface loading must be lower than the observed rates. Tectonic extension is almost certainly occurring based on fault orientations and focal mechanisms, but does not appear to be a major contributor to the observed deformation. We favor a model that combines several sources, including extension and loading of a hot, weak crust together with thermal contraction of a cooling mass of rock beneath MLV, all processes that are probably occurring there. Future microgravity surveys and the planned deployment of an array of continuous GPS stations as part of a Plate Boundary Observatory volcano cluster will help to refine this model.

  14. Discharge and sediment loads in the Boise River drainage basin, Idaho 1939-40

    USGS Publications Warehouse

    Love, S.K.; Benedict, Paul Charles

    1948-01-01

    The Boise River project is a highly developed agricultural area comprising some 520 square miles of valley and bench lands in southwestern Idaho. Water for irrigation is obtained from the Boise River and its tributaries which are regulated by storage in Arrow Rock and Deer Flat reservoirs. Distribution of water to the farms is effected by 27 principal canals and several small farm laterals which divert directly from the river. The New York Canal, which is the largest, not only supplies water to smaller canals and farm laterals, but also is used to fill Deer Flat Reservoir near Nampa from which water is furnished to farms in the lower valley. During the past 15 years maintenance costs in a number of those canals have increased due to deposition of sediment in them and in the river channel itself below the mouth of Moore Creek. Interest in determining the runoff and sediment loads from certain areas in the Boise River drainage basin led to an investigation by the Flood Control Coordinating Committee of the Department of Agriculture. Measurements of daily discharge and sediment loads were made by the Geological Survey at 13 stations in the drainage basin during the 18-month period ended June 30, 1940. The stations were on streams in areas having different kinds of vegetative cover and subjected to different kinds of land-use practice. Data obtained during the investigation furnish a basis for certain comparisons of runoff and sediment loads from several areas and for several periods of time. Runoff measured at stations on the Boise River near Twin Springs and on Moore Creek near Arrow Rock was smaller during 1939 than during 1940 and was below the average annual runoff for the period of available record. Runoff measured at the other stations on the project also was smaller during 1939 than during 1940 and probably did not exceed the average for the previous 25 years.
The sediment loads measured during the spring runoff in 1939 were smaller at most stations than those measured during the spring runoff in 1940. At those stations where the flow was not affected, or only slightly affected, by upstream diversions or by placer-mining operations, the largest sediment loads per unit of drainage area were measured in Grouse Creek during both 1939 and 1940, amounting to 3,460 and 2,490 tons per square mile, respectively, and the smallest loads per unit of drainage area were measured in Bannock Creek during 1939 and in the Boise River near Twin Springs during 1940, amounting to 14 and 83 tons per square mile, respectively. Size analyses of a large number of samples of suspended and deposited sediments give an indication of the origin of sediments carried past some of the stations. The analyses show that most of the sediment measured at the five stations in the Moore Creek drainage basin above Idaho City consisted largely of coarse material. They show, also, that the sediment measured at the station on Moore Creek above Thorn Creek consisted almost entirely of fine material during practically the entire period of the investigation. Most of the coarse material passing the stations above Idaho City probably was retained behind the dikes or in the pools usually formed by tailings from dredging operations in the placer-mining area below Idaho City, and much of the fine material measured at the station on Moore Creek above Thorn Creek probably was contributed by placer-mining activity. During the years when the spring runoff is greater than that measured during 1939 and 1940, it is probable that the dikes and pools will be less effective in retaining coarse sediments within the placered area.
Records of sediment loads measured in the New York Canal indicate that a negligible amount of sediment was deposited there during 1939, but that in 1940 from 10 to 15 percent of the total load at the gaging station consisted of coarse sediment which was later deposited on the canal bottom. Most of the fine material was doubtless carried through the canal and eventually deposited in diversion ditches and on farm land. Because the sediment carried past the station on Moore Creek above Thorn Creek consisted almost entirely of fine material, it is probable that a considerable part of the coarse sediment carried in the New York Canal during the 1940 spring runoff period was scoured from the large bed of deposited material in the Boise River above Diversion Dam, and that the remainder came from Grimes Creek. Arrow Rock Reservoir was not sluiced during the investigation, and it is therefore unlikely that any of the coarse sediment in the New York Canal came from the Boise River above Moore Creek during 1939 and 1940. The average dry weight of 71 samples of deposited sediments collected from several parts of the Boise River drainage basin is about 90 pounds per cubic foot. The average specific gravity of 77 samples of deposited sediments is 2.57.

  15. Load power device, system and method of load control and management employing load identification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yi; Luebke, Charles John; Schoepf, Thomas J.

    A load power device includes a power input, at least one power output for at least one load, a plurality of sensors structured to sense voltage and current at the at least one power output, and a processor. The processor provides: (a) load identification based upon the sensed voltage and current, and (b) load control and management based upon the load identification.
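    The patent abstract does not disclose how identification is performed; a toy sketch, with illustrative features (RMS values and power factor) and made-up thresholds, shows how a load might be classified from sensed voltage and current waveforms:

    ```python
    import math

    def rms(samples):
        """Root-mean-square of one cycle of samples."""
        return math.sqrt(sum(s * s for s in samples) / len(samples))

    def identify_load(voltage, current):
        """Toy load identification from sensed waveforms (one cycle each).
        The features and thresholds here are illustrative, not the patent's method."""
        v_rms, i_rms = rms(voltage), rms(current)
        real_power = sum(v * c for v, c in zip(voltage, current)) / len(voltage)
        apparent = v_rms * i_rms
        pf = real_power / apparent if apparent else 0.0
        if pf > 0.95:
            return "resistive"          # e.g. heater, incandescent lamp
        elif pf > 0.5:
            return "motor-like"         # inductive load
        return "electronic"             # low power factor

    # Synthetic 60 Hz cycle, 100 samples: in-phase current -> resistive load
    n = 100
    volts = [170 * math.sin(2 * math.pi * k / n) for k in range(n)]
    amps_in_phase = [5 * math.sin(2 * math.pi * k / n) for k in range(n)]
    amps_lagging = [5 * math.sin(2 * math.pi * k / n - math.pi / 4) for k in range(n)]
    print(identify_load(volts, amps_in_phase), identify_load(volts, amps_lagging))
    ```

    Once the load class is known, control and management policies (shedding priority, fault thresholds) can be selected per class.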

  16. A Bivariate return period for levee failure monitoring

    NASA Astrophysics Data System (ADS)

    Isola, M.; Caporali, E.

    2017-12-01

Levee breaches are strongly linked with the interaction processes among water, soil and structure, so many factors affect breach development. One of the main factors is the hydraulic load, characterized by intensity and duration, i.e. by the flood event hydrograph. Levee design is generally based on the magnitude of the hydraulic load, without considering fatigue failure due to load duration. Moreover, in many cases levee breaches are caused by floods of lower magnitude than the design flood. In order to improve flood risk management strategies, we built a procedure based on a multivariate statistical analysis of flood peak and volume, together with an analysis of past levee failure events. In particular, to define the probability of occurrence of the hydraulic load on a levee, a bivariate copula model is used to obtain the joint distribution of flood peak and volume. The flood peak expresses the magnitude of the load, while the volume expresses the stress over time. We consider the annual flood peak and the corresponding volume, given by the hydrograph area between the beginning and the end of the event. The beginning of the event is identified as an abrupt rise of the discharge by more than 20%; the end is identified as the point from which the receding limb is characterized by baseflow, using a nonlinear reservoir algorithm as the baseflow separation technique. With the aim of defining warning thresholds, we then compare the bivariate return period (BTr) of past levee failure events with the estimate from a traditional univariate model. The discharge data of 30 hydrometric stations on the Arno River in Tuscany, Italy, for the period 1995-2016 are analysed. A database of levee failure events, recording for each event the location as well as the failure mode, is also created.
The events were registered in the period 2000-2014 by the EEA (European Environment Agency), the Italian Civil Protection Department and ISPRA (the Italian National Institute for Environmental Protection and Research). Only two levee failure events, both in the sub-basin of the Era River, have been detected and analysed. The return periods estimated with the univariate flood-peak model are greater than 2 and 5 years, while the BTr values are greater than 25 and 30 years, respectively.
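
The abstract does not specify which copula family was fitted; as a minimal sketch assuming a Gumbel-Hougaard copula, the AND-type joint return period (peak and volume both exceeded) can be computed from the marginal non-exceedance probabilities. The dependence parameter `theta` and the example probabilities below are illustrative assumptions:

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v); theta >= 1 controls dependence
    (theta = 1 reduces to independence, C(u, v) = u * v)."""
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1.0 / theta)))

def bivariate_return_period(u, v, theta, mu=1.0):
    """AND-type joint return period: both peak and volume exceeded.

    u, v are the marginal non-exceedance probabilities F_Q(q), F_V(v);
    mu is the mean interarrival time in years (annual maxima: 1).
    P(Q > q, V > v) = 1 - u - v + C(u, v).
    """
    p_joint_exceed = 1.0 - u - v + gumbel_copula(u, v, theta)
    return mu / p_joint_exceed
```

For example, with independent margins (`theta = 1`) and both marginal probabilities at 0.9, the joint exceedance probability is 1 − 0.9 − 0.9 + 0.81 = 0.01, i.e. a 100-year joint event; stronger dependence (larger `theta`) makes joint exceedance more likely and shortens the joint return period.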

  17. Use of Artificial Neural Networks to Examine Parameters Affecting the Immobilization of Streptokinase in Chitosan

    PubMed Central

    Modaresi, Seyed Mohamad Sadegh; Faramarzi, Mohammad Ali; Soltani, Arash; Baharifar, Hadi; Amani, Amir

    2014-01-01

    Streptokinase is a potent fibrinolytic agent which is widely used in treatment of deep vein thrombosis (DVT), pulmonary embolism (PE) and acute myocardial infarction (MI). Major limitation of this enzyme is its short biological half-life in the blood stream. Our previous report showed that complexing streptokinase with chitosan could be a solution to overcome this limitation. The aim of this research was to establish an artificial neural networks (ANNs) model for identifying main factors influencing the loading efficiency of streptokinase, as an essential parameter determining efficacy of the enzyme. Three variables, namely, chitosan concentration, buffer pH and enzyme concentration were considered as input values and the loading efficiency was used as output. Subsequently, the experimental data were modeled and the model was validated against a set of unseen data. The developed model indicated chitosan concentration as probably the most important factor, having reverse effect on the loading efficiency. PMID:25587327
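
The paper's network architecture and data are not given here; as a deliberately tiny, hypothetical stand-in, the sketch below fits a single linear neuron by batch gradient descent to synthetic data in which efficiency falls with chitosan concentration, then reads input direction and relative importance off the learned weights. The synthetic coefficients (0.8, −0.5, 0.1, 0.05) are assumptions for illustration only:

```python
import itertools

# Synthetic (hypothetical) relationship: loading efficiency falls with
# chitosan concentration and rises weakly with pH and enzyme concentration.
def true_efficiency(chitosan, ph, enzyme):
    return 0.8 - 0.5 * chitosan + 0.1 * ph + 0.05 * enzyme

# A 3x3x3 grid of normalized inputs in [0, 1] serves as the training set.
grid = [x / 2 for x in range(3)]
data = [((c, p, e), true_efficiency(c, p, e))
        for c, p, e in itertools.product(grid, repeat=3)]

# A single linear neuron trained by full-batch gradient descent on MSE
# (a minimal stand-in for the paper's multilayer network).
w, b = [0.0, 0.0, 0.0], 0.0
lr, n = 0.3, len(data)
for _ in range(3000):
    gw, gb = [0.0, 0.0, 0.0], 0.0
    for x, y in data:
        err = sum(wi * xi for wi, xi in zip(w, x)) + b - y
        for j in range(3):
            gw[j] += err * x[j]
        gb += err
    for j in range(3):
        w[j] -= lr * gw[j] / n
    b -= lr * gb / n

# The weight signs and magnitudes give a crude importance ranking:
# w[0] (chitosan) comes out negative and largest in magnitude.
```

With a nonlinear network one would instead rank inputs by perturbation sensitivity, but the idea is the same: the model exposes both the direction and the relative strength of each factor's effect.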

  18. Modes of sediment transport in channelized water flows with ramifications to the erosion of the Martian outflow channels

    NASA Technical Reports Server (NTRS)

    Komar, P. D.

    1980-01-01

The paper discusses the application to Martian water flows of the criteria that determine which grain-size ranges are transported as bed load, suspension, and wash load. The results show that nearly all sand-sized and finer material would have been transported as wash load, and that basalt pebbles and even cobbles could have been transported at rapid rates in suspension. An analysis of the threshold of sediment motion on Mars further indicates that the flows would have been highly competent, the larger flows having been able to transport boulder-sized material. Comparisons with terrestrial rivers that carry hyperconcentrated sediment loads suggest that the Martian water flows could have achieved sediment concentrations of up to 70% by weight. Although it is possible that the flows could have picked up enough sediment to convert to pseudolaminar mud flows, they probably remained at hyperconcentration levels and fully turbulent in flow character.
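
One common way to express this kind of transport-mode criterion is the Rouse number, the ratio of a grain's settling velocity to the shear velocity scaled by the von Kármán constant. The sketch below uses frequently quoted approximate thresholds, not Komar's specific criteria, so the band edges should be read as assumptions:

```python
KARMAN = 0.41  # von Kármán constant

def rouse_number(settling_velocity, shear_velocity):
    """Ro = w_s / (kappa * u*); smaller Ro means easier suspension.
    Velocities in consistent units, e.g. m/s."""
    return settling_velocity / (KARMAN * shear_velocity)

def transport_mode(settling_velocity, shear_velocity):
    """Classify by commonly quoted, approximate Rouse-number bands."""
    ro = rouse_number(settling_velocity, shear_velocity)
    if ro > 2.5:
        return "bed load"
    if ro > 0.8:
        return "suspended load"
    return "wash load"
```

Mars's lower gravity reduces settling velocities relative to Earth for the same grain, which lowers the Rouse number and shifts a given grain size toward suspension or wash load, consistent with the paper's conclusion that nearly all sand and finer material would travel as wash load.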

  19. [Hereditary fructose intolerance].

    PubMed

    Rumping, Lynne; Waterham, Hans R; Kok, Irene; van Hasselt, Peter M; Visser, Gepke

    2014-01-01

Hereditary fructose intolerance (HFI) is a rare metabolic disease affecting fructose metabolism. After ingestion of fructose, patients may present with clinical symptoms varying from nonspecific gastrointestinal symptoms to life-threatening hypoglycaemia and hepatic failure. A 13-year-old boy was referred to the department of metabolic diseases because of an abnormal fructose loading test. He had had persistent gastrointestinal symptoms since infancy, and his dietary history revealed an avoidance of fruit and sweets. Because malabsorption was suspected, an oral fructose loading test was performed. During this test, he developed severe vagal symptoms, probably caused by a potentially fatal hypoglycaemia. The diagnosis of HFI was confirmed by genetic analysis. A good dietary history may be an important aid in the diagnosis of HFI. On suspicion of HFI, genetic analysis is straightforward and the first choice in the diagnostic work-up. With timely diagnosis and adequate dietary treatment, patients have an excellent prognosis. Fructose loading tests as part of the diagnostic work-up can be dangerous.

  20. Evidence for apolipoprotein E ε4 association in early-onset Alzheimer's patients with late-onset relatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez-Tur, J.; Delacourte, A.; Chartier-Harlin, M.C.

    1995-12-18

    Recently several reports have extended the apolipoprotein E (APOE) ε4 association found in late-onset Alzheimer's disease (LOAD) patients to early-onset (EO) AD patients. We have studied this question in a large population of 119 EOAD patients (onset ≤60 years) in which family history was carefully assessed, and in 109 controls. We show that the APOE ε4 allele frequency is increased only in the subset of patients who belong to families where LOAD secondary cases are present. Our sampling scheme permits us to demonstrate that, for an individual, bearing at least one ε4 allele increases both the risk of AD before age 60 and the probability of belonging to a family with late-onset affected subjects. Our results suggest that a subset of EOAD cases shares a common determinism with LOAD cases. 19 refs., 3 tabs.
