Science.gov

Sample records for flooring sensitivity uncertainty

  1. Predicting Residential Exposure to Phthalate Plasticizer Emitted from Vinyl Flooring: Sensitivity, Uncertainty, and Implications for Biomonitoring

    PubMed Central

    Xu, Ying; Cohen Hubal, Elaine A.; Little, John C.

    2010-01-01

    Background: Because of the ubiquitous nature of phthalates in the environment and the potential for adverse human health effects, an urgent need exists to identify the most important sources and pathways of exposure. Objectives: Using emissions of di(2-ethylhexyl) phthalate (DEHP) from vinyl flooring (VF) as an illustrative example, we describe a fundamental approach that can be used to identify the important sources and pathways of exposure associated with phthalates in indoor material. Methods: We used a three-compartment model to estimate the emission rate of DEHP from VF and the evolving exposures via inhalation, dermal absorption, and oral ingestion of dust in a realistic indoor setting. Results: A sensitivity analysis indicates that the VF source characteristics (surface area and material-phase concentration of DEHP), as well as the external mass-transfer coefficient and ventilation rate, are important variables that influence the steady-state DEHP concentration and the resulting exposure. In addition, DEHP is sorbed by interior surfaces, and the associated surface area and surface/air partition coefficients strongly influence the time to steady state. The roughly 40-fold range in predicted exposure reveals the inherent difficulty in using biomonitoring to identify specific sources of exposure to phthalates in the general population. Conclusions: The relatively simple dependence on source and chemical-specific transport parameters suggests that the mechanistic modeling approach could be extended to predict exposures arising from other sources of phthalates as well as additional sources of other semivolatile organic compounds (SVOCs) such as biocides and flame retardants. This modeling approach could also provide a relatively inexpensive way to quantify exposure to many of the SVOCs used in indoor materials and consumer products. PMID:20123613
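
    The dominance of the source terms, the mass-transfer coefficient, and the ventilation rate can be illustrated with a one-compartment steady-state mass balance, a drastic simplification of the paper's three-compartment model. The sketch below uses invented parameter values, not the study's inputs:

      # Minimal steady-state mass balance for an SVOC emitted from flooring.
      # Emission h*A*(y0 - y) balances ventilation removal Q*y at steady state,
      # giving y = h*A*y0 / (h*A + Q). All numbers are illustrative.

      def steady_state_conc(y0, h, A, Q):
          """Air concentration for equilibrium gas-phase concentration y0
          (ug/m^3), mass-transfer coefficient h (m/h), source area A (m^2),
          and ventilation rate Q (m^3/h)."""
          return h * A * y0 / (h * A + Q)

      base = dict(y0=1.0, h=1.5, A=30.0, Q=150.0)  # hypothetical room
      y_base = steady_state_conc(**base)

      # One-at-a-time sensitivity: relative response to a +10% input change.
      for name in base:
          bumped = dict(base, **{name: 1.1 * base[name]})
          rel = (steady_state_conc(**bumped) - y_base) / y_base
          print(f"{name}: {rel:+.1%} change in steady-state concentration")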

  2. Dark matter astrophysical uncertainties and the neutrino floor

    NASA Astrophysics Data System (ADS)

    O'Hare, Ciaran A. J.

    2016-09-01

    The search for weakly interacting massive particles (WIMPs) by direct detection faces an encroaching background due to coherent neutrino-nucleus scattering. For a given WIMP mass, the cross section at which neutrinos constitute a dominant background depends on the uncertainty in the flux of each neutrino source, principally the Sun, supernovae, or atmospheric cosmic-ray collisions. However, there are also considerable uncertainties in the astrophysical ingredients of the predicted WIMP signal. Uncertainties in the velocity of the Sun with respect to the Milky Way dark matter halo, the local density of WIMPs, and the shape of the local WIMP speed distribution all affect the expected event rate in direct detection experiments and hence change the region of the WIMP parameter space for which neutrinos are a significant background. In this work we extend the neutrino floor calculation to account for the uncertainty in the astrophysics dependence of the WIMP signal. We show the effect of these uncertainties on projected discovery limits, with an emphasis on low WIMP masses (less than 10 GeV), for which solar neutrino backgrounds are most important. We find that accounting for astrophysical uncertainties not only changes the shape of the neutrino floor as a function of WIMP mass but also causes it to appear at cross sections up to an order of magnitude larger, extremely close to existing experimental limits, indicating that neutrino backgrounds will become an issue sooner than previously thought. We also explore how neutrinos hinder the estimation of WIMP parameters and how astrophysical uncertainties impact the discrimination of WIMPs and neutrinos via their respective time dependencies.

  3. Uncertainty and Sensitivity Analyses Plan

    SciTech Connect

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.

  4. Dark matter vs. neutrinos: the effect of astrophysical uncertainties and timing information on the neutrino floor

    SciTech Connect

    Davis, Jonathan H.

    2015-03-09

    Future multi-tonne Direct Detection experiments will be sensitive to solar neutrino induced nuclear recoils, which form an irreducible background to light Dark Matter searches. Indeed, for masses around 6 GeV, the spectra of neutrinos and Dark Matter are so similar that experiments are said to run into a neutrino floor, for which sensitivity increases only marginally with exposure past a certain cross section. In this work we show that this floor can be overcome using the different annual modulation expected from solar neutrinos and Dark Matter. Specifically, for cross sections below the neutrino floor, the DM signal is observable through a phase shift and a smaller amplitude for the time-dependent event rate. This allows the exclusion power to be improved by up to an order of magnitude for large exposures. In addition, we demonstrate that, using only spectral information, the neutrino floor exists over a wider mass range than has been previously shown, since the large uncertainties in the Dark Matter velocity distribution make the signal spectrum harder to distinguish from the neutrino background. However, for most velocity distributions it can still be surpassed using timing information, and so the neutrino floor is not an absolute limit on the sensitivity of Direct Detection experiments.

  6. Sensitivity analysis of uncertainty in model prediction.

    PubMed

    Russi, Trent; Packard, Andrew; Feeley, Ryan; Frenklach, Michael

    2008-03-27

    Data Collaboration is a framework designed to make inferences from experimental observations in the context of an underlying model. In prior studies, the methodology was applied to prediction with chemical kinetics models, consistency of a reaction system, and discrimination among competing reaction models. The present work advances Data Collaboration by developing sensitivity analysis of uncertainty in model prediction with respect to uncertainty in experimental observations and model parameters. Evaluation of sensitivity coefficients is performed alongside the solution of the general optimization ansatz of Data Collaboration. The obtained sensitivity coefficients allow one to determine which experiment/parameter uncertainty contributes the most to the uncertainty in model prediction, rank such effects, consider new or even hypothetical experiments to perform, and combine the uncertainty analysis with the cost of uncertainty reduction, thereby providing guidance in selecting an experimental/theoretical strategy for community action.

  7. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau

    2011-09-01

    Verification and validation (V&V) are playing increasingly important roles in quantifying uncertainties and achieving high-fidelity simulations in engineering system analyses, such as transients in a complex nuclear reactor system. Traditional V&V in reactor system analysis focused more on the validation part or did not differentiate verification from validation. The traditional approach to uncertainty quantification is based on a 'black box' approach. The simulation tool is treated as an unknown signal generator: a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also inefficient for sensitivity analysis. In contrast to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these types of approaches, equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents forward sensitivity analysis as a method to support uncertainty quantification. By including the time step and potentially the spatial step as special sensitivity parameters, the forward sensitivity method is extended as one method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflect global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the sensitivity of time and space steps relative to other physical parameters of interest, the simulation is allowed
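
    The extension described here builds on standard forward sensitivity analysis, in which the derivative of the state with respect to a parameter is integrated alongside the state itself. A minimal sketch on a scalar ODE (a toy problem, not the reactor code):

      # Forward sensitivity for dy/dt = -p*y: the sensitivity s = dy/dp obeys
      # ds/dt = -p*s - y, obtained by differentiating the state equation.
      import numpy as np
      from scipy.integrate import solve_ivp

      p = 0.5

      def rhs(t, z):
          y, s = z
          return [-p * y, -p * s - y]

      sol = solve_ivp(rhs, (0.0, 4.0), [1.0, 0.0], rtol=1e-8)
      y_end, s_end = sol.y[:, -1]
      print(f"y(4) = {y_end:.6f}, dy/dp = {s_end:.6f}")
      # Analytic check: y = exp(-p*t) gives dy/dp = -t*exp(-p*t).
      print(f"analytic dy/dp = {-4.0 * np.exp(-p * 4.0):.6f}")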

  8. LCA data quality: sensitivity and uncertainty analysis.

    PubMed

    Guo, M; Murphy, R J

    2012-10-01

    Life cycle assessment (LCA) data quality issues were investigated by using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories, the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes and could introduce an inadvertent bias, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of robustness for LCAs, especially comparative assessments. This study also presents an approach to integrate statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this has enabled assigning confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. Uncertainty analysis combined with the sensitivity analysis carried out in this study has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions.

  9. Uncertainty Quantification of Equilibrium Climate Sensitivity

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Brandon, S. T.; Covey, C. C.; Domyancic, D. M.; Johannesson, G.; Klein, R.; Tannahill, J.; Zhang, Y.

    2011-12-01

    Significant uncertainties exist in the temperature response of the climate system to changes in the levels of atmospheric carbon dioxide. We report progress to quantify the uncertainties of equilibrium climate sensitivity using perturbed parameter ensembles of the Community Earth System Model (CESM). Through a strategic initiative at the Lawrence Livermore National Laboratory, we have been developing uncertainty quantification (UQ) methods and incorporating them into a software framework called the UQ Pipeline. We have applied this framework to generate a large number of ensemble simulations using Latin Hypercube and other schemes to sample up to three dozen uncertain parameters in the atmospheric (CAM) and sea ice (CICE) model components of CESM. The parameters sampled are related to many highly uncertain processes, including deep and shallow convection, boundary layer turbulence, cloud optical and microphysical properties, and sea ice albedo. An extensive ensemble database comprising more than 46,000 simulated climate-model-years of recent climate conditions has been assembled. This database is being used to train surrogate models of CESM responses and to perform statistical calibrations of the CAM and CICE models given observational data constraints. The calibrated models serve as a basis for propagating uncertainties forward through climate change simulations using a slab ocean model configuration of CESM. This procedure is being used to quantify the probability density function of equilibrium climate sensitivity accounting for uncertainties in climate model processes. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013. (LLNL-ABS-491765)
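
    As an illustration of the sampling step, a Latin Hypercube design over a handful of parameters can be drawn with a few lines of SciPy; the parameter names and ranges below are invented placeholders, not the actual CAM/CICE parameters or the UQ Pipeline itself:

      # Latin Hypercube sample of three hypothetical model parameters
      # (requires scipy >= 1.7 for scipy.stats.qmc).
      from scipy.stats import qmc

      params = {                                        # illustrative only
          "deep_convection_timescale": (1800.0, 28800.0),  # s
          "cloud_droplet_radius": (8.0, 14.0),             # um
          "sea_ice_albedo": (0.44, 0.60),
      }
      sampler = qmc.LatinHypercube(d=len(params), seed=0)
      unit = sampler.random(n=64)                       # 64 members in [0, 1)^3
      lows = [lo for lo, hi in params.values()]
      highs = [hi for lo, hi in params.values()]
      ensemble = qmc.scale(unit, lows, highs)           # map to physical ranges
      print(ensemble[:3])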

  10. Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses

    SciTech Connect

    Hansen, Clifford W.; Martin, Curtis E.

    2015-08-01

    We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of First Solar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found the uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
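
    The propagation scheme can be sketched generically: evaluate each model in the chain, then perturb its output with a draw from that model's empirical residual distribution. The model functions and residual arrays below are placeholders, not the report's fitted models:

      import numpy as np

      rng = np.random.default_rng(1)

      # (model, empirical residuals) stand-ins for steps in the chain.
      models = [
          (lambda x: 0.95 * x, rng.normal(0, 2.0, 500)),  # e.g. POA irradiance
          (lambda x: 0.90 * x, rng.normal(0, 1.0, 500)),  # e.g. effective irradiance
          (lambda x: 0.20 * x, rng.normal(0, 0.5, 500)),  # e.g. AC power
      ]

      def propagate(x0, n_samples=10_000):
          """Push an input through the chain, sampling each model's residuals."""
          x = np.full(n_samples, float(x0))
          for model, residuals in models:
              x = model(x) + rng.choice(residuals, size=n_samples)
          return x

      out = propagate(1000.0)
      print(f"mean = {out.mean():.1f}, relative std = {out.std() / out.mean():.2%}")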

  11. Uncertainty and Sensitivity in Surface Dynamics Modeling

    NASA Astrophysics Data System (ADS)

    Kettner, Albert J.; Syvitski, James P. M.

    2016-05-01

    The papers in this special issue on 'Uncertainty and Sensitivity in Surface Dynamics Modeling' stem from submissions following the 2014 annual meeting of the Community Surface Dynamics Modeling System (CSDMS). CSDMS facilitates a diverse community of experts (now in 68 countries) that collectively investigate the Earth's surface, the dynamic interface between lithosphere, hydrosphere, cryosphere, and atmosphere, by promoting, developing, supporting and disseminating integrated open source software modules. By organizing more than 1500 researchers, CSDMS has the privilege of identifying community strengths and weaknesses in the practice of software development. We recognize, for example, that progress has been slow on identifying and quantifying uncertainty and sensitivity in numerical modeling of Earth's surface dynamics. This special issue is meant to raise awareness of these important subjects and highlight state-of-the-art progress.

  12. Solar atmospheric neutrinos and the sensitivity floor for solar dark matter annihilation searches

    NASA Astrophysics Data System (ADS)

    Argüelles, C. A.; de Wasseige, G.; Fedynitch, A.; Jones, B. J. P.

    2017-07-01

    Cosmic rays interacting in the solar atmosphere produce showers that result in a flux of high-energy neutrinos from the Sun. These form an irreducible background to indirect solar WIMP self-annihilation searches, which look for heavy dark matter particles annihilating in the solar core into final states containing neutrinos. This background will eventually create a sensitivity floor for indirect WIMP self-annihilation searches analogous to that imposed by low-energy solar neutrino interactions for direct dark matter detection experiments. We present a new calculation of the flux of solar atmospheric neutrinos with a detailed treatment of systematic uncertainties inherent in solar atmospheric shower evolution, and we use this to derive the sensitivity floor for indirect solar WIMP annihilation analyses. We find that the floor lies less than one order of magnitude beyond the present experimental limits on spin-dependent WIMP-proton cross sections for some mass points, and that the high-energy solar atmospheric neutrino flux may be observable with running and future neutrino telescopes.

  13. Temperature targets revisited under climate sensitivity uncertainty

    NASA Astrophysics Data System (ADS)

    Neubersch, Delf; Roth, Robert; Held, Hermann

    2015-04-01

    While the 2° target has become an official goal of the COP (Conference of the Parties) process, recent work has shown that it requires re-interpretation if climate sensitivity uncertainty is considered in combination with anticipated future learning (Schmidt et al., 2011). A strict probabilistic limit as suggested by the Copenhagen diagnosis may lead to conceptual flaws in view of future learning, such as a negative expected value of information or even ill-posed policy recommendations. Instead, Schmidt et al. suggest trading off the probabilistic transgression of a temperature target against mitigation-induced welfare losses and call this procedure cost-risk analysis (CRA). Here we spell out CRA for the integrated assessment model MIND and derive necessary conditions for the exact nature of that trade-off. With CRA at hand, the expected value of climate information for a given temperature target can be meaningfully assessed for the first time. When focusing on a linear risk function as the most conservative of all possible risk functions, we find that 2° target-induced mitigation costs could be reduced by up to one third if the climate response to carbon dioxide emissions were known with certainty, amounting to hundreds of billions of Euros per year (Neubersch et al., 2014). Further benefits of CRA over strictly formulated temperature targets are discussed. References: D. Neubersch, H. Held, A. Otto, Operationalizing climate targets under learning: An application of cost-risk analysis, Climatic Change, 126 (3), 305-318, DOI 10.1007/s10584-014-1223-z (2014). M. G. W. Schmidt, A. Lorenz, H. Held, E. Kriegler, Climate Targets under Uncertainty: Challenges and Remedies, Climatic Change Letters, 104 (3-4), 783-791, DOI 10.1007/s10584-010-9985-4 (2011).

  14. Research on the attribution evaluating methods of dynamic effects of various parameter uncertainties on the in-structure floor response spectra of nuclear power plant

    NASA Astrophysics Data System (ADS)

    Li, Jianbo; Lin, Gao; Liu, Jun; Li, Zhiyuan

    2017-01-01

    Consideration of the dynamic effects of site and structural parameter uncertainty is required by the standards for nuclear power plants (NPPs) in most countries. The anti-seismic standards provide two basic methods to analyze parameter uncertainty. The first is to manually adjust the floor response spectra (FRS) values calculated by deterministic approaches. The second is to perform probabilistic statistical analysis of the FRS results on the basis of the Monte Carlo method. The two methods can only reflect the overall effects of the uncertain parameters, and the results cannot be screened for a given parameter's influence and contribution. In this study, based on dynamic analyses of the floor response spectra of NPPs, a comprehensive index for assessing the impact of various uncertain parameters is presented and recommended, comprising the correlation coefficient, the regression slope coefficient, and the Tornado swing. To compensate for the lack of guidance in the NPP seismic standards, the proposed method can effectively be used to evaluate the contributions of various parameters in terms of sensitivity, acuity, and statistical swing correlations. Finally, examples are provided to verify the set of indicators from systematic and intuitive perspectives, covering the uncertainty of the impact of the structural parameters and their contribution to the FRS of NPPs. The index is sensitive to different types of parameters, which provides a new technique for evaluating the anti-seismic parameters required for NPPs.
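
    The three indicators can be computed directly from sampled parameter-response pairs. A sketch with synthetic data (the NPP structural model itself is not reproduced; "swing" here follows the usual tornado-diagram convention of the fitted response range over a central percentile interval):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 2000
      # Synthetic uncertain parameters and a synthetic FRS peak response.
      soil_modulus = rng.normal(1.0, 0.15, n)
      damping = rng.normal(1.0, 0.05, n)
      frs_peak = 2.0 + 1.5 * soil_modulus - 0.4 * damping + rng.normal(0, 0.1, n)

      def indicators(x, y):
          r = np.corrcoef(x, y)[0, 1]        # correlation coefficient
          slope = np.polyfit(x, y, 1)[0]     # regression slope coefficient
          # Tornado swing: fitted response range over the 5th-95th percentiles of x.
          swing = slope * (np.percentile(x, 95) - np.percentile(x, 5))
          return r, slope, swing

      for name, x in [("soil_modulus", soil_modulus), ("damping", damping)]:
          print(name, "r=%.2f slope=%.2f swing=%.2f" % indicators(x, frs_peak))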

  15. Uncertainty and sensitivity analysis and its applications in OCD measurements

    NASA Astrophysics Data System (ADS)

    Vagos, Pedro; Hu, Jiangtao; Liu, Zhuan; Rabello, Silvio

    2009-03-01

    This article describes an Uncertainty & Sensitivity Analysis package, a mathematical tool that can be an effective time-shortcut for optimizing OCD models. By including real system noises in the model, an accurate method for predicting measurement uncertainties is shown. Assessing the uncertainties, sensitivities, and correlations of the parameters to be measured at an early stage guides the user in optimizing the OCD measurement strategy. Real examples are discussed, revealing common pitfalls such as hidden correlations, and simulation results are compared with real measurements. Special emphasis is given to two different cases: (1) the optimization of the data set of multi-head metrology tools (NI-OCD, SE-OCD), and (2) the optimization of the azimuth measurement angle in SE-OCD. With the uncertainty and sensitivity analysis results, the right data set and measurement mode (NI-OCD, SE-OCD, or NI+SE OCD) can be easily selected to achieve the best OCD model performance.

  16. SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data

    SciTech Connect

    Williams, Mark L; Rearden, Bradley T

    2008-01-01

    Computational methods and data used for sensitivity and uncertainty analysis within the SCALE nuclear analysis code system are presented. The methodology used to calculate sensitivity coefficients and similarity coefficients and to perform nuclear data adjustment is discussed. A description is provided of the SCALE-6 covariance library based on ENDF/B-VII and other nuclear data evaluations, supplemented by 'low-fidelity' approximate covariances. SCALE (Standardized Computer Analyses for Licensing Evaluation) is a modular code system developed by Oak Ridge National Laboratory (ORNL) to perform calculations for criticality safety, reactor physics, and radiation shielding applications. SCALE calculations typically use sequences that execute a predefined series of executable modules to compute particle fluxes and responses like the critical multiplication factor. SCALE also includes modules for sensitivity and uncertainty (S/U) analysis of calculated responses. The S/U codes in SCALE are collectively referred to as TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation). SCALE-6, scheduled for release in 2008, contains significant new capabilities, including important enhancements in S/U methods and data. The main functions of TSUNAMI are to (a) compute nuclear data sensitivity coefficients and response uncertainties, (b) establish similarity between benchmark experiments and design applications, and (c) reduce uncertainty in calculated responses by consolidating integral benchmark experiments. TSUNAMI includes easy-to-use graphical user interfaces for defining problem input and viewing three-dimensional (3D) geometries, as well as an integrated plotting package.
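
    At its core, this kind of S/U propagation rests on the first-order "sandwich rule": given a vector S of sensitivity coefficients of a response to nuclear data and the relative covariance matrix C of those data, the relative response variance is S^T C S. A toy sketch with invented numbers (not TSUNAMI itself):

      import numpy as np

      # Sensitivity coefficients of a response (e.g. k-eff) to three nuclear
      # data parameters, in % change per % change of data; invented values.
      S = np.array([0.45, -0.12, 0.08])

      # Relative covariance matrix of those data (invented, symmetric PSD).
      C = np.array([
          [0.0025, 0.0005, 0.0],
          [0.0005, 0.0100, 0.0],
          [0.0,    0.0,    0.0036],
      ])

      var_rel = S @ C @ S   # sandwich rule: S^T C S
      print(f"relative standard deviation of response = {np.sqrt(var_rel):.3%}")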

  17. Peer review of HEDR uncertainty and sensitivity analyses plan

    SciTech Connect

    Hoffman, F.O.

    1993-06-01

    This report consists of a detailed documentation of the writings and deliberations of the peer review panel that met on May 24-25, 1993 in Richland, Washington to evaluate the draft report "Uncertainty/Sensitivity Analysis Plan" (PNWD-2124 HEDR). The fact that uncertainties are being considered in temporally and spatially varying parameters through the use of alternative time histories and spatial patterns deserves special commendation. It is important to identify early those model components and parameters that will have the most influence on the magnitude and uncertainty of the dose estimates. These are the items that should be investigated most intensively prior to committing to a final set of results.

  18. Sensitivity of wildlife habitat models to uncertainties in GIS data

    NASA Technical Reports Server (NTRS)

    Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.

    1992-01-01

    Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.

  20. Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly

    NASA Astrophysics Data System (ADS)

    Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.

    2014-04-01

    We performed sensitivity and uncertainty analysis as well as a benchmark similarity assessment of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. Material composition was defined for each assembly ring separately, allowing us to decompose the sensitivities not only for isotopes and reactions but also for spatial regions. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. Similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified the main contributors to the calculation bias.

  1. SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data

    NASA Astrophysics Data System (ADS)

    Williams, M. L.; Rearden, B. T.

    2008-12-01

    Computational methods and data used for sensitivity and uncertainty analysis within the SCALE nuclear analysis code system are presented. The methodology used to calculate sensitivity coefficients and similarity coefficients and to perform nuclear data adjustment is discussed. A description is provided of the SCALE-6 covariance library based on ENDF/B-VII and other nuclear data evaluations, supplemented by "low-fidelity" approximate covariances.

  2. Uncertainty and sensitivity analysis for photovoltaic system modeling.

    SciTech Connect

    Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk

    2013-12-01

    We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of up to 5% of daily energy, which translates directly into a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and the effective irradiance models to be the dominant contributors to residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.

  3. Uncertainty and sensitivity assessment of flood risk assessments

    NASA Astrophysics Data System (ADS)

    de Moel, H.; Aerts, J. C.

    2009-12-01

    Floods are one of the most frequent and costly natural disasters. In order to protect human lives and valuable assets from the effects of floods, many defensive structures have been built. Despite these efforts, economic losses due to catastrophic flood events have risen substantially during the past couple of decades because of continuing economic development in flood-prone areas. On top of that, climate change is expected to affect the magnitude and frequency of flood events. Because these ongoing trends are expected to continue, a transition can be observed in various countries from a protective flood management approach to a more risk-based flood management approach. In a risk-based approach, flood risk assessments play an important role in supporting decision making. Most flood risk assessments express flood risk in monetary terms (damage estimated for specific situations or expected annual damage) in order to feed cost-benefit analyses of management measures. Such flood risk assessments contain, however, considerable uncertainties. This is the result of uncertainties in the many different input parameters propagating through the risk assessment and accumulating in the final estimate. Whilst common in some other disciplines, as with integrated assessment models, full uncertainty and sensitivity analyses of flood risk assessments are not so common. Various studies have addressed uncertainties in flood risk assessments, but have mainly focussed on the hydrological conditions. However, uncertainties in other components of the risk assessment, like the relation between water depth and monetary damage, can be substantial as well. This research therefore tries to assess the uncertainties of all components of monetary flood risk assessments, using a Monte Carlo based approach. Furthermore, the total uncertainty is also attributed to the different input parameters using a variance-based sensitivity analysis. Assessing and visualizing the
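
    A Monte Carlo scheme of the kind described can be sketched in a few lines: sample every input of the damage estimate, then attribute output variance to the inputs. The inputs, distributions, and the crude squared-correlation attribution below are illustrative placeholders for the study's variance-based analysis:

      import numpy as np

      rng = np.random.default_rng(42)
      n = 50_000

      # Illustrative uncertain inputs of a monetary flood damage estimate.
      depth = rng.lognormal(mean=0.0, sigma=0.3, size=n)      # flood depth (m)
      dd_slope = rng.uniform(0.15, 0.35, size=n)              # depth-damage curve
      exposed_value = rng.normal(2.0e6, 2.0e5, size=n)        # assets at risk

      damage = np.clip(dd_slope * depth, 0.0, 1.0) * exposed_value

      # Squared correlation approximates each input's first-order share of
      # output variance for near-linear models.
      for name, x in [("depth", depth), ("dd_slope", dd_slope),
                      ("exposed_value", exposed_value)]:
          print(f"{name}: ~{np.corrcoef(x, damage)[0, 1] ** 2:.0%} of variance")
      print(f"mean damage = {damage.mean():,.0f}")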

  4. Uncertainty and Sensitivity Analyses of Duct Propagation Models

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Watson, Willie R.; Jones, Michael G.

    2008-01-01

    This paper presents results of uncertainty and sensitivity analyses conducted to assess the relative merits of three duct propagation codes. Results from this study are intended to support identification of a "working envelope" within which to use the various approaches underlying these propagation codes. This investigation considers a segmented liner configuration that models the NASA Langley Grazing Incidence Tube, for which a large set of measured data was available. For the uncertainty analysis, the selected input parameters (source sound pressure level, average Mach number, liner impedance, exit impedance, static pressure and static temperature) are randomly varied over a range of values. Uncertainty limits (95% confidence levels) are computed for the predicted values from each code, and are compared with the corresponding 95% confidence intervals in the measured data. Generally, the mean values of the predicted attenuation are observed to track the mean values of the measured attenuation quite well, and predicted confidence intervals tend to be larger in the presence of mean flow. A two-level, six-factor sensitivity study is also conducted in which the six inputs are varied one at a time to assess their effect on the predicted attenuation. As expected, the results demonstrate the liner resistance and reactance to be the most important input parameters. They also indicate the exit impedance is a significant contributor to uncertainty in the predicted attenuation.

  5. Employing Sensitivity Derivatives to Estimate Uncertainty Propagation in CFD

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III

    2004-01-01

    Two methods that exploit the availability of sensitivity derivatives are successfully employed to predict uncertainty propagation through a Computational Fluid Dynamics (CFD) code for an inviscid airfoil problem. An approximate statistical second-moment method and a Sensitivity Derivative Enhanced Monte Carlo (SDEMC) method are successfully demonstrated on a two-dimensional problem. First- and second-order sensitivity derivatives of code output with respect to code input are obtained through an efficient incremental iterative approach. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), these sensitivity derivatives enable one to formulate first- and second-order Taylor series approximations for the mean and variance of CFD output quantities. Additionally, incorporation of the first-order sensitivity derivatives into the data reduction phase of a conventional Monte Carlo (MC) simulation allows for improved accuracy in determining the first moment of the CFD output. Both methods are compared to results generated using a conventional MC method. The methods that exploit the availability of sensitivity derivatives are found to be valid when considering small deviations from input mean values.
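
    The first-order moment method amounts to two formulas: for independent inputs with means mu_i and standard deviations sigma_i, E[f] ~ f(mu) and Var[f] ~ sum_i (df/dx_i)^2 sigma_i^2. A toy comparison against plain Monte Carlo, with an algebraic stand-in for the CFD code:

      import numpy as np

      def f(x1, x2):
          return x1 ** 2 + np.sin(x2)   # stand-in for a CFD output quantity

      mu = np.array([1.0, 0.5])
      sigma = np.array([0.05, 0.10])

      # First-order sensitivity derivatives by central differences.
      h = 1e-5
      d1 = (f(mu[0] + h, mu[1]) - f(mu[0] - h, mu[1])) / (2 * h)
      d2 = (f(mu[0], mu[1] + h) - f(mu[0], mu[1] - h)) / (2 * h)

      mean_taylor = f(*mu)
      var_taylor = (d1 * sigma[0]) ** 2 + (d2 * sigma[1]) ** 2

      rng = np.random.default_rng(0)
      xs = rng.normal(mu, sigma, size=(200_000, 2))
      mc = f(xs[:, 0], xs[:, 1])

      print(f"Taylor: mean={mean_taylor:.4f} var={var_taylor:.6f}")
      print(f"MC:     mean={mc.mean():.4f} var={mc.var():.6f}")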

  6. Sensitivity and uncertainty analysis of a polyurethane foam decomposition model

    SciTech Connect

    Hobbs, Michael L.; Robinson, David G.

    2000-03-14

    Sensitivity/uncertainty analyses are not commonly performed on complex, finite-element engineering models because the analyses are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, an analytical sensitivity/uncertainty analysis is used to determine the standard deviation and the primary factors affecting the burn velocity of polyurethane foam exposed to firelike radiative boundary conditions. The complex, finite element model has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state burn velocity calculated as the derivative of the burn front location versus time. The standard deviation of the burn velocity was determined by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation is essentially determined from a second derivative that is extremely sensitive to numerical noise. To minimize the numerical noise, 50-micron elements and approximately 1-msec time steps were required to obtain stable uncertainty results. The primary effect variable was shown to be the emissivity of the foam.

  7. Sensitivity-Uncertainty Based Nuclear Criticality Safety Validation

    SciTech Connect

    Brown, Forrest B.

    2016-09-20

    These are slides from a seminar given to the University of New Mexico Nuclear Engineering Department. Whisper is a statistical analysis package developed to support nuclear criticality safety validation. It uses the sensitivity profile data for an application as computed by MCNP6, along with covariance files for the nuclear data, to determine a baseline upper subcritical limit for the application. Whisper and its associated benchmark files are developed and maintained as part of MCNP6, and will be distributed with all future releases of MCNP6. Although sensitivity-uncertainty methods for NCS validation have been under development for 20 years, continuous-energy Monte Carlo codes such as MCNP could not determine the required adjoint-weighted tallies for sensitivity profiles. The recent introduction of the iterated fission probability method into MCNP led to the rapid development of sensitivity analysis capabilities for MCNP6 and the development of Whisper. Sensitivity-uncertainty based methods represent the future for NCS validation, making full use of today's computer power to codify past approaches based largely on expert judgment. Validation results are defensible, auditable, and repeatable as needed with different assumptions and process models. The new methods can supplement, support, and extend traditional validation approaches.

  8. Sensitivity of direct global warming potentials to key uncertainties

    SciTech Connect

    Wuebbles, D. J.; Patten, K. O.; Grant, K. E.; Jain, A. K.

    1992-07-01

    A series of sensitivity studies examines the effect of several uncertainties in Global Warming Potentials (GWPs). For example, the original evaluation of GWPs for the Intergovernmental Panel on Climate Change (IPCC, 1990) did not attempt to account for the possible sinks of carbon dioxide (CO2) that could balance the carbon cycle and produce atmospheric concentrations of CO2 that match observations. In this study, a balanced carbon cycle model is applied in calculation of the radiative forcing from CO2. Use of the balanced model produces up to 20 percent enhancement of the GWPs for most trace gases compared with the IPCC (1990) values for time horizons up to 100 years, but a decreasing enhancement for longer time horizons. Uncertainty limits of the fertilization feedback parameter contribute a 10 percent range in GWP values. Another systematic uncertainty in GWPs is the assumption of an equilibrium atmosphere (one in which the concentration of trace gases remains constant) versus a disequilibrium atmosphere. The latter gives GWPs that are 15 to 30 percent greater than the former, depending upon the carbon dioxide emission scenario chosen. Seven scenarios are employed: constant emissions past 1990 and the six IPCC (1992) emission scenarios. For the analysis of uncertainties in atmospheric lifetime (τ), the GWP changes in direct proportion to τ for short-lived gases, but to a lesser extent for gases with τ greater than the time horizon of the GWP calculation.
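
    The proportionality for short-lived gases follows from the GWP definition: for a gas with radiative efficiency a and a single exponential lifetime τ, the absolute GWP over horizon H is a·τ·(1 - exp(-H/τ)), which reduces to a·τ when τ << H. The sketch below uses invented radiative efficiencies and crudely replaces the multi-exponential CO2 carbon cycle with one long lifetime:

      import numpy as np

      def agwp(a, tau, horizon):
          """Absolute GWP for radiative efficiency a and lifetime tau (yr),
          integrated over the time horizon (yr)."""
          return a * tau * (1.0 - np.exp(-horizon / tau))

      a_gas, a_co2 = 1.0e-13, 1.0e-15        # invented values
      agwp_co2 = agwp(a_co2, 150.0, 100.0)   # crude one-lifetime CO2 proxy

      for tau in (5.0, 10.0, 50.0, 200.0):
          print(f"tau={tau:>5.0f} yr -> GWP_100 = {agwp(a_gas, tau, 100.0) / agwp_co2:7.1f}")
      # For tau much less than the horizon, AGWP ~ a*tau, so the GWP scales
      # in direct proportion to tau, as the abstract notes.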

  9. SENSIT: a cross-section and design sensitivity and uncertainty analysis code. [In FORTRAN for CDC-7600, IBM 360]

    SciTech Connect

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.

  10. Orbit uncertainty propagation and sensitivity analysis with separated representations

    NASA Astrophysics Data System (ADS)

    Balducci, Marc; Jones, Brandon; Doostan, Alireza

    2017-09-01

    Most approximations for stochastic differential equations with high-dimensional, non-Gaussian inputs suffer from a rapid (e.g., exponential) increase of computational cost, an issue known as the curse of dimensionality. In astrodynamics, this results in reduced accuracy when propagating an orbit-state probability density function. This paper considers the application of separated representations for orbit uncertainty propagation, where future states are expanded into a sum of products of univariate functions of initial states and other uncertain parameters. Accurate generation of a separated representation requires a number of state samples that is linear in the dimension of the input uncertainties. The computational cost of a separated representation scales linearly with the sample count, thereby improving tractability compared to methods that suffer from the curse of dimensionality. In addition to detailed discussions of their construction and use in sensitivity analysis, this paper presents results for three test cases of an Earth-orbiting satellite. The first two cases demonstrate that approximation via separated representations produces a tractable solution for propagating the Cartesian orbit-state uncertainty with up to 20 uncertain inputs. The third case, which instead uses equinoctial elements, reexamines a scenario presented in the literature and employs the proposed method for sensitivity analysis to more thoroughly characterize the relative effects of uncertain inputs on the propagated state.

  11. Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III

    2004-01-01

    A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.

  12. Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety

    SciTech Connect

    Broadhead, B.L.; Childs, R.L.; Rearden, B.T.

    1999-09-20

    Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community.

  13. Sensitivities and uncertainties of modeled ground temperatures in mountain environments

    NASA Astrophysics Data System (ADS)

    Gubler, S.; Endrizzi, S.; Gruber, S.; Purves, R. S.

    2013-02-01

    Before operational use or for decision making, models must be validated, and the degree of trust in model outputs should be quantified. Often, model validation is performed at single locations due to the lack of spatially distributed data. Since the analysis of parametric model uncertainties can be performed independently of observations, it is a suitable method to test the influence of environmental variability on model evaluation. In this study, the sensitivities and uncertainty of a physically based mountain permafrost model are quantified within an artificial topography consisting of different elevations and exposures combined with six ground types characterized by their hydraulic properties. The analyses performed for all combinations of topographic factors and ground types allowed us to quantify the variability of model sensitivity and uncertainty within mountain regions. We found that modeled snow duration considerably influences the mean annual ground temperature (MAGT). The melt-out day of snow (MD) is determined by the processes governing snow accumulation and melting. Parameters such as the temperature and precipitation lapse rates and the snow correction factor therefore have a great impact on modeled MAGT. Ground albedo changes MAGT by 0.5 to 4°C depending on the elevation, the aspect, and the ground type. South-exposed inclined locations are more sensitive to changes in ground albedo than north-exposed slopes since they receive more solar radiation. The sensitivity to ground albedo increases with decreasing elevation due to shorter snow cover. Snow albedo and other parameters determining the amount of reflected solar radiation are important, changing MAGT at different depths by more than 1°C. Parameters influencing the turbulent fluxes, such as the roughness length or the dew temperature, are more sensitive at low-elevation sites due to higher air temperatures and decreased solar radiation. Modeling the individual terms of the energy balance correctly is

  14. CALIBRATION, OPTIMIZATION, AND SENSITIVITY AND UNCERTAINTY ALGORITHMS APPLICATION PROGRAMMING INTERFACE (COSU-API)

    EPA Science Inventory

    The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) tool development, hereafter referred to as the Calibration, Optimization, and Sensitivity and Uncertainty Algorithms API (COSU-API), was initially d...

  16. A Peep into the Uncertainty-Complexity-Relevance Modeling Trilemma through Global Sensitivity and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.

    2014-12-01

    Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through the uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represents for the decision process, it should not be avoided, or the value of and science behind the models will be undermined. These two issues, i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of the model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of model development so as to quantify not only the uncertainty introduced by the addition of new environmental components, but also the effect that these new components have on existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping
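
    As a concrete instance of the GSA/UA machinery invoked here, first-order Sobol indices can be estimated with the standard pick-freeze trick on a toy model (a generic sketch, not the cited framework):

      import numpy as np

      rng = np.random.default_rng(0)

      def model(x):
          # Toy "integrated model": nonlinear, with an interaction term.
          return np.sin(x[:, 0]) + 0.7 * x[:, 1] ** 2 + 0.3 * x[:, 0] * x[:, 2]

      n, d = 100_000, 3
      A = rng.uniform(-np.pi, np.pi, (n, d))
      B = rng.uniform(-np.pi, np.pi, (n, d))
      yA, yB = model(A), model(B)
      var_y = yA.var()

      # Pick-freeze estimator of the first-order index S_i.
      for i in range(d):
          ABi = B.copy()
          ABi[:, i] = A[:, i]   # freeze input i from A, resample the rest
          S_i = np.mean(yA * (model(ABi) - yB)) / var_y
          print(f"S_{i + 1} = {S_i:.2f}")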

  17. Sensitivity to Uncertainty in Asteroid Impact Risk Assessment

    NASA Astrophysics Data System (ADS)

    Mathias, D.; Wheeler, L.; Prabhu, D. K.; Aftosmis, M.; Dotson, J.; Robertson, D. K.

    2015-12-01

    The Engineering Risk Assessment (ERA) team at NASA Ames Research Center is developing a physics-based impact risk model for probabilistically assessing threats from potential asteroid impacts on Earth. The model integrates probabilistic sampling of asteroid parameter ranges with physics-based analyses of entry, breakup, and impact to estimate damage areas and casualties from various impact scenarios. Assessing these threats is a highly coupled, dynamic problem involving significant uncertainties in the range of expected asteroid characteristics, how those characteristics may affect the level of damage, and the fidelity of various modeling approaches and assumptions. The presented model is used to explore the sensitivity of impact risk estimates to these uncertainties in order to gain insight into what additional data or modeling refinements are most important for producing effective, meaningful risk assessments. In the extreme cases of very small or very large impacts, the results are generally insensitive to many of the characterization and modeling assumptions. However, the nature of the sensitivity can change across moderate-sized impacts. Results will focus on the value of additional information in this critical, mid-size range, and how this additional data can support more robust mitigation decisions.

  18. Climate sensitivity uncertainty: when is good news bad?

    PubMed

    Freeman, Mark C; Wagner, Gernot; Zeckhauser, Richard J

    2015-11-28

    Climate change is real and dangerous. Exactly how bad it will get, however, is uncertain. Uncertainty is particularly relevant for estimates of one of the key parameters, equilibrium climate sensitivity: how eventual temperatures will react as atmospheric carbon dioxide concentrations double. Despite significant advances in climate science and increased confidence in the accuracy of the range itself, the 'likely' range has been 1.5-4.5°C for over three decades. In 2007, the Intergovernmental Panel on Climate Change (IPCC) narrowed it to 2-4.5°C, only to reverse its decision in 2013, reinstating the prior range. In addition, the 2013 IPCC report removed prior mention of 3°C as the 'best estimate'. We interpret the implications of the 2013 IPCC decision to lower the bottom of the range and excise a best estimate. Intuitively, it might seem that a lower bottom would be good news. Here we ask: when might apparently good news about climate sensitivity in fact be bad news in the sense that it lowers societal well-being? The lowered bottom value also implies higher uncertainty about the temperature increase, definitely bad news. Under reasonable assumptions, both the lowering of the lower bound and the removal of the 'best estimate' may well be bad news. © 2015 The Author(s).

  19. Sensitivity and uncertainty analysis of regional marine ecosystem services value

    NASA Astrophysics Data System (ADS)

    Shi, Honghua; Zheng, Wei; Wang, Zongling; Ding, Dewen

    2009-06-01

    Marine ecosystem services are the benefits which people obtain from the marine ecosystem, including provisioning services, regulating services, cultural services and supporting services. The human species, while buffered against environmental changes by culture and technology, is fundamentally dependent on the flow of ecosystem services. Marine ecosystem services become increasingly valuable as the terrestrial resources become scarce. The value of marine ecosystem services is the monetary flow of ecosystem services on specific temporal and spatial scales, which often changes due to the variation of the goods prices, yields and the status of marine exploitation. Sensitivity analysis is to study the relationship between the value of marine ecosystem services and the main factors which affect it. Uncertainty analysis based on varying prices, yields and status of marine exploitation was carried out. Through uncertainty analysis, a more credible value range instead of a fixed value of marine ecosystem services was obtained in this study. Moreover, sensitivity analysis of the marine ecosystem services value revealed the relative importance of different factors.

  20. Fast Computation of Hemodynamic Sensitivity to Lumen Segmentation Uncertainty.

    PubMed

    Sankaran, Sethuraman; Grady, Leo; Taylor, Charles A

    2015-12-01

    Patient-specific blood flow modeling combining imaging data and computational fluid dynamics can aid in the assessment of coronary artery disease. Accurate coronary segmentation and realistic physiologic modeling of boundary conditions are important steps to ensure a high diagnostic performance. Segmentation of the coronary arteries can be constructed by a combination of automated algorithms with human review and editing. However, blood pressure and flow are not impacted equally by different local sections of the coronary artery tree. Focusing human review and editing towards regions that will most affect the subsequent simulations can significantly accelerate the review process. We define geometric sensitivity as the standard deviation in hemodynamics-derived metrics due to uncertainty in lumen segmentation. We develop a machine learning framework for estimating the geometric sensitivity in real time. Features used include geometric and clinical variables, and reduced-order models. We develop an anisotropic kernel regression method for assessment of lumen narrowing score, which is used as a feature in the machine learning algorithm. A multi-resolution sensitivity algorithm is introduced to hierarchically refine regions of high sensitivity so that we can quantify sensitivities to a desired spatial resolution. We show that the mean absolute error of the machine learning algorithm compared to 3D simulations is less than 0.01. We further demonstrate that sensitivity is not predicted simply by anatomic reduction but also encodes information about hemodynamics which in turn depends on downstream boundary conditions. This sensitivity approach can be extended to other systems such as cerebral flow, electro-mechanical simulations, etc.
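
    As a rough illustration of the anisotropic kernel regression idea named above, the sketch below gives each feature its own bandwidth in a Nadaraya-Watson estimator; the features, data, and bandwidths are synthetic, not the paper's geometric and clinical variables.

      # Anisotropic Nadaraya-Watson kernel regression: each feature gets
      # its own bandwidth. Data and bandwidths are synthetic.
      import numpy as np

      def aniso_kernel_regress(X_train, y_train, X_query, bandwidths):
          # Scale each feature by its bandwidth, then use a Gaussian kernel.
          Xs, Qs = X_train / bandwidths, X_query / bandwidths
          d2 = ((Qs[:, None, :] - Xs[None, :, :]) ** 2).sum(axis=-1)
          w = np.exp(-0.5 * d2)                  # (n_query, n_train) weights
          return (w @ y_train) / w.sum(axis=1)

      rng = np.random.default_rng(1)
      X = rng.uniform(size=(200, 3))             # e.g. stenosis %, radius, length
      y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)
      h = np.array([0.1, 0.5, 0.5])              # tighter bandwidth on feature 0
      print(aniso_kernel_regress(X, y, X[:5], h))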

  1. Uncertainty estimates in broadband seismometer sensitivities using microseisms

    USGS Publications Warehouse

    Ringler, Adam T.; Storm, Tyler L.; Gee, Lind S.; Hutt, Charles R.; Wilson, David C.

    2015-01-01

    The midband sensitivity of a seismic instrument is one of the fundamental parameters used in published station metadata. Any errors in this value can compromise amplitude estimates in otherwise high-quality data. To estimate an upper bound in the uncertainty of the midband sensitivity for modern broadband instruments, we compare daily microseism (4- to 8-s period) amplitude ratios between the vertical components of colocated broadband sensors across the IRIS/USGS (network code IU) seismic network. We find that the mean of the 145,972 daily ratios used between 2002 and 2013 is 0.9895 with a standard deviation of 0.0231. This suggests that the ratio between instruments shows a small bias and considerable scatter. We also find that these ratios follow a standard normal distribution (R² = 0.95442), which suggests that the midband sensitivity of an instrument has an error of no greater than ±6% with a 99% confidence interval. This gives an upper bound on the precision to which we know the sensitivity of a fielded instrument.
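
    The headline numbers translate into the reported bound roughly as follows; the sketch regenerates the calculation on synthetic ratios drawn with the reported mean and standard deviation.

      # Sensitivity-ratio statistics: normal fit and a two-sided 99% bound.
      # The ratios are synthetic stand-ins for the 145,972 daily values.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      ratios = rng.normal(0.9895, 0.0231, 145_972)  # synthetic

      mu, sd = ratios.mean(), ratios.std(ddof=1)
      z = stats.norm.ppf(0.995)  # two-sided 99% confidence
      print(f"mean={mu:.4f} sd={sd:.4f} bound=+/-{100 * z * sd / mu:.1f}%")
      # ~ +/-6%, matching the reported upper bound on sensitivity error.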

  2. Uncertainty in the analysis of the overall equipment effectiveness on the shop floor

    NASA Astrophysics Data System (ADS)

    Rößler, M. P.; Abele, E.

    2013-06-01

    In this article an approach is presented that supports transparency regarding the effectiveness of manufacturing equipment by combining fuzzy set theory with the method of overall equipment effectiveness analysis. One of the key principles of lean production, and also a fundamental task in production optimization projects, is the prior analysis of the current state of a production system using key performance indicators in order to derive possible future states. Current practice in overall equipment effectiveness analysis is to cumulate different machine states by means of decentralized data collection, without consideration of uncertainty. In manual data collection or semi-automated plant data collection systems, the quality of the derived data often diverges and leads optimization teams to distorted conclusions about the real optimization potential of manufacturing equipment. The method discussed in this paper helps practitioners obtain more reliable results in the analysis phase and thus better results in optimization projects. Results obtained from a case study are discussed.
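
    A minimal sketch of the fuzzy-OEE idea: treat availability, performance, and quality as triangular fuzzy numbers and propagate them through the OEE product. The values and the component-wise product approximation are illustrative assumptions, not the paper's exact formulation.

      # Fuzzy OEE sketch: availability, performance and quality as
      # triangular fuzzy numbers (min, peak, max). Values hypothetical.

      def fuzzy_mul(a, b):
          # Component-wise product is a common triangular approximation.
          return tuple(x * y for x, y in zip(a, b))

      availability = (0.80, 0.85, 0.90)  # uncertain downtime records
      performance = (0.88, 0.92, 0.95)   # uncertain cycle-time data
      quality = (0.97, 0.98, 0.99)       # uncertain scrap counts

      oee = fuzzy_mul(fuzzy_mul(availability, performance), quality)
      print("OEE (min, peak, max): %.3f / %.3f / %.3f" % oee)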

  3. Sensitivity and uncertainty analysis of a regulatory risk model

    SciTech Connect

    Kumar, A.; Manocha, A.; Shenoy, T.

    1999-07-01

    Health Risk Assessments (H.R.A.s) are increasingly being used in the environmental decision-making process, from problem identification through the final clean-up activities. A key issue concerning the results of these risk assessments is the uncertainty associated with them. In past studies, this uncertainty has been associated with highly conservative estimates of risk assessment parameters. The primary purpose of this study was to investigate error propagation through a risk model. A hypothetical glass plant situated in the state of California was studied. Air emissions from this plant were modeled using the ISCST2 model, and the risk was calculated using the ACE2588 model. Downwash was also considered during the concentration calculations. A sensitivity analysis on the risk computations identified five parameters--mixing depth for human consumption, deposition velocity, weathering constant, interception factor for vine crop, and average leaf vegetable consumption--which had the greatest impact on the calculated risk. A Monte Carlo analysis using these five parameters resulted in a distribution with a smaller percentage deviation than the percentage standard deviation of the input parameters.

  4. Sensitivity and uncertainty analysis of the recharge boundary condition

    NASA Astrophysics Data System (ADS)

    Jyrkama, M. I.; Sykes, J. F.

    2006-01-01

    The reliability analysis method is integrated with MODFLOW to study the impact of recharge on the groundwater flow system at a study area in New Jersey. The performance function is formulated in terms of head or flow rate at a pumping well, while the recharge sensitivity vector is computed efficiently by implementing the adjoint method in MODFLOW. The developed methodology not only quantifies the reliability of head at the well in terms of uncertainties in the recharge boundary condition, but it also delineates areas of recharge that have the highest impact on the head and flow rate at the well. The results clearly identify the most important land use areas that should be protected in order to maintain the head and hence production at the pumping well. These areas extend far beyond the steady state well capture zone used for land use planning and management within traditional wellhead protection programs.

  5. PC-based trending and analysis of floor vibration in sensitive fabrication areas

    NASA Astrophysics Data System (ADS)

    Palm, Jon E.; Middleton, Ben

    1992-02-01

    This paper describes a PC-based floor monitoring system that continuously monitors very low levels of vibration and warns the user of possible "vibration contamination" that might result. The floor monitoring system, designed by DataSignal Systems Inc. of Friendswood, Texas, is a complete package including special-purpose microvelocity sensors, signal conditioning and band-specific velocity detection electronics, analog-to-digital sampling, vibration spectrum analysis, parameter trending, alarming, and archiving of measurements. An IBM or compatible computer runs the system software and displays the measured results. The computer can be installed in a convenient location for ease of use and maintenance; to maximize its effectiveness for alarms and ease of data display interpretation, a VGA color monitor is a must. Since the system monitors facility vibration continuously, the computer must be dedicated and not time-shared. In many of today's high-technology manufacturing facilities, vibration can have a costly impact on the process and quality of an operation. The system can be set to alarm at vibration levels determined to be critical, allowing an operator to take appropriate steps, including date and time coding the process or even stopping it. The system can also be used to establish limits for manufacturing operations in an adjoining facility that cause structure-borne vibration to be transmitted to the vibration-sensitive manufacturing area. Up to eight microvelocity sensors can be monitored simultaneously, with results displayed in a bar-chart format on the computer screen. For detailed analysis, to help identify the source of vibration, a narrowband FFT processor is used to display a vibration spectrum from a selected sensor's output signal. The vibration spectrum analysis capability can be manually activated or automatically triggered upon an alarm condition.
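
    The band-specific level detection and alarm logic such a system performs can be sketched as follows; the sampling rate, frequency band, threshold, and synthetic signal are assumptions for illustration.

      # Band-limited RMS velocity from a Welch PSD, with an alarm check.
      # Sampling rate, band, threshold, and signal are synthetic.
      import numpy as np
      from scipy.signal import welch

      fs = 1000.0  # Hz
      t = np.arange(0, 10, 1 / fs)
      rng = np.random.default_rng(3)
      vel = 1e-6 * np.sin(2 * np.pi * 12 * t) + 2e-7 * rng.normal(size=t.size)

      f, psd = welch(vel, fs=fs, nperseg=4096)  # PSD in (m/s)^2 per Hz
      band = (f >= 8) & (f <= 100)
      rms = np.sqrt(np.sum(psd[band]) * (f[1] - f[0]))  # band RMS, m/s

      THRESHOLD = 1.0e-6  # m/s, alarm criterion
      print(f"band RMS = {rms:.2e} m/s ->",
            "ALARM" if rms > THRESHOLD else "OK")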

  6. Sensitivity of collective action to uncertainty about climate tipping points

    NASA Astrophysics Data System (ADS)

    Barrett, Scott; Dannenberg, Astrid

    2014-01-01

    Despite more than two decades of diplomatic effort, concentrations of greenhouse gases continue to trend upwards, creating the risk that we may someday cross a threshold for 'dangerous' climate change. Although climate thresholds are very uncertain, new research is trying to devise 'early warning signals' of an approaching tipping point. This research offers a tantalizing promise: whereas collective action fails when threshold uncertainty is large, reductions in this uncertainty may bring about the behavioural change needed to avert a climate 'catastrophe'. Here we present the results of an experiment, rooted in a game-theoretic model, showing that behaviour differs markedly either side of a dividing line for threshold uncertainty. On one side of the dividing line, where threshold uncertainty is relatively large, free riding proves irresistible and trust illusive, making it virtually inevitable that the tipping point will be crossed. On the other side, where threshold uncertainty is small, the incentive to coordinate is strong and trust more robust, often leading the players to avoid crossing the tipping point. Our results show that uncertainty must be reduced to this 'good' side of the dividing line to stimulate the behavioural shift needed to avoid 'dangerous' climate change.

  7. The Relationship Between Intolerance of Uncertainty, Sensory Sensitivities, and Anxiety in Autistic and Typically Developing Children.

    PubMed

    Neil, Louise; Olsson, Nora Choque; Pellicano, Elizabeth

    2016-06-01

    Guided by a recent theory that proposes fundamental differences in how autistic individuals deal with uncertainty, we investigated the extent to which the cognitive construct 'intolerance of uncertainty' and anxiety were related to parental reports of sensory sensitivities in 64 autistic and 85 typically developing children aged 6-14 years. Intolerance of uncertainty and anxiety explained approximately half the variance in autistic children's sensory sensitivities, but only around a fifth of the variance in typical children's sensory sensitivities. In children with autism only, intolerance of uncertainty remained a significant predictor of children's sensory sensitivities once the effects of anxiety were adjusted for. Our results suggest intolerance of uncertainty is a relevant construct to sensory sensitivities in children with and without autism.

  8. Calculational methodology and associated uncertainties: Sensitivity and uncertainty analysis of reactor performance parameters

    SciTech Connect

    Kujawski, E.; Weisbin, C.R.

    1982-01-01

    This chapter considers the calculational methodology and associated uncertainties both for the design of large LMFBR's and the analysis of critical assemblies (fast critical experiments) as performed by several groups within the US. Discusses cross-section processing; calculational methodology for the design problem; core physics computations; design-oriented approximations; benchmark analyses; and determination of calculational corrections and associated uncertainties for a critical assembly. Presents a detailed analysis of the sources of calculational uncertainties for the critical assembly ZPR-6/7 to illustrate the quantitative assessment of calculational correction factors and uncertainties. Examines calculational uncertainties that arise from many different sources including intrinsic limitations of computational methods; design-oriented approximations related to reactor modeling; computational capability and code availability; economic limitations; and the skill of the reactor analyst. Emphasizes that the actual design uncertainties in most of the parameters, with the possible exception of burnup, are likely to be less than might be indicated by the results presented in this chapter because reactor designers routinely apply bias factors (usually derived from critical experiments) to their calculated results.

  9. Sensitivity and Uncertainty Analysis to Burnup Estimates on ADS using the ACAB Code

    SciTech Connect

    Cabellos, O.; Sanz, J.; Rodriguez, A.; Gonzalez, E.; Embid, M.; Alvarez, F.; Reyes, S.

    2005-05-24

    Within the scope of the Accelerator Driven System (ADS) concept for nuclear waste management applications, the burnup uncertainty estimates due to uncertainty in the activation cross sections (XSs) are important regarding both the safety and the efficiency of the waste burning process. We have applied both sensitivity analysis and Monte Carlo methodology to actinides burnup calculations in a lead-bismuth cooled subcritical ADS. The sensitivity analysis is used to identify the reaction XSs and the dominant chains that contribute most significantly to the uncertainty. The Monte Carlo methodology gives the burnup uncertainty estimates due to the synergetic/global effect of the complete set of XS uncertainties. These uncertainty estimates are valuable to assess the need of any experimental or systematic re-evaluation of some uncertainty XSs for ADS.

  10. Sensitivity and Uncertainty Analysis to Burn-up Estimates on ADS Using ACAB Code

    SciTech Connect

    Cabellos, O; Sanz, J; Rodriguez, A; Gonzalez, E; Embid, M; Alvarez, F; Reyes, S

    2005-02-11

    Within the scope of the Accelerator Driven System (ADS) concept for nuclear waste management applications, the burnup uncertainty estimates due to uncertainty in the activation cross sections (XSs) are important regarding both the safety and the efficiency of the waste burning process. We have applied both sensitivity analysis and Monte Carlo methodology to actinides burnup calculations in a lead-bismuth cooled subcritical ADS. The sensitivity analysis is used to identify the reaction XSs and the dominant chains that contribute most significantly to the uncertainty. The Monte Carlo methodology gives the burnup uncertainty estimates due to the synergetic/global effect of the complete set of XS uncertainties. These uncertainty estimates are valuable to assess the need of any experimental or systematic reevaluation of some uncertainty XSs for ADS.

  11. Sensitivity and uncertainty in crop water footprint accounting: a case study for the Yellow River Basin

    NASA Astrophysics Data System (ADS)

    Zhuo, L.; Mekonnen, M. M.; Hoekstra, A. Y.

    2014-01-01

    Water Footprint Assessment is a quickly growing field of research, but as yet little attention has been paid to the uncertainties involved. This study investigates the sensitivity of water footprint estimates to changes in important input variables and quantifies the size of uncertainty in water footprint estimates. The study focuses on the green (from rainfall) and blue (from irrigation) water footprint of producing maize, soybean, rice, and wheat in the Yellow River Basin in the period 1996-2005. A grid-based daily water balance model at a 5 by 5 arcmin resolution was applied to compute green and blue water footprints of the four crops in the Yellow River Basin in the period considered. The sensitivity and uncertainty analysis focused on the effects on water footprint estimates at basin level (in m3 t-1) of four key input variables: precipitation (PR), reference evapotranspiration (ET0), crop coefficient (Kc), and crop calendar. The one-at-a-time method was carried out to analyse the sensitivity of the water footprint of crops to fractional changes of individual input variables. Uncertainties in crop water footprint estimates were quantified through Monte Carlo simulations. The results show that the water footprint of crops is most sensitive to ET0 and Kc, followed by crop calendar and PR. Blue water footprints were more sensitive to input variability than green water footprints. The smaller the annual blue water footprint, the higher its sensitivity to changes in PR, ET0, and Kc. The uncertainties in the total water footprint of a crop due to combined uncertainties in climatic inputs (PR and ET0) were about ±20% (at 95% confidence interval). The effect of uncertainties in ET0 was dominant compared to that of precipitation. The uncertainties in the total water footprint of a crop as a result of combined key input uncertainties were on average ±26% (at 95% confidence level). The sensitivities and uncertainties differ across crop types, with highest sensitivities
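
    The one-at-a-time step is simple to sketch: perturb each input by a fixed fraction while holding the rest at baseline and record the relative output change. The water-footprint function below is a toy placeholder for the grid-based daily water balance model.

      # One-at-a-time (OAT) sensitivity sketch with a toy water-footprint
      # function; baseline values and the model itself are hypothetical.

      def water_footprint(pr, et0, kc):
          # Toy stand-in: footprint rises with crop evapotranspiration
          # (ET0 * Kc) and falls with effective precipitation.
          return 1000.0 * (et0 * kc) / max(pr, 1e-9)

      base = {"pr": 450.0, "et0": 1000.0, "kc": 1.05}  # mm/yr, mm/yr, -
      y0 = water_footprint(**base)

      for name in base:
          for frac in (-0.1, +0.1):                    # +/-10% perturbation
              p = dict(base, **{name: base[name] * (1 + frac)})
              dy = (water_footprint(**p) - y0) / y0
              print(f"{name} {frac:+.0%} -> output {dy:+.1%}")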

  12. Uncertainty Reduction using Bayesian Inference and Sensitivity Analysis: A Sequential Approach to the NASA Langley Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar

    2016-01-01

    This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all-at-once. Instead, only one variable is first identified, and then Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all 4 variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and then applied to the NASA Langley Uncertainty Quantification Challenge problem.

  13. Uncertainty in the 2°C warming threshold related to climate sensitivity and climate feedback

    NASA Astrophysics Data System (ADS)

    Zhou, Tianjun; Chen, Xiaolong

    2015-12-01

    Climate sensitivity is an important index that measures the relationship between the increase in greenhouse gases and the magnitude of global warming. Uncertainties in climate change projection and climate modeling are mostly related to the climate sensitivity. The climate sensitivities of coupled climate models determine the magnitudes of the projected global warming. In this paper, the authors thoroughly review the literature on climate sensitivity, and discuss issues related to climate feedback processes and the methods used in estimating the equilibrium climate sensitivity and transient climate response (TCR), including the TCR to cumulative CO2 emissions. After presenting a summary of the sources that affect the uncertainty of climate sensitivity, the impact of climate sensitivity on climate change projection is discussed by addressing the uncertainties in 2°C warming. Challenges that call for further investigation in the research community, in particular the Chinese community, are discussed.

  14. TSUNAMI Primer: A Primer for Sensitivity/Uncertainty Calculations with SCALE

    SciTech Connect

    Rearden, Bradley T; Mueller, Don; Bowman, Stephen M; Busch, Robert D.; Emerson, Scott

    2009-01-01

    This primer presents examples in the application of the SCALE/TSUNAMI tools to generate keff sensitivity data for one- and three-dimensional models using TSUNAMI-1D and -3D and to examine uncertainties in the computed keff values due to uncertainties in the cross-section data used in their calculation. The proper use of unit cell data and the need for confirming the appropriate selection of input parameters through direct perturbations are described. The uses of sensitivity and uncertainty data to identify and rank potential sources of computational bias in an application system, and TSUNAMI tools for assessment of system similarity using sensitivity and uncertainty criteria, are demonstrated. Uses of these criteria in trending analyses to assess computational biases, bias uncertainties, and gap analyses are also described. Additionally, an application of the data adjustment tool TSURFER is provided, including identification of specific details of sources of computational bias.

  15. The Relationship between Intolerance of Uncertainty, Sensory Sensitivities, and Anxiety in Autistic and Typically Developing Children

    ERIC Educational Resources Information Center

    Neil, Louise; Olsson, Nora Choque; Pellicano, Elizabeth

    2016-01-01

    Guided by a recent theory that proposes fundamental differences in how autistic individuals deal with uncertainty, we investigated the extent to which the cognitive construct "intolerance of uncertainty" and anxiety were related to parental reports of sensory sensitivities in 64 autistic and 85 typically developing children aged…

  16. Users manual for the FORSS sensitivity and uncertainty analysis code system

    SciTech Connect

    Lucius, J.L.; Weisbin, C.R.; Marable, J.H.; Drischler, J.D.; Wright, R.Q.; White, J.E.

    1981-01-01

    FORSS is a code system used to study relationships between nuclear reaction cross sections, integral experiments, reactor performance parameter predictions and associated uncertainties. This report describes the computing environment and the modules currently used to implement FORSS Sensitivity and Uncertainty Methodology.

  18. Predictive Uncertainty And Parameter Sensitivity Of A Sediment-Flux Model: Nitrogen Flux and Sediment Oxygen Demand

    EPA Science Inventory

    Estimating model predictive uncertainty is imperative to informed environmental decision making and management of water resources. This paper applies the Generalized Sensitivity Analysis (GSA) to examine parameter sensitivity and the Generalized Likelihood Uncertainty Estimation...

  20. Sensitivity of Stratospheric Dynamics to Uncertainty in O3 Production

    NASA Astrophysics Data System (ADS)

    Hsu, J. C.; Prather, M. J.; Bergmann, D. J.; Cameron-Smith, P. J.

    2013-12-01

    Some key photochemical uncertainties that cannot be readily eliminated by current observations translate into a range of stratospheric O3 abundances in the tens of percent. The uncertainty in O3 production due to that in the cross sections for O2 in the Herzberg continuum is studied here with the NCAR Community Atmosphere Model, which allows for interactive climate and ozone chemistry. A min-max range in the O2 cross sections of 30%, consistent with current uncertainties, changes O3 abundances in the lower tropical stratosphere by up to 30%, with a relatively smaller and opposite change above 30 hPa. Here we have systematically examined the changes in the time-mean state, the seasonal cycle, and the interannual variability of the temperature and circulation associated with the ±30% change in O2 cross sections. This study points to the important role of O3 in the lower tropical stratosphere in determining the physical characteristics of the tropical tropopause layer. Reducing O2 cross sections by 30% increases ozone abundances, which warms the lower stratosphere (60S-60N; 2 K maximum at the equator) and lowers the tropopause height by 100-200 m (30S-30N). The large-scale warming leads to enhanced stratification near the tropopause, which reduces upward wave propagation everywhere except at high latitudes. The lowermost tropical stratosphere is better ventilated during austral winter. The annual cycle of ozone is amplified. The interannual variability of the winter stratospheric polar vortices also increases, but the mechanism involves wave-mean flow interaction and the exact role of ozone in it needs further investigation.

  1. PROBABILISTIC SENSITIVITY AND UNCERTAINTY ANALYSIS WORKSHOP SUMMARY REPORT

    SciTech Connect

    Seitz, R

    2008-06-25

    Stochastic or probabilistic modeling approaches are being applied more frequently in the United States and globally to quantify uncertainty and enhance understanding of model response in performance assessments for disposal of radioactive waste. This increased use has resulted in global interest in sharing results of research and applied studies that have been completed to date. This technical report reflects the results of a workshop that was held to share results of research and applied work related to performance assessments conducted at United States Department of Energy sites. Key findings of this research and applied work are discussed and recommendations for future activities are provided.

  2. Sensitivity and uncertainty analysis of reactivities for UO2 and MOX fueled PWR cells

    NASA Astrophysics Data System (ADS)

    Foad, Basma; Takeda, Toshikazu

    2015-12-01

    The purpose of this paper is to apply our improved method for calculating sensitivities and uncertainties of reactivity responses for UO2 and MOX fueled pressurized water reactor cells. The improved method has been used to calculate sensitivity coefficients relative to infinite dilution cross-sections, where the self-shielding effect is taken into account. Two types of reactivities are considered: Doppler reactivity and coolant void reactivity; for each type of reactivity, the sensitivities are calculated for small and large perturbations. The results demonstrate that the reactivity responses have larger relative uncertainty than eigenvalue responses. In addition, the uncertainty of coolant void reactivity is much greater than that of Doppler reactivity, especially for large perturbations. The sensitivity coefficients and uncertainties of both reactivities were verified by comparison with SCALE code results using the ENDF/B-VII library, and good agreement was found.

  3. Sensitivity and uncertainty analysis of reactivities for UO2 and MOX fueled PWR cells

    SciTech Connect

    Foad, Basma; Takeda, Toshikazu

    2015-12-31

    The purpose of this paper is to apply our improved method for calculating sensitivities and uncertainties of reactivity responses for UO2 and MOX fueled pressurized water reactor cells. The improved method has been used to calculate sensitivity coefficients relative to infinite dilution cross-sections, where the self-shielding effect is taken into account. Two types of reactivities are considered: Doppler reactivity and coolant void reactivity; for each type of reactivity, the sensitivities are calculated for small and large perturbations. The results demonstrate that the reactivity responses have larger relative uncertainty than eigenvalue responses. In addition, the uncertainty of coolant void reactivity is much greater than that of Doppler reactivity, especially for large perturbations. The sensitivity coefficients and uncertainties of both reactivities were verified by comparison with SCALE code results using the ENDF/B-VII library, and good agreement was found.

  4. Sensitivity analysis of a two-dimensional quantitative microbiological risk assessment: keeping variability and uncertainty separated.

    PubMed

    Busschaert, Pieter; Geeraerd, Annemie H; Uyttendaele, Mieke; Van Impe, Jan F

    2011-08-01

    The aim of quantitative microbiological risk assessment is to estimate the risk of illness caused by the presence of a pathogen in a food type, and to study the impact of interventions. Because of inherent variability and uncertainty, risk assessments are generally conducted stochastically, and if possible it is advised to characterize variability separately from uncertainty. Sensitivity analysis indicates to which of the input variables the outcome of a quantitative microbiological risk assessment is most sensitive. Although a number of methods exist to apply sensitivity analysis to a risk assessment with probabilistic input variables (such as contamination, storage temperature, storage duration, etc.), it is challenging to perform sensitivity analysis in the case where a risk assessment includes a separate characterization of variability and uncertainty of input variables. A procedure is proposed that focuses on the relation between risk estimates obtained by Monte Carlo simulation and the location of pseudo-randomly sampled input variables within the uncertainty and variability distributions. Within this procedure, two methods are used--an ANOVA-like model and Sobol sensitivity indices--to obtain and compare the impact of variability and of uncertainty of all input variables, and of model uncertainty and scenario uncertainty. As a case study, this methodology is applied to a risk assessment to estimate the risk of contracting listeriosis due to consumption of deli meats.
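
    The separation the abstract insists on is commonly implemented as second-order (two-dimensional) Monte Carlo: an outer loop over uncertainty, an inner loop over variability. The sketch below uses a standard exponential dose-response form, with all distributions hypothetical.

      # Second-order Monte Carlo sketch: the outer loop samples *uncertain*
      # parameters (a dose-response slope known imprecisely); the inner
      # loop samples *variability* (serving-to-serving dose). All
      # distributions are hypothetical.
      import numpy as np

      rng = np.random.default_rng(5)
      n_unc, n_var = 200, 10_000

      risks = np.empty(n_unc)
      for i in range(n_unc):
          r = 10 ** rng.normal(-12, 0.5)  # uncertain dose-response slope
          dose = rng.lognormal(6.0, 1.5, n_var)  # variable dose, CFU
          risks[i] = np.mean(1.0 - np.exp(-r * dose))  # exponential model

      print(f"median risk  = {np.median(risks):.2e}")
      print(f"95% interval = [{np.percentile(risks, 2.5):.2e}, "
            f"{np.percentile(risks, 97.5):.2e}]  (uncertainty about mean risk)")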

  5. Sensitivity and Uncertainty Analysis for Nuclear Criticality Safety Using KENO in the SCALE Code System

    NASA Astrophysics Data System (ADS)

    Rearden, B. T.

    Sensitivity and uncertainty methods have been developed to aid in the establishment of areas of applicability and validation of computer codes and nuclear data for nuclear criticality safety studies. A key component of this work is the generation of sensitivity and uncertainty parameters for typically several hundred benchmark experiments used in validation exercises. Previously, only one-dimensional sensitivity tools were available for this task, which necessitated the remodeling of multidimensional inputs in order for such an analysis to be performed. This paper describes the development of the SEN3 Monte Carlo-based sensitivity analysis sequence for SCALE.

  6. Uncertainty

    USGS Publications Warehouse

    Hunt, Randall J.

    2012-01-01

    Management decisions will often be directly informed by model predictions. However, we now know there can be no expectation of a single ‘true’ model; thus, model results are uncertain. Understandable reporting of underlying uncertainty provides necessary context to decision-makers, as model results are used for management decisions. This, in turn, forms a mechanism by which groundwater models inform a risk-management framework because uncertainty around a prediction provides the basis for estimating the probability or likelihood of some event occurring. Given that the consequences of management decisions vary, it follows that the extent of and resources devoted to an uncertainty analysis may depend on the consequences. For events with low impact, a qualitative, limited uncertainty analysis may be sufficient for informing a decision. For events with a high impact, on the other hand, the risks might be better assessed and associated decisions made using a more robust and comprehensive uncertainty analysis. The purpose of this chapter is to provide guidance on uncertainty analysis through discussion of concepts and approaches, which can vary from heuristic (i.e. the modeller’s assessment of prediction uncertainty based on trial and error and experience) to a comprehensive, sophisticated, statistics-based uncertainty analysis. Most of the material presented here is taken from Doherty et al. (2010) if not otherwise cited. Although the treatment here is necessarily brief, the reader can find citations for the source material and additional references within this chapter.

  7. The Role That Clouds Play in Uncertainty in the Climate Sensitivity

    NASA Astrophysics Data System (ADS)

    Dessler, A. E.

    2014-12-01

    Much of the uncertainty in evaluations of the climate sensitivity comes from the uncertainty in the cloud feedback. This comes from the unique property that clouds affect both the solar and infrared energy budgets of the planet, and these effects tend to offset. As a result, the net cloud effect is a small difference between large, offsetting terms. In addition, these estimates tend to be derived from short-term climate variations (e.g., ENSO). I will examine various estimates of the cloud feedback and investigate what they can tell us about the equilibrium climate sensitivity and its uncertainty.

  8. Calculating Sensitivities, Response and Uncertainties Within LODI for Precipitation Scavenging

    SciTech Connect

    Loosmore, G; Hsieh, H; Grant, K

    2004-01-21

    This paper describes an investigation into the uses of first-order, local sensitivity analysis in a Lagrangian dispersion code. The goal of the project is to gain knowledge not only about the sensitivity of the dispersion code predictions to the specific input parameters of interest, but also to better understand the uses and limitations of sensitivity analysis within such a context. The dispersion code of interest here is LODI, which is used for modeling emergency release scenarios at the Department of Energy's National Atmospheric Release Advisory Center (NARAC) at Lawrence Livermore National Laboratory. The NARAC system provides both real-time operational predictions and detailed assessments for atmospheric releases of hazardous materials. LODI is driven by a meteorological data assimilation model and an in-house version of COAMPS, the Naval Research Laboratory's mesoscale weather forecast model.

  9. Quantifying uncertainty and sensitivity in sea ice models

    SciTech Connect

    Urrego Blanco, Jorge Rolando; Hunke, Elizabeth Clare; Urban, Nathan Mark

    2016-07-15

    The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.

  10. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
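
    The first-order moment method reduces to propagating input variances through sensitivity derivatives: sigma_f^2 ≈ sum_i (df/dx_i)^2 sigma_i^2 for independent inputs. A sketch with a toy response function and central-difference derivatives (the function and values are hypothetical):

      # First-order statistical moment propagation: output mean and
      # variance approximated from derivatives at the input means.
      import numpy as np

      def f(x):  # toy response, e.g. a lift coefficient surrogate
          return x[0] ** 2 + 3.0 * np.sin(x[1])

      mu = np.array([1.0, 0.5])      # input means
      sigma = np.array([0.05, 0.02]) # input std deviations (independent)

      h = 1e-6  # central-difference derivatives df/dx_i at the mean
      grad = np.array([
          (f(mu + h * np.eye(2)[i]) - f(mu - h * np.eye(2)[i])) / (2 * h)
          for i in range(2)
      ])

      mean_f = f(mu)                        # first-order mean estimate
      var_f = np.sum((grad * sigma) ** 2)   # sum of (df/dx_i)^2 sigma_i^2
      print(f"E[f] ~ {mean_f:.4f}, sd[f] ~ {np.sqrt(var_f):.4f}")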

  11. Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide

    DOE PAGES

    Favorite, Jeffrey A.; Perkó, Zoltán; Kiedrowski, Brian C.; ...

    2017-03-01

    The evaluation of uncertainties is essential for criticality safety. Our paper deals with material density and composition uncertainties and provides guidance on how traditional first-order sensitivity methods can be used to predict their effects. Unlike problems that deal with traditional cross-section uncertainty analysis, material density and composition-related problems are often characterized by constraints that do not allow arbitrary and independent variations of the input parameters. Their proper handling requires constrained sensitivities that take into account the interdependence of the inputs. This paper discusses how traditional unconstrained isotopic density sensitivities can be calculated using the adjoint sensitivity capabilities of the popular Monte Carlo codes MCNP6 and SCALE 6.2, and we also present the equations to be used when forward and adjoint flux distributions are available. Subsequently, we show how the constrained sensitivities can be computed using the unconstrained (adjoint-based) sensitivities as well as by applying central differences directly. We present three distinct procedures for enforcing the constraint on the input variables, each leading to different constrained sensitivities. As a guide, the sensitivity and uncertainty formulas for several frequently encountered specific cases involving densities and compositions are given. One analytic k∞ example highlights the relationship between constrained sensitivity formulas and central differences, and a more realistic numerical problem reveals similarities among the computer codes used and differences among the three methods of enforcing the constraint.
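
    One of the constraint-enforcement ideas can be sketched directly: perturb a single isotope's fraction, renormalize the remainder so the total stays fixed, and take central differences. This is an illustrative procedure under an assumed renormalization rule (the paper compares three distinct procedures), and the k-infinity surrogate is a toy.

      # Constrained central-difference sensitivity: perturb one weight
      # fraction, renormalize the others to preserve the total. The k_inf
      # surrogate and baseline fractions are toys.
      import numpy as np

      def k_inf(w):
          # Toy multiplication-factor surrogate vs. weight fractions.
          return 1.0 + 0.8 * w[0] - 0.3 * w[1] - 0.1 * w[2]

      w0 = np.array([0.05, 0.90, 0.05])  # baseline fractions, sum = 1

      def perturb(w, i, rel):
          wp = w.copy()
          wp[i] *= (1.0 + rel)
          others = [j for j in range(len(w)) if j != i]
          wp[others] *= (1.0 - wp[i]) / wp[others].sum()  # renormalize rest
          return wp

      i, d = 0, 1e-4
      s = (k_inf(perturb(w0, i, d)) - k_inf(perturb(w0, i, -d))) / (2 * d)
      print(f"constrained dk/d(ln w[{i}]): {s:.4f}")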

  12. Sensitivity of Airburst Damage Prediction to Asteroid Characterization Uncertainty

    NASA Astrophysics Data System (ADS)

    Mathias, Donovan; Wheeler, Lorien; Dotson, Jessie L.

    2016-10-01

    Characterizing the level of risk posed by asteroid impacts is essential to developing informed mitigation criteria, response plans, and long-term survey and characterization strategies for potentially hazardous asteroids. A physics-based impact risk (PBIR) model has been created to assess the consequences of potential asteroid strikes by combining probabilistic sampling of uncertain impact parameters with numerical simulation of the atmospheric flight, breakup, and resulting ground damage for each sampled impact case. The model includes a Monte Carlo framework that allows the uncertainties in the potential impact parameters to be described in terms of probability distributions, and produces statistical results that support inference regarding the threat level across those ranges. This work considers the PBIR model outputs in terms of potential threat characterization metrics for decision support. Several metrics are assessed, from the single estimated casualty (Ec) parameter to more descriptive distribution functions. Distributions are shown for aggregate risk, risk versus asteroid size, and risk to specific geographic regions. In addition, these results show how the uncertain properties of potential impactors can lead to different conclusions about optimal survey and characterization strategies.

  13. Reducing capture zone uncertainty with a systematic sensitivity analysis.

    PubMed

    Esling, Steven P; Keller, John E; Miller, Kenneth J

    2008-01-01

    The U.S. Environmental Protection Agency has established several methods to delineate wellhead protection areas (WHPAs) around community wells in order to protect them from surface contamination sources. Delineating a WHPA often requires defining the capture zone for a well. Generally, analytical models or arbitrary setback zones have been used to define the capture zone in areas where little is known about the distribution of hydraulic head, hydraulic conductivity, or recharge. Numerical modeling, however, even in areas of sparse data, offers distinct advantages over the more simplified analytical models or arbitrary setback zones. The systematic approach discussed here calibrates a numerical flow model to regional topography and then applies a matrix of plausible recharge to hydraulic conductivity ratios (R/K) to investigate the impact on the size and shape of the capture zone. This approach does not attempt to determine the uncertainty of the model but instead yields several possible capture zones, the composite of which is likely to contain the actual capture zone. A WHPA based on this composite capture zone will protect ground water resources better than one based on any individual capture zone. An application of the method to three communities illustrates development of the R/K matrix and demonstrates that the method is particularly well suited for determining capture zones in alluvial aquifers.

  14. Modelling survival: exposure pattern, species sensitivity and uncertainty

    NASA Astrophysics Data System (ADS)

    Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I.; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B.; van den Brink, Paul J.; Veltman, Karin; Vogel, Sören; Zimmer, Elke I.; Preuss, Thomas G.

    2016-07-01

    The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how well GUTS, calibrated with short-term survival data of Gammarus pulex exposed to four pesticides, can forecast effects of longer-term pulsed exposures. Thirdly, we tested the ability of GUTS to estimate 14-day median effect concentrations of malathion for a range of species and use these estimates to build species sensitivity distributions for different exposure patterns. We find that GUTS adequately predicts survival across exposure patterns that vary over time. When toxicity is assessed for time-variable concentrations species may differ in their responses depending on the exposure profile. This can result in different species sensitivity rankings and safe levels. The interplay of exposure pattern and species sensitivity deserves systematic investigation in order to better understand how organisms respond to stress, including humans.

  15. Modelling survival: exposure pattern, species sensitivity and uncertainty

    PubMed Central

    Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I.; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B.; Van den Brink, Paul J.; Veltman, Karin; Vogel, Sören; Zimmer, Elke I.; Preuss, Thomas G.

    2016-01-01

    The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how well GUTS, calibrated with short-term survival data of Gammarus pulex exposed to four pesticides, can forecast effects of longer-term pulsed exposures. Thirdly, we tested the ability of GUTS to estimate 14-day median effect concentrations of malathion for a range of species and use these estimates to build species sensitivity distributions for different exposure patterns. We find that GUTS adequately predicts survival across exposure patterns that vary over time. When toxicity is assessed for time-variable concentrations species may differ in their responses depending on the exposure profile. This can result in different species sensitivity rankings and safe levels. The interplay of exposure pattern and species sensitivity deserves systematic investigation in order to better understand how organisms respond to stress, including humans. PMID:27381500

  16. Uncertainty of relative sensitivity factors in glow discharge mass spectrometry

    NASA Astrophysics Data System (ADS)

    Meija, Juris; Methven, Brad; Sturgeon, Ralph E.

    2017-10-01

    The concept of the relative sensitivity factors required for the correction of the measured ion beam ratios in pin-cell glow discharge mass spectrometry is examined in detail. We propose a data-driven model for predicting the relative response factors, which relies on a non-linear least squares adjustment and analyte/matrix interchangeability phenomena. The model provides a self-consistent set of response factors for any analyte/matrix combination of any element that appears as either an analyte or matrix in at least one known response factor.

  17. Soil moisture sensitivity of autotrophic and heterotrophic forest floor respiration in boreal xeric pine and mesic spruce forests

    NASA Astrophysics Data System (ADS)

    Ťupek, Boris; Launiainen, Samuli; Peltoniemi, Mikko; Heikkinen, Jukka; Lehtonen, Aleksi

    2016-04-01

    In most process-based soil carbon models, litter decomposition rates are modified by environmental conditions, linked with soil heterotrophic CO2 emissions, and used for estimating soil carbon sequestration. By the mass balance equation, the variation in measured litter inputs and measured heterotrophic soil CO2 effluxes should therefore indicate the soil carbon stock changes needed by soil carbon management for mitigation of anthropogenic CO2 emissions, provided the sensitivity functions of the applied model suit the environmental conditions, e.g. soil temperature and moisture. We evaluated the response forms of autotrophic and heterotrophic forest floor respiration to soil temperature and moisture in four boreal forest sites of the International Cooperative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests) by a soil trenching experiment during the year 2015 in southern Finland. As expected, both autotrophic and heterotrophic forest floor respiration components were primarily controlled by soil temperature, and exponential regression models generally explained more than 90% of the variance. Soil moisture regression models on average explained less than 10% of the variance, and the response forms varied between Gaussian for the autotrophic forest floor respiration component and linear for the heterotrophic forest floor respiration component. Although the percentage of variance in soil heterotrophic respiration explained by soil moisture was small, the observed reduction of CO2 emissions at higher moisture levels suggests that the soil moisture response of soil carbon models that do not account for the reduction due to excessive moisture should be re-evaluated in order to estimate the right levels of soil carbon stock changes. Our further study will include evaluation of process-based soil carbon models against the annual heterotrophic respiration and soil carbon stocks.

  18. Amphetamine-induced sensitization and reward uncertainty similarly enhance incentive salience for conditioned cues.

    PubMed

    Robinson, Mike J F; Anselme, Patrick; Suchomel, Kristen; Berridge, Kent C

    2015-08-01

    Amphetamine and stress can sensitize mesolimbic dopamine-related systems. In Pavlovian autoshaping, repeated exposure to uncertainty of reward prediction can enhance motivated sign-tracking or attraction to a discrete reward-predicting cue (lever-conditioned stimulus; CS+), as well as produce cross-sensitization to amphetamine. However, it remains unknown how amphetamine sensitization or repeated restraint stress interact with uncertainty in controlling CS+ incentive salience attribution reflected in sign-tracking. Here rats were tested in 3 successive phases. First, different groups underwent either induction of amphetamine sensitization or repeated restraint stress, or else were not sensitized or stressed as control groups (either saline injections only, or no stress or injection at all). All then received Pavlovian autoshaping training under either certainty conditions (100% CS-UCS association) or uncertainty conditions (50% CS-UCS association and uncertain reward magnitude). During training, rats were assessed for sign-tracking to the CS+ lever versus goal-tracking to the sucrose dish. Finally, all groups were tested for psychomotor sensitization of locomotion revealed by an amphetamine challenge. Our results confirm that reward uncertainty enhanced sign-tracking attraction toward the predictive CS+ lever, at the expense of goal-tracking. We also found that amphetamine sensitization promoted sign-tracking even in rats trained under CS-UCS certainty conditions, raising them to sign-tracking levels equivalent to the uncertainty group. Combining amphetamine sensitization and uncertainty conditions did not add together to elevate sign-tracking further above the relatively high levels induced by either manipulation alone. In contrast, repeated restraint stress enhanced subsequent amphetamine-elicited locomotion, but did not enhance CS+ attraction.

  19. Sensitivity and uncertainty in crop water footprint accounting: a case study for the Yellow River basin

    NASA Astrophysics Data System (ADS)

    Zhuo, L.; Mekonnen, M. M.; Hoekstra, A. Y.

    2014-06-01

    Water Footprint Assessment is a fast-growing field of research, but as yet little attention has been paid to the uncertainties involved. This study investigates the sensitivity of and uncertainty in crop water footprint (in m3 t-1) estimates related to uncertainties in important input variables. The study focuses on the green (from rainfall) and blue (from irrigation) water footprint of producing maize, soybean, rice, and wheat at the scale of the Yellow River basin in the period 1996-2005. A grid-based daily water balance model at a 5 by 5 arcmin resolution was applied to compute green and blue water footprints of the four crops in the Yellow River basin in the period considered. The one-at-a-time method was carried out to analyse the sensitivity of the crop water footprint to fractional changes of seven individual input variables and parameters: precipitation (PR), reference evapotranspiration (ET0), crop coefficient (Kc), crop calendar (planting date with constant growing degree days), soil water content at field capacity (Smax), yield response factor (Ky) and maximum yield (Ym). Uncertainties in crop water footprint estimates related to uncertainties in four key input variables: PR, ET0, Kc, and crop calendar were quantified through Monte Carlo simulations. The results show that the sensitivities and uncertainties differ across crop types. In general, the water footprint of crops is most sensitive to ET0 and Kc, followed by the crop calendar. Blue water footprints were more sensitive to input variability than green water footprints. The smaller the annual blue water footprint is, the higher its sensitivity to changes in PR, ET0, and Kc. The uncertainties in the total water footprint of a crop due to combined uncertainties in climatic inputs (PR and ET0) were about ±20% (at 95% confidence interval). The effect of uncertainties in ET0 was dominant compared to that of PR. The uncertainties in the total water footprint of a crop as a result of combined key input

  20. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    SciTech Connect

    Shott, Greg J.; Yucel, Vefa; Desotell, Lloyd; Pyles, G.; Carilli, Jon

    2007-06-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models, which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
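
    A sketch of the Morris screening step, assuming the SALib Python package; the toy flux surrogate and parameter bounds stand in for the radon flux-density models and site parameters named in the study.

      # Morris one-at-a-time screening with SALib; the three inputs mirror
      # the kinds of parameters the study ranks, but the flux model is a toy.
      import numpy as np
      from SALib.sample.morris import sample
      from SALib.analyze import morris

      problem = {
          "num_vars": 3,
          "names": ["D_eff", "emanation", "inventory"],
          "bounds": [[1e-7, 1e-5], [0.1, 0.4], [1e3, 1e5]],
      }

      X = sample(problem, N=100, num_levels=4)
      Y = X[:, 2] * X[:, 1] * np.sqrt(X[:, 0])  # toy Rn-222 flux surrogate
      Si = morris.analyze(problem, X, Y, num_levels=4)
      print(dict(zip(problem["names"], Si["mu_star"])))  # mean |effect|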

  1. Dose mapping sensitivity to deformable registration uncertainties in fractionated radiotherapy - applied to prostate proton treatments.

    PubMed

    Tilly, David; Tilly, Nina; Ahnesjö, Anders

    2013-06-14

    Calculation of accumulated dose in fractionated radiotherapy based on spatial mapping of the dose points generally requires deformable image registration (DIR). The accuracy of the accumulated dose thus depends heavily on the DIR quality. This motivates investigations of how the registration uncertainty influences dose planning objectives and treatment outcome predictions. A framework was developed where the dose mapping can be associated with a variable known uncertainty to simulate the DIR uncertainties in a clinical workflow. The framework enabled us to study the dependence of dose planning metrics, and the predicted treatment outcome, on the DIR uncertainty. The additional planning margin needed to compensate for the dose mapping uncertainties can also be determined. We applied the simulation framework to a hypofractionated proton treatment of the prostate using two different scanning beam spot sizes to also study the dose mapping sensitivity to penumbra widths. The planning parameter most sensitive to the DIR uncertainty was found to be the target D95. We found that the registration mean absolute error needs to be ≤0.20 cm to obtain an uncertainty better than 3% of the calculated D95 for intermediate-sized penumbras. Use of larger margins in constructing the PTV from the CTV relaxed the registration uncertainty requirements at the cost of increased dose burdens to the surrounding organs at risk. The DIR uncertainty requirements should be considered in an adaptive radiotherapy workflow since this uncertainty can have a significant impact on the accumulated dose. The simulation framework enabled quantification of the accuracy requirement for DIR algorithms to provide satisfactory clinical accuracy in the accumulated dose.
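    The effect of registration error on D95 can be mimicked with a toy experiment: perturb the sampling positions of a 1-D dose profile with random displacements whose scale plays the role of the registration mean absolute error, and read off the spread in D95 (the dose covering 95% of the target, i.e. the 5th percentile of dose inside the target). The profile, penumbra width, and error model below are all illustrative assumptions, not the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D dose profile (% of prescription) on a 0.1 cm grid: ~100% inside a
# 6 cm target, sigmoid penumbras. Illustrative only -- not the paper's setup.
x = np.arange(-8.0, 8.0, 0.1)
penumbra_cm = 0.5
dose = 100.0 / (1.0 + np.exp((np.abs(x) - 3.0) / penumbra_cm))
target = np.abs(x) <= 3.0

def d95(mapped_dose):
    """Dose covering 95% of the target = 5th percentile inside the target."""
    return np.percentile(mapped_dose[target], 5)

# Model the registration error as a random voxel displacement; its scale
# plays the role of the registration mean absolute error. Real DIR errors
# are spatially correlated, which this crude toy ignores.
for sigma_cm in (0.1, 0.2, 0.4):
    samples = [d95(np.interp(x + rng.normal(0.0, sigma_cm, x.size), x, dose))
               for _ in range(2000)]
    print(f"error scale {sigma_cm:.1f} cm: "
          f"D95 = {np.mean(samples):.1f} +/- {np.std(samples):.1f}")
```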

  2. Sensitivity and uncertainty in the effective delayed neutron fraction (β_eff)

    SciTech Connect

    Kodeli, I. I.

    2012-07-01

    Precise knowledge of the effective delayed neutron fraction (β_eff) and of the corresponding uncertainty is important for reactor safety analysis. The interest in developing the methodology for estimating the uncertainty in β_eff was expressed in the scope of the UAM project of the OECD/NEA. A novel approach for the calculation of the nuclear data sensitivity and uncertainty of the effective delayed neutron fraction is proposed, based on linear perturbation theory. The method allows the detailed analysis of the components of the β_eff uncertainty. The procedure was implemented in the SUSD3D sensitivity and uncertainty code applied to several fast neutron benchmark experiments from the ICSBEP and IRPhE databases. According to the JENDL-4 covariance matrices, and taking into account the uncertainty in the cross sections and in the prompt and delayed fission spectra, the total uncertainty in β_eff was found to be of the order of ~2 to ~3.5% for the studied fast experiments. (authors)

  3. Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications

    SciTech Connect

    Arbanas, G.; Williams, M.L.; Leal, L.C.; Dunn, M.E.; Khuwaileh, B.A.; Wang, C.; Abdel-Khalik, H.

    2015-01-15

    The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX cross section processing system [M.E. Dunn and N.M. Greene, “AMPX-2000: A Cross-Section Processing System for Generating Nuclear Data for Criticality Safety Applications,” Trans. Am. Nucl. Soc. 86, 118–119 (2002)]. The IS/UQ method aims to quantify and prioritize the cross section measurements, along with the uncertainties needed to yield a given nuclear application's target response uncertainty, at minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore, we have incorporated integral benchmark experiment (IBE) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method could be applied to systematic and statistical uncertainties in a self-consistent way and how it could be used to optimize uncertainties of IBEs and differential cross section data simultaneously. We itemize contributions to the cost of differential data measurements needed to define a realistic cost function.
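    The generalized linear least-squares adjustment mentioned in the abstract has a compact closed form. The sketch below applies it to a synthetic two-parameter, one-benchmark example; all numbers are invented for illustration and are unrelated to the INSURE module.

```python
import numpy as np

# Generalized linear least-squares (GLLS) adjustment: combine prior cross
# section data x (covariance C) with integral benchmark measurements m
# (covariance V) through the sensitivity matrix S, where the benchmark
# responses are modeled as k = S @ x. All values below are synthetic.
x = np.array([1.00, 1.00])            # prior cross sections (normalized)
C = np.diag([0.05**2, 0.08**2])       # prior covariance (assumed)
S = np.array([[0.6, 0.4]])            # benchmark sensitivity profile (assumed)
m = np.array([1.02])                  # measured benchmark response
V = np.array([[0.01**2]])             # benchmark measurement covariance

k = S @ x                                        # calculated response
G = C @ S.T @ np.linalg.inv(S @ C @ S.T + V)     # gain matrix
x_post = x + G @ (m - k)                         # adjusted cross sections
C_post = C - G @ S @ C                           # reduced posterior covariance

print("posterior x:", x_post)
print("prior sd   :", np.sqrt(np.diag(C)))
print("post  sd   :", np.sqrt(np.diag(C_post)))
```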

  4. Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications

    NASA Astrophysics Data System (ADS)

    Arbanas, G.; Williams, M. L.; Leal, L. C.; Dunn, M. E.; Khuwaileh, B. A.; Wang, C.; Abdel-Khalik, H.

    2015-01-01

    The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX cross section processing system [M.E. Dunn and N.M. Greene, "AMPX-2000: A Cross-Section Processing System for Generating Nuclear Data for Criticality Safety Applications," Trans. Am. Nucl. Soc. 86, 118-119 (2002)]. The IS/UQ method aims to quantify and prioritize the cross section measurements, along with the uncertainties needed to yield a given nuclear application's target response uncertainty, at minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore, we have incorporated integral benchmark experiment (IBE) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method could be applied to systematic and statistical uncertainties in a self-consistent way and how it could be used to optimize uncertainties of IBEs and differential cross section data simultaneously. We itemize contributions to the cost of differential data measurements needed to define a realistic cost function.

  5. Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications

    SciTech Connect

    Arbanas, Goran; Williams, Mark L; Leal, Luiz C; Dunn, Michael E; Khuwaileh, Bassam A.; Wang, C; Abdel-Khalik, Hany

    2015-01-01

    The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX system [1]. The IS/UQ method aims to quantify and prioritize the cross section measurements, along with the uncertainties needed to yield a given nuclear application's target response uncertainty, at minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore, we have incorporated integral benchmark experiment (IBE) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method could be applied to systematic and statistical uncertainties in a self-consistent way. We show how the IS/UQ method could be used to optimize uncertainties of IBEs and differential cross section data simultaneously.

  6. Uncertainty and Sensitivity Analyses Plan. Draft for Peer Review: Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.

  7. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    SciTech Connect

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) generation of samples from uncertain analysis inputs, (3) propagation of sampled inputs through an analysis, (4) presentation of uncertainty analysis results, and (5) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top-down coefficient of concordance, and variance decomposition.

  8. A Methodology For Performing Global Uncertainty And Sensitivity Analysis In Systems Biology

    PubMed Central

    Marino, Simeone; Hogue, Ian B.; Ray, Christian J.; Kirschner, Denise E.

    2008-01-01

    Accuracy of results from mathematical and computer models of biological systems is often complicated by the presence of uncertainties in experimental data that are used to estimate parameter values. Current mathematical modeling approaches typically use either single-parameter or local sensitivity analyses. However, these methods do not accurately assess uncertainty and sensitivity in the system as, by default, they hold all other parameters fixed at baseline values. Using techniques described within, we demonstrate how a multi-dimensional parameter space can be studied globally so all uncertainties can be identified. Further, uncertainty and sensitivity analysis techniques can help to identify and ultimately control uncertainties. In this work we develop methods for applying existing analytical tools to perform analyses on a variety of mathematical and computer models. We compare two specific types of global sensitivity analysis indexes that have proven to be among the most robust and efficient. Through familiar and new examples of mathematical and computer models, we provide a complete methodology for performing these analyses, both in deterministic and stochastic settings, and propose novel techniques to handle problems encountered during this type of analysis. PMID:18572196
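    A common concrete realization of this methodology is Latin hypercube sampling followed by partial rank correlation coefficients (PRCC). The sketch below implements PRCC from scratch on a toy monotone model; the test function and sample sizes are assumptions for demonstration.

```python
import numpy as np
from scipy.stats import rankdata

def prcc(X, y):
    """Partial rank correlation of each column of X with y.

    Rank-transform everything, then for parameter j correlate the residuals
    of rank(x_j) and rank(y) after regressing both on the other parameters.
    """
    Xr = np.column_stack([rankdata(c) for c in X.T])
    yr = rankdata(y)
    n, k = Xr.shape
    out = np.empty(k)
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        rx = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
        ry = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out[j] = np.corrcoef(rx, ry)[0, 1]
    return out

# Demo on a monotone nonlinear test model with three sampled parameters.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(1000, 3))
y = np.exp(2.0 * X[:, 0]) + 0.5 * X[:, 1] + 0.01 * rng.normal(size=1000)
print("PRCC:", np.round(prcc(X, y), 3))   # parameter 3 (unused) should be ~0
```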

  9. Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat

    2016-01-01

    The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse-collocation non-intrusive polynomial chaos approach along with global non-linear sensitivity analysis was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total-order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, which came from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and the rates of three chemical reactions influencing N, N(+), O, and O(+) number densities in the flow field.
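    A one-dimensional, full-order non-intrusive polynomial chaos expansion shows the core idea in miniature: fit probabilists' Hermite coefficients by regression and read the output variance off the spectrum (E[He_j·He_k] = k!·δ_jk for a standard-normal input). This is a toy stand-in for the paper's sparse-collocation, 388-parameter setting.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# 1-D non-intrusive polynomial chaos for a standard-normal input: fit the
# coefficients of He_0..He_deg by least squares, then use orthogonality
# (E[He_j He_k] = k! delta_jk) to get the variance from the coefficients.
rng = np.random.default_rng(2)
xi = rng.standard_normal(5000)
y = np.exp(0.3 * xi)                       # toy model with known variance

deg = 8
Psi = He.hermevander(xi, deg)              # columns He_0(xi) ... He_deg(xi)
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

norms = np.array([math.factorial(k) for k in range(deg + 1)])
var_pce = np.sum(coef[1:] ** 2 * norms[1:])    # exclude the mean term He_0

sigma2 = 0.3 ** 2
print(f"PCE variance     : {var_pce:.4f}")
print(f"analytic variance: {(np.exp(sigma2) - 1) * np.exp(sigma2):.4f}")
```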

  10. Sensitivity and Uncertainty Analysis in Chemical Mechanisms for Air Quality Modeling

    NASA Astrophysics Data System (ADS)

    Gao, Dongfen

    1995-01-01

    Ambient ozone in urban and regional air pollution is a serious environmental problem. Air quality models can be used to predict ozone concentrations and explore control strategies. One important component of such air quality models is a chemical mechanism. Sensitivity and uncertainty analysis play an important role in the evaluation of the performance of air quality models. The uncertainties associated with the RADM2 chemical mechanism in predicted concentrations of O3, HCHO, H2O2, PAN, and HNO3 were estimated. Monte Carlo simulations with Latin hypercube sampling were used to estimate the overall uncertainties in concentrations of species of interest, due to uncertainties in chemical parameters. The parameters that were treated as random variables were identified through first-order sensitivity and uncertainty analyses. Recent estimates of uncertainties in rate parameters and product yields were used. The results showed the relative uncertainties in ozone predictions are ±23-50% (1σ relative to the mean) in urban cases, and less than ±20% in rural cases. Uncertainties in HNO3 concentrations are the smallest, followed by HCHO, O3 and PAN. Predicted H2O2 concentrations have the highest uncertainties. Uncertainties in the differences of peak ozone concentrations between base and control cases were also studied. The results show that the uncertainties in the fractional reductions in ozone concentrations were 9-12% with NOx control at an ROG/NOx ratio of 24:1 and 11-33% with ROG control at an ROG/NOx ratio of 6:1. Linear regression analysis of the Monte Carlo results showed that uncertainties in rate parameters for the formation of HNO3, for the reaction HCHO + hν → 2HO2 + CO, for PAN chemistry and for the photolysis of NO2 are most influential to ozone concentrations and differences of ozone. The parameters that are important to ozone concentrations also tend to be relatively influential to other key species

  11. Uncertainty and sensitivity analyses in seismic risk assessments on the example of Cologne, Germany

    NASA Astrophysics Data System (ADS)

    Tyagunov, S.; Pittore, M.; Wieland, M.; Parolai, S.; Bindi, D.; Fleming, K.; Zschau, J.

    2014-06-01

    Both aleatory and epistemic uncertainties associated with different sources and components of risk (hazard, exposure, vulnerability) are present at each step of seismic risk assessments. All individual sources of uncertainty contribute to the total uncertainty, which might be very high and, within the decision-making context, may therefore lead to either very conservative and expensive decisions or the perception of considerable risk. When anatomizing the structure of the total uncertainty, it is therefore important to propagate the different individual uncertainties through the computational chain and to quantify their contribution to the total value of risk. The present study analyses different uncertainties associated with the hazard, vulnerability and loss components by the use of logic trees. The emphasis is on the analysis of epistemic uncertainties, which represent the reducible part of the total uncertainty, including a sensitivity analysis of the resulting seismic risk assessments with regard to the different uncertainty sources. This investigation, being a part of the EU FP7 project MATRIX (New Multi-Hazard and Multi-Risk Assessment Methods for Europe), is carried out for the example of, and with reference to, the conditions of the city of Cologne, Germany, which is one of the MATRIX test cases. At the same time, this particular study does not aim to revise nor to refine the hazard and risk level for Cologne; it is rather to show how large are the existing uncertainties and how they can influence seismic risk estimates, especially in less well-studied areas, if hazard and risk models adapted from other regions are used.

  12. Uncertainty and sensitivity analyses in seismic risk assessments on the example of Cologne, Germany

    NASA Astrophysics Data System (ADS)

    Tyagunov, S.; Pittore, M.; Wieland, M.; Parolai, S.; Bindi, D.; Fleming, K.; Zschau, J.

    2013-12-01

    Both aleatory and epistemic uncertainties associated with different sources and components of risk (hazard, exposure, vulnerability) are present at each step of seismic risk assessments. All individual sources of uncertainty contribute to the total uncertainty, which might be very high and, within the decision-making context, may therefore lead to either very conservative and expensive decisions or the perception of considerable risk. When anatomizing the structure of the total uncertainty, it is therefore important to propagate the different individual uncertainties through the computational chain and to quantify their contribution to the total value of risk. The present study analyzes different uncertainties associated with the hazard, vulnerability and loss components by the use of logic trees. The emphasis is on the analysis of epistemic uncertainties, which represent the reducible part of the total uncertainty, including a sensitivity analysis of the resulting seismic risk assessments with regards to the different uncertainty sources. This investigation, being a part of the EU FP7 project MATRIX (New Multi-Hazard and Multi-Risk Assessment Methods for Europe), is carried out for the example of, and with reference to, the conditions of the city of Cologne, Germany, which is one of the MATRIX test cases. At the same time, this particular study does not aim to revise nor to refine the hazard and risk level for Cologne; it is rather to show how large are the existing uncertainties and how they can influence seismic risk estimates, especially in less well-studied areas, if hazard and risk models adapted from other regions are used.

  13. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    SciTech Connect

    Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare; Turner, Adrian Keith; Jeffery, Nicole

    2016-04-01

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
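    First-order and total Sobol' indices can be estimated with the pick-and-freeze construction sketched below, shown on the Ishigami test function rather than a sea ice emulator; the study additionally samples with Sobol' sequences (e.g., scipy.stats.qmc.Sobol), which is omitted here for brevity.

```python
import numpy as np

# Pick-and-freeze estimation of Sobol' first-order (S_i) and total (T_i)
# indices with two plain Monte Carlo matrices; the Ishigami function is a
# standard test case with known indices (S1~0.31, S2~0.44, S3=0).
def ishigami(X, a=7.0, b=0.1):
    x1, x2, x3 = X.T
    return np.sin(x1) + a * np.sin(x2) ** 2 + b * x3 ** 4 * np.sin(x1)

rng = np.random.default_rng(3)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                          # swap only column i
    fABi = ishigami(ABi)
    S_i = np.mean(fB * (fABi - fA)) / var        # Saltelli (2010) estimator
    T_i = 0.5 * np.mean((fA - fABi) ** 2) / var  # Jansen estimator
    print(f"x{i+1}: S={S_i:.3f}  T={T_i:.3f}")
```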

  14. Delft3D Sensitivity and Uncertainty Analysis for Hurricane Simulations in the North Atlantic

    NASA Astrophysics Data System (ADS)

    Bastidas, L. A.; Knighton, J.; Kline, S. W.; Pistininzi, J.

    2016-02-01

    We use model averaging techniques to estimate hurricane hazards and uncertainties of coastal surge and waves on the North Atlantic U.S. coast. First we determine the output uncertainty associated with the selection of appropriate model parameters by means of a thorough parameter sensitivity analysis of the Delft3D model using data from the track of Hurricane Bob (1991). The sensitive model parameters (of eleven total considered) include wind drag, the depth-induced breaking (gamma_b), and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters and depth-induced breaking alpha_b) and can therefore be excluded to reduce the computational burden of probabilistic surge hazard estimates. The sensitive model parameters demonstrate strong interactions between parameters and a non-linear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited, as proper model calibration is strongly reliant on accurate wind and pressure forcing data. We also present an analysis of the influence of three theoretical wind field parameterizations (NWS23, modified Rankine, and Holland) on the model performance and corresponding uncertainty. Finally, an overall estimate of the joint uncertainty related to the model parameters and the input wind fields is presented.

  15. Sensitivity of seismic hazard evaluations to uncertainties determined from seismic source characterization

    NASA Astrophysics Data System (ADS)

    Tavakoli, Behrooz

    The sensitivity and overall uncertainty in peak ground acceleration (PGA) estimates have been calculated for the city of Tabriz, northwestern Iran, by using a specific randomized blocks design. Eight seismic hazard models and parameters with randomly selected uncertainties at two levels have been considered, and then a linear model between predicted PGA at a given probability level and the uncertainties has been constructed. The input models and parameters are those related to the attenuation, magnitude rupture-length and recurrence relationships, with their uncertainties. Application of this procedure to the studied area indicates that the effects of the simultaneous variation of all eight input models and parameters on the sensitivity of the seismic hazard can be investigated with a decreasing number of computations for all possible combinations at a fixed annual probability. The results show that the choice of a mathematical model of the source mechanism, the attenuation relationships and the definition of seismic parameters are most critical in estimating the sensitivity of the seismic hazard evaluation, in particular at low levels of probability of exceedance. The overall uncertainty in the expected PGA for an annual probability of 0.0021 (10% exceedance in 50 yr) is expressed by a coefficient of variation (CV) of about 34% at the 68% confidence level for a distance of about 5 km from the field of the major faults. The CV will decrease with increasing site-source distance and remains constant, CV = 15%, for distances larger than 15 km. Finally, the effects of treating alternative models on the overall uncertainty are investigated by additional outliers in the input decision.

  16. Status of XSUSA for Sampling Based Nuclear Data Uncertainty and Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Zwermann, W.; Gallner, L.; Klein, M.; Krzykacz-Hausmann, B.; Pasichnyk, I.; Pautz, A.; Velkov, K.

    2013-03-01

    In the present contribution, an overview of the sampling-based XSUSA method for sensitivity and uncertainty analysis with respect to nuclear data is given. The focus is on recent developments and applications of XSUSA. These applications include calculations for critical assemblies, fuel assembly depletion calculations, and steady-state as well as transient reactor core calculations. The analyses are partially performed in the framework of international benchmark working groups (UACSA - Uncertainty Analyses for Criticality Safety Assessment, UAM - Uncertainty Analysis in Modelling). It is demonstrated that particularly for full-scale reactor calculations the influence of the nuclear data uncertainties on the results can be substantial. For instance, for the radial fission rate distributions of mixed UO2/MOX light water reactor cores, the 2σ uncertainties in the core centre and periphery can reach values exceeding 10%. For a fast transient, the resulting time behaviour of the reactor power was covered by a wide uncertainty band. Overall, the results confirm the necessity of adding systematic uncertainty analyses to best-estimate reactor calculations.

  17. Use of Forward Sensitivity Analysis Method to Improve Code Scaling, Applicability, and Uncertainty (CSAU) Methodology

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau; Nam T. Dinh

    2010-10-01

    The Code Scaling, Applicability, and Uncertainty (CSAU) methodology was developed in the late 1980s by the US NRC to systematically quantify reactor simulation uncertainty. Based on the CSAU methodology, Best Estimate Plus Uncertainty (BEPU) methods have been developed and widely used for new reactor designs and power uprates of existing LWRs. In spite of these successes, several aspects of CSAU have been criticized as needing improvement: (1) subjective judgement in the PIRT process; (2) high cost, due to heavy reliance on a large experimental database, many expert man-years of work, and very high computational overhead; (3) mixing of numerical errors with other uncertainties; (4) grid dependence and use of the same numerical grids for both scaled experiments and real plant applications; and (5) user effects. Although a large amount of effort has gone into improving the CSAU methodology, the above issues still exist. With the effort to develop next-generation safety analysis codes, new opportunities appear to take advantage of new numerical methods, better physical models, and modern uncertainty quantification methods. Forward sensitivity analysis (FSA) directly solves the PDEs for parameter sensitivities (defined as the differential of the physical solution with respect to any constant parameter). When the parameter sensitivities are available in a new advanced system analysis code, CSAU could be significantly improved: (1) Quantifying numerical errors: new codes that are fully implicit and have higher-order accuracy can run much faster, with numerical errors quantified by FSA. (2) Quantitative PIRT (Q-PIRT) to reduce subjective judgement and improve efficiency: treat numerical errors as special sensitivities against other physical uncertainties; only parameters having large uncertainty effects on design criteria are considered. (3) Greatly reducing computational costs for uncertainty quantification by (a) choosing optimized time steps and spatial sizes; (b) using gradient information
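    The forward sensitivity idea, augmenting the governing equations with equations for the parameter derivatives, fits in a few lines for an ODE. The toy decay problem below is an assumption for illustration; it is not drawn from the cited work.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Forward sensitivity analysis (FSA) on a toy decay problem du/dt = -k*u:
# augment the state with s = du/dk, which obeys ds/dt = -u - k*s, and
# integrate both together. Analytic check: u = u0*exp(-k*t), s = -u0*t*exp(-k*t).
k, u0 = 0.8, 1.0

def rhs(t, y):
    u, s = y
    return [-k * u, -u - k * s]

sol = solve_ivp(rhs, (0.0, 5.0), [u0, 0.0], rtol=1e-8, atol=1e-10,
                t_eval=np.linspace(0.0, 5.0, 6))
for t, u, s in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t={t:.1f}  u={u:.5f}  du/dk={s:.5f}  exact={-u0*t*np.exp(-k*t):.5f}")
```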

  18. Use of input uncertainty and model sensitivity to guide site exploration

    USGS Publications Warehouse

    Graettinger, A.J.; Reeves, H.W.; Lee, J.; Dethan, D.

    2003-01-01

    Three Quantitatively Directed Exploration (QDE) methods to identify optimum field sampling locations based on model input covariance and model sensitivity are presented. The first method bases site exploration only on the spatial variation in the uncertainty of input properties. The second method uses only the spatial variation in model sensitivities. The third method uses a first-order second-moment (FOSM) method to estimate the spatial variation in the output covariance. The FOSM method estimates output uncertainty using the product of the input covariance and model sensitivity. The three methods are illustrated by means of a synthetic groundwater site simulated with MODFLOW-2000. The groundwater-flow model computes piezometric head and the sensitivity of head to changes in input values. The QDE methods are evaluated by comparing model results to the "true" head. For the synthetic site used in this study, the most effective QDE method was the FOSM method.
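    The FOSM step described above is a one-line covariance propagation, C_out = S·C_in·Sᵀ. The sketch below applies it to an invented two-input head model with a finite-difference Jacobian; the model and numbers are placeholders, not MODFLOW-2000.

```python
import numpy as np

# First-order second-moment (FOSM) propagation: the output covariance is
# approximated as S @ C_in @ S.T, the product of the model sensitivity
# (Jacobian) and the input covariance. Toy head model, assumed units.
def head(params):
    K, R = params                               # conductivity and recharge
    return np.array([R / K, 2.0 * R / K])       # two "observation" heads

mu = np.array([10.0, 0.5])                      # input means (assumed)
C_in = np.diag([2.0 ** 2, 0.05 ** 2])           # input covariance (assumed)

# Numerical Jacobian by central differences.
eps = 1e-6 * mu
S = np.empty((2, 2))
for j in range(2):
    dp = np.zeros(2)
    dp[j] = eps[j]
    S[:, j] = (head(mu + dp) - head(mu - dp)) / (2 * eps[j])

C_out = S @ C_in @ S.T
print("head std devs (FOSM):", np.sqrt(np.diag(C_out)))
```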

  19. Uncertainty and Sensitivity Analyses of a Two-Parameter Impedance Prediction Model

    NASA Technical Reports Server (NTRS)

    Jones, M. G.; Parrott, T. L.; Watson, W. R.

    2008-01-01

    This paper presents comparisons of predicted impedance uncertainty limits derived from Monte-Carlo-type simulations with a Two-Parameter (TP) impedance prediction model and measured impedance uncertainty limits based on multiple tests acquired in NASA Langley test rigs. These predicted and measured impedance uncertainty limits are used to evaluate the effects of simultaneous randomization of each input parameter for the impedance prediction and measurement processes. A sensitivity analysis is then used to further evaluate the TP prediction model by varying its input parameters on an individual basis. The variation imposed on the input parameters is based on measurements conducted with multiple tests in the NASA Langley normal incidence and grazing incidence impedance tubes; thus, the input parameters are assigned uncertainties commensurate with those of the measured data. These same measured data are used with the NASA Langley impedance measurement (eduction) processes to determine the corresponding measured impedance uncertainty limits, such that the predicted and measured impedance uncertainty limits (95% confidence intervals) can be compared. The measured reactance 95% confidence intervals encompass the corresponding predicted reactance confidence intervals over the frequency range of interest. The same is true for the confidence intervals of the measured and predicted resistance at near-resonance frequencies, but the predicted resistance confidence intervals are lower than the measured resistance confidence intervals (no overlap) at frequencies away from resonance. A sensitivity analysis indicates the discharge coefficient uncertainty is the major contributor to uncertainty in the predicted impedances for the perforate-over-honeycomb liner used in this study. This insight regarding the relative importance of each input parameter will be used to guide the design of experiments with test rigs currently being brought on-line at NASA Langley.

  20. Global sensitivity analysis in wastewater treatment plant model applications: prioritizing sources of uncertainty.

    PubMed

    Sin, Gürkan; Gernaey, Krist V; Neumann, Marc B; van Loosdrecht, Mark C M; Gujer, Willi

    2011-01-01

    This study demonstrates the usefulness of global sensitivity analysis in wastewater treatment plant (WWTP) design to prioritize sources of uncertainty and quantify their impact on performance criteria. The study, which is performed with the Benchmark Simulation Model no. 1 plant design, complements a previous paper on input uncertainty characterisation and propagation (Sin et al., 2009). A sampling-based sensitivity analysis is conducted to compute standardized regression coefficients. It was found that this method is able to decompose satisfactorily the variance of plant performance criteria (with R² > 0.9) for effluent concentrations, sludge production and energy demand. This high extent of linearity means that the plant performance criteria can be described as linear functions of the model inputs under the defined plant conditions. In effect, the system of coupled ordinary differential equations can be replaced by multivariate linear models, which can be used as surrogate models. The importance ranking based on the sensitivity measures demonstrates that the most influential factors involve ash content and influent inert particulate COD, among others, which are largely responsible for the uncertainty in predicting sludge production and effluent ammonium concentration. While these results were in agreement with process knowledge, the added value is that the global sensitivity methods can quantify the contribution of the variance of significant parameters; e.g., ash content explains 70% of the variance in sludge production. Further, the importance of formulating appropriate sensitivity analysis scenarios that match the purpose of the model application needs to be highlighted. Overall, the global sensitivity analysis proved to be a powerful tool for explaining and quantifying uncertainties as well as providing insight into devising useful ways for reducing uncertainties in the plant performance. This information can help engineers design robust WWTPs.
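    Standardized regression coefficients are straightforward to compute: standardize inputs and output, regress, and (for near-linear models) read variance shares from the squared coefficients while R² checks linearity. The sketch below does this on synthetic data; the coefficients and noise level are arbitrary assumptions.

```python
import numpy as np

# Standardized regression coefficients (SRCs): regress the standardized
# output on the standardized inputs. For near-linear models, SRC_j^2 is the
# share of output variance explained by input j, and R^2 checks linearity.
rng = np.random.default_rng(4)
n = 2000
X = rng.normal(size=(n, 3)) * [0.15, 0.30, 0.05] + [1.0, 2.0, 0.5]
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.05, n)

Xs = (X - X.mean(0)) / X.std(0)        # standardized inputs (zero mean)
ys = (y - y.mean()) / y.std()          # standardized output (unit variance)
src, res, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

r2 = 1.0 - res[0] / n                  # ys has unit variance, so SS_tot = n
print("SRCs           :", np.round(src, 3))
print("R^2            :", round(float(r2), 3))
print("variance shares:", np.round(src ** 2, 3))
```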

  1. Use of SUSA in Uncertainty and Sensitivity Analysis for INL VHTR Coupled Codes

    SciTech Connect

    Gerhard Strydom

    2010-06-01

    The need for a defendable and systematic uncertainty and sensitivity approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen- und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This interim milestone report provides an overview of the current status of the implementation and testing of SUSA at the INL VHTR Project Office.

  2. Uncertainty and sensitivity analysis of the retrieved essential climate variables from remotely sensed observations

    NASA Astrophysics Data System (ADS)

    Djepa, Vera; Badii, Atta

    2016-04-01

    The sensitivity of the weather and climate system to sea ice thickness (SIT), sea ice draft (SID) and snow depth (SD) in the Arctic is recognized from various studies. A decrease in SIT will affect atmospheric circulation, temperature, precipitation and wind speed in the Arctic and beyond. Ice thermodynamic and dynamic properties depend strongly on sea ice density (ID) and SD. SIT, SID, ID and SD are sensitive to environmental changes in the polar region and impact the climate system. Accurate forecasts of climate change, sea ice mass balance, ocean circulation and sea-atmosphere interactions require long-term records of SIT, SID, SD and ID with error and uncertainty analyses. The SID, SIT, ID and freeboard (F) have been retrieved from the Radar Altimeter (RA) on board ENVISAT and the IceBridge Laser Altimeter (LA) and validated using over 10 years of collocated observations of SID and SD in the Arctic, provided by the European Space Agency (ESA CCI sea ice ECV project). Improved algorithms to retrieve SIT from LA and RA have been derived by applying statistical analysis. The snow depth is obtained from AMSR-E/Aqua and the NASA IceBridge Snow Depth radar. The sea ice properties of pancake ice have been retrieved from the ENVISAT Advanced Synthetic Aperture Radar (ASAR). The uncertainties of the retrieved climate variables have been analysed and the impact of snow depth and sea ice density on retrieved SIT has been estimated. The sensitivity analysis illustrates the impact of uncertainties of input climate variables (ID and SD) on the accuracy of the retrieved output variables (SIT and SID). The developed methodology of uncertainty and sensitivity analysis is essential for assessment of the impact of environmental variables on climate change and better understanding of the relationship between input and output variables. The uncertainty analysis quantifies the uncertainties of the model results and the sensitivity analysis evaluates the contribution of each input variable to

  3. Sensitivity and uncertainty analysis of estimated soil hydraulic parameters for simulating soil water content

    NASA Astrophysics Data System (ADS)

    Gupta, Manika; Garg, Naveen Kumar; Srivastava, Prashant K.

    2014-05-01

    The sensitivity and uncertainty analysis has been carried out for the scalar parameters (soil hydraulic parameters (SHPs)) which govern the simulation of soil water content in the unsaturated soil zone. The study involves field experiments, which were conducted under real field conditions for a wheat crop in Roorkee, India under irrigated conditions. Soil samples were taken for the soil profile of 60 cm depth at an interval of 15 cm in the experimental field to determine soil water retention curves (SWRCs). These experimentally determined SWRCs were used to estimate the SHPs by least square optimization under constrained conditions. Sensitivity of the SHPs estimated by various pedotransfer functions (PTFs), which relate various easily measurable soil properties such as soil texture, bulk density and organic carbon content, is compared with lab-derived parameters to simulate the respective soil water retention curves. Sensitivity analysis was carried out using Monte Carlo simulations and the one-factor-at-a-time approach. The different sets of SHPs, along with the experimentally determined saturated permeability, are then used as input parameters in a physically based root water uptake model to ascertain the uncertainties in simulating soil water content. The generalised likelihood uncertainty estimation (GLUE) procedure was subsequently used to estimate the uncertainty bounds (UB) on the model predictions. It was found that the experimentally obtained SHPs were able to simulate the soil water contents with efficiencies of 70-80% at all the depths for the three irrigation treatments. The SHPs obtained from the PTFs performed with varying uncertainties in simulating the soil water contents. Keywords: Sensitivity analysis, Uncertainty estimation, Pedotransfer functions, Soil hydraulic parameters, Hydrological modelling
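    A bare-bones GLUE loop looks like the sketch below: sample parameter sets, score each against observations with a likelihood measure (here Nash-Sutcliffe efficiency), keep the behavioral sets above a threshold, and form likelihood-weighted prediction bounds. The recession model and threshold are illustrative assumptions, not the study's root water uptake model.

```python
import numpy as np

rng = np.random.default_rng(5)

# GLUE sketch on a toy recession model q(t) = q0 * exp(-t/tau): sample tau,
# score realizations with NSE, keep "behavioral" sets, and build
# likelihood-weighted uncertainty bounds on the prediction.
t = np.linspace(0.0, 10.0, 40)
q_obs = 5.0 * np.exp(-t / 3.0) + rng.normal(0.0, 0.05, t.size)

tau = rng.uniform(1.0, 6.0, 5000)                    # sampled parameter sets
q_sim = 5.0 * np.exp(-t[None, :] / tau[:, None])     # (n_sets, n_times)

nse = 1.0 - np.sum((q_sim - q_obs) ** 2, 1) / np.sum((q_obs - q_obs.mean()) ** 2)
behavioral = nse > 0.7                               # acceptance threshold
w = nse[behavioral] - 0.7                            # simple positive weights
w /= w.sum()

def weighted_quantile(v, q, w):
    order = np.argsort(v)
    return np.interp(q, np.cumsum(w[order]), v[order])

lower = [weighted_quantile(q_sim[behavioral, i], 0.05, w) for i in range(t.size)]
upper = [weighted_quantile(q_sim[behavioral, i], 0.95, w) for i in range(t.size)]
print(f"{behavioral.sum()} behavioral sets; 90% band at t=0: "
      f"[{lower[0]:.2f}, {upper[0]:.2f}]")
```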

  4. Overview and application of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) toolbox

    USDA-ARS?s Scientific Manuscript database

    For several decades, optimization and sensitivity/uncertainty analysis of environmental models has been the subject of extensive research. Although much progress has been made and sophisticated methods developed, the growing complexity of environmental models to represent real-world systems makes it...

  5. PC-BASED SUPERCOMPUTING FOR UNCERTAINTY AND SENSITIVITY ANALYSIS OF MODELS

    EPA Science Inventory

    Evaluating uncertainty and sensitivity of multimedia environmental models that integrate assessments of air, soil, sediments, groundwater, and surface water is a difficult task. It can be an enormous undertaking even for simple, single-medium models (i.e. groundwater only) descr...

  6. INVESTIGATING UNCERTAINTY AND SENSITIVITY IN INTEGRATED, MULTIMEDIA ENVIRONMENTAL MODELS: TOOLS FOR FRAMES-3MRA

    EPA Science Inventory

    Elucidating uncertainty and sensitivity structures in environmental models can be a difficult task, even for low-order, single-medium constructs driven by a unique set of site-specific data. Quantitative assessment of integrated, multimedia models that simulate hundreds of sites...

  8. SCIENTIFIC UNCERTAINTIES IN ATMOSPHERIC MERCURY MODELS II: SENSITIVITY ANALYSIS IN THE CONUS DOMAIN

    EPA Science Inventory

    In this study, we present the response of model results to different scientific treatments in an effort to quantify the uncertainties caused by the incomplete understanding of mercury science and by model assumptions in atmospheric mercury models. Two sets of sensitivity simulati...

  10. INVESTIGATING UNCERTAINTY AND SENSITIVITY IN INTEGRATED MULTIMEDIA ENVIRONMENTAL MODELS: TOOLS FOR 3MRA

    EPA Science Inventory

    Sufficiently elucidating uncertainty and sensitivity structures in environmental models can be a difficult task, even for low-order, single-media constructs driven by a unique set of site-specific data. The ensuing challenge of examining ever more complex, integrated, higher-ord...

  13. The Model Optimization, Uncertainty, and SEnsitivity analysis (MOUSE) toolbox: overview and application

    USDA-ARS?s Scientific Manuscript database

    For several decades, optimization and sensitivity/uncertainty analysis of environmental models has been the subject of extensive research. Although much progress has been made and sophisticated methods developed, the growing complexity of environmental models to represent real-world systems makes it...

  14. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE PAGES

    Dai, Heng; Ye, Ming; Walker, Anthony P.; ...

    2017-03-28

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  15. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    NASA Astrophysics Data System (ADS)

    Dai, Heng; Ye, Ming; Walker, Anthony P.; Chen, Xingyuan

    2017-04-01

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. For demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  16. How to assess the Efficiency and "Uncertainty" of Global Sensitivity Analysis?

    NASA Astrophysics Data System (ADS)

    Haghnegahdar, Amin; Razavi, Saman

    2016-04-01

    Sensitivity analysis (SA) is an important paradigm for understanding model behavior, characterizing uncertainty, improving model calibration, etc. Conventional "global" SA (GSA) approaches are rooted in different philosophies, resulting in different and sometimes conflicting and/or counter-intuitive assessments of sensitivity. Moreover, most global sensitivity techniques are highly computationally demanding if they are to generate robust and stable sensitivity metrics over the entire model response surface. Accordingly, a novel sensitivity analysis method called Variogram Analysis of Response Surfaces (VARS) is introduced to overcome the aforementioned issues. VARS uses the variogram concept to efficiently provide a comprehensive assessment of global sensitivity across a range of scales within the parameter space. Based on the VARS principles, in this study we present innovative ideas to assess (1) the efficiency of GSA algorithms and (2) the level of confidence we can assign to a sensitivity assessment. We use multiple hydrological models with different levels of complexity to explain the new ideas.
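    The central object in VARS is the directional variogram of the response surface, γ_i(h) = ½·E[(f(x + h·e_i) − f(x))²], evaluated over a range of lags h. The sketch below estimates it for a toy two-parameter function; the model and lag set are assumptions for demonstration.

```python
import numpy as np

# Directional variogram of a model response along one parameter, the core
# quantity in VARS: gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2]. Large
# gamma at small lags indicates high sensitivity at fine scales.
def model(X):
    return np.sin(6.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2

rng = np.random.default_rng(6)
n = 20_000
lags = np.array([0.05, 0.1, 0.2, 0.3])

for i in range(2):
    base = rng.uniform(0.0, 1.0 - lags.max(), (n, 2))
    f0 = model(base)
    gammas = []
    for h in lags:
        shifted = base.copy()
        shifted[:, i] += h                      # step along parameter i only
        gammas.append(0.5 * np.mean((model(shifted) - f0) ** 2))
    print(f"x{i+1}: gamma(h) =", np.round(gammas, 4))
```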

  17. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex, Hydrogeologic Systems

    NASA Astrophysics Data System (ADS)

    Wolfsberg, A.; Kang, Q.; Li, C.; Ruskauff, G.; Bhark, E.; Freeman, E.; Prothro, L.; Drellack, S.

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  18. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    SciTech Connect

    Sig Drellack, Lance Prothro

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  19. Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling

    NASA Astrophysics Data System (ADS)

    Pastore, Giovanni; Swiler, L. P.; Hales, J. D.; Novascone, S. R.; Perez, D. M.; Spencer, B. W.; Luzzi, L.; Van Uffelen, P.; Williamson, R. L.

    2015-01-01

    The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code with a recently implemented physics-based model for fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information in the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior predictions with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, significantly higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.

  20. Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling

    SciTech Connect

    Pastore, Giovanni; Swiler, L. P.; Hales, Jason D.; Novascone, Stephen R.; Perez, Danielle M.; Spencer, Benjamin W.; Luzzi, Lelio; Uffelen, Paul Van; Williamson, Richard L.

    2014-10-12

    The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code and a recently implemented physics-based model for the coupled fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information from the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior modeling with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.

  1. Demonstration of Uncertainty Quantification and Sensitivity Analysis for PWR Fuel Performance with BISON

    SciTech Connect

    Zhang, Hongbin; Ladd, Jacob; Zhao, Haihua; Zou, Ling; Burns, Douglas

    2015-11-01

    BISON is an advanced fuels performance code being developed at Idaho National Laboratory and is the code of choice for fuels performance by the U.S. Department of Energy (DOE)’s Consortium for Advanced Simulation of Light Water Reactors (CASL) Program. An approach to uncertainty quantification and sensitivity analysis with BISON was developed and a new toolkit was created. A PWR fuel rod model was developed and simulated by BISON, and uncertainty quantification and sensitivity analysis were performed with eighteen uncertain input parameters. The maximum fuel temperature and gap conductance were selected as the figures of merit (FOM). Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in sensitivity analysis.

  2. Rainfall or parameter uncertainty? The power of sensitivity analysis on grouped factors

    NASA Astrophysics Data System (ADS)

    Nossent, Jiri; Pereira, Fernando; Bauwens, Willy

    2017-04-01

    Hydrological models are typically used to study and represent (part of) the hydrological cycle. In general, the output of these models depends mostly on their input rainfall and parameter values. Both the model parameters and the input precipitation, however, are characterized by uncertainties and therefore lead to uncertainty in the model output. Sensitivity analysis (SA) makes it possible to assess and compare the importance of the different factors for this output uncertainty. To this end, the rainfall uncertainty can be incorporated in the SA by representing it as a probabilistic multiplier. Such a multiplier can be defined for the entire time series, or several of these factors can be determined for every recorded rainfall pulse or for hydrologically independent storm events. As a consequence, the number of parameters included in the SA related to the rainfall uncertainty can be (much) lower or (much) higher than the number of model parameters. Although such analyses can yield interesting results, it remains challenging to determine which type of uncertainty will affect the model output most, owing to the different weight both types will have within the SA. In this study, we apply the variance-based Sobol' sensitivity analysis method to two different hydrological simulators (NAM and HyMod) for four diverse watersheds. Besides the different number of model parameters (NAM: 11 parameters; HyMod: 5 parameters), the setup of our combined sensitivity and uncertainty analysis is also varied by defining a variety of scenarios including diverse numbers of rainfall multipliers. To overcome the issue of the different number of factors and, thus, the different weights of the two types of uncertainty, we build on one of the advantageous properties of the Sobol' SA, i.e. treating grouped parameters as a single parameter. The latter results in a setup with a single factor for each uncertainty type and allows for a straightforward comparison of their importance. In general, the results show a clear
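
    As a hedged illustration of the grouped-factor device described above, the sketch below uses SALib's Sobol' routines with a "groups" specification so that several rainfall multipliers count as a single factor; the two-parameter model and multiplier ranges are invented stand-ins for a simulator like NAM or HyMod.

    ```python
    # Sketch of grouped-factor Sobol' analysis with SALib: the rainfall
    # multipliers are grouped into one factor so the two uncertainty types
    # can be compared on equal footing. All numbers are illustrative.
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 5,
        "names": ["k", "s_max", "m1", "m2", "m3"],
        "bounds": [[0.1, 1.0], [50, 500], [0.5, 2.0], [0.5, 2.0], [0.5, 2.0]],
        "groups": ["params", "params", "rain", "rain", "rain"],
    }

    X = saltelli.sample(problem, 1024, calc_second_order=False)

    def toy_model(x):
        k, s_max, m1, m2, m3 = x
        rain = np.array([10.0, 30.0, 5.0]) * np.array([m1, m2, m3])
        return k * min(rain.sum(), s_max)      # stand-in discharge volume

    Y = np.array([toy_model(x) for x in X])
    Si = sobol.analyze(problem, Y, calc_second_order=False)
    print(dict(zip(["params", "rain"], Si["S1"])))   # one index per group
    ```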

  3. Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.

    2001-01-01

    This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
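
    The moment-matching idea is easy to state concretely. The sketch below propagates first- and second-order moments through an analytic stand-in for the CFD code and checks them against Monte Carlo; the function and input statistics are hypothetical.

    ```python
    # Sketch of first/second-order statistical moment matching versus Monte
    # Carlo for a scalar function of two independent normal inputs.
    import numpy as np

    f   = lambda x: np.exp(0.3 * x[0]) + x[1] ** 2
    df  = lambda x: np.array([0.3 * np.exp(0.3 * x[0]), 2 * x[1]])
    d2f = lambda x: np.array([0.09 * np.exp(0.3 * x[0]), 2.0])  # diagonal Hessian

    mu    = np.array([1.0, 2.0])       # input means
    sigma = np.array([0.1, 0.2])       # input standard deviations

    mean_1st = f(mu)                                      # first-order mean
    mean_2nd = f(mu) + 0.5 * np.sum(d2f(mu) * sigma**2)   # second-order mean
    std_1st  = np.sqrt(np.sum((df(mu) * sigma) ** 2))     # first-order std dev

    rng = np.random.default_rng(1)
    X = rng.normal(mu, sigma, size=(200_000, 2))
    Y = np.exp(0.3 * X[:, 0]) + X[:, 1] ** 2

    print(f"mean: 1st={mean_1st:.4f} 2nd={mean_2nd:.4f} MC={Y.mean():.4f}")
    print(f"std : 1st={std_1st:.4f}              MC={Y.std():.4f}")
    ```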

  4. Sensitivity of key factors and uncertainties in health risk assessment of benzene pollutant.

    PubMed

    Liu, Zhi-quan; Zhang, Ying-hua; Li, Guang-he; Zhang, Xu

    2007-01-01

    Predicting long-term potential human health risks from contaminants in the multimedia environment requires the use of models. However, there is uncertainty associated with these predictions because many parameters are better represented by ranges or probability distributions than by single values. Based on a case study with information from an actual site contaminated with benzene, this study describes the application of the MMSOILS model to predict health risk, with distributions of those predictions generated using Monte Carlo techniques. A sensitivity analysis was performed to evaluate which of the random variables are most important in producing the predicted distributions of health risks. The sensitivity analysis shows that the predicted distributions can be accurately reproduced using a small subset of the random variables. The practical implication of this analysis is the ability to distinguish between important and unimportant random variables in terms of their sensitivity to selected endpoints, which directly translates into a reduction in data collection and modeling effort. It was also demonstrated how the correlation coefficient can be used to evaluate the contribution of each parameter to the overall uncertainty. The integrated uncertainty analysis shows that although the drinking-groundwater risk is similar to the inhalation risk, the uncertainty in the total risk comes predominantly from the drinking-groundwater route, and most of the variance of the total risk comes from four random variables.

  5. Using Global Sensitivity Analysis to Understand the Implications of Epistemic Uncertainty in Earth Systems Modelling

    NASA Astrophysics Data System (ADS)

    Wagener, T.; Pianosi, F.; Almeida, S.; Holcombe, E.

    2016-12-01

    We can define epistemic uncertainties as those that are not well determined by historical observations. This lack of determination can arise because the future is not like the past, because the historical data are unreliable (imperfectly recorded from proxies or missing), or because they are scarce (either because measurements are not available at the right scale or because there is simply no observation network available). This kind of uncertainty is typical for earth system modelling, but our approaches to address it are poorly developed. Because epistemic uncertainties cannot easily be characterised by probability distributions, traditional uncertainty analysis techniques based on Monte Carlo simulation and forward propagation of uncertainty are not adequate. Global Sensitivity Analysis (GSA) can provide an alternative approach where, rather than quantifying the impact of poorly defined or even unknown uncertainties on model predictions, one investigates at what level such uncertainties would start to matter and whether this level is likely to be reached within the relevant time period analysed. The underlying objective of GSA in this case lies in mapping the uncertain input factors onto critical regions of the model output, e.g. where the output exceeds a certain threshold. Methods to implement this mapping step have so far received less attention and significant improvement is needed. We present an example from landslide modelling - a field where observations are scarce, sub-surface characteristics are poorly constrained, and potential future rainfall triggers can be highly uncertain due to climate change. We demonstrate an approach that combines GSA and advanced Classification and Regression Trees (CART) to understand the risk of slope failure for an application in the Caribbean. We close with a discussion of opportunities for further methodological advancement.
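
    A minimal sketch of the factor-mapping step combined with a classification tree, in the spirit of the GSA-plus-CART approach described above; the infinite-slope "factor of safety" model, parameter ranges, and failure threshold are all invented for illustration.

    ```python
    # Sketch of GSA factor mapping with CART: sample uncertain inputs, flag
    # the critical output region (slope failure), and let a shallow tree
    # reveal which inputs, and at what levels, drive failures.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(2)
    n = 5000
    cohesion = rng.uniform(2.0, 15.0, n)     # kPa, poorly constrained
    friction = rng.uniform(20.0, 40.0, n)    # degrees
    rain_mm  = rng.uniform(0.0, 300.0, n)    # storm rainfall, deep uncertainty

    # Toy factor of safety: strength terms minus a rainfall-driven pore
    # pressure penalty (illustrative only, not a calibrated landslide model).
    fos = cohesion / 10.0 + np.tan(np.radians(friction)) - 0.004 * rain_mm
    fails = fos < 1.0                        # critical region of the output

    X = np.column_stack([cohesion, friction, rain_mm])
    tree = DecisionTreeClassifier(max_depth=2).fit(X, fails)
    print(export_text(tree, feature_names=["cohesion", "friction", "rain_mm"]))
    print("failure fraction:", fails.mean().round(3))
    ```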

  6. Uncertainty and sensitivity analyses of ballast life-cycle cost and payback period

    SciTech Connect

    McMahon, James E.; Liu, Xiaomin; Turiel, Ike; Hakim, Sajid; Fisher, Diane

    2000-06-01

    The paper introduces an innovative methodology for evaluating the relative significance of energy-efficient technologies applied to fluorescent lamp ballasts. The method involves replacing the point estimates of the life-cycle cost of the ballasts with uncertainty distributions reflecting the whole spectrum of possible costs and the assessed probability associated with each value. The results of the uncertainty and sensitivity analyses will help analysts reduce data-collection effort and carry out the analysis more efficiently. These methods also enable policy makers to gain an insightful understanding of which efficient technology alternatives benefit or cost which fraction of consumers, given the explicit assumptions of the analysis.

  7. MOESHA: A genetic algorithm for automatic calibration and estimation of parameter uncertainty and sensitivity of hydrologic models

    EPA Science Inventory

    Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...

  8. A framework for optimization and quantification of uncertainty and sensitivity for developing carbon capture systems

    DOE PAGES

    Eslick, John C.; Ng, Brenda; Gao, Qianwen; ...

    2014-12-31

    Under the auspices of the U.S. Department of Energy’s Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.

  9. A framework for optimization and quantification of uncertainty and sensitivity for developing carbon capture systems

    SciTech Connect

    Eslick, John C.; Ng, Brenda; Gao, Qianwen; Tong, Charles H.; Sahinidis, Nikolaos V.; Miller, David C.

    2014-12-31

    Under the auspices of the U.S. Department of Energy’s Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.

  10. Inverse Sensitivity/Uncertainty Methods Development for Nuclear Fuel Cycle Applications

    NASA Astrophysics Data System (ADS)

    Arbanas, G.; Dunn, M. E.; Williams, M. L.

    2014-04-01

    The Standardized Computer Analyses for Licensing Evaluation (SCALE) software package developed at the Oak Ridge National Laboratory includes codes that propagate uncertainties available in the nuclear data libraries to compute uncertainties in nuclear application performance parameters. We report on our recent efforts to extend this capability to develop an inverse sensitivity/uncertainty (IS/U) methodology that identifies the improvements in nuclear data that are needed to compute application responses within prescribed tolerances, while minimizing the cost of such data improvements. We report on our progress to date and present a simple test case for our method. Our methodology is directly applicable to thermal and intermediate neutron energy systems because it addresses the implicit neutron resonance self-shielding effects that are essential to accurate modeling of thermal and intermediate systems. This methodology is likely to increase the efficiency of nuclear data efforts.
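
    The inverse problem described above can be caricatured in a few lines: choose variance reductions for each nuclear-data set that bring a sensitivity-propagated response variance under a tolerance at minimum cost. The sensitivities, variances, cost model, and target below are all made up; this is a loose sketch of the idea, not the SCALE IS/U algorithm.

    ```python
    # Sketch of an inverse sensitivity/uncertainty optimization: minimize a
    # proxy for measurement effort subject to a response-variance tolerance,
    # where var(R) = sum_i s_i^2 * var_i * f_i and f_i scales each data
    # variance. All numbers are hypothetical.
    import numpy as np
    from scipy.optimize import minimize

    s    = np.array([0.8, 0.5, 0.2])       # relative sensitivities dR/R per d(sigma)/sigma
    var0 = np.array([0.04, 0.09, 0.25])    # current relative data variances
    cost = np.array([3.0, 1.0, 0.5])       # cost per unit log-reduction in variance
    target = 0.02                          # required response variance

    def response_var(f):                   # f in (0, 1]: variance scale factors
        return np.sum(s**2 * var0 * f)

    res = minimize(
        lambda f: np.sum(cost * -np.log(f)),        # cheap proxy for effort
        x0=np.full(3, 0.9),
        bounds=[(1e-3, 1.0)] * 3,
        constraints={"type": "ineq", "fun": lambda f: target - response_var(f)},
    )
    print("variance scale per data set:", res.x.round(3))
    print("response variance:", response_var(res.x).round(4))
    ```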

  11. Performance evaluation of passive cooling in office buildings based on uncertainty and sensitivity analysis

    SciTech Connect

    Breesch, H.; Janssens, A.

    2010-08-15

    Natural night ventilation is an interesting passive cooling method in moderate climates. Driven by wind- and stack-generated pressures, it cools down the exposed building structure at night, in which the heat of the previous day is accumulated. The performance of natural night ventilation depends highly on the external weather conditions and especially on the outdoor temperature. An increase in this outdoor temperature has been noted over the last century, and the IPCC predicts an additional rise by the end of this century. A methodology is therefore needed to evaluate the reliability of the indoor climate of buildings under warmer and uncertain summer conditions. The uncertainty in the climate and in other design data can be very important in the decision process of a building project. The aim of this research is to develop a methodology to predict the performance of natural night ventilation using building energy simulation, taking into account the uncertainties in the input. The performance evaluation of natural night ventilation is based on uncertainty and sensitivity analysis. The results of the uncertainty analysis showed that thermal comfort in a single office cooled with single-sided night ventilation had the largest uncertainty. The uncertainties on thermal comfort in the case of passive stack and cross ventilation were substantially smaller. However, since wind, as the main driving force for cross ventilation, is highly variable, the cross ventilation strategy required larger louvre areas than the stack ventilation strategy to achieve a similar performance. The differences in uncertainty between the orientations were small. Sensitivity analysis was used to determine the most dominant set of input parameters causing the uncertainty on thermal comfort. The internal heat gains, solar heat gain coefficient of the sunblinds, internal convective heat transfer coefficient, thermophysical properties related to thermal mass, set-point temperatures controlling the natural

  12. Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS

    SciTech Connect

    Brown, C. S.; Zhang, Hongbin

    2016-05-24

    Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. Furthermore, the minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the selected figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.

  13. Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS

    DOE PAGES

    Brown, C. S.; Zhang, Hongbin

    2016-05-24

    Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. Furthermore, the minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the selected figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.

  14. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    DOE PAGES

    Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare; ...

    2016-04-01

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
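
    A compact, hypothetical rendering of the emulator-plus-Sobol' workflow described above: train a Gaussian process on a few hundred "expensive" runs, then compute Sobol' indices from cheap surrogate predictions. The three-parameter toy function stands in for CICE, and SALib/scikit-learn stand in for the study's actual tooling.

    ```python
    # Sketch of emulator-accelerated Sobol' analysis: a GP surrogate is
    # trained on 400 runs of a stand-in model, then evaluated on a Saltelli
    # sample to estimate first-order Sobol' indices.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {"num_vars": 3,
               "names": ["snow_conductivity", "grain_size", "pond_drainage"],
               "bounds": [[0.1, 0.5], [0.05, 0.5], [0.0, 1.0]]}

    def expensive_model(x):                   # stand-in for a sea ice run
        return x[0] * np.sin(6 * x[1]) + x[2] ** 2

    rng = np.random.default_rng(3)
    lo, hi = np.array(problem["bounds"]).T
    X_train = rng.uniform(lo, hi, size=(400, 3))          # 400 "model runs"
    y_train = np.array([expensive_model(x) for x in X_train])

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.1, 0.1, 0.3]),
                                  normalize_y=True).fit(X_train, y_train)

    X = saltelli.sample(problem, 1024, calc_second_order=False)
    Si = sobol.analyze(problem, gp.predict(X), calc_second_order=False)
    print(dict(zip(problem["names"], Si["S1"].round(3))))
    ```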

  15. Sensitivities and Uncertainties Related to Numerics and Building Features in Urban Modeling

    SciTech Connect

    Joseph III, Robert Anthony; Slater, Charles O; Evans, Thomas M; Mosher, Scott W; Johnson, Jeffrey O

    2011-01-01

    Oak Ridge National Laboratory (ORNL) has been engaged in the development and testing of a computational system that would use a grid of activation foil detectors to provide postdetonation forensic information from a nuclear device detonation. ORNL has developed a high-performance, three-dimensional (3-D) deterministic radiation transport code called Denovo. Denovo solves the multigroup discrete ordinates (SN) equations and can output 3-D data in a platform-independent format that can be efficiently analyzed using parallel, high-performance visualization tools. To evaluate the sensitivities and uncertainties associated with the deterministic computational method numerics, a numerical study on the New York City Times Square model was conducted using Denovo. In particular, the sensitivities and uncertainties associated with various components of the calculational method were systematically investigated, including (a) the Legendre polynomial expansion order of the scattering cross sections, (b) the angular quadrature, (c) multigroup energy binning, (d) spatial mesh sizes, (e) the material compositions of the building models, (f) the composition of the foundations upon which the buildings rest (e.g., ground, concrete, or asphalt), and (g) the amount of detail included in the building models. Although Denovo may calculate the idealized model well, there may be uncertainty in the results because of slight departures of the above-named parameters from those used in the idealized calculations. Fluxes and activities at selected locations from perturbed calculations are compared with corresponding values from the idealized or base case to determine the sensitivities associated with specified parameter changes. Results indicate that uncertainties related to numerics can be controlled by using higher fidelity models, but more work is needed to control the uncertainties related to the model.

  16. Uncertainty and Sensitivity Analyses of a Pebble Bed HTGR Loss of Cooling Event

    DOE PAGES

    Strydom, Gerhard

    2013-01-01

    The Very High Temperature Reactor Methods Development group at the Idaho National Laboratory identified the need for a defensible and systematic uncertainty and sensitivity approach in 2009. This paper summarizes the results of an uncertainty and sensitivity quantification investigation performed with the SUSA code, utilizing the International Atomic Energy Agency CRP 5 Pebble Bed Modular Reactor benchmark and the INL code suite PEBBED-THERMIX. Eight model input parameters were selected for inclusion in this study, and after the input parameter variations and probability density functions were specified, a total of 800 steady state and depressurized loss of forced cooling (DLOFC) transient PEBBED-THERMIX calculations were performed. The six data sets were statistically analyzed to determine the 5% and 95% DLOFC peak fuel temperature tolerance intervals with 95% confidence levels. It was found that the uncertainties in the decay heat and graphite thermal conductivities were the most significant contributors to the propagated DLOFC peak fuel temperature uncertainty. No significant differences were observed between the results of Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) data sets, and use of uniform or normal input parameter distributions also did not lead to any significant differences between these data sets.
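
    The 95%/95% tolerance statement above follows the classic order-statistics (Wilks) argument, which is short enough to sketch; the peak-temperature sample below is synthetic, while the run count of 59 is the standard first-order, one-sided result.

    ```python
    # Sketch of the Wilks order-statistics approach behind 95%/95% tolerance
    # limits: find the minimum number of random code runs, then take the
    # sample maximum as the one-sided upper tolerance bound.
    import numpy as np

    def wilks_n(coverage=0.95, confidence=0.95):
        # Smallest n such that the probability that the sample maximum
        # exceeds the 95% quantile, 1 - coverage**n, is at least 95%.
        n = 1
        while 1.0 - coverage**n < confidence:
            n += 1
        return n

    n = wilks_n()
    print("runs required (first-order, one-sided):", n)   # -> 59

    rng = np.random.default_rng(4)
    peak_T = rng.normal(1600.0, 40.0, n)   # synthetic DLOFC peak fuel temps (C)
    print("95/95 upper bound on peak fuel temperature:", peak_T.max().round(1))
    ```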

  17. Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling

    DOE PAGES

    Pastore, Giovanni; Swiler, L. P.; Hales, Jason D.; ...

    2014-10-12

    The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code and a recently implemented physics-based model for the coupled fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information from the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior modeling with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.

  18. Examining Dark Triad traits in relation to sleep disturbances, anxiety sensitivity and intolerance of uncertainty in young adults.

    PubMed

    Sabouri, Sarah; Gerber, Markus; Lemola, Sakari; Becker, Stephen P; Shamsi, Mahin; Shakouri, Zeinab; Sadeghi Bahmani, Dena; Kalak, Nadeem; Holsboer-Trachsler, Edith; Brand, Serge

    2016-07-01

    The Dark Triad (DT) describes a set of three closely related personality traits, Machiavellianism, narcissism, and psychopathy. The aim of this study was to examine the associations between DT traits, sleep disturbances, anxiety sensitivity and intolerance of uncertainty. A total of 341 adults (M = 29 years) completed a series of questionnaires related to the DT traits, sleep disturbances, anxiety sensitivity, and intolerance of uncertainty. A higher DT total score was associated with increased sleep disturbances, and higher scores for anxiety sensitivity and intolerance of uncertainty. In regression analyses Machiavellianism and psychopathy were predictors of sleep disturbances, anxiety sensitivity, and intolerance of uncertainty. Results indicate that specific DT traits, namely Machiavellianism and psychopathy, are associated with sleep disturbances, anxiety sensitivity and intolerance of uncertainty in young adults.

  19. Quantitative uncertainty and sensitivity analysis of a PWR control rod ejection accident

    SciTech Connect

    Pasichnyk, I.; Perin, Y.; Velkov, K.

    2013-07-01

    The paper describes the results of the quantitative Uncertainty and Sensitivity (U/S) Analysis of a Rod Ejection Accident (REA) which is simulated by the coupled system code ATHLET-QUABOX/CUBBOX applying the GRS tool for U/S analysis SUSA/XSUSA. For the present study, a UOX/MOX mixed core loading based on a generic PWR is modeled. A control rod ejection is calculated for two reactor states: Hot Zero Power (HZP) and 30% of nominal power. The worst cases for the rod ejection are determined by steady-state neutronic simulations taking into account the maximum reactivity insertion in the system and the power peaking factor. For the U/S analysis 378 uncertain parameters are identified and quantified (thermal-hydraulic initial and boundary conditions, input parameters and variations of the two-group cross sections). Results for uncertainty and sensitivity analysis are presented for safety important global and local parameters. (authors)

  20. Space Shuttle Orbiter flight heating rate measurement sensitivity to thermal protection system uncertainties

    NASA Technical Reports Server (NTRS)

    Bradley, P. F.; Throckmorton, D. A.

    1981-01-01

    A study was completed to determine the sensitivity of computed convective heating rates to uncertainties in the thermal protection system thermal model. The parameters considered were: density, thermal conductivity, and specific heat of both the reusable surface insulation and its coating; coating thickness and emittance; and temperature measurement uncertainty. The assessment used a modified version of the computer program that calculates heating rates from temperature time histories. The original version of the program solves the direct one-dimensional heating problem; the modified version is set up to solve the inverse problem and was used in thermocouple data reduction for Shuttle flight data. Both nominal and altered thermal models were used to determine the necessity for accurate knowledge of the thermal protection system's material thermal properties. For many thermal properties, the sensitivity (the inaccuracy introduced into the calculated convective heating rate by an altered property) was very low.

  1. A Variance Decomposition Approach to Uncertainty Quantification and Sensitivity Analysis of the J&E Model

    PubMed Central

    Moradi, Ali; Tootkaboni, Mazdak; Pennell, Kelly G.

    2015-01-01

    The Johnson and Ettinger (J&E) model is the most widely used vapor intrusion model in the United States. It is routinely used as part of hazardous waste site assessments to evaluate the potential for vapor intrusion exposure risks. This study incorporates mathematical approaches that allow the sensitivity and uncertainty of the J&E model to be evaluated. In addition to performing Monte Carlo simulations to examine the uncertainty in the J&E model output, a powerful global sensitivity analysis technique based on Sobol indices is used to evaluate J&E model sensitivity to variations in the input parameters. The results suggest that the J&E model is most sensitive to the building air exchange rate, regardless of soil type and source depth. Building air exchange rate is not routinely measured during vapor intrusion investigations, but clearly improved estimates and/or measurements of the air exchange rate would lead to improved model predictions. It is also found that the J&E model is more sensitive to effective diffusivity than to effective permeability. Field measurements of effective diffusivity are not commonly collected during vapor intrusion investigations; however, consideration of this parameter warrants additional attention. Finally, the effects of input uncertainties on model predictions for different scenarios (e.g. sandy soil as compared to clayey soil, and “shallow” sources as compared to “deep” sources) are evaluated. Our results not only identify the range of variability to be expected depending on the scenario at hand, but also mark the important cases where special care is needed when estimating the input parameters to which the J&E model is most sensitive. PMID:25947051

  2. Wavelet-Monte Carlo Hybrid System for HLW Nuclide Migration Modeling and Sensitivity and Uncertainty Analysis

    SciTech Connect

    Nasif, Hesham; Neyama, Atsushi

    2003-02-26

    This paper presents results of an uncertainty and sensitivity analysis for the performance of the different barriers of high-level radioactive waste repositories. SUA is a tool to perform uncertainty and sensitivity analysis on the output of the Wavelet Integrated Repository System model (WIRS), which is developed to solve a system of nonlinear partial differential equations arising from the model formulation of radionuclide transport through a repository. SUA performs sensitivity analysis (SA) and uncertainty analysis (UA) on a sample output from Monte Carlo simulation. The sample is generated by WIRS and contains the values of the maximum release rate in the form of time series, together with the values of the input variables for a set of different simulations (runs), which are realized by varying the model input parameters. The Monte Carlo sample is generated with SUA as a pure random sample or using the Latin hypercube sampling technique. Tchebycheff and Kolmogorov confidence bounds are computed on the maximum release rate for the UA, and effective non-parametric statistics are used to rank the influence of the model input parameters for the SA. Based on the results, we point out parameters that have primary influences on the performance of the engineered barrier system of a repository. The parameters found to be key contributors to the release rate are the selenium and cesium distribution coefficients in both the geosphere and the major water conducting fault (MWCF), the diffusion depth, and the water flow rate in the excavation-disturbed zone (EDZ).
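
    Both bounds mentioned above are distribution-free and simple to compute. The sketch below applies a Chebyshev interval and a Kolmogorov-style band on the empirical CDF (via the Dvoretzky-Kiefer-Wolfowitz inequality) to a synthetic Monte Carlo release-rate sample; it is an illustration, not the SUA implementation.

    ```python
    # Sketch of distribution-free bounds on a Monte Carlo output sample:
    # Chebyshev on the mean-centered value and a DKW sup-norm band on the
    # empirical CDF. The "release rate" sample is synthetic.
    import numpy as np

    rng = np.random.default_rng(5)
    release = rng.lognormal(mean=-18.0, sigma=1.0, size=2000)  # toy sample

    m, s, n = release.mean(), release.std(ddof=1), release.size

    # Chebyshev: at least 1 - 1/k^2 of any distribution lies within k std
    # devs of the mean; the bound is crude (it can even go negative here).
    k = np.sqrt(1 / 0.05)                    # k for a 95% two-sided bound
    print(f"Chebyshev 95% interval: [{m - k*s:.3e}, {m + k*s:.3e}]")

    # DKW: P(sup|F_n - F| > eps) <= 2 exp(-2 n eps^2), so a band of
    # half-width eps holds with probability 1 - alpha.
    alpha = 0.05
    eps = np.sqrt(np.log(2 / alpha) / (2 * n))
    print(f"Kolmogorov/DKW band half-width on the empirical CDF: {eps:.4f}")
    ```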

  3. A guide to uncertainty quantification and sensitivity analysis for cardiovascular applications.

    PubMed

    Eck, Vinzenz Gregor; Donders, Wouter Paulus; Sturdy, Jacob; Feinberg, Jonathan; Delhaas, Tammo; Hellevik, Leif Rune; Huberts, Wouter

    2016-08-01

    As we shift from population-based medicine towards a more precise patient-specific regime guided by predictions of verified and well-established cardiovascular models, an urgent question arises: how sensitive are the model predictions to errors and uncertainties in the model inputs? To make our models suitable for clinical decision-making, precise knowledge of prediction reliability is of paramount importance. Efficient and practical methods for uncertainty quantification (UQ) and sensitivity analysis (SA) are therefore essential. In this work, we explain the concepts of global UQ and global, variance-based SA along with two often-used methods that are applicable to any model without requiring model implementation changes: Monte Carlo (MC) and polynomial chaos (PC). Furthermore, we propose a guide for UQ and SA according to a six-step procedure and demonstrate it for two clinically relevant cardiovascular models: model-based estimation of the fractional flow reserve (FFR) and model-based estimation of the total arterial compliance (CT). Both MC and PC produce identical results and may be used interchangeably to identify the most significant model inputs with respect to uncertainty in model predictions of FFR and CT. However, PC is more cost-efficient, as it requires an order of magnitude fewer model evaluations than MC. Additionally, we demonstrate that targeted reduction of uncertainty in the most significant model inputs reduces the uncertainty in the model predictions efficiently. In conclusion, this article offers a practical guide to UQ and SA to help move the clinical application of mathematical models forward.
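
    To make the MC-versus-PC comparison concrete, the sketch below fits a small non-intrusive polynomial chaos surrogate by least squares on probabilists' Hermite polynomials and compares its moments with a Monte Carlo reference. The scalar "model" is a hypothetical stand-in for an FFR or compliance estimator; this is an illustration of the technique, not the authors' toolchain.

    ```python
    # Sketch of Monte Carlo versus non-intrusive polynomial chaos (PC) for
    # one standard-normal input: PC needs far fewer model evaluations.
    import math
    import numpy as np
    from numpy.polynomial import hermite_e as He

    model = lambda z: np.exp(0.4 * z)        # input z ~ N(0, 1)

    rng = np.random.default_rng(6)

    # Monte Carlo reference: many model evaluations.
    z_mc = rng.normal(size=100_000)
    y_mc = model(z_mc)

    # PC surrogate: 64 model evaluations, degree-4 Hermite expansion.
    z_tr = rng.normal(size=64)
    coef = He.hermefit(z_tr, model(z_tr), deg=4)

    # For probabilists' Hermite chaos, mean = c0 and var = sum_k c_k^2 * k!
    # (hermefit returns coefficients of the non-normalized He_k basis).
    fact = np.array([math.factorial(k) for k in range(5)])
    pc_mean = coef[0]
    pc_var = np.sum(coef[1:] ** 2 * fact[1:])

    print(f"MC : mean={y_mc.mean():.4f} var={y_mc.var():.4f}")
    print(f"PC : mean={pc_mean:.4f} var={pc_var:.4f}")
    ```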

  4. Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.

    PubMed

    Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A

    2013-02-01

    The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on

  5. Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model

    NASA Astrophysics Data System (ADS)

    Urrego-Blanco, J. R.; Urban, N. M.

    2015-12-01

    Changes in the high latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos sea ice model (CICE) and quantify the sensitivity of sea ice area, extent and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one-at-a-time, this study uses a global variance-based approach in which Sobol sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with model output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters such as conductivity and grain size, and the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized towards more accurately determining the values of these most influential parameters by observational studies or by improving existing parameterizations in the sea ice model.

  6. Uncertainty and sensitivity assessments of an agricultural-hydrological model (RZWQM2) using the GLUE method

    NASA Astrophysics Data System (ADS)

    Sun, Mei; Zhang, Xiaolin; Huo, Zailin; Feng, Shaoyuan; Huang, Guanhua; Mao, Xiaomin

    2016-03-01

    Quantitatively ascertaining and analyzing the effects of model uncertainty on model reliability is a focal point for agricultural-hydrological models due to the many uncertainties in inputs and processes. In this study, the generalized likelihood uncertainty estimation (GLUE) method with Latin hypercube sampling (LHS) was used to evaluate the uncertainty of the RZWQM-DSSAT (RZWQM2) model output responses and the sensitivity of 25 parameters related to soil properties, nutrient transport and crop genetics. To avoid the one-sided risk of model prediction caused by using a single calibration criterion, a combined likelihood (CL) function integrating information on water, nitrogen, and crop production was introduced in the GLUE analysis for the predictions of the following four model output responses: the total amount of water content (T-SWC) and nitrate nitrogen (T-NIT) within the 1-m soil profile, and the seed yields of waxy maize (Y-Maize) and winter wheat (Y-Wheat). In the process of evaluating RZWQM2, measurements and meteorological data were obtained from a field experiment involving a winter wheat and waxy maize crop rotation system conducted from 2003 to 2004 in southern Beijing. The calibration and validation results indicated that the RZWQM2 model can be used to simulate crop growth and water-nitrogen migration and transformation in a wheat-maize crop rotation planting system. The results of the uncertainty analysis using the GLUE method showed that T-NIT was sensitive to parameters related to the nitrification coefficient, maize growth characteristics during the seedling period, the wheat vernalization period, and the wheat photoperiod. Parameters for soil saturated hydraulic conductivity, nitrogen nitrification and denitrification, and urea hydrolysis played an important role in the crop yield components. The prediction errors for RZWQM2 outputs with the CL function were relatively low and uniform compared with other likelihood functions composed of individual calibration criteria. This
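
    A minimal sketch of the GLUE mechanics described above - Latin hypercube sampling, an informal likelihood, a behavioral cutoff, and likelihood-weighted prediction bounds - using a toy linear-reservoir model in place of RZWQM2; the likelihood form and cutoff are illustrative, subjective choices, as they are in GLUE generally.

    ```python
    # Sketch of the GLUE workflow on a toy two-parameter decay model:
    # sample, score, keep behavioral sets, and form weighted percentile bounds.
    import numpy as np
    from scipy.stats import qmc

    obs = np.array([12.0, 9.5, 7.4, 6.0, 4.9])      # synthetic observations

    def model(k, s0, t=np.arange(5)):
        return s0 * np.exp(-k * t)                  # toy storage decay

    sampler = qmc.LatinHypercube(d=2, seed=7)
    u = sampler.random(5000)
    k_all = 0.05 + u[:, 0] * 0.6                    # k in [0.05, 0.65]
    s0_all = 8.0 + u[:, 1] * 8.0                    # s0 in [8, 16]

    sims = np.array([model(k, s0) for k, s0 in zip(k_all, s0_all)])
    sse = np.sum((sims - obs) ** 2, axis=1)
    L = np.exp(-sse / sse.min())                    # informal GLUE likelihood
    behavioral = L > 0.1                            # subjective cutoff

    w = L[behavioral] / L[behavioral].sum()
    pred = sims[behavioral, 0]                      # prediction at t = 0
    order = np.argsort(pred)
    cdf = np.cumsum(w[order])
    lo = pred[order][np.searchsorted(cdf, 0.05)]
    hi = pred[order][np.searchsorted(cdf, 0.95)]
    print(f"behavioral sets: {behavioral.sum()}  90% bounds at t=0: [{lo:.2f}, {hi:.2f}]")
    ```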

  7. Uncertainty of Wheat Water Use: Simulated Patterns and Sensitivity to Temperature and CO2

    NASA Technical Reports Server (NTRS)

    Cammarano, Davide; Roetter, Reimund P.; Asseng, Senthold; Ewert, Frank; Wallach, Daniel; Martre, Pierre; Hatfield, Jerry L.; Jones, James W.; Rosenzweig, Cynthia E.; Ruane, Alex C.; Boote, Kenneth J.; Thorburn, Peter J.; Kersebaum, Kurt Christian; Aggarwal, Pramod K.; Angulo, Carlos; Basso, Bruno; Bertuzzi, Patrick; Biernath, Christian; Brisson, Nadine; Challinor, Andrew J.; Doltra, Jordi; Gayler, Sebastian; Goldberg, Richie; Heng, Lee; Steduto, Pasquale

    2016-01-01

    Projected global warming and population growth will reduce future water availability for agriculture. Thus, it is essential to increase the efficiency of water use to ensure crop productivity. Quantifying crop water use (WU; i.e. actual evapotranspiration) is a critical step towards this goal. Here, sixteen wheat simulation models were used to quantify sources of model uncertainty and to estimate the relative changes and variability between models for simulated WU, water use efficiency (WUE, WU per unit of grain dry mass produced), transpiration efficiency (Teff, transpiration per unit of grain yield dry mass produced), grain yield, crop transpiration and soil evaporation at increased temperatures and elevated atmospheric carbon dioxide concentrations ([CO2]). The greatest uncertainty in simulating water use, potential evapotranspiration, crop transpiration and soil evaporation was due to differences in how crop transpiration was modelled, which accounted for 50% of the total variability among models. The simulation results for the sensitivity to temperature indicated that crop WU will decline with increasing temperature due to reduced growing seasons. The uncertainties in simulated crop WU, in particular those due to uncertainties in simulating crop transpiration, were greater under conditions of increased temperature and with high temperatures in combination with elevated atmospheric [CO2] concentrations. Hence the simulation of crop WU, and in particular crop transpiration under higher temperature, needs to be improved and evaluated with field measurements before models can be used to simulate climate change impacts on future crop water demand.

  8. Uncertainty of Wheat Water Use: Simulated Patterns and Sensitivity to Temperature and CO2

    NASA Technical Reports Server (NTRS)

    Cammarano, Davide; Roetter, Reimund P.; Asseng, Senthold; Ewert, Frank; Wallach, Daniel; Martre, Pierre; Hatfield, Jerry L.; Jones, James W.; Rosenzweig, Cynthia E.; Ruane, Alex C.; ...

    2016-01-01

    Projected global warming and population growth will reduce future water availability for agriculture. Thus, it is essential to increase the efficiency of water use to ensure crop productivity. Quantifying crop water use (WU; i.e. actual evapotranspiration) is a critical step towards this goal. Here, sixteen wheat simulation models were used to quantify sources of model uncertainty and to estimate the relative changes and variability between models for simulated WU, water use efficiency (WUE, WU per unit of grain dry mass produced), transpiration efficiency (Teff, transpiration per unit of grain yield dry mass produced), grain yield, crop transpiration and soil evaporation at increased temperatures and elevated atmospheric carbon dioxide concentrations ([CO2]). The greatest uncertainty in simulating water use, potential evapotranspiration, crop transpiration and soil evaporation was due to differences in how crop transpiration was modelled, which accounted for 50% of the total variability among models. The simulation results for the sensitivity to temperature indicated that crop WU will decline with increasing temperature due to reduced growing seasons. The uncertainties in simulated crop WU, in particular those due to uncertainties in simulating crop transpiration, were greater under conditions of increased temperature and with high temperatures in combination with elevated atmospheric [CO2] concentrations. Hence the simulation of crop WU, and in particular crop transpiration under higher temperature, needs to be improved and evaluated with field measurements before models can be used to simulate climate change impacts on future crop water demand.

  9. Recent progress toward reducing the uncertainty in tropical low cloud feedback and climate sensitivity: a review

    NASA Astrophysics Data System (ADS)

    Kamae, Youichi; Ogura, Tomoo; Shiogama, Hideo; Watanabe, Masahiro

    2016-12-01

    Equilibrium climate sensitivity (ECS) to a doubling of the atmospheric CO2 concentration is a key index for understanding the Earth's climate history and for predicting future climate changes. Tropical low cloud feedback, the predominant factor for uncertainty in modeled ECS, diverges both in sign and magnitude among climate models. Despite its importance, the uncertainty in ECS and low cloud feedback remains a challenge. Recently, research based on observations and climate models has demonstrated the possibility that the tropical low cloud feedback in a perturbed climate can be constrained by the observed relationship between cloud, sea surface temperature, and atmospheric dynamic and thermodynamic structures. The observational constraint on the tropical low cloud feedback suggests a higher ECS range than the raw range obtained from climate model simulations. In addition, newly devised modeling frameworks that address both spreads among different model structures and parameter settings have contributed to evaluating the possible ranges of the uncertainty in ECS and low cloud feedback. Further observational and modeling approaches, and their combinations, may help to advance toward dispelling the clouds of uncertainty.

  10. A GIS based spatially-explicit sensitivity and uncertainty analysis approach for multi-criteria decision analysis.

    PubMed

    Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas

    2014-03-01

    GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, and hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable, and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis: we assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA), implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility are analyzed as a function of weights using Monte Carlo simulation and global sensitivity analysis. Finally, the results are validated using a landslide inventory database and by applying DST. Comparisons of the landslide susceptibility maps obtained from both MCDA techniques with known landslides show that AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.

  11. A GIS based spatially-explicit sensitivity and uncertainty analysis approach for multi-criteria decision analysis

    NASA Astrophysics Data System (ADS)

    Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas

    2014-03-01

    GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, and hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable, and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis: we assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA), implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility are analyzed as a function of weights using Monte Carlo simulation and global sensitivity analysis. Finally, the results are validated using a landslide inventory database and by applying DST. Comparisons of the landslide susceptibility maps obtained from both MCDA techniques with known landslides show that AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.

  12. Risk-Sensitive Optimal Feedback Control Accounts for Sensorimotor Behavior under Uncertainty

    PubMed Central

    Nagengast, Arne J.; Braun, Daniel A.; Wolpert, Daniel M.

    2010-01-01

    Many aspects of human motor behavior can be understood using optimality principles such as optimal feedback control. However, these proposed optimal control models are risk-neutral; that is, they are indifferent to the variability of the movement cost. Here, we propose the use of a risk-sensitive optimal controller that incorporates movement cost variance either as an added cost (risk-averse controller) or as an added value (risk-seeking controller) to model human motor behavior in the face of uncertainty. We use a sensorimotor task to test the hypothesis that subjects are risk-sensitive. Subjects controlled a virtual ball undergoing Brownian motion towards a target. Subjects were required to minimize an explicit cost, in points, that was a combination of the final positional error of the ball and the integrated control cost. By testing subjects on different levels of Brownian motion noise and relative weighting of the position and control cost, we could distinguish between risk-sensitive and risk-neutral control. We show that subjects change their movement strategy pessimistically in the face of increased uncertainty in accord with the predictions of a risk-averse optimal controller. Our results suggest that risk-sensitivity is a fundamental attribute that needs to be incorporated into optimal feedback control models. PMID:20657657

  13. Sensitivity Analysis and Uncertainty Characterization of Subnational Building Energy Demand in an Integrated Assessment Model

    NASA Astrophysics Data System (ADS)

    Scott, M. J.; Daly, D.; McJeon, H.; Zhou, Y.; Clarke, L.; Rice, J.; Whitney, P.; Kim, S.

    2012-12-01

    Residential and commercial buildings are a major source of energy consumption and carbon dioxide emissions in the United States, accounting for 41% of energy consumption and 40% of carbon emissions in 2011. Integrated assessment models (IAMs) historically have been used to estimate the impact of energy consumption on greenhouse gas emissions at the national and international level. Increasingly they are being asked to evaluate mitigation and adaptation policies that have a subnational dimension. In the United States, for example, building energy codes are adopted and enforced at the state and local level. Adoption of more efficient appliances and building equipment is sometimes directed or actively promoted by subnational governmental entities for mitigation or adaptation to climate change. The presentation reports on new example results from the Global Change Assessment Model (GCAM) IAM, one of a flexibly-coupled suite of models of human and earth system interactions known as the integrated Regional Earth System Model (iRESM) system. iRESM can evaluate subnational climate policy in the context of the important uncertainties represented by national policy and the earth system. We have added a 50-state detailed U.S. building energy demand capability to GCAM that is sensitive to national climate policy, technology, regional population and economic growth, and climate. We are currently using GCAM in a prototype stakeholder-driven uncertainty characterization process to evaluate regional climate mitigation and adaptation options in a 14-state pilot region in the U.S. upper Midwest. The stakeholder-driven decision process involves several steps, beginning with identifying policy alternatives and decision criteria based on stakeholder outreach, identifying relevant potential uncertainties, then performing sensitivity analysis, characterizing the key uncertainties from the sensitivity analysis, and propagating and quantifying their impact on the relevant decisions. In the

  14. Numerical daemons in hydrological modeling: Effects on uncertainty assessment, sensitivity analysis and model predictions

    NASA Astrophysics Data System (ADS)

    Kavetski, D.; Clark, M. P.; Fenicia, F.

    2011-12-01

    Hydrologists often face sources of uncertainty that dwarf those normally encountered in many engineering and scientific disciplines. Especially when representing large scale integrated systems, internal heterogeneities such as stream networks, preferential flowpaths, vegetation, etc, are necessarily represented with a considerable degree of lumping. The inputs to these models are themselves often the products of sparse observational networks. Given the simplifications inherent in environmental models, especially lumped conceptual models, does it really matter how they are implemented? At the same time, given the complexities usually found in the response surfaces of hydrological models, increasingly sophisticated analysis methodologies are being proposed for sensitivity analysis, parameter calibration and uncertainty assessment. Quite remarkably, rather than being caused by the model structure/equations themselves, in many cases model analysis complexities are consequences of seemingly trivial aspects of the model implementation - often, literally, whether the start-of-step or end-of-step fluxes are used! The extent of problems can be staggering, including (i) degraded performance of parameter optimization and uncertainty analysis algorithms, (ii) erroneous and/or misleading conclusions of sensitivity analysis, parameter inference and model interpretations and, finally, (iii) poor reliability of a calibrated model in predictive applications. While the often nontrivial behavior of numerical approximations has long been recognized in applied mathematics and in physically-oriented fields of environmental sciences, it remains a problematic issue in many environmental modeling applications. Perhaps detailed attention to numerics is only warranted for complicated engineering models? Would not numerical errors be an insignificant component of total uncertainty when typical data and model approximations are present? Is this really a serious issue beyond some rare isolated
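
    The start-of-step versus end-of-step point is easy to demonstrate: integrating the same linear store with explicit Euler (start-of-step fluxes) and implicit Euler (end-of-step fluxes) at a daily step gives strikingly different trajectories. The rate constant and step below are hypothetical.

    ```python
    # Sketch of the "numerical daemon": the same linear-reservoir model,
    # dS/dt = -k S, stepped with start-of-step versus end-of-step fluxes.
    import numpy as np

    k, dt, n = 0.8, 1.0, 10          # recession rate (1/day), daily step
    S0 = 100.0                       # initial storage (mm)

    S_exp, S_imp = [S0], [S0]
    for _ in range(n):
        S_exp.append(S_exp[-1] - k * S_exp[-1] * dt)     # start-of-step flux
        S_imp.append(S_imp[-1] / (1.0 + k * dt))         # end-of-step flux

    S_exact = S0 * np.exp(-k * dt * np.arange(n + 1))
    print("explicit :", np.round(S_exp[:4], 2))
    print("implicit :", np.round(S_imp[:4], 2))
    print("exact    :", np.round(S_exact[:4], 2))
    ```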

  15. pH-Sensitive Polymers for Improving Reservoir Sweep and Conformance Control in Chemical Flooding

    SciTech Connect

    Mukul Sharma; Steven Bryant; Chun Huh

    2008-03-31

    viscoelastic behavior as functions of pH; shear rate; polymer concentration; salinity, including divalent ion effects; polymer molecular weight; and degree of hydrolysis. A comprehensive rheological model was developed for HPAM solution rheology in terms of shear rate, pH, polymer concentration, and salinity, so that the spatial and temporal changes in viscosity during polymer flow in the reservoir can be accurately modeled. A series of acid coreflood experiments were conducted to understand the geochemical reactions relevant for both near-wellbore injection profile control and conformance control applications. These experiments showed that the use of hydrochloric acid as a pre-flush is not viable because of its high reaction rate with the rock. The use of citric acid as a pre-flush was found to be quite effective. This weak acid has a slow rate of reaction with the rock and can buffer the pH to below 3.5 for extended periods of time. With the citric acid pre-flush the polymer could be efficiently propagated through the core in a low-pH environment, i.e., at a low viscosity. The transport of various HPAM solutions was studied in sandstones in terms of permeability reduction, mobility reduction, adsorption and inaccessible pore volume with different process variables: injection pH, polymer concentration, polymer molecular weight, salinity, degree of hydrolysis, and flow rate. Measurements of polymer effluent profiles and tracer tests show that polymer retention increases at lower pH. A new simulation capability to model deep-penetrating mobility control or conformance control using pH-sensitive polymer was developed. The core flood acid injection experiments were history matched to estimate geochemical reaction rates. Preliminary scale-up simulations employing linear and radial geometry floods in 2-layer reservoir models were conducted. It is clearly shown that the injection rate of pH-sensitive polymer solutions can be significantly increased by injecting

  16. The Lower Uncertainty Bound of Climate Sensitivity in GCMs: How Low Can We Go?...

    NASA Astrophysics Data System (ADS)

    Millar, R.; Sparrow, S.; Sexton, D.; Lowe, J. A.; Ingram, W.; Allen, M. R.

    2014-12-01

    The equilibrium climate sensitivity (ECS) is one of the most important metrics of climate change. As such, constraining the uncertainties on its magnitude, and on the magnitude of its transient counterpart (TCR), is one of the primary goals of global climate science. General circulation models (GCMs) from modelling centres around the world have consistently failed to produce a model with a sensitivity of less than 2 degrees. However, as the CMIP5 multi-model ensemble is an ensemble of opportunity, it is unclear whether this fact is sufficient to rule out a climate sensitivity of less than 2 degrees, or whether the ensemble is simply not diverse enough to sample low values of climate sensitivity. We present analysis based on the observed planetary energy budget and simple energy-balance models. When viewed in terms of the TCR:ECS ratio (RWF, the Realised Warming Fraction), we find a region of climate response space of low RWF and low TCR that is robust to the structure of the simple climate model and is not sampled by the CMIP5 ensemble. We show that this region is better sampled by a perturbed physics ensemble of the HadCM3 GCM constrained solely on top-of-atmosphere radiative fluxes than by the CMIP5 ensemble, raising the question of the physical plausibility of low climate sensitivity GCMs. Based on these results, we have set out to systematically probe the ability to create GCMs with low climate sensitivity in the HadCM3 GCM. We train a statistical emulator on our perturbed physics ensemble and use it to identify regions of HadCM3 parameter space that are consistent with both a low climate sensitivity and a low RWF. We then run this "low sensitivity" ensemble to test our predictions and understand the combination of feedbacks needed to produce a sensible GCM with a sensitivity of less than 2 degrees. Here we hope to demonstrate our results from this systematic probing of the low climate sensitivity uncertainty bound and add further understanding to the physical plausibility
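
    The TCR:ECS ratio discussed here can be made concrete with a zero-dimensional energy-balance sketch (an illustration under assumed parameter values, not the authors' model): with forcing F2x for doubled CO2, feedback parameter lambda and ocean heat uptake efficiency kappa, ECS = F2x/lambda and TCR is approximately F2x/(lambda + kappa), so the realised warming fraction RWF = TCR/ECS depends on both:

        # Zero-dimensional energy balance during a CO2 ramp: the top-of-
        # atmosphere imbalance N = F - lam*T is taken up by the ocean as
        # N ~ kappa*T, giving ECS = F2x/lam and TCR ~ F2x/(lam + kappa).
        F2x = 3.7  # W m^-2, canonical forcing for doubled CO2

        for lam in (0.8, 1.2, 1.8):        # feedback parameter, W m^-2 K^-1
            for kappa in (0.4, 0.7, 1.0):  # ocean heat uptake efficiency
                ecs = F2x / lam
                tcr = F2x / (lam + kappa)
                print(f"lam={lam:.1f} kappa={kappa:.1f}  ECS={ecs:4.2f} K"
                      f"  TCR={tcr:4.2f} K  RWF={tcr / ecs:4.2f}")

    Note that an ECS below 2 K requires a large feedback parameter, while a low RWF additionally requires efficient ocean heat uptake, which is why the two quantities constrain different regions of response space.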

  17. Adjoint-based uncertainty quantification and sensitivity analysis for reactor depletion calculations

    NASA Astrophysics Data System (ADS)

    Stripling, Hayes Franklin

    Depletion calculations for nuclear reactors model the dynamic coupling between the material composition and neutron flux and help predict reactor performance and safety characteristics. In order to be trusted as reliable predictive tools and inputs to licensing and operational decisions, the simulations must include an accurate and holistic quantification of errors and uncertainties in their outputs. Uncertainty quantification is a formidable challenge in large, realistic reactor models because of the large number of unknowns and myriad sources of uncertainty and error. We present a framework for performing efficient uncertainty quantification in depletion problems using an adjoint approach, with emphasis on high-fidelity calculations using advanced massively parallel computing architectures. This approach calls for a solution to two systems of equations: (a) the forward, engineering system that models the reactor, and (b) the adjoint system, which is mathematically related to but different from the forward system. We use the solutions of these systems to produce sensitivity and error estimates at a cost that does not grow rapidly with the number of uncertain inputs. We present the framework in a general fashion and apply it to both the source-driven and k-eigenvalue forms of the depletion equations. We describe the implementation and verification of solvers for the forward and adjoint equations in the PDT code, and we test the algorithms on realistic reactor analysis problems. We demonstrate a new approach for reducing the memory and I/O demands on the host machine, which can be overwhelming for typical adjoint algorithms. Our conclusion is that adjoint depletion calculations using full transport solutions are not only computationally tractable, they are the most attractive option for performing uncertainty quantification on high-fidelity reactor analysis problems.
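
    The cost advantage of the adjoint approach described above can be sketched on a generic steady linear system (an illustrative stand-in, not the PDT depletion equations): one forward solve plus one adjoint solve yields the sensitivity of a response to every uncertain input at once:

        import numpy as np

        # Forward problem A x = b(p) with response J = c^T x. A single adjoint
        # solve A^T lam = c gives dJ/dp_i = lam^T (db/dp_i) for every uncertain
        # input i, instead of one extra forward solve per input.
        rng = np.random.default_rng(0)
        n, n_params = 50, 200
        A = 5.0 * np.eye(n) + rng.normal(0, 0.1, (n, n))
        B = rng.normal(size=(n, n_params))   # columns are db/dp_i
        p = rng.normal(size=n_params)
        c = rng.normal(size=n)

        x = np.linalg.solve(A, B @ p)
        lam = np.linalg.solve(A.T, c)        # one adjoint solve
        dJdp = B.T @ lam                     # all 200 sensitivities at once

        # Spot-check one component against finite differences.
        i, eps = 7, 1e-6
        p2 = p.copy(); p2[i] += eps
        J, J2 = c @ x, c @ np.linalg.solve(A, B @ p2)
        print(dJdp[i], (J2 - J) / eps)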

  18. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    USGS Publications Warehouse

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    adopted in the loss calculations. This is a sensitivity study aimed at future regional earthquake source modelers, so that they may be informed of the effects on loss introduced by modeling assumptions and epistemic uncertainty in the WG02 earthquake source model.

  19. Uncertainty and sensitivity analysis for anisotropic inhomogeneous head tissue conductivity in human head modelling.

    PubMed

    Bashar, M R; Li, Y; Wen, P

    2010-06-01

    The accuracy of an electroencephalography (EEG) forward problem partially depends on the head tissue conductivities. These conductivities are anisotropic and inhomogeneous in nature. This paper investigates the effects of conductivity uncertainty and analyses its sensitivity on an EEG forward problem for spherical and realistic head models. We estimate the uncertain conductivities using an efficient constraint-based optimization method and perturb them by means of volume and directional constraints. Assigning the uncertain conductivities, we construct spherical and realistic head models by means of a stochastic finite element method for fixed dipolar sources. We also compute EEG based on the constructed head models. We use a probabilistic sensitivity analysis method to determine the sensitivity indexes. These indexes characterize the conductivities with the most or the least effects on the computed outputs. The results demonstrate that conductivity uncertainty has significant effects on EEG. They also show that the computed EEG is most sensitive to the uncertain conductivities of the scalp, the radial direction of the skull and the transversal direction in the white matter.

  20. LCA of emerging technologies: addressing high uncertainty on inputs' variability when performing global sensitivity analysis.

    PubMed

    Lacirignola, Martino; Blanc, Philippe; Girard, Robin; Pérez-López, Paula; Blanc, Isabelle

    2017-02-01

    In the life cycle assessment (LCA) context, global sensitivity analysis (GSA) has been identified by several authors as a relevant practice to enhance the understanding of a model's structure and ensure the reliability and credibility of LCA results. GSA allows establishing a ranking among the input parameters according to their influence on the variability of the output. Such a feature is of high interest, in particular when aiming at defining parameterized LCA models. When performing a GSA, the description of the variability of each input parameter may affect the results. This aspect is critical when studying new products or emerging technologies, where data regarding the model inputs are very uncertain and may cause misleading GSA outcomes, such as inappropriate input rankings. A systematic assessment of this sensitivity issue is now proposed. We develop a methodology to analyze the sensitivity of the GSA results (i.e. the stability of the ranking of the inputs) with respect to the description of such inputs of the model (i.e. the definition of their inherent variability). With this research, we aim at enriching the debate on the application of GSA to LCAs affected by high uncertainties. We illustrate its application with a case study aimed at elaborating a simple model expressing the life cycle greenhouse gas emissions of enhanced geothermal systems (EGS) as a function of a few key parameters. Our methodology allows identifying the key inputs of the LCA model, taking into account the uncertainty related to their description.
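
    The dependence of a GSA ranking on the assumed input variability is easy to demonstrate on a linear model (a hypothetical sketch, not the paper's EGS model), where the first-order Sobol indices have the closed form S_i = a_i^2 Var(x_i) / Var(y):

        import numpy as np

        # For a linear model y = sum_i a_i x_i with independent inputs, the
        # first-order Sobol index is S_i = a_i^2 Var(x_i) / Var(y), so the
        # input ranking follows directly from the assumed input variability.
        a = np.array([2.0, 1.0, 0.5])        # hypothetical model coefficients

        def sobol_ranking(sd):
            contrib = (a * sd) ** 2          # variance contribution of each input
            S = contrib / contrib.sum()
            return S.round(2), np.argsort(-S)

        # Two alternative descriptions of the inputs' variability flip the ranking:
        print(sobol_ranking(np.array([0.1, 0.3, 0.2])))  # x1 assumed well known
        print(sobol_ranking(np.array([0.4, 0.3, 0.2])))  # x1 assumed uncertain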

  1. Intolerance of uncertainty, anxiety sensitivity, health anxiety, and anxiety disorder symptoms in youth.

    PubMed

    Wright, Kristi D; Lebell, Megan A N Adams; Carleton, R Nicholas

    2016-06-01

    Intolerance of uncertainty (IU) - difficulty coping with uncertainty and its implications - is traditionally studied in adult populations, but more recently has been explored in children and adolescents. To date, the association between IU and health anxiety has not been explored in a child or adolescent sample. Further, it is unknown whether the relationship between IU and health anxiety may be mediated by anxiety sensitivity (i.e., fear of anxiety-related sensations) in this population. We sought to extend the existing research and expand our understanding of IU as a transdiagnostic construct by exploring the association between IU and health anxiety, anxiety sensitivity, and DSM-IV anxiety disorder symptom categories in 128 youth (M age = 12.7 years, SD = 0.82, range 11-17 years). Participants completed measures of IU, health anxiety, anxiety sensitivity, and anxiety disorder symptom categories. Results demonstrated significant positive associations between IU and all measures. Mediation analyses supported the direct and indirect importance of each IU subscale on health anxiety. Future directions and implications are discussed.

  2. A GIS-based spatially-explicit sensitivity and uncertainty analysis approach for multi-criteria decision analysis

    PubMed Central

    Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas

    2014-01-01

    GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster–Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty–sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights. PMID:25843987
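
    The Monte Carlo treatment of criteria weights can be sketched as follows (hypothetical weights and criteria, not the study's data): weights drawn around a nominal AHP-style vector propagate into a per-cell uncertainty estimate for the weighted-linear-combination susceptibility score:

        import numpy as np

        # Weighted linear combination (WLC) susceptibility scores with Monte
        # Carlo perturbed criterion weights (hypothetical data dimensions).
        rng = np.random.default_rng(42)
        n_cells, n_draws = 1000, 2000
        X = rng.random((n_cells, 5))                   # standardized criteria in [0, 1]
        w0 = np.array([0.35, 0.25, 0.20, 0.12, 0.08])  # nominal (e.g. AHP) weights

        # Dirichlet draws centred on w0; a larger concentration means tighter weights.
        W = rng.dirichlet(w0 * 200.0, size=n_draws)    # shape (n_draws, 5)

        scores = X @ W.T                               # (n_cells, n_draws)
        print("mean susceptibility:", scores.mean(axis=1)[:3].round(3))
        print("weight-driven std:  ", scores.std(axis=1)[:3].round(4))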

  3. New features and improved uncertainty analysis in the NEA nuclear data sensitivity tool (NDaST)

    NASA Astrophysics Data System (ADS)

    Dyrda, J.; Soppera, N.; Hill, I.; Bossant, M.; Gulliford, J.

    2017-09-01

    Following the release and initial testing period of the NEA's Nuclear Data Sensitivity Tool [1], new features have been designed and implemented in order to expand its uncertainty analysis capabilities. The aim is to provide a free online tool for integral benchmark testing, that is both efficient and comprehensive, meeting the needs of the nuclear data and benchmark testing communities. New features include access to P1 sensitivities for neutron scattering angular distributions [2] and constrained Chi sensitivities for the prompt fission neutron energy sampling. Both of these are compatible with covariance data accessed via the JANIS nuclear data software, enabling propagation of the resultant uncertainties in keff to a large series of integral experiment benchmarks. These capabilities are available using a number of different covariance libraries, e.g., ENDF/B, JEFF, JENDL and TENDL, allowing comparison of the broad range of results that can be obtained. The IRPhE database of reactor physics measurements is now also accessible within the tool, in addition to the criticality benchmarks from ICSBEP. Other improvements include the ability to determine and visualise the energy dependence of a given calculated result in order to better identify specific regions of importance or high uncertainty contribution. Sorting and statistical analysis of the selected benchmark suite is now also provided. Examples of the plots generated by the software are included to illustrate such capabilities. Finally, a number of analytical expressions, for example Maxwellian and Watt fission spectra, will be included. This will allow the analyst to determine the impact of varying such distributions within the data evaluation, either through adjustment of parameters within the expressions, or by comparison to a more general probability distribution fitted to measured data. The impact of such changes is verified through calculations which are compared to a `direct' measurement found by

  4. Advanced Simulation Capability for Environmental Management (ASCEM): Developments in Uncertainty Quantification and Sensitivity Analysis.

    NASA Astrophysics Data System (ADS)

    McKinney, S. W.

    2015-12-01

    Effectiveness of uncertainty quantification (UQ) and sensitivity analysis (SA) has been improved in ASCEM by choosing from a variety of methods to best suit each model. Previously, ASCEM had a small toolset for UQ and SA, leaving out the benefits of the many methods it did not include. Many UQ and SA methods are useful only for analyzing models with specific characteristics; therefore, programming all of these methods into ASCEM would have been inefficient. Embedding the R programming language into ASCEM grants access to a plethora of UQ and SA methods. As a result, the programming required is drastically decreased, and runtime efficiency and analysis effectiveness are increased relative to each unique model.

  5. Evaluating the Hydrologic Sensitivities of Three Land Surface Models to Bound Uncertainties in Runoff Projections

    NASA Astrophysics Data System (ADS)

    Chiao, T.; Nijssen, B.; Stickel, L.; Lettenmaier, D. P.

    2013-12-01

    Hydrologic modeling is often used to assess the potential impacts of climate change on water availability and quality. A common approach in these studies is to calibrate the selected model(s) to reproduce historic stream flows prior to the application of future climate projections. This approach relies on the implicit assumptions that the sensitivities of these models to meteorological fluctuations will remain relatively constant under climate change and that these sensitivities are similar among models if all models are calibrated to the same historic record. However, even if the models are able to capture the historic variability in hydrological variables, differences in model structure and parameter estimation contribute to the uncertainties in projected runoff, which confounds the incorporation of these results into water resource management decision-making. A better understanding of the variability in hydrologic sensitivities between different models can aid in bounding this uncertainty. In this research, we characterized the hydrologic sensitivities of three watershed-scale land surface models through a case study of the Bull Run watershed in Northern Oregon. The Distributed Hydrology Soil Vegetation Model (DHSVM), Precipitation-Runoff Modeling System (PRMS), and Variable Infiltration Capacity model (VIC) were implemented and calibrated individually to historic streamflow using a common set of long-term, gridded forcings. In addition to analyzing model performances for a historic period, we quantified the temperature sensitivity (defined as change in runoff in response to change in temperature) and precipitation elasticity (defined as change in runoff in response to change in precipitation) of these three models via perturbation of the historic climate record using synthetic experiments. By comparing how these three models respond to changes in climate forcings, this research aims to test the assumption of constant and similar hydrologic sensitivities. Our
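
    The two sensitivity measures defined in this record can be computed by direct perturbation, as in the following sketch (the toy water-balance 'model' is a placeholder for a calibrated land surface model such as DHSVM, PRMS or VIC):

        # Perturbation definitions:
        #   precipitation elasticity e_P = (dQ/Q) / (dP/P)
        #   temperature sensitivity  s_T = relative change in Q per degree
        def model(P, T):
            # toy annual water balance standing in for a land surface model
            ET = 300.0 + 25.0 * (T - 10.0)   # evapotranspiration grows with T
            return max(P - ET, 0.0)          # annual runoff, mm

        P0, T0 = 1800.0, 10.0                # baseline climate (mm, deg C)
        Q0 = model(P0, T0)

        dP, dT = 0.05 * P0, 1.0              # +5% precipitation, +1 K
        e_P = ((model(P0 + dP, T0) - Q0) / Q0) / (dP / P0)
        s_T = 100.0 * (model(P0, T0 + dT) - Q0) / Q0
        print(f"e_P = {e_P:.2f}, s_T = {s_T:.1f} % per K")

    Running the same perturbations through each of the three calibrated models and comparing e_P and s_T is the essence of the comparison described above.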

  6. Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities

    NASA Astrophysics Data System (ADS)

    Esposito, Gaetano

    Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have a significant level of uncertainty, due to the limited experimental data available and to poor understanding of interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach to reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which is of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species in 784 reactions is demonstrated. The resulting reduced skeletal model of 37-38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within an accuracy of 2% of the full detailed model. The problems of both understanding non-linear interactions between kinetic parameters and

  7. Funky Floors

    NASA Image and Video Library

    2006-11-08

    The material covering the floors of these two craters looks very different from the surrounding terrain. The unusual markings of the floor material indicate that a volatile, such as ice, has affected the appearance of the surface.

  8. Climate model parameter sensitivity and selection for incorporating uncertainty in regional climate modeling

    NASA Astrophysics Data System (ADS)

    Li, S.; Mote, P.; Rupp, D. E.; McNeall, D. J.; Sparrow, S.; Hawkins, L.

    2016-12-01

    Many processes - especially those involving clouds - that control climate responses to external forcings are still poorly understood, poorly modeled, and/or difficult to observe in nature. As such, model parameterizations representing these processes have large uncertainties. Therefore, even a Global Climate Model (GCM)'s `standard' configuration, which has been tuned to reproduce observed climate well, is subject to large uncertainty. To explore the influence of different parameter selections on regional climate, a large global/regional atmospheric perturbed physics ensemble was run using the volunteer computing network weather@home, with the goal of finding model variants that have a small top-of-atmosphere flux imbalance. This configuration reasonably reproduces the observed climates across the western US, while retaining the possibility of a range of regional climate sensitivities. After this screening step, a subset of these parameter perturbations is used when downscaling the global model simulations with an embedded regional climate model. This work aims to identify model parameters that influence the quality of regional simulations, improve global and regional model performance through improved model parameterizations, and quantify uncertainty in downscaled simulations stemming from error in model parameterizations.

  9. Uncertainty and sensitivity analysis of early exposure results with the MACCS Reactor Accident Consequence Model

    SciTech Connect

    Helton, J.C.; Johnson, J.D.; McKay, M.D.; Shiver, A.W.; Sprung, J.L.

    1995-01-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the early health effects associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 34 imprecisely known input variables on the following reactor accident consequences are studied: number of early fatalities, number of cases of prodromal vomiting, population dose within 10 mi of the reactor, population dose within 1000 mi of the reactor, individual early fatality probability within 1 mi of the reactor, and maximum early fatality distance. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: scaling factor for horizontal dispersion, dry deposition velocity, inhalation protection factor for nonevacuees, groundshine shielding factor for nonevacuees, early fatality hazard function alpha value for bone marrow exposure, and scaling factor for vertical dispersion.
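
    The sampling-and-screening workflow used here (Latin hypercube sampling followed by correlation analysis) can be sketched with a placeholder consequence model standing in for MACCS; the input names and ranges below are assumptions for illustration only:

        import numpy as np
        from scipy.stats import spearmanr

        # Latin hypercube sample: stratified uniform draws, shuffled per column.
        def lhs(n, k, rng):
            u = (rng.random((n, k)) + np.arange(n)[:, None]) / n
            for j in range(k):
                rng.shuffle(u[:, j])
            return u

        rng = np.random.default_rng(7)
        U = lhs(200, 3, rng)
        sigma_h = 0.5 + 1.5 * U[:, 0]        # horizontal dispersion scaling
        v_dep = 10 ** (-3 + 2 * U[:, 1])     # dry deposition velocity (log-uniform)
        shield = 0.3 + 0.6 * U[:, 2]         # groundshine shielding factor

        # Placeholder consequence model standing in for MACCS.
        dose = v_dep ** 0.8 * shield / sigma_h + 0.01 * rng.normal(size=200)

        for name, var in [("sigma_h", sigma_h), ("v_dep", v_dep), ("shield", shield)]:
            rho, _ = spearmanr(var, dose)
            print(f"{name:8s} rank correlation with dose: {rho:+.2f}")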

  10. Multi-Dimensional, Discrete-Ordinates Based Cross Section Sensitivity and Uncertainty Analysis Code System.

    SciTech Connect

    KODELI, IVAN-ALEXANDER

    2008-05-22

    Version 01 SUSD3D 2008 calculates sensitivity coefficients and standard deviations in the calculated detector responses or design parameters of interest due to input cross sections and their uncertainties. One-, two- and three-dimensional transport problems can be studied. Several types of uncertainties can be considered, i.e., those due to (1) neutron/gamma multi-group cross sections, (2) energy-dependent response functions, and (3) secondary angular distribution (SAD) or secondary energy distribution (SED) uncertainties. SUSD3D, initially released in 2000, is loosely based on the SUSD code by K. Furuta, Y. Oka and S. Kondo of the University of Tokyo in Japan. The SUSD 2008 modifications are primarily relevant for sensitivity calculations of critical systems and include: correction of the sensitivity calculation for prompt fission and the number of delayed neutrons per fission (MT=18 and MT=455); an option to re-normalize the prompt fission spectra covariance matrices via the "normalization" of the sensitivity profiles, which is useful if the fission spectra covariances (MF=35) used do not comply with the ENDF-6 Format Manual rules; internal calculation of the normalization by SUSD3D for criticality calculations (parameter NORM should be set to 0 in this case, and total number of neutrons per fission (MT=452) sensitivities for all fissile materials must be requested in the SUSD3D OVERLAY-2 input deck to allow correct normalization); updated cross section data format reading, mostly for critical systems (e.g. the MT=18 reaction); calculation of fission spectra uncertainties using MF35 data processed by the ERROR-J code; direct input of cross sections using the "xs" input card (vector data only); and a k-eff card for subcritical systems. This version of SUSD3D is compatible with the single precision DANTSYS code package (CCC-0547/07 and /08, which are the

  11. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    NASA Astrophysics Data System (ADS)

    Bastidas, L. A.; Knighton, J.; Kline, S. W.

    2015-10-01

    Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of eleven total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a non-linear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.

  12. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    NASA Astrophysics Data System (ADS)

    Bastidas, Luis A.; Knighton, James; Kline, Shaun W.

    2016-09-01

    Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.

  13. Using Divertor Strike Point Splitting to Study Plasma Response and Its Sensitivity to Equilibrium Uncertainties

    NASA Astrophysics Data System (ADS)

    Lee, J. S.; Orlov, D. M.; Moyer, R. A.; Bykov, I.; Evans, T. E.; Wu, W.; Lyons, B. C.; Sugiyama, L. E.

    2016-10-01

    Resonant magnetic perturbations (RMPs) split the strike points in divertor tokamaks. This splitting is measured using fast imaging of filtered visible light from the divertor. We compare the observed splitting during n=3 RMP experiments to vacuum and plasma response modeling to determine if the measured splitting provides a sensitive diagnostic for the plasma response to the RMP. We also investigate the sensitivity of the computed plasma response to uncertainties in the initial 2D equilibrium. Strike point splitting was also observed in ELMing H-mode without the RMP, possibly due to n=1 error fields and error-field correction fields. We compare the measured splitting during ELMs to linear plasma response modeling of the divertor footprints, and to nonlinear M3D ELM simulations. Work supported by U.S. DOE under Grant Numbers DE-FG02-07ER54917, DE-FG02-05ER54809.

  14. Assessing model sensitivity and uncertainty across multiple Free-Air CO2 Enrichment experiments.

    NASA Astrophysics Data System (ADS)

    Cowdery, E.; Dietze, M.

    2015-12-01

    As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentrations are highly variable and contain a considerable amount of uncertainty. It is necessary that we understand which factors are driving this uncertainty. The Free-Air CO2 Enrichment (FACE) experiments have equipped us with a rich data source that can be used to calibrate and validate these model predictions. To identify and evaluate the assumptions causing inter-model differences we performed model sensitivity and uncertainty analysis across ambient and elevated CO2 treatments using the Data Assimilation Linked Ecosystem Carbon (DALEC) model and the Ecosystem Demography Model (ED2), two process-based models ranging from low to high complexity, respectively. These modeled process responses were compared to experimental data from the Kennedy Space Center Open Top Chamber Experiment, the Nevada Desert Free Air CO2 Enrichment Facility, the Rhinelander FACE experiment, the Wyoming Prairie Heating and CO2 Enrichment Experiment, the Duke Forest FACE experiment and the Oak Ridge Experiment on CO2 Enrichment. By leveraging data access proxy and data tilling services provided by the BrownDog data curation project alongside analysis modules available in the Predictive Ecosystem Analyzer (PEcAn), we produced automated, repeatable benchmarking workflows that are generalized to incorporate different sites and ecological models. Combining the observed patterns of uncertainty between the two models with results of the recent FACE model-data synthesis project (FACE-MDS) can help identify which processes need further study and additional data constraints. These findings can be used to inform future experimental design and in turn provide an informative starting point for data assimilation.

  15. The BEMUSE programme: Best-estimate methods uncertainty and sensitivity evaluation - Phase 2

    SciTech Connect

    Petruzzi, A.; D'Auria, F.; De Crecy, A.

    2006-07-01

    The BEMUSE (Best Estimate Methods - Uncertainty and Sensitivity Evaluation) Programme has been promoted by the Working Group on Accident Management and Analysis (GAMA) and endorsed by the Committee on the Safety of Nuclear Installations (CSNI) [1]. The high-level objectives of the work are: To evaluate the practicability, the quality and the reliability of Best-Estimate (BE) methods including uncertainty evaluation in applications relevant to nuclear reactor safety; To promote the use of BE-Methods by the regulatory bodies and the industry. Operational objectives include an assessment of the applicability of best-estimate and uncertainty methods to integral tests and their use in reactor applications. The present paper deals with the activities performed by the participants during Phase II of BEMUSE. It is connected with the re-analysis of the Experiment L2-5 performed in the LOFT facility using different thermal-hydraulic system codes. The technological importance of the activity can be derived from the following: a) LOFT is the only Integral Test Facility with a nuclear core where safety experiments have been performed; b) The ISP-13 was completed more than 20 years ago and open issues remained from the analysis of the comparison between measured and calculated trends. The consideration of the BE codes and uncertainty evaluation for Design Basis Accident (DBA), by itself, shows the safety significance of the proposed activity. End users of the results are expected to be the industry, the safety authorities and the research laboratories. Main achievements of the Phase II can be summarized as follows: - Almost all performed calculations appear qualified against the fixed criteria; - Dispersion bands of reference results appear substantially less than in ISP-13. (authors)

  16. Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis

    SciTech Connect

    Perkó, Zoltán; Gilli, Luca; Lathouwers, Danny; Kloosterman, Jan Leen

    2014-03-01

    The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time it retains a similar accuracy as the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to further reduction in computational time since the high order grids necessary for accurately estimating the near zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistently good performance both
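
    The non-intrusive spectral projection step can be illustrated in one dimension (a textbook sketch, not the FANISP algorithm): the PC coefficients of a response g(X) with X ~ N(0,1) follow from Gauss-Hermite quadrature, and the mean and variance are read directly from the coefficients:

        import math
        import numpy as np
        from numpy.polynomial import hermite_e as He

        # NISP in 1D: for X ~ N(0,1) and probabilists' Hermite polynomials He_k,
        #   c_k = E[g(X) He_k(X)] / k!,  E[g] = c_0,  Var[g] = sum_{k>=1} c_k^2 k!
        g = np.exp                               # example response with known answer

        x, w = He.hermegauss(30)                 # nodes/weights for exp(-x^2/2)
        w = w / np.sqrt(2.0 * np.pi)             # normalize to the N(0,1) density

        c = [np.sum(w * g(x) * He.hermeval(x, [0.0] * k + [1.0])) / math.factorial(k)
             for k in range(9)]

        var = sum(ck ** 2 * math.factorial(k) for k, ck in enumerate(c) if k > 0)
        print(c[0], math.exp(0.5))               # exact mean  E[e^X] = e^(1/2)
        print(var, math.exp(2) - math.exp(1))    # exact variance = e^2 - e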

  17. Climate Change Impact Uncertainties for Maize in Panama: Farm Information, Climate Projections, and Yield Sensitivities

    NASA Technical Reports Server (NTRS)

    Ruane, Alex C.; Cecil, L. Dewayne; Horton, Radley M.; Gordon, Roman; McCollum, Raymond; Brown, Douglas; Killough, Brian; Goldberg, Richard; Greeley, Adam P.; Rosenzweig, Cynthia

    2011-01-01

    We present results from a pilot project to characterize and bound multi-disciplinary uncertainties around the assessment of maize (Zea mays) production impacts using the CERES-Maize crop model in a climate-sensitive region with a variety of farming systems (Panama). Segunda coa (autumn) maize yield in Panama currently suffers occasionally from high water stress at the end of the growing season; under future climate conditions, however, warmer temperatures accelerate crop maturation and elevated CO2 concentrations improve water retention. This combination reduces end-of-season water stresses and eventually leads to small mean yield gains according to median projections, although accelerated maturation reduces yields in seasons with low water stresses. Calibrations of cultivar traits, soil profile, and fertilizer amounts are most important for representing baseline yields; however, sensitivity to all management factors is reduced in an assessment of future yield changes (most dramatically for fertilizers), suggesting that yield changes may be more generalizable than absolute yields. Uncertainty around changes in rainfall projected by General Circulation Models (GCMs) gains in importance throughout the century, with yield changes strongly correlated with growing season rainfall totals. Climate changes are expected to be obscured by the large inter-annual variations in Panamanian climate that will continue to be the dominant influence on seasonal maize yield into the coming decades. The relatively high (A2) and low (B1) emissions scenarios show little difference in their impact on future maize yields until the end of the century. Uncertainties related to the sensitivity of CERES-Maize to carbon dioxide concentrations have a substantial influence on projected changes, and remain a significant obstacle to climate change impacts assessment. Finally, an investigation into the potential of simple statistical yield emulators based upon key climate variables characterizes the

  18. Closed-flow column experiments: A numerical sensitivity analysis of reactive transport and parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Ritschel, Thomas; Totsche, Kai Uwe

    2016-08-01

    The identification of transport parameters by inverse modeling often suffers from equifinality or parameter correlation when models are fitted to measurements of the solute breakthrough in column outflow experiments. This parameter uncertainty can be approached by performing multiple experiments with different sets of boundary conditions, each provoking observations that are uniquely attributable to the respective transport processes. A promising approach to further increase the information potential of the experimental outcome is the closed-flow column design. It is characterized by the recirculation of the column effluent into the solution supply vessel that feeds the inflow, which results in a damped sinusoidal oscillation in the breakthrough curve. In order to reveal the potential application of closed-flow experiments, we present a comprehensive sensitivity analysis using common models for adsorption and degradation. We show that the sensitivity of inverse parameter determination with respect to the apparent dispersion can be controlled by the experimenter. For optimal settings, a decrease in parameter uncertainty as compared to classical experiments by an order of magnitude is achieved. In addition, we show a reduced equifinality between rate-limited interactions and apparent dispersion. Furthermore, we illustrate the expected breakthrough curve for equilibrium and nonequilibrium adsorption, the latter showing strong similarities to the behavior found for completely mixed batch reactor experiments. Finally, breakthrough data from a reactive tracer experiment is evaluated using the proposed framework with excellent agreement of model and experimental results.
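
    The damped sinusoidal breakthrough characteristic of the closed-flow design can be reproduced with a simple tanks-in-series idealization of the column coupled to a well-mixed supply vessel (an illustrative sketch with assumed volumes and flow rate, not the authors' reactive transport model):

        import numpy as np

        # Column idealized as N well-mixed tanks in series; the effluent is
        # recirculated into the supply vessel, so an initial step input decays
        # as a damped oscillation toward the fully mixed concentration.
        N, V_tank, V_vessel, Q = 20, 1.0, 10.0, 2.0   # assumed volumes and flow
        c_tanks = np.zeros(N)
        c_vessel = 1.0                                # tracer starts in the vessel
        dt = 0.005

        c_out = []
        for step in range(int(60.0 / dt)):
            inflow = np.concatenate(([c_vessel], c_tanks[:-1]))
            c_vessel += dt * Q * (c_tanks[-1] - c_vessel) / V_vessel
            c_tanks += dt * Q * (inflow - c_tanks) / V_tank
            c_out.append(c_tanks[-1])

        for t in (5.0, 12.0, 20.0, 30.0, 60.0):       # oscillation damps toward 1/3
            print(f"t={t:5.1f}  effluent C = {c_out[int(t / dt) - 1]:.3f}")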

  19. Sensitivity and uncertainty analysis within a methodology for evaluating environmental restoration technologies

    NASA Astrophysics Data System (ADS)

    Zio, Enrico; Apostolakis, George E.

    1999-03-01

    This paper illustrates an application of sensitivity and uncertainty analysis techniques within a methodology for evaluating environmental restoration technologies. The methodology consists of two main parts: the first part ("analysis") integrates a wide range of decision criteria and impact evaluation techniques in a framework that emphasizes and incorporates input from stakeholders in all aspects of the process. Its products are the rankings of the alternative options for each stakeholder using, essentially, expected utility theory. The second part ("deliberation") utilizes the analytical results of the "analysis" and attempts to develop consensus among the stakeholders in a session in which the stakeholders discuss and evaluate the analytical results. This paper deals with the analytical part of the approach and the uncertainty and sensitivity analyses that were carried out in preparation for the deliberative process. The objective of these investigations was that of testing the robustness of the assessments and of pointing out possible existing sources of disagreements among the participating stakeholders, thus providing insights for the successive deliberative process. Standard techniques, such as differential analysis, Monte Carlo sampling and a two-dimensional policy region analysis proved sufficient for the task.

  20. Uncertainty and sensitivity of flood risk calculations for a dike ring in the south of the Netherlands.

    PubMed

    de Moel, Hans; Bouwer, Laurens M; Aerts, Jeroen C J H

    2014-03-01

    A central tool in risk management is the exceedance-probability loss (EPL) curve, which denotes the probabilities of damages being exceeded or equalled. These curves are used for a number of purposes, including the calculation of the expected annual damage (EAD), a common indicator for risk. The model calculations that are used to create such a curve contain uncertainties that accumulate in the end result. As a result, EPL curves and EAD calculations are also surrounded by uncertainties. Knowledge of the magnitude and source of these uncertainties helps to improve assessments and leads to better informed decisions. This study, therefore, performs uncertainty and sensitivity analyses for a dike-ring area in the Netherlands, on the south bank of the river Meuse. In this study, a Monte Carlo framework is used that combines hydraulic boundary conditions, a breach growth model, an inundation model, and a damage model. It encompasses the modelling of thirteen potential breach locations and uncertainties related to probability, duration of the flood wave, height of the flood wave, erodibility of the embankment, damage curves, and the value of assets at risk. The assessment includes uncertainty and sensitivity of risk estimates for each individual location, as well as the dike-ring area as a whole. The results show that for the dike ring in question, EAD estimates exhibit a 90% percentile range from about 8 times lower than the median, up to 4.5 times higher than the median. This level of uncertainty can mainly be attributed to uncertainty in depth-damage curves, uncertainty in the probability of a flood event and the duration of the flood wave. There are considerable differences between breach locations, both in the magnitude of the uncertainty, and in its source. This indicates that local characteristics have a considerable impact on uncertainty and sensitivity of flood damage and risk calculations.
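
    The EAD referred to here is the area under the EPL curve; a minimal sketch with hypothetical probability-damage pairs makes the calculation explicit:

        import numpy as np

        # EAD is the area under the exceedance-probability loss (EPL) curve.
        # Hypothetical (probability, damage) pairs for a handful of scenarios:
        p = np.array([1 / 50, 1 / 100, 1 / 250, 1 / 500, 1 / 1250])  # P(D >= d)
        D = np.array([0.0, 120.0, 450.0, 900.0, 1600.0])             # damage, M euro

        order = np.argsort(p)                          # integrate damage over probability
        ps, Ds = p[order], D[order]
        ead = np.sum(np.diff(ps) * 0.5 * (Ds[:-1] + Ds[1:]))
        print(f"EAD ~ {ead:.2f} M euro per year")

    Propagating uncertain inputs (damage curves, breach probabilities, flood wave duration) through this integral, as the study does with Monte Carlo, yields the spread in EAD reported above.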

  1. Brief Report: Effects of Sensory Sensitivity and Intolerance of Uncertainty on Anxiety in Mothers of Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Uljarevic, Mirko; Carrington, Sarah; Leekam, Susan

    2016-01-01

    This study examined the relations between anxiety and individual characteristics of sensory sensitivity (SS) and intolerance of uncertainty (IU) in mothers of children with ASD. The mothers of 50 children completed the Hospital Anxiety and Depression Scale, the Highly Sensitive Person Scale and the IU Scale. Anxiety was associated with both SS and…

  3. Uncertainty and sensitivity analysis of chronic exposure results with the MACCS reactor accident consequence model

    SciTech Connect

    Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.

    1995-01-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the chronic exposure pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 75 imprecisely known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, water ingestion dose, milk growing season dose, long-term groundshine dose, long-term inhalation dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, total latent cancer fatalities, area-dependent cost, crop disposal cost, milk disposal cost, population-dependent cost, total economic cost, condemnation area, condemnation population, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: dry deposition velocity, transfer of cesium from animal feed to milk, transfer of cesium from animal feed to meat, ground concentration of Cs-134 at which the disposal of milk products will be initiated, transfer of Sr-90 from soil to legumes, maximum allowable ground concentration of Sr-90 for production of crops, fraction of cesium entering surface water that is consumed in drinking water, groundshine shielding factor, scale factor defining resuspension, dose reduction associated with decontamination, and ground concentration of I-131 at which disposal of crops will be initiated due to accidents that occur during the growing season.

  4. Volcano deformation source parameters estimated from InSAR: Sensitivities to uncertainties in seismic tomography

    NASA Astrophysics Data System (ADS)

    Masterlark, Timothy; Donovan, Theodore; Feigl, Kurt L.; Haney, Matthew; Thurber, Clifford H.; Tung, Sui

    2016-04-01

    The eruption cycle of a volcano is controlled in part by the upward migration of magma. The characteristics of the magma flux produce a deformation signature at the Earth's surface. Inverse analyses use geodetic data to estimate strategic controlling parameters that describe the position and pressurization of a magma chamber at depth. The specific distribution of material properties controls how observed surface deformation translates to source parameter estimates. Seismic tomography models describe the spatial distributions of material properties that are necessary for accurate models of volcano deformation. This study investigates how uncertainties in seismic tomography models propagate into variations in the estimates of volcano deformation source parameters inverted from geodetic data. We conduct finite element model-based nonlinear inverse analyses of interferometric synthetic aperture radar (InSAR) data for Okmok volcano, Alaska, as an example. We then analyze the estimated parameters and their uncertainties to characterize the magma chamber. Analyses are performed separately for models simulating a pressurized chamber embedded in a homogeneous domain as well as for a domain having a heterogeneous distribution of material properties according to seismic tomography. The estimated depth of the source is sensitive to the distribution of material properties. The estimated depths for the homogeneous and heterogeneous domains are 2666 ± 42 and 3527 ± 56 m below mean sea level, respectively (99% confidence). A Monte Carlo analysis indicates that uncertainties of the seismic tomography cannot account for this discrepancy at the 99% confidence level. Accounting for the spatial distribution of elastic properties according to seismic tomography significantly improves the fit of the deformation model predictions and significantly influences estimates for parameters that describe the location of a pressurized magma chamber.

  5. An uncertainty and sensitivity analysis approach for GIS-based multicriteria landslide susceptibility mapping

    PubMed Central

    Feizizadeh, Bakhtiar; Blaschke, Thomas

    2014-01-01

    GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties that are associated with MCDA techniques may significantly impact the results. This may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods in conjunction with Monte Carlo simulation (MCS) and Dempster–Shafer theory are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster–Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-ID satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA reveals better performance in the reliability assessment. The WLC

  6. Uncertainty and sensitivity analysis of food pathway results with the MACCS Reactor Accident Consequence Model

    SciTech Connect

    Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.

    1995-01-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the food pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 87 imprecisely known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, milk growing season dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, area dependent cost, crop disposal cost, milk disposal cost, condemnation area, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: fraction of cesium deposition on grain fields that is retained on plant surfaces and transferred directly to grain, maximum allowable ground concentrations of Cs-137 and Sr-90 for production of crops, ground concentrations of Cs-134, Cs-137 and I-131 at which the disposal of milk will be initiated due to accidents that occur during the growing season, ground concentrations of Cs-134, I-131 and Sr-90 at which the disposal of crops will be initiated due to accidents that occur during the growing season, rate of depletion of Cs-137 and Sr-90 from the root zone, transfer of Sr-90 from soil to legumes, transfer of Cs-137 from soil to pasture, transfer of cesium from animal feed to meat, and the transfer of cesium, iodine and strontium from animal feed to milk.

  7. An uncertainty and sensitivity analysis approach for GIS-based multicriteria landslide susceptibility mapping.

    PubMed

    Feizizadeh, Bakhtiar; Blaschke, Thomas

    2014-03-04

    GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties that are associated with MCDA techniques may significantly impact the results. This may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods in conjunction with Monte Carlo simulation (MCS) and Dempster-Shafer theory are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster-Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-ID satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA reveals better performance in the reliability assessment. The WLC operation

  8. Reducing the sensitivity of IMPT treatment plans to setup errors and range uncertainties via probabilistic treatment planning

    SciTech Connect

    Unkelbach, Jan; Bortfeld, Thomas; Martin, Benjamin C.; Soukup, Martin

    2009-01-15

    Treatment plans optimized for intensity modulated proton therapy (IMPT) may be very sensitive to setup errors and range uncertainties. If these errors are not accounted for during treatment planning, the dose distribution realized in the patient may be strongly degraded compared to the planned dose distribution. The authors implemented the probabilistic approach to incorporate uncertainties directly into the optimization of an intensity modulated treatment plan. Following this approach, the dose distribution depends on a set of random variables which parameterize the uncertainty, as does the objective function used to optimize the treatment plan. The authors optimize the expected value of the objective function. They investigate IMPT treatment planning regarding range uncertainties and setup errors. They demonstrate that incorporating these uncertainties into the optimization yields qualitatively different treatment plans compared to conventional plans which do not account for uncertainty. The sensitivity of an IMPT plan depends on the dose contributions of individual beam directions. Roughly speaking, steep dose gradients in beam direction make treatment plans sensitive to range errors. Steep lateral dose gradients make plans sensitive to setup errors. More robust treatment plans are obtained by redistributing dose among different beam directions. This can be achieved by the probabilistic approach. In contrast, the safety margin approach as widely applied in photon therapy fails in IMPT and is neither suitable for handling range variations nor setup errors.
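
    The expected-value optimization described here can be sketched in one dimension (assumed Gaussian spot profiles, shift scenarios and probabilities; not the authors' treatment planning system): stacking scenario dose matrices weighted by the square roots of their probabilities turns the expected squared deviation into an ordinary least-squares problem:

        import numpy as np
        from scipy.optimize import nnls

        # 1D toy: pencil beams with Gaussian profiles, a box target, and setup
        # shifts as discrete scenarios. Minimizing the scenario-weighted expected
        # squared deviation is an ordinary least-squares problem in the weights.
        x = np.linspace(0.0, 10.0, 101)                # voxel positions, cm
        target = ((x > 3.0) & (x < 7.0)).astype(float) # prescribed dose
        centers = np.linspace(2.5, 7.5, 11)            # spot positions

        def dose_matrix(shift):
            return np.exp(-0.5 * ((x[:, None] - (centers + shift)) / 0.6) ** 2)

        shifts, probs = [0.0, -0.3, 0.3], [0.6, 0.2, 0.2]  # assumed scenarios

        A = np.vstack([np.sqrt(q) * dose_matrix(s) for s, q in zip(shifts, probs)])
        b = np.concatenate([np.sqrt(q) * target for q in probs])
        w_prob, _ = nnls(A, b)                         # probabilistic plan
        w_nom, _ = nnls(dose_matrix(0.0), target)      # nominal-only plan

        for s in shifts:
            D = dose_matrix(s)
            print(f"shift {s:+.1f} cm: nominal {np.linalg.norm(D @ w_nom - target):.3f}"
                  f", probabilistic {np.linalg.norm(D @ w_prob - target):.3f}")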

  9. Reducing the sensitivity of IMPT treatment plans to setup errors and range uncertainties via probabilistic treatment planning.

    PubMed

    Unkelbach, Jan; Bortfeld, Thomas; Martin, Benjamin C; Soukup, Martin

    2009-01-01

    Treatment plans optimized for intensity modulated proton therapy (IMPT) may be very sensitive to setup errors and range uncertainties. If these errors are not accounted for during treatment planning, the dose distribution realized in the patient may be strongly degraded compared to the planned dose distribution. The authors implemented the probabilistic approach to incorporate uncertainties directly into the optimization of an intensity modulated treatment plan. Following this approach, the dose distribution depends on a set of random variables which parameterize the uncertainty, as does the objective function used to optimize the treatment plan. The authors optimize the expected value of the objective function. They investigate IMPT treatment planning regarding range uncertainties and setup errors. They demonstrate that incorporating these uncertainties into the optimization yields qualitatively different treatment plans compared to conventional plans which do not account for uncertainty. The sensitivity of an IMPT plan depends on the dose contributions of individual beam directions. Roughly speaking, steep dose gradients in beam direction make treatment plans sensitive to range errors. Steep lateral dose gradients make plans sensitive to setup errors. More robust treatment plans are obtained by redistributing dose among different beam directions. This can be achieved by the probabilistic approach. In contrast, the safety margin approach as widely applied in photon therapy fails in IMPT and is suitable for handling neither range variations nor setup errors.

  10. Probabilistic approaches to compute uncertainty intervals and sensitivity factors of ultrasonic simulations of a weld inspection.

    PubMed

    Rupin, F; Blatman, G; Lacaze, S; Fouquet, T; Chassignole, B

    2014-04-01

    For comprehension purposes, numerical computations are increasingly used to simulate the propagation phenomena observed during experimental inspections. However, good agreement between experimental and simulated data necessitates the use of accurate input data and thus a good characterization of the inspected material. Generally the input data are provided by experimental measurements and are consequently tainted with uncertainties. Thus, it becomes necessary to evaluate the impact of these uncertainties on the outputs of the numerical model. The aim of this study is to perform a probabilistic analysis of an ultrasonic inspection of an austenitic weld containing a manufactured defect, based on advanced techniques such as polynomial chaos expansions and the computation of sensitivity factors (Sobol, DGSM). The simulation of this configuration with the finite element code ATHENA2D was performed 6000 times with variations of the input parameters (the columnar grain orientation and the elastic constants of the material). The 6000 sets of input parameters were obtained from adapted statistical laws. The distributions of the output parameters (the amplitude and the position of the defect echo) were then analyzed and the 95% confidence intervals were determined.
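
    As a minimal sketch of the interval-estimation step described above, the following Python snippet propagates two uncertain inputs through a hypothetical stand-in for a single model run and extracts the empirical 95% interval; the response function, input laws, and all numbers are invented for illustration and are not the paper's ATHENA2D setup.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical stand-in for one simulation run: maps (columnar grain
        # orientation [deg], elastic constant [GPa]) to a defect echo amplitude.
        def echo_amplitude(orientation, c11):
            return 1.0 - 0.02 * np.abs(orientation) + 0.005 * (c11 - 240.0)

        n = 6000
        orientation = rng.normal(0.0, 5.0, n)   # assumed input law
        c11 = rng.normal(240.0, 10.0, n)        # assumed input law
        amp = echo_amplitude(orientation, c11)

        lo, hi = np.percentile(amp, [2.5, 97.5])  # empirical 95% interval
        print(f"95% interval on amplitude: [{lo:.3f}, {hi:.3f}]")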

  11. A comparison of five forest interception models using global sensitivity and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Linhoss, Anna C.; Siegert, Courtney M.

    2016-07-01

    Interception by the forest canopy plays a critical role in the hydrologic cycle by removing a significant portion of incoming precipitation from the terrestrial component. While there are a number of existing physical models of forest interception, few studies have summarized or compared these models. The objective of this work is to use global sensitivity and uncertainty analysis to compare five mechanistic interception models including the Rutter, Rutter Sparse, Gash, Sparse Gash, and Liu models. Using parameter probability distribution functions of values from the literature, our results show that on average storm duration [Dur], gross precipitation [PG], canopy storage [S] and solar radiation [Rn] are the most important model parameters. On the other hand, empirical parameters used in calculating evaporation and drip (i.e. trunk evaporation as a proportion of evaporation from the saturated canopy [ɛ], the empirical drainage parameter [b], the drainage partitioning coefficient [pd], and the rate of water dripping from the canopy when canopy storage has been reached [Ds]) have relatively low levels of importance in interception modeling. As such, future modeling efforts should aim to decompose parameters that are the most influential in determining model outputs into easily measurable physical components. Because this study compares models, the choices regarding the parameter probability distribution functions are applied across models, which enables a more definitive ranking of model uncertainty.

  12. Determination of protection zones for Dutch groundwater wells against virus contamination--uncertainty and sensitivity analysis.

    PubMed

    Schijven, J F; Mülschlegel, J H C; Hassanizadeh, S M; Teunis, P F M; de Roda Husman, A M

    2006-09-01

    Protection zones of shallow unconfined aquifers in The Netherlands were calculated that allow protection against virus contamination to the level that the infection risk of 10^-4 per person per year is not exceeded with a 95% certainty. An uncertainty and a sensitivity analysis of the calculated protection zones were included. It was concluded that protection zones of 1 to 2 years travel time (206-418 m) are needed (6 to 12 times the currently applied travel time of 60 days). This will lead to enlargement of protection zones, encompassing 110 unconfined groundwater well systems that produce 3 x 10^8 m^3 y^-1 of drinking water (38% of total Dutch production from groundwater). A smaller protection zone is possible if it can be shown that an aquifer has properties that lead to greater reduction of virus contamination, like more attachment. Deeper aquifers beneath aquitards of at least 2 years of vertical travel time are adequately protected because vertical flow in the aquitards is only 0.7 m per year. The most sensitive parameters are virus attachment and inactivation. The next most sensitive parameters are grain size of the sand, abstraction rate of groundwater, virus concentrations in raw sewage and consumption of unboiled drinking water. Research is recommended on additional protection by attachment and under unsaturated conditions.
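
    The travel-time reasoning can be made concrete with a first-order inactivation model, C(t) = C0 * exp(-mu * t), solved for the transit time that reaches a target concentration; the numbers below are assumed for illustration, not the paper's calibrated values.

        import numpy as np

        C0 = 1e3         # virus concentration entering the aquifer (per liter, assumed)
        C_target = 1e-5  # concentration consistent with the 10^-4 /person/year risk target (assumed)
        mu = 0.025       # first-order inactivation rate per day (assumed)

        t_days = np.log(C0 / C_target) / mu
        print(f"required travel time: {t_days:.0f} days (~{t_days / 365:.1f} years)")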

  13. Investigating Uncertainty and Sensitivity in Integrated, Multimedia Environmental Models: Tools for FRAMES-3MRA

    SciTech Connect

    Babendreier, Justin E.; Castleton, Karl J.

    2005-08-01

    Elucidating uncertainty and sensitivity structures in environmental models can be a difficult task, even for low-order, single-medium constructs driven by a unique set of site-specific data. Quantitative assessment of integrated, multimedia models that simulate hundreds of sites, spanning multiple geographical and ecological regions, will ultimately require a comparative approach using several techniques, coupled with sufficient computational power. The Framework for Risk Analysis in Multimedia Environmental Systems - Multimedia, Multipathway, and Multireceptor Risk Assessment (FRAMES-3MRA) is an important software model being developed by the United States Environmental Protection Agency for use in risk assessment of hazardous waste management facilities. The 3MRA modeling system includes a set of 17 science modules that collectively simulate release, fate and transport, exposure, and risk associated with hazardous contaminants disposed of in land-based waste management units (WMUs).

  14. Uncertainty analysis and global sensitivity analysis of techno-economic assessments for biodiesel production.

    PubMed

    Tang, Zhang-Chun; Lu, Zhenzhou; Liu, Zhiwen; Xiao, Ningcong

    2015-01-01

    There are various uncertain parameters in the techno-economic assessments (TEAs) of biodiesel production, including capital cost, interest rate, feedstock price, maintenance rate, biodiesel conversion efficiency, glycerol price and operating cost. However, few studies have focused on the influence of these parameters on TEAs. This paper investigated the effects of these parameters on the life cycle cost (LCC) and the unit cost (UC) in the TEAs of biodiesel production. The results show that LCC and UC exhibit variations when involving uncertain parameters. Based on the uncertainty analysis, three global sensitivity analysis (GSA) methods are utilized to quantify the contribution of an individual uncertain parameter to LCC and UC. The GSA results reveal that the feedstock price and the interest rate produce considerable effects on the TEAs. These results can provide a useful guide for entrepreneurs when planning production plants.
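
    A hedged sketch of the uncertainty-analysis step: a deliberately simplified life cycle cost (capital plus discounted annual feedstock and operating costs) is propagated through sampled feedstock costs and interest rates. The cost structure and all distributions are hypothetical, not the paper's TEA model.

        import numpy as np

        rng = np.random.default_rng(1)

        # Simplified LCC over a 20-year plant life (illustrative only).
        def lcc(capital, feedstock, operating, rate, years=20):
            t = np.arange(1, years + 1)
            return capital + np.sum((feedstock + operating) * (1.0 + rate) ** -t)

        n = 10000
        capital = rng.normal(5e6, 5e5, n)      # up-front capital cost (assumed)
        feedstock = rng.normal(8e5, 1e5, n)    # annual feedstock cost (assumed)
        operating = rng.normal(2e5, 2e4, n)    # annual operating cost (assumed)
        rate = rng.uniform(0.03, 0.10, n)      # interest rate (assumed)

        samples = np.array([lcc(c, f, o, r)
                            for c, f, o, r in zip(capital, feedstock, operating, rate)])
        print(f"LCC 5-95% range: [{np.percentile(samples, 5):.3e}, "
              f"{np.percentile(samples, 95):.3e}]")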

  15. Global sensitivity analysis and uncertainties in SEA models of vibroacoustic systems

    NASA Astrophysics Data System (ADS)

    Christen, Jean-Loup; Ichchou, Mohamed; Troclet, Bernard; Bareille, Olivier; Ouisse, Morvan

    2017-06-01

    The effect of parametric uncertainties on the dispersion of Statistical Energy Analysis (SEA) models of structural-acoustic coupled systems is studied with the Fourier amplitude sensitivity test (FAST) method. The method is first applied to an academic example representing a transmission suite, then to a more complex industrial structure from the space industry. Two sets of parameters are considered, namely errors on the SEA model's coefficients, or directly the engineering parameters. The first case is an intrusive approach, but enables identification of the dominant phenomena taking place in a given configuration. The second is non-intrusive and appeals more to engineering considerations, by studying the effect of input parameters such as geometry or material characteristics on the SEA outputs. A study of the distribution of results in each frequency band with the same sampling shows some interesting features, such as bimodal distributions in some ranges.
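
    As a sketch of the FAST workflow, assuming the SALib Python package and a made-up algebraic stand-in for the SEA response (the parameter names, bounds, and model are illustrative, not the paper's):

        import numpy as np
        from SALib.sample.fast_sampler import sample
        from SALib.analyze.fast import analyze

        problem = {
            "num_vars": 3,
            "names": ["eta_d", "eta_c", "h"],   # damping loss, coupling loss, thickness
            "bounds": [[0.01, 0.05], [0.001, 0.01], [0.002, 0.01]],
        }

        X = sample(problem, 1000)                             # FAST search-curve samples
        Y = X[:, 1] / (X[:, 0] + X[:, 1]) * (1.0 / X[:, 2])   # stand-in response
        Si = analyze(problem, Y)
        for name, s1 in zip(problem["names"], Si["S1"]):
            print(f"{name}: first-order index {s1:.2f}")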

  16. Perspectives Gained in an Evaluation of Uncertainty, Sensitivity, and Decision Analysis Software

    SciTech Connect

    Davis, F.J.; Helton, J.C.

    1999-02-24

    The following software packages for uncertainty, sensitivity, and decision analysis were reviewed and also tested with several simple analysis problems: Crystal Ball, RiskQ, SUSA-PC, Analytica, PRISM, Ithink, Stella, LHS, STEPWISE, and JMP. Results from the review and test problems are presented. The study resulted in the recognition of the importance of four considerations in the selection of a software package: (1) the availability of an appropriate selection of distributions, (2) the ease with which data flows through the input sampling, model evaluation, and output analysis process, (3) the type of models that can be incorporated into the analysis process, and (4) the level of confidence in the software modeling and results.
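
    For the input-sampling stage that such packages automate, a Latin hypercube design takes only a few lines; this sketch assumes SciPy >= 1.7 and purely illustrative parameter bounds.

        from scipy.stats import qmc

        # 100 stratified points for three inputs, scaled from [0, 1)^3
        # to each parameter's assumed range.
        sampler = qmc.LatinHypercube(d=3, seed=42)
        unit = sampler.random(n=100)
        X = qmc.scale(unit, [0.0, 10.0, 1e-4], [1.0, 50.0, 1e-2])
        print(X[:3])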

  17. Sensitivity and uncertainty analysis of a physically-based landslide model

    NASA Astrophysics Data System (ADS)

    Yatheendradas, S.; Bach Kirschbaum, D.; Baum, R. L.; Godt, J.

    2013-12-01

    Worldwide, rainfall-induced landslides pose a major threat to life and property. Remotely sensed data combined with physically-based models of landslide initiation are a potentially economical solution for anticipating landslide activity over large, national or multinational areas as a basis for landslide early warning. Detailed high-resolution landslide modeling is challenging due to difficulties in quantifying the complex interaction between rainfall infiltration, surface materials and the typically coarse resolution of available remotely sensed data. These slope-stability models calculate coincident changes in driving and resisting forces at the hillslope level for anticipating landslides. This research seeks to better quantify the uncertainty of these models as well as evaluate their potential for application over large areas through detailed sensitivity analyses. Sensitivity to various factors including model input parameters, boundary and initial conditions, rainfall inputs, and spatial resolution of model inputs is assessed using a probabilistic ensemble setup. We use the physically-based USGS model, TRIGRS (Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability), that has been ported to NASA's high performance Land Information System (LIS) to take advantage of its multiple remote sensing data streams and tools. We apply the TRIGRS model over an example region with available in-situ gage and remotely sensed rainfall (e.g., TRMM: http://pmm.nasa.gov). To make this model applicable even in regions without relevant fine-resolution data, soil depth is estimated using topographic information, and initial water table depth using spatially disaggregated coarse-resolution modeled soil moisture data. The analyses are done across a range of fine spatial resolutions to determine the corresponding trend in the contribution of different factors to the model output uncertainty. This research acts as a guide towards application of such a detailed slope
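
    The deterministic core of such grid-based slope-stability models is the infinite-slope factor of safety; the sketch below samples illustrative parameter ranges (not actual TRIGRS inputs) and reports an ensemble failure probability.

        import numpy as np

        rng = np.random.default_rng(7)

        # FS = [c' + (normal stress - pore pressure) * tan(phi')] / driving stress
        def factor_of_safety(c, phi, gamma, z, beta, m, gamma_w=9.81):
            sigma = gamma * z * np.cos(beta) ** 2          # normal stress, kPa
            u = gamma_w * m * z * np.cos(beta) ** 2        # pore pressure, kPa
            tau = gamma * z * np.sin(beta) * np.cos(beta)  # driving stress, kPa
            return (c + (sigma - u) * np.tan(phi)) / tau

        n = 10000
        c = rng.uniform(2.0, 10.0, n)                 # cohesion, kPa (assumed)
        phi = np.radians(rng.uniform(25.0, 35.0, n))  # friction angle (assumed)
        gamma = rng.normal(19.0, 1.0, n)              # unit weight, kN/m^3 (assumed)
        z = rng.uniform(1.0, 3.0, n)                  # soil depth, m (assumed)
        m = rng.uniform(0.0, 1.0, n)                  # water table ratio (assumed)

        fs = factor_of_safety(c, phi, gamma, z, np.radians(30.0), m)
        print(f"P(FS < 1) = {(fs < 1.0).mean():.2f}")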

  18. Sensitivity of elastic properties to measurement uncertainties in laryngeal muscles with implications for voice fundamental frequency prediction.

    PubMed

    Hunter, Eric J; Alipour, Fariborz; Titze, Ingo R

    2007-11-01

    This paper discusses the effects of measurement uncertainties when calculating elastic moduli of laryngeal tissue. Small dimensions coupled with highly nonlinear elastic properties exacerbate the uncertainties. The sensitivity of both tangent and secant Young's Modulus was quantified in terms of the coefficient of variation, which depended on measurement of reference length and cross-sectional area. Uncertainties in the measurement of mass, used to calculate cross-sectional area of a small tissue sample, affected Young's Modulus calculations when tissue absorption of the hydrating solution was not accounted for. Uncertainty in reference length had twice the effect on elasticity as other measures. The implication of these measurement errors on predicted fundamental frequency of vocalization is discussed. Refinements on isolated muscle experimental protocols are proposed that pay greatest attention to measures of highest sensitivity.
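
    The doubled influence of reference length is what first-order error propagation predicts when the cross-sectional area is itself derived from mass and length, since A = m / (rho * L0) makes E = F * rho * L0**2 / (m * dL); this reading of the result, and the coefficients of variation below, are assumptions for illustration.

        import numpy as np

        # Relative error in L0 enters twice because L0 appears squared in E.
        cv_F, cv_L0, cv_m, cv_dL = 0.02, 0.03, 0.02, 0.02   # assumed CVs

        cv_E = np.sqrt(cv_F**2 + (2 * cv_L0)**2 + cv_m**2 + cv_dL**2)
        print(f"CV of Young's modulus: {cv_E:.3f}")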

  19. SENSITIVITY OF ELASTIC PROPERTIES TO MEASUREMENT UNCERTAINTIES IN LARYNGEAL MUSCLES WITH IMPLICATIONS FOR VOICE FUNDAMENTAL FREQUENCY PREDICTION

    PubMed Central

    Hunter, Eric J.; Alipour, Fariborz; Titze, Ingo R.

    2016-01-01

    Objectives/Hypothesis This paper discusses the effects of measurement uncertainties when calculating elastic moduli of laryngeal tissue. Methods Small dimensions coupled with highly nonlinear elastic properties exacerbate the uncertainties. The sensitivity of both tangent and secant Young's Modulus was quantified in terms of the coefficient of variation, which depended on measurement of reference length and cross-sectional area. Results Uncertainties in the measurement of mass, used to calculate cross-sectional area of a small tissue sample, affected Young's Modulus calculations when tissue absorption of the hydrating solution was not accounted for. Uncertainty in reference length had twice the effect on elasticity as other measures. Conclusions The implication of these measurement errors on predicted fundamental frequency of vocalization is discussed. Refinements on isolated muscle experimental protocols are proposed that pay greatest attention to measures of highest sensitivity. PMID:16904867

  20. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    PubMed Central

    Zajac, Zuzanna; Stith, Bradley; Bowling, Andrea C; Langtimm, Catherine A; Swain, Eric D

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust
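
    A minimal sketch of the UA side of this workflow, using a hypothetical two-factor HSI (geometric mean of salinity and depth suitability curves) with assumed input distributions standing in for the hydrodynamic model output:

        import numpy as np

        rng = np.random.default_rng(3)

        def suit_salinity(s):                  # triangular suitability, peak at 15 psu
            return np.clip(1.0 - np.abs(s - 15.0) / 15.0, 0.0, 1.0)

        def suit_depth(d):                     # triangular suitability, peak at 1 m
            return np.clip(1.0 - np.abs(d - 1.0) / 1.0, 0.0, 1.0)

        n = 5000
        salinity = rng.normal(18.0, 4.0, n)    # psu (assumed)
        depth = rng.normal(1.2, 0.3, n)        # m (assumed)

        hsi = np.sqrt(suit_salinity(salinity) * suit_depth(depth))
        print(f"HSI median {np.median(hsi):.2f}, 90% band "
              f"[{np.percentile(hsi, 5):.2f}, {np.percentile(hsi, 95):.2f}]")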

  1. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation.

    PubMed

    Zajac, Zuzanna; Stith, Bradley; Bowling, Andrea C; Langtimm, Catherine A; Swain, Eric D

    2015-07-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust

  2. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    USGS Publications Warehouse

    Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust

  3. Sensitivity of low energy brachytherapy Monte Carlo dose calculations to uncertainties in human tissue composition

    SciTech Connect

    Landry, Guillaume; Reniers, Brigitte; Murrer, Lars; Lutgens, Ludy; Bloemen-Van Gurp, Esther; Pignol, Jean-Philippe; Keller, Brian; Beaulieu, Luc; Verhaegen, Frank

    2010-10-15

    Purpose: The objective of this work is to assess the sensitivity of Monte Carlo (MC) dose calculations to uncertainties in human tissue composition for a range of low photon energy brachytherapy sources: 125I, 103Pd, 131Cs, and an electronic brachytherapy source (EBS). The low energy photons emitted by these sources make the dosimetry sensitive to variations in tissue atomic number due to the dominance of the photoelectric effect. This work reports dose to a small mass of water in medium D(w,m) as opposed to dose to a small mass of medium in medium D(m,m). Methods: Mean adipose, mammary gland, and breast tissues (as uniform mixture of the aforementioned tissues) are investigated as well as compositions corresponding to one standard deviation from the mean. Prostate mean compositions from three different literature sources are also investigated. Three sets of MC simulations are performed with the GEANT4 code: (1) Dose calculations for idealized TG-43-like spherical geometries using point sources. Radial dose profiles obtained in different media are compared to assess the influence of compositional uncertainties. (2) Dose calculations for four clinical prostate LDR brachytherapy permanent seed implants using 125I seeds (Model 2301, Best Medical, Springfield, VA). The effect of varying the prostate composition in the planning target volume (PTV) is investigated by comparing PTV D90 values. (3) Dose calculations for four clinical breast LDR brachytherapy permanent seed implants using 103Pd seeds (Model 2335, Best Medical). The effects of varying the adipose/gland ratio in the PTV and of varying the elemental composition of adipose and gland within one standard deviation of the assumed mean composition are investigated by comparing PTV D90 values. For (2) and (3), the influence of using the mass density from CT scans instead of unit mass density is also assessed. Results: Results from simulation (1) show that variations

  4. Sensitivity of low energy brachytherapy Monte Carlo dose calculations to uncertainties in human tissue composition.

    PubMed

    Landry, Guillaume; Reniers, Brigitte; Murrer, Lars; Lutgens, Ludy; Gurp, Esther Bloemen-Van; Pignol, Jean-Philippe; Keller, Brian; Beaulieu, Luc; Verhaegen, Frank

    2010-10-01

    The objective of this work is to assess the sensitivity of Monte Carlo (MC) dose calculations to uncertainties in human tissue composition for a range of low photon energy brachytherapy sources: 125I, 103Pd, 131Cs, and an electronic brachytherapy source (EBS). The low energy photons emitted by these sources make the dosimetry sensitive to variations in tissue atomic number due to the dominance of the photoelectric effect. This work reports dose to a small mass of water in medium D(w,m) as opposed to dose to a small mass of medium in medium D(m,m). Mean adipose, mammary gland, and breast tissues (as uniform mixture of the aforementioned tissues) are investigated as well as compositions corresponding to one standard deviation from the mean. Prostate mean compositions from three different literature sources are also investigated. Three sets of MC simulations are performed with the GEANT4 code: (1) Dose calculations for idealized TG-43-like spherical geometries using point sources. Radial dose profiles obtained in different media are compared to assess the influence of compositional uncertainties. (2) Dose calculations for four clinical prostate LDR brachytherapy permanent seed implants using 125I seeds (Model 2301, Best Medical, Springfield, VA). The effect of varying the prostate composition in the planning target volume (PTV) is investigated by comparing PTV D90 values. (3) Dose calculations for four clinical breast LDR brachytherapy permanent seed implants using 103Pd seeds (Model 2335, Best Medical). The effects of varying the adipose/gland ratio in the PTV and of varying the elemental composition of adipose and gland within one standard deviation of the assumed mean composition are investigated by comparing PTV D90 values. For (2) and (3), the influence of using the mass density from CT scans instead of unit mass density is also assessed. Results from simulation (1) show that variations in the mean compositions of tissues affect low energy brachytherapy dosimetry

  5. Sensitivity of an atmospheric photochemistry model to chlorine perturbations including consideration of uncertainty propagation

    NASA Technical Reports Server (NTRS)

    Stolarski, R. S.; Douglass, A. R.

    1986-01-01

    Models of stratospheric photochemistry are generally tested by comparing their predictions for the composition of the present atmosphere with measurements of species concentrations. These models are then used to make predictions of the atmospheric sensitivity to perturbations. Here the problem of the sensitivity of such a model to chlorine perturbations ranging from the present influx of chlorine-containing compounds to several times that influx is addressed. The effects of uncertainties in input parameters, including reaction rate coefficients, cross sections, solar fluxes, and boundary conditions, are evaluated using a Monte Carlo method in which the values of the input parameters are randomly selected. The results are probability distributions for present atmospheric concentrations and for calculated perturbations due to chlorine from fluorocarbons. For more than 300 Monte Carlo runs the calculated ozone perturbation for continued emission of fluorocarbons at today's rates had a mean value of -6.2 percent, with a 1-sigma width of 5.5 percent. Using the same runs but only allowing the cases in which the calculated present atmosphere values of NO, NO2, and ClO at 25 km altitude fell within the range of measurements yielded a mean ozone depletion of -3 percent, with a 1-sigma deviation of 2.2 percent. The model showed a nonlinear behavior as a function of added fluorocarbons. The mean of the Monte Carlo runs was less nonlinear than the model run using the mean values of the input parameters.
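
    The filtering step, keeping only Monte Carlo runs whose simulated present-day observables fall inside measurement bounds, can be sketched as follows; the stand-in model, rate distribution, and measurement window are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(11)

        n = 300
        k = rng.lognormal(mean=0.0, sigma=0.3, size=n)  # perturbed rate coefficient
        present_no2 = 4.0 * k                           # stand-in present-day observable
        ozone_change = -6.2 * k                         # stand-in perturbation output, %

        ok = (present_no2 > 3.0) & (present_no2 < 5.0)  # measurement window (assumed)
        print(f"all runs:      {ozone_change.mean():.1f} +/- {ozone_change.std():.1f} %")
        print(f"filtered runs: {ozone_change[ok].mean():.1f} +/- {ozone_change[ok].std():.1f} %")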

  7. COMPUTATIONAL METHODS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS FOR ENVIRONMENTAL AND BIOLOGICAL MODELS

    EPA Science Inventory

    This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...

  9. Third Floor Plan, Second Floor Plan, First Floor Plan, Ground ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Third Floor Plan, Second Floor Plan, First Floor Plan, Ground Floor Plan, West Bunkhouse - Kennecott Copper Corporation, On Copper River & Northwestern Railroad, Kennicott, Valdez-Cordova Census Area, AK

  10. Summary Findings from the AVT-191 Project to Assess Sensitivity Analysis and Uncertainty Quantification Methods for Military Vehicle Design

    NASA Technical Reports Server (NTRS)

    Benek, John A.; Luckring, James M.

    2017-01-01

    A NATO symposium held in Greece in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications were not clear. The NATO Science and Technology Organization, Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic vehicle development problems. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper summarizes findings and lessons learned from the task group.

  11. Overview of the AVT-191 Project to Assess Sensitivity Analysis and Uncertainty Quantification Methods for Military Vehicle Design

    NASA Technical Reports Server (NTRS)

    Benek, John A.; Luckring, James M.

    2017-01-01

    A NATO symposium held in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications were not known. The STO Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic problems of interest to NATO. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper presents an overview of the AVT-191 program content.

  12. Children's Sensitivity to Their Own Relative Ignorance: Handling of Possibilities Under Epistemic and Physical Uncertainty

    ERIC Educational Resources Information Center

    Robinson, Elizabeth J.; Rowley, Martin G.; Beck, Sarah R.; Carroll, Dan J.; Apperly, Ian A.

    2006-01-01

    Children more frequently specified possibilities correctly when uncertainty resided in the physical world (physical uncertainty) than in their own perspective of ignorance (epistemic uncertainty). In Experiment 1 (N=61), 4- to 6-year-olds marked both doors from which a block might emerge when the outcome was undetermined, but a single door when…

  14. The sensitivity analysis by adjoint method for the uncertainty evaluation of the CATHARE-2 code

    SciTech Connect

    Barre, F.; de Crecy, A.; Perret, C.

    1995-09-01

    This paper presents the application of the DASM (Discrete Adjoint Sensitivity Method) to the CATHARE 2 thermal-hydraulics code. In a first part, the basis of this method is presented. The mathematical model of the CATHARE 2 code is based on the two-fluid six-equation model. It is discretized using implicit time discretization and it is relatively easy to implement this method in the code. The DASM is the ASM directly applied to the algebraic system of the discretized code equations, which has been demonstrated to be the only solution of the mathematical model. The ASM is an integral part of the new version 1.4 of CATHARE. It acts as a post-processing module. It has been qualified by comparison with the "brute force" technique. In a second part, an application of the DASM in CATHARE 2 is presented. It deals with the determination of the uncertainties of the constitutive relationships, which is a compulsory step for calculating the final uncertainty of a given response. First, the general principles of the method are explained: the constitutive relationships are represented by several parameters and the aim is to calculate the variance-covariance matrix of these parameters. The experimental results of the separate effect tests used to establish the correlation are considered. The variances of the corresponding results calculated by CATHARE are estimated by comparing experiment and calculation. A DASM calculation is carried out to provide the derivatives of the responses. The final covariance matrix is obtained by combination of the variances of the responses and those derivatives. Then, the application of this method to a simple case, the blowdown Canon experiment, is presented. This application has been successfully performed.
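
    The payoff of the adjoint method is easiest to see on a small linear system: one adjoint solve yields the gradient of a response with respect to every parameter at once. A self-contained numerical sketch (a toy system, not the CATHARE equations):

        import numpy as np

        # Forward problem A u = b(p); response J = g . u.
        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        g = np.array([1.0, 0.0])                 # response selects the first unknown

        def b(p):
            return np.array([p[0], p[0] * p[1]])

        p = np.array([2.0, 0.5])
        u = np.linalg.solve(A, b(p))
        lam = np.linalg.solve(A.T, g)            # single adjoint solve

        db_dp = np.array([[1.0, 0.0],            # Jacobian of b w.r.t. p
                          [p[1], p[0]]])
        dJ_dp = db_dp.T @ lam                    # gradient w.r.t. both parameters
        print(u, dJ_dp)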

  15. Sensitivity-Informed De Novo Programming for Many-Objective Water Portfolio Planning Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Kasprzyk, J. R.; Reed, P. M.; Kirsch, B. R.; Characklis, G. W.

    2009-12-01

    Risk-based water supply management presents severe cognitive, computational, and social challenges to planning in a changing world. Decision aiding frameworks must confront the cognitive biases implicit to risk, the severe uncertainties associated with long term planning horizons, and the consequent ambiguities that shape how we define and solve water resources planning and management problems. This paper proposes and demonstrates a new interactive framework for sensitivity informed de novo programming. The theoretical focus of our many-objective de novo programming is to promote learning and evolving problem formulations to enhance risk-based decision making. We have demonstrated our proposed de novo programming framework using a case study for a single city's water supply in the Lower Rio Grande Valley (LRGV) in Texas. Key decisions in this case study include the purchase of permanent rights to reservoir inflows and anticipatory thresholds for acquiring transfers of water through optioning and spot leases. A 10-year Monte Carlo simulation driven by historical data is used to provide performance metrics for the supply portfolios. The three major components of our methodology include Sobol global sensitivity analysis, many-objective evolutionary optimization and interactive tradeoff visualization. The interplay between these components allows us to evaluate alternative design metrics, their decision variable controls and the consequent system vulnerabilities. Our LRGV case study measures water supply portfolios' efficiency, reliability, and utilization of transfers in the water supply market. The sensitivity analysis is used interactively over interannual, annual, and monthly time scales to indicate how the problem controls change as a function of the timescale of interest. These results have then been used to improve our exploration and understanding of LRGV costs, vulnerabilities, and the water portfolios' critical reliability constraints. These results

  16. First and Second Floor Window Sills; First Floor, Second Floor, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    First and Second Floor Window Sills; First Floor, Second Floor, and Third Floor Door Jambs; Stair and Second Floor Baseboards; First Floor Window Jamb - National Home for Disabled Volunteer Soldiers - Battle Mountain Sanitarium, Treasurer's Quarters, 500 North Fifth Street, Hot Springs, Fall River County, SD

  17. Elements of systemic sensitivity and propagated uncertainty in LiDAR-based forest attribute maps (Invited)

    NASA Astrophysics Data System (ADS)

    Hopkinson, C.; Chasmer, L.; Kljun, N.; van Gorsel, E.

    2013-12-01

    The application of airborne LiDAR to vegetation and forest attribute extraction and modeling is now commonplace. Direct estimates of tree-, plot- or stand-level height and canopy cover are frequently made as precursors to more complex and indirect attribute derivations such as leaf area, biomass, basal area, fuel, even species. Frequently, the faith placed in LiDAR to produce these spatial variables appears so complete that raw data properties or the methods employed in the modeling of direct or indirect attributes are glossed over, the assumption being that if basic variables and derivatives can be easily predicted across a few studies, then it follows this will always be the case. Few studies address explicitly the range of sensitivity in direct and indirect forest attribute estimations: a) derived from LiDAR data of differing fundamental acquisition or point cloud properties; or b) produced using different data extraction, filtering or raster interpolation approaches. The paper will illustrate some of the critical acquisition and point cloud attributes (such as pulse power, flight line configuration, timing and point density) that strongly influence mapped and modeled forest attributes at a range of case study sites in North America and Australia. Further, the influence of multiple seemingly defensible canopy height model generation criteria will be compared to illustrate the high sensitivity in even the most basic of LiDAR-based forest attribute maps. We conclude that not all LiDAR are created equal and that both raw data properties and all data manipulation steps must be communicated when utilising such data. Finally, we believe that as with more standard products like LiDAR point cloud formats and digital terrain models (DTMs), an international committee is needed to provide guidance on airborne LiDAR vegetation products so that uncertainties can be mitigated when data are shared or compared across sites and through time.

  18. Finite-element modeling of bones from CT data: sensitivity to geometry and material uncertainties.

    PubMed

    Taddei, Fulvia; Martelli, Saulo; Reggiani, Barbara; Cristofolini, Luca; Viceconti, Marco

    2006-11-01

    The aim of this paper is to analyze how the uncertainties in modelling the geometry and the material properties of a human bone affect the predictions of a finite-element model derived from computed tomography (CT) data. A sensitivity analysis, based on a Monte Carlo method, was performed using three femur models generated from in vivo CT datasets, each subjected to two different loading conditions. The geometry, the density and the mechanical properties of the bone tissue were considered as random input variables. Finite-element results typically used in biomechanics research were considered as statistical output variables, and their sensitivity to the variability of the inputs was assessed. The results showed that it is not possible to define a priori the influence of the errors related to the geometry definition process and to the material assignment process on the finite-element analysis results. The errors in the geometric representation of the bone are always the dominant variables for the stresses, as was expected. However, for all the variables, the results seemed to depend on the loading condition and to vary from subject to subject. The most interesting result is, however, that using the proposed method to build a finite-element model of a femur from a CT dataset of the quality typically achievable in clinical practice, the coefficients of variation of the output variables never exceeded 9%. The presented method is hence robust enough to be used for investigating the mechanical behavior of bones with subject-specific finite-element models derived from CT data taken in vivo.

  19. Overcoming computational uncertainties to reveal chemical sensitivity in single molecule conduction calculations

    NASA Astrophysics Data System (ADS)

    Solomon, Gemma C.; Reimers, Jeffrey R.; Hush, Noel S.

    2005-06-01

    In the calculation of conduction through single molecules, approximations about the geometry and electronic structure of the system are usually made in order to simplify the problem. Previously [G. C. Solomon, J. R. Reimers, and N. S. Hush, J. Chem. Phys. 121, 6615 (2004)], we have shown that, in calculations employing cluster models for the electrodes, proper treatment of the open-shell nature of the clusters is the most important computational feature required to make the results sensitive to variations in the structural and chemical features of the system. Here, we expand this and establish a general hierarchy of requirements involving treatment of geometrical approximations. These approximations are categorized into two classes: those associated with finite-dimensional methods for representing the semi-infinite electrodes, and those associated with the chemisorption topology. We show that ca. 100 unique atoms are required in order to properly characterize each electrode: using fewer atoms leads to nonsystematic variations in conductivity that can overwhelm the subtler changes. The choice of binding site is shown to be the next most important feature, while some effects that are difficult to control experimentally concerning the orientations at each binding site are actually shown to be insignificant. Verification of this result provides a general test for the precision of computational procedures for molecular conductivity. Predictions concerning the dependence of conduction on substituent and other effects on the central molecule are found to be meaningful only when they exceed the uncertainties of the effects associated with binding-site variation.

  20. Overcoming computational uncertainties to reveal chemical sensitivity in single molecule conduction calculations.

    PubMed

    Solomon, Gemma C; Reimers, Jeffrey R; Hush, Noel S

    2005-06-08

    In the calculation of conduction through single molecules, approximations about the geometry and electronic structure of the system are usually made in order to simplify the problem. Previously [G. C. Solomon, J. R. Reimers, and N. S. Hush, J. Chem. Phys. 121, 6615 (2004)], we have shown that, in calculations employing cluster models for the electrodes, proper treatment of the open-shell nature of the clusters is the most important computational feature required to make the results sensitive to variations in the structural and chemical features of the system. Here, we expand this and establish a general hierarchy of requirements involving treatment of geometrical approximations. These approximations are categorized into two classes: those associated with finite-dimensional methods for representing the semi-infinite electrodes, and those associated with the chemisorption topology. We show that ca. 100 unique atoms are required in order to properly characterize each electrode: using fewer atoms leads to nonsystematic variations in conductivity that can overwhelm the subtler changes. The choice of binding site is shown to be the next most important feature, while some effects that are difficult to control experimentally concerning the orientations at each binding site are actually shown to be insignificant. Verification of this result provides a general test for the precision of computational procedures for molecular conductivity. Predictions concerning the dependence of conduction on substituent and other effects on the central molecule are found to be meaningful only when they exceed the uncertainties of the effects associated with binding-site variation.

  1. Anxiety sensitivity and intolerance of uncertainty as potential risk factors for cyberchondria.

    PubMed

    Norr, Aaron M; Albanese, Brian J; Oglesby, Mary E; Allan, Nicholas P; Schmidt, Norman B

    2015-03-15

    Online medical information seeking has become an increasingly common behavior. Despite the benefits of easily accessible medical information on the Internet, researchers have identified a vicious cycle of increased physical health concerns and online medical information seeking known as "cyberchondria". Despite proposed theoretical models of cyberchondria, there is a dearth of research investigating risk factors for the development of cyberchondria. Two potential risk factors are anxiety sensitivity (AS) and intolerance of uncertainty (IU). The current study investigated the relationships among AS, IU, and cyberchondria in a large community sample. Participants (N=526) completed self-report questionnaires via online crowdsourcing. Structural equation models utilizing latent variables revealed a significant unique positive relationship between AS, as well as the IU Inhibitory lower-order factor, and cyberchondria, controlling for the effects of health anxiety. Additionally, results revealed a significant unique relationship between the IU Inhibitory factor and mistrust of medical professionals, a proposed cyberchondria-relevant construct. The cross-sectional data in the current study do not offer a true test of AS and IU as risk factors. However, establishing these unique relationships is an important step forward in the literature. The results of the current study suggest the potential importance of both AS and IU in the development of cyberchondria. Future research is needed to establish the temporal precedence of elevated AS and/or IU to determine if they are true risk factors or simply correlates of cyberchondria.

  2. Parameter optimization, sensitivity, and uncertainty analysis of an ecosystem model at a forest flux tower site in the United States

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende

    2014-01-01

    Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In the biogeochemical field, numerical models have been widely used for investigating carbon dynamics under global changes from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY that considers carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct the parameter identifiability, optimization, sensitivity, and uncertainty analysis of EDCM using our developed EDCM-Auto, which incorporated the comprehensive R package Flexible Modeling Framework (FME) and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the plant production-related parameters (e.g., PPDF1 and PRDX) are most sensitive to the model cost function. Both SCE and FME are comparable and performed well in deriving the optimal parameter set with satisfactory simulations of target variables. Global sensitivity and uncertainty analysis indicate that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on variables and seasons. This study also demonstrates that using cutting-edge R functions such as FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.
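
    A hedged stand-in for the calibration step: SciPy's differential evolution is used here in place of SCE, and a toy sinusoidal flux model in place of EDCM, purely to illustrate least-squares parameter recovery against observations.

        import numpy as np
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(5)

        # Synthetic "tower" observations from a known model plus noise.
        t = np.linspace(0.0, 1.0, 50)
        obs = 2.0 * np.sin(2 * np.pi * t) + 0.5 + rng.normal(0.0, 0.1, t.size)

        def cost(theta):
            amp, offset = theta
            return np.sum((amp * np.sin(2 * np.pi * t) + offset - obs) ** 2)

        result = differential_evolution(cost, bounds=[(0.0, 5.0), (-1.0, 1.0)], seed=1)
        print(result.x)   # close to the true (2.0, 0.5)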

  3. Integrating model behavior, optimization, and sensitivity/uncertainty analysis: overview and application of the MOUSE software toolbox

    USDA-ARS?s Scientific Manuscript database

    This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...

  4. Sensitivity and uncertainty analysis of atmospheric ozone photochemistry models. Final report, September 30, 1993-December 31, 1998

    SciTech Connect

    Smith, G.P.

    1999-03-01

    The author has examined the kinetic reliability of ozone model predictions by computing direct first-order sensitivities of model species concentrations to input parameters: S_ij = [dC_i/C_i]/[dk_j/k_j], where C_i is the abundance of species i (e.g., ozone) and k_j is the rate constant of step j (reaction, photolysis, or transport), for localized boxes from the LLNL 2-D diurnally averaged atmospheric model. An ozone sensitivity survey of boxes at altitudes of 10-55 km, 2-62 N latitude, for spring, equinox, and winter is presented. Ozone sensitivities are used to evaluate the response of model predictions of ozone to input rate coefficient changes, to propagate laboratory rate uncertainties through the model, and to select processes and regions suited to more precise measurements. By including the local chemical feedbacks, the sensitivities quantify the important roles of oxygen and ozone photolysis, transport from the tropics, and the relation of key catalytic steps and cycles in regulating stratospheric ozone as a function of altitude, latitude, and season. A sensitivity-uncertainty analysis uses the sensitivity coefficients to propagate laboratory error bars in input photochemical parameters and estimate the net model uncertainties of predicted ozone in isolated boxes; it was applied to potential problems in the upper stratospheric ozone budget, and also highlights superior regions for model validation.
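
    The sensitivity definition can be estimated numerically by central differences of relative perturbations; a short sketch with an invented one-box response in place of the 2-D model:

        import numpy as np

        def ozone(k):                       # stand-in response C(k) for one box
            return 5.0 * k[0] ** 0.5 / k[1]

        k = np.array([1.0e-3, 2.0e-2])      # nominal rate constants (illustrative)
        h = 1.0e-2                          # 1% relative perturbation
        S = np.empty_like(k)
        for j in range(k.size):
            kp, km = k.copy(), k.copy()
            kp[j] *= 1.0 + h
            km[j] *= 1.0 - h
            S[j] = (ozone(kp) - ozone(km)) / (2.0 * h * ozone(k))
        print(S)   # ~[0.5, -1.0], the analytic log-log slopes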

  5. WE-D-BRE-07: Variance-Based Sensitivity Analysis to Quantify the Impact of Biological Uncertainties in Particle Therapy

    SciTech Connect

    Kamp, F.; Brueningk, S.C.; Wilkens, J.J.

    2014-06-15

    Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation and on the dose per fraction. The needed biological parameters as well as their dependency on ion species and ion energy typically are subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment
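
    The EQD2 dependence on an uncertain alpha/beta ratio can be made concrete with the standard linear-quadratic conversion EQD2 = D * (d + a/b) / (2 + a/b); the doses and the alpha/beta distribution below are assumed for illustration.

        import numpy as np

        rng = np.random.default_rng(9)

        D, d = 60.0, 3.0                   # total dose and dose per fraction, Gy
        ab = rng.normal(3.0, 0.9, 10000)   # alpha/beta with ~30% relative sd (assumed)
        ab = ab[ab > 0.5]                  # discard unphysical samples

        eqd2 = D * (d + ab) / (2.0 + ab)
        print(f"EQD2 5-95%: [{np.percentile(eqd2, 5):.1f}, "
              f"{np.percentile(eqd2, 95):.1f}] Gy")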

  6. PEBBED Uncertainty and Sensitivity Analysis of the CRP-5 PBMR DLOFC Transient Benchmark with the SUSA Code

    SciTech Connect

    Gerhard Strydom

    2011-01-01

    The need for a defendable and systematic uncertainty and sensitivity approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen- und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This report summarizes the results of the initial investigations performed with SUSA, utilizing a typical High Temperature Reactor benchmark (the IAEA CRP-5 PBMR 400MW Exercise 2) and the PEBBED-THERMIX suite of codes. The following steps were performed as part of the uncertainty and sensitivity analysis: 1. Eight PEBBED-THERMIX model input parameters were selected for inclusion in the uncertainty study: the total reactor power, inlet gas temperature, decay heat, and the specific heat capacity and thermal conductivity of the fuel, pebble bed and reflector graphite. 2. The input parameter variations and probability density functions were specified, and a total of 800 PEBBED-THERMIX model calculations were performed, divided into 4 sets of 100 and 2 sets of 200 Steady State and Depressurized Loss of Forced Cooling (DLOFC) transient calculations each. 3. The steady state and DLOFC maximum fuel temperature, as well as the daily pebble fuel load rate data, were supplied to SUSA as model output parameters of interest. The 6 data sets were statistically analyzed to determine the 5% and 95% percentile values for each of the 3 output parameters with a 95% confidence level, and typical statistical indicators were also generated (e.g. Kendall, Pearson and Spearman coefficients). 4. A SUSA sensitivity study was performed to obtain correlation data between the input and output parameters, and to identify the
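
    The 5%/95% percentile statements with a 95% confidence level from run sets of this size follow Wilks-style order statistics; a sketch with invented fuel-temperature outputs:

        import numpy as np

        # With N runs, the largest output bounds the 95th percentile with
        # confidence 1 - 0.95**N; N = 59 already gives a one-sided 95%/95% bound.
        N = 100
        print(f"confidence for N={N}: {1.0 - 0.95 ** N:.4f}")

        rng = np.random.default_rng(2)
        fuel_temp = rng.normal(1600.0, 40.0, N)   # stand-in DLOFC max fuel temps, deg C
        print(f"one-sided 95/95 bound: {fuel_temp.max():.0f} deg C")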

  7. Uncertainty, sensitivity analysis and the role of data based mechanistic modeling in hydrology

    NASA Astrophysics Data System (ADS)

    Ratto, M.; Young, P. C.; Romanowicz, R.; Pappenberger, F.; Saltelli, A.; Pagano, A.

    2006-09-01

    In this paper, we discuss the problem of calibration and uncertainty estimation for hydrologic systems from two points of view: a bottom-up, reductionist approach; and a top-down, data-based mechanistic (DBM) approach. The two approaches are applied to the modelling of the River Hodder catchment in North-West England. The bottom-up approach is developed using the TOPMODEL, whose structure is evaluated by global sensitivity analysis (GSA) in order to specify the most sensitive and important parameters; and the subsequent exercises in calibration and validation are carried out in the light of this sensitivity analysis. GSA helps to improve the calibration of hydrological models, making their properties more transparent and highlighting mis-specification problems. The DBM model provides a quick and efficient analysis of the rainfall-flow data, revealing important characteristics of the catchment-scale response, such as the nature of the effective rainfall nonlinearity and the partitioning of the effective rainfall into different flow pathways. TOPMODEL calibration takes more time and it explains the flow data a little less well than the DBM model. The main differences in the modelling results are in the nature of the models and the flow decomposition they suggest. The "quick" (63%) and "slow" (37%) components of the decomposed flow identified in the DBM model show a clear partitioning of the flow, with the quick component apparently accounting for the effects of surface and near surface processes; and the slow component arising from the displacement of groundwater into the river channel (base flow). On the other hand, the two output flow components in TOPMODEL have a different physical interpretation, with a single flow component (95%) accounting for both slow (subsurface) and fast (surface) dynamics, while the other, very small component (5%) is interpreted as an instantaneous surface runoff generated by rainfall falling on areas of saturated soil. The results of
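
    The quick/slow decomposition identified by the DBM analysis can be pictured as two linear reservoirs acting in parallel on the effective rainfall. The sketch below is illustrative only: the rainfall series, residence times, and gains are invented, with the gains chosen to mirror the 63%/37% split reported above; the actual DBM transfer functions were identified from the River Hodder data.

    ```python
    import numpy as np

    # Hypothetical effective-rainfall series (mm per time step)
    rng = np.random.default_rng(1)
    u = np.maximum(rng.normal(2.0, 3.0, 500), 0.0)

    def linear_reservoir(u, residence_time, gain):
        """Discrete first-order transfer function x_t = a*x_{t-1} + b*u_t."""
        a = np.exp(-1.0 / residence_time)
        b = gain * (1.0 - a)          # steady-state gain equals 'gain'
        x = np.zeros_like(u)
        for t in range(1, len(u)):
            x[t] = a * x[t - 1] + b * u[t]
        return x

    # Parallel pathways roughly matching the identified 63%/37% partition
    quick = linear_reservoir(u, residence_time=2.0,  gain=0.63)
    slow  = linear_reservoir(u, residence_time=30.0, gain=0.37)
    flow = quick + slow
    print("quick fraction of total flow volume: %.2f" % (quick.sum() / flow.sum()))
    ```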

  8. Sensitivity of Surface Flux Simulations to Hydrologic Parameters Based on an Uncertainty Quantification Framework Applied to the Community Land Model

    SciTech Connect

    Hou, Zhangshuan; Huang, Maoyi; Leung, Lai-Yung R.; Lin, Guang; Ricciuto, Daniel M.

    2012-08-10

    Uncertainties in hydrologic parameters could have significant impacts on the simulated water and energy fluxes and land surface states, which will in turn affect atmospheric processes and the carbon cycle. Quantifying such uncertainties is an important step toward better understanding and quantification of the uncertainty of integrated earth system models. In this paper, we introduce an uncertainty quantification (UQ) framework to analyze the sensitivity of simulated surface fluxes to selected hydrologic parameters in the Community Land Model (CLM4) through forward modeling. Thirteen flux tower footprints spanning a wide range of climate and site conditions were selected to perform sensitivity analyses by perturbing the identified parameters. In the UQ framework, prior information about the parameters was used to quantify the input uncertainty using the Minimum-Relative-Entropy approach. A quasi-Monte Carlo approach was applied to generate samples of parameters on the basis of the prior pdfs. Simulations corresponding to sampled parameter sets were used to generate response curves and response surfaces, and statistical tests were used to rank the significance of the parameters for output responses including latent (LH) and sensible heat (SH) fluxes. Overall, the CLM4-simulated LH and SH show the largest sensitivity to subsurface runoff generation parameters. However, study sites with deep-rooted vegetation are also affected by surface runoff parameters, while sites with shallow root zones are also sensitive to the vadose zone soil water parameters. Generally, sites with finer soil texture and shallower rooting systems tend to have larger sensitivity of outputs to the parameters. Our results suggest the necessity of, and possible ways for, parameter inversion/calibration using available measurements of latent/sensible heat fluxes to obtain the optimal parameter set for CLM4. This study also provided guidance on reduction of parameter set dimensionality and parameter
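
    A hedged sketch of the quasi-Monte Carlo sampling step, assuming flat ranges for three hypothetical hydrologic parameters; the study's actual priors came from the Minimum-Relative-Entropy treatment, and the parameter names here are placeholders:

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Illustrative uniform ranges for three hydrologic parameters
    names = ["fdrai", "fover", "b_exp"]
    low   = [0.1, 0.1, 2.0]
    high  = [5.0, 5.0, 12.0]

    sampler = qmc.Sobol(d=3, seed=7)
    unit = sampler.random_base2(m=8)        # 2^8 = 256 low-discrepancy points
    sets = qmc.scale(unit, low, high)       # parameter sets to feed model runs
    print(sets.shape, sets.min(axis=0), sets.max(axis=0))
    ```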

  9. Economic Value of Narrowing the Uncertainty in Climate Sensitivity: Decadal Change in Shortwave Cloud Radiative Forcing and Low Cloud Feedback

    NASA Astrophysics Data System (ADS)

    Wielicki, B. A.; Cooke, R. M.; Golub, A. A.; Mlynczak, M. G.; Young, D. F.; Baize, R. R.

    2016-12-01

    Several previous studies have been published on the economic value of narrowing the uncertainty in climate sensitivity (Cooke et al. 2015, Cooke et al. 2016, Hope 2015). All three of these studies estimated roughly 10 trillion U.S. dollars for the Net Present Value and Real Option Value at a discount rate of 3%. This discount rate is the nominal discount rate used in the U.S. Social Cost of Carbon Memo (2010). The Cooke et al. studies approached this problem by examining advances in the accuracy of global temperature measurements, while the Hope 2015 study did not address the type of observations required. While temperature change is related to climate sensitivity, uncertainties of up to a factor of 3 in current anthropogenic radiative forcing (IPCC, 2013) would need to be resolved before advanced decadal temperature-change observations could help narrow climate sensitivity. The present study takes a new approach by extending the Cooke et al. 2015, 2016 papers, replacing observations of temperature change with observations of decadal change in the effects of changing clouds on the Earth's radiative energy balance, a measurement known as cloud radiative forcing, or cloud radiative effect. Decadal change in this observation is directly related to the largest uncertainty in climate sensitivity: cloud feedback from the changing amount of low clouds, primarily low clouds over the world's oceans. As a result, decadal changes in shortwave cloud radiative forcing are more directly related to cloud feedback uncertainty, which is the dominant uncertainty in climate sensitivity. This paper will show results for the new approach, and allow an examination of the sensitivity of economic value results to the different observations used as a constraint on uncertainty in climate sensitivity. The analysis suggests roughly a doubling of economic value to 20 trillion Net Present Value or Real Option Value at a 3% discount rate. The higher economic value results from two changes: a

  10. Intensity modulated radiation therapy for oropharyngeal cancer: the sensitivity of plan objectives and constraints to set-up uncertainty

    NASA Astrophysics Data System (ADS)

    Ploquin, Nicolas; Song, William; Lau, Harold; Dunscombe, Peter

    2005-08-01

    The goal of this study was to assess the impact of set-up uncertainty on compliance with the objectives and constraints of an intensity modulated radiation therapy protocol for early stage cancer of the oropharynx. As the convolution approach to the quantitative study of set-up uncertainties cannot accommodate either surface contours or internal inhomogeneities, both of which are highly relevant to sites in the head and neck, we have employed the more resource-intensive direct simulation method. The impact of both systematic (variable from 0 to 6 mm) and random (fixed at 2 mm) set-up uncertainties on compliance with the criteria of the RTOG H-0022 protocol has been examined for eight geometrically complex structures: CTV66 (gross tumour volume and palpable lymph nodes suspicious for metastases), CTV54 (lymph node groups or surgical neck levels at risk of subclinical metastases), glottic larynx, spinal cord, brainstem, mandible and left and right parotids. In a probability-based approach, both dose-volume histograms and equivalent uniform doses were used to describe the dose distributions achieved by plans for two patients, in the presence of set-up uncertainty. The equivalent uniform dose is defined to be that dose which, when delivered uniformly to the organ of interest, will lead to the same response as the non-uniform dose under consideration. For systematic set-up uncertainties greater than 2 mm and 5 mm respectively, coverage of the CTV66 and CTV54 could be significantly compromised. Directional sensitivity was observed in both cases. Most organs at risk (except the glottic larynx, which did not comply under static conditions) continued to meet the dose constraints up to 4 mm systematic uncertainty for both plans. The exception was the contralateral parotid gland, which this protocol is specifically designed to protect. Sensitivity to systematic set-up uncertainty of 2 mm was observed for this organ at risk in both clinical plans.
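
    The verbal definition of equivalent uniform dose above is often operationalized with Niemierko's generalized EUD; one common phenomenological form (not necessarily the exact formulation used in this study) is, with v_i the fractional volume receiving dose D_i and a an organ-specific parameter (a = 1 reduces to the mean dose, a → ∞ approaches the maximum dose):

    ```latex
    \mathrm{gEUD} = \left( \sum_i v_i \, D_i^{\,a} \right)^{1/a}
    ```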

  11. Sensitivity and uncertainty analysis for a field-scale P loss model

    USDA-ARS?s Scientific Manuscript database

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that there are inherent uncertainties with model predictions, limited studies have addressed model prediction uncertainty. In this study we assess the effect of model input error on predict...

  12. Sensitivity and uncertainty analysis for the annual phosphorus loss estimator model

    USDA-ARS?s Scientific Manuscript database

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that there are inherent uncertainties with model predictions, limited studies have addressed model prediction uncertainty. In this study we assess the effect of model input error on predict...

  13. Two-dimensional cross-section sensitivity and uncertainty analysis of the LBM (Lithium Blanket Module) experiments at LOTUS

    SciTech Connect

    Davidson, J.W.; Dudziak, D.J.; Pelloni, S.; Stepanek, J.

    1988-01-01

    In a recent joint Los Alamos/PSI effort, a sensitivity and nuclear data uncertainty path for the modular code system AARE (Advanced Analysis for Reactor Engineering) was developed. This path includes the cross-section code TRAMIX, the one-dimensional finite difference S_N-transport code ONEDANT, the two-dimensional finite element S_N-transport code TRISM, and the one- and two-dimensional sensitivity and nuclear data uncertainty code SENSIBL. Within the framework of the present work, a complete set of forward and adjoint two-dimensional TRISM calculations was performed for the bare, as well as the Pb- and Be-preceded, LBM using MATXS8 libraries. A two-dimensional sensitivity and uncertainty analysis was then performed for all cases. The goal of this analysis was the determination of the uncertainties in the calculated tritium production per source neutron from lithium along the central Li₂O rod in the LBM. Considered were the contributions from ¹H, ⁶Li, ⁷Li, ⁹Be, natC, ¹⁴N, ¹⁶O, ²³Na, ²⁷Al, natSi, natCr, natFe, natNi, and natPb. 22 refs., 1 fig., 3 tabs.

  14. OECD/NEA expert group on uncertainty analysis for criticality safety assessment: Results of benchmark on sensitivity calculation (phase III)

    SciTech Connect

    Ivanova, T.; Laville, C.; Dyrda, J.; Mennerdahl, D.; Golovko, Y.; Raskach, K.; Tsiboulia, A.; Lee, G. S.; Woo, S. W.; Bidaud, A.; Sabouri, P.; Bledsoe, K.; Rearden, B.; Gulliford, J.; Michel-Sendis, F.

    2012-07-01

    The sensitivities of the k_eff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)
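
    For reference, the sensitivity coefficients benchmarked here are conventionally defined as the relative change in k_eff per relative change in a cross section σ:

    ```latex
    S_{\sigma} \;=\; \frac{\partial k_{\mathrm{eff}} / k_{\mathrm{eff}}}{\partial \sigma / \sigma}
    ```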

  15. Sensitivity of the remote sensing reflectance of ocean and coastal waters to uncertainties in aerosol characteristics

    NASA Astrophysics Data System (ADS)

    Seidel, F. C.; Garay, M. J.; Zhai, P.; Kalashnikova, O. V.; Diner, D. J.

    2015-12-01

    Remote sensing is a powerful tool for optical oceanography and limnology to monitor and study ocean, coastal, and inland water ecosystems. However, the highly spatially and temporally variable nature of water conditions and constituents, as well as atmospheric conditions, are challenging factors, especially for spaceborne observations. Here, we study the quantitative impact of uncertainties in the spectral aerosol optical and microphysical properties, namely aerosol optical depth (AOD), spectral absorption, and particle size, on the remote sensing reflectance (Rrs) of simulated typical open ocean and coastal waters. Rrs is related to the inherent optical properties of the water column and is a fundamental parameter in ocean optics retrievals. We use the successive order of scattering (SOS) method to perform radiative transfer calculations of the coupled system of atmosphere and water. The optics of typical open ocean and coastal waters are simulated with bio-optical models. We derive sensitivities by comparing spectral SOS calculations of Rrs with a reference aerosol model against similar calculations performed using a different aerosol model. One particular focus of this study is the impact on Rrs of the spectral absorption of dust and brown carbon, or similar particles with greater absorption at short wavelengths. The results are presented in terms of the minimum expected error in Rrs due to the choice of an incorrect aerosol model during the atmospheric correction of ocean color remote sensing data from space. This study is independent of errors related to observational data or retrieval techniques. The results are relevant for quantifying requirements of aerosol retrievals to derive accurate Rrs from spaceborne observations, such as NASA's future Pre-Aerosol, Clouds, and ocean Ecosystem (PACE) mission.

  16. Weichselian permafrost depth in the Netherlands: a comprehensive uncertainty and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Govaerts, Joan; Beerten, Koen; ten Veen, Johan

    2016-11-01

    The Rupelian clay is currently the subject of a feasibility study with respect to the storage of radioactive waste in the Netherlands (OPERA project). Many features need to be considered in the assessment of the long-term evolution of the natural environment surrounding a geological waste disposal facility. One of these is permafrost development, as it may have an impact on various components of the disposal system, including the natural environment (hydrogeology), the natural barrier (clay) and the engineered barrier. Determining how deep permafrost might develop in the future is desirable in order to properly address the possible impact on the various components. It is expected that periglacial conditions will reappear at some point during the next several hundred thousand years, a typical time frame considered in geological waste disposal feasibility studies. In this study, the Weichselian glaciation is used as an analogue for future permafrost development. Permafrost depth modelling using a best-estimate temperature curve of the Weichselian indicates that permafrost would reach depths between 155 and 195 m. Without imposing a climatic gradient over the country, the deepest permafrost is expected in the south due to the lower geothermal heat flux and higher average sand content of the post-Rupelian overburden. Accounting for various sources of uncertainty, such as the type and impact of vegetation, snow cover, surface temperature gradients across the country, possible errors in palaeoclimate reconstructions, porosity, lithology and geothermal heat flux, stochastic calculations indicate that permafrost depth during the coldest stages of a glacial cycle such as the Weichselian, for any location in the Netherlands, would be 130-210 m at the 2σ level. In any case, permafrost would not reach depths greater than 270 m. The most sensitive parameters in permafrost development are the mean annual air temperatures and porosity, while the geothermal heat

  17. Effective groundwater model calibration: With analysis of data, sensitivities, predictions, and uncertainty

    USGS Publications Warehouse

    Hill, Mary C.; Tiedeman, Claire R.

    2007-01-01

    Methods and guidelines for developing and using mathematical models. Turn to Effective Groundwater Model Calibration for a set of methods and guidelines that can help produce more accurate and transparent mathematical models. The models can represent groundwater flow and transport and other natural and engineered systems. Use this book and its extensive exercises to learn methods to fully exploit the data on hand, maximize the model's potential, and troubleshoot any problems that arise. Use the methods to perform: sensitivity analysis to evaluate the information content of data; data assessment to identify (a) existing measurements that dominate model development and predictions and (b) potential measurements likely to improve the reliability of predictions; calibration to develop models that are consistent with the data in an optimal manner; and uncertainty evaluation to quantify and communicate errors in simulated results that are often used to make important societal decisions. Most of the methods are based on linear and nonlinear regression theory. Fourteen guidelines show the reader how to use the methods advantageously in practical situations. Exercises focus on a groundwater flow system and management problem, enabling readers to apply all the methods presented in the text. The exercises can be completed using the material provided in the book, or as hands-on computer exercises using instructions and files available on the text's accompanying Web site. Throughout the book, the authors stress the need for valid statistical concepts and easily understood presentation methods required to achieve well-tested, transparent models. Most of the examples and all of the exercises focus on simulating groundwater systems; other examples come from surface-water hydrology and geophysics. The methods and guidelines in the text are broadly applicable and can be used by students, researchers, and engineers to simulate many kinds of systems.

  18. Understanding flood risk sensitivity and uncertainty in a subcatchment of the Thames River (United Kingdom)

    NASA Astrophysics Data System (ADS)

    Theofanidi, Sofia; Cloke, Hannah Louise; Clark, Joanna

    2017-04-01

    of the flood events will follow, using simple hydrological boundary conditions. Sensitivity testing of the model will permit assessment of which parameters have the potential to significantly alter the peak discharge during the flood, flood water levels, and flood inundation extent. Assessing the model's sensitivity and uncertainty contributes to improved flood-risk knowledge. The area of study is a subcatchment of the River Thames in the southern part of the United Kingdom. The Thames, with its tributaries, supports a wide range of social, economic and recreational activities. In addition, the historical and environmental importance of the Thames valley highlights the need for sustainable flood mitigation planning, which includes a better understanding of flood mechanisms and flood risks.

  19. Uncertainty and Sensitivity Analysis Results Obtained in the 1996 Performance Assessment for the Waste Isolation Pilot Plant

    SciTech Connect

    Bean, J.E.; Berglund, J.W.; Davis, F.J.; Economy, K.; Garner, J.W.; Helton, J.C.; Johnson, J.D.; MacKinnon, R.J.; Miller, J.; O'Brien, D.G.; Ramsey, J.L.; Schreiber, J.D.; Shinta, A.; Smith, L.N.; Stockman, C.; Stoelzel, D.M.; Vaughn, P.

    1998-09-01

    The Waste Isolation Pilot Plant (WIPP) is located in southeastern New Mexico and is being developed by the U.S. Department of Energy (DOE) for the geologic (deep underground) disposal of transuranic (TRU) waste. A detailed performance assessment (PA) for the WIPP was carried out in 1996 and supports an application by the DOE to the U.S. Environmental Protection Agency (EPA) for the certification of the WIPP for the disposal of TRU waste. The 1996 WIPP PA uses a computational structure that maintains a separation between stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty, with stochastic uncertainty arising from the many possible disruptions that could occur over the 10,000 yr regulatory period that applies to the WIPP and subjective uncertainty arising from the imprecision with which many of the quantities required in the PA are known. Important parts of this structure are (1) the use of Latin hypercube sampling to incorporate the effects of subjective uncertainty, (2) the use of Monte Carlo (i.e., random) sampling to incorporate the effects of stochastic uncertainty, and (3) the efficient use of the necessarily limited number of mechanistic calculations that can be performed to support the analysis. The use of Latin hypercube sampling generates a mapping from imprecisely known analysis inputs to analysis outcomes of interest that provides both a display of the uncertainty in analysis outcomes (i.e., uncertainty analysis) and a basis for investigating the effects of individual inputs on these outcomes (i.e., sensitivity analysis). The sensitivity analysis procedures used in the PA include examination of scatterplots, stepwise regression analysis, and partial correlation analysis. Uncertainty and sensitivity analysis results obtained as part of the 1996 WIPP PA are presented and discussed. Specific topics considered include two phase flow in the vicinity of the repository, radionuclide release from the repository, fluid flow and radionuclide
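
    A compact sketch of the Latin hypercube step described in item (1), with two invented epistemic inputs and a placeholder response standing in for the PA's mechanistic calculations (names, ranges, and the rank-correlation readout are illustrative, not the WIPP parameter set):

    ```python
    import numpy as np
    from scipy.stats import qmc, spearmanr

    # Latin hypercube sample over two imprecisely known (epistemic) inputs
    sampler = qmc.LatinHypercube(d=2, seed=3)
    samples = qmc.scale(sampler.random(n=100), [1e-21, 0.05], [1e-17, 0.30])
    perm, poro = samples[:, 0], samples[:, 1]   # permeability (m^2), porosity (-)

    def release(k, phi):     # placeholder for a mechanistic PA calculation
        return 1e12 * k / phi

    y = release(perm, poro)

    # Rank correlations: a cheap stand-in for the PA's regression-based steps
    print("perm:", spearmanr(perm, y)[0], "poro:", spearmanr(poro, y)[0])
    ```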

  20. AN OVERVIEW OF THE UNCERTAINTY ANALYSIS, SENSITIVITY ANALYSIS, AND PARAMETER ESTIMATION (UA/SA/PE) API AND HOW TO IMPLEMENT IT

    EPA Science Inventory

    The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and
    Parameter Estimation (UA/SA/PE API) (also known as Calibration, Optimization and Sensitivity and Uncertainty (CUSO)) was developed in a joint effort between several members of both ...

  2. Uncertainty Analysis of Ozone Formation and Response to Emission Controls Using Higher-Order Sensitivities

    EPA Science Inventory

    Understanding ozone response to its precursor emissions is crucial for effective air quality management practices. This nonlinear response is usually simulated using chemical transport models, and the modeling results are affected by uncertainties in emissions inputs. In this stu...

  4. Kiwi: An Evaluated Library of Uncertainties in Nuclear Data and Package for Nuclear Sensitivity Studies

    SciTech Connect

    Pruet, J

    2007-06-23

    This report describes Kiwi, a program developed at Livermore to enable mature studies of the relation between imperfectly known nuclear physics and uncertainties in simulations of complicated systems. Kiwi includes a library of evaluated nuclear data uncertainties, tools for modifying data according to these uncertainties, and a simple interface for generating processed data used by transport codes. Kiwi also provides access to calculations of k eigenvalues for critical assemblies. This allows the user to check the implications of data modifications against integral experiments for multiplying systems. Kiwi is written in Python. The uncertainty library has the same format and directory structure as the native ENDL used at Livermore. Calculations for critical assemblies rely on deterministic and Monte Carlo codes developed by B Division.

  5. SENSITIVITY OF THE NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION MULTILAYER MODEL TO INSTRUMENT ERROR AND PARAMETERIZATION UNCERTAINTY

    EPA Science Inventory

    The response of the National Oceanic and Atmospheric Administration multilayer inferential dry deposition velocity model (NOAA-MLM) to error in meteorological inputs and model parameterization is reported. Monte Carlo simulations were performed to assess the uncertainty in NOA...

  6. Valley Floor

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-529, 30 October 2003

    This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows the floor of an ancient valley located near the Pyrrhae Chaos region of Mars. This valley might have been carved by liquid water, but today no evidence remains that a fluid ever flowed through it. Long after the valley formed, its floor was covered by large, windblown, ripple-like dunes. This picture is located near 13.0°S, 31.2°W. The image is illuminated by sunlight from the upper left and covers an area 3 km (1.9 mi) wide.

  7. Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model

    NASA Astrophysics Data System (ADS)

    Wang, Shitao; Iskandarani, Mohamed; Srinivasan, Ashwanth; Thacker, W. Carlisle; Winokur, Justin; Knio, Omar M.

    2016-05-01

    Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel height uncertainties are then mainly due to uncertainties in the 95th percentile of the droplet size and in the entrainment parameters.
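
    A minimal, one-dimensional illustration of a regression-built polynomial chaos surrogate, assuming a standard-normal germ and a toy model in place of the integral plume model (the study used multi-dimensional expansions over six inputs):

    ```python
    import math
    import numpy as np
    from numpy.polynomial import hermite_e as He

    rng = np.random.default_rng(0)

    def f(xi):                           # placeholder for one model output
        return np.exp(0.3 * xi) + 0.1 * xi**2

    deg = 4                              # order of the chaos expansion
    xi = rng.standard_normal(2000)       # samples of the standard-normal germ
    V = np.column_stack([He.hermeval(xi, np.eye(deg + 1)[k])
                         for k in range(deg + 1)])
    coef, *_ = np.linalg.lstsq(V, f(xi), rcond=None)   # regression-based PCE

    # Probabilists' Hermite polynomials satisfy E[He_j He_k] = k! delta_jk,
    # so the surrogate's mean is c_0 and its variance is sum_{k>=1} c_k^2 k!
    fact = np.array([math.factorial(k) for k in range(deg + 1)])
    mean = coef[0]
    var = np.sum(coef[1:] ** 2 * fact[1:])
    print("PCE mean %.3f, variance %.4f" % (mean, var))
    ```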

  8. Thoughts on Sensitivity Analysis and Uncertainty Propagation Methods with Respect to the Prompt Fission Neutron Spectrum Impact on Critical Assemblies

    SciTech Connect

    Rising, M.E.

    2015-01-15

    The prompt fission neutron spectrum (PFNS) uncertainties in the n+²³⁹Pu fission reaction are used to study the impact on several fast critical assemblies modeled in the MCNP6.1 code. The newly developed sensitivity capability in MCNP6.1 is used to compute the k_eff sensitivity coefficients with respect to the PFNS. In comparison, the covariance matrix given in the ENDF/B-VII.1 library is decomposed and randomly sampled realizations of the PFNS are propagated through the criticality calculation, preserving the PFNS covariance matrix. The information gathered from both approaches, including the overall k_eff uncertainty, is statistically analyzed. Overall, the forward and backward approaches agree as expected. The results from the new method appear to be limited by the process used to evaluate the PFNS; this is not necessarily a flaw of the method itself. Final thoughts and directions for future work are suggested.
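
    The covariance-preserving sampling described above can be sketched with a Cholesky factorization; the four-group spectrum and covariance below are invented stand-ins for the ENDF/B-VII.1 n+²³⁹Pu PFNS evaluation:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    chi = np.array([0.15, 0.45, 0.30, 0.10])           # nominal group spectrum
    corr = np.array([[1.0, 0.6, 0.2, 0.0],
                     [0.6, 1.0, 0.5, 0.1],
                     [0.2, 0.5, 1.0, 0.4],
                     [0.0, 0.1, 0.4, 1.0]])
    sig = 0.05 * chi                                   # 5% relative uncertainty
    cov = np.outer(sig, sig) * corr

    L = np.linalg.cholesky(cov)                        # cov = L @ L.T
    raw = chi + rng.standard_normal((100_000, 4)) @ L.T
    print("max |cov_hat - cov|:", np.abs(np.cov(raw.T) - cov).max())

    # Each realization is renormalized to remain a unit-normalized spectrum
    # before being propagated through the criticality calculation.
    spectra = raw / raw.sum(axis=1, keepdims=True)
    ```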

  9. Using stochastic sampling of parametric uncertainties to quantify relationships between CAM3.1 bias and climate sensitivity

    NASA Astrophysics Data System (ADS)

    Jackson, C. S.; Tobis, M.

    2011-12-01

    It is an untested assumption in climate model evaluation that climate model biases affect model credibility. Models with smaller biases are often regarded as being more plausible than models with larger biases. However, not all biases affect predictions: only those biases that are involved with feedback mechanisms can lead to scatter in predictions of change. To date, no metric of model skill has been defined that can predict a model's sensitivity to greenhouse gas forcing. Being able to do so will be an important step toward using observations to define a model's credibility. We shall present results of a calculation in which we attempt to isolate the contribution of errors in particular regions and fields to uncertainties in CAM3.1 equilibrium sensitivity to a doubling of CO2 forcing. In this calculation, observations, Bayesian inference, and stochastic sampling are used to identify a large ensemble of CAM3.1 configurations that represent uncertainties in selecting 15 model parameters important to clouds, convection, and radiation. A slab ocean configuration of CAM3.1 is then used to estimate the effects of these parametric uncertainties on projections of global warming through its equilibrium response to 2 x CO2 forcing. We then correlate the scatter in the control climate at each grid point and field to the scatter in climate sensitivities. The presentation will focus on the analysis of these results.

  10. Sensitivity of Polar Stratospheric Ozone Loss to Uncertainties in Chemical Reaction Kinetics

    NASA Technical Reports Server (NTRS)

    Kawa, S. Randolph; Stolarksi, Richard S.; Douglass, Anne R.; Newman, Paul A.

    2008-01-01

    Several recent observational and laboratory studies of processes involved in polar stratospheric ozone loss have prompted a reexamination of aspects of our understanding of this key indicator of global change. To a large extent, our confidence in understanding and projecting changes in polar and global ozone is based on our ability to simulate these processes in numerical models of chemistry and transport. The fidelity of the models is assessed in comparison with a wide range of observations. These models depend on laboratory-measured kinetic reaction rates and photolysis cross sections to simulate molecular interactions. A typical stratospheric chemistry mechanism has on the order of 50-100 species undergoing over a hundred intermolecular reactions and several tens of photolysis reactions. The rates of all of these reactions are subject to uncertainty, some substantial. Given the complexity of the models, however, it is difficult to quantify uncertainties in many aspects of the system. In this study we use a simple box-model scenario for Antarctic ozone to estimate the uncertainty in loss attributable to known reaction kinetic uncertainties. Following the method of earlier work, rates and uncertainties from the latest laboratory evaluations are applied in random combinations. We determine the key reactions and rates contributing the largest potential errors, and compare the results to observations to evaluate which combinations are consistent with atmospheric data. Implications for our theoretical and practical understanding of polar ozone loss will be assessed.
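
    The "random combinations" procedure is often implemented by scaling each rate by its evaluated uncertainty factor raised to a standard-normal deviate; the sketch below uses invented rates and factors, not the laboratory evaluation's values:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Illustrative rate constants and uncertainty factors f (placeholders)
    rates = {"ClO+ClO+M": 2.0e-32, "ClO+BrO": 5.4e-12, "O3+hv": 1.0e-3}
    f = {"ClO+ClO+M": 1.5, "ClO+BrO": 1.25, "O3+hv": 1.2}

    def perturbed_rates():
        """One random combination: k -> k * f**u with u ~ N(0, 1)."""
        u = rng.standard_normal(len(rates))
        return {name: k * f[name] ** ui
                for (name, k), ui in zip(rates.items(), u)}

    # Each draw would feed one box-model integration of Antarctic ozone loss
    ensemble = [perturbed_rates() for _ in range(1000)]
    losses = [sum(r.values()) for r in ensemble]   # stand-in for modeled loss
    print("spread (p5, p95):", np.percentile(losses, [5, 95]))
    ```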

  11. Children's sensitivity to their own relative ignorance: handling of possibilities under epistemic and physical uncertainty.

    PubMed

    Robinson, Elizabeth J; Rowley, Martin G; Beck, Sarah R; Carroll, Dan J; Apperly, Ian A

    2006-01-01

    Children more frequently specified possibilities correctly when uncertainty resided in the physical world (physical uncertainty) than in their own perspective of ignorance (epistemic uncertainty). In Experiment 1 (N=61), 4- to 6-year-olds marked both doors from which a block might emerge when the outcome was undetermined, but a single door when they knew the block was hidden behind one door. In Experiments 2 (N=30; 5- to 6-year-olds) and 3 (N=80; 5- to 8-year-olds), children placed food in both possible locations when an imaginary pet was yet to occupy one, but in a single location when the pet was already hidden in one. The results have implications for interpretive theory of mind and "curse of knowledge."

  12. SU-E-T-292: Sensitivity of Fractionated Lung IMPT Treatments to Setup Uncertainties and Motion Effects

    SciTech Connect

    Dowdell, S; Grassberger, C; Paganetti, H

    2014-06-01

    Purpose: Evaluate the sensitivity of intensity-modulated proton therapy (IMPT) lung treatments to systematic and random setup uncertainties combined with motion effects. Methods: Treatment plans with single-field homogeneity restricted to ±20% (IMPT-20%) were compared to plans with no restriction (IMPT-full). 4D Monte Carlo simulations were performed for 10 lung patients using the patient CT geometry with either ±5 mm systematic or random setup uncertainties applied over a 35 × 2.5 Gy(RBE) fractionated treatment course. Intra-fraction, inter-field and inter-fraction motions were investigated. 50 fractionated treatments with systematic or random setup uncertainties applied to each fraction were generated for both IMPT delivery methods and three energy-dependent spot sizes (big spots, BS, σ = 18-9 mm; intermediate spots, IS, σ = 11-5 mm; small spots, SS, σ = 4-2 mm). These results were compared to a Monte Carlo recalculation of the original treatment plan, with results presented as the difference in EUD (ΔEUD), V95 (ΔV95) and target homogeneity (ΔD1-D99) between the 4D simulations and the Monte Carlo calculation on the planning CT. Results: The standard deviations in the ΔEUD were 1.95±0.47 (BS), 1.85±0.66 (IS) and 1.31±0.35 (SS) times higher in IMPT-full compared to IMPT-20% when ±5 mm systematic setup uncertainties were applied. The ΔV95 variations were also 1.53±0.26 (BS), 1.60±0.50 (IS) and 1.38±0.38 (SS) times higher for IMPT-full. For random setup uncertainties, the standard deviations of the ΔEUD from 50 simulated fractionated treatments were 1.94±0.90 (BS), 2.13±1.08 (IS) and 1.45±0.57 (SS) times higher in IMPT-full compared to IMPT-20%. For all spot sizes considered, the ΔD1-D99 coincided within the uncertainty limits for the two IMPT delivery methods, with the mean value always higher for IMPT-full. Statistical analysis showed significant differences between the IMPT-full and IMPT-20% dose distributions for the

  13. UNCERTAINTY AND SENSITIVITY ANALYSES FOR INTEGRATED HUMAN HEALTH AND ECOLOGICAL RISK ASSESSMENT OF HAZARDOUS WASTE DISPOSAL

    EPA Science Inventory

    While there is a high potential for exposure of humans and ecosystems to chemicals released from hazardous waste sites, the degree to which this potential is realized is often uncertain. Conceptually divided among parameter, model, and modeler uncertainties imparted during simula...

  15. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis.

    SciTech Connect

    Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia; Hough, Patricia Diane; Eddy, John P.

    2011-12-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.

  16. Proof-of-Concept Study for Uncertainty Quantification and Sensitivity Analysis using the BRL Shaped-Charge Example

    SciTech Connect

    Hughes, Justin Matthew

    2016-07-28

    These are the slides for a graduate presentation at Mississippi State University. It covers the following: the BRL Shaped-Charge Geometry in PAGOSA, mesh refinement study, surrogate modeling using a radial basis function network (RBFN), ruling out parameters using sensitivity analysis (equation of state study), uncertainty quantification (UQ) methodology, and sensitivity analysis (SA) methodology. In summary, a mesh convergence study was used to ensure that solutions were numerically stable by comparing PDV data between simulations. A Design of Experiments (DOE) method was used to reduce the simulation space to study the effects of the Jones-Wilkins-Lee (JWL) Parameters for the Composition B main charge. Uncertainty was quantified by computing the 95% data range about the median of simulation output using a brute force Monte Carlo (MC) random sampling method. Parameter sensitivities were quantified using the Fourier Amplitude Sensitivity Test (FAST) spectral analysis method where it was determined that detonation velocity, initial density, C1, and B1 controlled jet tip velocity.

  17. Uncertainty and sensitivity analysis for performance of Solar Invictus 53S - A parabolic dish solar collector for direct steam generation

    NASA Astrophysics Data System (ADS)

    Ali, Wajahat; Usman, Zubair; Jamil, Umer

    2017-06-01

    This paper presents an uncertainty and sensitivity analysis for the performance of Solar Invictus 53S, a solar steam generation solution consisting of a parabolic solar dish and a cavity-type receiver that works as a Once-Through direct Solar Steam Generator (OTSSG). A probabilistic model is used for the solar-to-steam conversion process of Solar Invictus 53S to predict its performance and efficiency. The system conversion performance model is discussed in this work, and input parameters have been assigned relevant probability distributions learned from available data and experience. The simulation of the model gives the probability distribution for net solar-to-steam efficiency with variations in the input parameters. Sensitivity analysis shows that performance of the system is most sensitive to Direct Normal Irradiance (DNI) and the cavity temperature of the OTSSG, whereas wind has an almost negligible effect on the performance of the collector.

  18. Sensitivity and a priori uncertainty analysis of the CFRMF central flux spectrum

    SciTech Connect

    Ryskamp, J.M.; Anderl, R.A.; Broadhead, B.L.; Ford, W.E. III; Lucius, J.L.; Marable, J.H.; Wagschal, J.J.

    1980-09-01

    The Coupled Fast Reactivity Measurement Facility (CFRMF) is a zoned-core critical assembly with a fast-neutron-spectrum zone in the center of an enriched U-235, water-moderated thermal driver. The central neutron spectrum is a Cross-Section Evaluation Working Group (CSEWG) benchmark field for data testing of fission-product, actinide, and dosimetry cross sections important to fast reactor technology. The AMPX and FORSS code systems were used to determine a flux covariance matrix for the CFRMF central neutron spectrum due to neutron cross-section and fission-spectrum uncertainties and correlations. Uncertainties in the ²³⁸U inelastic-scattering cross sections and the ²³⁵U fission spectrum contribute most to the standard deviations in the central flux spectrum. The full flux-spectrum covariance matrix contains strong correlations. This strongly motivates including the off-diagonal elements in data-testing and cross-section adjustment applications.

  19. Floors: Selection and Maintenance.

    ERIC Educational Resources Information Center

    Berkeley, Bernard

    Flooring for institutional, commercial, and industrial use is described with regard to its selection, care, and maintenance. The following flooring and subflooring material categories are discussed--(1) resilient floor coverings, (2) carpeting, (3) masonry floors, (4) wood floors, and (5) "formed-in-place floors". The properties, problems,…

  1. Considerations for sensitivity analysis, uncertainty quantification, and data assimilation for grid-to-rod fretting

    SciTech Connect

    Michael Pernice

    2012-10-01

    Grid-to-rod fretting is the leading cause of fuel failures in pressurized water reactors, and is one of the challenge problems being addressed by the Consortium for Advanced Simulation of Light Water Reactors to guide its efforts to develop a virtual reactor environment. Prior and current efforts in modeling and simulation of grid-to-rod fretting are discussed. Sources of uncertainty in grid-to-rod fretting are also described.

  2. Nuclear data sensitivity and uncertainty assessment of sodium voiding reactivity coefficients of an ASTRID-like sodium fast reactor

    NASA Astrophysics Data System (ADS)

    Nuria, García-Herranz; Anne-Laurène, Panadero; Ana, Martinez; Sandro, Pelloni; Konstantin, Mikityuk; Andreas, Pautz

    2017-09-01

    The EU 7th Framework ESNII+ project was launched in 2013 with the strategic orientation of preparing ESNII for Horizon 2020. ESNII stands for the European Industrial Initiative on Nuclear Energy, created by the European Commission in 2010 to promote the development of a new generation of nuclear systems in order to provide a sustainable solution to cope with Europe's growing energy needs while meeting the greenhouse gas emissions reduction target. The designs selected by the ESNII+ project are technological demonstrators of Generation-IV systems. The prototype for the sodium-cooled fast reactor technology is ASTRID (standing for Advanced Sodium Technological Reactor for Industrial Demonstration), whose detailed design phase is foreseen to be initiated in 2019. The ASTRID core has a peculiar design created to tackle the main neutronic challenge of sodium-cooled fast reactors: the inherent overall positive reactivity feedback in case of sodium voiding occurring in the core. Indeed, the core is claimed by its designers to have an overall negative reactivity feedback in this scenario. This feature was demonstrated for an ASTRID-like core within the ESNII+ framework studies performed by nine European institutions. In order to shift the paradigm towards best-estimate plus uncertainties, nuclear data sensitivity analysis and uncertainty propagation on reactivity coefficients have to be carried out. The goal of this work is to assess the impact of nuclear data uncertainties on sodium voiding reactivity feedback coefficients in order to get a more complete picture of the actual safety margins of the ASTRID low-void core design. The nuclear data sensitivity analysis is performed in parallel using SCALE TSUNAMI-3D and the newly developed GPT SERPENT 2 module. A comparison is carried out between the two methodologies. Uncertainty on the sodium reactivity feedbacks is then calculated using the TSAR module of SCALE and the necessary safety margins conclusions

  3. Uncertainty and sensitivity analyses of a decision analytic model for posteradication polio risk management.

    PubMed

    Duintjer Tebbens, Radboud J; Pallansch, Mark A; Kew, Olen M; Sutter, Roland W; Bruce Aylward, R; Watkins, Margaret; Gary, Howard; Alexander, James; Jafari, Hamid; Cochi, Stephen L; Thompson, Kimberly M

    2008-08-01

    Decision analytic modeling of polio risk management policies after eradication may help inform decision makers about the quantitative tradeoffs implied by various options. Given the significant dynamic complexity and uncertainty surrounding posteradication decisions, this article aims to clarify the structure of a decision analytic model developed to help characterize the risks, costs, and benefits of various options for polio risk management after eradication of wild polioviruses, and to analyze the implications of different sources of uncertainty. We provide an influence diagram of the model with a description of each component, explore the impact of different assumptions about model inputs, and present probability distributions of model outputs. The results show that choices made about surveillance, response, and containment for different income groups and immunization policies play a major role in the expected final costs and polio cases. While the overall policy implications of the model remain robust to the variations of assumptions and input uncertainty we considered, the analyses suggest the need for policymakers to carefully consider tradeoffs and for further studies to address the most important knowledge gaps.

  4. Sensitivity of Polar Stratospheric Ozone Loss to Uncertainties in Chemical Reaction Kinetics

    NASA Technical Reports Server (NTRS)

    Kawa, S. Randolph; Stolarski, Richard S.; Douglass, Anne R.; Newman, Paul A.

    2008-01-01

    Several recent observational and laboratory studies of processes involved in polar stratospheric ozone loss have prompted a reexamination of aspects of our understanding of this key indicator of global change. To a large extent, our confidence in understanding and projecting changes in polar and global ozone is based on our ability to simulate these processes in numerical models of chemistry and transport. These models depend on laboratory-measured kinetic reaction rates and photolysis cross sections to simulate molecular interactions. In this study we use a simple box-model scenario for Antarctic ozone to estimate the uncertainty in loss attributable to known reaction kinetic uncertainties. Following the method of earlier work, rates and uncertainties from the latest laboratory evaluation are applied in random combinations. We determine the key reactions and rates contributing the largest potential errors, and compare the results to observations to evaluate which combinations are consistent with atmospheric data. Implications for our theoretical and practical understanding of polar ozone loss will be assessed.

  5. Application of Monte Carlo Methods to Perform Uncertainty and Sensitivity Analysis on Inverse Water-Rock Reactions with NETPATH

    SciTech Connect

    McGraw, David; Hershey, Ronald L.

    2016-06-01

    Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in inputs affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little
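
    A stripped-down sketch of such a Monte Carlo driver, assuming a hypothetical per-constituent (mean, CV, distribution) spec and a placeholder in place of an actual NETPATH invocation:

    ```python
    import numpy as np

    rng = np.random.default_rng(21)

    # Hypothetical uncertainty spec: constituent -> (mean, CV, distribution)
    spec = {
        "Ca":   (55.0, 0.10, "normal"),
        "HCO3": (210.0, 0.05, "normal"),
        "C14":  (12.5, 0.20, "lognormal"),   # percent modern carbon
    }

    def sample_inputs():
        out = {}
        for name, (mean, cv, dist) in spec.items():
            if dist == "normal":
                out[name] = rng.normal(mean, cv * mean)
            else:                       # lognormal parameterized by mean, CV
                s2 = np.log(1.0 + cv**2)
                out[name] = rng.lognormal(np.log(mean) - 0.5 * s2, np.sqrt(s2))
        return out

    def run_model(inputs):              # placeholder for a NETPATH invocation
        return 5000.0 + 80.0 * (inputs["C14"] - 12.5)   # fake travel time (yr)

    travel_times = [run_model(sample_inputs()) for _ in range(10_000)]
    print("travel time p5/p50/p95:", np.percentile(travel_times, [5, 50, 95]))
    ```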

  6. Analysis of Sensitivity and Uncertainty in an Individual-Based Model of a Threatened Wildlife Species

    EPA Science Inventory

    We present a multi-faceted sensitivity analysis of a spatially explicit, individual-based model (IBM) (HexSim) of a threatened species, the Northern Spotted Owl (Strix occidentalis caurina) on a national forest in Washington, USA. Few sensitivity analyses have been conducted on ...

  7. Analysis of sensitivity and uncertainty in an individual-based model of a threatened wildlife species

    Treesearch

    Bruce G. Marcot; Peter H. Singleton; Nathan H. Schumaker

    2015-01-01

    Sensitivity analyses—determinations of how prediction variables affect response variables—of individual-based models (IBMs) are few but important to the interpretation of model output. We present a sensitivity analysis of a spatially explicit IBM (HexSim) of a threatened species, the Northern Spotted Owl (NSO; Strix occidentalis caurina) in Washington...

  9. Assessing uncertainty in ecological systems using global sensitivity analyses: a case example of simulated wolf reintroduction effects on elk

    USGS Publications Warehouse

    Fieberg, J.; Jenkins, Kurt J.

    2005-01-01

    Often landmark conservation decisions are made despite an incomplete knowledge of system behavior and inexact predictions of how complex ecosystems will respond to management actions. For example, predicting the feasibility and likely effects of restoring top-level carnivores such as the gray wolf (Canis lupus) to North American wilderness areas is hampered by incomplete knowledge of the predator-prey system processes and properties. In such cases, global sensitivity measures, such as Sobol' indices, allow one to quantify the effect of these uncertainties on model predictions. Sobol' indices are calculated by decomposing the variance in model predictions (due to parameter uncertainty) into main effects of model parameters and their higher order interactions. Model parameters with large sensitivity indices can then be identified for further study in order to improve predictive capabilities. Here, we illustrate the use of Sobol' sensitivity indices to examine the effect of parameter uncertainty on the predicted decline of elk (Cervus elaphus) population sizes following a hypothetical reintroduction of wolves to Olympic National Park, Washington, USA. The strength of density dependence acting on survival of adult elk and magnitude of predation were the most influential factors controlling elk population size following a simulated wolf reintroduction. In particular, the form of density dependence in natural survival rates and the per-capita predation rate together accounted for over 90% of variation in simulated elk population trends. Additional research on wolf predation rates on elk and natural compensations in prey populations is needed to reliably predict the outcome of predator-prey system behavior following wolf reintroductions.
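
    First-order Sobol' indices can be estimated with the standard two-matrix (Saltelli-style) scheme; the sketch below uses an invented elk-response function, not the wolf-elk simulation model:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def elk_decline(x):
        """Toy stand-in for the simulated elk response; columns are
        (density-dependence strength, per-capita predation, recruitment)."""
        dd, pred, rec = x[:, 0], x[:, 1], x[:, 2]
        return dd * pred + 0.1 * rec + 0.5 * dd**2

    n, d = 20_000, 3
    A = rng.random((n, d))               # two independent sample matrices
    B = rng.random((n, d))
    fA, fB = elk_decline(A), elk_decline(B)
    var = np.var(np.concatenate([fA, fB]))

    # First-order index S_i = V[E(Y|X_i)] / V(Y), Saltelli (2010) estimator
    for i, name in enumerate(["density dep.", "predation", "recruitment"]):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # replace column i with B's draws
        S_i = np.mean(fB * (elk_decline(ABi) - fA)) / var
        print("%-12s S = %.2f" % (name, S_i))
    ```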

  10. Scalloped Floor

    NASA Technical Reports Server (NTRS)

    2006-01-01

    13 July 2006 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows erosional remnants of layered rock and large windblown ripples on the floor of a crater in the Tyrrhena Terra region of Mars. The layered rocks are most likely sedimentary.

    Location near: 15.5°S, 270.5°W Image width: 3 km (1.9 mi) Illumination from: upper left Season: Southern Autumn

  11. Sensitivity of land surface modeling to parameters: An uncertainty quantification method applied to the Community Land Model

    NASA Astrophysics Data System (ADS)

    Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.

    2015-12-01

    Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which will consequently affect atmospheric and biogeochemical processes. Therefore, quantification of such parameter uncertainties using a land surface model is the first step towards better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and literature survey, and samples were generated with a quasi-Monte Carlo approach (Sobol sequence). Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (including slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in RuBisco) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, but with varying importance rankings. On the other hand, for surface and sub-surface runoff, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the previous four PFT-dependent parameters. Further analyses conditioning the results on different seasons and years are being conducted to provide guidance on how climate variability and change might affect such sensitivity. This is the first step toward coupled simulations including biogeochemical processes, atmospheric processes

  12. Random vibration sensitivity studies of modeling uncertainties in the NIF structures

    SciTech Connect

    Swensen, E.A.; Farrar, C.R.; Barron, A.A.; Cornwell, P.

    1996-12-31

    The National Ignition Facility is a laser fusion project that will provide an above-ground experimental capability for nuclear weapons effects simulation. This facility will achieve fusion ignition utilizing solid-state lasers as the energy driver. The facility will cover an estimated 33,400 m² at an average height of 5–6 stories. Within this complex, a number of beam transport structures will be housed that will deliver the laser beams to the target area within a 50 µm rms radius of the target center. The beam transport structures are approximately 23 m long and reach heights of approximately 2–3 stories. Low-level ambient random vibrations are one of the primary concerns currently controlling the design of these structures. Low-level ambient vibrations, 10⁻¹⁰ g²/Hz over a frequency range of 1 to 200 Hz, are assumed to be present during all facility operations. Each structure described in this paper will be required to achieve and maintain 0.6 µrad rms laser beam pointing stability for a minimum of 2 hours under these vibration levels. To date, finite element (FE) analysis has been performed on a number of the beam transport structures. Certain assumptions have to be made regarding structural uncertainties in the FE models. These uncertainties consist of damping values for concrete and steel, compliance within bolted and welded joints, and assumptions regarding the phase coherence of ground motion components. In this paper, the influence of these structural uncertainties on the predicted pointing stability of the beam line transport structures as determined by random vibration analysis will be discussed.
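
    For orientation, rms response figures of this kind follow from integrating a response power spectral density. The sketch below propagates the flat 10⁻¹⁰ g²/Hz floor quoted above through a single-degree-of-freedom transmissibility over 1–200 Hz; the modal frequency and damping ratio are illustrative assumptions, not NIF design values.

      # Sketch: rms response of a single-DOF oscillator to a flat
      # 1e-10 g^2/Hz base-excitation PSD, integrated over 1-200 Hz.
      # Modal frequency and damping ratio are assumed, not NIF values.
      import numpy as np

      f = np.linspace(1.0, 200.0, 20000)       # frequency grid, Hz
      S_in = 1e-10 * np.ones_like(f)           # input PSD, g^2/Hz
      fn, zeta = 30.0, 0.02                    # assumed mode: 30 Hz, 2% damping
      r = f / fn
      H2 = (1.0 + (2.0 * zeta * r) ** 2) / ((1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2)
      S_out = H2 * S_in                        # response PSD (transmissibility^2)
      # Trapezoidal integration of the PSD gives the mean-square response.
      ms = np.sum(0.5 * (S_out[1:] + S_out[:-1]) * np.diff(f))
      print(f"rms acceleration: {np.sqrt(ms):.2e} g")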

  13. Optimization algorithm for overlapping-field plans of scanned ion beam therapy with reduced sensitivity to range and setup uncertainties

    NASA Astrophysics Data System (ADS)

    Inaniwa, Taku; Kanematsu, Nobuyuki; Furukawa, Takuji; Noda, Koji

    2011-03-01

    A 'patch-field' strategy is often used for tumors with large volumes exceeding the available field size in passive irradiations with ion beams. Range and setup errors can cause hot and cold spots at the field junction within the target. Such errors will also displace the field to miss the target periphery. With scanned ion beams with fluence modulation, the two junctional fields can be overlapped rather than patched, which may potentially reduce the sensitivity to these uncertainties. In this study, we have developed such a robust optimization algorithm. This algorithm is composed of the following two steps: (1) expanding the target volume with margins against the uncertainties, and (2) solving the inverse problem where the terms suppressing the dose gradient of individual fields are added into the objective function. The validity of this algorithm is demonstrated through simulation studies for two extreme cases of two fields with unidirectional and opposing geometries and for a prostate-cancer case. With the proposed algorithm, we can obtain a more robust plan with minimized influence of range and setup uncertainties than the conventional plan. Compared to conventional optimization, the calculation time for the robust optimization increased by a factor of approximately 3.

  14. Sensitivity and uncertainty analysis of the Coupled Fast Reactivity Measurements Facility central flux spectrum

    SciTech Connect

    Ryskamp, J.M.; Andrel, R.A.; Broadhead, B.L.; Ford, W.E.; Lucius, J.L.; Marable, J.H.; Wagschal, J.J.

    1982-04-01

    The Coupled Fast Reactivity Measurements Facility (CFRMF) is a zoned-core critical assembly with a fast neutron spectrum zone in the center of an enriched ²³⁵U, water-moderated thermal driver. The central neutron field is a Cross-Section Evaluation Working Group benchmark for data testing of fission product, actinide, and dosimetry cross sections important to fast reactor technology. The AMPX and FORSS code systems were used to determine a covariance matrix for the CFRMF central neutron spectrum. The covariance matrix accounts for neutron cross section and fission spectrum uncertainties and correlations. Uncertainties in the ²³⁸U inelastic-scattering cross sections and in the ²³⁵U fission spectrum were found to contribute most to the standard deviations in the central flux spectrum. The flux-spectrum covariance matrix contains strong correlations, which strongly motivates including the off-diagonal elements in data testing and cross section adjustment applications. The flux spectrum covariance matrix was applied in this work for integral data testing of dosimeter cross sections.

  15. Sensitivity of the photolysis rate to the uncertainties in spectral solar irradiance variability

    NASA Astrophysics Data System (ADS)

    Sukhodolov, Timofei; Rozanov, Eugene; Bais, Alkiviadis; Tourpali, Kleareti; Shapiro, Alexander; Telford, Paul; Peter, Thomas; Schmutz, Werner

    2014-05-01

    The state of the stratospheric ozone layer and temperature structure are largely maintained by photolytic processes. Therefore, uncertainties in the magnitude and spectral composition of the spectral solar irradiance (SSI) evolution during the declining phase of the 23rd solar cycle have substantial implications for modeling the evolution of the middle atmosphere, leading not only to pronounced differences in heating rates but also affecting photolysis rates. To estimate the role of SSI uncertainties we compared the most important photolysis rates (O₂, O₃, and NO₂) calculated with the reference radiation code libRadtran using SSI for June 2004 and February 2009 obtained from two models (NRL, COSI) and one observational data set based on SORCE observations. We found that below 40 km, changes in ozone photolysis can reach several tenths of a percent, caused by changes of the SSI in the Hartley and Huggins bands, while changes in oxygen photolysis can reach several percent, caused by changes of the SSI in the Herzberg continuum and Schumann-Runge bands. For the SORCE data set these changes are 2-4 times higher. We also evaluated the ability of several photolysis rate calculation methods widely used in atmospheric models to reproduce the absolute values of the photolysis rates and their response to the implied SSI changes. With some caveats, all schemes show good results in the middle stratosphere compared to libRadtran. However, in the troposphere and mesosphere the differences are more noticeable.

  16. New SCALE Sensitivity/Uncertainty Capabilities Applied to Bias Estimation and to Design of MIRTE Reference Experiments

    SciTech Connect

    Rearden, Bradley T; Duhamel, Isabelle; Letang, Eric

    2009-01-01

    New TSUNAMI tools of SCALE 6, TSURFER and TSAR, are demonstrated to examine the bias effects of small-worth test materials, relative to reference experiments. TSURFER is a data adjustment tool for assessing bias and bias uncertainty, and TSAR computes the sensitivity of the change in reactivity between two systems to the cross-section data common to their calculation. With TSURFER, it is possible to examine biases and bias uncertainties in fine detail. For replacement experiments, the application of TSAR to TSUNAMI-3D sensitivity data for pairs of experiments allows the isolation of sources of bias that could otherwise be obscured by materials with more worth in an individual experiment. The application of TSUNAMI techniques in the design of nine reference experiments for the MIRTE program will allow application of these advanced techniques to data acquired in the experimental series. The validation of all materials in a complex criticality safety application likely requires consolidating information from many different critical experiments. For certain materials, such as structural materials or fission products, only a limited number of critical experiments are available, and the fuel and moderator compositions of the experiments may differ significantly from those of the application. In these cases, it is desirable to extract the computational bias of a specific material from an integral keff measurement and use that information to quantify the bias due to the use of the same material in the application system. Traditional parametric and nonparametric methods are likely to prove poorly suited for such a consolidation of specific data components from a diverse set of experiments. An alternative choice for consolidating specific data from numerous sources is a data adjustment tool like the ORNL tool TSURFER (Tool for Sensitivity/Uncertainty analysis of Response Functionals using Experimental Results) from SCALE 6. However, even with TSURFER, it may be difficult to

  17. Sensitivity and uncertainty analysis of the simulation of ¹²³I and ⁵⁴Mn gamma and X-ray emissions in a liquid scintillation vial.

    PubMed

    Bignell, L J; Mo, L; Alexiev, D; Hashemi-Nezhad, S R

    2010-01-01

    Radiation transport simulations of the most probable gamma- and X-ray emissions of ¹²³I and ⁵⁴Mn in a three-photomultiplier-tube liquid scintillation detector have been carried out. A Geant4 simulation was used to acquire energy deposition spectra and interaction probabilities with the scintillant, as required for absolute activity measurement using the triple to double coincidence ratio (TDCR) method. A sensitivity and uncertainty analysis of the simulation model is presented here. The uncertainty in the Monte Carlo simulation results due to the input parameter uncertainties was found to be more significant than the statistical uncertainty component for a typical number of simulated decay events. The model was most sensitive to changes in the volume of the scintillant. Estimates of the relative uncertainty associated with the simulation outputs due to the combined stochastic and input uncertainties are provided. A Monte Carlo uncertainty analysis of an ¹²³I TDCR measurement indicated that accounting for the simulation uncertainties increases the uncertainty in the efficiency of the logical sum of double coincidences by 5.1%.

  18. Using Uncertainty and Sensitivity Analyses in Socioecological Agent-Based Models to Improve Their Analytical Performance and Policy Relevance

    PubMed Central

    Ligmann-Zielinska, Arika; Kramer, Daniel B.; Spence Cheruvelil, Kendra; Soranno, Patricia A.

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In the last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. In the first of these, the input space is reduced to only those inputs that drive the variance of the initial ABM, resulting in a model with an output distribution similar to that of the initial model. In the second, we refine the value of the most influential input, producing a model that maintains the mean output of the initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system. PMID:25340764

  19. Uncertainty exposure causes behavioural sensitization and increases risky decision-making in male rats: toward modelling gambling disorder.

    PubMed

    Zeeb, Fiona D; Li, Zhaoxia; Fisher, Daniel C; Zack, Martin H; Fletcher, Paul J

    2017-08-23

    An animal model of gambling disorder, previously known as pathological gambling, could advance our understanding of the disorder and help with treatment development. We hypothesized that repeated exposure to uncertainty during gambling induces behavioural and dopamine (DA) sensitization - similar to chronic exposure to drugs of abuse. Uncertainty exposure (UE) may also increase risky decision-making in an animal model of gambling disorder. Male Sprague Dawley rats received 56 UE sessions, during which animals responded for saccharin according to an unpredictable, variable ratio schedule of reinforcement (VR group). Control animals responded on a predictable, fixed ratio schedule (FR group). Rats yoked to receive unpredictable reward were also included (Y group). Animals were then tested on the Rat Gambling Task (rGT), an analogue of the Iowa Gambling Task, to measure decision-making. Compared with the FR group, the VR and Y groups experienced a greater locomotor response following administration of amphetamine. On the rGT, the FR and Y groups preferred the advantageous options over the risky, disadvantageous options throughout testing (40 sessions). However, rats in the VR group did not have a significant preference for the advantageous options during sessions 20-40. Amphetamine had a small, but significant, effect on decision-making only in the VR group. After rGT testing, only the VR group showed greater hyperactivity following administration of amphetamine compared with the FR group. Reward uncertainty was the only gambling feature modelled. Actively responding for uncertain reward likely sensitized the DA system and impaired the ability to make optimal decisions, modelling some aspects of gambling disorder.

  20. Status Report on Scoping Reactor Physics and Sensitivity/Uncertainty Analysis of LR-0 Reactor Molten Salt Experiments

    SciTech Connect

    Brown, Nicholas R.; Mueller, Donald E.; Patton, Bruce W.; Powers, Jeffrey J.

    2016-08-31

    Experiments are being planned at Research Centre Řež (RC Řež) to use the FLiBe (2 ⁷LiF-BeF₂) salt from the Molten Salt Reactor Experiment (MSRE) to perform reactor physics measurements in the LR-0 low power nuclear reactor. These experiments are intended to provide information on neutron spectral effects and nuclear data uncertainties for advanced reactor systems utilizing FLiBe salt in a thermal neutron energy spectrum. Oak Ridge National Laboratory (ORNL) is performing sensitivity/uncertainty (S/U) analysis of these planned experiments as part of the ongoing collaboration between the United States and the Czech Republic on civilian nuclear energy research and development. The objective of these analyses is to produce the sensitivity of neutron multiplication to cross section data on an energy-dependent basis for specific nuclides. This report provides a status update on the S/U analyses of critical experiments at the LR-0 reactor relevant to fluoride salt-cooled high temperature reactor (FHR) and liquid-fueled molten salt reactor (MSR) concepts. The S/U analyses will be used to inform the design of FLiBe-based experiments using the salt from MSRE.

  1. Sensitivity and uncertainty analyses of unsaturated flow travel time in the CHnz unit of Yucca Mountain, Nevada

    SciTech Connect

    Nichols, W.E.; Freshley, M.D.

    1991-10-01

    This report documents the results of sensitivity and uncertainty analyses conducted to improve understanding of unsaturated zone ground-water travel time distribution at Yucca Mountain, Nevada. The US Department of Energy (DOE) is currently performing detailed studies at Yucca Mountain to determine its suitability as a host for a geologic repository for the containment of high-level nuclear wastes. As part of these studies, DOE is conducting a series of Performance Assessment Calculational Exercises, referred to as the PACE problems. The work documented in this report represents a part of the PACE-90 problems that addresses the effects of natural barriers of the site that will stop or impede the long-term movement of radionuclides from the potential repository to the accessible environment. In particular, analyses described in this report were designed to investigate the sensitivity of the ground-water travel time distribution to different input parameters and the impact of uncertainty associated with those input parameters. Five input parameters were investigated in this study: recharge rate, saturated hydraulic conductivity, matrix porosity, and two curve-fitting parameters used for the van Genuchten relations to quantify the unsaturated moisture-retention and hydraulic characteristics of the matrix. 23 refs., 20 figs., 10 tabs.

  2. Integral experiment information for fast reactors: Sensitivity and uncertainty analysis of reactor performance parameters

    SciTech Connect

    Collins, P.J.

    1982-01-01

    This chapter offers a detailed analysis of uncertainties in experimental parameters for the ZPR benchmark cores. Discusses the critical facilities and measurements; the need for well documented data; the relevance of data for reactor design; uses of integral data; benchmark data; mockup cores; accuracy of experimental data; critical mass; reaction rate ratios; covariance matrices; selection of reliable integral data; cavity measurements; and the SCHERZO 556 core. Points out that substantial revisions of data in the CSEWG benchmark book have resulted from a reevaluation of analytical corrections using modern methods and codes. Concludes that the integral data presently being utilized represent a very limited base, which will be enlarged considerably before application to a wider range of power reactor parameters.

  3. Sensitivity and a priori uncertainty analysis of the CFRMF central flux spectrum

    SciTech Connect

    Ryskamp, J.M.; Anderl, R.A.; Broadhead, B.L.; Ford, W.E. III; Lucius, J.L.; Marable, J.H.

    1980-01-01

    The Coupled Fast Reactivity Measurements Facility (CFRMF), located at the Idaho National Engineering Laboratory, is a zoned-core critical assembly with a fast-neutron-spectrum zone in the center of an enriched ²³⁵U, water-moderated thermal driver. An accurate knowledge of the central neutron spectrum is important to data-testing analyses which utilize integral reaction-rate data measured for samples placed in the CFRMF field. The purpose of this paper is to present the results of a study made with the AMPX-II and FORSS code systems to determine the central-spectrum flux covariance matrix due to uncertainties and correlations in the nuclear data for the materials which comprise the facility.

  4. Combining Apples and Oranges: Lessons from Weighting, Inversion, Sensitivity Analysis, and Uncertainty

    NASA Astrophysics Data System (ADS)

    Hill, Mary

    2016-04-01

    Combining different data types can seem like combining apples and oranges, yet combining different data types in inverse modeling and uncertainty quantification is important in all types of environmental systems. There are two main methods for combining different data types: (1) single-objective optimization (SOO) with weighting, and (2) multi-objective optimization (MOO), in which coefficients for data groups are defined and changed during model development. SOO and MOO are related in that different coefficient values in MOO are equivalent to considering alternative weightings. MOO methods often take many model runs and tend to be much more computationally expensive than SOO, but for SOO the weighting needs to be defined. When alternative models are more important to consider than alternative weightings, SOO can be advantageous (Lu et al. 2012). This presentation considers how to determine the weighting when using SOO. A saltwater intrusion example is used to examine two methods of weighting three data types: weighting based on contributions to the objective function, as suggested by Anderson et al. (2015), and error-based weighting, as suggested by Hill and Tiedeman (2007). The consequences of weighting on measures of uncertainty, the importance and interdependence of parameters, and the importance of observations are presented. This work is important to many types of environmental modeling, including climate models, because integrating many kinds of data is often important. The advent of rainfall-runoff models with fewer numerical daemons, such as TOPKAPI and SUMMA, makes the convenient model analysis methods used in this work more useful for many hydrologic problems.
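
    For reference, the error-based weighting attributed above to Hill and Tiedeman is conventionally written as follows (a textbook form, not a formula quoted from this presentation):

      % Error-based weighting (Hill and Tiedeman): each observation is
      % weighted by the inverse of its error variance, so squared residuals
      % from different data types become dimensionless and summable in a
      % single weighted least-squares objective.
      \omega_i = \frac{1}{\sigma_i^2},
      \qquad
      S(\mathbf{b}) = \sum_{i=1}^{N} \omega_i \left[ y_i - y_i'(\mathbf{b}) \right]^2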

  5. The sensitivity of oxidant formation rates to uncertainties in temperature, water vapor, and cloud cover

    SciTech Connect

    Walcek, C.J.; Yuan, H.H.

    1994-12-31

    Photochemical reaction mechanisms have been used for several decades to understand the formation of acids, oxidants, and other pollutants in the atmosphere. With complex chemical reaction mechanisms, it is useful to perform sensitivity studies to identify the most important or uncertain components within the system of reactions. In this study, we quantify the sensitivity of a chemical reaction mechanism to changes in three meteorological factors: temperature, relative humidity, and sunlight intensity. We perform these sensitivity studies over a wide range of nitrogen oxides (NOₓ = NO + NO₂) and nonmethane hydrocarbon (NMHC) concentrations, since these two chemicals are the dominant controllable pollutants that influence the chemical reactivity of the atmosphere.

  6. UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation constructed using the JUPITER API

    USGS Publications Warehouse

    Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen

    2006-01-01

    This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a
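
    UCODE_2005 itself is driven by its own input and instruction files; purely as an illustration of the weighted nonlinear-regression idea it implements, the following SciPy sketch fits a toy two-parameter model with weights taken as inverse error variances. The data and model here are invented.

      # Sketch: weighted nonlinear regression on a toy exponential model,
      # with weights w_i = 1/sigma_i^2 applied as scaled residuals.
      import numpy as np
      from scipy.optimize import least_squares

      t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])     # observation times
      obs = np.array([9.8, 8.1, 5.6, 2.9, 0.9])    # invented observations
      sigma = np.array([0.5, 0.5, 0.4, 0.3, 0.2])  # observation standard errors

      def residuals(p):
          h0, k = p
          sim = h0 * np.exp(-k * t)                # toy process model
          return (obs - sim) / sigma               # weighted residuals

      fit = least_squares(residuals, x0=[10.0, 0.1])
      print(fit.x)         # estimated parameters
      print(2 * fit.cost)  # weighted least-squares objective value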

  7. Extension of sensitivity and uncertainty analysis for long term dose assessment of high level nuclear waste disposal sites to uncertainties in the human behaviour.

    PubMed

    Albrecht, Achim; Miquel, Stéphan

    2010-01-01

    Biosphere dose conversion factors are computed for the French high-level geological waste disposal concept, illustrating the combined probabilistic and deterministic approach. Both ¹³⁵Cs and ⁷⁹Se are used as examples. Probabilistic analyses of the system considering all parameters, as well as physical and societal parameters independently, allow quantification of their mutual impact on overall uncertainty. As physical parameter uncertainties decrease, for example with the availability of further experimental and field data, the societal uncertainties, which are less easily constrained, particularly for the long term, become more and more significant. One also has to distinguish uncertainties impacting the low-dose portion of a distribution from those impacting the high-dose range, the latter logically having a greater impact in an assessment situation. The use of cumulative probability curves allows us to quantify probability variations as a function of the dose estimate, with the ratio of the probability variation (slope of the curve) indicative of the uncertainties of different radionuclides. In the case of ¹³⁵Cs, with better-constrained physical parameters, the uncertainty in human behaviour is more significant, even in the high-dose range, where it increases the probability of higher doses. For both radionuclides, uncertainties impact the intermediate dose range more strongly than the high-dose range. In an assessment context, the focus will be on probabilities of higher dose values. The probabilistic approach can furthermore be used to construct critical groups based on a predefined probability level and to ensure that critical groups cover the expected range of uncertainty.

  8. Sensitivity of seasonal weather prediction and extreme precipitation events to soil moisture initialization uncertainty using SMOS soil moisture products

    NASA Astrophysics Data System (ADS)

    Khodayar-Pardo, Samiro; Lopez-Baeza, Ernesto; Coll Pajaron, M. Amparo

    Soil moisture is an important variable in agriculture, hydrology, meteorology and related disciplines. Despite its importance, it is complicated to obtain an appropriate representation of this variable, mainly because of its high temporal and spatial variability. SVAT (Soil-Vegetation-Atmosphere-Transfer) models can be used to simulate the temporal behaviour and spatial distribution of soil moisture in a given area, and state-of-the-art products such as the soil moisture measurements from the SMOS (Soil Moisture and Ocean Salinity) space mission may also be convenient. The potential role of soil moisture initialization and associated uncertainty in numerical weather prediction is illustrated in this study through sensitivity numerical experiments using the SVAT SURFEX model and the non-hydrostatic COSMO model. The aim of this investigation is twofold: (a) to demonstrate the sensitivity of model simulations of convective precipitation to soil moisture initial uncertainty, as well as the impact on the representation of extreme precipitation events, and (b) to assess the usefulness of SMOS soil moisture products to improve the simulation of water cycle components and heavy precipitation events. Simulated soil moisture and precipitation fields are compared with observations and with level-1 (~1 km), level-2 (~15 km) and level-3 (~35 km) soil moisture maps generated from SMOS over the Iberian Peninsula, the SMOS validation area (50 km x 50 km, eastern Spain) and selected stations, where in situ measurements are available covering different vegetation cover

  9. Quantifying the economic competitiveness of cellulosic biofuel pathways under uncertainty and regional sensitivity

    NASA Astrophysics Data System (ADS)

    Brown, Tristan R.

    The revised Renewable Fuel Standard requires the annual blending of 16 billion gallons of cellulosic biofuel by 2022 from zero gallons in 2009. The necessary capacity investments have been underwhelming to date, however, and little is known about the likely composition of the future cellulosic biofuel industry as a result. This dissertation develops a framework for identifying and analyzing the industry's likely future composition while also providing a possible explanation for why investment in cellulosic biofuels capacity has been low to date. The results of this dissertation indicate that few cellulosic biofuel pathways will be economically competitive with petroleum on an unsubsidized basis. Of five cellulosic biofuel pathways considered under 20-year price forecasts with volatility, only two achieve positive mean 20-year net present value (NPV) probabilities. Furthermore, recent exploitation of U.S. shale gas reserves and the subsequent fall in U.S. natural gas prices have negatively impacted the economic competitiveness of all but two of the cellulosic biofuel pathways considered; only two of the five pathways achieve substantially higher 20-year NPVs under a post-shale gas economic scenario relative to a pre-shale gas scenario. The economic competitiveness of cellulosic biofuel pathways with petroleum is reduced further when considered under price uncertainty in combination with realistic financial assumptions. This dissertation calculates pathway-specific costs of capital for five cellulosic biofuel pathway scenarios. The analysis finds that the large majority of the scenarios incur costs of capital that are substantially higher than those commonly assumed in the literature. Employment of these costs of capital in a comparative techno-economic analysis (TEA) greatly reduces the mean 20-year NPVs for each pathway while increasing their 10-year probabilities of default to above 80% for all five scenarios. Finally, this dissertation quantifies the economic competitiveness of six
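
    The NPV-under-price-uncertainty calculation underlying such comparisons can be sketched in a few lines; every financial figure below (capital cost, cost of capital, price process, volumes) is an invented placeholder, not a value from the dissertation.

      # Sketch: Monte Carlo 20-year NPV for a hypothetical pathway under
      # lognormal price uncertainty. All figures are invented placeholders.
      import numpy as np

      rng = np.random.default_rng(42)
      n, years = 10_000, 20
      capex = 400e6                   # up-front capital cost, $
      wacc = 0.12                     # assumed pathway-specific cost of capital
      volume = 50e6                   # gallons produced per year
      opex = 2.2                      # operating cost, $/gallon
      prices = rng.lognormal(np.log(3.0), 0.25, size=(n, years))  # $/gallon
      cash = (prices - opex) * volume                 # annual cash flows
      discount = (1.0 + wacc) ** -np.arange(1, years + 1)
      npv = cash @ discount - capex
      print(f"mean NPV: {npv.mean()/1e6:.0f} M$, P(NPV>0): {(npv > 0).mean():.2f}")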

  10. A practical method to assess model sensitivity and parameter uncertainty in C cycle models

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2015-04-01

    The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists in finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the three following conditions hold: 1) a solution exists, 2) the solution is unique, and 3) the solution depends continuously on the input data. If at least one of these conditions is violated the problem is said to be ill-posed. The inverse problem is often ill-posed; a regularization method is then required to replace the original problem with a well-posed one, and a solution strategy amounts to 1) constructing a solution x, 2) assessing the validity of the solution, 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF) to estimate model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed on the fact that parameters and initial stocks directly related to fast processes were best estimated with narrow confidence intervals, whereas those related to slow processes were poorly estimated with very large uncertainties. While other studies have tried to overcome this difficulty by adding complementary
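
    A common way to restore well-posedness, sketched here in a generic variational form rather than any formulation specific to the DALEC studies, is Tikhonov-type regularization against a prior state:

      % Tikhonov-type (variational) regularization of h(x) = y: trade the
      % data misfit against departure from a prior state x_b, with R and B
      % the observation- and prior-error covariance matrices. Generic form,
      % not a formulation quoted from the DALEC intercomparisons.
      \hat{x} = \arg\min_{x} \,
        \left( y - h(x) \right)^{\top} R^{-1} \left( y - h(x) \right)
        + \left( x - x_b \right)^{\top} B^{-1} \left( x - x_b \right)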

  11. Understanding hydrological flow paths in conceptual catchment models using uncertainty and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Mockler, Eva M.; O'Loughlin, Fiachra E.; Bruen, Michael

    2016-05-01

    Increasing pressures on water quality due to intensification of agriculture have raised demands for environmental modeling to accurately simulate the movement of diffuse (nonpoint) nutrients in catchments. As hydrological flows drive the movement and attenuation of nutrients, individual hydrological processes in models should be adequately represented for water quality simulations to be meaningful. In particular, the relative contribution of groundwater and surface runoff to rivers is of interest, as increasing nitrate concentrations are linked to higher groundwater discharges. These requirements for hydrological modeling of groundwater contribution to rivers initiated this assessment of internal flow path partitioning in conceptual hydrological models. In this study, a variance based sensitivity analysis method was used to investigate parameter sensitivities and flow partitioning of three conceptual hydrological models simulating 31 Irish catchments. We compared two established conceptual hydrological models (NAM and SMARG) and a new model (SMART), produced especially for water quality modeling. In addition to the criteria that assess streamflow simulations, a ratio of average groundwater contribution to total streamflow was calculated for all simulations over the 16 year study period. As observed time series of groundwater contributions to streamflow are not available at catchment scale, the groundwater ratios were evaluated against average annual indices of base flow and deep groundwater flow for each catchment. The exploration of sensitivities of internal flow path partitioning was a specific focus to assist in evaluating model performance. Results highlight that model structure has a strong impact on simulated groundwater flow paths. Sensitivity to the internal pathways in the models is not reflected in the performance criteria results. This demonstrates that simulated groundwater contributions should be constrained by independent data to ensure results

  12. A probabilistic arsenic exposure assessment for children who contact chromated copper arsenate (CCA)-treated playsets and decks, Part 2: Sensitivity and uncertainty analyses.

    PubMed

    Xue, Jianping; Zartarian, Valerie G; Ozkaynak, Halûk; Dang, Winston; Glen, Graham; Smith, Luther; Stallings, Casson

    2006-04-01

    A probabilistic model (SHEDS-Wood) was developed to examine children's exposure and dose to chromated copper arsenate (CCA)-treated wood, as described in Part 1 of this two-part article. This Part 2 article discusses sensitivity and uncertainty analyses conducted to assess the key model inputs and areas of needed research for children's exposure to CCA-treated playsets and decks. The following types of analyses were conducted: (1) sensitivity analyses using a percentile scaling approach and multiple stepwise regression; and (2) uncertainty analyses using the bootstrap and two-stage Monte Carlo techniques. The five most important variables, based on both sensitivity and uncertainty analyses, were: wood surface residue-to-skin transfer efficiency; wood surface residue levels; fraction of hand surface area mouthed per mouthing event; average fraction of nonresidential outdoor time a child plays on/around CCA-treated public playsets; and frequency of hand washing. In general, uncertainty in the predicted population dose estimates due to parameter uncertainty spanned roughly a factor of 8 at the 5th and 95th percentiles and a factor of 4 at the 50th percentile. Data were available for most of the key model inputs identified with sensitivity and uncertainty analyses; however, there were few or no data for some key inputs. To evaluate and improve the accuracy of model results, future measurement studies should obtain longitudinal time-activity diary information on children, spatial and temporal measurements of residue and soil concentrations on or near CCA-treated playsets and decks, and key exposure factors. Future studies should also address other sources of uncertainty in addition to parameter uncertainty, such as scenario and model uncertainty.
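
    The two-stage Monte Carlo technique mentioned above separates epistemic parameter uncertainty (outer loop) from inter-individual variability (inner loop). The sketch below illustrates the mechanics with invented distributions; it does not reproduce SHEDS-Wood inputs or outputs.

      # Sketch: two-stage Monte Carlo -- outer loop over uncertain (epistemic)
      # parameters, inner loop over child-to-child (aleatory) variability.
      # Distributions and the dose expression are invented placeholders.
      import numpy as np

      rng = np.random.default_rng(1)
      n_outer, n_inner = 200, 2000
      p50 = np.empty(n_outer)
      p95 = np.empty(n_outer)
      for i in range(n_outer):
          mean_te = rng.uniform(0.1, 0.4)       # uncertain mean transfer efficiency
          # Beta(a, b) with mean == mean_te models variability across children.
          te = rng.beta(2.0, 2.0 / mean_te - 2.0, size=n_inner)
          residue = rng.lognormal(np.log(5.0), 0.5, size=n_inner)  # surface residue
          dose = te * residue * 0.02            # toy dose model
          p50[i], p95[i] = np.percentile(dose, [50, 95])
      # Spread of the medians across outer draws = uncertainty about variability.
      print("median-dose uncertainty range:", p50.min(), p50.max())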

  13. Pelvic Floor Disorders

    MedlinePlus

    ... What is the pelvic floor? The term "pelvic floor" refers to the group ...

  14. Uncertainty and Sensitivity of Contaminant Travel Times from the Upgradient Nevada Test Site to the Yucca Mountain Area

    SciTech Connect

    J. Zhu; K. Pohlmann; J. Chapman; C. Russell; R.W.H. Carroll; D. Shafer

    2009-09-10

    Yucca Mountain (YM), Nevada, has been proposed by the U.S. Department of Energy as the nation’s first permanent geologic repository for spent nuclear fuel and high-level radioactive waste. In this study, the potential for groundwater advective pathways from underground nuclear testing areas on the Nevada Test Site (NTS) to intercept the subsurface of the proposed land withdrawal area for the repository is investigated. The timeframe for advective travel and its uncertainty for possible radionuclide movement along these flow pathways is estimated as a result of effective-porosity value uncertainty for the hydrogeologic units (HGUs) along the flow paths. Furthermore, sensitivity analysis is conducted to determine the most influential HGUs on the advective radionuclide travel times from the NTS to the YM area. Groundwater pathways are obtained using the particle tracking package MODPATH and flow results from the Death Valley regional groundwater flow system (DVRFS) model developed by the U.S. Geological Survey (USGS). Effective-porosity values for HGUs along these pathways are one of several parameters that determine possible radionuclide travel times between the NTS and proposed YM withdrawal areas. Values and uncertainties of HGU porosities are quantified through evaluation of existing site effective-porosity data and expert professional judgment and are incorporated in the model through Monte Carlo simulations to estimate mean travel times and uncertainties. The simulations are based on two steady-state flow scenarios, the pre-pumping (the initial stress period of the DVRFS model) and the 1998 pumping (assuming steady-state conditions resulting from pumping in the last stress period of the DVRFS model) scenarios, for the purpose of long-term prediction and monitoring. The pumping scenario accounts for groundwater withdrawal activities in the Amargosa Desert and other areas downgradient of YM. Considering each detonation in a clustered region around Pahute Mesa (in
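
    Because effective porosity enters the advective travel time linearly (t = n_e L / q along each segment), porosity uncertainty maps directly onto travel-time uncertainty. The following sketch shows a Monte Carlo of this kind over three hypothetical hydrogeologic units; all path lengths, fluxes, and porosity ranges are illustrative, not DVRFS values.

      # Sketch: Monte Carlo over effective porosity for advective travel time,
      # t = n_e * L / q summed over hydrogeologic units along a flow path.
      # Path lengths, Darcy fluxes, and porosity ranges are illustrative.
      import numpy as np

      rng = np.random.default_rng(7)
      n = 10_000
      L = np.array([12_000.0, 30_000.0, 8_000.0])   # segment lengths, m
      q = np.array([0.05, 0.02, 0.10])              # Darcy fluxes, m/yr
      lo = np.array([0.001, 0.01, 0.05])            # porosity lower bounds
      hi = np.array([0.05, 0.20, 0.30])             # porosity upper bounds
      n_e = np.exp(rng.uniform(np.log(lo), np.log(hi), size=(n, 3)))  # log-uniform
      t = (n_e * L / q).sum(axis=1) / 1000.0        # total travel time, kyr
      print("5th/50th/95th percentile travel time (kyr):", np.percentile(t, [5, 50, 95]))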

  15. Uncertainty quantification and sensitivity analysis of volcanic columns models: Results from the integral model PLUME-MoM

    NASA Astrophysics Data System (ADS)

    de'Michieli Vitturi, M.; Engwell, S. L.; Neri, A.; Barsotti, S.

    2016-10-01

    The behavior of plumes associated with explosive volcanic eruptions is complex and dependent on eruptive source parameters (e.g. exit velocity, gas fraction, temperature and grain-size distribution). It is also well known that the atmospheric environment interacts with volcanic plumes produced by explosive eruptions in a number of ways. The wind field can bend the plume but also affect atmospheric air entrainment into the column, enhancing its buoyancy and in some cases, preventing column collapse. In recent years, several numerical simulation tools and observational systems have investigated the action of eruption parameters and wind field on volcanic column height and column trajectory, revealing an important influence of these variables on plume behavior. In this study, we assess these dependencies using the integral model PLUME-MoM, whereby the continuous polydispersity of pyroclastic particles is described using a quadrature-based moment method, an innovative approach in volcanology well-suited for the description of the multiphase nature of magmatic mixtures. Application of formalized uncertainty quantification and sensitivity analysis techniques enables statistical exploration of the model, providing information on the extent to which uncertainty in the input or model parameters propagates to model output uncertainty. In particular, in the framework of the IAVCEI Commission on tephra hazard modeling inter-comparison study, PLUME-MoM is used to investigate the parameters exerting a major control on plume height, applying it to a weak plume scenario based on 26 January 2011 Shinmoe-dake eruptive conditions and a strong plume scenario based on the climatic phase of the 15 June 1991 Pinatubo eruption.

  16. Sensitivity of Last Glacial Maximum climate to uncertainties in tropical and subtropical ocean temperatures

    USGS Publications Warehouse

    Hostetler, S.; Pisias, N.; Mix, A.

    2006-01-01

    The faunal and floral gradients that underlie the CLIMAP (1981) sea-surface temperature (SST) reconstructions for the Last Glacial Maximum (LGM) reflect ocean temperature gradients and frontal positions. The transfer functions used to reconstruct SSTs from biologic gradients are biased, however, because at the warmest sites they display inherently low sensitivity in translating fauna to SST and they underestimate SST within the euphotic zones where the pycnocline is strong. Here we assemble available data and apply a statistical approach to adjust for hypothetical biases in the faunal-based SST estimates of LGM temperature. The largest bias adjustments are distributed in the tropics (to address low sensitivity) and subtropics (to address underestimation in the euphotic zones). The resulting SSTs are generally in better agreement than CLIMAP with recent geochemical estimates of glacial-interglacial temperature changes. We conducted a series of model experiments using the GENESIS general atmospheric circulation model to assess the sensitivity of the climate system to our bias-adjusted SSTs. Globally, the new SST field results in a modeled LGM surface-air cooling relative to present of 6.4 °C (1.9 °C cooler than that of CLIMAP). Relative to the simulation with CLIMAP SSTs, modeled precipitation over the oceans is reduced by 0.4 mm d⁻¹ (an anomaly of -0.4 versus 0.0 mm d⁻¹ for CLIMAP) and increased over land (an anomaly of -0.2 versus -0.5 mm d⁻¹ for CLIMAP). Regionally strong responses are induced by changes in SST gradients. Data-model comparisons indicate improvement in agreement relative to CLIMAP, but differences among terrestrial data inferences and simulated moisture and temperature remain. Our SSTs result in positive mass balance over the northern hemisphere ice sheets (primarily through reduced summer ablation), supporting the hypothesis that tropical and subtropical ocean temperatures may have played a role in triggering glacial changes at higher latitudes.

  17. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    SciTech Connect

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane

    2014-05-01

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  18. A Leslie matrix model for Sicyopterus lagocephalus in La Réunion: sensitivity, uncertainty and research prioritization.

    PubMed

    Artzrouni, Marc; Teichert, Nils; Mara, Thierry

    2014-10-01

    We propose a Leslie matrix model for the population dynamics of Sicyopterus lagocephalus in La Réunion. In order to capture both the amphidromous and the seasonal natures of the species' life history the model has four stages (sea+three river sites) and is cyclical with a 12 month period. Baseline parameters (age-specific fecundity, spatial dispersion patterns and survival rates) were chosen in such a way that the dominant eigenvalue of the year-on-year projection matrix is 1. Large uncertainties on the parameter values preclude the use of the model for management purpose. A sensitivity/uncertainty analysis sheds light on the parameters that cause much of the output to vary and that are poorly known: the life expectancy in rivers and the mortality both at river mouths and during the drift of larvae to sea. The aim is to help policymakers and researchers prioritize data acquisition efforts. The ultimate goal is a sustainable management of Sicyopterus lagocephalus in La Réunion.
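
    The cyclic structure described above can be made concrete in a few lines: the year-on-year matrix is the ordered product of twelve monthly stage matrices, and its dominant eigenvalue is the annual growth rate. The monthly matrices in this sketch are randomly generated placeholders, not the fitted demographic rates of the paper.

      # Sketch: a 12-month cyclic stage-structured projection. The year-on-year
      # matrix is the ordered product of monthly matrices; its dominant
      # eigenvalue is the annual growth rate (1.0 at the paper's baseline).
      # The monthly matrices here are random placeholders, not fitted rates.
      import numpy as np

      rng = np.random.default_rng(0)
      n_stages = 4                     # sea + three river sites
      monthly = []
      for m in range(12):
          A = rng.uniform(0.0, 0.3, size=(n_stages, n_stages))  # dispersal/survival
          np.fill_diagonal(A, 0.8)                              # site fidelity
          if m == 5:                   # assumed spawning month
              A[0, 1:] += 2.0          # fecundity flowing into the sea stage
          monthly.append(A)

      year = np.linalg.multi_dot(monthly[::-1])     # December @ ... @ January
      lam = max(abs(np.linalg.eigvals(year)))
      print(f"annual growth rate (dominant eigenvalue): {lam:.3f}")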

  19. Development, sensitivity analysis, and uncertainty quantification of high-fidelity arctic sea ice models.

    SciTech Connect

    Peterson, Kara J.; Bochev, Pavel Blagoveston; Paskaleva, Biliana S.

    2010-09-01

    Arctic sea ice is an important component of the global climate system and due to feedback effects the Arctic ice cover is changing rapidly. Predictive mathematical models are of paramount importance for accurate estimates of the future ice trajectory. However, the sea ice components of Global Climate Models (GCMs) vary significantly in their prediction of the future state of Arctic sea ice and have generally underestimated the rate of decline in minimum sea ice extent seen over the past thirty years. One of the contributing factors to this variability is the sensitivity of the sea ice to model physical parameters. A new sea ice model that has the potential to improve sea ice predictions incorporates an anisotropic elastic-decohesive rheology and dynamics solved using the material-point method (MPM), which combines Lagrangian particles for advection with a background grid for gradient computations. We evaluate the variability of the Los Alamos National Laboratory CICE code and the MPM sea ice code for a single year simulation of the Arctic basin using consistent ocean and atmospheric forcing. Sensitivities of ice volume, ice area, ice extent, root mean square (RMS) ice speed, central Arctic ice thickness, and central Arctic ice speed with respect to ten different dynamic and thermodynamic parameters are evaluated both individually and in combination using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA). We find similar responses for the two codes and some interesting seasonal variability in the strength of the parameters on the solution.

  20. Numerical study of premixed HCCI engine combustion and its sensitivity to computational mesh and model uncertainties

    NASA Astrophysics Data System (ADS)

    Kong, Song-Charng; Reitz, Rolf D.

    2003-06-01

    This study used a numerical model to investigate the combustion process in a premixed iso-octane homogeneous charge compression ignition (HCCI) engine. The engine was a supercharged Cummins C engine operated under HCCI conditions. The CHEMKIN code was implemented into an updated KIVA-3V code so that the combustion could be modelled using detailed chemistry in the context of engine CFD simulations. The model was able to accurately simulate the ignition timing and combustion phasing for various engine conditions. The unburned hydrocarbon emissions were also well predicted while the carbon monoxide emissions were underpredicted. Model results showed that the majority of unburned hydrocarbon is located in the piston-ring crevice region and the carbon monoxide resides in the vicinity of the cylinder walls. A sensitivity study of the computational grid resolution indicated that the combustion predictions were relatively insensitive to the grid density. However, the piston-ring crevice region needed to be simulated with high resolution to obtain accurate emissions predictions. The model results also indicated that HCCI combustion and emissions are very sensitive to the initial mixture temperature. The computations also show that the carbon monoxide emissions prediction can be significantly improved by modifying a key oxidation reaction rate constant.

  1. Regional-scale yield simulations using crop and climate models: assessing uncertainties, sensitivity to temperature and adaptation options

    NASA Astrophysics Data System (ADS)

    Challinor, A. J.

    2010-12-01

    Recent progress in assessing the impacts of climate variability and change on crops using multiple regional-scale simulations of crop and climate (i.e. ensembles) is presented. Simulations for India and China used perturbed responses to elevated carbon dioxide constrained using observations from FACE studies and controlled environments. Simulations with crop parameter sets representing existing and potential future adapted varieties were also carried out. The results for India are compared to sensitivity tests on two other crop models. For China, a parallel approach used socio-economic data to account for autonomous farmer adaptation. Results for the USA analysed cardinal temperatures under a range of local warming scenarios for 2711 varieties of spring wheat. The results are as follows. 1. Quantifying and reducing uncertainty: the relative contribution of uncertainty in crop and climate simulation to the total uncertainty in projected yield changes is examined. The observational constraints from FACE and controlled-environment studies are shown to be the likely critical factor in maintaining relatively low crop parameter uncertainty. Without these constraints, crop simulation uncertainty in a doubled-CO2 environment would likely be greater than uncertainty in simulating climate. However, consensus across crop models in India varied across different biophysical processes. 2. The response of yield to changes in local mean temperature was examined and compared to that found in the literature; no consistent response to temperature change was found across studies. 3. Implications for adaptation (China): the simulations of spring wheat show the relative importance of tolerance to water and heat stress in avoiding future crop failures. The greatest potential for reducing the number of harvests less than one standard deviation below the baseline mean yield value comes from alleviating water stress; the greatest potential for reducing harvests less than two

  2. A two dimensional modeling study of the sensitivity of ozone to radiative flux uncertainties

    SciTech Connect

    Grant, K.E.; Wuebbles, D.J.

    1988-08-01

    Radiative processes strongly affect equilibrium trace gas concentrations both directly, through photolysis reactions, and indirectly through temperature and transport processes. We have used the LLNL 2-D chemical-radiative-transport model to investigate the net sensitivity of equilibrium ozone concentrations to several changes in radiative forcing. Doubling CO₂ from 300 ppmv to 600 ppmv resulted in a temperature decrease of 5 K to 8 K in the middle stratosphere along with an 8% to 16% increase in ozone in the same region. Replacing our usual shortwave scattering algorithms with a simplified Rayleigh algorithm led to a 1% to 2% increase in ozone in the lower stratosphere. Finally, modifying our normal CO₂ cooling rates by corrections derived from line-by-line calculations resulted in several regions of heating and cooling. We observed temperature changes on the order of 1 K to 1.5 K with corresponding changes of 0.5% to 1.5% in O₃. Our results for doubled CO₂ compare favorably with those of other authors. Results for our two perturbation scenarios stress the need for accurately modeling radiative processes while confirming the general validity of current models. 15 refs., 5 figs.

  3. Sensitivity of quantitative groundwater recharge estimates to volumetric and distribution uncertainty in rainfall forcing products

    NASA Astrophysics Data System (ADS)

    Werner, Micha; Westerhoff, Rogier; Moore, Catherine

    2017-04-01

    Quantitative estimates of recharge due to precipitation excess are an important input to determining sustainable abstraction of groundwater resources, as well as providing one of the boundary conditions required for numerical groundwater modelling. Simple water balance models are widely applied for calculating recharge. In these models, precipitation is partitioned between different processes and stores, including surface runoff and infiltration, storage in the unsaturated zone, evaporation, capillary processes, and recharge to groundwater. Clearly the estimation of recharge amounts will depend on the estimation of precipitation volumes, which may vary depending on the source of precipitation data used. However, the partitioning between the different processes is in many cases governed by (variable) intensity thresholds. This means that the estimates of recharge will be sensitive not only to input parameters such as soil type, texture, land use, and potential evaporation, but mainly to the precipitation volume and intensity distribution. In this paper we explore the sensitivity of recharge estimates to differences in precipitation volume and intensity distribution in the rainfall forcing over the Canterbury region in New Zealand. We compare recharge rates and volumes using a simple water balance model that is forced using rainfall and evaporation data from the NIWA Virtual Climate Station Network (VCSN) data (which is considered the reference dataset); the ERA-Interim/WATCH dataset at 0.25 degree and 0.5 degree resolution; the TRMM-3B42 dataset; the CHIRPS dataset; and the recently released MSWEP dataset. Recharge rates are calculated at a daily time step over the 14-year period from 2000 to 2013 for the full Canterbury region, as well as at eight selected points distributed over the region. Lysimeter data with observed estimates of recharge are available at four of these points, as well as recharge estimates from the NGRM model, an independent model
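
    The role of intensity thresholds described above is easy to demonstrate with a toy daily bucket model: two forcing series with nearly identical annual volume but different intensity distributions produce very different recharge. The capacity, infiltration cap, and PET values below are illustrative assumptions, not parameters from the study.

      # Sketch: daily bucket water balance. Two forcing series with the same
      # annual volume but different intensity distributions yield different
      # recharge, because infiltration is capped by an intensity threshold.
      import numpy as np

      def recharge(precip, pet, capacity=100.0, infil_cap=40.0):
          store, total = 0.5 * capacity, 0.0
          for p, e in zip(precip, pet):
              store = max(store + min(p, infil_cap) - e, 0.0)  # excess rain runs off
              if store > capacity:                             # bucket overflow drains
                  total += store - capacity                    # ... to groundwater
                  store = capacity
          return total

      rng = np.random.default_rng(3)
      pet = np.full(365, 1.5)                                  # mm/day
      drizzle = np.full(365, 2.0)                              # ~730 mm/yr, low intensity
      storms = rng.choice([0.0, 73.0], size=365, p=[0.973, 0.027])  # ~730 mm/yr in bursts
      print(recharge(drizzle, pet), recharge(storms, pet))     # drizzle recharges far more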

  4. Assessing spatial uncertainties of land allocation using a scenario approach and sensitivity analysis: a study for land use in Europe.

    PubMed

    Verburg, Peter H; Tabeau, Andrzej; Hatna, Erez

    2013-09-01

    Land change model outcomes are vulnerable to multiple types of uncertainty, including uncertainty in input data, structural uncertainties in the model and uncertainties in model parameters. In coupled model systems the uncertainties propagate between the models. This paper assesses uncertainty of changes in future spatial allocation of agricultural land in Europe as they arise from a general equilibrium model coupled to a spatial land use allocation model. Two contrasting scenarios are used to capture some of the uncertainty in the development of typical combinations of economic, demographic and policy variables. The scenario storylines include different measurable assumptions concerning scenario specific drivers (variables) and parameters. Many of these assumptions are estimations and thus include a certain level of uncertainty regarding their true values. This leads to uncertainty within the scenario outcomes. In this study we have explored how uncertainty in national-level assumptions within the contrasting scenario assumptions translates into uncertainty in the location of changes in agricultural land use in Europe. The results indicate that uncertainty in coarse-scale assumptions does not translate into a homogeneous spread of the uncertainty within Europe. Some regions are more certain than others in facing specific land change trajectories irrespective of the uncertainty in the macro-level assumptions. The spatial spread of certain and more uncertain locations of land change is dependent on location conditions as well as on the overall scenario conditions. Translating macro-level uncertainties to uncertainties in spatial patterns of land change makes it possible to better understand and visualize the land change consequences of uncertainties in model input variables.

  5. A one- and two-dimensional cross-section sensitivity and uncertainty path of the AARE (Advanced Analysis for Reactor Engineering) modular code system

    SciTech Connect

    Davidson, J.W.; Dudziak, D.J.; Higgs, C.E.; Stepanek, J.

    1988-01-01

    AARE, a code package to perform Advanced Analysis for Reactor Engineering, is a linked modular system for fission reactor core and shielding, as well as fusion blanket, analysis. Its cross-section sensitivity and uncertainty path presently includes the cross-section processing and reformatting code TRAMIX, the cross-section homogenization and library reformatting code MIXIT, the 1-dimensional transport code ONEDANT, the 2-dimensional transport code TRISM, and the 1- and 2-dimensional cross-section sensitivity and uncertainty code SENSIBL. In the present work, a short description of the whole AARE system is given, followed by a detailed description of the cross-section sensitivity and uncertainty path. 23 refs., 2 figs.

  6. Impact of implicit effects on uncertainties and sensitivities of the Doppler coefficient of a LWR pin cell

    NASA Astrophysics Data System (ADS)

    Hursin, Mathieu; Leray, Olivier; Perret, Gregory; Pautz, Andreas; Bostelmann, Friederike; Aures, Alexander; Zwermann, Winfried

    2017-09-01

    In the present work, the PSI and GRS sensitivity analysis (SA) and uncertainty quantification (UQ) methods, SHARK-X and XSUSA respectively, are compared for reactivity coefficient calculation; for reference, the results of the TSUNAMI and SAMPLER modules of the SCALE code package are also provided. The main objective of this paper is to assess the impact of the implicit effect, i.e., the effect of cross-section perturbations on the self-shielding calculation, on the Doppler coefficient SA and UQ. Analyses are done for a Light Water Reactor (LWR) pin cell based on Phase I of the UAM LWR benchmark. Neglecting implicit effects in XSUSA and TSUNAMI leads to deviations of a few percent in the sensitivity profiles relative to SAMPLER and TSUNAMI (including implicit effects), except for 238U elastic scattering. The implicit effect is much larger for the SHARK-X calculations because of its coarser energy group structure between 10 eV and 10 keV compared to the applied SCALE libraries. It is concluded that the influence of the implicit effect strongly depends on the energy mesh of the nuclear data library of the neutron transport solver involved in the UQ calculations and may be magnified by the response considered.

  7. CASL L1 Milestone report : CASL.P4.01, sensitivity and uncertainty analysis for CIPS with VIPRE-W and BOA.

    SciTech Connect

    Sung, Yixing; Adams, Brian M.; Secker, Jeffrey R.

    2011-12-01

    The CASL Level 1 Milestone CASL.P4.01, successfully completed in December 2011, aimed to 'conduct, using methodologies integrated into VERA, a detailed sensitivity analysis and uncertainty quantification of a crud-relevant problem with baseline VERA capabilities (ANC/VIPRE-W/BOA).' The VUQ focus area led this effort, in partnership with AMA, and with support from VRI. DAKOTA was coupled to existing VIPRE-W thermal-hydraulics and BOA crud/boron deposit simulations representing a pressurized water reactor (PWR) that previously experienced crud-induced power shift (CIPS). This work supports understanding of CIPS by exploring the sensitivity and uncertainty in BOA outputs with respect to uncertain operating and model parameters. This report summarizes work coupling the software tools, characterizing uncertainties, and analyzing the results of iterative sensitivity and uncertainty studies. These studies focused on sensitivity and uncertainty of CIPS indicators calculated by the current version of the BOA code used in the industry. Challenges with this kind of analysis are identified to inform follow-on research goals and VERA development targeting crud-related challenge problems.

  8. Sensitivity and uncertainty investigations for Hiroshima dose estimates and the applicability of the Little Boy mockup measurements

    SciTech Connect

    Bartine, D.E.; Cacuci, D.G.

    1983-09-13

    This paper describes sources of uncertainty in the data used for calculating dose estimates for the Hiroshima explosion and details a methodology for systematically obtaining best estimates and reduced uncertainties for the radiation doses received. (ACR)

  9. Sensitivity of Ice and Climate Evolution Patterns to Modelling Uncertainties During the Last Glacial-Interglacial Transitions

    NASA Astrophysics Data System (ADS)

    Bahadory, T.; Tarasov, L.

    2015-12-01

    How did ice grow (volume, total area, extent) over North America (NA) and Eurasia (EA) during inception? Did the ice sheets grow and shrink simultaneously, or did each have its own inception time and maximum extent and volume? How did the atmosphere respond to the changes in surface albedo, altitude, dust concentration, and other feedbacks in the system? And more interestingly, given the uncertainties in the climate system, is there more than one way glacial inception and deglaciation could happen? By exploring the sensitivity of the last glacial inception and deglaciation to uncertainties in modelling, such as the representation of the radiative effect of clouds, the initial state of the ocean, the downscaling and upscaling of various climatic fields between the atmospheric and ice model, and the albedo calculation, we try to answer these questions. Therefore, we set up an ensemble of simulations for both inception and deglaciation to investigate the extent to which such modelling uncertainties can affect ice volume, area, and regional thickness evolution patterns, in addition to various climatic fields, such as the Rossby number, jet-stream location and strength, and sea-ice expansion, during these two periods of interest. We analyze the ensemble results to 1. investigate how important the parameters we included in our ensemble can be in simulating glacial-interglacial transitions, and 2. explore different possible patterns of the last glacial inception and deglaciation. The ensemble is set up using a fully-coupled Earth Model of Intermediate Complexity, LOVECLIM, previously used in several paleoclimate modelling studies, and a 3D thermo-mechanically coupled ice sheet model. The coupled model is capable of simulating 1000 years in about 24 hours using a single core, making it possible to accomplish an ensemble of 1000s of runs for both transition periods within a few weeks.

  10. Robust Adaptation? Assessing the sensitivity of safety margins in flood defences to uncertainty in future simulations - a case study from Ireland.

    NASA Astrophysics Data System (ADS)

    Murphy, Conor; Bastola, Satish; Sweeney, John

    2013-04-01

    Climate change impact and adaptation assessments have traditionally adopted a 'top-down' scenario-based approach, where information from different Global Climate Models (GCMs) and emission scenarios is employed to develop impacts-led adaptation strategies. Given the tradeoffs between computational cost and the need to include a wide range of GCMs for fuller characterization of uncertainties, scenarios are better used for sensitivity testing and adaptation options appraisal. One common approach to adaptation that has been described as robust is the use of safety margins. In this work, the sensitivity of the safety margins adopted by the agency responsible for flood risk management in Ireland to uncertainty in future projections is examined. The sensitivity of fluvial flood risk to climate change is assessed for four Irish catchments using a large number of GCMs (17) forced with three emissions scenarios (SRES A1B, A2, B1) as input to four hydrological models. Both uncertainty within and between hydrological models is assessed using the GLUE framework. Regionalisation is achieved using a change factor method to infer changes in the parameters of a weather generator from monthly GCM output, while flood frequency analysis is conducted using the method of probability weighted moments to fit the Generalised Extreme Value distribution to ~20,000 annual maxima series. The sensitivity of design margins to the uncertainty space considered is visualised using risk response surfaces. The hydrological sensitivity is measured as the percentage change in flood peak for specified recurrence intervals. Results indicate that there is considerable residual risk associated with allowances of +20% when uncertainties are accounted for, and that the risk of exceedance of design allowances is greatest for more extreme, low frequency events, with considerable implications for critical infrastructure, e.g., culverts, bridges, flood defences whose designs are normally
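
    The probability-weighted-moments step named above can be sketched as follows: a standard Hosking-style L-moment fit of the GEV to an annual-maxima series, then the design flood for a chosen return period. The series here is synthetic, not the Irish catchment data:

        import math
        import random

        def gev_pwm_fit(data):
            """Fit GEV (xi, alpha, k) by probability-weighted moments (Hosking)."""
            x = sorted(data)
            n = len(x)
            b0 = sum(x) / n
            b1 = sum((j / (n - 1)) * x[j] for j in range(n)) / n
            b2 = sum((j * (j - 1) / ((n - 1) * (n - 2))) * x[j] for j in range(n)) / n
            c = (2 * b1 - b0) / (3 * b2 - b0) - math.log(2) / math.log(3)
            k = 7.8590 * c + 2.9554 * c ** 2                  # shape (Hosking approx.)
            g = math.gamma(1 + k)
            alpha = (2 * b1 - b0) * k / (g * (1 - 2 ** -k))   # scale
            xi = b0 + alpha * (g - 1) / k                     # location
            return xi, alpha, k

        def gev_quantile(xi, alpha, k, return_period):
            """Flood peak with the given return period (e.g. the 100-year flood)."""
            f = 1 - 1 / return_period
            return xi + alpha / k * (1 - (-math.log(f)) ** k)

        # Illustrative annual-maxima series (synthetic, not the study's data).
        random.seed(1)
        amax = [80 + 25 * random.gammavariate(2.0, 1.0) for _ in range(50)]
        xi, alpha, k = gev_pwm_fit(amax)
        q100 = gev_quantile(xi, alpha, k, 100)
        # A +20% safety margin on the design flood, as discussed in the abstract:
        print(q100, 1.2 * q100)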

  11. A Single Bout of Aerobic Exercise Reduces Anxiety Sensitivity But Not Intolerance of Uncertainty or Distress Tolerance: A Randomized Controlled Trial.

    PubMed

    LeBouthillier, Daniel M; Asmundson, Gordon J G

    2015-01-01

    Several mechanisms have been posited for the anxiolytic effects of exercise, including reductions in anxiety sensitivity through interoceptive exposure. Studies on aerobic exercise lend support to this hypothesis; however, research investigating aerobic exercise in comparison to placebo, the dose-response relationship between aerobic exercise and anxiety sensitivity, the efficacy of aerobic exercise across the spectrum of anxiety sensitivity, and the effect of aerobic exercise on other related constructs (e.g. intolerance of uncertainty, distress tolerance) is lacking. We explored reductions in anxiety sensitivity and related constructs following a single session of exercise in a community sample using a randomized controlled trial design. Forty-one participants completed 30 min of aerobic exercise or a placebo stretching control. Anxiety sensitivity, intolerance of uncertainty and distress tolerance were measured at baseline, post-intervention and 3-day and 7-day follow-ups. Individuals in the aerobic exercise group, but not the control group, experienced significant reductions with moderate effect sizes in all dimensions of anxiety sensitivity. Intolerance of uncertainty and distress tolerance remained unchanged in both groups. Our trial supports the efficacy of aerobic exercise in uniquely reducing anxiety sensitivity in individuals with varying levels of the trait and highlights the importance of empirically validating the use of aerobic exercise to address specific mental health vulnerabilities. Aerobic exercise may have potential as a temporary substitute for psychotherapy aimed at reducing anxiety-related psychopathology.

  13. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes.

    PubMed

    Naujokaitis-Lewis, Ilona; Curtis, Janelle M R

    2016-01-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along
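
    GRIP 2.0 itself is not reproduced here; as a sketch of the kind of variance-based global sensitivity analysis it automates, the same idea can be written with the open-source SALib package (assumed installed) on a toy stand-in for the coupled SDM-population model. Parameter names, bounds and the response function are illustrative:

        # Variance-based (Sobol) global sensitivity analysis with SALib,
        # applied to a toy stand-in for a coupled SDM-population model.
        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["habitat_amount", "adult_survival", "disease_mortality"],
            "bounds": [[0.1, 1.0], [0.7, 0.99], [0.0, 0.3]],
        }

        def extinction_risk(x):
            habitat, survival, disease = x
            # Toy response with a habitat-survival interaction, echoing the
            # interaction reported in the abstract.
            growth = survival * (1 - disease) * habitat
            return float(np.exp(-8 * growth))

        X = saltelli.sample(problem, 1024)           # N * (2D + 2) model runs
        Y = np.array([extinction_risk(row) for row in X])
        Si = sobol.analyze(problem, Y)
        for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
            print(f"{name}: first-order {s1:.2f}, total {st:.2f}")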

  14. Use of Sensitivity and Uncertainty Analysis in the Design of Reactor Physics and Criticality Benchmark Experiments for Advanced Nuclear Fuel

    SciTech Connect

    Rearden, B.T.; Anderson, W.J.; Harms, G.A.

    2005-08-15

    Framatome ANP, Sandia National Laboratories (SNL), Oak Ridge National Laboratory (ORNL), and the University of Florida are cooperating on the U.S. Department of Energy Nuclear Energy Research Initiative (NERI) project 2001-0124 to design, assemble, execute, analyze, and document a series of critical experiments to validate reactor physics and criticality safety codes for the analysis of commercial power reactor fuels consisting of UO2 with 235U enrichments >= 5 wt%. The experiments will be conducted at the SNL Pulsed Reactor Facility. Framatome ANP and SNL produced two series of conceptual experiment designs based on typical parameters, such as fuel-to-moderator ratios, that meet the programmatic requirements of this project within the given restraints on available materials and facilities. ORNL used the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) to assess, from a detailed physics-based perspective, the similarity of the experiment designs to the commercial systems they are intended to validate. Based on the results of the TSUNAMI analysis, one series of experiments was found to be preferable to the other and will provide significant new data for the validation of reactor physics and criticality safety codes.

  15. Distress tolerance in OCD and anxiety disorders, and its relationship with anxiety sensitivity and intolerance of uncertainty.

    PubMed

    Laposa, Judith M; Collimore, Kelsey C; Hawley, Lance L; Rector, Neil A

    2015-06-01

    There is a growing interest in the role of distress tolerance (i.e., the capacity to withstand negative emotions) in the onset and maintenance of anxiety. However, both empirical and theoretical knowledge regarding the role of distress tolerance in the anxiety disorders is relatively under examined. Accumulating evidence supports the relationship between difficulties tolerating distress and anxiety in nonclinical populations; however, very few studies have investigated distress tolerance in participants with diagnosed anxiety disorders. Individuals with social anxiety disorder (SAD), generalized anxiety disorder (GAD), panic disorder with and without agoraphobia (PD/A) and obsessive-compulsive disorder (OCD) completed measures of distress tolerance (DT), conceptually related measures (i.e., anxiety sensitivity (AS), intolerance of uncertainty (IU)), and anxiety symptom severity. Results showed that DT was negatively associated with AS and IU. DT was correlated with GAD, SAD and OCD symptoms, but not PD/A symptoms, in individuals with those respective anxiety disorders. DT was no longer a significant predictor of OCD or anxiety disorder symptom severity when AS and IU were also taken into account. There were no between group differences on DT across OCD and the anxiety disorder groups. Implications for the role of distress tolerance in anxiety pathology are discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Sensitivity of WallDYN material migration modeling to uncertainties in mixed-material surface binding energies

    DOE PAGES

    Nichols, J. H.; Jaworski, M. A.; Schmid, K.

    2017-03-09

    The WallDYN package has recently been applied to a number of tokamaks to self-consistently model the evolution of mixed-material plasma facing surfaces. A key component of the WallDYN model is the concentration-dependent surface sputtering rate, calculated using SDTRIM.SP. This modeled sputtering rate is strongly influenced by the surface binding energies (SBEs) of the constituent materials, which are well known for pure elements but often are poorly constrained for mixed materials. This work examines the sensitivity of WallDYN surface evolution calculations to different models for mixed-material SBEs, focusing on the carbon/lithium/oxygen/deuterium system present in NSTX. A realistic plasma background is reconstructed from a high density, H-mode NSTX discharge, featuring an attached outer strike point with local density and temperature of 4 × 10^20 m^-3 and 4 eV, respectively. It is found that various mixed-material SBE models lead to significant qualitative and quantitative changes in the surface evolution profile at the outer divertor, with the highest leverage parameter being the C-Li binding model. Uncertainties of order 50%, appearing on time scales relevant to tokamak experiments, highlight the importance of choosing an appropriate mixed-material sputtering representation when modeling the surface evolution of plasma facing components. Lastly, these results are generalized to other fusion-relevant materials with different ranges of SBEs.

  17. Uncertainty and sensitivity analyses for gas and brine migration at the Waste Isolation Pilot Plant, May 1992

    SciTech Connect

    Helton, J.C.; Bean, J.E.; Butcher, B.M.; Garner, J.W.; Vaughn, P.; Schreiber, J.D.; Swift, P.N.

    1993-08-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis, stepwise regression analysis and examination of scatterplots are used in conjunction with the BRAGFLO model to examine two-phase flow (i.e., gas and brine) at the Waste Isolation Pilot Plant (WIPP), which is being developed by the US Department of Energy as a disposal facility for transuranic waste. The analyses consider either a single waste panel or the entire repository in conjunction with the following cases: (1) fully consolidated shaft, (2) system of shaft seals with panel seals, and (3) single shaft seal without panel seals. The purpose of this analysis is to develop insights on factors that are potentially important in showing compliance with applicable regulations of the US Environmental Protection Agency (i.e., 40 CFR 191, Subpart B; 40 CFR 268). The primary topics investigated are (1) gas production due to corrosion of steel, (2) gas production due to microbial degradation of cellulosics, (3) gas migration into anhydrite marker beds in the Salado Formation, (4) gas migration through a system of shaft seals to overlying strata, and (5) gas migration through a single shaft seal to overlying strata. Important variables identified in the analyses include initial brine saturation of the waste, stoichiometric terms for corrosion of steel and microbial degradation of cellulosics, gas barrier pressure in the anhydrite marker beds, shaft seal permeability, and panel seal permeability.
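
    A compact sketch of the sampling-and-regression workflow named in the abstract (Latin hypercube sampling followed by rank-based regression), using SciPy's qmc module and a toy stand-in for a BRAGFLO output; parameter names, bounds and the response are illustrative:

        import numpy as np
        from scipy.stats import qmc, rankdata

        names = ["brine_saturation", "corrosion_stoich", "seal_permeability"]
        lower = np.array([0.0, 0.5, 1e-18])
        upper = np.array([0.6, 2.0, 1e-14])

        # Latin hypercube sample of the uncertain inputs.
        sampler = qmc.LatinHypercube(d=3, seed=1)
        X = qmc.scale(sampler.random(n=300), lower, upper)

        def gas_migration(x):
            sat, stoich, perm = x
            # Toy monotone response standing in for a two-phase-flow output.
            return stoich * (1 - sat) + 0.15 * np.log10(perm)

        Y = np.array([gas_migration(row) for row in X])

        # Rank-transform regression: standardized coefficients indicate which
        # inputs drive the output (akin to the report's stepwise rank regression).
        Xr = np.column_stack([rankdata(X[:, j]) for j in range(3)])
        Yr = rankdata(Y)
        Xs = (Xr - Xr.mean(0)) / Xr.std(0)
        Ys = (Yr - Yr.mean()) / Yr.std()
        coef, *_ = np.linalg.lstsq(Xs, Ys, rcond=None)
        for name, c in zip(names, coef):
            print(f"{name}: standardized rank-regression coefficient {c:+.2f}")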

  18. A PROBABILISTIC ARSENIC EXPOSURE ASSESSMENT FOR CHILDREN WHO CONTACT CHROMATED COPPER ARSENATE (CCA)-TREATED PLAYSETS AND DECKS: PART 2 SENSITIVITY AND UNCERTAINTY ANALYSIS

    EPA Science Inventory

    A probabilistic model (SHEDS-Wood) was developed to examine children's exposure and dose to chromated copper arsenate (CCA)-treated wood, as described in Part 1 of this two part paper. This Part 2 paper discusses sensitivity and uncertainty analyses conducted to assess the key m...

  20. SU-E-T-146: Effects of Uncertainties of Radiation Sensitivity of Biological Modelling for Treatment Planning

    SciTech Connect

    Oita, M; Uto, Y; Hori, H; Tominaga, M; Sasaki, M

    2014-06-01

    Purpose: The aim of this study was to evaluate the distribution of uncertainty in cell survival after irradiation, and to assess the usefulness of a stochastic biological model assuming Gaussian-distributed parameters. Methods: For single cell experiments, exponentially growing cells were harvested from standard cell culture dishes by trypsinization and suspended in test tubes containing 1 ml of MEM (2x10{sup 6} cells/ml). The hypoxic cultures were treated with 95% N{sub 2}-5% CO{sub 2} gas for 30 minutes. In vitro radiosensitization was also measured in EMT6/KU single cells with a radiosensitizer added under hypoxic conditions. X-ray irradiation was carried out using an X-ray unit (Hitachi model MBR-1505R3; 0.5 mm Al/1.0 mm Cu filter, 150 kV, 4 Gy/min). In the in vitro assay, cells on the dish were irradiated with 1 Gy to 24 Gy. After irradiation, colony formation assays were performed. Variations of biological parameters were investigated for standard cell culture (n=16), hypoxic cell culture (n=45) and hypoxic cell culture with radiosensitizers (n=21). The data were obtained on separate schedules to account for the variation of radiation sensitivity over the cell cycle. Results: For standard cell culture, hypoxic cell culture and hypoxic cell culture with radiosensitizers, the median and standard deviation of the alpha/beta ratio were 37.1±73.4 Gy, 9.8±23.7 Gy and 20.7±21.9 Gy, respectively. The average and standard deviation of D{sub 50} were 2.5±2.5 Gy, 6.1±2.2 Gy and 3.6±1.3 Gy, respectively. Conclusion: In this study, we applied these parameter uncertainties to the biological model. The variation of alpha values, beta values and D{sub 50}, as well as the cell culture conditions, may strongly affect the probability of cell death. Further research is in progress toward precise prediction of cell death as well as tumor control probability for treatment planning.

  1. Using global sensitivity analysis to evaluate the uncertainties of future shoreline changes under the Bruun rule assumption

    NASA Astrophysics Data System (ADS)

    Le Cozannet, Gonéri; Oliveros, Carlos; Castelle, Bruno; Garcin, Manuel; Idier, Déborah; Pedreros, Rodrigo; Rohmer, Jeremy

    2016-04-01

    Future sandy shoreline changes are often assessed by summing the contributions of longshore and cross-shore effects. In such approaches, a contribution of sea-level rise can be incorporated by adding a supplementary term based on the Bruun rule. Here, our objective is to identify where and when the use of the Bruun rule can be (in)validated, in the case of wave-exposed beaches with gentle slopes. We first provide shoreline change scenarios that account for all uncertain hydrosedimentary processes affecting the idealized low- and high-energy coasts described by Stive (2004) [Stive, M. J. F. 2004, How important is global warming for coastal erosion? An editorial comment, Climatic Change, vol. 64, no. 1-2, doi:10.1023/B:CLIM.0000024785.91858. ISSN 0165-0009]. Then, we generate shoreline change scenarios based on probabilistic IPCC sea-level rise projections. For scenarios RCP 6.0 and 8.5 and in the absence of coastal defenses, the model predicts an observable shift toward generalized beach erosion by the middle of the 21st century. On the contrary, the model predictions are unlikely to differ from the current situation in the case of scenario RCP 2.6. To gain insight into the relative importance of each source of uncertainty, we quantify each contribution to the variance of the model outcome using a global sensitivity analysis. This analysis shows that by the end of the 21st century, a large part of the shoreline change uncertainty is due to the climate change scenario, if all anthropogenic greenhouse-gas emission scenarios are considered equiprobable. To conclude, the analysis shows that under the assumptions above, (in)validating the Bruun rule should be straightforward during the second half of the 21st century for the RCP 8.5 scenario. Conversely, for RCP 2.6, the noise in shoreline change evolution should continue dominating the signal due to the Bruun effect. This last conclusion can be interpreted as an important potential benefit of climate change mitigation.
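
    The Bruun rule itself is a one-line formula: retreat R = S * L / (B + h), with S the sea-level rise, L the active profile width, B the berm height and h the closure depth. A small Monte Carlo sketch of how sea-level-rise uncertainty propagates through it; all values below are illustrative, not the study's or the IPCC's:

        import random

        def bruun_retreat(slr_m, profile_width_m, berm_height_m, closure_depth_m):
            """Bruun rule: retreat = sea-level rise * active width / (berm + closure depth)."""
            return slr_m * profile_width_m / (berm_height_m + closure_depth_m)

        random.seed(0)
        # Illustrative RCP-style sea-level-rise distributions for 2100 (mean, sd in m).
        scenarios = {"low": (0.3, 0.10), "high": (0.7, 0.15)}
        for name, (mean, sd) in scenarios.items():
            draws = [bruun_retreat(max(random.gauss(mean, sd), 0.0),
                                   profile_width_m=500.0, berm_height_m=2.0,
                                   closure_depth_m=8.0)
                     for _ in range(10000)]
            draws.sort()
            # 5th / 50th / 95th percentile shoreline retreat (m).
            print(name, round(draws[500], 1), round(draws[5000], 1), round(draws[9500], 1))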

  2. Quasi-Monte Carlo based global uncertainty and sensitivity analysis in modeling free product migration and recovery from petroleum-contaminated aquifers.

    PubMed

    He, Li; Huang, Gordon; Lu, Hongwei; Wang, Shuo; Xu, Yi

    2012-06-15

    This paper presents a global uncertainty and sensitivity analysis (GUSA) framework based on global sensitivity analysis (GSA) and generalized likelihood uncertainty estimation (GLUE) methods. Quasi-Monte Carlo (QMC) sampling is employed by GUSA to obtain realizations of uncertain parameters, which are then input to the simulation model for analysis. Compared to GLUE, GUSA can not only evaluate global sensitivity and uncertainty of modeling parameter sets, but also quantify the uncertainty in modeling prediction sets. A further advantage of GUSA is the reduced computational effort, since globally insensitive parameters can be identified and removed from the uncertain-parameter set. GUSA is applied to a practical petroleum-contaminated site in Canada to investigate free product migration and recovery processes under aquifer remediation operations. Results from global sensitivity analysis show that (1) initial free product thickness has the most significant impact on total recovery volume but the least impact on residual free product thickness and recovery rate; (2) total recovery volume and recovery rate are sensitive to residual LNAPL phase saturations and soil porosity. Results from uncertainty predictions reveal that the residual thickness would remain high and almost unchanged after about half a year of the skimmer-well scheme; the rather high residual thickness (0.73-1.56 m 20 years later) indicates that natural attenuation would not be suitable for the remediation. The largest total recovery volume would be from water pumping, followed by vacuum pumping, and then the skimmer scheme. The recovery rates of the three schemes would rapidly decrease after 2 years (less than 0.05 m^3/day), thus short-term remediation is not suggested. Copyright © 2012 Elsevier B.V. All rights reserved.
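
    A sketch of the GLUE-with-quasi-Monte-Carlo idea: Sobol' low-discrepancy sampling, a likelihood score against observations, and retention of 'behavioural' parameter sets for prediction. The recovery model, observations and thresholds are stand-ins, not the site model:

        import numpy as np
        from scipy.stats import qmc

        # Quasi-Monte Carlo (Sobol') sample of two uncertain parameters.
        sampler = qmc.Sobol(d=2, seed=0)
        unit = sampler.random_base2(m=10)                  # 2^10 = 1024 samples
        X = qmc.scale(unit, [0.2, 0.05], [1.5, 0.50])      # [initial thickness m, decay 1/yr]

        def residual_thickness(x, t_years):
            h0, decay = x
            return h0 * np.exp(-decay * t_years)           # toy recovery model

        t_obs = np.array([0.5, 1.0, 2.0])
        obs = np.array([0.9, 0.8, 0.65])                   # synthetic "observations"

        # GLUE-style likelihood: inverse squared error, zeroed below a cutoff.
        sims = np.array([residual_thickness(x, t_obs) for x in X])
        sse = ((sims - obs) ** 2).sum(axis=1)
        likelihood = 1.0 / sse
        behavioural = likelihood > np.percentile(likelihood, 90)   # keep best 10%

        # Uncertainty band on a 20-year prediction from behavioural sets only.
        pred = np.array([residual_thickness(x, 20.0) for x in X[behavioural]])
        print(np.percentile(pred, [5, 50, 95]))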

  3. Mixed-Up Floors.

    ERIC Educational Resources Information Center

    Shaw, Richard

    2001-01-01

    Examines the maintenance management problems inherent in cleaning multiple flooring materials revealing the need for school officials to keep it simple when choosing flooring types. Also highlighted is a carpet recycling program used by Wright State University (Ohio). (GR)

  4. Cleaning up Floor Care.

    ERIC Educational Resources Information Center

    Carr, Richard; McLean, Doug

    1995-01-01

    Discusses how educational-facility maintenance departments can cut costs in floor cleaning through careful evaluation of floor equipment and products. Tips for choosing carpet detergents are highlighted. (GR)

  7. FIRST FLOOR FRONT ROOM. SECOND FLOOR HAS BEEN REMOVED NOTE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    FIRST FLOOR FRONT ROOM. SECOND FLOOR HAS BEEN REMOVED-- NOTE PRESENCE OF SECOND FLOOR WINDOWS (THE LATTER FLOOR WAS REMOVED MANY YEARS AGO), See also PA-1436 B-12 - Kid-Physick House, 325 Walnut Street, Philadelphia, Philadelphia County, PA

  8. Development code for sensitivity and uncertainty analysis of input on the MCNPX for neutronic calculation in PWR core

    NASA Astrophysics Data System (ADS)

    Hartini, Entin; Andiwijayakusuma, Dinan

    2014-09-01

    This research concerns the development of a code for uncertainty analysis based on a statistical approach for assessing the uncertainty of input parameters. In the burn-up calculation of fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density and fuel temperature. The calculation is performed during irradiation using the Monte Carlo N-Particle Transport code. The uncertainty method is based on the probability density function. The developed code is a Python script that couples with MCNPX for criticality and burn-up calculations. The simulation models the geometry of a PWR core with MCNPX at a power of 54 MW with UO2 pellet fuel. The calculation uses the continuous-energy cross-section data library ENDF/B-VI. MCNPX requires nuclear data in ACE format, so interfaces were developed to obtain nuclear data in ACE format from ENDF through NJOY processing for temperature changes over a certain range.
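
    The driver loop of such a script might be sketched as below. The run_mcnpx helper and all nominal values and uncertainties are hypothetical placeholders, not the authors' actual code; a smooth toy response stands in for the transport calculation so the sketch is self-contained:

        # Sketch of a statistical-uncertainty driver around a criticality code.
        import random
        import statistics

        def run_mcnpx(fuel_density, coolant_density, fuel_temp_k):
            # Placeholder: would write an MCNPX input deck, run it, parse k-eff.
            # A smooth toy response is used here instead.
            return (1.30 * (fuel_density / 10.4) ** 0.3
                    * (coolant_density / 0.72) ** 0.05
                    * (900.0 / fuel_temp_k) ** 0.02)

        random.seed(42)
        NOMINAL = {"fuel_density": 10.4, "coolant_density": 0.72, "fuel_temp_k": 900.0}
        REL_SD = {"fuel_density": 0.01, "coolant_density": 0.02, "fuel_temp_k": 0.03}

        keff = []
        for _ in range(200):
            # Sample each input from its assumed probability density function.
            sample = {k: random.gauss(v, REL_SD[k] * v) for k, v in NOMINAL.items()}
            keff.append(run_mcnpx(**sample))

        print(f"k-eff = {statistics.mean(keff):.5f} +/- {statistics.stdev(keff):.5f}")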

  9. Sensitivity of a radiative transfer model to the uncertainty in the aerosol optical depth used as input

    NASA Astrophysics Data System (ADS)

    Román, Roberto; Bilbao, Julia; de Miguel, Argimiro; Pérez-Burgos, Ana

    2014-05-01

    Radiative transfer models can be used to obtain solar radiative quantities at the Earth's surface, such as the erythemal ultraviolet (UVER) irradiance, which is the spectral irradiance weighted with the erythemal (sunburn) action spectrum, and the total shortwave irradiance (SW; 305-2,800 nm). Aerosol and atmospheric properties are necessary as inputs to the model in order to calculate the UVER and SW irradiances under cloudless conditions; however, the uncertainty in these inputs propagates into the simulations. The objective of this work is to quantify the uncertainty in UVER and SW simulations generated by the uncertainty in aerosol optical depth (AOD). Data from different satellite retrievals were downloaded for nine Spanish sites in the Iberian Peninsula: total ozone column from different databases, spectral surface albedo and water vapour column from the MODIS instrument, AOD at 443 nm and Angström exponent (between 443 nm and 670 nm) from the MISR instrument onboard the Terra satellite, and single scattering albedo from the OMI instrument onboard the Aura satellite. The AOD at 443 nm from MISR was compared with AERONET measurements at six Spanish sites, finding an uncertainty in the MISR AOD of 0.074. In this work the radiative transfer model UVSPEC/libRadtran (version 1.7) was used to obtain the SW and UVER irradiance under cloudless conditions for each month and for different solar zenith angles (SZA) at the nine locations. The inputs used for these simulations were monthly climatology tables built from the available data at each location. The UVER and SW simulations were then repeated twice, with the monthly AOD values replaced by the same AOD plus or minus its uncertainty. The maximum difference between the irradiance obtained with the original AOD and that obtained with the AOD plus/minus its uncertainty was calculated for each month, SZA, and location. This difference was considered as the uncertainty on the model caused by the AOD
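
    The propagation step described above (rerun the model with AOD, AOD plus its uncertainty and AOD minus its uncertainty, and take the largest difference) is compact to write down; run_uvspec below is a hypothetical stand-in for a libRadtran/UVSPEC call, not its real interface:

        import math

        AOD_UNCERTAINTY = 0.074   # MISR vs AERONET comparison quoted in the abstract

        def run_uvspec(aod, sza_deg):
            # Hypothetical stand-in for a libRadtran/UVSPEC run: a simple
            # Beer-Lambert-style attenuation of a clear-sky SW irradiance.
            mu0 = math.cos(math.radians(sza_deg))
            return 1000.0 * mu0 * math.exp(-0.5 * aod / mu0)

        def aod_driven_uncertainty(aod, sza_deg):
            base = run_uvspec(aod, sza_deg)
            hi = run_uvspec(aod + AOD_UNCERTAINTY, sza_deg)
            lo = run_uvspec(max(aod - AOD_UNCERTAINTY, 0.0), sza_deg)
            return max(abs(hi - base), abs(lo - base))

        for sza in (20, 40, 60):
            print(sza, round(aod_driven_uncertainty(aod=0.15, sza_deg=sza), 1), "W/m2")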

  10. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 4: Uncertainty and sensitivity analyses for 40 CFR 191, Subpart B

    SciTech Connect

    Not Available

    1993-08-01

    Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to the EPA's Environmental Protection Standards for Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Additional information about the 1992 PA is provided in other volumes. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions, the choice of parameters selected for sampling, and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect compliance with 40 CFR 191B are: drilling intensity, intrusion borehole permeability, halite and anhydrite permeabilities, radionuclide solubilities and distribution coefficients, fracture spacing in the Culebra Dolomite Member of the Rustler Formation, porosity of the Culebra, and spatial variability of Culebra transmissivity. Performance with respect to 40 CFR 191B is insensitive to uncertainty in other parameters; however, additional data are needed to confirm that reality lies within the assigned distributions.

  11. Sensitivity of tropospheric ozone to chemical kinetic uncertainties in air masses influenced by anthropogenic and biomass burning emissions

    NASA Astrophysics Data System (ADS)

    Ridley, D. A.; Cain, M.; Methven, J.; Arnold, S. R.

    2017-07-01

    We use a Lagrangian chemical transport model with a Monte Carlo approach to determine the impacts of kinetic rate uncertainties on simulated concentrations of ozone, NOy and OH in a high-altitude biomass burning plume and a low-level industrial pollution plume undergoing long-range transport. Uncertainties in kinetic rate constants yield 10-12 ppbv (5th to 95th percentile) uncertainty in the ozone concentration, dominated by reactions that cycle NO and NO2, control NOx conversion to NOy reservoir species, and key reactions contributing to O3 loss (O(1D) + H2O, HO2 + O3). Our results imply that better understanding of the peroxyacetyl nitrate (PAN) thermal decomposition constant is key to predicting large-scale O3 production from fire emissions, and uncertainty in the reaction of NO + O3 at low temperatures is particularly important for both the anthropogenic and biomass burning plumes. The highlighted reactions serve as a useful template for targeting new laboratory experiments aimed at reducing uncertainties in our understanding of tropospheric O3 photochemistry.
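
    A sketch of the Monte Carlo treatment of kinetic-rate uncertainty: each rate constant is scaled by a lognormal factor drawn from its assessed uncertainty, the model is rerun, and the ozone spread is read off the ensemble. The uncertainty factors and the box model are trivial stand-ins for the Lagrangian chemistry model:

        import math
        import random

        # Assessed 1-sigma uncertainty factors for a few rate constants (illustrative).
        RATE_UNCERTAINTY = {"NO+O3": 1.10, "PAN_decomp": 1.30, "HO2+O3": 1.15}

        def ozone_after_transport(scale):
            # Trivial stand-in for the plume chemistry: O3 responds to the
            # perturbed rates through a smooth toy relationship (ppbv).
            return (60.0 - 8.0 * math.log(scale["NO+O3"])
                    + 12.0 * math.log(scale["PAN_decomp"])
                    - 5.0 * math.log(scale["HO2+O3"]))

        random.seed(7)
        ensemble = []
        for _ in range(2000):
            # Lognormal multiplicative perturbation of each rate constant.
            scale = {k: math.exp(random.gauss(0.0, math.log(f)))
                     for k, f in RATE_UNCERTAINTY.items()}
            ensemble.append(ozone_after_transport(scale))

        ensemble.sort()
        p05, p95 = ensemble[100], ensemble[1899]
        print(f"O3 5th-95th percentile spread: {p95 - p05:.1f} ppbv")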

  12. Pre-waste-emplacement ground-water travel time sensitivity and uncertainty analyses for Yucca Mountain, Nevada; Yucca Mountain Site Characterization Project

    SciTech Connect

    Kaplan, P.G.

    1993-01-01

    Yucca Mountain, Nevada is a potential site for a high-level radioactive-waste repository. Uncertainty and sensitivity analyses were performed to estimate critical factors in the performance of the site with respect to a criterion in terms of pre-waste-emplacement ground-water travel time. The degree to which the analytical model fails to meet the criterion is sensitive to the estimate of fracture porosity in the upper welded unit of the problem domain. Fracture porosity is derived from a number of more fundamental measurements, including fracture frequency, fracture orientation, and the moisture-retention characteristic inferred for the fracture domain.

  13. Sensitivity analysis and probabilistic re-entry modeling for debris using high dimensional model representation based uncertainty treatment

    NASA Astrophysics Data System (ADS)

    Mehta, Piyush M.; Kubicek, Martin; Minisci, Edmondo; Vasile, Massimiliano

    2017-01-01

    Well-known tools developed for satellite and debris re-entry perform break-up and trajectory simulations in a deterministic sense and do not perform any uncertainty treatment. The treatment of uncertainties associated with the re-entry of a space object requires a probabilistic approach. A Monte Carlo campaign is the intuitive approach to performing a probabilistic analysis; however, it is computationally very expensive. In this work, we use a recently developed approach based on a new derivation of the high dimensional model representation method to implement a computationally efficient probabilistic analysis approach for re-entry. Both aleatoric and epistemic uncertainties that affect the aerodynamic trajectory and ground impact location are considered. The method is applicable to both controlled and un-controlled re-entry scenarios. The resulting ground impact distributions are far from the typically used Gaussian or ellipsoid distributions.

  14. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 5, Uncertainty and sensitivity analyses of gas and brine migration for undisturbed performance

    SciTech Connect

    Not Available

    1993-08-01

    Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to migration of gas and brine from the undisturbed repository. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with 40 CFR 191, Subpart B. Volume 2 describes the technical basis for the performance assessment, including descriptions of the linked computational models used in the Monte Carlo analyses. Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses with respect to the EPA's Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Finally, guidance derived from the entire 1992 PA is presented in Volume 6. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect gas and brine migration from the undisturbed repository are: initial liquid saturation in the waste, anhydrite permeability, biodegradation-reaction stoichiometry, gas-generation rates for both corrosion and biodegradation under inundated conditions, and the permeability of the long-term shaft seal.

  15. Modelling ecological and human exposure to POPs in Venice lagoon - Part II: Quantitative uncertainty and sensitivity analysis in coupled exposure models.

    PubMed

    Radomyski, Artur; Giubilato, Elisa; Ciffroy, Philippe; Critto, Andrea; Brochot, Céline; Marcomini, Antonio

    2016-11-01

    The study focuses on applying uncertainty and sensitivity analysis to support the application and evaluation of large exposure models where a significant number of parameters and complex exposure scenarios might be involved. The recently developed MERLIN-Expo exposure modelling tool was applied to probabilistically assess the ecological and human exposure to PCB 126 and 2,3,7,8-TCDD in the Venice lagoon (Italy). The 'Phytoplankton', 'Aquatic Invertebrate', 'Fish', 'Human intake' and PBPK models available in the MERLIN-Expo library were integrated to create a specific food web to dynamically simulate bioaccumulation in various aquatic species and in the human body over individual lifetimes from 1932 until 1998. MERLIN-Expo is a high-tier exposure modelling tool allowing propagation of uncertainty on the model predictions through Monte Carlo simulation. Uncertainty in model output can be further apportioned between parameters by applying built-in sensitivity analysis tools. In this study, uncertainty has been extensively addressed in the distribution functions describing the input data, and its effect on model results was evaluated by applying sensitivity analysis techniques (the screening Morris method, regression analysis, and the variance-based method EFAST). In the exposure scenario developed for the Lagoon of Venice, the concentrations of 2,3,7,8-TCDD and PCB 126 in human blood turned out to be mainly influenced by a combination of parameters (half-lives of the chemicals, body weight variability, lipid fraction, food assimilation efficiency), physiological processes (uptake/elimination rates), environmental exposure concentrations (sediment, water, food) and eating behaviours (amount of food eaten). In conclusion, this case study demonstrated the feasibility of successfully employing MERLIN-Expo in integrated, high-tier exposure assessment. Copyright © 2016 Elsevier B.V. All rights reserved.
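
    Of the three sensitivity techniques named above, the Morris screening step can be sketched with the open-source SALib package (assumed installed); the exposure model is replaced by a toy steady-state body-burden function, and all parameter names and ranges are illustrative:

        import numpy as np
        from SALib.sample.morris import sample as morris_sample
        from SALib.analyze import morris

        problem = {
            "num_vars": 4,
            "names": ["half_life_d", "body_weight_kg", "lipid_fraction", "intake_g_d"],
            "bounds": [[500, 4000], [50, 100], [0.1, 0.4], [10, 200]],
        }

        def blood_concentration(x):
            half_life, weight, lipid, intake = x
            k_elim = np.log(2) / half_life                    # first-order elimination
            return intake * 1e-3 / (k_elim * weight * lipid)  # toy steady-state burden

        X = morris_sample(problem, N=100, num_levels=4)
        Y = np.array([blood_concentration(row) for row in X])
        Si = morris.analyze(problem, X, Y, num_levels=4)
        # mu* ranks overall influence; sigma flags nonlinearity/interactions.
        for name, mu_star, sigma in zip(problem["names"], Si["mu_star"], Si["sigma"]):
            print(f"{name}: mu* = {mu_star:.3g}, sigma = {sigma:.3g}")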

  16. Automating calibration, sensitivity and uncertainty analysis of complex models using the R package Flexible Modeling Environment (FME): SWAT as an example

    USGS Publications Warehouse

    Wu, Y.; Liu, S.

    2012-01-01

    Parameter optimization and uncertainty issues are a great challenge for the application of large environmental models like the Soil and Water Assessment Tool (SWAT), which is a physically-based hydrological model for simulating water and nutrient cycles at the watershed scale. In this study, we present a comprehensive modeling environment for SWAT, including automated calibration, and sensitivity and uncertainty analysis capabilities through integration with the R package Flexible Modeling Environment (FME). To address challenges (e.g., calling the model in R and transferring variables between Fortran and R) in developing such a two-language coupling framework, 1) we converted the Fortran-based SWAT model to an R function (R-SWAT) using the RFortran platform, and alternatively 2) we compiled SWAT as a Dynamic Link Library (DLL). We then wrapped SWAT (via R-SWAT) with FME to perform complex applications including parameter identifiability, inverse modeling, and sensitivity and uncertainty analysis in the R environment. The final R-SWAT-FME framework has the following key functionalities: automatic initialization of R, running Fortran-based SWAT and R commands in parallel, transferring parameters and model output between SWAT and R, and inverse modeling with visualization. To examine this framework and demonstrate how it works, a case study simulating streamflow in the Cedar River Basin in Iowa in the United States was used, and we compared it with the built-in auto-calibration tool of SWAT in parameter optimization. Results indicate that both methods performed well and similarly in searching a set of optimal parameters. Nonetheless, the R-SWAT-FME is more attractive due to its instant visualization, and potential to take advantage of other R packages (e.g., inverse modeling and statistical graphics). The methods presented in the paper are readily adaptable to other model applications that require capability for automated calibration, and sensitivity and uncertainty

  17. Global sensitivity and uncertainty analysis of the nitrate leaching and crop yield simulation under different water and nitrogen management practices

    USDA-ARS?s Scientific Manuscript database

    Agricultural system models have become important tools in studying water and nitrogen (N) dynamics, as well as crop growth, under different management practices. Complexity in input parameters often leads to significant uncertainty when simulating dynamic processes such as nitrate leaching or crop y...

  18. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    SciTech Connect

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane

    2014-05-01

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  19. Comparative chemical sensitivity between marine Australian and Northern Hemisphere ecosystems: is an uncertainty factor warranted for water-quality-guideline setting?

    PubMed

    Hagen, Tarah G; Douglas, Richard W

    2014-05-01

    The lack of Australian species data has pragmatically led to the use of toxicological data from the Northern Hemisphere to develop water-quality guidelines. However, it is unknown whether Australian species and ecosystems are equally as sensitive and if an uncertainty factor is warranted for Australian guideline setting. In the present study, it is hypothesized that an uncertainty factor is not required. This was tested by generating species sensitivity distributions by 2 parametric methods using marine Northern Hemisphere and Australian/New Zealand data. Sufficient acute data were found for only 3 compounds: 4-chlorophenol, phenol, and ammonia. For ammonia and 4-chlorophenol, the 95% species protection levels generated with Australian and Northern Hemisphere data were essentially the same. For phenol, protection levels derived from Australian data were approximately 10-fold higher. Therefore, the derived benchmark concentration from Northern Hemisphere data should be protective. It is tentatively concluded that there is no need for an uncertainty factor when deriving water-quality guidelines for marine Australian ecosystems using Northern Hemisphere data. It is, however, noted that this is based on only 3 compounds. © 2014 SETAC.
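
    The species protection levels discussed above come from fitting a species sensitivity distribution (SSD) to toxicity data and reading off the 5th percentile (the HC5, protecting 95% of species). A minimal log-normal sketch; the toxicity values are invented, not the paper's data:

        import math
        import statistics
        from statistics import NormalDist

        def hc5_lognormal(toxicity_values):
            """5th percentile of a log-normal SSD fitted to acute toxicity data."""
            logs = [math.log10(v) for v in toxicity_values]
            mu = statistics.mean(logs)
            sigma = statistics.stdev(logs)
            z05 = NormalDist().inv_cdf(0.05)          # ~ -1.645
            return 10 ** (mu + z05 * sigma)

        # Made-up acute EC50s (mg/L) for two regional species pools.
        northern = [1.2, 3.5, 0.8, 5.0, 2.2, 9.1, 4.4, 1.9]
        australian = [1.0, 4.1, 0.9, 6.2, 2.0, 8.3, 3.9, 2.5]
        print(hc5_lognormal(northern), hc5_lognormal(australian))
        # Similar HC5s would support omitting an extra uncertainty factor.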

  20. Uncertainty evaluation of ozone production and its sensitivity to emission changes over the Ile-de-France region during summer periods

    NASA Astrophysics Data System (ADS)

    Deguillaume, L.; Beekmann, M.; Derognat, C.

    2008-01-01

    The approach of Bayesian Monte Carlo (BMC) analysis has been used for evaluating model uncertainty in ozone production and its sensitivity to emission changes. This approach has been applied to the ozone fields calculated by the CHIMERE regional model in the Ile-de-France area during the 1998 and 1999 summer seasons. The AIRPARIF network measurements of urban NO and O3 concentrations and rural O3 over the Ile-de-France region have been used to constrain the Monte Carlo simulations. Our results yield the following major conclusions: (1) The simulated O3 plumes form mainly in southwestern to southeastern directions downwind of the Paris area. (2) Uncertainties in simulated ozone concentrations and several derived quantities are evaluated and reduced using the BMC approach; simulated urban and plume ozone concentrations are enhanced through the observational constraint as compared to those obtained from the unconstrained model. (3) The chemical regime over the urban area of Paris and within the plumes is clearly VOC sensitive on average over the two summers.
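
    The Bayesian Monte Carlo constraint amounts to weighting each prior Monte Carlo run by its likelihood against the observations and recomputing the output statistics. A minimal sketch with a toy stand-in for the regional model and invented numbers:

        import numpy as np

        rng = np.random.default_rng(3)
        n = 5000

        # Prior ensemble: multiplicative uncertainties on NOx and VOC emissions.
        nox_scale = rng.lognormal(mean=0.0, sigma=0.3, size=n)
        voc_scale = rng.lognormal(mean=0.0, sigma=0.3, size=n)

        # Toy stand-in for the chemistry model: plume ozone in a
        # VOC-sensitive regime (ppbv).
        plume_o3 = 60.0 + 25.0 * np.log(voc_scale) - 5.0 * np.log(nox_scale)

        # Observational constraint: Gaussian likelihood against a measured value.
        obs, obs_sd = 72.0, 5.0
        weights = np.exp(-0.5 * ((plume_o3 - obs) / obs_sd) ** 2)
        weights /= weights.sum()

        prior_mean = plume_o3.mean()
        post_mean = np.sum(weights * plume_o3)
        post_sd = np.sqrt(np.sum(weights * (plume_o3 - post_mean) ** 2))
        print(prior_mean, post_mean, post_sd)   # posterior shifts toward obs, narrows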

  1. School Flooring Factors

    ERIC Educational Resources Information Center

    McGrath, John

    2012-01-01

    With all of the hype that green building is receiving throughout the school facility-management industry, it's easy to overlook some elements that may not be right in front of a building manager's nose. It is helpful to examine the role floor covering plays in a green building project. Flooring is one of the most significant and important systems…

  3. Floors: Care and Maintenance.

    ERIC Educational Resources Information Center

    Post Office Dept., Washington, DC.

    Guidelines, methods and policies regarding the care and maintenance of post office building floors are overviewed in this handbook. Procedures outlined are concerned with maintaining a required level of appearance without wasting manpower. Flooring types and characteristics and the particular cleaning requirements of each type are given along with…

  4. Maximizing Hard Floor Maintenance.

    ERIC Educational Resources Information Center

    Steger, Michael

    2000-01-01

    Explains the maintenance options available for hardwood flooring that can help ensure long life cycles and provide inviting spaces. Developing a maintenance system, knowing the type of traffic that the floor must endure, using entrance matting, and adhering to manufacturers' guidelines are discussed. Daily, monthly or quarterly, and long-term…

  5. Direct Aerosol Forcing Uncertainty

    DOE Data Explorer

    Mccomiskey, Allison

    2008-01-15

    Understanding sources of uncertainty in aerosol direct radiative forcing (DRF), the difference in a given radiative flux component with and without aerosol, is essential to quantifying changes in Earth's radiation budget. We examine the uncertainty in DRF due to measurement uncertainty in the quantities on which it depends: aerosol optical depth, single scattering albedo, asymmetry parameter, solar geometry, and surface albedo. Direct radiative forcing at the top of the atmosphere and at the surface as well as sensitivities, the changes in DRF in response to unit changes in individual aerosol or surface properties, are calculated at three locations representing distinct aerosol types and radiative environments. The uncertainty in DRF associated with a given property is computed as the product of the sensitivity and typical measurement uncertainty in the respective aerosol or surface property. Sensitivity and uncertainty values permit estimation of total uncertainty in calculated DRF and identification of properties that most limit accuracy in estimating forcing. Total uncertainties in modeled local diurnally averaged forcing range from 0.2 to 1.3 W m^-2 (42 to 20%) depending on location (from tropical to polar sites), solar zenith angle, surface reflectance, aerosol type, and aerosol optical depth. The largest contributor to total uncertainty in DRF is usually single scattering albedo; however decreasing measurement uncertainties for any property would increase accuracy in DRF. Comparison of two radiative transfer models suggests the contribution of modeling error is small compared to the total uncertainty although comparable to uncertainty arising from some individual properties.
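
    The uncertainty bookkeeping described above is simple to reproduce: each property's contribution is its sensitivity (change in DRF per unit change in the property) times its measurement uncertainty, and the contributions combine in quadrature. All numbers below are illustrative assumptions, not the study's values:

        import math

        # (sensitivity dDRF/dx in W m^-2 per unit, measurement uncertainty in units)
        # Values are illustrative assumptions, not those of the study.
        properties = {
            "aerosol_optical_depth": (-25.0, 0.01),
            "single_scattering_albedo": (30.0, 0.03),
            "asymmetry_parameter": (8.0, 0.02),
            "surface_albedo": (12.0, 0.02),
        }

        contributions = {name: sens * unc for name, (sens, unc) in properties.items()}
        total = math.sqrt(sum(c ** 2 for c in contributions.values()))

        for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
            print(f"{name}: {abs(c):.2f} W m^-2")
        print(f"total DRF uncertainty: {total:.2f} W m^-2")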

  6. Sensitivity analysis of the GEMS soil organic carbon model to land cover land use classification uncertainties under different climate scenarios in Senegal

    NASA Astrophysics Data System (ADS)

    Dieye, A. M.; Roy, D. P.; Hanan, N. P.; Liu, S.; Hansen, M.; Touré, A.

    2011-07-01

    Spatially explicit land cover land use (LCLU) change information is needed to drive biogeochemical models that simulate soil organic carbon (SOC) dynamics. Such information is increasingly being mapped using remotely sensed satellite data with classification schemes and uncertainties constrained by the sensing system, classification algorithms and land cover schemes. In this study, automated LCLU classification of multi-temporal Landsat satellite data were used to assess the sensitivity of SOC modeled by the Global Ensemble Biogeochemical Modeling System (GEMS). The GEMS was run for an area of 1560 km2 in Senegal under three climate change scenarios with LCLU maps generated using different Landsat classification approaches. This research provides a method to estimate the variability of SOC, specifically the SOC uncertainty due to satellite classification errors, which we show is dependent not only on the LCLU classification errors but also on where the LCLU classes occur relative to the other GEMS model inputs.

  7. Sensitivity analysis of the GEMS soil organic carbon model to land cover land use classification uncertainties under different climate scenarios in senegal

    NASA Astrophysics Data System (ADS)

    Dieye, A. M.; Roy, D. P.; Hanan, N. P.; Liu, S.; Hansen, M.; Touré, A.

    2012-02-01

    Spatially explicit land cover land use (LCLU) change information is needed to drive biogeochemical models that simulate soil organic carbon (SOC) dynamics. Such information is increasingly being mapped using remotely sensed satellite data with classification schemes and uncertainties constrained by the sensing system, classification algorithms and land cover schemes. In this study, automated LCLU classification of multi-temporal Landsat satellite data was used to assess the sensitivity of SOC modeled by the Global Ensemble Biogeochemical Modeling System (GEMS). The GEMS was run for an area of 1560 km2 in Senegal under three climate change scenarios with LCLU maps generated using different Landsat classification approaches. This research provides a method to estimate the variability of SOC, specifically the SOC uncertainty due to satellite classification errors, which we show is dependent not only on the LCLU classification errors but also on where the LCLU classes occur relative to the other GEMS model inputs.

  8. DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis:version 4.0 reference manual

    SciTech Connect

    Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane; Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Giunta, Anthony A.; Brown, Shannon L.

    2006-10-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
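
    As a rough illustration of the sampling-based UQ workflow that DAKOTA automates (not DAKOTA's own input syntax), the following Python sketch draws a Latin hypercube sample over two uncertain inputs, runs a stand-in simulation, and summarizes the output distribution. The simulate function and all ranges are hypothetical.

        # Minimal sampling-based UQ sketch, assuming scipy >= 1.7 for stats.qmc.
        import numpy as np
        from scipy.stats import qmc

        def simulate(x1, x2):
            # placeholder for the external simulation code a toolkit would drive
            return x1 ** 2 + 3.0 * x2

        sampler = qmc.LatinHypercube(d=2, seed=0)
        unit = sampler.random(n=200)                     # samples in [0, 1)^2
        lo, hi = np.array([0.0, -1.0]), np.array([2.0, 1.0])
        X = qmc.scale(unit, lo, hi)                      # map to physical ranges

        y = np.array([simulate(*row) for row in X])
        print(f"mean={y.mean():.3f}  std={y.std(ddof=1):.3f}  "
              f"p95={np.percentile(y, 95):.3f}")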

  9. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, user's reference manual.

    SciTech Connect

    Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.

    2010-05-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.

  10. Sensitivity analysis of the GEMS soil organic carbon model to land cover land use classification uncertainties under different climate scenarios in Senegal

    USGS Publications Warehouse

    Dieye, A.M.; Roy, D.P.; Hanan, N.P.; Liu, S.; Hansen, M.; Toure, A.

    2011-01-01

    Spatially explicit land cover land use (LCLU) change information is needed to drive biogeochemical models that simulate soil organic carbon (SOC) dynamics. Such information is increasingly being mapped using remotely sensed satellite data with classification schemes and uncertainties constrained by the sensing system, classification algorithms and land cover schemes. In this study, automated LCLU classification of multi-temporal Landsat satellite data was used to assess the sensitivity of SOC modeled by the Global Ensemble Biogeochemical Modeling System (GEMS). The GEMS was run for an area of 1560 km2 in Senegal under three climate change scenarios with LCLU maps generated using different Landsat classification approaches. This research provides a method to estimate the variability of SOC, specifically the SOC uncertainty due to satellite classification errors, which we show is dependent not only on the LCLU classification errors but also on where the LCLU classes occur relative to the other GEMS model inputs. © 2011 Author(s).

  11. Uncertainty and sensitivity analysis for two-phase flow in the vicinity of the repository in the 1996 performance assessment for the Waste Isolation Pilot Plant: Undisturbed conditions

    SciTech Connect

    HELTON,JON CRAIG; BEAN,J.E.; ECONOMY,K.; GARNER,J.W.; MACKINNON,ROBERT J.; MILLER,JOEL D.; SCHREIBER,JAMES D.; VAUGHN,PALMER

    2000-05-19

    Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment for the Waste Isolation Pilot Plant are presented for two-phase flow in the vicinity of the repository under undisturbed conditions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformations are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure is potentially the most important due to its influence on spallings and direct brine releases, with the uncertainty in its value being dominated by the extent to which the microbial degradation of cellulose takes place, the rate at which the corrosion of steel takes place, and the amount of brine that drains from the surrounding disturbed rock zone into the repository.
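
    A minimal sketch of the rank-regression step this family of analyses relies on: rank-transform the sampled inputs and the output, fit a linear model on the ranks, and rank variable importance by the standardized coefficients. The data here are synthetic stand-ins, not WIPP results.

        # Rank-regression importance ranking on synthetic data.
        import numpy as np
        from scipy.stats import rankdata

        rng = np.random.default_rng(1)
        X = rng.uniform(size=(300, 3))                  # stand-in for LHS inputs
        y = 5.0 * X[:, 0] ** 3 + X[:, 1] + 0.1 * rng.normal(size=300)

        Xr = np.column_stack([rankdata(X[:, j]) for j in range(X.shape[1])])
        yr = rankdata(y)

        # standardize ranks, then ordinary least squares (no intercept needed)
        Zx = (Xr - Xr.mean(0)) / Xr.std(0)
        zy = (yr - yr.mean()) / yr.std()
        src, *_ = np.linalg.lstsq(Zx, zy, rcond=None)   # standardized rank coeffs
        print("standardized rank regression coefficients:", np.round(src, 3))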

  12. FIRST FLOOR REAR ROOM. SECOND FLOOR HAS BEEN REMOVED NOTE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    FIRST FLOOR REAR ROOM. SECOND FLOOR HAS BEEN REMOVED-- NOTE PRESENCE OF SECOND FLOOR WINDOWS AT LEFT. See also PA-1436 B-6 - Kid-Physick House, 325 Walnut Street, Philadelphia, Philadelphia County, PA

  13. FIRST FLOOR REAR ROOM. SECOND FLOOR HAS BEEN REMOVED NOTE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    FIRST FLOOR REAR ROOM. SECOND FLOOR HAS BEEN REMOVED-- NOTE PRESENCE OF SECOND FLOOR WINDOWS AT LEFT. See also PA-1436 B-13 - Kid-Physick House, 325 Walnut Street, Philadelphia, Philadelphia County, PA

  14. Optimal parameter and uncertainty estimation of a land surface model: Sensitivity to parameter ranges and model complexities

    NASA Astrophysics Data System (ADS)

    Xia, Youlong; Yang, Zong-Liang; Stoffa, Paul L.; Sen, Mrinal K.

    2005-01-01

    Most previous land-surface model calibration studies have defined global ranges for their parameters to search for optimal parameter sets. Little work has been conducted to study the impacts of realistic versus global ranges, as well as model complexities, on the calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by applying Bayesian Stochastic Inversion (BSI) to the Chameleon Surface Model (CHASM). The CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic-type structure. The BSI is an uncertainty estimation technique based on Bayes' theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty-eight experiments were conducted and 50 000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes at most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. The choice of parameter ranges and model complexity has significant impacts on the frequency distribution of parameters, marginal posterior probability density functions, and estimates of uncertainty of simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has the more significant impact on parameter and uncertainty estimations.

  15. Sensitivity of model assessments of high-speed civil transport effects on stratospheric ozone resulting from uncertainties in the NO x production from lightning

    NASA Astrophysics Data System (ADS)

    Smyshlyaev, Sergei P.; Geller, Marvin A.; Yudin, Valery A.

    1999-11-01

    Lightning NOx production is one of the most important and most uncertain sources of reactive nitrogen in the atmosphere. To examine the role of lightning NOx production uncertainties in supersonic aircraft assessment studies, we have performed a number of numerical calculations with the State University of New York at Stony Brook-Russian State Hydrometeorological Institute of Saint-Petersburg two-dimensional model. The amount of nitrogen oxides produced by lightning discharges was varied within its quoted uncertainty from 2 to 12 Tg N/yr. Different latitudinal, altitudinal, and seasonal distributions of lightning NOx production were considered. Results of these model calculations show that the assessment of supersonic aircraft impacts on the ozone layer is very sensitive to the strength of NOx production from lightning. The NOx produced by the high-speed civil transport leads to positive column ozone changes for lightning NOx production of less than 4 Tg N/yr, and to a total ozone decrease for lightning NOx production of more than 5 Tg N/yr, for the same NOx emission scenario. For large lightning production the ozone response mostly decreases with increasing emission index, while for low lightning production the ozone response mostly increases with increasing emission index. Uncertainties in the global lightning NOx production strength may lead to uncertainties in column ozone of up to 4%. The uncertainties due to neglecting the seasonal variations of the lightning NOx production and its simplified latitude distribution are about 2 times smaller (1.5-2%). The type of altitude distribution of the lightning NOx production does not significantly impact the column ozone, but is very important for assessment studies of aircraft perturbations of atmospheric ozone. Increased global lightning NOx production causes increased total ozone, but for assessment of the column ozone response to supersonic aircraft emissions, the increase of lightning NOx production leads to column ozone

  16. Global sensitivity analysis of a model related to memory formation in synapses: Model reduction based on epistemic parameter uncertainties and related issues.

    PubMed

    Kulasiri, Don; Liang, Jingyi; He, Yao; Samarasinghe, Sandhya

    2017-02-09

    We investigate the epistemic uncertainties of the parameters of a mathematical model that describes the dynamics of the CaMKII-NMDAR complex related to memory formation in synapses, using global sensitivity analysis (GSA). The model, which was published in this journal, is nonlinear and complex, with Ca(2+) patterns of different frequencies as inputs. We explore the effects of the parameters on the key outputs of the model to discover the most sensitive ones, using GSA and partial rank correlation coefficients (PRCC), and to understand, based on the biology of the problem, why some parameters are sensitive and others are not. We also extend the model to add presynaptic neurotransmitter vesicle release, so that action potentials of different frequencies serve as inputs. We perform GSA on this extended model to show that the parameter sensitivities are different for the extended model, as shown by PRCC landscapes. Based on the results of GSA and PRCC, we reduce the original model to a less complex model taking the most important biological processes into account. We validate the reduced model against the outputs of the original model. We show that the parameter sensitivities are dependent on the inputs and that GSA helps us understand the sensitivities and the importance of the parameters. A thorough phenomenological understanding of the relationships involved is essential to interpret the results of GSA and hence for the possible model reduction.
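
    For readers unfamiliar with PRCC, the sketch below computes it directly: rank-transform everything, regress the remaining parameters out of both the parameter of interest and the output, and correlate the residuals. The model and data are synthetic placeholders, not the paper's.

        # Minimal partial rank correlation coefficient (PRCC) sketch.
        import numpy as np
        from scipy.stats import rankdata

        def prcc(X, y):
            """PRCC of each column of X with y: correlate rank residuals
            after regressing out the remaining columns."""
            Xr = np.apply_along_axis(rankdata, 0, X).astype(float)
            yr = rankdata(y).astype(float)
            n, k = Xr.shape
            out = np.empty(k)
            for j in range(k):
                others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
                rx = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
                ry = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
                out[j] = np.corrcoef(rx, ry)[0, 1]
            return out

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(500, 4))             # sampled model parameters
        y = np.exp(2 * X[:, 0]) - 3 * X[:, 2] + 0.1 * rng.normal(size=500)
        print(np.round(prcc(X, y), 3))             # large |PRCC| = sensitive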

  17. CXTFIT/Excel A modular adaptable code for parameter estimation, sensitivity analysis and uncertainty analysis for laboratory or field tracer experiments

    SciTech Connect

    Tang, Guoping; Mayes, Melanie; Parker, Jack C; Jardine, Philip M

    2010-01-01

    We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
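
    The workflow in miniature, transposed from VBA to Python under stated assumptions: the Ogata-Banks analytical solution of the one-dimensional equilibrium convection-dispersion equation (continuous injection), fitted to synthetic breakthrough data by nonlinear least squares. The weighting and penalty function described above are omitted, and the observation distance X_OBS and all parameter values are invented.

        # Analytical CDE solution fitted to synthetic breakthrough data.
        import numpy as np
        from scipy.special import erfc
        from scipy.optimize import curve_fit

        X_OBS = 30.0                                # observation distance, cm (assumed)

        def cde(t, v, D, x=X_OBS):
            """Relative concentration C/C0 at position x and time t (Ogata-Banks)."""
            a = erfc((x - v * t) / (2.0 * np.sqrt(D * t)))
            b = np.exp(v * x / D) * erfc((x + v * t) / (2.0 * np.sqrt(D * t)))
            return 0.5 * (a + b)

        t = np.linspace(5, 120, 25)                 # hours
        rng = np.random.default_rng(3)
        c_obs = cde(t, v=0.5, D=1.2) + 0.01 * rng.normal(size=t.size)

        popt, pcov = curve_fit(cde, t, c_obs, p0=[0.3, 1.0])
        perr = np.sqrt(np.diag(pcov))               # 1-sigma parameter uncertainty
        print(f"v = {popt[0]:.3f} +/- {perr[0]:.3f} cm/h, "
              f"D = {popt[1]:.3f} +/- {perr[1]:.3f} cm^2/h")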

  18. CXTFIT/Excel-A modular adaptable code for parameter estimation, sensitivity analysis and uncertainty analysis for laboratory or field tracer experiments

    NASA Astrophysics Data System (ADS)

    Tang, Guoping; Mayes, Melanie A.; Parker, Jack C.; Jardine, Philip M.

    2010-09-01

    We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.

  19. PROCEEDINGS OF THE INTERNATIONAL WORKSHOP ON UNCERTAINTY, SENSITIVITY, AND PARAMETER ESTIMATION FOR MULTIMEDIA ENVIRONMENTAL MODELING. EPA/600/R-04/117, NUREG/CP-0187, ERDC SR-04-2.

    EPA Science Inventory

    An International Workshop on Uncertainty, Sensitivity, and Parameter Estimation for Multimedia Environmental Modeling was held August 19-21, 2003, at the U.S. Nuclear Regulatory Commission Headquarters in Rockville, Maryland, USA. The workshop was organized and convened by the Fe...

  1. Global Uncertainty Propagation and Sensitivity Analysis in the CH3OCH2 + O2 System: Combining Experiment and Theory To Constrain Key Rate Coefficients in DME Combustion.

    PubMed

    Shannon, R J; Tomlin, A S; Robertson, S H; Blitz, M A; Pilling, M J; Seakins, P W

    2015-07-16

    Statistical rate theory calculations, in particular formulations of the chemical master equation, are widely used to calculate rate coefficients of interest in combustion environments as a function of temperature and pressure. However, despite the increasing accuracy of electronic structure calculations, small uncertainties in the input parameters for these master equation models can lead to relatively large uncertainties in the calculated rate coefficients. Master equation input parameters may be constrained further by using experimental data, and the relationship between experiment and theory warrants further investigation. In this work, the CH3OCH2 + O2 system, of relevance to the combustion of dimethyl ether (DME), is used as an example and the input parameters for master equation calculations on this system are refined through fitting to experimental data. Complementing these fitting calculations, global sensitivity analysis is used to explore which input parameters are constrained by which experimental conditions, and which parameters need to be further constrained to accurately predict key elementary rate coefficients. Finally, uncertainties in the calculated rate coefficients are obtained using both correlated and uncorrelated distributions of input parameters.

  2. A Sea Floor Penetrometer.

    DTIC Science & Technology

    processed through an analog-to-digital (A/D) converter, and stored in the memory of a mini-computer. Computer algorithms are applied to the deceleration data to provide real-time sea floor classification.

  3. Crater Wall and Floor

    NASA Image and Video Library

    2003-02-18

    The flat, smooth floor of the impact crater in this NASA Mars Odyssey image, taken in Terra Cimmeria, contrasts with the rougher surfaces at higher elevations and suggests that sediments have filled the crater.

  4. Modelling the exposure to chemicals for risk assessment: a comprehensive library of multimedia and PBPK models for integration, prediction, uncertainty and sensitivity analysis - the MERLIN-Expo tool.

    PubMed

    Ciffroy, P; Alfonso, B; Altenpohl, A; Banjac, Z; Bierkens, J; Brochot, C; Critto, A; De Wilde, T; Fait, G; Fierens, T; Garratt, J; Giubilato, E; Grange, E; Johansson, E; Radomyski, A; Reschwann, K; Suciu, N; Tanaka, T; Tediosi, A; Van Holderbeke, M; Verdonck, F

    2016-10-15

    MERLIN-Expo is a library of models that was developed within the FP7 EU project 4FUN to provide an integrated, state-of-the-art exposure assessment tool for the environment, biota and humans, allowing the detection of scientific uncertainties at each step of the exposure process. This paper describes the main features of the MERLIN-Expo tool. The main challenges in exposure modelling that MERLIN-Expo has tackled are: (i) the integration of multimedia (MM) models simulating the fate of chemicals in environmental media, and of physiologically based pharmacokinetic (PBPK) models simulating the fate of chemicals in the human body. MERLIN-Expo thus allows the determination of internal effective chemical concentrations; (ii) the incorporation of a set of functionalities for uncertainty/sensitivity analysis, from screening to variance-based approaches. The availability of such tools for uncertainty and sensitivity analysis is intended to facilitate the incorporation of such issues in future decision making; (iii) the integration of human and wildlife biota targets with common fate modelling in the environment. MERLIN-Expo is composed of a library of fate models dedicated to non-biological receptor media (surface waters, soils, outdoor air), biological media of concern for humans (several cultivated crops, mammals, milk, fish), as well as wildlife biota (primary producers in rivers, invertebrates, fish) and humans. These models can be linked together to create flexible scenarios relevant for both human and wildlife biota exposure. Standardized documentation for each model and training material were prepared to support an accurate use of the tool by end-users. One of the objectives of the 4FUN project was also to increase the confidence in the applicability of the MERLIN-Expo tool through targeted realistic case studies. In particular, we aimed at demonstrating the feasibility of building complex realistic exposure scenarios and the accuracy of the

  5. Uncertainty analysis

    SciTech Connect

    Thomas, R.E.

    1982-03-01

    An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software.
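
    The abstract's closing claim is easy to verify numerically: for a linear model Ax = b with a scalar response R = c^T x, a single adjoint solve A^T lam = c yields every sensitivity dR/db_i = lam_i at once (and dR/dA_ij = -lam_i x_j). The sketch below checks one component against a finite difference; the system is random and purely illustrative.

        # Adjoint sensitivities for a linear model A x = b, response R = c.T @ x.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 5
        A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned system
        b = rng.normal(size=n)
        c = rng.normal(size=n)

        x = np.linalg.solve(A, b)                     # primal solve
        lam = np.linalg.solve(A.T, c)                 # single adjoint solve

        # check dR/db[0] against a finite-difference perturbation of b[0]
        eps = 1e-6
        bp = b.copy()
        bp[0] += eps
        fd = (c @ np.linalg.solve(A, bp) - c @ x) / eps
        print(f"adjoint dR/db0 = {lam[0]:.6f}, finite diff = {fd:.6f}")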

  6. Using Real-time Event Tracking Sensitivity Analysis to Overcome Sensor Measurement Uncertainties of Geo-Information Management in Drilling Disasters

    NASA Astrophysics Data System (ADS)

    Tavakoli, S.; Poslad, S.; Fruhwirth, R.; Winter, M.

    2012-04-01

    This paper introduces an application of a novel EventTracker platform for instantaneous Sensitivity Analysis (SA) of large-scale real-time geo-information. Earth disaster management systems demand high-quality information to aid a quick and timely response to their evolving environments. The idea behind the proposed EventTracker platform is the assumption that modern information management systems are able to capture data in real time and have the technological flexibility to adjust their services to work with specific sources of data/information. However, to assure this adaptation in real time, the online data should be collected, interpreted, and translated into corrective actions in a concise and timely manner. This can hardly be handled by existing sensitivity analysis methods because they rely on historical data and lazy processing algorithms. In event-driven systems, the effect of system inputs on its state is of value, as events could cause this state to change. This 'event triggering' situation underpins the logic of the proposed approach. The event tracking sensitivity analysis method describes the system variables and states as a collection of events. The more often an input variable occurs during the triggering of an event, the greater its potential impact will be on the final analysis of the system state. Experiments were designed to compare the proposed event tracking sensitivity analysis with existing entropy-based sensitivity analysis methods. The results have shown a 10% improvement in computational efficiency with no compromise in accuracy. They have also shown that the time required to perform the sensitivity analysis is 0.5% of that required by the entropy-based method. The proposed method has been applied to real-world data in the context of preventing emerging crises at drilling rigs. One of the major purposes of such rigs is to drill boreholes to explore oil or gas reservoirs with the final scope of recovering the content

  7. [Pelvic floor and pregnancy].

    PubMed

    Fritel, X

    2010-05-01

    Congenital factors, obesity, aging, pregnancy and childbirth are the main risk factors for female pelvic floor disorders (urinary incontinence, anal incontinence, pelvic organ prolapse, dyspareunia). Vaginal delivery may cause injury to the pudendal nerve or the anal sphincter. However, the link between these injuries and pelvic floor symptoms is not always established, and the means of prevention remain unknown. Of the many obstetrical methods proposed to prevent postpartum symptoms (episiotomy, delivery in a vertical position, delayed pushing, perineal massage, warm packs, pelvic floor rehabilitation), the results are disappointing or limited. Caesarean section is followed by less postnatal urinary incontinence than vaginal childbirth, but this difference tends to diminish with time and with subsequent childbirths. Limiting the number of instrumental extractions and preferring the vacuum to forceps could reduce pelvic floor disorders after childbirth. Ultrasound examination of the anal sphincter after a second-degree perineal tear is useful to detect and repair subclinical anal sphincter lesions. Scientific data are insufficient to justify an elective caesarean section in order to avoid pelvic floor symptoms in a woman without previous disorders.

  8. Application of an Adaptive Polynomial Chaos Expansion on Computationally Expensive Three-Dimensional Cardiovascular Models for Uncertainty Quantification and Sensitivity Analysis.

    PubMed

    Quicken, Sjeng; Donders, Wouter P; van Disseldorp, Emiel M J; Gashi, Kujtim; Mees, Barend M E; van de Vosse, Frans N; Lopata, Richard G P; Delhaas, Tammo; Huberts, Wouter

    2016-12-01

    When applying models to patient-specific situations, the impact of model input uncertainty on the model output uncertainty has to be assessed. Proper uncertainty quantification (UQ) and sensitivity analysis (SA) techniques are indispensable for this purpose. An efficient approach for UQ and SA is the generalized polynomial chaos expansion (gPCE) method, where model response is expanded into a finite series of polynomials that depend on the model input (i.e., a meta-model). However, because of the intrinsic high computational cost of three-dimensional (3D) cardiovascular models, performing the number of model evaluations required for the gPCE is often computationally prohibitively expensive. Recently, Blatman and Sudret (2010, "An Adaptive Algorithm to Build Up Sparse Polynomial Chaos Expansions for Stochastic Finite Element Analysis," Probab. Eng. Mech., 25(2), pp. 183-197) introduced the adaptive sparse gPCE (agPCE) in the field of structural engineering. This approach reduces the computational cost with respect to the gPCE, by only including polynomials that significantly increase the meta-model's quality. In this study, we demonstrate the agPCE by applying it to a 3D abdominal aortic aneurysm (AAA) wall mechanics model and a 3D model of flow through an arteriovenous fistula (AVF). The agPCE method was indeed able to perform UQ and SA at a significantly lower computational cost than the gPCE, while still retaining accurate results. Cost reductions ranged between 70-80% and 50-90% for the AAA and AVF model, respectively.
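
    A bare-bones, non-adaptive polynomial chaos sketch for intuition (the agPCE adds basis terms selectively rather than using a fixed total-degree basis): expand a toy model of two standard-normal inputs in probabilists' Hermite polynomials, fit the coefficients by least squares, and read variance shares off the coefficients. The model and sample sizes are hypothetical.

        # Least-squares polynomial chaos on a toy model with Gaussian inputs.
        import numpy as np
        from numpy.polynomial.hermite_e import hermeval
        from math import factorial

        def model(z):                               # stand-in for an expensive model
            return np.exp(0.3 * z[:, 0]) + z[:, 0] * z[:, 1]

        rng = np.random.default_rng(0)
        Z = rng.normal(size=(400, 2))
        y = model(Z)

        # total-degree-2 basis: pairs (i, j) with i + j <= 2
        basis = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]

        def he(n, x):                               # He_n(x) via hermeval
            return hermeval(x, [0.0] * n + [1.0])

        Phi = np.column_stack([he(i, Z[:, 0]) * he(j, Z[:, 1]) for i, j in basis])
        coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

        # Var[He_i He_j] = i! * j! for standard-normal inputs
        var_terms = {ij: c ** 2 * factorial(ij[0]) * factorial(ij[1])
                     for ij, c in zip(basis, coef) if ij != (0, 0)}
        total = sum(var_terms.values())
        for ij, v in var_terms.items():
            print(ij, f"variance share = {v / total:.2f}")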

  9. Risk sensitivity for amounts of and delay to rewards: adaptation for uncertainty or by-product of reward rate maximising?

    PubMed

    Shapiro, Martin S; Schuck-Paim, Cynthia; Kacelnik, Alex

    2012-02-01

    Observations that humans and other species are sensitive to variability in the outcomes of their choices have led to the widespread assumption that this sensitivity reflects adaptations to cope with risk (stochasticity of action consequences). We question this assumption in experiments with starlings. We show that choices between outcomes that are risky in both amount and delay to food are predictable from preferences in the absence of risk. We find that the overarching best predictor of an option's value is the average of the ratios of amount to delay across its (frequency-weighted) outcomes, an expression known as "Expectation of the Ratios", or EoR. Most tests of risk sensitivity focus on the predicted impact of energetic state on preference for risk. We show instead that under controlled state conditions subjects are variance- and risk-neutral with respect to EoR, and this implies variance neutrality for amounts and variance-proneness for delays. The weak risk aversion for amounts often reported requires a small modification of EoR. EoR is consistent with associative learning: acquisition of value for initially neutral stimuli is roughly proportional to the magnitude of their consequences and inversely proportional to the interval between the stimulus and its consequence's onset. If, as is likely, the effect of amount on acquisition is sublinear, the result is a deviation from EoR towards risk aversion for amount. In three experiments, we first establish individual birds' preferences between pairs of fixed options that differ in both amount and delay (small-sooner vs. large-later), and then examine choices between stochastic mixtures that include these options. Experiment 1 uses a titration to establish certainty equivalents, while experiments 2 and 3 measure degree of preference between options with static parameters. The mixtures differ in the coefficient of variation of amount, delay, or both, but EoR is sufficient to predict all results, with no additional
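
    The EoR value rule quoted above is simple enough to state as code: the value of an option is the frequency-weighted average of amount/delay over its outcomes. The numbers below are illustrative, not taken from the experiments.

        # "Expectation of the Ratios" (EoR) for options whose outcomes are
        # (probability, amount, delay) tuples. All values are made up.
        def eor(outcomes):
            return sum(p * amount / delay for p, amount, delay in outcomes)

        small_sooner = [(1.0, 2.0, 4.0)]            # 2 units after 4 s
        large_later = [(1.0, 6.0, 16.0)]            # 6 units after 16 s
        mixture = [(0.5, 2.0, 4.0), (0.5, 6.0, 16.0)]

        for name, opt in [("small-sooner", small_sooner),
                          ("large-later", large_later),
                          ("50/50 mixture", mixture)]:
            print(f"{name:14s} EoR = {eor(opt):.3f}")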

  10. Sensitivity analysis of seismic hazard for Western Liguria (North Western Italy): A first attempt towards the understanding and quantification of hazard uncertainty

    NASA Astrophysics Data System (ADS)

    Barani, Simone; Spallarossa, Daniele; Bazzurro, Paolo; Eva, Claudio

    2007-05-01

    The use of logic trees in probabilistic seismic hazard analyses often involves a large number of branches that reflect the uncertainty in the selection of different models and in the selection of the parameter values of each model. The sensitivity analysis, as proposed by Rabinowitz and Steinberg [Rabinowitz, N., Steinberg, D.M., 1991. Seismic hazard sensitivity analysis: a multi-parameter approach. Bull. Seismol. Soc. Am. 81, 796-817], is an efficient tool that allows the construction of logic trees focusing attention on the parameters that have greater impact on the hazard. In this paper the sensitivity analysis is performed in order to identify the parameters that have the largest influence on the Western Liguria (North Western Italy) seismic hazard. The analysis is conducted for six strategic sites following the multi-parameter approach of Rabinowitz and Steinberg and accounts for both mean hazard values and hazard values corresponding to different percentiles (e.g., 16%-ile and 84%-ile). The results are assessed in terms of the expected PGA with a 10% probability of exceedance in 50 years for rock conditions and account for both the contribution from specific source zones using the Cornell approach [Cornell, C.A., 1968. Engineering seismic risk analysis. Bull. Seismol. Soc. Am. 58, 1583-1606] and the spatially smoothed seismicity [Frankel, A., 1995. Mapping seismic hazard in the Central and Eastern United States. Seismol. Res. Lett. 66, 8-21]. The influence of different procedures for calculating seismic hazard, seismic catalogues (epicentral parameters), source zone models, frequency-magnitude parameters, maximum earthquake magnitude values and attenuation relationships is considered. As a result, the sensitivity analysis allows us to identify the parameters with higher influence on the hazard. Only these

  11. Cooling Floor AC Systems

    NASA Astrophysics Data System (ADS)

    Jun, Lu; Hao, Ding; Hong, Zhang; Ce, Gao Dian

    HVAC equipment currently used in residential buildings in the hot-summer/cold-winter climate region still consumes energy at a high level, so high-efficiency HVAC systems are urgently needed to achieve the government's preset energy-saving goals. With its advantages of hygiene, comfort, and a uniform temperature field, hot-water floor radiant heating has been widely accepted. This paper puts forward a new approach to air conditioning that combines a fresh-air supply unit with such a floor radiant system for dehumidification and cooling in summer and heating in winter. An analysis of its advantages and limitations shows that this so-called Cooling/Heating Floor AC System can improve the IAQ of residential buildings while maintaining high efficiency. A design methodology for the HVAC system is also recommended to ensure reduced energy costs for users.

  12. DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis:version 4.0 user's manual.

    SciTech Connect

    Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J. (Sandia National Labs, Livermore, CA); Hough, Patricia Diane (Sandia National Labs, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.

    2006-10-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  13. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, user's manual.

    SciTech Connect

    Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.

    2010-05-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  14. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, developers manual.

    SciTech Connect

    Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.

    2010-05-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.

  15. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis:version 4.0 developers manual.

    SciTech Connect

    Griffin, Joshua D. (Sandia National Laboratories, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA); Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.

    2006-10-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.

  16. Uncertainty quantification in MD simulations of concentration driven ionic flow through a silica nanopore. I. Sensitivity to physical parameters of the pore.

    PubMed

    Rizzi, F; Jones, R E; Debusschere, B J; Knio, O M

    2013-05-21

    In this article, uncertainty quantification is applied to molecular dynamics (MD) simulations of concentration driven ionic flow through a silica nanopore. We consider a silica pore model connecting two reservoirs containing a solution of sodium (Na(+)) and chloride (Cl(-)) ions in water. An ad hoc concentration control algorithm is developed to simulate a concentration driven counter flow of ions through the pore, with the ionic flux being the main observable extracted from the MD system. We explore the sensitivity of the system to two physical parameters of the pore, namely, the pore diameter and the gating charge. First we conduct a quantitative analysis of the impact of the pore diameter on the ionic flux, and interpret the results in terms of the interplay between size effects and ion mobility. Second, we analyze the effect of gating charge by treating the charge density over the pore surface as an uncertain parameter in a forward propagation study. Polynomial chaos expansions and Bayesian inference are exploited to isolate the effect of intrinsic noise and quantify the impact of parametric uncertainty on the MD predictions. We highlight the challenges arising from the heterogeneous nature of the system, given the several components involved, and from the substantial effect of the intrinsic thermal noise.

  17. Nonlinear sensitivity and uncertainty analysis in support of the blowdown heat transfer program. [Test 177 at Thermal-Hydraulic Test Facility

    SciTech Connect

    Ronen, Y.; Bjerke, M.A.; Cacuci, D.G.; Barhen, J.

    1980-11-01

    A nonlinear uncertainty analysis methodology based on the use of first and second order sensitivity coefficients is presented. As a practical demonstration, an uncertainty analysis of several responses of interest is performed for Test 177, which is part of a series of tests conducted at the Thermal-Hydraulic Test Facility (THTF) of the ORNL Engineering Technology Division Pressurized Water Reactor-Blowdown Heat Transfer (PWR-BDHT) program. These space- and time-dependent responses are: mass flow rate, temperature, pressure, density, enthalpy, and water quality - in several volumetric regions of the experimental facility. The analysis shows that, over parts of the transient, the responses behave as linear functions of the input parameters; in these cases, their standard deviations are of the same order of magnitude as those of the input parameters. Otherwise, the responses exhibit nonlinearities and their standard deviations are considerably larger. The analysis also shows that the degree of nonlinearity of the responses is highly dependent on their volumetric locations.
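
    A compact sketch of moment propagation with first- and second-order sensitivity coefficients, for independent Gaussian inputs: the linear term alone gives response standard deviations comparable to those of the inputs, while the second-order term inflates them where the response is nonlinear, mirroring the behaviour reported above. The response function and all numbers are made up.

        # First- vs. second-order uncertainty propagation (Gaussian, independent inputs).
        import numpy as np

        def response(x):                            # hypothetical response
            return x[0] ** 2 * np.sqrt(x[1])

        mu = np.array([3.0, 4.0])                   # input means
        sig = np.array([0.3, 0.5])                  # input standard deviations

        # sensitivity coefficients by central finite differences
        h = 1e-4
        I = np.eye(2)
        grad = np.array([(response(mu + h * e) - response(mu - h * e)) / (2 * h)
                         for e in I])
        hess = np.array([[(response(mu + h*ei + h*ej) - response(mu + h*ei - h*ej)
                           - response(mu - h*ei + h*ej) + response(mu - h*ei - h*ej))
                          / (4 * h * h) for ej in I] for ei in I])

        var_first = np.sum(grad ** 2 * sig ** 2)
        var_second = var_first + 0.5 * np.sum(hess ** 2 * np.outer(sig ** 2, sig ** 2))
        print(f"linear std = {np.sqrt(var_first):.3f}, "
              f"second-order std = {np.sqrt(var_second):.3f}")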

  18. Sensitivity and uncertainty in the measurement of H*(10) in neutron fields using an REM500 and a multi-element TEPC.

    PubMed

    Waker, Anthony; Taylor, Graeme

    2014-10-01

    The REM500 is a commercial instrument based on a tissue-equivalent proportional counter (TEPC) that has been successfully deployed as a hand-held neutron monitor, although its sensitivity is regarded by some workers as low for nuclear power plant radiation protection work. Improvements in sensitivity can be obtained using a multi-element proportional counter design in which a large number of small detecting cavities replace the single large volume cavity of conventional TEPCs. In this work, the authors quantify the improvement in uncertainty that can be obtained by comparing the ambient dose equivalent measured with a REM500, which utilises a 5.72 cm (2-1/4 inch) diameter Rossi counter, with that of a multi-element TEPC designed to have the sensitivity of a 12.7 cm (5 inch) spherical TEPC. The results obtained also provide some insight into the influence of other design features of TEPCs, such as geometry and gas filling, on the measurement of ambient dose equivalent.

  19. Uncertainty and sensitivity analysis for two-phase flow in the vicinity of the repository in the 1996 performance assessment for the Waste Isolation Pilot Plant: Disturbed conditions

    SciTech Connect

    HELTON,JON CRAIG; BEAN,J.E.; ECONOMY,K.; GARNER,J.W.; MACKINNON,ROBERT J.; MILLER,JOEL D.; SCHREIBER,J.D.; VAUGHN,PALMER

    2000-05-22

    Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP) are presented for two-phase flow in the vicinity of the repository under disturbed conditions resulting from drilling intrusions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformations are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure and brine flow from the repository to the Culebra Dolomite are potentially the most important in PA for the WIPP. Subsequent to a drilling intrusion, repository pressure was dominated by borehole permeability and was generally below the level (i.e., 8 MPa) that could potentially produce spallings and direct brine releases. Brine flow from the repository to the Culebra Dolomite tended to be small or nonexistent, with its occurrence and size also dominated by borehole permeability.

  20. Uncertainty and sensitivity analysis in the 2008 performance assessment for the proposed repository for high-level radioactive waste at Yucca Mountain, Nevada.

    SciTech Connect

    Helton, Jon Craig; Sallaberry, Cedric M.; Hansen, Clifford W.

    2010-05-01

    Extensive work has been carried out by the U.S. Department of Energy (DOE) in the development of a proposed geologic repository at Yucca Mountain (YM), Nevada, for the disposal of high-level radioactive waste. As part of this development, an extensive performance assessment (PA) for the YM repository was completed in 2008 [1] and supported a license application by the DOE to the U.S. Nuclear Regulatory Commission (NRC) for the construction of the YM repository [2]. This presentation provides an overview of the conceptual and computational structure of the indicated PA (hereafter referred to as the 2008 YM PA) and the roles that uncertainty analysis and sensitivity analysis play in this structure.

  1. Smart Kd-values, their uncertainties and sensitivities - Applying a new approach for realistic distribution coefficients in geochemical modeling of complex systems.

    PubMed

    Stockmann, M; Schikora, J; Becker, D-A; Flügge, J; Noseck, U; Brendler, V

    2017-08-23

    One natural retardation process to be considered in risk assessment for contaminants in the environment is sorption on mineral surfaces. Realistic geochemical modeling is of high relevance in many application areas such as groundwater protection, environmental remediation, or disposal of hazardous waste. Most often, concepts with constant distribution coefficients (Kd-values) are applied in geochemical modeling, with the advantage of being simple and computationally fast, but without reflecting changes in geochemical conditions. In this paper, we describe an innovative and efficient method in which the smart Kd-concept, a mechanistic approach mainly based on surface complexation modeling, is used (and modified for complex geochemical models) to calculate and apply realistic distribution coefficients. Using the geochemical speciation code PHREEQC, multidimensional smart Kd-matrices are computed as a function of varying (or uncertain) environmental conditions. On the one hand, sensitivity and uncertainty statements for the distribution coefficients can be derived. On the other hand, smart Kd-matrices can be used in reactive transport (or migration) codes (not shown here). This strategy has various benefits: (1) rapid computation of Kd-values for large numbers of environmental parameter combinations; (2) variable geochemistry is taken into account more realistically; (3) efficiency in computing time is ensured; and (4) uncertainty and sensitivity analysis are accessible. Results are presented, as an example, for the sorption of uranium(VI) onto a natural sandy aquifer material and are compared to results based on the conventional Kd-concept. In general, the sorption behavior of U(VI) as a function of changing geochemical conditions is described quite well. Copyright © 2017 Elsevier Ltd. All rights reserved.
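
    The smart-Kd idea in caricature, with the speciation code mocked out: precompute a multidimensional Kd matrix over the uncertain environmental parameters, then interpolate cheaply at run time instead of calling the geochemical solver. The parameter ranges, units, and the Kd surface below are invented, and kd_model merely stands in for a PHREEQC surface-complexation calculation.

        # Precomputed Kd lookup table with cheap interpolation at run time.
        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        def kd_model(pH, ionic_strength):
            # stand-in for a PHREEQC surface-complexation calculation
            return 10 ** (0.8 * (pH - 4.0)) / (1.0 + 5.0 * ionic_strength)

        pH_grid = np.linspace(4.0, 9.0, 26)
        I_grid = np.linspace(0.001, 0.5, 20)
        KD = np.array([[kd_model(p, i) for i in I_grid] for p in pH_grid])

        lookup = RegularGridInterpolator((pH_grid, I_grid), KD)
        print(f"Kd(pH=7.3, I=0.12) ~ {lookup([[7.3, 0.12]])[0]:.1f} L/kg")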

  2. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.

  3. 16. STATIC TEST TOWER REMOVABLE FLOOR LEVEL VIEW OF FLOOR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    16. STATIC TEST TOWER REMOVABLE FLOOR LEVEL VIEW OF FLOOR THAT FOLDS BACK TO ALLOW ROCKET PLACEMENT. - Marshall Space Flight Center, Saturn Propulsion & Structural Test Facility, East Test Area, Huntsville, Madison County, AL

  4. Two and Three Bedroom Units: First Floor Plan, Second Floor ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Two and Three Bedroom Units: First Floor Plan, Second Floor Plan, South Elevation (As Built), North Elevation (As Built), East Elevation (As Built), East Elevation (Existing), North Elevation (Existing) - Aluminum City Terrace, East Hill Drive, New Kensington, Westmoreland County, PA

  5. 45. SECOND FLOOR WAREHOUSE, WITH CRANE AND WOODEN BLOCK FLOORING. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    45. SECOND FLOOR WAREHOUSE, WITH CRANE AND WOODEN BLOCK FLOORING. VIEW TO NORTH. - Ford Motor Company Long Beach Assembly Plant, Assembly Building, 700 Henry Ford Avenue, Long Beach, Los Angeles County, CA

  6. 16. SANDSORTING BUILDING, FIRST FLOOR, MEZZANINE ON LEFT (BELOW FLOOR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    16. SAND-SORTING BUILDING, FIRST FLOOR, MEZZANINE ON LEFT (BELOW FLOOR ARE CONCRETE AND STORAGE BINS), LOOKING NORTH - Mill "C" Complex, Sand-Sorting Building, South of Dee Bennet Road, near Illinois River, Ottawa, La Salle County, IL

  7. Floor of Juventae Chasma

    NASA Image and Video Library

    2002-06-17

    Juventae Chasma is an enormous box canyon which opens to the north and forms the outflow channel Maja Vallis. This image from NASA Mars Odyssey spacecraft captures a portion of the western floor of Juventae Chasma and shows a wide variety of landforms.

  8. Use of high-order sensitivity analysis and reduced-form modeling to quantify uncertainty in particulate matter simulations in the presence of uncertain emissions rates: A case study in Houston

    NASA Astrophysics Data System (ADS)

    Zhang, Wenxian; Trail, Marcus A.; Hu, Yongtao; Nenes, Athanasios; Russell, Armistead G.

    2015-12-01

    Regional air quality models are widely used to evaluate control strategy effectiveness. As such, it is important to understand the accuracy of model simulations to establish confidence in model performance and to guide further model development. Particulate matter with aerodynamic diameter less than 2.5 μm (PM2.5) is regulated as one of the criteria pollutants by the National Ambient Air Quality Standards (NAAQS), and PM2.5 concentrations have a complex dependence on the emissions of a number of precursors, including SO2, NOx, NH3, VOCs, and primary particulate matter (PM). This study quantifies how the emission-associated uncertainties affect modeled PM2.5 concentrations and sensitivities using a reduced-form approach. This approach is computationally efficient compared to the traditional Monte Carlo simulation. The reduced-form model represents the concentration-emission response and is constructed using first- and second-order sensitivities obtained from a single CMAQ/HDDM-PM simulation. A case study is conducted in the Houston-Galveston-Brazoria (HGB) area. The uncertainty of modeled, daily average PM2.5 concentrations due to uncertain emissions is estimated to fall between 42% and 52% for different simulated concentration levels, and the uncertainty is evenly distributed in the modeling domain. Emission-associated uncertainty can account for much of the difference between simulation and ground measurements, as 60% of observed PM2.5 concentrations fall within the range of one standard deviation of corresponding simulated PM2.5 concentrations. Uncertainties in meteorological fields as well as the model representation of secondary organic aerosol formation are the other two key contributors to the uncertainty of modeled PM2.5. This study also investigates the uncertainties of the simulated first-order sensitivities and finds that the larger the first-order sensitivity, the lower its uncertainty associated with emissions. Sensitivity of PM2.5 to primary PM has
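
    A reduced-form surrogate in the spirit of the abstract, under invented coefficients: approximate the concentration response to emission perturbations with first- and second-order sensitivities from a single base simulation, then Monte Carlo the emission uncertainty through the cheap surrogate instead of rerunning the air quality model. C0, the sensitivity values, and the uncertainty spreads are all hypothetical.

        # Reduced-form concentration-emission response plus Monte Carlo.
        import numpy as np

        C0 = 12.0                                            # base PM2.5, ug/m3 (assumed)
        S1 = {"SO2": 2.0, "NOx": 1.1, "primary_PM": 4.5}     # dC/d(eps), invented
        S2 = {"SO2": -0.6, "NOx": -0.3, "primary_PM": 0.0}   # d2C/d(eps)2, invented

        def reduced_form(eps):
            """eps maps species -> fractional emission perturbation (0 = base)."""
            return C0 + sum(S1[s] * e + 0.5 * S2[s] * e * e for s, e in eps.items())

        rng = np.random.default_rng(0)
        sigma = {"SO2": 0.3, "NOx": 0.5, "primary_PM": 1.0}  # assumed spreads
        draws = [reduced_form({s: rng.lognormal(0.0, sigma[s]) - 1.0 for s in S1})
                 for _ in range(10000)]
        c = np.array(draws)
        print(f"mean = {c.mean():.1f} ug/m3, "
              f"relative uncertainty = {c.std() / c.mean():.0%}")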

  9. The Floor of Saha E

    NASA Image and Video Library

    2009-10-27

    This image, taken by NASA's Lunar Reconnaissance Orbiter, shows diverse textures on the floor of Saha E, a crater on the lunar farside, which could be the result of impact melt coating boulders and other deposits on the crater floor.

  10. Sources for PM air pollution in the Po Plain, Italy: II. Probabilistic uncertainty characterization and sensitivity analysis of secondary and primary sources

    NASA Astrophysics Data System (ADS)

    Larsen, B. R.; Gilardoni, S.; Stenström, K.; Niedzialek, J.; Jimenez, J.; Belis, C. A.

    2012-04-01

    Very high levels of ambient particulate matter (PM) are frequently encountered in the north of Italy and air quality limits are regularly exceeded. To obtain quantitative information on the pollution sources and to gain understanding of the dynamics of pollution episodes in this populated area, PM10 and/or PM2.5 samples were collected daily at nine urban to regional sites distributed over the central Po Plain and one site in the Valtelline Valley. In total, 23 five-week winter campaigns and one comparative summer/autumn campaign (2007-2009) were organized. The PM was analyzed for 61 chemical constituents and a data-base was built up consisting of approx. 70000 records of the concentrations and their associated uncertainty. In addition 14C/12C ratios were determined in PM10 from four sites. Primary and secondary sources were quantified using macro-tracer methods in combination with chemical mass balance modelling and positive matrix factorization and the combined results were computed by probability- and sensitivity analysis. Monte Carlo simulations yielded probability distributions for seven source categories contributing to the carbonaceous fraction of PM and five major source categories contributing to the PM10 and PM2.5 mass. Despite large uncertainties in the combined source contribution estimates, the paper demonstrates that secondary aerosol formed simultaneously over the Po Plain is mainly responsible for the typical, rapid build-up of air pollution after clean-air episodes. Next to secondary sources, the most important sources are primary emissions from road transport followed by biomass burning (BB). In the Valtelline Valley, higher contributions from BB and lower contributions from secondary sources were observed.
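
    The probabilistic combination step can be sketched as follows, with three illustrative source categories whose contribution estimates and uncertainties are invented rather than the campaign results: Monte Carlo draws from each source's distribution yield a probability distribution for every source's share of the PM mass.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 50_000

        # Hypothetical source-contribution estimates (mean, 1-sigma) in ug/m3.
        sources = {
            "secondary aerosol": (18.0, 6.0),
            "road transport":    (7.0, 2.5),
            "biomass burning":   (5.0, 2.0),
        }
        draws = {k: rng.normal(m, s, N).clip(min=0.0) for k, (m, s) in sources.items()}
        total = sum(draws.values())

        for name, d in draws.items():
            share = d / total
            lo, hi = np.percentile(share, [2.5, 97.5])
            print(f"{name}: median share {np.median(share):.0%} (95% interval {lo:.0%}-{hi:.0%})")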

  11. Comparison of approaches for measuring the mass accommodation coefficient for the condensation of water and sensitivities to uncertainties in thermophysical properties.

    PubMed

    Miles, Rachael E H; Reid, Jonathan P; Riipinen, Ilona

    2012-11-08

    We compare and contrast measurements of the mass accommodation coefficient of water on a water surface made using ensemble and single particle techniques under conditions of supersaturation and subsaturation, respectively. In particular, we consider measurements made using an expansion chamber, a continuous flow streamwise thermal gradient cloud condensation nuclei chamber, the Leipzig Aerosol Cloud Interaction Simulator, aerosol optical tweezers, and electrodynamic balances. Although this assessment is not intended to be comprehensive, these five techniques are complementary in their approach and give values that span the range from near 0.1 to 1.0 for the mass accommodation coefficient. We use the same semianalytical treatment to assess the sensitivities of the measurements made by the various techniques to thermophysical quantities (diffusion constants, thermal conductivities, saturation pressure of water, latent heat, and solution density) and experimental parameters (saturation value and temperature). This represents the first effort to assess and compare measurements made by different techniques to attempt to reduce the uncertainty in the value of the mass accommodation coefficient. Broadly, we show that the measurements are consistent within the uncertainties inherent to the thermophysical and experimental parameters and that the value of the mass accommodation coefficient should be considered to be larger than 0.5. Accurate control and measurement of the saturation ratio is shown to be critical for a successful investigation of the surface transport kinetics during condensation/evaporation. This invariably requires accurate knowledge of the partial pressure of water, the system temperature, the droplet curvature and the saturation pressure of water. Further, the importance of including and quantifying the transport of heat in interpreting droplet measurements is highlighted; the particular issues associated with interpreting measurements of condensation
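
    The paper's semianalytical treatment is not reproduced in the abstract, but the general shape of such a sensitivity assessment can be illustrated with a toy quasi-steady evaporation flux: normalized (logarithmic) sensitivities of the predicted rate to each thermophysical input, estimated by central finite differences. The flux expression and parameter values below are deliberate simplifications assumed for illustration only, not the authors' method.

        import numpy as np

        def evap_rate(D, p_sat, T):
            # Toy Maxwell-type flux, proportional to D * p_sat / (Rv * T); units simplified.
            return D * p_sat / (461.5 * T)

        base = {"D": 2.5e-5, "p_sat": 2.3e3, "T": 293.0}   # assumed baseline values
        for name in base:
            hi, lo = dict(base), dict(base)
            hi[name] *= 1.01
            lo[name] *= 0.99
            # Normalized sensitivity d ln(rate) / d ln(x) by central differences.
            s = (np.log(evap_rate(**hi)) - np.log(evap_rate(**lo))) / np.log(1.01 / 0.99)
            print(f"d ln(rate) / d ln({name}) = {s:+.2f}")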

  12. Model-based decision analysis of remedial alternatives using info-gap theory and Agent-Based Analysis of Global Uncertainty and Sensitivity (ABAGUS)

    NASA Astrophysics Data System (ADS)

    Harp, D.; Vesselinov, V. V.

    2011-12-01

    A newly developed methodology for model-based decision analysis is presented. The methodology incorporates a sampling approach, referred to as Agent-Based Analysis of Global Uncertainty and Sensitivity (ABAGUS; Harp & Vesselinov, 2011), that efficiently collects sets of acceptable solutions (i.e. acceptable model parameter sets) for different levels of a model performance metric representing the consistency of model predictions to observations. In this case, the performance metric is based on model residuals (i.e. discrepancies between observations and simulations). ABAGUS collects acceptable solutions from a discretized parameter space and stores them in a KD-tree for efficient retrieval. The parameter space domain (parameter minimum/maximum ranges) and discretization are predefined. On subsequent visits to collected locations, agents are provided with a modified value of the performance metric, and the model solution is not recalculated. The modified values of the performance metric sculpt the response surface (convexities become concavities), repelling agents from collected regions. This promotes global exploration of the parameter space and discourages reinvestigation of regions of previously collected acceptable solutions. The resulting sets of acceptable solutions are formulated into a decision analysis using concepts from info-gap theory (Ben-Haim, 2006). Using info-gap theory, the decision robustness and opportuneness are quantified, providing measures of the immunity to failure and windfall, respectively, of alternative decisions. The approach is intended for cases where the information is extremely limited, resulting in non-probabilistic uncertainties concerning model properties such as boundary and initial conditions, model parameters, conceptual model elements, etc. The information provided by this analysis is weaker than the information provided by probabilistic decision analyses (i.e. posterior parameter distributions are not produced), however, this
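
    A toy rendering of the sampling idea, under heavy simplification: a quadratic misfit stands in for the model-residual metric, a dict keyed on grid cells stands in for the KD-tree, and a single random walker stands in for the agent population. Acceptable cells are stored once; revisits return an inflated (repulsive) metric instead of re-running the model, pushing the walker toward unexplored regions.

        import numpy as np

        rng = np.random.default_rng(2)

        def misfit(p):
            # Toy stand-in for the residual-based performance metric.
            return float(np.sum((p - 0.3) ** 2))

        step, threshold = 0.05, 0.05   # predefined discretization and acceptance level
        collected = {}                 # grid cell -> metric (in lieu of a KD-tree)

        def evaluate(p):
            cell = tuple(np.round(p / step).astype(int))
            if cell in collected:
                return 10.0 * (1.0 + collected[cell])   # repulsive value; no model rerun
            value = misfit(p)
            if value <= threshold:
                collected[cell] = value                 # store acceptable solution
            return value

        p = rng.uniform(0, 1, size=2)  # one random-walk "agent" (ABAGUS uses many)
        for _ in range(5000):
            q = np.clip(p + rng.normal(0, step, 2), 0, 1)
            if evaluate(q) <= evaluate(p):              # move downhill on the sculpted surface
                p = q

        print(f"collected {len(collected)} acceptable parameter-space cells")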

  13. Chronic pelvic floor dysfunction.

    PubMed

    Hartmann, Dee; Sarton, Julie

    2014-10-01

    The successful treatment of women with vestibulodynia and its associated chronic pelvic floor dysfunctions requires interventions that address a broad field of possible pain contributors. Pelvic floor muscle hypertonicity was implicated in the mid-1990s as a trigger of major chronic vulvar pain. Painful bladder syndrome, irritable bowel syndrome, fibromyalgia, and temporomandibular jaw disorder are known common comorbidities that can cause a host of associated muscular, visceral, bony, and fascial dysfunctions. It appears that normalizing all of those disorders plays a pivotal role in reducing complaints of chronic vulvar pain and sexual dysfunction. Though the studies have yet to prove a specific protocol, physical therapists trained in pelvic dysfunction are reporting success with restoring tissue normalcy and reducing vulvar and sexual pain. A review of pelvic anatomy and common findings are presented along with suggested physical therapy management. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Measurement uncertainty.

    PubMed

    Bartley, David; Lidén, Göran

    2008-08-01

    The reporting of measurement uncertainty has recently undergone a major harmonization whereby characteristics of a measurement method obtained during establishment and application are combined componentwise. For example, the sometimes-pesky systematic error is included. A bias component of uncertainty can often be easily established as the uncertainty in the bias. However, beyond simply arriving at a value for uncertainty, meaning can sometimes be given to this uncertainty in terms of prediction confidence in uncertainty-based intervals covering what is to be measured. To this end, a link between concepts of accuracy and uncertainty is established through a simple yet accurate approximation to a random variable known as the non-central Student's t-distribution. "Without a measureless and perpetual uncertainty, the drama of human life would be destroyed." (Winston Churchill)

  15. Floor of Hellas Basin

    NASA Technical Reports Server (NTRS)

    2002-01-01

    [figure removed for brevity, see original site]

    With a diameter of roughly 2000 km and a depth of over 7 km, the Hellas Basin is the largest impact feature on Mars. Because of its great depth, there is significantly more atmosphere to peer through in order to see its floor, reducing the quality of the images taken from orbit. This THEMIS image straddles a scarp between the Hellas floor and an accumulation of material at least a half kilometer thick that covers much of the floor. The southern half of the image contains some of this material. Strange ovoid landforms are present here that give the appearance of flow. It is possible that water ice or even liquid water was present in the deposits and somehow responsible for the observed landscape. The floor of Hellas remains a poorly understood portion of the planet that should benefit from the analysis of new THEMIS data.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in

  16. Evaluating sub-national building-energy efficiency policy options under uncertainty: Efficient sensitivity testing of alternative climate, technological, and socioeconomic futures in a regional integrated-assessment model.

    SciTech Connect

    Scott, Michael J.; Daly, Don S.; Zhou, Yuyu; Rice, Jennie S.; Patel, Pralit L.; McJeon, Haewon C.; Kyle, G. Page; Kim, Son H.; Eom, Jiyong; Clarke, Leon E.

    2014-05-01

    Improving the energy efficiency of the building stock, commercial equipment and household appliances can have a major impact on energy use, carbon emissions, and building services. Subnational regions such as U.S. states wish to increase their energy efficiency, reduce carbon emissions or adapt to climate change. Evaluating subnational policies to reduce energy use and emissions is difficult because of the uncertainties in socioeconomic factors, technology performance and cost, and energy and climate policies. Climate change may undercut such policies. Assessing these uncertainties can be a significant modeling and computation burden. As part of this uncertainty assessment, this paper demonstrates how a decision-focused sensitivity analysis strategy using fractional factorial methods can be applied to reveal the important drivers for detailed uncertainty analysis.
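
    The screening step can be sketched with a minimal 2^(4-1) two-level fractional factorial design: eight model runs estimate the main effects of four factors, so the dominant drivers can be identified before any detailed uncertainty analysis. The factors and response function below are invented stand-ins, not the paper's model inputs.

        import numpy as np
        from itertools import product

        rng = np.random.default_rng(3)

        # 2^(4-1) fractional factorial: 8 runs screen 4 two-level factors (generator D = ABC).
        base = np.array(list(product([-1, 1], repeat=3)))    # full factorial in A, B, C
        design = np.column_stack([base, base.prod(axis=1)])  # D aliased with ABC

        def response(run):
            # Hypothetical model output (e.g. regional building energy use); invented.
            A, B, C, D = run
            return 100 + 8*A - 5*B + 2*C + 0.5*D + rng.normal(0, 0.5)

        y = np.array([response(r) for r in design])
        effects = design.T @ y / (len(y) / 2)                # main-effect estimates
        for name, eff in zip("ABCD", effects):
            print(f"main effect {name}: {eff:+.2f}")

    Factors with small estimated effects can then be frozen, focusing the expensive uncertainty runs on the few drivers that matter.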

  17. Modular Flooring System

    NASA Technical Reports Server (NTRS)

    Thate, Robert

    2012-01-01

    The modular flooring system (MFS) was developed to provide a portable, modular, durable carpeting solution for NASA's Robotics Alliance Project's (RAP) outreach efforts. It was also designed to improve and replace a modular flooring system that was too heavy for safe use and transportation. The MFS was developed for use as the flooring for various robotics competitions that RAP utilizes to meet its mission goals. One of these competitions, the FIRST Robotics Competition (FRC), currently uses two massive rolls of broadloom carpet for the foundation of the arena in which the robots are contained during the competition. The area of the arena is approximately 30 by 72 ft (approximately 9 by 22 m). This carpet is very cumbersome and requires large-capacity vehicles, handling equipment, and personnel to transport and deploy. The broadloom carpet sustains severe abuse from the robots during a regular three-day competition, and as a result, the carpet is not used again for competition. Similarly, broadloom carpets used for trade shows at convention centers around the world are typically discarded after only one use. This innovation provides a green solution to this wasteful practice. Each of the flooring modules in the previous system weighed 44 lb (approximately 20 kg). The improvements in the overall design of the system reduce the weight of each module by approximately 22 lb (approximately 10 kg), or 50%, and utilize an improved "module-to-module" connection method that is superior to the previous system. The MFS comprises 4-by-4-ft (approximately 1.2-by-1.2-m) carpet module assemblies that utilize commercially available carpet tiles that are bonded to a lightweight substrate. The substrate surface opposite from the carpeted surface has a module-to-module connecting interface that allows for the modules to be connected, one to the other, as the modules are constructed. This connection is hidden underneath the modules, creating a smooth, co-planar flooring surface. The modules are stacked and strapped

  18. Anxiety sensitivity and intolerance of uncertainty as potential risk factors for cyberchondria: A replication and extension examining dimensions of each construct.

    PubMed

    Fergus, Thomas A

    2015-09-15

    Preliminary findings suggest that anxiety sensitivity (AS) and intolerance of uncertainty (IU) may confer vulnerability for cyberchondria, defined as repeated internet searches for medical information that exacerbates health anxiety. Prior studies are limited because it remains unclear whether specific AS or IU dimensions differentially relate to certain cyberchondria dimensions. The present study examined associations among AS, IU, and cyberchondria dimensions using a sample of community adults (N = 578) located in the United States. As predicted, physical AS and inhibitory IU were the only AS or IU dimensions to share unique associations with the distress cyberchondria dimension after controlling for the overlap among the AS dimensions, IU dimensions, and health anxiety. Cognitive AS and social AS unexpectedly evidenced unique associations with cyberchondria dimensions. The results are limited by the cross-sectional study design and use of a community, rather than clinical, sample. This study provides evidence that specific AS and IU dimensions may confer vulnerability to certain cyberchondria dimensions. Further clarifying associations among AS, IU, and cyberchondria may lead to improvements in our conceptualization and, ultimately, treatment of cyberchondria. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Uncertainty in hydrological signatures

    NASA Astrophysics Data System (ADS)

    Westerberg, I. K.; McMillan, H. K.

    2015-09-01

    Information about rainfall-runoff processes is essential for hydrological analyses, modelling and water-management applications. A hydrological, or diagnostic, signature quantifies such information from observed data as an index value. Signatures are widely used, e.g. for catchment classification, model calibration and change detection. Uncertainties in the observed data - including measurement inaccuracy and representativeness as well as errors relating to data management - propagate to the signature values and reduce their information content. Subjective choices in the calculation method are a further source of uncertainty. We review the uncertainties relevant to different signatures based on rainfall and flow data. We propose a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrate it in two catchments for common signatures including rainfall-runoff thresholds, recession analysis and basic descriptive signatures of flow distribution and dynamics. Our intention is to contribute to awareness and knowledge of signature uncertainty, including typical sources, magnitude and methods for its assessment. We found that the uncertainties were often large (i.e. typical intervals of ±10-40 % relative uncertainty) and highly variable between signatures. There was greater uncertainty in signatures that use high-frequency responses, small data subsets, or subsets prone to measurement errors. There was lower uncertainty in signatures that use spatial or temporal averages. Some signatures were sensitive to particular uncertainty types such as rating-curve form. We found that signatures can be designed to be robust to some uncertainty sources. Signature uncertainties of the magnitudes we found have the potential to change the conclusions of hydrological and ecohydrological analyses, such as cross-catchment comparisons or inferences about dominant processes.
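
    The Monte Carlo recipe can be sketched for a single signature, assuming a synthetic flow series and an uncorrelated ±10% (1-sigma) multiplicative discharge error; real rating-curve errors are correlated in time and the paper's method is more general, so this is illustrative only.

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic "observed" daily flow series (stand-in for real discharge data).
        t = np.arange(365)
        flow = 5 + 4 * np.sin(2 * np.pi * t / 365) + rng.lognormal(0, 0.3, 365)

        def signature(q):
            # Example signature: ratio of the 95th to the 5th percentile of daily flow,
            # a simple index of flow variability.
            return np.percentile(q, 95) / np.percentile(q, 5)

        # Propagate the assumed discharge error by Monte Carlo resampling.
        samples = [signature(flow * rng.normal(1.0, 0.10, flow.size)) for _ in range(5000)]
        lo, hi = np.percentile(samples, [2.5, 97.5])
        print(f"signature = {signature(flow):.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")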

  20. Calculation of surface and top of atmosphere radiative fluxes from physical quantities based on ISCCP data sets. 1: Method and sensitivity to input data uncertainties

    NASA Technical Reports Server (NTRS)

    Zhang, Y.-C.; Rossow, W. B.; Lacis, A. A.

    1995-01-01

    The largest uncertainty in upwelling shortwave (SW) fluxes (approximately 10-15 W/m^2, regional daily mean) is caused by uncertainties in land surface albedo, whereas the largest uncertainty in downwelling SW at the surface (approximately 5-10 W/m^2, regional daily mean) is related to cloud detection errors. The uncertainty of upwelling longwave (LW) fluxes (approximately 10-20 W/m^2, regional daily mean) depends on the accuracy of the surface temperature for the surface LW fluxes and the atmospheric temperature for the top of atmosphere LW fluxes. The dominant source of uncertainty in downwelling LW fluxes at the surface (approximately 10-15 W/m^2) is uncertainty in atmospheric temperature and, secondarily, atmospheric humidity; clouds play little role except in the polar regions. The uncertainties of the individual flux components and the total net fluxes are largest over land (15-20 W/m^2) because of uncertainties in surface albedo (especially its spectral dependence) and surface temperature and emissivity (including its spectral dependence). Clouds are the most important modulator of the SW fluxes, but over land areas, uncertainties in net SW at the surface depend almost as much on uncertainties in surface albedo. Although atmospheric and surface temperature variations cause larger LW flux variations, the most notable feature of the net LW fluxes is the changing relative importance of clouds and water vapor with latitude. Uncertainty in individual flux values is dominated by sampling effects because of large natural variations, but uncertainty in monthly mean fluxes is dominated by bias errors in the input quantities.

  1. Crater Wall and Floor

    NASA Technical Reports Server (NTRS)

    2003-01-01

    [figure removed for brevity, see original site]

    3D Projection onto MOLA data [figure removed for brevity, see original site]

    The flat, smooth nature of the floor of the impact crater observed in this THEMIS image, taken in Terra Cimmeria, compared with the rougher surfaces at higher elevations, suggests that sediments have filled the crater. The abundance of several smaller impact craters on the floor of the larger crater indicates, however, that the flat surface has been exposed for an extended period of time. The smooth surface of the crater floor and the rougher surfaces at higher elevations are observed in the 3-D THEMIS image that is draped over MOLA topography (2X vertical exaggeration).

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

    Image information: VIS instrument. Latitude -22.9, Longitude 155.7 East (204.3 West). 19 meter/pixel resolution.

  2. Sensitivity of long-term soil carbon storage to vegetation parameter uncertainty in the GISS ModelE2 Earth System Model C4MIP experiments.

    NASA Astrophysics Data System (ADS)

    Aleinov, I. D.; Kiang, N. Y.; Montes, C.

    2016-12-01

    Land carbon dynamics are currently the greatest source of uncertainty in the trajectory of the global carbon budget over the next century. Of this, the most poorly constrained aspect is soil carbon storage as driven by litter inputs from vegetation, whether natural or managed, and climate change effects on soil biogeochemistry. We examine the sensitivity of simulated soil carbon storage to uncertainty in plant leaf traits in a set of experiments for the Coupled Climate-Carbon Cycle Model Intercomparison Project (C4MIP) performed with the Goddard Institute for Space Studies (GISS) ModelE2 General Circulation Model (ModelE2 GCM). The GISS ModelE2 carbon cycle capabilities include terrestrial carbon fluxes computed by the Ent Terrestrial Biosphere Model (Ent TBM), a demographic dynamic global vegetation model which includes vegetation biophysics and phenology modules, a Carnegie-Ames-Stanford Approach (CASA)-based soil biogeochemistry module, and an original formulation of gap-probability-based vegetation canopy radiative transfer. In these experiments, we prescribe ocean carbon fluxes, but otherwise ModelE2 can simulate these with the NASA Ocean Biogeochemistry Model (NOBM) (Gregg et al., 2000). Propagation of CO2 tracers in the atmosphere is performed by a quadratic upstream method (Prather 1986), which is known for a very low numerical diffusivity. We use as boundary conditions a global vegetation structure data set derived from satellite data that specifies seasonal leaf area index and canopy heights, and historical land cover. The model is spun up for preindustrial climate and prescribed atmospheric CO2 for 300 simulation years for soil carbon to reach equilibrium. These spin-ups are repeated for scenarios of lower and upper estimates of leaf mass per area derived from the TRY database (Kattge et al. 2011), and respiration parameters tuned to equilibrate plant carbohydrate reserves under preindustrial climate. We then perform 20th century transient simulations from

  3. Modeling, Uncertainty Quantification and Sensitivity Analysis of Subsurface Fluid Migration in the Above Zone Monitoring Interval of a Geologic Carbon Storage

    NASA Astrophysics Data System (ADS)

    Namhata, A.; Dilmore, R. M.; Oladyshkin, S.; Zhang, L.; Nakles, D. V.

    2015-12-01

    Carbon dioxide (CO2) storage into geological formations has significant potential for mitigating anthropogenic CO2 emissions. An increasing emphasis on the commercialization and implementation of this approach to store CO2 has led to the investigation of the physical processes involved and to the development of system-wide mathematical models for the evaluation of potential geologic storage sites and the risk associated with them. The sub-system components under investigation include the storage reservoir, caprock seals, and the above zone monitoring interval, or AZMI, to name a few. Diffusive leakage of CO2 through the caprock seal to overlying formations may occur due to its intrinsic permeability and/or the presence of natural/induced fractures. This results in a potential risk to environmental receptors such as underground sources of drinking water. In some instances, leaking CO2 also has the potential to reach the ground surface and result in atmospheric impacts. In this work, fluid (i.e., CO2 and brine) flow above the caprock, in the region designated as the AZMI, is modeled for a leakage event of a typical geologic storage system with different possible boundary scenarios. An analytical and approximate solution for radial migration of fluids in the AZMI with continuous inflow of fluids from the reservoir through the caprock has been developed. In its present form, the AZMI model predicts the spatial changes in pressure and gas saturation over time in a layer immediately above the caprock. The modeling is performed for a benchmark case and the data-driven approach of arbitrary Polynomial Chaos (aPC) Expansion is used to quantify the uncertainty of the model outputs based on the uncertainty of model input parameters such as porosity, permeability, formation thickness, and residual brine saturation. The recently developed aPC approach performs stochastic model reduction and approximates the models by a polynomial-based response surface. Finally, a global
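
    The aPC expansion builds polynomials orthogonal with respect to the input data themselves; the sketch below substitutes a plain degree-2 polynomial surrogate fitted by least squares to show the same response-surface workflow: sample the uncertain inputs, fit the surrogate to a modest number of model runs, then do cheap Monte Carlo on the surrogate. The stand-in "model" and the input distributions are invented, not the AZMI model or its calibrated parameters.

        import numpy as np

        rng = np.random.default_rng(5)

        def pressure_model(porosity, log_perm):
            # Hypothetical stand-in for the expensive AZMI flow model.
            return 2.0 + 30.0 * (0.25 - porosity) + 1.5 * (log_perm + 13.0) ** 2

        # 1) Sample uncertain inputs from assumed distributions.
        n = 200
        X = np.column_stack([rng.uniform(0.1, 0.4, n),      # porosity
                             rng.normal(-13.0, 0.5, n)])    # log10 permeability
        y = np.array([pressure_model(*x) for x in X])

        # 2) Fit a degree-2 polynomial response surface (stand-in for the aPC expansion).
        def basis(X):
            p, k = X[:, 0], X[:, 1]
            return np.column_stack([np.ones(len(X)), p, k, p * k, p ** 2, k ** 2])

        coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

        # 3) Cheap Monte Carlo on the surrogate.
        Xs = np.column_stack([rng.uniform(0.1, 0.4, 100_000),
                              rng.normal(-13.0, 0.5, 100_000)])
        ys = basis(Xs) @ coef
        print(f"surrogate mean = {ys.mean():.2f}, std = {ys.std():.2f}")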

  4. Mesas on Depression Floor

    NASA Technical Reports Server (NTRS)

    2004-01-01

    3 August 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows mesas and buttes on the floor of a depression in the Labyrinthus Noctis region of Mars. This is part of the western Valles Marineris. Each mesa is a remnant of a formerly more extensive sequence of rock. The image is located near 7.0°S, 99.2°W. It covers an area about 3 km (1.9 mi) across; sunlight illuminates the scene from the lower left.

  5. [Pelvic floor muscle training and pelvic floor disorders in women].

    PubMed

    Thubert, T; Bakker, E; Fritel, X

    2015-05-01

    Our goal is to provide an update on the results of pelvic floor rehabilitation in the treatment of urinary incontinence and genital prolapse symptoms. Pelvic floor muscle training allows a reduction of urinary incontinence symptoms. Pelvic floor muscle contractions supervised by a healthcare professional allow cure in half of cases of stress urinary incontinence. Viewing this contraction through biofeedback improves outcomes, but this effect could also be due to a more intensive and prolonged program with the physiotherapist. The place of electrostimulation remains unclear. The results obtained with vaginal cones are similar to pelvic floor muscle training with or without biofeedback or electrostimulation. It is not known whether pelvic floor muscle training has an effect after one year. In case of stress urinary incontinence, supervised pelvic floor muscle training avoids surgery in half of the cases at 1-year follow-up. Pelvic floor muscle training is the first-line treatment of post-partum urinary incontinence. Its preventive effect is uncertain. Pelvic floor muscle training may reduce the symptoms associated with genital prolapse. In conclusion, pelvic floor rehabilitation supervised by a physiotherapist is an effective short-term treatment to reduce the symptoms of urinary incontinence or pelvic organ prolapse.

  6. Floor of Juventae Chasma

    NASA Technical Reports Server (NTRS)

    2002-01-01

    (Released 30 May 2002) Juventae Chasma is an enormous box canyon (250 km X 100 km) which opens to the north and forms the outflow channel Maja Vallis. Most Martian outflow channels such as Maja, Kasei, and Ares Valles begin at point sources such as box canyons and chaotic terrain and then flow unconfined into a basin region. This image captures a portion of the western floor of Juventae Chasma and shows a wide variety of landforms. Conical hills, mesas, buttes and plateaus of layered material dominate this scene and seem to be 'swimming' in vast sand sheets. The conical hills have a spur and gully topography associated with them while the flat topped buttes and mesas do not. This may be indicative of different materials that compose each of these landforms or it could be that the flat-topped layer has been completely eroded off of the conical hills thereby exposing a different rock type. Both the conical hills and flat-topped buttes and mesas have extensive scree slopes (heaps of eroded rock and debris). Ripples, which are inferred to be dunes, can also be seen amongst the hills. No impact craters can be seen in this image, indicating that the erosion and transport of material down the canyon wall and across the floor is occurring at a relatively rapid rate, so that any craters that form are rapidly buried or eroded.

  8. Candor Chasma Floor

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Context image for PIA03080 Candor Chasma Floor

    This VIS image shows part of the layered and wind-sculpted deposit that occurs on the floor of Candor Chasma.

    Image information: VIS instrument. Latitude 6.6S, Longitude 284.4E. 17 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  9. Canyon Floor Deposits

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Context image for PIA03598 Canyon Floor Deposits

    The layered and wind-eroded deposits seen in this VIS image occur on the floor of Candor Chasma.

    Image information: VIS instrument. Latitude 5.2S, Longitude 283.4E. 17 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  10. Spallanzani Cr. Floor

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Context image for PIA03632 Spallanzani Cr. Floor

    This image was taken by one of the Mars Student Imaging Project (MSIP) teams. Their target is the unusual floor deposits in Spallanzani Crater. The wind may have affected the surface of the layered deposit. Small dunes have formed near the southern margin.

    Image information: VIS instrument. Latitude 57.9S, Longitude 86.5E. 17 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  11. Measurement Uncertainty

    NASA Astrophysics Data System (ADS)

    Koch, Michael

    Measurement uncertainty is one of the key issues in quality assurance. It has become increasingly important for analytical chemistry laboratories with accreditation to ISO/IEC 17025. The uncertainty of a measurement is the most important criterion for the decision whether a measurement result is fit for purpose. It also delivers help for the decision whether a specification limit is exceeded or not. Estimation of measurement uncertainty is often not trivial. Several strategies have been developed for this purpose and are briefly described in this chapter. In addition, the different possibilities for taking uncertainty into account in compliance assessment are explained.
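
    One common estimation strategy, combining component standard uncertainties in quadrature and expanding with a coverage factor, can be sketched as follows; the component values, the measurement result, and the specification limit are all assumed for illustration. The final three-way decision shows how the expanded uncertainty feeds into compliance assessment.

        import math

        # Component standard uncertainties for a hypothetical analytical result.
        u_components = {
            "repeatability": 0.8,   # from replicate measurements
            "calibration":   0.5,   # from calibration data
            "bias":          0.6,   # uncertainty of the estimated bias
        }
        u_c = math.sqrt(sum(u ** 2 for u in u_components.values()))  # combined, componentwise
        U = 2 * u_c                                                  # expanded, k = 2 (~95%)
        print(f"u_c = {u_c:.2f}, U(k=2) = {U:.2f}")

        result, limit = 48.9, 50.0   # assumed measurement result and specification limit
        if result - U > limit:
            print("limit exceeded beyond reasonable doubt")
        elif result + U < limit:
            print("compliant beyond reasonable doubt")
        else:
            print("uncertainty interval straddles the limit; no firm decision")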

  12. Conceptual Uncertainty and Parameter Sensitivity in Subsurface Pathway Flow and Transport Modeling for the Idaho National Engineering and Environmental Laboratory's Subsurface Disposal Area

    NASA Astrophysics Data System (ADS)

    Magnuson, S. O.

    2002-05-01

    As part of an ongoing CERCLA evaluation, the migration of contaminants through the hydrologically complex subsurface at the Idaho National Engineering and Environmental Laboratory Subsurface Disposal Area (SDA) was modeled. The 180-meter-thick vadose zone beneath the SDA is primarily composed of extrusive basalt flows that are extensively fractured. These flows are interrupted by thin, mostly continuous sedimentary interbeds that were deposited through aeolian and fluvial processes during periods of volcanic quiescence. The subsurface pathway modeling for the CERCLA assessment has been conducted in phases utilizing the results of characterization activities. The most recent model for the SDA used an equivalent porous continuum approach in a three-dimensional domain to represent movement of water and contaminants in the subsurface. Given the complexity of the subsurface at this site, the simulation results were acknowledged to be uncertain. This presentation will provide an overview of the current modeling effort for the SDA and how conceptual uncertainty was addressed by modeling different scenarios. These scenarios included assignment of infiltration boundary conditions, the effect of superimposing gaps in the interbeds, the effect within the vadose zone from Big Lost River water discharged to the spreading areas approximately 1 km away, and a simplistic approximation to represent facilitated transport. Parametric sensitivity simulations were used to determine possible effects from assigned transport parameters such as partition coefficients and solubility limits that can vary widely with presumed geochemical conditions. Comparisons of simulated transport results to measured field concentrations in both the vadose zone and in the underlying Snake River Plain aquifer were made to determine the representativeness of the model results. Results of the SDA subsurface transport modeling have been used in part to guide additional field characterization

  13. Floor Plans: Section "AA", Section "BB"; Floor Framing Plans: Section ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Floor Plans: Section "A-A", Section "B-B"; Floor Framing Plans: Section "A-A", Section "B-B" - Fort Washington, Fort Washington Light, Northeast side of Potomac River at Fort Washington Park, Fort Washington, Prince George's County, MD

  14. 4. STAIR, FROM SECOND FLOOR TO THIRD FLOOR, FROM NORTHEAST. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. STAIR, FROM SECOND FLOOR TO THIRD FLOOR, FROM NORTHEAST. Plan of stair is elliptical, the inside well measuring 54' on major axis and 14' on minor axis. ALSO NOTE HIGH REEDED WAINSCOT - Saltus-Habersham House, 802 Bay Street, Beaufort, Beaufort County, SC

  15. 18. FOURTH FLOOR BLDG. 28, RAISED CONCRETE SLAB FLOOR WITH ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    18. FOURTH FLOOR BLDG. 28, RAISED CONCRETE SLAB FLOOR WITH BLOCKS AND PULLEYS OVERHEAD LOOKING NORTHEAST. - Fafnir Bearing Plant, Bounded on North side by Myrtle Street, on South side by Orange Street, on East side by Booth Street & on West side by Grove Street, New Britain, Hartford County, CT

  16. 18. Interior view, middle floor, showing concrete floor beams and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    18. Interior view, middle floor, showing concrete floor beams and original openings for cables, looking west. - New York, New Haven, & Hartford Railroad, Shell Interlocking Tower, New Haven Milepost 16, approximately 100 feet east of New Rochelle Junction, New Rochelle, Westchester County, NY

  17. 13. Bottom floor, tower interior showing concrete floor and cast ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. Bottom floor, tower interior showing concrete floor and cast iron bases for oil butts (oil butts removed when lighthouse lamp was converted to electric power.) - Block Island Southeast Light, Spring Street & Mohegan Trail at Mohegan Bluffs, New Shoreham, Washington County, RI

  18. 18. MAIN FLOOR HOLDING TANKS Main floor, looking at ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    18. MAIN FLOOR - HOLDING TANKS Main floor, looking at holding tanks against the west wall, from which sluice gates are seen protruding. Right foreground-wooden holding tanks. Note narrow wooden flumes through which fish were sluiced into holding and brining tanks. - Hovden Cannery, 886 Cannery Row, Monterey, Monterey County, CA

  19. VIEW OF WIDE STAIR TO SECOND FLOOR FROM GROUND FLOOR. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF WIDE STAIR TO SECOND FLOOR FROM GROUND FLOOR. VIEW FACING SOUTH - U.S. Naval Base, Pearl Harbor, Ford Island Polaris Missile Lab & U.S. Fleet Ballistic Missile Submarine Training Center, Between Lexington Boulevard and the sea plane ramps on the southwest side of Ford Island, Pearl City, Honolulu County, HI

  20. Simultaneous measurement of pelvic floor muscle activity and vaginal blood flow: a pilot study.

    PubMed

    Both, Stephanie; Laan, Ellen

    2007-05-01

    Dyspareunia, defined as persistent or recurrent genital pain associated with sexual intercourse, is hypothesized to be related to pelvic floor hyperactivity and to diminished sexual arousal. Empirical research to support these hypotheses is scarce and concentrates mostly on the role of either pelvic floor activity or genital arousal in female dyspareunia. Currently, however, there is no measurement device to assess pelvic floor activity and genital response simultaneously. The aim of this study was to investigate the validity of a new device that enables simultaneous measurement of pelvic floor activity and genital response in women. The main outcome measures were genital arousal, measured as vaginal pulse amplitude, and vaginal surface electromyogram (EMG). Thirty sexually functional women participated. To investigate the accuracy of genital response measurement with the adapted photoplethysmograph, and the sensitivity of the device for involuntary changes in pelvic floor activity, vaginal pulse amplitude and vaginal surface EMG were monitored during exposure to emotional, including erotic, films. In addition, vaginal surface EMG was monitored during instructed pelvic floor contractions. The genital data obtained during the emotional films demonstrated accurate measurement of genital response. EMG values during the emotional films indicated limited sensitivity of the device for small, involuntary changes in pelvic floor activity due to emotional state. The EMG measurements during the instructed pelvic floor contractions demonstrated the sensitivity of the new probe to voluntary pelvic floor activity. It is concluded that, following improvement of the sensitivity of the EMG measurement for small, involuntary changes in pelvic floor activity, the device will be a valuable tool in research on superficial dyspareunia.

  1. Adjoint-Based Uncertainty Quantification with MCNP

    NASA Astrophysics Data System (ADS)

    Seifried, Jeffrey Edwin

    This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
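
    With the sensitivities in hand, the nuclear-data contribution to the response uncertainty follows from the standard first-order "sandwich rule"; the sketch below uses a hypothetical sensitivity vector and relative covariance matrix, not the LIFE-blanket values.

        import numpy as np

        # Sandwich rule: (dR/R)^2 = S^T C S, where S holds relative sensitivities of the
        # response R to nuclear-data parameters and C their relative covariance matrix.
        S = np.array([0.9, -0.3, 0.15])           # hypothetical d(ln R)/d(ln sigma_i)
        C = np.array([[4.0e-4, 1.0e-4, 0.0],
                      [1.0e-4, 9.0e-4, 0.0],
                      [0.0,    0.0,    2.5e-4]])  # hypothetical relative covariances
        rel_unc = np.sqrt(S @ C @ S)
        print(f"relative uncertainty in the response: {rel_unc:.2%}")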

  2. Adjoint-Based Uncertainty Quantification with MCNP

    SciTech Connect

    Seifried, Jeffrey E.

    2011-09-01

    This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.

  3. Floor-plan radar

    NASA Astrophysics Data System (ADS)

    Falconer, David G.; Ueberschaer, Ronald M.

    2000-07-01

    Urban-warfare specialists, law-enforcement officers, counter-drug agents, and counter-terrorism experts encounter operational situations where they must assault a target building and capture or rescue its occupants. To minimize potential casualties, the assault team needs a picture of the building's interior and a copy of its floor plan. With this need in mind, we constructed a scale model of a single- story house and imaged its interior using synthetic-aperture techniques. The interior and exterior walls nearest the radar set were imaged with good fidelity, but the distal ones appear poorly defined and surrounded by ghosts and artifacts. The latter defects are traceable to beam attenuation, wavefront distortion, multiple scattering, traveling waves, resonance phenomena, and other effects not accounted for in the traditional (noninteracting, isotropic point scatterer) model for radar imaging.
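
    The synthetic-aperture reconstruction itself can be sketched as delay-and-sum backprojection under exactly the noninteracting, isotropic point-scatterer model the abstract says breaks down for the distal walls. The aperture geometry, stepped-frequency sweep, and single scatterer below are assumptions for illustration, not the authors' experimental setup.

        import numpy as np

        c = 3e8
        xs = np.linspace(-1.0, 1.0, 64)        # assumed antenna positions along the aperture (m)
        target = np.array([0.3, 2.0])          # one point scatterer standing in for a wall feature
        freqs = np.linspace(1e9, 3e9, 101)     # assumed stepped-frequency sweep (Hz)

        # Simulated monostatic phase history for an ideal point scatterer.
        R = np.hypot(xs - target[0], target[1])
        data = np.exp(-1j * 4 * np.pi * freqs[None, :] * R[:, None] / c)

        # Delay-and-sum (backprojection) image over a pixel grid.
        gx, gy = np.meshgrid(np.linspace(-1, 1, 81), np.linspace(1, 3, 81), indexing="ij")
        img = np.zeros(gx.shape, dtype=complex)
        for xa, row in zip(xs, data):
            r = np.hypot(xa - gx, gy)          # pixel ranges for this antenna position
            img += (row * np.exp(1j * 4 * np.pi * freqs / c * r[..., None])).sum(-1)

        peak = np.unravel_index(np.abs(img).argmax(), img.shape)
        print("image peak at", gx[peak], gy[peak])   # lands near the true scatterer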

  4. Ocean floor boundaries.

    PubMed

    Hedberg, H D

    1979-04-13

    The base of the continental slope, combined with the concepts of a boundary zone, a technical advisory boundary commission, and special treatment for restricted seas, offers a readily attainable, natural, practicable, and equitable boundary between national and international jurisdiction over the ocean floor. There is no point in bringing into the boundary formula the unnecessary added complication of thickness of sediments, as recently proposed. Review of the U.S. offshore brings out the critical importance, with respect to energy resources, of a proper choice of boundary principles and a proper determination of the base-of-continent line about our shores. The advice of the pertinent science and technology community should urgently be sought and contributed to decisions on offshore boundaries.

  5. Flow Along Valley Floors

    NASA Technical Reports Server (NTRS)

    2003-01-01

    [figure removed for brevity, see original site]

    Released 9 May 2003

    Lines indicative of flow in a valley floor (east to west) cut across similar lines in a slightly smaller valley (southeast to northwest), indicating both that material flowed along the valley floor (as opposed to across it) and that relative flow ages may be determined from crosscutting relationships.

    Image information: VIS instrument. Latitude 39.6, Longitude 31.1East (328.9). 19 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  6. Analysis of the Sensitivity and Uncertainty in 2-Stage Clonal Growth Models for Formaldehyde with Relevance to Other Biologically-Based Dose Response (BBDR) Models

    EPA Science Inventory

    The National Center for Environmental Assessment (NCEA) has conducted and supported research addressing uncertainties in 2-stage clonal growth models for cancer as applied to formaldehyde. In this report, we summarized publications resulting from this research effort, discussed t...

  8. What's New in Floor Care.

    ERIC Educational Resources Information Center

    Griffin, William R.

    1999-01-01

    Examines some of the new equipment, chemicals, and procedures in floor care to help educational facility managers develop floor care programs and improve performance. Trends include more mechanization, higher concentrations and environmentally preferable products for cleaning, and the use of written cleaning procedures. (GR)

  9. SU-E-J-166: Sensitivity of Clinically Relevant Dosimetric Parameters to Contouring Uncertainty During Post Implant Dosimetry of Prostate Permanent Seed Implants

    SciTech Connect

    Mashouf, S; Ravi, A; Morton, G; Song, W

    2015-06-15

    Purpose: There is strong evidence relating post-implant dosimetry for permanent seed prostate brachytherapy to local control rates. The delineation of the prostate on CT images, however, represents a challenge as it is difficult to confidently identify the prostate borders from the soft tissue surrounding it. This study aims at quantifying the sensitivity of clinically relevant dosimetric parameters to prostate contouring uncertainty. Methods: The post-implant CT images and plans for a cohort of 43 patients, who had received I-125 permanent prostate seed implants in our centre, were exported to the MIM Symphony LDR brachytherapy treatment planning system (MIM Software Inc., Cleveland, OH). The prostate contours in post-implant CT images were expanded/contracted uniformly for margins of ±1.00mm, ±2.00mm, ±3.00mm, ±4.00mm and ±5.00mm (±0.01mm). The values for V100 and D90 were extracted from Dose Volume Histograms for each contour and compared. Results: The mean value of V100 and D90 was obtained as 92.3±8.4% and 108.4±12.3%, respectively (Rx=145Gy). V100 was reduced by −3.2±1.5%, −7.2±3.0%, −12.8±4.0%, −19.0±4.8%, −25.5±5.4% for expanded contours of the prostate with margins of +1mm, +2mm, +3mm, +4mm, and +5mm, respectively, while it was increased by 1.6±1.2%, 2.4±2.4%, 2.7±3.2%, 2.9±4.2%, 2.9±5.1% for the contracted contours. D90 was reduced by −6.9±3.5%, −14.5±6.1%, −23.8±7.1%, −33.6±8.5%, −40.6±8.7% and increased by 4.1±2.6%, 6.1±5.0%, 7.2±5.7%, 8.1±7.3% and 8.1±7.3% for the same set of contours. Conclusion: Systematic expansion errors of more than 1mm may likely render a plan sub-optimal. Conversely, contraction errors may result in a sub-optimal plan being labeled as optimal. The use of MRI images to contour the prostate should result in better delineation of the prostate, which increases the predictive value of post-op plans. Since observers tend to overestimate the prostate volume on CT, compared with MRI, the impact of the
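
    For reference, the two reported metrics are read directly off the dose distribution within a contour: V100 is the fraction of the contoured volume receiving at least the prescription dose, and D90 is the dose covering 90% of the volume (the 10th percentile of voxel doses). The sketch below uses synthetic voxel doses, not patient data.

        import numpy as np

        rng = np.random.default_rng(6)
        Rx = 145.0                                    # prescription dose (Gy)

        # Hypothetical voxel doses inside a prostate contour (stand-in for TPS output).
        dose = rng.gamma(shape=9.0, scale=20.0, size=50_000)

        v100 = np.mean(dose >= Rx) * 100.0            # % of volume receiving >= Rx
        d90 = np.percentile(dose, 10.0) / Rx * 100.0  # dose to 90% of volume, as % of Rx
        print(f"V100 = {v100:.1f}%, D90 = {d90:.1f}% of Rx")

    Expanding the contour pulls in low-dose voxels outside the implant, which drives both metrics down, consistent with the trend reported above.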

  10. Laparoscopy for pelvic floor disorders.

    PubMed

    Van Geluwe, B; Wolthuis, A; D'Hoore, A

    2014-02-01

    Surgical treatment of pelvic floor disorders has significantly evolved during the last decade, with increasing understanding of anatomy, pathophysiology and the minimally-invasive 'revolution' of laparoscopic surgery. Laparoscopic pelvic floor repair requires a thorough knowledge of pelvic floor anatomy and its supportive components before repair of defective anatomy is possible. Several surgical procedures have been introduced and applied to treat rectal prolapse syndromes. Transabdominal procedures include a variety of rectopexies with the use of sutures or prosthesis and with or without resection of redundant sigmoid colon. Unfortunately there is lack of one generally accepted standard treatment technique. This article will focus on recent advances in the management of pelvic floor disorders affecting defecation, with a brief overview of contemporary concepts in pelvic floor anatomy and different laparoscopic treatment options.

  11. STATISTICAL ANALYSIS OF TANK 19F FLOOR SAMPLE RESULTS

    SciTech Connect

    Harris, S.

    2010-09-02

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 19F as per the statistical sampling plan developed by Harris and Shine. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis samples results to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current scrape sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 19F. The uncertainty is quantified in this report by an UCL95% on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
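
    The report's one-sided upper confidence limit is the standard Student's t bound computed from the sample mean, standard deviation, and count; a minimal sketch with invented concentrations follows.

        import numpy as np
        from scipy import stats

        # Hypothetical analyte concentrations from six scrape samples (each value
        # taken as the mean of three analytical determinations, per the report).
        x = np.array([12.1, 9.8, 11.4, 10.6, 13.0, 10.2])

        n, mean, sd = len(x), x.mean(), x.std(ddof=1)
        ucl95 = mean + stats.t.ppf(0.95, df=n - 1) * sd / np.sqrt(n)
        print(f"mean = {mean:.2f}, UCL95% = {ucl95:.2f}")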

  12. STATISTICAL ANALYSIS OF TANK 18F FLOOR SAMPLE RESULTS

    SciTech Connect

    Harris, S.

    2010-09-02

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 18F as per the statistical sampling plan developed by Shine [1]. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL [2]. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis samples results [3] to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL{sub 95%}) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 18F. The uncertainty is quantified in this report by an upper 95% confidence limit (UCL{sub 95%}) on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL{sub 95%} was based entirely on the six current scrape sample results (each averaged across three analytical determinations).

  13. Low floor mass transit vehicle

    DOEpatents

    Emmons, J. Bruce; Blessing, Leonard J.

    2004-02-03

    A mass transit vehicle includes a frame structure that provides an efficient and economical approach to a low-floor bus. The inventive frame includes a stiff roof panel and a stiff floor panel. A plurality of generally vertical pillars extend between the roof and floor panels. A unique bracket arrangement for connecting the pillars to the panels is disclosed. Side panels are secured to the pillars and carry the shear stresses on the frame. Also disclosed is a seating assembly that can advantageously be incorporated into the vehicle, taking advantage of the load-distributing features of the inventive frame.

  14. Making A Precisely Level Floor

    NASA Technical Reports Server (NTRS)

    Simpson, William G.; Walker, William H.; Cather, Jim; Burch, John B.; Clark, Keith M.; Johnston, Dwight; Henderson, David E.

    1989-01-01

    Floor-pouring procedure yields large surface that is level, smooth, and hard. Floor made of self-leveling, slow-curing epoxy with added black pigment. Epoxy poured to thickness no greater than 0.33 in. (0.84 cm) on concrete base. Base floor seasoned, reasonably smooth and level, and at least 4 in. (10 cm) thick. Base rests on thermal barrier of gravel or cinders and contains no steel plates, dividers, or bridges to minimize thermal distortion. Metal retaining wall surrounds base.

  16. Sensitivity and uncertainty analysis for the tritium breeding ratio of a DEMO fusion reactor with a helium cooled pebble bed blanket

    NASA Astrophysics Data System (ADS)

    Nunnenmann, Elena; Fischer, Ulrich; Stieglitz, Robert

    2017-09-01

    An uncertainty analysis was performed for the tritium breeding ratio (TBR) of a fusion power plant of the European DEMO type using the MCSEN patch to the MCNP Monte Carlo code. The breeding blanket was of the Helium Cooled Pebble Bed (HCPB) type, currently under development in the European Power Plant Physics and Technology (PPPT) programme for a fusion power demonstration reactor (DEMO). A suitable 3D model of the DEMO reactor with HCPB blanket modules, as routinely used for blanket design calculations, was employed. The nuclear cross-section data were taken from the JEFF-3.2 data library. For the uncertainty analysis, the isotopes H-1, Li-6, Li-7, Be-9, O-16, Si-28, Si-29, Si-30, Cr-52, Fe-54, Fe-56, Ni-58, W-182, W-183, W-184 and W-186 were considered. The covariance data were taken from JEFF-3.2 where available; otherwise, FENDL-2.1 data were used for Li-7, EFF-3 for Be-9 and JENDL-3.2 for O-16. The results were compared with those obtained using covariance data from TENDL-2014, and a further comparison was performed with covariance data from JEFF-3.3T1. The analyses show an overall uncertainty of ± 3.2% for the TBR when using JEFF-3.2 covariance data with the mentioned additions. When TENDL-2014 covariance data are used as a replacement, the uncertainty increases to ± 8.6%; for JEFF-3.3T1 the result is ± 5.6%. The uncertainty is dominated by the O-16, Li-6 and Li-7 cross-sections.
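
    The generic first-order ("sandwich") propagation behind such analyses, var(TBR) ≈ S^T C S summed over nuclides, can be sketched as follows; the sensitivity profiles and covariance matrices here are hypothetical placeholders, not MCSEN/MCNP output.

```python
import numpy as np

def tbr_uncertainty(sensitivities, covariances):
    """First-order ("sandwich") propagation: var = S^T C S per nuclide.

    sensitivities: dict nuclide -> relative sensitivity profile
                   (dTBR/TBR per dsigma/sigma) in each energy group
    covariances:   dict nuclide -> relative covariance matrix on the
                   same group structure
    Returns total relative uncertainty and per-nuclide contributions.
    """
    contrib = {}
    for nuc, S in sensitivities.items():
        C = covariances[nuc]
        contrib[nuc] = float(S @ C @ S)  # relative variance contribution
    total = np.sqrt(sum(contrib.values()))
    return total, contrib

# Hypothetical 3-group data for two nuclides (illustration only).
S = {"Li-6": np.array([0.02, 0.10, 0.25]),
     "O-16": np.array([-0.05, -0.03, -0.01])}
C = {"Li-6": np.diag([0.010, 0.004, 0.001]),
     "O-16": np.diag([0.020, 0.010, 0.005])}
total, contrib = tbr_uncertainty(S, C)
print(f"relative TBR uncertainty: {100 * total:.2f}%")
```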

  17. 21. VIEW OF THE FIRST FLOOR PLAN. THE FIRST FLOOR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    21. VIEW OF THE FIRST FLOOR PLAN. THE FIRST FLOOR WAS USED FOR DEPLETED AND ENRICHED URANIUM FABRICATION. THE ORIGINAL DRAWING HAS BEEN ARCHIVED ON MICROFILM. THE DRAWING WAS REPRODUCED AT THE BEST QUALITY POSSIBLE. LETTERS AND NUMBERS IN THE CIRCLES INDICATE FOOTER AND/OR COLUMN LOCATIONS. - Rocky Flats Plant, Uranium Rolling & Forming Operations, Southeast section of plant, southeast quadrant of intersection of Central Avenue & Eighth Street, Golden, Jefferson County, CO

  18. 23. VIEW OF THE FIRST FLOOR PLAN. THE FIRST FLOOR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    23. VIEW OF THE FIRST FLOOR PLAN. THE FIRST FLOOR HOUSED ADMINISTRATIVE OFFICES, THE CENTRAL COMPUTING, UTILITY SYSTEMS, ANALYTICAL LABORATORIES, AND MAINTENANCE SHOPS. THE ORIGINAL DRAWING HAS BEEN ARCHIVED ON MICROFILM. THE DRAWING WAS REPRODUCED AT THE BEST QUALITY POSSIBLE. LETTERS AND NUMBERS IN THE CIRCLES INDICATE FOOTER AND/OR COLUMN LOCATIONS. - Rocky Flats Plant, General Manufacturing, Support, Records-Central Computing, Southern portion of Plant, Golden, Jefferson County, CO

  19. 22. VIEW OF THE SECOND FLOOR PLAN. THE SECOND FLOOR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    22. VIEW OF THE SECOND FLOOR PLAN. THE SECOND FLOOR CONTAINS THE AIR PLENUM AND SOME OFFICE SPACE. THE ORIGINAL DRAWING HAS BEEN ARCHIVED ON MICROFILM. THE DRAWING WAS REPRODUCED AT THE BEST QUALITY POSSIBLE. LETTERS AND NUMBERS IN THE CIRCLES INDICATE FOOTER AND/OR COLUMN LOCATIONS. - Rocky Flats Plant, Uranium Rolling & Forming Operations, Southeast section of plant, southeast quadrant of intersection of Central Avenue & Eighth Street, Golden, Jefferson County, CO

  20. Pelvic floor muscle training exercises

    MedlinePlus

    Dumoulin C, Hay-Smith J. Pelvic floor muscle training versus no treatment, … (nlm.nih.gov/pubmed/22258946). Herderschee R, Hay-Smith EJC, Herbison GP, Roovers JP, Heineman MJ. Feedback … (nlm.nih.gov/pubmed/20091581).

  1. Sensitivity Analysis Based Approaches for Mitigating the Effects of Reducible Interval Input Uncertainty on Single- and Multi-Disciplinary Systems Using Multi-Objective Optimization

    DTIC Science & Technology

    2010-01-01

    Decomposition [e.g. Conejo et al., 2006] could provide the answer to this very real and challenging problem that must be addressed in the near future. … "Decoupled approach to multi-disciplinary design optimization under uncertainty," Optimization and Engineering, 8(1), pp. 21-42. [13] Conejo, A. J., E…

  2. Channel Floor Yardangs

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 19 July 2004 The atmosphere of Mars is a dynamic system. Water-ice clouds, fog, and hazes can make imaging the surface from space difficult. Dust storms can grow from local disturbances to global sizes, through which imaging is impossible. Seasonal temperature changes are the usual drivers in cloud and dust storm development and growth.

    Eons of atmospheric dust storm activity have left their mark on the surface of Mars. Dust carried aloft by the wind has settled out on every available surface; sand dunes have been created and moved by centuries of wind; and the effect of continual sand-blasting has modified many regions of Mars, creating yardangs and other unusual surface forms.

    The yardangs in this image are forming in channel floor deposits. The channel itself is funneling the wind to cause the erosion.

    Image information: VIS instrument. Latitude 4.5, Longitude 229.7 East (133.3 West). 19 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are

  3. Teaching Uncertainties

    ERIC Educational Resources Information Center

    Duerdoth, Ian

    2009-01-01

    The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…

  4. Tangential Floor in a Classroom Setting

    ERIC Educational Resources Information Center

    Marti, Leyla

    2012-01-01

    This article examines floor management in two classroom sessions: a task-oriented computer lesson and a literature lesson. Recordings made in the computer lesson show the organization of floor when a task is given to students. Temporary or "incipient" side floors (Jones and Thornborrow, 2004) emerge beside the main floor. In the literature lesson,…

  5. Uncertainty of modelled urban peak O3 concentrations and its sensitivity to input data perturbations based on the Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Pineda Rojas, Andrea L.; Venegas, Laura E.; Mazzeo, Nicolás A.

    2016-09-01

    A simple urban air quality model [MODelo de Dispersión Atmosférica Urbana - Generic Reaction Set (DAUMOD-GRS)] was recently developed. One-hour peak O3 concentrations in the Metropolitan Area of Buenos Aires (MABA) during the summer estimated with the DAUMOD-GRS model have shown values lower than 20 ppb (the regional background concentration) in the urban area and levels greater than 40 ppb in its surroundings. Due to the lack of measurements outside the MABA, these relatively high modelled ozone concentrations constitute the only estimate for the area. In this work, a methodology based on Monte Carlo analysis is implemented to evaluate the uncertainty in these modelled concentrations associated with possible errors in the model input data. Results show that the larger 1-h peak O3 levels in the MABA during the summer present larger uncertainties (up to 47 ppb). On the other hand, multiple linear regression analysis is applied at selected receptors in order to identify the variables explaining most of the obtained variance. Although their relative contributions vary spatially, the uncertainty of the regional background O3 concentration dominates at all the analysed receptors (34.4-97.6%), indicating that its estimation could be improved to enhance the ability of the model to simulate peak O3 concentrations in the MABA.
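
    The Monte Carlo procedure described, perturbing inputs within assumed error ranges and regressing the output on the inputs, can be sketched as follows; the stand-in model function and the error ranges are hypothetical, not DAUMOD-GRS.

```python
import numpy as np

rng = np.random.default_rng(0)

def peak_o3(background, emissions, wind):
    """Stand-in for the dispersion model: any function mapping inputs
    to a 1-h peak O3 concentration (ppb) would occupy this slot."""
    return background + 0.8 * emissions / max(wind, 0.5)

# Perturb each input within an assumed error range (hypothetical values).
n = 10_000
bg   = rng.normal(20.0, 5.0, n)   # regional background O3, ppb
emis = rng.normal(30.0, 6.0, n)   # precursor emissions index
wind = rng.normal(3.0, 0.8, n)    # wind speed, m/s

o3 = np.array([peak_o3(b, e, w) for b, e, w in zip(bg, emis, wind)])
print(f"peak O3: {o3.mean():.1f} +/- {o3.std():.1f} ppb")

# Identify the inputs explaining most of the output variance via
# multiple linear regression, as in the abstract.
X = np.column_stack([bg, emis, wind, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, o3, rcond=None)
print("regression coefficients (bg, emis, wind):", coef[:3].round(2))
```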

  6. Pilot study to examine use of transverse vibration nondestructive evaluation for assessing floor systems

    Treesearch

    Zhiyong. Cai; Robert J. Ross; Michael O. Hunt; Lawrence A. Soltis

    2002-01-01

    Evaluation of existing timber structures requires procedures to evaluate in situ structural members and components. This report evaluates the transverse vibration response of laboratory-built floor systems with new and salvaged joists. The objectives were to 1) compare floor system response to individual member response; 2) examine response sensitivity to location of...

  7. Functional anatomy of pelvic floor.

    PubMed

    Rocca Rossetti, Salvatore

    2016-03-31

    Descriptions of the pelvic floor are generally discordant, owing to the complexity of its structures and of their pathological disorders; the descriptions are commonly sectorial, concerning muscles, fascial developments, ligaments and so on. To understand the nature and function of the pelvic floor completely, however, it is necessary to study it in the most unitary and global view, considering embryology, phylogeny, anthropological development and its multiple activities beyond the urological, gynaecological and intestinal ones. Recent findings have clarified many aspects of pelvic floor activity, whose musculature has been investigated through electromyography, sonography, magnetic resonance, histology, histochemistry and molecular research. Drawing on recent research concerning not only urinary and gynaecologic aspects but also the statics and dynamics of the pelvis and its floor, it is now possible to study this important body part as a unit, that is, to consider it within the whole body economy, in which maintaining the upright position, walking and physical conduct matter no less than the urinary, genital and intestinal functions. The pelvic floor can today be considered a musculofascial unit with synergic and antagonistic activity of muscular bundles, more or less interlaced, with multiple functions beyond the closure of the pelvic cup.

  8. Pelvic Floor Ultrasound: A Review.

    PubMed

    Dietz, Hans Peter

    2017-03-01

    Female pelvic floor dysfunction encompasses a number of prevalent conditions and includes pelvic organ prolapse, urinary and fecal incontinence, obstructed defecation, and sexual dysfunction. In most cases neither etiology nor pathophysiology is well understood. Imaging has great potential to enhance both research and clinical management capabilities, and to date this potential is underutilized. Of the available techniques such as x-ray, computed tomography, magnetic resonance imaging, and ultrasound, the latter is generally superior for pelvic floor imaging, especially in the form of perineal or translabial imaging. The technique is safe, simple, cheap, easily accessible and provides high spatial and temporal resolution.

  9. Stereo-particle image velocimetry uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sayantan; Charonko, John J.; Vlachos, Pavlos P.

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation, and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from the 2014 PIV challenge. A thorough sensitivity analysis was performed to assess the relative impact of the various parameters on the overall uncertainty. The results suggest that in the absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric
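
    A minimal sketch of the kind of propagation equation described, combining per-camera planar uncertainties through a stereo reconstruction Jacobian with a calibration term, is shown below; the Jacobian and uncertainty magnitudes are hypothetical, not the paper's.

```python
import numpy as np

def stereo_uncertainty(J, u_planar, u_calibration):
    """Propagate planar (per-camera) uncertainties to the three velocity
    components: cov(U) = J diag(u_planar^2) J^T, then add the calibration
    uncertainty in quadrature on the diagonal.

    J: 3x4 Jacobian of (u, v, w) with respect to the four measured
       in-plane displacement components of the two cameras.
    """
    cov_planar = J @ np.diag(np.asarray(u_planar) ** 2) @ J.T
    var_total = np.diag(cov_planar) + np.asarray(u_calibration) ** 2
    return np.sqrt(var_total)

# Hypothetical Jacobian for a +/-35 degree camera arrangement and
# hypothetical planar / registration uncertainty magnitudes (pixels).
J = np.array([[0.71, 0.0, -0.71, 0.0],
              [0.0,  0.5,  0.0,  0.5],
              [0.61, 0.0,  0.61, 0.0]])
u_uvw = stereo_uncertainty(J, u_planar=[0.1, 0.1, 0.1, 0.1],
                           u_calibration=[0.02, 0.02, 0.05])
print("u, v, w uncertainties:", u_uvw.round(3))
```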

  10. The pelvic floor in health and disease.

    PubMed Central

    Shelton, A A; Welton, M L

    1997-01-01

    Normal pelvic floor function involves a set of learned and reflex responses that are essential for the normal control and evacuation of stool. A variety of functional disturbances of the pelvic floor, including incontinence and constipation, are not life threatening, but can cause significant distress to affected patients. Understanding the normal anatomy and physiology of the pelvic floor is essential to understanding and treating these disorders of defecation. This article describes the normal function of the pelvic floor, the diagnostic tools available to investigate pelvic floor dysfunction, and the etiology, diagnosis, and management of the functional pelvic floor disorders that lead to incontinence and constipation. PMID:9291746

  11. Complete Sensitivity/Uncertainty Analysis of LR-0 Reactor Experiments with MSRE FLiBe Salt and Perform Comparison with Molten Salt Cooled and Molten Salt Fueled Reactor Models

    SciTech Connect

    Brown, Nicholas R.; Powers, Jeffrey J.; Mueller, Don; Patton, Bruce W.

    2016-12-01

    In September 2016, reactor physics measurements were conducted at Research Centre Rez (RC Rez) using the FLiBe (2 7LiF + BeF2) salt from the Molten Salt Reactor Experiment (MSRE) in the LR-0 low power nuclear reactor. These experiments were intended to provide information on neutron spectral effects and nuclear data uncertainties for advanced reactor systems using FLiBe salt in a thermal neutron energy spectrum. Oak Ridge National Laboratory (ORNL), in collaboration with RC Rez, performed sensitivity/uncertainty (S/U) analyses of these experiments as part of the ongoing collaboration between the United States and the Czech Republic on civilian nuclear energy research and development. The objectives of these analyses were (1) to identify potential sources of bias in fluoride salt-cooled and salt-fueled reactor simulations resulting from cross section uncertainties, and (2) to produce the sensitivity of neutron multiplication to cross section data on an energy-dependent basis for specific nuclides. This document is the final report on the S/U analyses of critical experiments at the LR-0 reactor relevant to fluoride salt-cooled high temperature reactor (FHR) and liquid-fueled molten salt reactor (MSR) concepts. In the future, these S/U analyses could be used to inform the design of additional FLiBe-based experiments using the salt from MSRE. The key finding of this work is that, for both solid and liquid fueled fluoride salt reactors, radiative capture in 7Li is the most significant contributor to potential bias in neutronics calculations within the FLiBe salt.

  12. Flooring for Schools: Unsightly Walkways

    ERIC Educational Resources Information Center

    Baxter, Mark

    2011-01-01

    Many mattress manufacturers recommend that consumers rotate their mattresses at least twice a year to help prevent soft spots from developing and increase the product's life span. It's unfortunate that the same kind of treatment can't be applied to flooring for schools, such as carpeting, especially in hallways. Being able to flip or turn a carpet…

  13. Ploughing the deep sea floor.

    PubMed

    Puig, Pere; Canals, Miquel; Company, Joan B; Martín, Jacobo; Amblas, David; Lastras, Galderic; Palanques, Albert

    2012-09-13

    Bottom trawling is a non-selective commercial fishing technique whereby heavy nets and gear are pulled along the sea floor. The direct impact of this technique on fish populations and benthic communities has received much attention, but trawling can also modify the physical properties of seafloor sediments, water–sediment chemical exchanges and sediment fluxes. Most of the studies addressing the physical disturbances of trawl gear on the seabed have been undertaken in coastal and shelf environments, however, where the capacity of trawling to modify the seafloor morphology coexists with high-energy natural processes driving sediment erosion, transport and deposition. Here we show that on upper continental slopes, the reworking of the deep sea floor by trawling gradually modifies the shape of the submarine landscape over large spatial scales. We found that trawling-induced sediment displacement and removal from fishing grounds causes the morphology of the deep sea floor to become smoother over time, reducing its original complexity as shown by high-resolution seafloor relief maps. Our results suggest that in recent decades, following the industrialization of fishing fleets, bottom trawling has become an important driver of deep seascape evolution. Given the global dimension of this type of fishery, we anticipate that the morphology of the upper continental slope in many parts of the world’s oceans could be altered by intensive bottom trawling, producing comparable effects on the deep sea floor to those generated by agricultural ploughing on land.

  15. Credible Computations: Standard and Uncertainty

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B.; VanDalsem, William (Technical Monitor)

    1995-01-01

    The discipline of computational fluid dynamics (CFD) is at a crossroad. Most of the significant advances related to computational methods have taken place. The emphasis is now shifting from methods to results. Significant efforts are made in applying CFD to solve design problems. The value of CFD results in design depends on the credibility of computed results for the intended use. The process of establishing credibility requires a standard so that there is consistency and uniformity in this process and in the interpretation of its outcome. The key element for establishing credibility is the quantification of uncertainty. This paper presents salient features of a proposed standard and a procedure for determining the uncertainty. A customer of CFD products - computer codes and computed results - expects that a computer code, in terms of its logic, numerics, and fluid dynamics, and the results generated by this code are in compliance with specified requirements. This expectation is fulfilled by verification and validation of these requirements. The verification process assesses whether the problem is solved correctly and the validation process determines whether the right problem is solved. Standards for these processes are recommended. There is always some uncertainty, even if one uses validated models and verified computed results. The value of this uncertainty is important in the design process. This value is obtained by conducting a sensitivity-uncertainty analysis. Sensitivity analysis is generally defined as the procedure for determining the sensitivities of output parameters to input parameters. This analysis is a necessary step in the uncertainty analysis, and its results highlight which computed and integrated quantities need to be determined accurately and which quantities do not require such attention. Uncertainty analysis is generally defined as the analysis of the effect of the uncertainties
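
    A minimal sketch of the sensitivity-uncertainty procedure defined above, using finite-difference sensitivities and first-order propagation; the surrogate "drag" function and its input uncertainties are hypothetical, not a real CFD code.

```python
import math

def sensitivity_uncertainty(f, x0, u_x, rel_step=1e-3):
    """Finite-difference sensitivities df/dx_i and first-order
    propagated uncertainty u_y = sqrt(sum((df/dx_i * u_xi)^2))."""
    y0 = f(x0)
    sens, var = [], 0.0
    for i, (xi, ui) in enumerate(zip(x0, u_x)):
        h = rel_step * max(abs(xi), 1.0)
        xp = list(x0)
        xp[i] = xi + h
        dfdx = (f(xp) - y0) / h
        sens.append(dfdx)
        var += (dfdx * ui) ** 2
    return y0, sens, math.sqrt(var)

# Stand-in "computed result": drag-like quantity from Mach number and
# angle of attack (hypothetical surrogate model).
def drag(x):
    mach, alpha = x
    return 0.02 + 0.05 * mach**2 + 0.01 * alpha**2

y, s, u = sensitivity_uncertainty(drag, [0.8, 2.0], u_x=[0.01, 0.1])
print(f"drag = {y:.4f} +/- {u:.4f}")
print("sensitivities:", [round(v, 4) for v in s])
```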

  16. Impacts of biological parameterization, initial conditions, and environmental forcing on parameter sensitivity and uncertainty in a marine ecosystem model for the Bering Sea

    NASA Astrophysics Data System (ADS)

    Gibson, G. A.; Spitz, Y. H.

    2011-11-01

    We use a series of Monte Carlo experiments to explore simultaneously the sensitivity of the BEST marine ecosystem model to environmental forcing, initial conditions, and biological parameterizations. Twenty model output variables were examined for sensitivity. The true sensitivity of biological and environmental parameters becomes apparent only when each parameter is allowed to vary within its realistic range. Many biological parameters were important only to their corresponding variable, but several biological parameters, e.g., microzooplankton grazing and small phytoplankton doubling rate, were consistently very important to several output variables. Assuming realistic biological and environmental variability, the standard deviation about simulated mean mesozooplankton biomass ranged from 1 to 14 mg C m-3 during the year. Annual primary productivity was not strongly correlated with temperature but was positively correlated with initial nitrate and light. Secondary productivity was positively correlated with primary productivity and negatively correlated with spring bloom timing. Mesozooplankton productivity was not correlated with water temperature, but a shift towards a system in which smaller zooplankton undertake a greater proportion of the secondary production as the water temperature increases appears likely. This approach to incorporating environmental variability within a sensitivity analysis could be extended to any ecosystem model to gain confidence in climate-driven ecosystem predictions.
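
    One common way to rank parameter importance in such Monte Carlo ensembles is a rank correlation between each sampled parameter and an output variable, sketched below with a hypothetical stand-in for the ecosystem model; parameter names and ranges are illustrative only.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

def mesozoo_biomass(graze, doubling, temp):
    """Stand-in for one BEST model output (mg C m-3); a full ecosystem
    model run would occupy this slot."""
    return 8.0 + 30.0 * doubling - 25.0 * graze + 0.2 * temp

# Sample each parameter within an assumed realistic range.
n = 5000
graze    = rng.uniform(0.1, 0.4, n)    # microzooplankton grazing, 1/d
doubling = rng.uniform(0.3, 0.8, n)    # small-phytoplankton doubling, 1/d
temp     = rng.uniform(-1.0, 6.0, n)   # water temperature, deg C

out = mesozoo_biomass(graze, doubling, temp)
for name, x in [("grazing", graze), ("doubling", doubling), ("temp", temp)]:
    rho, _ = spearmanr(x, out)
    print(f"{name:9s} rank correlation with biomass: {rho:+.2f}")
```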

  17. Hospital Room Floors May Harbor 'Superbugs'

    MedlinePlus

    But that area is often overlooked when it comes … Hospital room floors may be more of a "superbug" threat than many hospital staffers realize, new research …

  18. Sea-Floor Spreading and Transform Faults

    ERIC Educational Resources Information Center

    Armstrong, Ronald E.; And Others

    1978-01-01

    Presents the Crustal Evolution Education Project (CEEP) instructional module on Sea-Floor Spreading and Transform Faults. The module includes activities and materials required, procedures, summary questions, and extension ideas for teaching Sea-Floor Spreading. (SL)

  19. Raise the Floor When Remodeling Science Labs

    ERIC Educational Resources Information Center

    Nation's Schools, 1972

    1972-01-01

    A new remodeling idea adopts the concept of a raised floor covering gas, water, electrical, and drain lines. The accessible floor has removable panels set into an adjustable support frame 24 inches above a concrete subfloor. (Author)

  20. How Are Pelvic Floor Disorders Diagnosed?

    MedlinePlus

    … This test is used to evaluate the pelvic floor and rectum while the patient is having a …

  1. Distribution of maple strip flooring in 1969

    Treesearch

    William C. Miller

    1971-01-01

    This paper is the second in a series on the residential and commercial hardwood flooring industry. Unlike the first paper in the series, which dealt with the oak strip flooring industry, this paper analyzes several qualitative questions pertaining to the maple flooring industry. The next paper planned for this series will analyze quantitative as well as qualitative...

  2. Distribution of parquet flooring during 1969

    Treesearch

    William C. Miller

    1972-01-01

    This is the third in a series of papers dealing with the residential and commercial hardwood flooring industry. The first two papers are: PHYSICAL DISTRIBUTION OF OAK STRIP FLOORING IN 1969 (U.S.D.A. Forest Serv. Res. Paper NE-207) and DISTRIBUTION OF MAPLE STRIP FLOORING IN 1969 (U.S.D.A. Forest Serv. Res. Paper NE-215).

  3. 14 CFR 25.793 - Floor surfaces.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Floor surfaces. 25.793 Section 25.793 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS... Floor surfaces. The floor surface of all areas which are likely to become wet in service must have...

  4. 49 CFR 38.59 - Floor surfaces.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false Floor surfaces. 38.59 Section 38.59 Transportation Office of the Secretary of Transportation AMERICANS WITH DISABILITIES ACT (ADA) ACCESSIBILITY SPECIFICATIONS FOR TRANSPORTATION VEHICLES Rapid Rail Vehicles and Systems § 38.59 Floor surfaces. Floor...

  5. 36 CFR 1192.59 - Floor surfaces.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Floor surfaces. 1192.59 Section 1192.59 Parks, Forests, and Public Property ARCHITECTURAL AND TRANSPORTATION BARRIERS COMPLIANCE... Rail Vehicles and Systems § 1192.59 Floor surfaces. Floor surfaces on aisles, places for standees,...

  6. Use of hardwood flooring in mobile homes

    Treesearch

    David G. Martens; Leonard J. Koenick

    1970-01-01

    The hardwood flooring industry is losing a new and vigorous market by default. The mobile-home industry produced over 250 million square feet of single-family housing space last year, and very little of this floor space was covered with hardwood flooring. A preliminary glance at this situation seems to uncover an industry that offers many opportunities for hardwood...

  7. TINY FEET NO TREAT TO FLOORS.

    ERIC Educational Resources Information Center

    SMALLEY, DAVE E.

    A DISCUSSION OF FLOOR MAINTENANCE AND CARE IN TERMS OF BROKEN, WARPED, AND OTHERWISE DAMAGED CONDITIONS WHICH OFTEN REQUIRE REPLACEMENTS GIVES SUGGESTIONS FOR VARIOUS TYPES OF FLOORING MATERIAL. WOOD FLOOR CONDITIONS MAY INCLUDE--(1) CUPPED BOARDS, (2) BUCKLING BOARDS, AND (3) BROKEN BOARDS. A DETAILED DISCUSSION IS GIVEN OF METHODS FOR REMOVING…

  8. Design issues for floor control protocols

    NASA Astrophysics Data System (ADS)

    Dommel, Hans-Peter; Garcia-Luna-Aceves, Jose J.

    1995-03-01

    Floor control allows users of networked multimedia applications to remotely share resources like cursors, data views, video and audio channels, or entire applications without access conflicts. Floors are mutually exclusive permissions, granted dynamically to collaborating users, mitigating race conditions and guaranteeing fair and deadlock-free resource access. Although floor control is an early concept within computer-supported cooperative work, no framework exists and current floor control mechanisms are often limited to simple objects. While small-scale collaboration can be facilitated by social conventions, the importance of floors becomes evident for large-scale application sharing and teleconferencing orchestration. In this paper, the concept of a scalable session protocol is enhanced with floor control. Characteristics of collaborative environments are discussed, and session and floor control are discerned. The system and user requirements perspectives are discussed, including distributed storage policies, packet structure and user-interface design for floor presentation, manipulation, and triggering conditions for floor migration. Interaction stages between users, and scenarios of participant withdrawal, late joins, and establishment of subgroups are elicited with respect to floor generation, bookkeeping, and passing. An API is proposed to standardize and integrate floor control among shared applications. Finally, a concise classification for existing systems with a notion of floor control is introduced.
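
    A minimal sketch of such a floor-control API (one mutually exclusive floor per shared resource, FIFO queue of pending requests) might look as follows; the class and method names are illustrative, not the paper's proposed API.

```python
from collections import deque

class FloorController:
    """Grants a single mutually exclusive floor, queueing contenders
    in FIFO order so access stays fair and deadlock-free."""

    def __init__(self):
        self.holder = None
        self.pending = deque()

    def request(self, user):
        """Ask for the floor; grant immediately if free, else queue."""
        if self.holder is None:
            self.holder = user
            return "granted"
        self.pending.append(user)
        return "queued"

    def release(self, user):
        """Give up the floor; pass it to the next queued user, if any."""
        if self.holder != user:
            raise PermissionError(f"{user} does not hold the floor")
        self.holder = self.pending.popleft() if self.pending else None
        return self.holder

fc = FloorController()
print(fc.request("alice"))   # granted
print(fc.request("bob"))     # queued
print(fc.release("alice"))   # bob now holds the floor
```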

  10. 14 CFR 25.793 - Floor surfaces.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Floor surfaces. 25.793 Section 25.793 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS... Floor surfaces. The floor surface of all areas which are likely to become wet in service must have slip...

  11. 49 CFR 38.59 - Floor surfaces.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 1 2012-10-01 2012-10-01 false Floor surfaces. 38.59 Section 38.59 Transportation Office of the Secretary of Transportation AMERICANS WITH DISABILITIES ACT (ADA) ACCESSIBILITY SPECIFICATIONS FOR TRANSPORTATION VEHICLES Rapid Rail Vehicles and Systems § 38.59 Floor surfaces. Floor surfaces...

  12. 49 CFR 38.59 - Floor surfaces.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 1 2011-10-01 2011-10-01 false Floor surfaces. 38.59 Section 38.59 Transportation Office of the Secretary of Transportation AMERICANS WITH DISABILITIES ACT (ADA) ACCESSIBILITY SPECIFICATIONS FOR TRANSPORTATION VEHICLES Rapid Rail Vehicles and Systems § 38.59 Floor surfaces. Floor surfaces...

  13. 14 CFR 25.793 - Floor surfaces.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Floor surfaces. 25.793 Section 25.793 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS... Floor surfaces. The floor surface of all areas which are likely to become wet in service must have slip...

  14. 36 CFR 1192.59 - Floor surfaces.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false Floor surfaces. 1192.59 Section 1192.59 Parks, Forests, and Public Property ARCHITECTURAL AND TRANSPORTATION BARRIERS COMPLIANCE... Rail Vehicles and Systems § 1192.59 Floor surfaces. Floor surfaces on aisles, places for standees, and...

  15. 49 CFR 38.59 - Floor surfaces.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 1 2013-10-01 2013-10-01 false Floor surfaces. 38.59 Section 38.59 Transportation Office of the Secretary of Transportation AMERICANS WITH DISABILITIES ACT (ADA) ACCESSIBILITY SPECIFICATIONS FOR TRANSPORTATION VEHICLES Rapid Rail Vehicles and Systems § 38.59 Floor surfaces. Floor surfaces...

  16. 14 CFR 25.793 - Floor surfaces.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Floor surfaces. 25.793 Section 25.793 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS... Floor surfaces. The floor surface of all areas which are likely to become wet in service must have slip...

  17. 14 CFR 25.793 - Floor surfaces.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Floor surfaces. 25.793 Section 25.793 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS... Floor surfaces. The floor surface of all areas which are likely to become wet in service must have slip...

  18. 49 CFR 38.59 - Floor surfaces.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 1 2014-10-01 2014-10-01 false Floor surfaces. 38.59 Section 38.59 Transportation Office of the Secretary of Transportation AMERICANS WITH DISABILITIES ACT (ADA) ACCESSIBILITY SPECIFICATIONS FOR TRANSPORTATION VEHICLES Rapid Rail Vehicles and Systems § 38.59 Floor surfaces. Floor surfaces...

  19. 36 CFR 1192.59 - Floor surfaces.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false Floor surfaces. 1192.59 Section 1192.59 Parks, Forests, and Public Property ARCHITECTURAL AND TRANSPORTATION BARRIERS COMPLIANCE... Rail Vehicles and Systems § 1192.59 Floor surfaces. Floor surfaces on aisles, places for standees, and...

  20. 36 CFR 1192.59 - Floor surfaces.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Floor surfaces. 1192.59 Section 1192.59 Parks, Forests, and Public Property ARCHITECTURAL AND TRANSPORTATION BARRIERS COMPLIANCE... Rail Vehicles and Systems § 1192.59 Floor surfaces. Floor surfaces on aisles, places for standees, and...

  1. The lifetime of methane over the 21st century: Sensitivity analyses, key factors, the ACCMIP/CMIP5 projections, and assessing uncertainties

    NASA Astrophysics Data System (ADS)

    Prather, M. J.; Holmes, C. D.

    2012-12-01

    Accumulations of some chemically reactive greenhouse gases, methane (CH4) and hydrofluorocarbons (HFCs), are controlled by atmospheric chemistry, primarily through reaction with tropospheric hydroxyl (OH). Using three chemistry-transport models running hindcasts of the past decade, we examine the factors controlling the year-to-year variations in mean tropospheric OH, where 'mean' is defined by the globally integrated CH4×OH loss. These variations are compared with the observed decay rate of CH3CCl3, currently our best surrogate for the CH4×OH lifetime and a unique measure of OH variability. Using the factors identified as controlling OH, we project the total CH4 lifetime backward to the pre-industrial era to estimate the natural sources of CH4 at the time, and then forward for a range of future climate scenarios (RCPs) to determine the change in CH4 lifetime. These changes in CH4×OH lifetime from the parametric model are compared with those from the full chemistry-climate simulations of the Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP) and the Coupled Model Intercomparison Project Phase 5 (CMIP5). All of this information is combined into an uncertainty analysis of the future CH4 abundance and global warming potential.
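
    The bookkeeping behind such parametric projections can be illustrated with the steady-state relation lifetime = burden / loss; all numbers below are illustrative placeholders, not the paper's values.

```python
# Steady-state view (hypothetical numbers): tau = burden / loss, and for
# fixed emissions a change in mean OH maps directly onto the lifetime.
burden_tg = 4932.0        # global CH4 burden, Tg (illustrative)
loss_tg_per_yr = 550.0    # total CH4 loss, Tg/yr (illustrative)

tau = burden_tg / loss_tg_per_yr
print(f"total CH4 lifetime: {tau:.1f} yr")

# An assumed +/-10% uncertainty in mean tropospheric OH shifts the
# OH-driven loss, and hence the lifetime, roughly in proportion.
for doh in (-0.10, 0.0, +0.10):
    tau_pert = burden_tg / (loss_tg_per_yr * (1.0 + doh))
    print(f"OH {doh:+.0%}: tau = {tau_pert:.1f} yr")
```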

  2. Uncertainty quantification in volumetric Particle Image Velocimetry

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sayantan; Charonko, John; Vlachos, Pavlos

    2016-11-01

    Particle Image Velocimetry (PIV) uncertainty quantification is challenging due to coupled sources of elemental uncertainty and complex data reduction procedures in the measurement chain. Recent developments in this field have led to uncertainty estimation methods for planar PIV. However, no framework exists for three-dimensional volumetric PIV. In volumetric PIV the measurement uncertainty is a function of the reconstructed three-dimensional particle location, which in turn is very sensitive to the accuracy of the calibration mapping function. Furthermore, the iterative correction to the camera mapping function using triangulated particle locations in space (volumetric self-calibration) has its own associated uncertainty due to image noise and ghost particle reconstructions. Here we first quantify the uncertainty in the triangulated particle position, which is a function of particle detection and mapping function uncertainty. The location uncertainty is then combined with the three-dimensional cross-correlation uncertainty that is estimated as an extension of the 2D PIV uncertainty framework. Finally the overall measurement uncertainty is quantified using an uncertainty propagation equation. The framework is tested with both simulated and experimental cases. For the simulated cases the variation of estimated uncertainty with the elemental volumetric PIV error sources is also evaluated. The results show reasonable prediction of standard uncertainty with good coverage.
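
    One plausible form of the propagation equation described, mapping the reconstructed-position uncertainty through the local velocity gradient and adding the 3D cross-correlation uncertainty in quadrature, is sketched below with hypothetical magnitudes; the actual framework may combine the terms differently.

```python
import numpy as np

def volumetric_uncertainty(u_position, grad_u, u_correlation):
    """Combine position and correlation uncertainties per component:
    u_total^2 = (grad_u * u_position)^2 + u_correlation^2
    (an assumed first-order propagation, for illustration)."""
    u_pos_term = np.abs(np.asarray(grad_u)) * np.asarray(u_position)
    return np.sqrt(u_pos_term**2 + np.asarray(u_correlation)**2)

# Hypothetical magnitudes (voxels and voxels/frame) for one vector;
# depth (w) terms are larger, reflecting weaker out-of-plane accuracy.
u_tot = volumetric_uncertainty(u_position=[0.3, 0.3, 0.8],
                               grad_u=[0.05, 0.05, 0.08],
                               u_correlation=[0.10, 0.10, 0.15])
print("u, v, w uncertainty:", u_tot.round(3))
```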

  3. Obesity and pelvic floor dysfunction.

    PubMed

    Ramalingam, Kalaivani; Monga, Ash

    2015-05-01

    Obesity is associated with a high prevalence of pelvic floor disorders. Patients with obesity present with a range of urinary, bowel and sexual dysfunction problems as well as uterovaginal prolapse. Urinary incontinence, faecal incontinence and sexual dysfunction are more prevalent in patients with obesity. Uterovaginal prolapse is also more common than in the non-obese population. Weight loss by surgical and non-surgical methods plays a major role in the improvement of these symptoms in such patients. The treatment of symptoms leads to an improvement in their quality of life. However, surgical treatment of these symptoms may be accompanied by an increased risk of complications in obese patients. A better understanding of the mechanism of obesity-associated pelvic floor dysfunction is essential.

  4. [Surgical dilemmas. Sinus floor elevation].

    PubMed

    ten Bruggenkate, C M; Schulten, E A J M; Zijderveld, S A

    2008-12-01

    Limited alveolar bone height prevents the placement of dental implants. Sinus floor elevation is an internal augmentation of the maxillary sinus that allows implants to be placed. The principle of this surgical procedure is the preparation of a 'top hinge door' that is raised, together with the Schneiderian membrane, in the cranial direction. The space thus created under this lid is filled with a bone transplant. Autogenous bone is the standard transplant material, despite the fact that a second surgery site is necessary. Under certain circumstances bone substitutes can be used, at the cost of a longer healing phase. If sufficient alveolar bone height is available to secure implant stability, simultaneous implantation and sinus floor elevation are possible. Considering the significant anatomical variation in the region of the maxillary sinus, a sound knowledge of the anatomy is of great importance.

  5. Remote sensing of cirrus cloud optical thickness and effective particle size for the National Polar-orbiting Operational Environmental Satellite System Visible/Infrared Imager Radiometer Suite: sensitivity to instrument noise and uncertainties in environmental parameters.

    PubMed

    Ou, Szu-Cheng; Takano, Yoshihide; Liou, K N; Higgins, Glenn J; George, Adrian; Slonaker, Richard

    2003-12-20

    We describe sensitivity studies on the remote sensing of cirrus cloud optical thickness and effective particle size using the National Polar-orbiting Operational Environmental Satellite System Visible/Infrared Imager Radiometer Suite 0.67-, 1.24-, 1.61-, and 2.25-microm reflectances and thermal IR 3.70- and 10.76-microm radiances. To investigate the accuracy and precision of the solar and IR retrieval methods subject to instrument noise and uncertainties in environmental parameters, we carried out signal-to-noise ratio tests as well as the error budget study, where we used the University of California at Los Angeles line-by-line equivalent radiative transfer model to generate radiance tables for synthetic retrievals. The methodology and results of these error analyses are discussed.
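
    An error-budget roll-up of the sort described, combining independent error sources by root-sum-square, can be sketched as follows; the sources and magnitudes are hypothetical, not the paper's budget.

```python
import math

# Each entry: error contribution to retrieved cirrus optical thickness
# from one source (hypothetical values), assumed mutually independent.
budget = {
    "instrument noise (SNR)": 0.04,
    "surface albedo uncertainty": 0.06,
    "water vapor profile": 0.03,
    "surface temperature": 0.05,
}

total = math.sqrt(sum(e**2 for e in budget.values()))
for source, err in budget.items():
    share = 100 * (err / total) ** 2
    print(f"{source:28s} {err:.3f}  ({share:.0f}% of variance)")
print(f"{'total (RSS)':28s} {total:.3f}")
```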

  6. Uncertainty Quantification in Aeroelasticity

    NASA Astrophysics Data System (ADS)

    Beran, Philip; Stanford, Bret; Schrock, Christopher

    2017-01-01

    Physical interactions between a fluid and structure, potentially manifested as self-sustained or divergent oscillations, can be sensitive to many parameters whose values are uncertain. Of interest here are aircraft aeroelastic interactions, which must be accounted for in aircraft certification and design. Deterministic prediction of these aeroelastic behaviors can be difficult owing to physical and computational complexity. New challenges are